I started paying a fair amount of attention to artificial intelligence in 1985, and became modestly conversant in it, from talking to people at work about their neural network, having an email friend who’d invented the term “artificial intelligence” in 1956, and the like. But by the early 21st century, nothing all that exciting had happened involving AI, so I checked out. Most of what was advertised as AI, I used to like to say, was just more clever traditional programming.
When I started reading Scott Alexander a dozen years ago, I discovered that a bunch of smart people worried that AI would go all Skynet on us and kill us all. In fact, Scott emerged out of a kind of cult of AI apocalypticism centered around a guy named Eliezer Yudkowsky, whose most famous prophecy was that AI would turn rogue and kill us all in the name of maximizing paperclips, or something.
For example, in 2023, Yudkowsky wrote in Time:
Here’s what would actually need to be done:
The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.
Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
Is this the explanation for the Fermi Paradox: every highly intelligent life form in the galaxy invents AI, and it destroys them?
I read the arguments pro and con … and decided that I won’t worry about it because I won’t live long enough for AI to happen, and I’m not smart enough these days to figure out what will happen.
But then, in the 2020s, AI started to happen really fast.
Recently, some hope started to emerge among the AI alignment brigade that AI proliferation could be limited, the way nuclear weapons proliferation has been reasonably well limited for the last 80 years by its expense, complexity, and reliance on rare ingredients. For example, keep the latest Nvidia GPU chips out of the hands of the Red Chinese.
But then a 200-person Chinese start-up called DeepSeek came along last week and showed that AI could be done much more cheaply than had been assumed. You don't need your own nuclear power plant and the very latest Nvidia chips to do it. Instead, you need smart code.
Which they released to the whole world.
This means that every would-be Bond Villain can afford their own large language model. Nobody can put the genie back in the bottle after January 2025.
Does open-source, low-cost DeepSeek mean that there is no way, short of a full-blown Butlerian Jihad against computers (which we won't do), to keep AI bottled up? If so, we're going to find out whether Yudkowsky's warnings that AI will go Skynet and turn us into paperclips are right.
Well, we shall see.
Doesn't AI just generate our own nonsense back to us? When it can answer unknowns like how to generate power from nuclear fusion or cure metastatic cancer, then I'll worry.
My hypothesis is that AI will lead to a Great Filter event because some deranged bureaucrats will use it to justify some horrible policy, like simultaneous war with Russia and China in order to save one million future Ukrainian geniuses, or re-engineering the Spanish Flu virus to its original potency in order to develop a LIFESAVING Spanish Flu vaccine. And off humanity goes on another round of mass formation psychosis.
I haven’t been convinced by anyone in the field that AI is true intelligence. It is good at compiling large masses of information, blending and mimicking what it finds in that data, and convincing humans that such mimicry is accurate. While the data compilation will continue to improve, as will the mimicry, it cannot progress to any logical process not already done by humans. There’s no sign of consciousness. Oh, and it hallucinates from time to time, even in art, where six fingers and strange elbows predominate.
Interestingly, AI does help already smart people who can use it for shortcuts, but only if those people know their fields well enough to recognize what a good product is and what they need to do to shape it further. Dumb people think AI is an easy button, and don’t know any better. So already gifted people will probably amass more capital in the years ahead.
Hunter DeButts contributed to this response, with an assist by Skynet.