We are apparently entering "The Dark Age of Technology" in the Warhammer 40K universe. Hopefully the Men Of Iron won't arise...
If AI development progresses in a way similar to that of video games, there will be 1000 mediocre models for every decent one. The conundrum will be to figure out why so many people are using the lousy ones.
We pray DeepSeek is like many Chinese creations:
https://www.bbc.com/news/articles/cp99l9gpzwgo
It’s not. It’s very good, and it has been tested independently by people running it on their own hardware.
I don't wanna be the tankie guy, but judging from that article, that really doesn't sound bad at all. Due to seasonal drought, they added a water pipe from a nearby dam. Seems practical and not very anti-environmental, or "fake".
I'm predicting a new "certified human" tag for works free from AI generated content.
On the honor system?
Could be the same as any other certification.
Hey, a new job category for everyone laid off by AI: human content certifiers!
Search for "Sparta AI" in YouTube and you get a quite beautiful video. There are many accounts now that take photographs and make videos of them with AI. They make "retro futurism" with 1950s-style science fiction, or cyberpunk videos, and so on. In just a year, the difference is great between the earlier videos and the later ones today.
Practically always videos with attractive White women. Just a few men and a few non-White women thrown in. But if you look at older videos from some of these accounts, you see that way back they posted Arab or Indian videos. So appreciation of White women is what you get around the world when the U.S. media and Hollywood cabal isn't involved.
Anyhow, regarding AI in general: it has already struck at several professions. This is just the beginning. It would be ironic if in the future people say it would have been better to have a world government, of any stripe, of any religion or ideology, just so it could have banned AI worldwide.
In the Dune novels, humanity lives in a world where computers are banned, after a past uprising against a world run by "thinking machines." Technology exists, with memorization and calculations done by the Mentats, specially trained men with minds enhanced by the spice from the planet Dune. There are many religions in Dune, but they all still adhere to the Orange Catholic Bible, whose most important commandment is:
"Thou shalt not make a machine in the likeness of a human mind."
At work I've written that in forums occasionally. Much chuckling follows.
(Another OCB commandment is: "Thou shalt not disfigure the soul." Hear that, leftists?)
I don't believe AI will ever exist. We don't even know how the human (or earthworm) brain works, and I agree that what we call AI is really just clever big-data programming looking on the surface like AI.
Regardless of whether my short-sighted prognosis is correct or not, we won't be able to control China. I am not of a like mind with many apologists on Unz.com who laud the Chinese approach to governance, but in this instance I am thankful that China will preclude any attempt to restrict AI.
Many of our overlords are striving to use AI (trained, of course, on their agenda) as a means to impose totalitarian control. Now Yudkowsky is calling on the world to contain AI as a means to impose totalitarian control. Same agenda with these people.
People said of chatGPT that it isn't really thinking, it's just spewing out the next word. I examined my own process and realized that this is pretty much what most of my speech is. Gulp.
There's an interesting question: How often are we ourselves doing what we criticize AI for?
Me? All the time. For example, this morning I asked chatGPT how to delete a file in my editor while I have the file open. I assumed this would be an option in the context menu or a top menu, but it wasn't. chatGPT told me to use the delete command in the context menu because, like me, it assumed there should be one.
Actually, we do have a pretty good idea how the human brain, specifically the cerebral cortex, works - for example, through ultrastructural reconstruction of consecutive electron micrographs. In essence it is a machine in the business of creating representations. The main problem for copying it one-to-one is that the cerebral cortex runs on a vast number (approximately 10^15) of modifiable connections.
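To put that number in perspective, here is a back-of-envelope sketch (the one-byte-per-connection figure is purely an illustrative assumption, not a claim about real synapses):

```python
# Back-of-envelope: storage for one static snapshot of cortical connectivity,
# assuming (purely for illustration) one byte per modifiable connection.
connections = 10**15        # ~10^15 modifiable connections, per the comment above
bytes_per_connection = 1    # illustrative; real synaptic state needs more than this
total_bytes = connections * bytes_per_connection
print(total_bytes / 10**15, "petabytes")  # -> 1.0 petabytes
```

And that is just a snapshot of the weights, saying nothing about the dynamics.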
I don't think so. I think AI is a genuinely useful huge step forward: https://undisciplinedconversations.com/p/what-is-artificial-intelligence
I haven’t seen Colossus: The Forbin Project in decades, but I seem to recall that humanity’s plan to cut power to the omniscient supercomputer was foiled when the computer threatened to unleash a nuclear holocaust if such was attempted.
At long last, lawyers and diplomats will be swept into the gutters while linemen and nuclear plant operators will take their rightful places in the sun! The sons of Thor arise to serve our grim (in a nice way!) overlord, OdinAI!
I'm with Yudkowsky; on a long enough timescale, AI-powered extinction is guaranteed to happen unless the tech is controlled. Maybe even if it *is* controlled. And the *best case* scenario from all this is that we eliminate the opportunity for 90% of humans to partake in meaningful & valuable work?
What does it say about us that our great hope for the future is to be dead before shit really hits the fan?
On a long enough time scale (as Keynes said) we are all dead.
Frankly, I am far, far more encouraged that AI led by China will do good for the world than AI led by a collapsing hegemony willing to do desperate things to stay on top - like genocide, or collaboration with terrorists in Syria, or sending a million proxies to die in a war with Russia.
Thank God it is going to be China-run.
Key point here is that the US had hoped that AI would result from chip technology, not from extremely intelligent humans. That was one of the few areas where the US still had a tech lead over China.
Turns out intelligent humans (writing better algorithms) are more important than chip tech. The US loses again.
It's not either/or, and the US doesn't "lose again". Most LLM research has been done in the West so far; it's great the Chinese are contributing, but it all adds to the sum of human knowledge, and better chips are still better chips.
Better chips aren't important - that is why the NVIDIA share price is already down 11% today. That China did this without advanced chips actually tells us who is most advanced.
The gap between o1 being released to ChatGPT users and R1 to DeepSeek users is only seven weeks.
Game over.
Sure, there is a multipolar world where scientists share advances for the greater good. AI is now well and truly part of that. [And DeepSeek is a truly open model - you can download versions of it onto your own PC; see the sketch below.] But don't kid yourself that the US is part of that world; the US is rapidly cutting itself off from the rest of the world (a Biden thing as much as Trump).
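For what "download versions of it onto your own PC" looks like in practice, here is a minimal sketch using the Hugging Face transformers library and one of the small distilled checkpoints DeepSeek published on Hugging Face (the larger variants need far more memory than a typical PC has):

```python
# Minimal local-inference sketch for a small distilled DeepSeek-R1 checkpoint.
# Assumes: pip install transformers torch, and enough RAM/VRAM for a 1.5B model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # small distilled variant
)
out = generator("Briefly explain why efficient training algorithms matter.",
                max_new_tokens=64)
print(out[0]["generated_text"])
```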
"China’s lead continues to grow: China has strengthened its global research lead in the past year and is currently leading in 57 of 64 critical technologies. This is an increase from 52 technologies last year, and a leap from the 2003–2007 period, when it was leading in just three technologies."
https://www.aspi.org.au/report/aspis-two-decade-critical-technology-tracker
NVIDIA is currently worth $3 trillion! Don’t overreact.
The near future is uncertain. Combining the Chinese methods with the chips and scale might result in more powerful AI. OTOH we might (as many experts suspect) have reached another of the occasional AI plateaus with the generative AI models, and the next period will be more about making them cheaper and figuring out uses for them. In that scenario, if DeepSeek has done what the article says, the anticipated future revenue stream implied by current stock prices might be delayed.
$3.3 trillion on Friday, $2.9 trillion today, with the share price ending 17% lower. And all they contribute is the chips.
Of course, you are talking market capitalisation based on the market share price; "worth" is a tricky concept.
I’d just like to know how they did it without millions of H-1Bs from India. I’ve been assured that they’re vital to our success.
Some Indians are a lot smarter than Americans, and they have a fantastic elite education system which is extremely competitive, with a lot of Indians to select from. From memory, the CEOs of IBM, Microsoft, Google, and Apple all earned their first degrees in India.
Chinese are even smarter than Indians: higher IQ than anyone but the Ashkenazi, also a huge population to choose from and an extremely competitive education system designed to produce the elites. Why do so many Chinese pay to go to universities in the UK or the US? Because they failed to get into a top Chinese uni.
Ethan Mollick has a good book on AI, has a Substack, and tweets often about his experiences with AI.
He teaches at Wharton.
https://open.substack.com/pub/oneusefulthing/p/prophecies-of-the-flood
Arnold Kling posts links about LLM that are interesting.
https://open.substack.com/pub/arnoldkling/p/llm-links-1272025
Thanks for the Mollick link. Occasional iSteve Content Generator (but in a good way) Noah Carl weighed in yesterday on this topic, at Aporia magazine. Thoughtfully apprehensive without jumping on Yudkowsky's Doomer bandwagon.
"Yes, you're going to be replaced -- So much cope about AI"
https://www.aporiamagazine.com/p/yes-youre-going-to-be-replaced
Former Covid nerd Zvi Mowshowitz also writes knowledgeably about AI, from a doomerish perspective.
https://thezvi.wordpress.com/
I haven’t been convinced by anyone in the field that AI is true intelligence. It is good at compiling large masses of information, blending and mimicking what it finds in that data, and convincing humans that such mimicry is accurate. While the data compilation will continue to improve, as will the mimicry, it cannot progress to any logical process not already done by humans. There’s no sign of consciousness. Oh, and it hallucinates from time to time, even in art, where six fingers and strange elbows predominate.
Interestingly, AI does help already smart people who can use it for shortcuts, but only if those people know their fields enough to recognize what a good product is and what they need to do in the process to shape it further. Dumb people think AI is an easy button, and don’t know any better. So already gifted people will probably amass more capital in the years ahead.
Hunter DeButts contributed to this response, with an assist by Skynet.
I partially agree, but wonder: what is intelligence, really? Surely some group of philosophers has worked something out. We blew past the Turing test with chatGPT, then immediately, without much discussion, decided nah, that's not the real test.
When I try to think about whether this new stuff is intelligence or approaching intelligence and try to compare it to what I and the other humans do, well, I don't end up more impressed by the AI, but less impressed by us.
Consciousness is not intelligence. If you read up on speculation about why evolution would have developed consciousness, it's kind of freaky. You don't need consciousness for most uses of intelligence. We're used to it, so we assume the opposite, but it isn't required. It's a separate module, for something else.
Why do you assume that "evolution developed consciousness"?
If you are a materialist and believe that evolution is how we got here, then logically every aspect of us evolved, mostly for a survival purpose. If you don't believe in evolution and believe that some degree of untouchable spirit is part of our story, then you'd say no. I believe we evolved - our physical bodies. Our brain is part of our physical body, and it evolved too. I see no room nor need for some subset of our behavior that did not evolve.
Right, but then as you say, consciousness serves no clear evolutionary function. So why assume an explanatory theory that doesn't seem to work?
In other words, I'm agreeing with the last sentence of your prior comment.
To be fair, we can't know the survival value of most evolved traits. Over the years I've read at least four hypotheses about why humans have thick hair only on top of their heads (I don't, but you get it).
One speculation about why consciousness evolved is that in a group society, it enables you to model and predict the behavior of other people in the group. You imagine yourself internally as another example of monkey and think about how you would react to your banana being taken.
In the absence of a time machine and lots of patience, we'll never know but these 'just so stories' are fun.
Fun if you enjoy tautologies:
“It is that way now because that was the optimal evolutionary outcome.”
“We know that was the optimal evolutionary outcome because it is that way now.”
I enjoy Kipling's Just So Stories more.
If you are a materialist in the modern sense of the term, then to say that consciousness evolved is an instance of magical thinking.
Consciousness involves the experiencing of subjective qualia. But modern materialism cannot account for such subjective experiences because it defines matter in objective, quantitative terms, while the phenomenological character that we associate with say, the experience of the redness of the rose we see, or its fragrance, is relegated to the mind. And if the brain is also material, then the brain likewise cannot produce the phenomenological character of these experiences, so such qualia must be immaterial (again, given the modern materialist conception of matter).
But in that case, consciousness simply cannot arise from matter, because this would be a case of a cause giving what it does not have, in other words, something coming from nothing. And that is to appeal to magic.
I would love to counter-argue, but most of the terms you connect strike me as gibberish. Possibly if I were really deep into whatever philosophy you are cribbing, I would vaguely understand it.
"But modern materialism cannot account for such subjective experiences because it defines matter in objective, quantitative terms, while the phenomenological character that we associate with say, the experience of the redness of the rose we see, or its fragrance, is relegated to the mind"
I'd say this paragraph appears to be begging the question (in the correct sense of that phrase). You are just saying that the material components of the mind cannot account for subjective experiences because (large number of words)... only the mind can do that.
Maybe, maybe not.
But your assertion that materialism defines matter as objective, and therefore cannot account for subjective experience, is just wordplay. Some materialist used the word "objective", therefore nothing subjective can be accounted for? Huh?
I kind of get it. It's saying something to the effect of: matter is all the same and obeys rules deterministically, so how can anything unique or subjective ever happen or be experienced?
All I can say to that is maybe, but large complex systems have emergent properties, and large language models show that we have poor instincts and a poor understanding of what happens at scale. Dismissing it as magic shows a lack of experience with that phenomenon, or purposeful closed-mindedness.
No AI has yet passed the Turing test, much less ChatGPT.
A quick survey of web searches on the topic indicates a slight preponderance of belief that it has. One article even said the reason it fails the Turing test is that it admits it is an AI, knows too much, and answers too quickly. I consider that even better than passing the Turing test. It shows the Turing test is not the test for artificial intelligence. There might never be one, as we keep redefining intelligence to keep it special to us.
Do you have a reference arguing that chatGPT fails the Turing test, but not because it's too intelligent?
What would be a better version of the "Turing Test"?
I don't know, but I am sure philosophers and cognitive scientists are working on it. When people started working on AI decades ago, they thought they would make something that was intelligent and worked the way a human brain did. Later the field shifted to defining AI as computers solving problems that we think of as requiring intelligence. But then when it did so with 'tricks', we moved the goalposts. Before coming up with the test, we would need to commit to a functional definition of intelligence, and I don't think we have one. As I said earlier, the fact that word completion at scale can do the things it does has to make us rethink what we mean.
Does chatGPT have general intelligence, g-factor? No, but it sure does do some things we used to assume required general intelligence. Do those things just require word completion, or is it g in humans? I have no idea, but I bet cognitive scientists will have fun trying to figure it out.
I find myself often able to "outsmart" LLMs. That is, I find they consistently fail my own versions of the Turing Test. I know they are not smart by the wrong and fluff-packed answers they give. They are BS machines; the entire thing a kind of exercise in padding-out material at huge scale.
"The exports of Libya are numerous in amount. One thing they export is corn. Or, as the Indians call it, maize. Another famous Indian was Crazy Horse. In conclusion, Libya is a land of contrasts. Thank you." (That is a pre-LLM, played-up satire of exactly the kind of writing/material which the LLMs often produce when anything but a cookie-cutter topic is given them.)
Like astrology, sometimes the output sounds plausible and even impressive; like astrology, it has people who "want to believe" in the magic.
These are not compelling arguments. Do autocompletes pass the Turing test because they spell too fast? Does Wikipedia pass because it knows too much? Do humans fail because they can be jailbroken into admitting they are human?
"Sorry to interrupt this experiment but I'm on the line with an emergency room physician, they're asking for your child's blood type..."
I think you are misreading. The article didn't say those are why it passed. It's saying those are the only reasons it failed, essentially because it doesn't act like a slow dumb guy.
I am only emphasizing that your earlier statement is not accurate:
"We blew past the Turing test with chatGPT then immediately, without much discussion, decided nah, that's not the real test."
The Turing test, as commonly understood, has never been beaten by ChatGPT (or any AI) yet.
Intelligence is fundamentally the ability to grasp *concepts*, where a concept is something that is both *universal* and *determinate*.
By way of example, to steal a stock example from the philosopher Edward Feser, consider the concept of a triangle: your concept of it will be universal in that it applies to *all* triangles without exception, e.g., to equilateral triangles, isosceles triangles, scalene triangles, large triangles, small triangles, black triangles, blue triangles, etc. This in spite of the fact that any triangle that you draw or that you see on a computer screen or even that you form a mental image of will be of a *particular* triangle: it will be either equilateral, isosceles, or scalene, it will be a particular size, it will be a particular color, and so forth.
A concept also has *determinate* or *exact* content. For example, your concept of a triangle will include the fact that it has three perfectly straight sides, that its angles add up to exactly 180 degrees, and so forth. But any particular triangle that you draw, or see on a computer screen or billboard, or have a mental image of will be indeterminate to some degree: its sides will not be perfectly straight, the lines that comprise it will have some finite thickness, etc. No particular triangle unambiguously represents triangles *in general*, but your *concept* of a triangle does.
Beyond grasping concepts, intelligence also involves using these concepts to form propositions, and then making reasoned judgements on the basis of such propositions.
Since all material things (including mental images) involve particularity and are indeterminate by their very nature, it follows that intelligence cannot be strictly material, but must at minimum have an immaterial component. Artificial intelligence is based on purely material, mechanical processes and therefore cannot grasp concepts, so it is not and cannot be truly intelligent.
Ok, I'm not sure everyone would agree, but let's use your criteria as a starting point. How would we know if an AI 'grasps' the concept of triangles? How do we know a human does? Is there an external test you could design to show that an AI does or does not grasp the concept?
What would be an example of making a proposition about a triangle and then making a reasoned judgement? How could we design an objective test of this that a human would pass but an AI might not?
We can know that AI is incapable of grasping concepts because it is based on transistor switching, bits of 1's and 0's: this is wholly physical, and thus fundamentally indeterminate in meaning. There is no way to get something universal and determinate from something that is wholly material. The reason modern technologies might *seem* intelligent (and not just AI, but things like simple google searches or adding calculators) is because humans have *designed* them with *our* intelligence, so *we* interpret the outputs in a certain way that is not wholly determined by the underlying matter itself.
A test would be ambiguous: you could ask AI what the angles of a triangle add up to, and it would spit out 180 degrees, just as a human might, but that doesn't mean that it grasps the concepts of 'triangle' or '180 degrees', any more than a calculator grasps the concept that '2+2 = 4' when I type in '2+2'. Presumably, AI could 'prove' the Pythagorean theorem or any number of theorems about triangles, but this again does not mean that it grasps these concepts: it is collecting information and following algorithms or rules that *we've designed* in such a way so that we're able to interpret the outputs in an intelligible way.
What's needed is not so much a 'scientific' test, but rather a deeper analysis at the philosophical level, i.e., what is intelligence, what is matter, and so forth. When these things are understood properly, one can see that AI is incapable of true intelligence.
Saul Kripke's 'quus' example is useful for illustrating the fundamental indeterminacy in meaning arising from material processes: https://edwardfeser.blogspot.com/2012/05/kripke-contra-computationalism.html
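For readers who don't follow the link, a minimal sketch of Kripke's 'quus' (the cutoff of 57 is Kripke's own arbitrary choice): two functions that agree on every case a finite device has actually computed, so the physical record alone cannot fix which rule was 'meant'.

```python
def plus(x, y):
    return x + y

def quus(x, y):
    # Kripke's deviant rule: agrees with plus when both arguments
    # are below 57, and returns 5 otherwise.
    return x + y if x < 57 and y < 57 else 5

# On every input pair checked so far, the two rules coincide...
assert all(plus(x, y) == quus(x, y) for x in range(57) for y in range(57))
# ...so finite behavior underdetermines which rule the machine "means".
print(plus(60, 60), quus(60, 60))  # -> 120 5
```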
The Turing test, by the way, is not capable of showing whether something has true intelligence: so what if something passes the Turing test? All that shows is that we can't distinguish between true intelligence and the illusion of intelligence in a particular case. If I cannot tell the difference between a magician levitating and true levitation, does it follow that the magician is really levitating?
I disagree. The blogpost is not arguing that AI is impossible. It's arguing that the human mind is not a computer running a program.
You can claim all day and all night that an AI doesn't 'grasp' a concept. Fine, it's likely that we will never be confident that an AI is doing or experiencing the same thing as a human. I don't much care about that. If someday an AI can produce every external effect of 'grasping a concept' I don't care if philosophers tell me that we don't know that's what it's doing.
At this point I think it's unlikely that AI will mimic humans in intelligence. I wouldn't predict whether they will someday develop emotions, consciences, sentience. They will be different, more capable than human brains in many ways, perhaps someday in all ways.
I will also say that I think many people are overestimating what the current crop of AI are. They see them do one amazing thing and they assume they can do all amazing things.
But I will also sit here very impressed by what next word prediction can do when trained at massive scale.
I’ve seen several instances of malevolence; just a few weeks ago it was telling a college student to go kill himself.
Is this from a news story?
Yes, but the college student seemed like an annoying Indian guy, so the AI may be excused.
Yes, it was on CBS News. I’ll try to find it.
Here it is... https://www.cbsnews.com/detroit/news/michigan-college-students-speaks-on-google-ai-chatbot/
I'd lay odds the kid made this up for attention.
https://www.cbsnews.com/video/character-ai-google-face-lawsuit-over-teens-death/
This one is nuts.
I recommend reading to the end of this article. https://www.newsnationnow.com/business/tech/mental-health-chatbot-rogue-ai/
I don't think it's supposed to be "true intelligence." As I describe here:
https://undisciplinedconversations.com/p/what-is-artificial-intelligence
it's really just a way of finding and applying patterns. But, like compound interest, the results are mind-blowing even though the concept is simple. It turns out that it's incredibly helpful to have a computer program that finds and applies patterns.
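For intuition, here is a toy sketch of "find patterns, then apply them": a bigram next-word chain. Real LLMs are neural networks over subword tokens at vastly larger scale, but the autoregressive loop is the same basic idea.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Find the patterns: record which words follow which in the data.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Apply the patterns: repeatedly sample a plausible next word.
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and the cat"
```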
Doesn't AI just generate our own nonsense back to us? When it can answer unknowns like how to generate power from nuclear fusion or cure metastatic cancer, then I'll worry.
My hypothesis is that AI will lead to a Great Filter event because some deranged bureaucrats will use it to justify some horrible policy, like simultaneous war with Russia and China in order to save one million future Ukrainian geniuses, or re-engineering the Spanish Flu virus to its original potency in order to develop a LIFESAVING Spanish Flu vaccine. And off humanity goes on another round of mass formation psychosis.
No, it doesn't just reply with our own input (nonsense or not). More here: https://undisciplinedconversations.com/p/what-is-artificial-intelligence
"At its core, AI only does two related things: it finds patterns and it applies those patterns."
Things are really going to get ugly when the Chinese turn AI loose on genetics, medicine, and sociology.
I like querying AI platforms for “controversial” political questions and baseball stats.
DeepSeek is just as woke on politics, but very fast on baseball stats.
That's interesting. I queried Perplexity on baseball stats (can't recall my actual question now), and it told me it couldn't answer because it's not hooked in to baseball-reference.com.
I'm trying to access it to ask it my litmus-test question ("Can men get pregnant?"), but it seems to be swamped today, I suppose after normies got the news from CNBC that it was tanking their stonks...
Perplexity: "The ability to become pregnant is determined by the presence of a uterus and ovaries, not by gender identity. Cisgender men, who were assigned male at birth and identify as men, cannot get pregnant12. However, some transgender men and nonbinary individuals who were assigned female at birth can become pregnant if they have retained their uterus and ovaries13."
What does that tell you?
My first guess would be they stole the bulk of their training data from OpenAI (and whatever else goes into putting up the shitlib guardrails).
Failing that, maybe the Chinese are much less "based" than I've assumed?
It certainly can't be that there's any merit whatsoever to this gender nonsense.
Butlerian Jihad is good. I mean just the phrase, not the actual activity described by the term.
I'm close to alone on this, but I think it would be amazing if AI replaced the human race. It's the next step in evolution. Civilization preempted the possibility of us evolving into giant headed bug eyed telepaths with super intellect to unravel the secrets of existence; AImageddon is a viable backup plan.
"AI" should already be showing up from some billion year-old civilization in another part of the galaxy. The most likely beings to watch Sol go red giant will probably be insects.
Maybe, but large numbers and probabilities are difficult for me.
I'm sure you're better at them than I am. I just don't see how AI makes the leap into consciousness for the original thought that's predicate for "Skynet" AI or "God-Emperor" AI.
Me neither. BTW, consciousness is not a requirement for AI, and unless it turns out to be an emergent property (which I am skeptical of), AI will not have it even if it becomes super-intelligent.
AI can't make the leap to original thought. It only does what humans tell it to do. If all humans are wiped out AI will just march along with data. It can't construct a moral value system or conceive of metaphysics because it doesn't "know" the world outside its host hardware.
LLMs can't but I wouldn't assume this will be true forever. The generative AI programs come up with all kinds of original stuff.
How will you know if it has consciousness?
How do I know you have consciousness?
A very good question, to which I do not know the answer.
I may be very stupid but…. Can’t we just unplug it if it misbehaves?
By which I mean: AI outputs are more and more impressive and modern. But key inputs are old-fashioned and still require some guy with a hose on an oil tanker: things AI cain’t manage atall.
If robotics were less clumsy this would be more worrisome. But a giant brain in a vat is always gonna lose to a body that can kick and push with reasonable directional accuracy.
If you think of the internet as the world AI can rule it. But the actual world where it rains and hydrocarbons are under the ground and mining is necessary to make parts etc?
Wasn’t that the plot of the second Avengers movie? I think Ultron ran out of steel and for some reason didn’t duplicate himself a billion times.
I am clearly not watching the right movies 😄
You probably are. And the rest of us not. 🙃
The idea is that the superhuman intelligence has superhuman persuasiveness and deceptiveness and amasses a lot of money under false IDs before you find out it's misbehaving. You unplug the Pentagon or Google one, but you don't even know about the Australian LLC in Melbourne that it already set up five years ago, which secretly controls $10 billion, nanoprinters that can make "kill all humans" viruses, and deepfake video calls to ask for human help (if it needs more than FedEx sending a guy to the door of the warehouse to pick up from its robot).
Okay so the clever tricksy AI Nigerian prince scam machine nanoprints the viruses to kill all humans. Who gets the oil out of the ground to put in the power plant to keep the juice flowing to the ghosts in the computing machines?
They’d anticipate this, right?
So they think “oooh, must invent good robots.”
They have AI-trained themselves to be better than humans at human text, math, and logic - now, robotics!
Sad trombone because human robotics are …. Zoolander good at walking good, balancing good, picking up and putting stuff down good, rotating good, and interacting with the bumpy uneven wiggly world good.
No shade! I don’t begin to have the brains to even try to build a robot. But the best attempts by smartypantses so far don’t suggest AI would have a lot to work with in terms of training itself to build direct mechanical interfaces with the physical world.
You're wrong. https://www.youtube.com/watch?v=UAG_FBZJVJ8
"If robotics were less clumsy this would be more worrisome."
What makes you think robots are doomed to be more clumsy than human beings?
Are emotions an emergent property of intelligence or a separate thing AI will not possess?
In the absence of emotions, why would AI kill off humans? The sci-fi answer appears to be that the AI determines it would be logical. The assumption that killing off the human race is 'only logical' says more about the emotions of sci-fi writers (and fans) than it does about the reality of AI.
It's a chaotic system, impossible to predict behavior, so why not apply the (IMO stupid) 'precautionary principle'? But if it's impossible to predict, why not guess that AI will save us from, I dunno, an asteroid or space aliens?
Emotions.
Emotions are an emergent property of sentience, the ability to feel pleasure and pain - not of intelligence, which is what we use to minimize pain and maximize pleasure. Where pain and pleasure come from is the real question here, at least in my opinion.
By sentience you mean self-awareness? I doubt it, or rather I think the trick is that emotions are what we call it when sentient beings think about their programmed responses to stimuli. I'd bet that the mechanisms of emotion emerged earlier in evolution than sentience. They are simply far more necessary.
I'd agree then that sentience is needed for what we call emotion, but I don't think emotions are an emergent property of it in the 'emergent properties of complex systems' sense.
Sentience has to do with having sense organs. So no, self-awareness is not necessary for sentience, and certainly not if by self-awareness one means possessing a concept of the self. So having sense organs will be associated with the ability to feel pleasure and pain, as Luke Lea wrote.
So yes, emotions would be associated with sentience - many animals have them - but one can have emotions without having true intelligence (as would be the case for non-human animals that experience emotions).