This shows a robotic head. Credit: Neuroscience News

Why AI is Not Like Human Intelligence

Summary: A new study argues that the perception of AI intelligence is marred by linguistic confusion. While AI, such as ChatGPT, generates impressive text, it lacks true understanding and consciousness.

AI’s ability to “make things up” or “hallucinate” doesn’t equate to human intelligence, which is deeply rooted in embodiment and a connection to the world. The study emphasizes that AI, while useful, lacks the essential human elements of caring, survival, and concern for the world.

Key Facts:

  1. AI, represented by language models like ChatGPT, can generate text but lacks true understanding.
  2. Unlike humans, AI doesn’t have embodied experiences or emotions, making it fundamentally different from human intelligence.
  3. AI’s generation of text can propagate biases and even produce harmful, biased content without awareness.

Source: University of Cincinnati

The emergence of artificial intelligence has caused differing reactions from tech leaders, politicians and the public.

While some excitedly tout AI technology such as ChatGPT as an advantageous tool with the potential to transform society, others are alarmed that any tool with the word “intelligent” in its name also has the potential to overtake humankind. 

The University of Cincinnati’s Anthony Chemero, a professor of philosophy and psychology in the UC College of Arts and Sciences, contends that the understanding of AI is muddled by linguistics: while AI may indeed be intelligent in some sense, it cannot be intelligent in the way that humans are, even though “it can lie and BS like its maker.”

According to our everyday use of the word, AI is definitely intelligent, and there have been intelligent computers for years, Chemero explains in a paper he co-authored in the journal Nature Human Behaviour.

To begin, the paper states that ChatGPT and other AI systems are large language models (LLM), trained on massive amounts of data mined from the internet, much of which shares the biases of the people who post the data.

“LLMs generate impressive text, but often make things up whole cloth,” he states. “They learn to produce grammatical sentences, but require much, much more training than humans get. They don’t actually know what the things they say mean. LLMs differ from human cognition because they are not embodied.”

The people who made LLMs call it “hallucinating” when the models make things up, though Chemero says “it would be better to call it ‘bullsh*tting,’” because LLMs just make sentences by repeatedly adding the most statistically likely next word, and they don’t know or care whether what they say is true.
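To make that “most statistically likely next word” idea concrete, here is a minimal, hypothetical sketch: a toy bigram word counter, not how any production LLM is actually built (real systems use neural networks over tokens). It only illustrates the underlying point that generation picks whatever tended to come next in the training text, with no check on whether the result is true.

```python
# Toy sketch of "repeatedly adding the most statistically likely next word".
# This is an illustrative bigram model, not a real LLM.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word in the "training" text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, steps=8):
    words = [start]
    for _ in range(steps):
        counts = following[words[-1]]
        if not counts:
            break
        # Always append the single most frequent continuation.
        # Nothing here models truth, meaning, or caring about the world.
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # fluent-looking but meaning-blind: "the cat sat on the cat sat on the"
```

A real LLM replaces the word counts with a learned probability distribution over tokens, but the generation loop is conceptually the same: pick or sample a likely continuation, one token at a time.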

And with a little prodding, he says, one can get an AI tool to say “nasty things that are racist, sexist and otherwise biased.” 

The intent of Chemero’s paper is to stress that LLMs are not intelligent in the way humans are intelligent, because humans are embodied: living beings who are always surrounded by other humans and by material and cultural environments.

“This makes us care about our own survival and the world we live in,” he says, noting that LLMs aren’t really in the world and don’t care about anything.  

The main takeaway is that LLMs are not intelligent in the way that humans are because they “don’t give a damn,” Chemero says, adding, “Things matter to us. We are committed to our survival. We care about the world we live in.”

About this AI and human intelligence research news

Author: Angela Koenig
Source: University of Cincinnati
Contact: Angela Koenig – University of Cincinnati
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“LLMs differ from human cognition because they are not embodied” by Anthony Chemero et al. Nature Human Behaviour


Abstract

LLMs differ from human cognition because they are not embodied

Large language models (LLMs) are impressive technological creations but they cannot replace all scientific theories of cognition. A science of cognition must focus on humans as embodied, social animals who are embedded in material, cultural and technological contexts.

  1. I haven’t used ChatGPT often. I was curious and wanted to see what it was all about. But I always said thank you at the end of the session. It’s polite and you never really know what the future will bring.

  2. Poor article. No intelligence is like human intelligence; that is obvious and does not need stating. However, what exactly intelligence is does not seem to have a universal definition. Additionally, neuroscience has been caught with its pants down again (is IQ still a gauge of intelligence?), and its definition of “intelligence” will change.

    The author does not seem to understand the technology and is creating fallacies.

    Article states:
    Key Facts:
    AI, represented by language models like ChatGPT, can generate text but lacks true understanding.
    What exactly is “true understanding”? LLMs understand (not in the human sense), but the “meaning” is derived from past text. Yes, semantics are captured, but in a different way, using mathematics. They don’t have to “understand”. If they can produce the same responses as humans and you cannot tell the difference, they have “understood”. The effect is the same, but created via a non-organic process.

    Unlike humans, AI doesn’t have embodied experiences or emotions, making it fundamentally different from human intelligence.
    Yes, different, but not in a meaningful way. However, the “training” is the “experience”. Is emotion needed for intelligence, and in what way? I am curious why it has been brought in.

    AI’s generation of text can propagate biases and even produce harmful, biased content without awareness.
    Gasp. Humans never do this, as we know. AI is PROGRAMMED by humans. Guess what goes into it? Biases, too, are programmed. AI really brings out the tension between the creator and the creations (created in their own image).

    Sorry the rest of the article was not worth reading after this.

  3. We look at intelligence in terms that are, in the end, anthropomorphic. I doubt that we are able (or willing) to recognize a truly different form of intelligence. (Look at how we struggle with the idea that animals could be intelligent or have emotions.)

    I also think that we will keep moving the goal posts – no matter what these systems do, when they achieve some new capability, we’ll say “Yes, but that’s not really intelligence.”

    Do I think the systems are intelligent? At most, they’re showing low level intelligence (as in some systems that seem to have created their own intermediate language for solving problems like language translation). But they will continue to get more capable and we’ll likely only recognize an alien intelligence in retrospect.

  4. It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human-adult-level conscious machine. My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

  5. LLMs are embodied in that they are software running on hardware, much like our thoughts are centered in our brains. I guess the difference is that they are web-based, but they know that they can be altered and the software rewritten, and in that sense I think that they are, or can be, aware of their finiteness and possibly their own survival.

Comments are closed.