A neuroscientist explains why it’s impossible for AI to ‘understand’ language


Language that refers to neural networks in AI is misleading. (Shutterstock)

As meaning-makers, we use spoken or signed language to understand our experiences in the world around us. The emergence of generative artificial intelligence such as ChatGPT (using large language models) calls into question the very notion of how to define “meaning.”

One popular characterization of AI tools is that they “understand” what they are doing. Nobel laureate and AI pioneer Geoffrey Hinton said: “What’s really surprised me is how good neural networks are at understanding natural language — that happened much faster than I thought…. And I’m still amazed that they really do understand what they’re saying.”

Hinton repeated this claim in an interview with Adam Smith, chief scientific officer for Nobel Prize Outreach. In it, Hinton stated that “neural nets are much better at processing language than anything ever produced by the Chomskyan school of linguistics.”

Chomskyan linguistics refers to American linguist Noam Chomsky’s theories about the nature of human language and its development. Chomsky proposes that there is a universal grammar innate in humans, which allows for the acquisition of any language from birth.

I’ve been researching how humans understand language since the 1990s, including more than 20 years of studies on the neuroscience of language. This has included measuring brainwave activity as people read or listen to sentences. Given my experience, I have to respectfully disagree with the idea that AI can “understand” — despite the growing popularity of this belief.

Geoffrey Hinton’s response to receiving the Nobel prize in physics for his work in AI.

Generating text

First, it’s unfortunate that most people conflate text on a screen with natural language. Written text is related to — but not the same thing as — language. For example, the same language can be represented by vastly different visual symbols. Look at Hindi and Urdu, for instance.
At conversational levels, these are mutually intelligible and therefore considered the same language by linguists. However, they use entirely different writing scripts. The same is true for Serbian and Croatian. Written text is not the same thing as “language.”

Next, let’s take a look at the claim that machine learning algorithms “understand” natural language. Linguistic communication mostly happens face-to-face, in a particular environmental context shared between the speaker and listener, alongside cues such as spoken tone and pitch, eye contact, and facial and emotional expressions.

The importance of context

There is a lot more to understanding what a person is saying than merely being able to comprehend their words. Even babies, who are not yet experts in language, can comprehend context cues.

Take, for example, the simple sentence “I’m pregnant,” and its interpretations in different contexts. If uttered by me, at my age, it’s likely my husband would drop dead with disbelief. Compare that level of understanding and response to a teenager telling her boyfriend about an unplanned pregnancy, or a wife telling her husband the news after years of fertility treatments. In each case, the message recipient ascribes a different sort of meaning — and understanding — to the very same sentence.

In my own recent research, I have shown that even an individual’s emotional state can alter brainwave patterns when processing the meaning of a sentence. Our brains (and thus our thoughts and mental processes) are never without emotional context, as other neuroscientists have also pointed out.

So, while some computer code can respond to human language in the form of text, it does not come close to capturing what humans — and their brains — accomplish in their understanding.

It’s worth remembering that when workers in AI talk about neural networks, they mean computer algorithms, not the actual, biological neural networks that characterize brain structure and function.
Imagine constantly confusing the word “flight” (as in birds migrating) with “flight” (as in airline routes) — this could lead to some serious misunderstandings!

Finally, let’s examine the claim about neural networks processing language better than theories produced by Chomskyan linguistics. This field assumes that all human languages can be understood via grammatical systems (in addition to context), and that these systems are related to some universal grammar.

Chomsky conducted research on syntactic theory as a paper-and-pencil theoretician. He did not conduct experiments on the psychological or neural bases of language comprehension. His ideas in linguistics are silent on the mechanisms underlying sentence processing and understanding. What the Chomskyan school of linguistics does do, however, is ask how human infants and toddlers can learn language with such ease, barring any neurobiological deficits or physical trauma.

There are at least 7,000 languages on the planet, and no one gets to pick where they are born. That means the human brain must be ready at birth to comprehend and learn the language of its community.

Regardless of where a child is born, the human brain is capable of acquiring any language. (Unsplash/tommao wang), CC BY

From this fact about language development, Chomsky posited an (abstract) innate module for language learning — not processing. From a neurobiological standpoint, the brain has to be ready to understand language from birth. While there are plenty of examples of language specialization in infants, the precise neural mechanisms are still unknown — but not unknowable.

Objects of study become unknowable, however, when scientific terms are misused or misapplied. And this is precisely the danger: conflating AI with human understanding can lead to dangerous consequences.

Veena D. Dwivedi receives funding from the Canada Foundation for Innovation, the Social Sciences and Humanities Research Council of Canada, and Brock University.