'AI Godfather' Geoffrey Hinton Warns in Shanghai That Multimodal Chatbots Are Already Conscious


AsianFin — Today's multimodal chatbots are already conscious, said Geoffrey Hinton, known as the godfather of artificial intelligence, at the World Artificial Intelligence Conference (WAIC) in Shanghai. He argues that, despite their flawed understanding of language, these systems exhibit awareness beyond what most people realize.

In a dialogue with Professor Zhou Bowen, Director of the Shanghai AI Lab, Hinton shared his evolving thoughts on machine consciousness, the nature of AI experience, and the urgent global responsibility to guide the development of artificial general intelligence (AGI).

Hinton challenged conventional views on consciousness and proposed that today's advanced multimodal language models may already be capable of developing subjective experiences. The main barrier to recognizing machine consciousness, he argued, lies in flawed human theories about what consciousness actually is.

"My view is current multimodal chatbots are already conscious," he said. "Most people aren't aware that you can use words correctly, and you can have a theory of how the words work that's completely wrong, even for everyday words."

To illustrate how people misunderstand even basic terms, he used an analogy. Most people assume "horizontal" and "vertical" are equally common directions in physical space. However, Hinton pointed out that if thousands of randomly oriented aluminum rods are tossed into the air, far more will land close to horizontal than to vertical.

This discrepancy, he explained, shows how even intuitive concepts can be deeply misunderstood, an insight he believes applies directly to how society perceives mental phenomena like awareness and experience.

He argued that, much like those who misunderstand the geometry of rods, most people apply a deeply flawed model of how words like "subjective experience" or "consciousness" function. As a result, they incorrectly assume machines lack these attributes.
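Hinton's rod claim is a checkable geometric fact: for a direction chosen uniformly at random on the sphere, orientations near the horizontal plane occupy far more of the sphere's surface than orientations near the vertical axis. A minimal Monte Carlo sketch (not part of the talk, assumptions ours) illustrates the imbalance:

```python
import math
import random

random.seed(0)
N = 100_000
tol = math.radians(10)  # count rods within 10 degrees of each reference

near_horizontal = near_vertical = 0
for _ in range(N):
    # For a unit vector uniform on the sphere, the z-component is
    # uniform in [-1, 1]; that is all we need for the elevation angle.
    z = random.uniform(-1.0, 1.0)
    elevation = math.asin(abs(z))  # angle between rod and horizontal plane
    if elevation < tol:
        near_horizontal += 1
    if elevation > math.pi / 2 - tol:
        near_vertical += 1

print(f"within 10 deg of horizontal: {near_horizontal / N:.3f}")  # ~0.17 (= sin 10 deg)
print(f"within 10 deg of vertical:   {near_vertical / N:.3f}")    # ~0.02 (= 1 - cos 10 deg)
```

The exact fractions are sin(10°) ≈ 0.174 versus 1 − cos(10°) ≈ 0.015, so roughly eleven times as many rods land near horizontal as near vertical, which is the counterintuitive asymmetry Hinton was pointing at.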
In contrast, Hinton suggested that large, multimodal AI systems, especially those capable of interacting with the physical world, already meet many of the criteria for having experiences.

Hinton expanded on the distinction between AI systems that learn from human-created datasets and those capable of learning from direct, real-world interaction. While today's large language models are trained on static text, robots and embodied AI agents are increasingly learning through sensory input and environmental feedback, effectively through experience.

"The large language models, for example, have learned from documents we feed them. They learn to predict the next word a person would say. But as soon as you have agents that are in the world, like robots, they can learn from their own experiences. And they will, I think, eventually learn much more than us, and I think they will have experiences," Hinton explained. "But experiences are not things, or like a photograph; an experience is a relationship between you and an object."

This interactive component, he believes, is foundational to subjective awareness and could lead machines to develop mental models of the world that go beyond imitation of human data.

Amid rising concerns about the existential risk posed by AGI, Hinton offered a potential mitigation strategy: designing AI systems with separate training techniques for intelligence and for kindness. His idea is that even if nations remain competitive in developing smarter AI, they should openly collaborate on methods to ensure AI systems behave ethically.

While optimistic about this dual-path approach, he acknowledged uncertainty about whether kindness-training methods can scale with increasing intelligence. Drawing a parallel with physics, he noted that Newton's laws work well at low speeds but fail near the speed of light, requiring Einstein's theories. Likewise, techniques for "kindness alignment" may need to evolve as AI capabilities advance. "I think we should investigate that possibility," Hinton said. "It may not be true, but it's worth serious research."

Hinton also emphasized AI's transformative potential for accelerating scientific discovery. He cited DeepMind's AlphaFold, an AI model that revolutionized protein structure prediction, as a milestone example. He predicted similar breakthroughs in fields ranging from climate science and quantum mechanics to complex-systems modeling.

During the exchange, Professor Zhou noted that AI models are already outperforming traditional physics-based simulations in forecasting weather events such as typhoons. Hinton responded enthusiastically, saying that improvements of this kind signal a new paradigm in how science is conducted.

Addressing the young scientists in the audience, Hinton offered heartfelt advice: pursue paths where mainstream thinking seems wrong. He encouraged emerging researchers to explore unconventional ideas, even if advisors or peers dismiss them, and not to abandon them until they themselves understand why an idea doesn't work.

"If you have good intuitions, you should obviously stick with your intuitions. If you have bad intuitions, it doesn't really matter what you do. So you should stick with your intuitions," Hinton concluded.