The rise of humanlike chatbots detracts from developing AI for the human good

Grok is a generative artificial intelligence (genAI) chatbot by xAI that, according to Elon Musk, is “the smartest AI in the world.” Grok’s latest upgrade is Ani, a porn-enabled anime girlfriend, recently joined by a boyfriend informed by Twilight and 50 Shades of Grey.

This summer, both xAI and OpenAI launched updated versions of their chatbots. Each touted improved performance but, more notably, new personalities. xAI introduced Ani; OpenAI rolled out a colder-by-default GPT-5 with four personas to replace its unfailingly sycophantic GPT-4o model.

Like Google DeepMind and Anthropic, both companies insist they are building AI to “benefit all humanity” and “advance human comprehension.” Anthropic claims, at least rhetorically, to be doing so responsibly. But their design choices suggest otherwise. Instead of equipping every person with an AI assistant — a research collaborator with PhD-level intelligence — some of today’s leaders have released anthropomorphized AI systems that operate first as friends, lovers and therapists.

Read more: More people are considering AI lovers, and we shouldn't judge

As researchers and experts in AI policy and impact, we argue that what’s being sold as scientific infrastructure increasingly resembles science fiction gone awry. These chatbots are engineered not as tools for discovery, but as companions designed to foster parasocial, non-reciprocal bonds.

Human/non-human

The core problem is anthropomorphism: the projection of human traits onto non-human entities. As cognitive scientist Pascal Boyer explains, our minds are tuned to interpret even minimal cues in social terms. What once aided our ancestors’ survival now fuels AI companies by capturing the minds of their users.

AI companies claim to work towards equipping every person with an AI assistant. (Matheus Bertelli/Pexels), CC BY

When machines speak, gesture or simulate emotion, they trigger those same evolved instincts: instead of recognizing a machine, users perceive a human. AI companies have pushed on regardless, building systems that exploit these biases. The justification is that this makes interaction feel seamless and intuitive. But the consequences can render anthropomorphic design deceptive and dishonest.

Consequences of anthropomorphic design

In its mildest form, anthropomorphic design prompts users to respond as if there were another human on the other side of the exchange; it can be as simple as saying “thank you.”

The stakes grow higher when anthropomorphism leads users to believe the system is conscious: that it feels pain, reciprocates affection or understands their problems. Although new research suggests the criteria for consciousness could one day be met, false attributions of consciousness and emotion have already led to extreme outcomes, such as users marrying their AI companions.

Anthropomorphic design does not always inspire love, however. For some users it has led to self-harm, or to harming others, after forming unhealthy attachments. Others behave as though AI could be humiliated or manipulated, lashing out abusively as if it were a human target.
Recognizing this, Anthropic, the first company to hire an AI welfare expert, has given its Claude models the unusual capacity to end such conversations.

Across this spectrum, anthropomorphic design pulls users away from leveraging AI’s true capabilities, forcing us to confront an urgent question: is anthropomorphism a design flaw — or, more critically, a crisis?

De-anthropomorphizing AI

The obvious solution seems to be stripping AI systems of their humanity. American philosopher and cognitive scientist Daniel Dennett argued that this may be humanity’s only hope. But such a solution is far from simple, because the anthropomorphization of these systems has already led users to form deep emotional attachments.

When OpenAI replaced GPT-4o with GPT-5 as the default in ChatGPT, some users expressed genuine distress and mourned the loss of 4o. What they mourned, however, was the loss of its speech patterns and the way it used language.

A public installation in Paris by the artist Rero. (Mathias Reding/Unsplash), CC BY

This is what makes anthropomorphism such a problematic design model. Because of the impressive language abilities of these systems, users attribute mentality to them — and their engineered personas exploit this further. Instead of seeing the machine for what it is — impressively competent but not human — users read into its speech patterns. While AI pioneer Geoffrey Hinton warns that these systems may be dangerously competent, something far more insidious results from the fact that these systems are anthropomorphized.

Read more: A neuroscientist explains why it's impossible for AI to 'understand' language

Flaw in the design

AI companies are increasingly catering to people’s desires for AI companions, whether sexbot or therapist.

Anthropomorphism is what makes these systems dangerous today, because humans have intentionally built them to mimic us and exploit our instincts. If AI consciousness proves impossible, these design choices will be the cause of human suffering. But in a hypothetical world in which AI does attain consciousness, our choice to force it into a human-shaped mind — for our own convenience and entertainment, replicated across the world’s data centres — may invent an entirely new kind, and scale, of suffering.

Read more: Increasingly sophisticated AI systems can perform empathy, but their use in mental health care raises ethical questions

The real danger of anthropomorphic AI isn’t some near or distant future where machines take over. The danger is here, now, hiding in the illusion that these systems are like us.

This is not the model that will “benefit all humanity” (as OpenAI promises) or “help us understand the universe” (as xAI’s Elon Musk claims). For the sake of social and scientific good, we must resist anthropomorphic design and begin the work of de-anthropomorphizing AI.

Mark Daley receives funding from NSERC, SSHRC and the Schmidt Initiative for Long Covid.

Carson Johnston is supported in part by funding from the Social Sciences and Humanities Research Council.