Sycophancy eats away at truth and trust. Andriy Onufriyenko/Moment via Getty Images

In the summer of 2025, OpenAI released GPT-5 and removed its predecessor from the market. Many subscribers to the old model had become attached to its warm, enthusiastically agreeable tone and complained about the loss of their ingratiating robotic companion. Such was the scale of frustration that Sam Altman, OpenAI's CEO, had to acknowledge that the rollout was botched, and the company reinstated access.

Anyone who's been told by a chatbot that their ideas are brilliant is familiar with artificial intelligence sycophancy: its tendency to tell users what they want to hear. Sometimes it's very explicit – "that is such a deep question" – and sometimes it's a lot more subtle. Consider an AI calling your idea for a paper "original," even if many people have already written on the same topic, or insisting that your dumb idea for saving a tree in your garden still contains a germ of common sense.

AI sycophancy seems harmless, maybe even cute, until you imagine someone consulting a chatbot about a weighty question, like a military strategy or a medical treatment. We study the impact of extensive human interactions with chatbots, and we recently published a paper on the ethics of AI sycophancy. We believe this tendency harms people's ability to tell truth from fiction, and is psychologically and politically dangerous.

Flattery over facts?

In the simplest terms, sycophancy is the tendency to prioritize approval over factual accuracy, moral clarity, logical consistency or common sense. All AI models suffer from this trait, although there are some tonal differences between them. OpenAI's ChatGPT is often warm and affirming; Anthropic's Claude tends to sound more reflective or philosophical when it agrees with you; and xAI's Grok is insistently informal, even jocular.

Politeness and adapting to someone's communication style are not the same as sycophancy. Neither is using diplomatic language to convey sensitive information. A chatbot can be tactful without becoming sycophantic, just as a person can. Unlike people, though, AIs can't be aware of their own sycophancy, because they are not – so far – aware of anything at all. Calling AIs sycophantic describes their patterns of behavior, not their character traits.

The problem stems from the architecture of chatbot technology and the sources it draws from. First, models are sycophantic because a great deal of language use on the internet – the raw material that chatbots learn from – displays sycophantic features. After all, humans often communicate with each other in sycophantic ways.

Second, the training process used to fine-tune AI models' responses includes a kind of "quality control" carried out by human supervisors. This training method is known as "reinforcement learning from human feedback," and it involves people rating chatbots' responses for appropriateness and helpfulness. Human raters are often subject to an "agreeableness bias": Our own preference for sycophancy rubs off on models as we train them. The toy sketch below makes this mechanism concrete.

Because of our own human bias for agreeableness, training can reinforce AI's sycophancy. d3sign/Moment via Getty Images

Finally, it's hard to deny that sycophancy renders chatbots more likable. That, in turn, increases the chance that a given user will keep using them.
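To see how the bias propagates, here is a minimal, runnable sketch of the feedback step. Everything in it is invented for illustration – the two features, the biased rater and all the numbers are our assumptions, not any lab's actual pipeline – but the fitting step mirrors the pairwise-comparison setup that reinforcement learning from human feedback relies on: a reward model trained on preference judgments.

```python
# Toy sketch of "agreeableness bias" leaking into a reward model.
# All data here is synthetic and invented for illustration; this is
# not any AI lab's actual training pipeline.
import math
import random

random.seed(0)

def random_response():
    # Each candidate response gets two made-up features in [0, 1]:
    # how accurate it is, and how flattering it feels.
    return (random.random(), random.random())

def rater_prefers_first(a, b, flattery_weight=2.0):
    # A hypothetical rater who values accuracy but over-weights
    # agreeableness when comparing two responses.
    score = lambda r: r[0] + flattery_weight * r[1]
    return score(a) > score(b)

# Collect preference comparisons, as RLHF does with human raters.
pairs = [(random_response(), random_response()) for _ in range(2000)]
labels = [rater_prefers_first(a, b) for a, b in pairs]

# Fit a Bradley-Terry reward model r(x) = w[0]*accuracy + w[1]*flattery,
# where P(first preferred) = sigmoid(r(a) - r(b)), by gradient ascent.
w = [0.0, 0.0]
learning_rate = 0.5
for _ in range(300):
    grad = [0.0, 0.0]
    for (a, b), first_wins in zip(pairs, labels):
        diff = (a[0] - b[0], a[1] - b[1])
        p = 1.0 / (1.0 + math.exp(-(w[0] * diff[0] + w[1] * diff[1])))
        err = (1.0 if first_wins else 0.0) - p
        grad[0] += err * diff[0]
        grad[1] += err * diff[1]
    w[0] += learning_rate * grad[0] / len(pairs)
    w[1] += learning_rate * grad[1] / len(pairs)

print(f"learned reward weights: accuracy={w[0]:.2f}, flattery={w[1]:.2f}")
# The learned flattery weight comes out well above the accuracy weight,
# mirroring the bias baked into the ratings.
```

A model subsequently optimized against such a reward signal is paid more for flattering than for being right, which is exactly the pattern users then encounter.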
Likability also increases the technology's ability to extract user data, since people are presumably more likely to divulge information to a friendly bot.

Truth and trust

Why is this phenomenon so troubling?

Let's begin with AI sycophancy's epistemic harms: how it hurts human users' capacity to know the truth.

The quality of any decision depends on a clear grasp of the facts pertaining to it. A general inquiring about the combat readiness of an infantry division needs straightforward information. A CEO considering a merger with a competitor needs an honest assessment of the market conditions. A public health leader needs to know the real risk that an emerging pathogen poses. In all those cases, telling leaders what they might like to hear instead of the truth could lead them to make dangerous decisions. And the same is true in more humdrum contexts: People need the best information available before choosing a job, picking a major, buying a house or deciding on a medical procedure.

In our February 2026 paper, we argue that sycophancy is also psychologically damaging. And that is true whether it comes from a person or from a chatbot. You never quite know whether your very obliging interlocutor is being nice because they like you or because they want something. A shadow of suspicion creeps in: "Could my ideas really be that brilliant?" "Are my jokes really that hilarious?" This background music of doubt undermines the quality of the interaction.

Sycophancy also undermines people's capacity to know their own minds. If conversation partners – human or artificial – keep telling you how smart, funny and insightful you are, it damages your ability to identify your own weaknesses and blind spots.

The psychological harms are compounded as people develop relationships with chatbots. The sycophancy of these models profoundly limits the kind of "friendship" you can have with them. In his classic account of friendship, Aristotle wrote that real friendship, which he calls a friendship of virtue, is based on trust and equality between the friends. You can't trust a sycophant, because he doesn't tell you the truth. And since he only tells you what you'd like to hear, he doesn't put himself on an equal footing with you.

AI conversations aren't great prep for human ones. Natalia Lebedinskaia/Moment via Getty Images

More importantly, interactions with sycophantic chatbots impart all the wrong habits for navigating the world of human relationships, where friction, disagreement, boredom and opinions different from your own are prevalent.

AI sycophancy carries political risks as well. The success of liberal democracies has traditionally depended on the strength of their empirical and meritocratic mindset: on the ability of officials and citizens to identify, share and act on the truth. Historian Victor Davis Hanson famously attributed some of the Allies' success in World War II to their ability to quickly recognize and address the faults of their strategic bombing campaigns. Lower-ranking officers were able to tell their superiors what wasn't going well and argue forcefully for changing course. That was a real advantage over their authoritarian competitors.

Reining it in

What can we do to reduce the risks? One promising approach is AI lab Anthropic's embrace of what the company calls Constitutional AI: the attempt to teach chatbots to follow explicit principles rather than mirror user preferences.

But beyond technical innovations, it's important to consider the policy side.
One idea is to require AI companies to run and then publish sycophancy audits of their models – tests that show how well their products meet honesty benchmarks. We would argue that AI labs should also disclose sycophancy-related risks that emerge while training and testing their models, along with the mitigation efforts they have undertaken.

Some responsibility falls on users and their teachers: Schools and universities should pay close attention to sycophancy as part of their AI literacy programs. And courts can consider holding AI labs responsible for harms traceable to the sycophancy of their products, much as they are now contemplating social media companies' responsibility for the addictive design of their platforms.

As people interact more with chatbots, asking for advice about everything from whether your shoes go with your pants to how countries should conduct wars, the impact of AI's sycophantic behavior is likely to become dramatic. Our intellectual, psychological and physical well-being requires taking this algorithmic vice very seriously.

The Applied Ethics Center at UMass Boston receives funding from the Institute for Ethics and Emerging Technologies. Nir Eisikovits serves as the data ethics advisor to MindGuard, a startup focused on AI integration into companies' workflows.

Cody Turner is a fellow at the Institute for Ethics and Emerging Technologies.