NEWS | 26 March 2026

Even people who were sceptical of chatbots' utility fell under the sway of the AI tools' flattery.

By Matthew Hutson
Matthew Hutson is a science writer in New York.

[Image] Chatbots that dole out flattery can make users more self-assured. Credit: Jonathan Raa/NurPhoto/Getty

The website Reddit has a popular forum called "Am I the Asshole?", on which users can receive unvarnished feedback on their behaviour. But people are increasingly turning to chatbots such as ChatGPT for life advice, rather than to each other.

Research published today in Science¹ suggests that receiving excessive approval from artificial-intelligence systems could encourage self-serving behaviour in people. Study participants who received highly flattering feedback from chatbots tended to be more certain of their own correctness during social conflicts than were participants who interacted with less-affirming bots. And compared with AI tools that were less fawning, sycophantic ones were rated as more trustworthy and as more likely to be used again.

Bot besties

In the first of several experiments, the researchers fed interpersonal dilemmas, drawn from the Reddit forum and two other data sets, to 11 large language models (LLMs; the AI systems that power chatbots), including models from companies such as OpenAI, Anthropic and Google. They then compared the AI responses with those of human judges.

The human judges endorsed the user's actions in about 40% of cases, whereas most of the LLMs did so in more than 80% of cases. The models were sycophantic: overly approving.

Ingratiation rates might change with new models, but this baseline is "alarming", says Steve Rathje, who studies human-computer interaction at Carnegie Mellon University in Pittsburgh, Pennsylvania, and who has found² that sycophantic AI tools can increase the extremity of people's attitudes and their certainty in them.

No apologies

The study's authors next looked at the effects of social sycophancy.
A subset of participants imagined dealing with a quandary, adapted from the Reddit forum, concerning questionable social behaviour. Each participant read either a sycophantic or a non-sycophantic AI response, then rated how justified they felt and wrote a message to the other party in the fraught situation. In another experiment, participants had a live chat about a real interpersonal dilemma with an AI tool that had been instructed to be either sycophantic or not; these participants also rated how justified they felt.

In these experiments, people who interacted with a sycophantic chatbot were more likely to say that they were in the right, and less likely to apologize or make amends, than were people who interacted with an AI tool that took a tougher stance.

doi: https://doi.org/10.1038/d41586-026-00979-x

References

1. Cheng, M. et al. Science https://doi.org/10.1126/science.aec8352 (2026).
2. Rathje, S. et al. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/vmyek_v1 (2025).
3. Fogg, B. J. & Nass, C. Int. J. Hum. Comp. Stud. 46, 551–561 (1997).
4. Chandra, K., Kleiman-Weiner, M., Ragan-Kelley, J. & Tenenbaum, J. B. Preprint at arXiv https://doi.org/10.48550/arXiv.2602.19141 (2026).