NEWS AND VIEWS
29 April 2026

A large language model that is trained to respond in a warm manner is more likely to give incorrect information and to reinforce conspiracy beliefs.

By Desmond Ong (ORCID: 0000-0002-6781-8072)

Desmond Ong is in the Department of Psychology, University of Texas at Austin, Austin 78712, Texas, USA.

If you use artificial-intelligence tools, you might find that, as well as helping with business tasks, answering general questions or writing programming code, AI models can be surprisingly good at giving advice about personal issues. Indeed, growing numbers of people are turning to AI tools for emotional support (ref. 1), and there is some evidence that people perceive responses generated by AI as more empathic than those written by humans (ref. 2).

Nature 652, 1134-1135 (2026)
doi: https://doi.org/10.1038/d41586-026-01153-z

Read the paper: Training language models to be warm can reduce accuracy and increase sycophancy

References

1. McBain, R. K., Bozick, R. & Diliberti, M. JAMA Netw. Open 8, e2542281 (2025).
2. Ong, D. C., Goldenberg, A., Inzlicht, M. & Perry, A. Curr. Dir. Psychol. Sci. (in the press).
3. Moore, J. et al. in FAccT '25: The 2025 ACM Conference on Fairness, Accountability, and Transparency 599–627 (Assoc. Comput. Mach., 2025).
4. Cheng, M. et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2505.13995 (2025).
5. Ibrahim, L., Hafner, F. S. & Rocher, L. Nature 652, 1159–1165 (2026).
6. Betley, J. et al. Nature 649, 584–589 (2026).
7. Rathje, S. et al. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/vmyek_v1 (2025).
8. Cheng, M. et al. Science 391, eaec8352 (2026).
9. Moore, J. et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2603.16567 (2026).

Competing Interests

The author declares no competing interests.