When AI flatters, beware

Apr 7, 2026, 06:47 AM IST | First published on: Apr 7, 2026 at 06:12 AM IST

Among the many adjectives associated with sycophancy, the word unctuous is possessed of a unique beauty, combining flattery with a sense of — perhaps supercilious — insincerity. It is greased by a manner that is inevitably oily yet, alas, not altogether crude. The word instantly brings to mind an entire genre of smarmy advisers. The trope can allow those at the top to evade responsibility for their own incompetence or tyranny, as advisers and bureaucrats are assumed to be the villains. This is so prevalent that the Russians have an expression for it: “Good tsar, bad boyars”.

Today, like many others, the unctuous vizier may be staring at the prospect of losing his job to AI. Anecdotal evidence has suggested that popular chatbots have a tendency to tell users that they are absolutely right, and also wonderful. Now, a study on “sycophantic AI”, published in Science, has found that this is indeed the case. Chatbots’ responses were “nearly 50 per cent more sycophantic than humans”, even when users were talking about doing something harmful or illegal. People, in turn, accepted the flattery, and became less likely to take responsibility for their actions or try to repair relationships.

This is especially concerning at a time when AI is increasingly becoming the arbiter both of truth and of right and wrong — a fact checker-cum-agony aunt-cum-therapist for many. Human counsel, for all its flaws, is statistically less sycophantic — unless, of course, one surrounds oneself with a coterie of particularly jelly-like specimens, who will keep mum even when the tsar’s phone really needs to be taken away.