AI models think humans are smarter than they actually are, study finds

New research suggests that popular artificial intelligence models view humans a little too optimistically. A study has found that leading AI models, including OpenAI’s ChatGPT and Anthropic’s Claude, tend to presume people are more reasonable and logical than they really are, especially in strategic situations.The study, reported by TechXplore, tested models like ChatGPT-4o and Claude-Sonnet-4 using a classic game theory experiment called the “Keynesian beauty contest.”The Experiment Researchers had the systems play a game called “Guess the Number.” In this challenge, each player selects a number between zero and 100. To win, a player must choose the number closest to half the average of all entries.The trick is that you cannot simply pick your favorite number; you must predict what everyone else will pick, and then reason one step further.The Results The AI models were given descriptions of their human opponents, ranging from first-year undergraduates to experienced game theorists. While the models did show some strategic thinking by adjusting their guesses based on who they were facing, they consistently failed in one key area.The AI assumed humans would use deep, perfect logic. Consequently, the models often “played too smart,” choosing mathematically optimal numbers (often close to zero) that failed to account for the irrational or simpler choices real humans make.Why It Matters The researchers argue that this gap has significant implications. As AI is increasingly used for economic forecasting and business negotiations, its tendency to overestimate human rationality could lead to poor predictions. To be truly effective, AI must learn not just how humans should think, but how we actually think.