[R] The Illusion of Thinking | Apple Machine Learning Research


Research Publication

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity

Main Findings

The Complexity Cliff: Reasoning models don't degrade gradually; they fail catastrophically. Beyond specific complexity thresholds, even the most advanced models (Claude 3.7 Sonnet, DeepSeek-R1, o3-mini) plummet from near-perfect accuracy to complete failure. The sharp discontinuity suggests these systems lack true compositional reasoning: they are pattern-matching within their training distribution rather than building genuine logical structures.

The Inference Paradox: When compute is held constant, a striking pattern emerges across three complexity regimes. On simple problems, reasoning models are wasteful; standard LLMs achieve better results with fewer tokens. Only at medium complexity do reasoning models justify their computational overhead. At high complexity, all approaches fail equally, revealing that more "thinking" tokens cannot overcome fundamental architectural limitations. The implication: current reasoning approaches may be solving the wrong problem.

The Giving-Up Phenomenon: Perhaps the study's most puzzling finding: as problems approach critical difficulty, reasoning models reduce their thinking effort, well before hitting token limits. This self-limiting behavior suggests the models possess some implicit awareness of their own limitations, abandoning deeper exploration when problems exceed their capabilities. They appear to "know" when they don't know, but lack the tools to push beyond.

The Overthinking Trap: Examining the reasoning traces reveals a troubling pattern. On simple problems, models find correct answers quickly but keep exploring dead ends: computational waste masquerading as thoroughness. Medium-complexity problems show productive exploration that eventually yields solutions. Complex problems trigger endless, fruitless wandering. The progression from overthinking to productive search to complete breakdown maps the boundary between what these models truly understand and what they merely approximate.

The Execution Failure: The Tower of Hanoi experiments deliver a sobering verdict: even when the step-by-step algorithm is provided in the prompt (the standard recursive procedure is sketched at the end of this post for reference), models fail at the same complexity points. The challenge isn't search; it's execution. These systems struggle with the mechanical application of logical rules, suggesting their "reasoning" is more associative than algorithmic. The finding challenges the narrative that these models have learned generalizable reasoning procedures; instead, they appear to have memorized reasoning patterns that break down under novel demands.

Interesting Commentary

https://hardcoresoftware.learningbyshipping.com/p/233-the-illusion-of-thinking-thoughts

https://www.seangoedecke.com/illusion-of-thinking/

https://theahura.substack.com/p/a-few-quick-thoughts-on-apples-illusion
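For reference on the Execution Failure point above: the sketch below is the standard recursive Tower of Hanoi procedure, not the authors' exact prompt text, but it illustrates the kind of explicit algorithm the models were handed and still failed to carry out past a certain disk count. An n-disk instance takes 2^n - 1 moves, which is also why disk count works as a clean complexity dial.

    # Standard recursive Tower of Hanoi solver (a minimal sketch, not the paper's prompt).
    # Solving n disks requires 2^n - 1 moves, so difficulty scales exponentially with n.
    def hanoi(n, source, target, spare, moves):
        """Append the move sequence for n disks from source to target onto `moves`."""
        if n == 0:
            return
        hanoi(n - 1, source, spare, target, moves)   # clear the top n-1 disks out of the way
        moves.append((source, target))               # move the largest remaining disk
        hanoi(n - 1, spare, target, source, moves)   # restack the n-1 disks on top of it

    if __name__ == "__main__":
        for n in (3, 7, 10):
            moves = []
            hanoi(n, "A", "C", "B", moves)
            print(f"{n} disks -> {len(moves)} moves (2^{n} - 1 = {2**n - 1})")

Mechanically following this recursion is trivial for a computer; the paper's point is that reasoning models, given the procedure, still break down at roughly the same disk counts where their unaided solutions collapse.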