Cursor on Thursday released Composer 2, the third generation of its in-house coding model. The model outperforms Anthropic’s Opus 4.6 on some key coding benchmarks, and does so at a fraction of the cost.

The new Cursor model costs as little as $0.50 per million input tokens and $2.50 per million output tokens. There is also a fast mode, which will be the default option, but it costs 3x as much, at $1.50/$7.50 per million input/output tokens. This fast mode offers the same intelligence, just at a higher price. In comparison, Opus 4.6 costs $5/$25 and OpenAI’s GPT-5.4 $2.50/$15.

On Terminal-Bench 2.0, a benchmark that measures how well AI agents handle real-world software engineering tasks in a terminal environment, the model scores 59.8%, beating Anthropic’s Claude Opus 4.6 at 58.0%. That’s still well behind OpenAI’s GPT-5.4 at 75.1%, but it shows how quickly Cursor has caught up with the competition as it accelerates its in-house model work.

Since Cursor is model-agnostic, developers can choose which model to run or use Cursor’s Auto mode, which selects the best model based on a trade-off between intelligence, speed, and cost.

Credit: Cursor.

5 Months, 3 Generations

Composer 2 is the third Composer release since October. Cursor shipped the original Composer model, along with its 2.0 platform redesign, in October 2025. Composer 1.5 followed this February, and at the time, it was still trailing Opus 4.6 by 10% on Terminal-Bench 2.0. Previous Composer models applied reinforcement learning to an existing base model without modifying the base itself.
Cursor notes that Composer 2 is the first version where Cursor ran continued pre-training, which the company says provided “a far stronger base to scale our reinforcement learning.”

Training the model to compress its own memory

The key technical innovation in the new model is a training technique Cursor calls ‘self-summarization.’ “We trained Composer for long-horizon tasks through a reinforcement learning process called self-summarization. By making self-summarization part of Composer’s training, we can get training signal from trajectories much longer than the model’s max context window,” the company writes in its announcement.

Credit: Cursor.

Agentic coding tends to generate long action histories that quickly exceed a model’s context window. Traditionally, compaction either creates a compact text-based summary of the work the model previously did, or uses a sliding context window that drops older context in favor of more recent work. “These approaches to compaction share the downside that they can cause the model to forget critical information from the context, reducing its efficacy as it advances through long-running tasks,” Cursor argues.

Cursor’s approach, which the team calls compaction-in-the-loop reinforcement learning, builds summarization directly into the training loop. When a generation hits a token-length trigger, the model pauses and compresses its own context to roughly 1,000 tokens, down from 5,000 or more with more traditional methods. Because the reinforcement learning reward the team used when training the model covers the entire chain, including the summarization steps, the model learns which details to keep and which to discard.

According to Cursor’s research post, self-summarization reduces compaction errors by 50%.

The post Cursor’s Composer 2 beats Opus 4.6 on coding benchmarks at a fraction of the price appeared first on The New Stack.
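Cursor has not published implementation details, but the control flow it describes can be sketched in Python. Everything here is an illustrative assumption: `ToyModel`, `Step`, the word-based `token_count`, and the toy thresholds all stand in for the real model, tokenizer, and budgets (the article cites a ~1,000-token summary target).

```python
from dataclasses import dataclass

COMPACTION_TRIGGER = 40  # toy token threshold that triggers self-summarization
SUMMARY_BUDGET = 10      # toy summary size (the article cites roughly 1,000 tokens)

@dataclass
class Step:
    text: str
    is_final: bool = False

def token_count(context):
    # Crude stand-in for a real tokenizer: count whitespace-separated words.
    return sum(len(s.text.split()) for s in context)

class ToyModel:
    """Hypothetical stand-in for the real model; emits fixed-size steps."""
    def __init__(self, steps_to_finish=10):
        self.remaining = steps_to_finish

    def generate_step(self, context):
        self.remaining -= 1
        return Step("edit " * 8, is_final=(self.remaining == 0))

    def summarize(self, context, budget):
        # A real model writes the summary itself; this stub just truncates.
        words = " ".join(s.text for s in context).split()[:budget]
        return Step(" ".join(words))

def run_agent(task, model, max_steps=100):
    """Agentic loop with compaction in the loop: when the context grows past
    the trigger, the model compresses its own history rather than a fixed
    heuristic dropping it."""
    context, trajectory, compactions = [task], [], 0
    for _ in range(max_steps):
        action = model.generate_step(context)
        trajectory.append(action)
        context.append(action)
        if action.is_final:
            break
        if token_count(context) > COMPACTION_TRIGGER:
            summary = model.summarize(context, budget=SUMMARY_BUDGET)
            trajectory.append(summary)  # the summary is part of the rewarded chain
            context = [task, summary]   # history replaced by the model's own summary
            compactions += 1
    # An RL reward computed over the full trajectory, summaries included,
    # is what teaches the model which details to keep and which to discard.
    return trajectory, compactions
```

The key design point the article highlights is that the summarization step is itself a model generation inside the rewarded trajectory, so the training signal can come from runs far longer than the context window.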