Daybreak is OpenAI’s answer to the AI arms race in cybersecurity


OpenAI has unveiled Daybreak, a cybersecurity initiative that combines the company’s large language models with its Codex agentic framework to help organizations identify, patch, and validate software vulnerabilities across the development lifecycle.

The platform is built around three model tiers: GPT-5.5 for general-purpose use, GPT-5.5 with Trusted Access for Cyber for verified defensive security workflows, and GPT-5.5-Cyber, a more permissive variant intended for specialized use cases such as authorized red-teaming and penetration testing. Each tier carries different safeguard levels and access controls, with the most capable tier paired with stronger identity verification and account-level oversight.

“For cyber defense, it means seeing risk earlier, acting sooner, and helping make software resilient by design,” a company blog post reads. OpenAI did not respond to CyberScoop’s request for further comment.

Daybreak arrives weeks after Anthropic unveiled Project Glasswing, built around Claude Mythos Preview, a cybersecurity-focused AI system Anthropic has described as capable of autonomously identifying software vulnerabilities at scale. Anthropic has kept access to Mythos tightly restricted, citing both safety concerns and national security considerations, and has not made the model commercially available.

A tiered approach to access

The structure of Daybreak reflects a deliberate effort to calibrate access against the risk these models present. The standard GPT-5.5 model is available for general enterprise and developer work. GPT-5.5 with Trusted Access for Cyber is aimed at security professionals engaged in defensive workflows, including vulnerability triage, malware analysis, detection engineering, and patch validation. GPT-5.5-Cyber, the highest-capability tier, is currently in preview and reserved for specialized workflows under controlled conditions.

OpenAI has framed the access controls as a response to the dual-use nature of the underlying technology.
The same AI capabilities that allow defenders to understand relationships across codebases, identify subtle vulnerabilities, and accelerate remediation could be misused, the company acknowledged. The platform pairs expanded capability with what OpenAI describes as trust, verification, proportional safeguards, and accountability.

“We don’t think it’s practical or appropriate to centrally decide who gets to defend themselves,” the company said in an earlier blog post about the Trusted Access for Cyber program. “Instead, we aim to enable as many legitimate defenders as possible, with access grounded in verification, trust signals, and accountability.”

Industry partners and government discussions

Several major technology and cybersecurity companies are already working within the Trusted Access for Cyber framework, including Cisco, Oracle, CrowdStrike, Palo Alto Networks, Cloudflare, Fortinet, Akamai, and Zscaler.

Anthony Grieco, Cisco’s chief security and trust officer, described the technology as a “force multiplier for defenders,” noting that models like GPT-5.5 are changing the pace of security operations, from incident investigation to proactive exposure reduction. He added that the value of the technology lies not in the model alone but in the enterprise framework built around it.

At the federal level, the Trump administration is weighing how Anthropic’s Mythos will be used to protect government networks. Federal CIO Greg Barbaccia told CyberScoop last month that he sees both its potential to strengthen federal cyber defenses and the significant uncertainties that remain about how it would perform in real-world conditions. Elsewhere, the European Commission is in discussions with OpenAI about potential access to its advanced AI models for identifying cybersecurity vulnerabilities.

Other industry experts told CyberScoop that while these models are very good at finding vulnerabilities, that’s only part of the puzzle when it comes to an enterprise security plan.
“The question that determines breach impact is not how fast you find the vulnerability. It’s how far a compromised identity can move before anyone knows it’s compromised,” said Doug Merritt, chairman and CEO of Aviatrix, a cloud security company. “That’s an infrastructure problem — what is each workload allowed to reach, on every path, independent of whether the breach has been detected? No patching tool answers that. Containment does.”

Jared Atkinson, CTO of SpecterOps, an identity management company, said defenders need to focus on what attackers can reach once inside, while still working to identify vulnerabilities faster.

“AI will accelerate portions of offensive security operations, but it does not fundamentally change the underlying problem defenders face. Most organizations still struggle to see and manage the attack paths that connect initial access to critical systems and data,” he said. “As these tools mature, visibility into identity exposure and post-compromise attack paths becomes increasingly urgent.”

A widening competition

The competitive cybersecurity dynamic between Anthropic and OpenAI has been building for months. OpenAI publicly announced the Trusted Access for Cyber program before Anthropic’s Glasswing rollout and has since expanded it to thousands of individuals and organizations. In April, the company released GPT-5.4 Cyber, a model variant specifically optimized for cybersecurity tasks, including testing and vulnerability research, governed by know-your-customer and identity verification requirements.

Cybersecurity experts in the United States and United Kingdom have described Claude Mythos as a meaningful improvement over previous frontier models in identifying cybersecurity vulnerabilities, though debate continues over its practical impact on information security.
GPT-5.4 Cyber has similarly been fine-tuned for testing and vulnerability research, with OpenAI indicating it intends to make iterative improvements as the program matures.

OpenAI’s stated intent is to expand access to Daybreak’s most capable models over time, working alongside industry and government partners as it deploys what it describes as “increasingly more cyber-capable models” through an iterative deployment approach. The company has indicated it is cautious about exercising too much centralized control over which sectors or industries participate in the program.

CEO Sam Altman framed the initiative in broad terms. “AI is already good and about to get super good at cybersecurity,” he wrote on X. “We’d like to start working with as many companies as possible now to help them continuously secure themselves.”

The post Daybreak is OpenAI’s answer to the AI arms race in cybersecurity appeared first on CyberScoop.