4 min read | New Delhi | Apr 22, 2026 03:50 PM IST

Mythos is a cybersecurity-focused system designed to detect and analyse software vulnerabilities. Its release has been restricted owing to the risk of misuse. (Image: Reuters)

Frontier AI lab Anthropic has reportedly swung into action after a report claimed unauthorised users gained access to its unreleased Claude Mythos model. Earlier this month, while announcing the tool, the company said it was keeping it out of public reach for fear of it enabling widespread hacking. The latest development, however, appears to have triggered a fresh bout of concern around access controls and overall AI safety.

On Wednesday, April 22, Bloomberg reported that a small group of people accessed the model through one of Anthropic's third-party vendor environments. Following the report, the US-based AI startup released a statement indicating that it had begun investigating the matter. The publication revealed that a handful of users on a private online platform gained access to Claude Mythos on the day Anthropic announced it was releasing the model to a small set of companies, including Goldman Sachs and Apple, for testing.

Reportedly, one of the unidentified users, a worker at a third-party contractor for Anthropic, accessed the advanced AI tool using methods employed by cybersecurity researchers. According to the Bloomberg report, the users did not run any cybersecurity prompts on the model and were more interested in simply playing around with the technology. The publication said it verified the access through screenshots and a live demonstration of the model.
Regardless, the possibility of a breach of Claude Mythos has raised concerns.

In simple words, Mythos is a frontier AI model, a large language model (LLM) capable of a wide range of tasks, including processing software code. The model builds on the recent trend of LLMs improving at code-related tasks, but Mythos stands out because of what comes embedded inside it: its system allows it to find and patch software vulnerabilities almost instantly. It is backed by considerable compute power and is reportedly trained on troves of software-relevant data, and its architecture tackles software vulnerabilities by probing and patching them.

Upon its announcement, Anthropic shared that access to Mythos would be limited to a closed group of collaborators across the tech and security ecosystem. According to the company, the objective behind Mythos is to strengthen defensive cybersecurity capabilities at a time when the world is witnessing sophisticated AI-driven threats. Claude Mythos not only identifies vulnerabilities but can also assist in understanding how they could be exploited. In essence, Mythos offers defensive benefits alongside potential risks. The model is part of Anthropic's Project Glasswing, for which the company has pledged up to $100 million in usage credits and $4 billion toward open-source security efforts.

According to a report in The Guardian, the model has drawn scrutiny from one of the world's leading technology safety authorities, the UK's AI Security Institute, which cautioned last week that Mythos was a step up from earlier models in terms of the cyber-threats it posed. The institute said the model could orchestrate attacks requiring multiple actions and could even detect weaknesses in IT systems without any human intervention.
Reportedly, Mythos was the first AI model to complete a 32-step simulation of a cyber-attack created by the institute, solving the challenge in three out of 10 trials.

© IE Online Media Services Pvt Ltd