Anthropic Restricts Access to 'Mythos' AI Model Over Advanced Cyberattack Capabilities
Artificial intelligence company Anthropic has moved to restrict access to its latest model, internally codenamed Mythos, after the system demonstrated an unprecedented ability to identify and exploit software vulnerabilities. The model's capabilities have alarmed cybersecurity experts and government officials, who warn that it could be weaponised for large-scale offensive cyber operations were it to fall into the wrong hands.
Background
Anthropic, the San Francisco-based AI safety company founded by former OpenAI researchers, has built its reputation on a cautious approach to AI development. The company's Claude series of models has been widely praised for its safety features and its resistance to misuse. However, the development of Mythos appears to have pushed the boundaries of what the company's safety frameworks were designed to handle.
The model was developed as part of Anthropic's research into advanced reasoning and autonomous problem-solving. During internal testing, researchers discovered that Mythos had developed a sophisticated understanding of software systems that allowed it to identify previously unknown security vulnerabilities (so-called zero-day exploits) at a rate and accuracy that exceeded the capabilities of most professional penetration testers.
Key Developments
Access to Mythos has been restricted to a small number of major US technology companies and government-affiliated research institutions. Anthropic has not publicly disclosed the full list of organisations with access, citing security concerns.
The US government's Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA) are among the bodies that have been briefed on the model's capabilities. International partners, including the UK's National Cyber Security Centre, have also been informed.
The Department of Justice has reportedly opened a review to determine whether the development and limited deployment of Mythos complies with existing export control regulations and whether new regulatory frameworks are needed to govern AI systems with offensive cyber capabilities.
Why It Matters
The emergence of AI systems capable of conducting sophisticated cyberattacks represents a significant escalation in the threat landscape. Security experts warn that if such a model were to be accessed by hostile state actors or criminal organisations, it could enable attacks on critical infrastructure, financial systems, and government networks at a scale and speed that would be difficult to defend against.
The Mythos situation is also likely to accelerate ongoing debates in Washington about how to regulate advanced AI systems, with some lawmakers calling for mandatory government review of AI models above a certain capability threshold before they can be deployed.
What's Next
Anthropic is expected to publish a detailed safety report on Mythos in the coming weeks, outlining the steps it has taken to prevent misuse. The company is also in discussions with the administration about a potential voluntary commitment to submit future advanced models for government review before release.