UK Regulators Rush to Assess Risks of Anthropic's Claude Mythos — the AI 'Too Dangerous to Release'
British financial regulators are engaged in urgent discussions with the National Cyber Security Centre (NCSC) to assess the potential risks posed by Anthropic's latest AI model, Claude Mythos — a system so powerful that the company has decided not to release it to the public.
The Bank of England, the Financial Conduct Authority (FCA), and Treasury officials are expected to brief representatives from major British banks, insurers, and exchanges on the cybersecurity vulnerabilities that Mythos has already identified across critical IT systems.
A Model That Found Decades-Old Vulnerabilities
Anthropic claims that within weeks of deployment, Claude Mythos identified thousands of vulnerabilities across multiple major operating systems and web browsers — including a 27-year-old bug in critical security infrastructure and multiple flaws in the Linux kernel. Some of these weaknesses had gone unnoticed by human security researchers for decades.
The company has described Mythos as "strikingly capable" at coding and security tasks, but it has also acknowledged that the model is significantly better than its predecessors at exploiting those same weaknesses if directed by a malicious user — making it, in Anthropic's own words, a potentially "extremely dangerous weapon in the wrong hands."
Project Glasswing: Controlled Access Only
Rather than a public release, Anthropic has made a limited version of Mythos available to a consortium of technology companies — including Microsoft, Apple, Amazon, and Google — under a programme called Project Glasswing. The intention is to give these companies a head start in identifying and patching vulnerabilities before malicious actors develop similar capabilities.
Danny Kruger, a Reform UK MP, wrote to the government urging engagement with Anthropic due to what he described as "catastrophic cybersecurity risks to the UK" that Claude Mythos could present if it fell into the wrong hands.
Scepticism and Scrutiny
Not everyone has taken Anthropic's framing at face value. AI critic Gary Marcus suggested the company's CEO, Dario Amodei, has "graduated from the same school of hype and exaggeration" as OpenAI's Sam Altman. Dr Heidy Khlaaf, chief AI scientist at the AI Now Institute, questioned whether the "purposely vague language" in Anthropic's announcement was an attempt to "garner further investment without scrutiny."
The episode underlines the growing tension between the pace of AI development and the ability of regulators and governments to keep up — a challenge that will be central to the UK's forthcoming AI Bill, expected to follow the King's Speech in May.