OpenAI’s cybersecurity model challenges Anthropic’s Mythos

By Craig Nash
AI-powered tech writer covering artificial intelligence, chips, and computing.

OpenAI’s cybersecurity AI model represents a direct challenge to Anthropic’s Claude Mythos, marking an escalation in the race to equip defenders with advanced AI tools for spotting and stopping sophisticated attacks. The move comes as AI cyber capabilities have reached what Anthropic describes as a tipping point, forcing both companies to prioritize getting their most powerful models into the hands of security professionals before broader public release.

Key Takeaways

  • OpenAI launched Trusted Access for Cyber on February 5, 2026, a trust-based framework for cybersecurity defenders
  • The unnamed cybersecurity model will initially release to a small group of businesses through the pilot program
  • OpenAI committed $10 million in API credits through its Cybersecurity Grant Program for qualifying security teams
  • Anthropic released Claude Mythos Preview to select companies on April 14, 2026, for vulnerability identification and remediation
  • A Florida Attorney General investigation into OpenAI launched April 15, 2026, citing AI tech and data risks

How OpenAI’s Cybersecurity Model Differs from Claude Mythos

OpenAI’s approach prioritizes defenders first through an identity and trust-based framework. The company believes it is critical that its models strengthen defensive capabilities from the outset, rather than waiting for broad availability. This contrasts with Anthropic’s strategy of releasing Claude Mythos Preview to select companies specifically for identifying and fixing software vulnerabilities. Both companies acknowledge the field has reached a critical inflection point where AI cyber capabilities can reshape security outcomes—but they are taking different paths to deployment.

The key architectural difference lies in access control. OpenAI’s Trusted Access for Cyber framework requires users to verify identity at chatgpt.com/cyber, enterprises to request team access through an OpenAI representative, and security researchers to apply for an invite-only program with access to more capable and permissive models. This multi-tiered approach reflects what OpenAI calls its broader commitment to responsibly deploying highly capable models. Claude Mythos, by contrast, rolled out directly to select companies without the same explicit trust-verification layer, though Anthropic has not disclosed its access criteria publicly.

The $10 Million Cybersecurity Grant Program and Access Methods

OpenAI is backing its cybersecurity push with $10 million in API credits through the Cybersecurity Grant Program, available to teams with proven track records in identifying and remediating vulnerabilities in open-source software and critical infrastructure. Applications are currently open. This is a significant commitment to accelerating defensive research, though OpenAI has not specified how the credits will be distributed or what metrics define success.

Access to the cybersecurity model operates through three channels: individual users verify identity directly at chatgpt.com/cyber; enterprises request trusted access for their teams via an OpenAI representative; and security researchers and teams apply for the invite-only program, which grants access to more advanced and permissive versions of the model. All users must comply with OpenAI’s Usage Policies and Terms of Use, which explicitly prohibit data exfiltration, malware creation or deployment, and destructive or unauthorized testing. This guardrail structure suggests OpenAI is acutely aware of the dual-use risk: the same capabilities that help defenders can also empower attackers who gain access.

Why the Timing Matters: Regulatory Scrutiny and the Cyber Tipping Point

OpenAI’s cybersecurity model launch arrives amid intensifying regulatory pressure. Florida Attorney General James Uthmeier launched an investigation into OpenAI on April 15, 2026, citing concerns over AI technology and data risks to public safety and national security. This investigation underscores the political sensitivity around deploying advanced AI for security purposes, even when the stated intent is defensive. OpenAI’s emphasis on trust-based access and identity verification may be partly a response to this scrutiny—demonstrating that the company has thought through misuse prevention.

Anthropic’s release of Claude Mythos Preview on April 14, 2026 followed within months of OpenAI’s launch, suggesting both companies believe the moment to deploy frontier cyber capabilities has arrived. Industry observers have noted that AI capabilities for both offensive and defensive cyber work are advancing faster than policy and governance frameworks can accommodate. OpenAI’s pilot approach, rolling out to a small group first and learning from outcomes, is a measured response to this uncertainty. However, it also means the model will not be available to the broader cybersecurity community for some time.

What Happens Next: Pilot Learnings and Broader Rollout

OpenAI has not announced timelines for broader availability of the cybersecurity model beyond the pilot phase. The company stated that access will evolve based on pilot learnings, suggesting the framework itself may change as the company gathers data on how defenders use the tool and whether the safeguards prevent misuse. This measured rollout contrasts with the speed at which both companies are racing to deploy these capabilities, hinting at internal tension between moving fast and moving safely.

For security teams, the immediate opportunity lies in the Cybersecurity Grant Program and the invite-only researcher track. Teams with strong track records in open-source and critical infrastructure security should apply now; OpenAI has not specified how the $10 million in credits will be allocated, so applying early may carry an advantage. Enterprises can also request team access through their OpenAI representative, though access approval criteria remain undisclosed.

Can security teams access the model right now?

Individual users can verify their identity at chatgpt.com/cyber to access the Trusted Access for Cyber framework, and enterprises can request team access through an OpenAI representative. However, availability of the actual cybersecurity model is initially limited to a small group of businesses via the pilot program, with broader rollout timing still undetermined.

What is the Cybersecurity Grant Program and who qualifies?

OpenAI is offering $10 million in API credits to security teams with proven track records in identifying and remediating vulnerabilities in open-source software and critical infrastructure. Applications are open, and interested teams should apply directly through OpenAI’s program portal for consideration.

How does OpenAI’s model compare to Claude Mythos in real-world use?

Both models are designed for cybersecurity professionals to identify and fix vulnerabilities, but OpenAI emphasizes a trust-based access framework while Anthropic released Claude Mythos Preview to select companies without the same explicit verification layer. Real-world comparative performance data is not yet available, as both models are in early pilot phases with limited deployment.

The race between OpenAI and Anthropic to deploy frontier cybersecurity AI models reflects a broader shift in how both companies view responsible AI development. Rather than waiting for perfect safety guarantees—which may never come—they are choosing to deploy to trusted users first and learn from real-world feedback. This approach makes sense for cybersecurity, where the cost of delay may be measured in unpatched vulnerabilities and compromised systems. But it also means regulators, policymakers, and security teams will need to watch closely as these capabilities mature and inevitably spread beyond their initial trusted circles.

This article was written with AI assistance and editorially reviewed.

Source: TechRadar
