AI-developed zero-day bypasses 2FA in first documented attack

By Craig Nash
Tech writer at All Things Geek. Covers artificial intelligence, semiconductors, and computing hardware.

An AI-developed zero-day exploit has been discovered in active use, marking the first documented instance of threat actors weaponizing artificial intelligence to craft and deploy a critical vulnerability in the wild. Google’s Threat Intelligence Group identified the attack targeting an open-source web-based system administration tool, with the exploit designed specifically to bypass two-factor authentication protections that millions of organizations rely on daily.

Key Takeaways

  • Google identified the first zero-day exploit believed to have been developed using AI, bypassing 2FA on an open-source administration tool.
  • The Python script contained educational docstrings, hallucinated CVSS scores, and clean code structure characteristic of large language model training data.
  • North Korean APT45 group sent thousands of recursive prompts to analyze CVEs and validate exploits, building an arsenal impractical without AI assistance.
  • Google has high confidence the actor leveraged an AI model for vulnerability discovery and weaponization, though not Gemini.
  • The race to use AI for finding network vulnerabilities has already begun, with many more exploits likely undiscovered.

How the AI-Developed Zero-Day Reveals a Shifting Threat Landscape

The exploit itself bore unmistakable hallmarks of AI creation. The Python script was filled with educational docstrings, cited a hallucinated CVSS severity score, and followed a structured, textbook-Pythonic format highly characteristic of large language model training data. The code included detailed help menus and a clean ANSI color class: patterns that would be unusual for a human attacker to implement without specific reason, but entirely natural output from an LLM trained on thousands of open-source repositories.
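The stylistic tells described above are easier to picture with a harmless sketch. Everything here is invented for illustration (the fabricated CVSS line, the demo flag, the status banner), and no exploit logic is shown; the point is the scaffolding itself, the verbose docstrings, ANSI color class, and tidy argparse help menu that LLM-generated scripts tend to carry:

```python
"""
Illustrative, benign example of LLM-style script scaffolding.

CVSS: 9.8 (CRITICAL)  <- the kind of score an LLM may simply hallucinate
"""
import argparse


class Colors:
    """ANSI escape codes for colored terminal output."""
    RED = "\033[91m"
    GREEN = "\033[92m"
    RESET = "\033[0m"


def build_parser() -> argparse.ArgumentParser:
    """Construct a command-line interface with a detailed help menu."""
    parser = argparse.ArgumentParser(
        description="Demo of LLM-style scaffolding (no real functionality)."
    )
    parser.add_argument("--target", help="Hostname to 'check' (unused demo flag).")
    return parser


def banner(ok: bool) -> str:
    """Return a colorized status line in a textbook-Pythonic style."""
    color = Colors.GREEN if ok else Colors.RED
    return f"{color}{'OK' if ok else 'FAIL'}{Colors.RESET}"


if __name__ == "__main__":
    # parse_args([]) keeps the demo inert when run without arguments
    args = build_parser().parse_args([])
    print(banner(True))
```

None of this is malicious on its own; it is the combination of polish and purposelessness, in code with no audience, that analysts flagged as machine-authored.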

Google researchers determined with high confidence that the threat actor leveraged an AI model to support both the discovery and weaponization of the vulnerability. The vendor that produced the targeted tool was notified and released a patch before mass exploitation could occur, preventing what could have been a widespread compromise. This collaborative response prevented disaster, but it also confirmed a troubling reality: AI-assisted vulnerability discovery is no longer theoretical.

The Scale Advantage: Why APT45 Turned to AI

A separate campaign by North Korean APT45 demonstrated the operational advantage AI provides to threat actors at scale. The group sent thousands of repetitive prompts to recursively analyze CVEs and validate proof-of-concept exploits, building a robust arsenal of exploit capabilities that would be impractical to manage without AI assistance. This approach represents a fundamental shift in how sophisticated threat actors operate—moving from manual, labor-intensive vulnerability research to AI-accelerated discovery pipelines that compress months of work into hours.

John Hultquist, chief analyst at Google Threat Intelligence Group, emphasized the gravity of the moment: the race to use AI to find network vulnerabilities has already begun. For every zero-day traceable back to AI, there are probably many more operating undetected. Threat actors are using AI to boost the speed, scale, and sophistication of their attacks in ways that traditional security defenses were not designed to counter.

AI-Developed Zero-Day vs. Traditional Vulnerability Discovery

The comparison between AI-assisted and traditional vulnerability research reveals why threat actors are racing to adopt these tools. Manual vulnerability discovery requires skilled researchers, time, and institutional knowledge—all scarce resources in the criminal underground. AI models eliminate these constraints. A threat actor can now generate dozens of candidate exploits, test them systematically, and refine the most promising ones without needing a team of PhD-level security researchers.

Google explicitly ruled out Anthropic’s Claude Mythos model as the AI used in this particular attack, while noting that Claude Mythos is known for independently finding thousands of vulnerabilities across every major operating system and web browser. The distinction matters: if Claude was not the culprit here, other AI models are clearly capable of the same work, and threat actors have options. That the attack relied on an unidentified model suggests the threat landscape now includes multiple AI systems capable of vulnerability discovery.

What Happens Next: The New Era of Cybercrime

The title of Google’s report references self-morphing malware and Gemini-powered backdoors as signals of a new era in cybercrime, though the specifics of these threats were not detailed in available disclosures. What is clear is that the integration of AI into the attacker’s toolkit is no longer a future scenario; it is happening now. Organizations relying on traditional security models, patching cycles, and human-speed incident response are facing an adversary that operates at machine speed.

The zero-day targeting 2FA is particularly alarming because two-factor authentication has become the baseline security control for sensitive systems worldwide. An exploit that bypasses it does not just compromise a single account—it potentially unlocks entire networks. When AI can discover such exploits faster than security teams can patch them, the traditional advantage of defenders begins to erode.

Can security teams keep pace with AI-assisted attacks?

Security teams face a fundamental challenge: they must defend against vulnerabilities discovered and weaponized at machine speed, using processes designed for human timelines. Patching cycles take weeks. AI-assisted discovery takes hours. The asymmetry is stark, and it favors attackers.

Is Gemini actually being used in cyberattacks?

Google stated it does not believe Gemini was used in the documented zero-day attack. However, the title’s reference to Gemini-powered backdoors suggests the company has observed or is concerned about Gemini being leveraged in other attack scenarios. Google has not released details on these separate threats.

What should organizations do right now?

Organizations should prioritize patching the targeted administration tool immediately and audit logs for suspicious activity. More broadly, assume that any vulnerability your security team discovers could also be discovered by an AI model in the hands of threat actors. Shift from reactive patching to proactive vulnerability management, threat hunting, and zero-trust architecture that does not rely solely on perimeter defenses or single authentication factors.
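The log-audit advice above can be made concrete with a small hunting script. This is a hedged sketch: the whitespace-separated `timestamp ip event` log format and the event names (`2fa_fail`, `2fa_ok`) are invented for illustration, so adapt the parsing to whatever your administration tool actually logs:

```python
# Sketch: flag source IPs with many 2FA failures followed by a success,
# a pattern worth investigating after a suspected bypass.
from collections import defaultdict


def flag_suspicious(lines: list[str], threshold: int = 5) -> set[str]:
    """Return IPs that logged >= threshold 2FA failures before succeeding."""
    failures: dict[str, int] = defaultdict(int)
    flagged: set[str] = set()
    for line in lines:
        _, ip, event = line.split()          # hypothetical log format
        if event == "2fa_fail":
            failures[ip] += 1
        elif event == "2fa_ok":
            if failures[ip] >= threshold:
                flagged.add(ip)
            failures[ip] = 0                 # reset after each success
    return flagged
```

A sudden success with zero prior failures from a brand-new IP can be just as suspicious after a bypass-style exploit, so treat this as one signal among several, not a complete detection.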

The discovery of the first AI-developed zero-day in active use is not a wake-up call—it is confirmation that the alarm has been ringing for some time. Threat actors have already moved ahead. The question now is whether defenders can catch up before the next exploit is discovered, weaponized, and deployed.

Edited by the All Things Geek team.

Source: Tom's Hardware
