The vulnerability disclosure model that governed responsible security research for decades is effectively dead. Artificial intelligence can now reverse-engineer security patches into working exploits in under 30 minutes, while multiple independent researchers discover identical critical bugs within weeks of each other using LLM-assisted tools. This collapse of the traditional 90-day disclosure window has fundamentally broken the assumptions that protected systems from mass exploitation.
Key Takeaways
- AI tools enable independent researchers to rediscover the same critical bugs within weeks, not months.
- Patch diffs can be weaponized into functional exploits in under 30 minutes using large language models.
- Eleven researchers independently reported the same payment bypass vulnerability within 6 weeks.
- Nation-state actors weaponized Linux kernel vulnerabilities within days of disclosure.
- Google Project Zero’s 90-day standard and similar industry policies no longer account for AI-accelerated exploitation timelines.
How AI Broke the 90-Day Vulnerability Disclosure Policy
The traditional vulnerability disclosure policy assumed a world where security researchers worked independently, discoveries were rare, and exploit development took weeks or months. That world no longer exists. Once a vulnerability is discovered using LLM prompts or automated scanning, a wave of duplicate reports arrives within days: same root cause, slightly different wording. If independent security teams can replicate findings this quickly, what stops malicious actors from doing the same before patches reach end users?
The most alarming evidence comes from patch analysis. A React security update was converted into a working exploit in just 30 minutes using AI tools, demonstrating that the window between patch release and weaponization has collapsed from days to minutes. This directly contradicts the assumption embedded in every major vulnerability disclosure policy: that vendors have time to distribute patches before attackers can exploit the flaw at scale.
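To make that timeline concrete, here is a minimal sketch of the workflow, framed for defenders rather than attackers: pull the diff for a just-shipped security fix and ask an LLM what it fixes and where to watch. The git tags, model name, and prompt are illustrative assumptions, not details from the reported incident; the point is that the entire loop is a few lines of glue code.

```python
# Minimal sketch (assumes the OpenAI Python SDK >= 1.x and an API key in the
# environment). Tags, paths, and the model name are illustrative placeholders.
import subprocess
from openai import OpenAI

client = OpenAI()

# Grab the diff for a freshly released security fix (hypothetical tags).
diff = subprocess.run(
    ["git", "diff", "v1.2.3", "v1.2.4"],
    capture_output=True, text=True, check=True,
).stdout

# Ask the model to characterize the fix -- the same analysis that lets an
# attacker work backward from patch to exploit in minutes.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "What vulnerability class does this patch fix, and which "
                   "code paths should defenders monitor until it is deployed?\n\n"
                   + diff[:20000],  # truncate to stay within context limits
    }],
)
print(response.choices[0].message.content)
```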
Consider the case of 11 independent researchers reporting the same payment bypass bug within 6 weeks. In a pre-AI era, such parallel discoveries would be remarkable. Today, they are the norm. The coordinated disclosure framework treats embargo windows as safe havens where vendors patch in secret. But if a dozen researchers can find the same bug independently in weeks, the embargo period itself becomes the vulnerability.
Real-World Collapse: Nation-State Actors and Weaponized Patches
Theory became catastrophic reality with two consecutive Linux kernel privilege escalation vulnerabilities: Copy Fail and Dirty Frag. Nation-state actors weaponized both within days of disclosure. More alarmingly, Dirty Frag’s embargo was broken within hours by a third party who independently discovered the same bug class, proving that the disclosure model’s core assumption (coordinated silence) is no longer enforceable.
This pattern reveals why the 90-day model is dying. The policy assumes that keeping technical details secret prevents exploitation. But when AI can extract exploitable details from a patch diff in 30 minutes, and when multiple teams independently find the same vulnerability in parallel, the embargo becomes theater. Attackers do not need leaked details—they generate their own through automated reverse-engineering.
Current Disclosure Policies Are Already Obsolete
Google Project Zero’s standard disclosure policy allows vendors 90 days to patch, with public disclosure 30 days after patch availability. For actively exploited vulnerabilities, the window shrinks to 7 days. Anthropic follows a 90-day default with 14-day extensions for demonstrated progress, dropping to 7 days for critical active exploits. The U.S. Department of Commerce mandates 90 calendar days of confidentiality post-notification.
None of these timelines account for AI-assisted rediscovery and weaponization. Google Project Zero’s 2025 Reporting Transparency trial attempts a modest fix: within roughly one week of reporting, Project Zero publicly shares the vendor, affected product, report date, and 90-day deadline—but no technical details or proof-of-concept. The goal is to shrink the upstream patch gap for downstream users by signaling that a fix is coming, even if specifics remain secret.
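Based on the fields described above, a transparency-trial entry carries roughly this much information and no more. The representation below is illustrative only; the real trial publishes entries in Project Zero’s public issue tracker, and the vendor and product names here are hypothetical.

```python
# Illustrative only: the information a Reporting Transparency entry exposes.
transparency_entry = {
    "vendor": "ExampleVendor",            # hypothetical
    "product": "ExampleProduct 4.x",      # hypothetical
    "reported": "2025-07-01",
    "disclosure_deadline": "2025-09-29",  # the standard 90-day deadline
    # Deliberately absent: technical details, proof of concept, patch diff.
}
```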
But this transparency trial still assumes patches will travel reliably through supply chains before exploitation begins. In an AI-accelerated world, that assumption is increasingly fragile. Duplicate reports surge within days, each potentially triggering independent patch cycles and advisory processes. Monthly patch cycles, advisory-based responses, and sprint-based triage are no longer sufficient; they are relics of a slower threat landscape.
What Replaces the Broken Vulnerability Disclosure Policy?
No consensus replacement exists yet. Experts argue that disclosure policy must shift from embargo-based secrecy to real-time patching, AI-defensive continuous integration and deployment pipelines, and immediate P0 treatment for critical vulnerabilities. Some propose a new signal standard: affected project or component family, severity, exploitation status, patch timeline, and monitoring areas, with no exploits, offsets, or payloads.
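A minimal sketch of what such a signal record could look like, assuming Python and illustrative field names (no such schema has actually been standardized):

```python
# Sketch of the proposed signal standard: enough for defenders to act on,
# nothing an attacker can weaponize. All field names are illustrative.
from dataclasses import dataclass, field

FORBIDDEN_TERMS = ("exploit", "offset", "payload", "poc")

@dataclass
class VulnerabilitySignal:
    component_family: str   # e.g. "linux-kernel/net" -- no exact functions
    severity: str           # "critical", "high", ...
    actively_exploited: bool
    patch_timeline: str     # e.g. "fix expected within 7 days"
    monitoring_areas: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Enforce the "no exploits, offsets, or payloads" rule on the record.
        text = " ".join(
            [self.component_family, self.patch_timeline, *self.monitoring_areas]
        ).lower()
        if any(term in text for term in FORBIDDEN_TERMS):
            raise ValueError("signal must not carry exploitation details")
```

The design choice worth noting is that the no-details rule is enforced by the record itself rather than by editorial discipline.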
The core challenge is that AI has collapsed the time dimension. When patch diffs become exploits in 30 minutes and researchers independently find the same bugs in parallel, the old playbook of coordinated disclosure no longer works. Vendors cannot patch in secret. Researchers cannot assume they have weeks to report responsibly. Disclosure policy must evolve to assume adversarial parallelism: multiple actors (researchers, defenders, and attackers) will discover and weaponize the same vulnerabilities simultaneously.
Is the 90-day disclosure window completely dead?
For actively exploited or critical vulnerabilities discovered via LLM tools, yes—the 90-day window is dead. For less severe bugs with slower exploitation paths, the model may limp forward. But the trend is clear: AI-assisted bug hunting has fundamentally shortened the safe disclosure window from months to days or hours, making the traditional policy unworkable at scale.
Can vulnerability disclosure policy be fixed without abandoning responsible disclosure?
Possibly, but only if vendors move to real-time patching and monitoring rather than batch cycles. Early transparency about vulnerability existence (without technical details) can help downstream users prepare. But the core problem—that AI can weaponize patches faster than they can be distributed—may require abandoning the assumption that disclosure windows can ever be truly safe again.
What should security teams do if coordinated disclosure collapses?
Assume that any patch released publicly may be weaponized within hours. Prioritize rapid testing and deployment of critical patches, implement AI-powered threat detection in production systems, and monitor for exploitation attempts immediately after patch release. The era of leisurely patch cycles is over.
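As one concrete starting point, here is a hedged sketch of minute-level advisory watching. The feed URL and JSON shape are hypothetical placeholders; substitute a real source such as OSV, GHSA, or your vendor’s advisory feed, and a real paging integration.

```python
# Minimal sketch, assuming an hour-scale weaponization window: escalate any
# critical advisory affecting deployed components as a P0 the moment it lands.
# Requires the third-party "requests" package; FEED_URL is hypothetical.
import time
import requests

FEED_URL = "https://advisories.example.com/v1/latest"  # hypothetical feed
DEPLOYED = {"react", "linux-kernel", "openssl"}        # components in production

def page_oncall(advisory_id: str, components: list[str]) -> None:
    # Stand-in for a real pager/incident integration.
    print(f"P0: {advisory_id} affects {components}; patch and monitor now")

def poll_once() -> None:
    advisories = requests.get(FEED_URL, timeout=10).json()
    for adv in advisories:
        affected = set(adv.get("components", []))
        if adv.get("severity") == "critical" and affected & DEPLOYED:
            # Treat as P0: page on-call and start exploit-attempt monitoring
            # immediately -- do not wait for the next patch cycle.
            page_oncall(adv["id"], sorted(affected & DEPLOYED))

while True:
    poll_once()
    time.sleep(60)  # minute-level polling; daily or monthly scans are too slow
```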
The vulnerability disclosure model did not fail because vendors or researchers abandoned responsibility; it failed because artificial intelligence made its underlying assumptions impossible to maintain. A policy designed for a world of slow, independent discovery cannot survive in an era where LLMs rediscover the same critical bugs in parallel and turn patches into exploits in minutes. Until the security industry builds new frameworks that account for AI-accelerated timelines, systems will remain exposed to exploitation windows measured in hours, not months.
Edited by the All Things Geek team.
Source: Tom's Hardware


