The Trump administration is considering mandatory AI model vetting before public release, a stunning reversal from the president’s previous positioning as a champion of unfettered AI development. The White House is actively discussing an executive order that would establish government oversight procedures for new artificial intelligence models, according to reporting from The New York Times.
Key Takeaways
- Trump administration is considering an executive order requiring government vetting of AI models before public release
- The NSA, Office of the National Cyber Director, and Director of National Intelligence could oversee the review process
- Senior officials from the administration briefed executives at Anthropic, Google, and OpenAI on preliminary plans
- The proposed system would grant government early access to models without necessarily blocking their release
- A White House official called the current discussion “speculation” and said any formal announcement would come from Trump himself
The Policy Reversal Behind Mandatory AI Model Vetting
This represents a dramatic shift in regulatory posture. Trump had previously marketed himself as the candidate who would protect AI companies from excessive government interference. Now his administration is considering mandatory AI model vetting through an executive order that would create what officials are calling an “AI working group” composed of both tech executives and government officials. The group would develop specific oversight procedures for evaluating frontier models before they reach the public.
The reversal signals internal disagreement about how aggressively to regulate artificial intelligence. Dean Ball, a former senior adviser on AI in the Trump administration, acknowledged the tension, saying officials are “trying to avoid overregulation while keeping pace with the technology” and calling it a “tricky balance.” This framing suggests the administration wants to appear tough on AI safety without crushing innovation, a politically delicate position.
How the Proposed Mandatory AI Model Vetting System Would Work
Under the proposed framework, mandatory AI model vetting would function as a review process rather than an outright ban. The system would grant government agencies early access to new models for evaluation but would not necessarily block their public release. This approach mirrors the UK’s AI Security Institute model, where government bodies evaluate frontier models against safety benchmarks both before and after deployment. The NSA, the Office of the National Cyber Director, and the Director of National Intelligence could oversee the review process, according to officials who spoke to The New York Times.
The critical detail here is that mandatory AI model vetting under this proposal aims for transparency and assessment rather than gatekeeping. Agencies would examine models, identify risks, and potentially issue guidance—but the power to block a release remains unclear. This distinction matters enormously for AI companies deciding whether to cooperate with the process or challenge it legally.
Industry Reaction and What Comes Next
Senior administration officials have already briefed executives from Anthropic, Google, and OpenAI on some of the plans, in meetings held the week before the Times’ report was published. This direct engagement suggests the administration is trying to build consensus before formalizing the policy, though it also signals that major AI companies are aware a regulatory shift is coming.
However, a White House official told The New York Times that talk of an executive order is “speculation,” and that any announcement would come from Trump himself. This carefully worded denial leaves room for either moving forward or abandoning the proposal entirely. The administration may be testing industry reaction before committing to formal policy, or it may be signaling that nothing is finalized yet.
Is Mandatory AI Model Vetting Actually Coming?
The honest answer is: nobody outside the White House knows. The proposal exists only in discussion form, and while executives have been briefed on preliminary plans, no formal announcement has been made. The administration’s own characterization of the talk as “speculation” rather than confirmed policy suggests internal deliberation is still ongoing. What seems certain is that the hands-off approach to AI regulation that characterized Trump’s first term is no longer the baseline assumption in his second.
What would mandatory AI model vetting actually require companies to do?
Reporting so far does not specify what concrete review procedures would entail, what safety benchmarks would be used, or how long vetting would take. The proposed system remains deliberately vague at this stage, with the “AI working group” tasked with developing those specifics rather than the executive order defining them upfront.
How does this compare to international AI regulation?
The UK’s AI Security Institute already evaluates frontier models against safety benchmarks before and after deployment. The Trump proposal appears designed to create a similar framework in the US, suggesting potential convergence of international AI governance approaches rather than a uniquely American regulatory path.
The Trump administration’s consideration of mandatory AI model vetting marks a genuine policy inflection point. Whether it becomes formal regulation or remains a floating proposal depends on internal White House dynamics, industry pushback, and Trump’s own appetite for the regulatory battle. For now, the AI industry should expect the hands-off era to end—the only question is how aggressively the new oversight will operate.
This article was written with AI assistance and editorially reviewed.
Source: Tom's Hardware


