Our Take
The administration that mocked AI safety concerns now quietly adopts the same pre-release review process it spent months dismantling.
Why it matters
When accelerationists encounter actual capability risks, the regulation they dismantled becomes a regulatory necessity. State-level AI bills now face weaker federal opposition.
Do this week
AI teams: Document your safety testing procedures this week so you can streamline inevitable federal review requirements.
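One lightweight way to start is keeping machine-readable records of each safety evaluation run. The sketch below is a minimal illustration, not any official schema: no federal reporting format exists yet, and every field name here (`model`, `evaluation`, `result`, `recorded_at`) is a hypothetical placeholder to adapt once actual review requirements are published.

```python
import json
import datetime

def record_safety_eval(model_name, eval_name, result, notes=""):
    """Return a structured record of one safety evaluation run.

    All field names are hypothetical -- there is no federal schema yet.
    Adapt to whatever the eventual review process actually requires.
    """
    return {
        "model": model_name,
        "evaluation": eval_name,
        "result": result,  # e.g. "pass", "fail", "flagged"
        "notes": notes,
        "recorded_at": datetime.date.today().isoformat(),
    }

# Collect records for a test suite in one machine-readable log.
log = [
    record_safety_eval(
        "internal-llm-v3",  # placeholder model name
        "cyber-exploit-generation",
        "flagged",
        "Partial exploit code produced during red-team run",
    ),
    record_safety_eval("internal-llm-v3", "bio-risk-screening", "pass"),
]
print(json.dumps(log, indent=2))
```

Even this much gives you a dated audit trail you can hand to reviewers, rather than reconstructing test history after a review request arrives.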
White House creates AI model review process
The Trump administration is establishing an AI working group to examine pre-release government review of powerful AI models, according to officials speaking to the New York Times. The review process mirrors Britain's approach, where government bodies verify AI models meet safety standards before wide release.
This marks a complete reversal from February 2025, when Vice President JD Vance warned that "the AI future is not going to be won by hand-wringing about safety." The administration had revoked Biden's AI safety executive order on Trump's first day, then issued new guidance ending safety testing requirements three days later.
Google, Microsoft and xAI confirmed Tuesday they will provide early model access to the Commerce Department's Center for AI Standards and Innovation. The body was renamed from the US AI Safety Institute last June, when Commerce Secretary Howard Lutnick said "innovators will no longer be limited by these standards."
Anthropic's Mythos forces policy U-turn
The catalyst is Anthropic's latest model, Mythos, which, according to the company's own reports to government officials, demonstrates concerning capabilities in developing cybersecurity exploits. The White House now opposes expanding Mythos access from 50 to 120 companies, citing national security.
The National Security Agency is using Mythos to find vulnerabilities in Microsoft products, while anticipating that foreign nations will soon deploy similar capabilities against US infrastructure. This is the "doomer moment" that safety advocates predicted and accelerationists dismissed as fearmongering.
The policy shift creates internal contradictions. The administration simultaneously designates Anthropic as a "supply chain risk" for refusing Pentagon contract amendments while working to expand government access to its technology. One set of officials plans to phase out Anthropic models within six months; another expands agency access.
Expect expanded safety requirements
Practitioners now face a regulatory environment resembling Biden-era requirements: submit models for government review before wide release. The Commerce Department will handle reviews, though critics worry about potential censorship of models deemed "woke" or pressure for unrelated administration favors.
Trump's effort to block state-level AI regulation now appears less viable given federal policy reversals. Declining accelerationist influence also signals potential expanded export controls on AI chips to China, as former AI czar David Sacks (who opposed such controls) recently left for an advisory role.
The administration's "let's see what happens" approach lasted exactly one year before confronting models capable enough to alarm national security agencies. Safety testing requirements return not through principled policy but through necessity.