The Trump administration is preparing AI oversight measures it explicitly rejected during the campaign and the first months in office, Fortune reported on May 6. The shift was triggered by Anthropic's disclosure of "Mythos" — a capability evaluation showing that frontier models can autonomously identify and exploit cybersecurity vulnerabilities at human-expert level. The administration is now reportedly drafting requirements that tie large-model deployment to capability evaluations administered by a federal AI safety body (CAISI), pre-deployment notice for models above a compute threshold, and reporting obligations for cyber-relevant capabilities. None of this is finalized.
Three things to keep clear, because the press will conflate them.
First: nothing has been signed. The reporting describes draft language inside the executive branch, not enacted policy. Biden's 2023 AI executive order (which the Trump administration revoked early in the term) had similar elements that never made it into binding regulation. Drafts get watered down between the working group and the President's desk.
Second: what triggered the reversal is the cyber capability, not AI in general. The administration's earlier position (that voluntary commitments and existing law are sufficient) held for use cases that look like consumer products. Mythos shifted the framing to dual-use national security capability. That is a category where federal regulation has decades of precedent in export controls (ITAR, the EAR), and where the political coalition for oversight is broader and more durable than the abstract "AI safety" coalition was.
Third: the operational obligations under any plausible final rule are narrow at first. Compute thresholds in the public discourse have been around 10²⁶ FLOPs for training runs — well above where most enterprise customers live. If the rule lands at that threshold, the affected universe is the frontier labs, not the typical builder. That changes if Congress imports a lower threshold via legislation, which is possible but slow.
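For scale, here is a back-of-envelope sketch using the common 6 × N × D approximation for dense-transformer training compute (total FLOPs ≈ 6 × parameters × training tokens). The model sizes and token counts are illustrative assumptions, not figures from the reporting.

```python
# Back-of-envelope training-compute estimates using the common 6*N*D
# approximation for dense transformers (FLOPs ~ 6 x params x tokens).
# All model sizes and token counts below are illustrative assumptions.

THRESHOLD_FLOPS = 1e26  # the figure circulating in public discourse

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs via the 6*N*D rule of thumb."""
    return 6 * params * tokens

runs = [
    ("enterprise fine-tune: 7B params, 50B tokens", training_flops(7e9, 50e9)),
    ("large open-weights run: 70B params, 2T tokens", training_flops(70e9, 2e12)),
    ("frontier-scale run: 1T params, 20T tokens", training_flops(1e12, 20e12)),
]

for label, flops in runs:
    side = "ABOVE" if flops >= THRESHOLD_FLOPS else "below"
    print(f"{label}: {flops:.1e} FLOPs ({side} 1e26)")
```

Even the 70B-parameter run lands around 10²⁴ FLOPs, two orders of magnitude under the line, which is why a 10²⁶ threshold catches the frontier labs and almost no one else.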
The view that this is overreach has a real basis. Capability evaluations administered by a federal body create a single point of regulatory failure: if the eval is wrong, it is wrong for everyone. Industry's preferred alternative — independent third-party labs with federal accreditation — is a more resilient design. The worry is that a quickly drafted rule, motivated by one capability disclosure, locks in centralized review structures that are hard to change later.
- "Mythos" appears to be Anthropic's internal name for the capability evaluation, not a model name — coverage has been ambiguous on this point
- CAISI = the Center for AI Standards and Innovation, the successor to NIST's AI Safety Institute
- State-level enforcement continues independent of federal action — California's algorithmic pricing prohibition took effect January 1, 2026, and Colorado, Texas, and Illinois all have AI laws active this year
If you are operating models below the frontier threshold, the federal rule will not change your obligations directly, but state AI laws already do. If you are at or near the frontier, prepare for capability disclosure, evaluation participation, and pre-deployment notice as eventual requirements. Either way, the operational shift is from "voluntary commitments" to "documented compliance": start the documentation now.
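As a concrete starting point, here is a minimal sketch of the kind of deployment record that "documented compliance" implies. The fields are assumptions extrapolated from the reported draft elements (capability evaluations, compute thresholds, pre-deployment notice), not a mandated schema.

```python
# Minimal sketch of a per-model deployment record, assuming the draft
# elements reported (capability evals, compute accounting, pre-deployment
# notice) survive into a final rule. Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DeploymentRecord:
    model_id: str
    training_flops_estimate: float       # document it even if below threshold
    capability_evals_run: list[str]      # e.g. internal cyber-capability evals
    eval_results_archived: bool          # keep raw results, not just summaries
    predeployment_notice_filed: bool     # only relevant above the compute threshold
    state_law_review_done: bool          # CA/CO/TX/IL obligations apply regardless
    reviewed_on: date = field(default_factory=date.today)

record = DeploymentRecord(
    model_id="internal-assistant-v3",    # hypothetical model
    training_flops_estimate=2.1e21,
    capability_evals_run=["internal-cyber-eval-q2"],
    eval_results_archived=True,
    predeployment_notice_filed=False,    # below threshold, no federal notice
    state_law_review_done=True,
)
print(record)
```

The specific fields matter less than the habit: a dated, per-model record of what was evaluated and what was filed is the artifact that turns "we took voluntary commitments seriously" into something auditable.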