President Donald Trump issued an executive order (EO) on Thursday, December 11, 2025, intended to ease the path for AI startups and solidify the United States’ dominance in AI by replacing the current patchwork of state laws with a national AI policy.
The EO, “Ensuring a National Policy Framework for Artificial Intelligence,” directs the secretary of commerce to review state AI laws and identify those the administration deems onerous, with federal funding potentially at stake for states that enforce them.
The EO further provides that the secretary of commerce, in evaluating whether state AI laws are onerous, will consider whether a state AI law (1) requires AI models to alter their truthful outputs, or (2) compels AI developers or deployers to disclose information in a manner that would violate the First Amendment or any other provision of the U.S. Constitution.
The EO cites a Colorado law banning “algorithmic discrimination,” arguing that the law may cause AI to produce false results in order to avoid a “differential treatment or impact” on protected groups. White House AI advisor David Sacks has said the administration will not go after safety-critical areas such as child-safety laws, “but we’re going to push back on the most onerous examples of state regulations.” Indeed, § 8(b) of the EO states that it does “not propose preempting otherwise lawful State AI laws relating to ... child safety protections.” The EO also carves out state laws governing AI compute and data center infrastructure, as well as laws concerning state government procurement and use of AI.
The move is part of the administration’s far-reaching strategy, announced in July 2025 and supplemented by the Genesis Mission, to advance U.S. leadership in global AI development. The Trump administration has focused on protecting the United States’ position in the AI race, particularly against China; according to the EO, this is a central motivation for the administration’s efforts. Congress, meanwhile, has made repeated unsuccessful attempts to legislate AI nationally in recent years.
During the last administration, several major federal agencies (including the Federal Trade Commission, the Equal Employment Opportunity Commission, the Consumer Financial Protection Bureau, and the Department of Justice) issued a joint statement affirming that existing law gives them the power to regulate new and emerging technologies, including AI. The Trump administration, by contrast, is moving to severely curtail the independence of those agencies and has shown little interest in using them to regulate AI.
Impact on States and Potential Challenges
Several states have passed their own AI laws, with many more in the offing, all of which may come under scrutiny and expose states to a loss of federal funding if enforced. The EO itself is likely to face legal challenges from states whose laws it would preempt. Last month, a bipartisan group of 36 state attorneys general sent a letter to Congress opposing any federal moratorium on states’ ability to regulate AI, and a number of officials, including Florida Governor Ron DeSantis and Minnesota Senator Amy Klobuchar, have already spoken out against the EO.
Further, the EO does not specify how it will affect general-purpose state laws regulating areas such as privacy and data protection, cybersecurity, consumer protection, civil rights, employment, competition/antitrust, intellectual property, product liability and safety, or e-commerce, all of which can be used to regulate AI technologies even when not specifically targeted at AI.
The EO requires the secretary of commerce to complete a review of state AI laws within 90 days, at which point the administration may determine more targeted next steps.
Many expect that unwinding the emerging patchwork of state AI laws will create a more innovation-friendly environment, but the transition is unlikely to be seamless, and the EO may face significant legal challenges. Companies should plan for a transitional period of legal uncertainty: continued state-level activity in exempted areas, potential federal challenges to certain laws, and open questions about how broadly “AI regulation” will be construed when general-purpose laws (such as privacy, consumer protection, civil rights, cybersecurity, and product safety laws) are applied to AI.
In the meantime, businesses should continue monitoring both federal and state developments and maintain a compliance posture that can adapt quickly. Even where legal requirements shift, state laws often drive stronger internal AI governance (e.g., risk assessments, accountability, and documentation) that can be commercially important: large enterprise customers increasingly expect robust AI governance commitments in their commercial agreements, including in now-common AI addenda, and those same governance controls are frequently evaluated by investors and potential acquirers during diligence. As a practical matter, building compliance with key state-law requirements can also serve as a strong starting point for broader international alignment, including preparation for obligations under the EU AI Act.