Trump Administration Releases Sweeping AI Action Plan

By: Stuart P. Meyer, Tyler G. Newby, Adine Mitrani, Zach Harned, Joanne Dynak

What You Need To Know

  • The Trump administration has released a sweeping national AI strategy, “Winning the AI Race: America’s AI Action Plan,” outlining over 90 federal policy suggestions and accompanied by three executive orders.
  • The plan calls for the review and repeal of federal regulations and federal procurement barriers that hinder AI innovation and adoption.
  • It also indicates that the federal government may withhold federal funding from states that impose restrictive AI laws and regulations, aiming to deter states from creating a patchwork of potentially conflicting AI rules.
  • The AI Action Plan requires federal agencies to procure only “truthful” and “ideologically neutral” large language models, with new procurement guidance expected from the Office of Management and Budget within 90 days.
  • The plan accelerates permitting for data centers, chip fabrication plants, and energy infrastructure, and it supports domestic semiconductor manufacturing tied to national security.
  • U.S. AI exports to allies will be promoted, while restrictions on transfers to “countries of concern” will tighten. Agencies will expand federal oversight of AI misuse and security threats.

On July 23 and 24, 2025, the Trump administration released its “Winning the AI Race: America’s AI Action Plan,” a far-reaching strategy aimed at advancing U.S. leadership in artificial intelligence. Accompanied by three executive orders, the plan outlines more than 90 proposed federal actions organized into three main pillars: accelerating innovation, building domestic AI infrastructure, and leading globally on AI diplomacy and security. Together, these initiatives represent a major shift in federal policy, prioritizing deregulation and private-sector leadership. For companies in the technology and life sciences sectors, the plan creates significant market opportunities to advance automation and build related infrastructure but also introduces new compliance challenges. In particular, businesses seeking federal contracts or exporting AI-related products will need to ensure their operations meet the plan’s new standards and adapt to evolving policies.

Below, we highlight key actions under each pillar:

Pillar 1: Accelerating AI Innovation

Remove Red Tape and Onerous Regulation

The plan instructs federal agencies to review and revise any existing regulations that could be seen as barriers to AI development. It also encourages agencies to withhold funding from states that adopt what the administration considers burdensome AI laws. A number of cities and states have already enacted AI legislation aimed at high-risk AI systems, including the Colorado AI Act, the Utah Artificial Intelligence Policy Act, NYC Local Law 144, and the Texas Responsible Artificial Intelligence Governance Act. The plan signals a potential move toward federal preemption over state-level AI governance. We recently saw this play out in the lobbying for a 10-year moratorium on state AI legislation, which Congress ultimately rejected.

Additionally, the administration seeks to roll back guardrails and AI enforcement activity the Federal Trade Commission (FTC) pursued under the previous administration. The plan proposes reviewing all investigations the FTC initiated under the Biden administration to “ensure that they do not advance theories of liability that unduly burden AI innovation.” Similarly, it proposes reviewing, modifying, or setting aside any FTC final orders, injunctions, or consent decrees that “unduly burden AI innovation.” Notably, the plan does not define what constitutes an “undue burden” on innovation, but it raises questions about whether the FTC will continue to use “algorithmic disgorgement” as a remedy in its enforcement actions.

In recent years, the FTC has required in consent orders the destruction of AI models and other algorithms allegedly developed from personal information that businesses collected through deceptive practices or used unfairly in AI applications. For example, a recent FTC consent order required a company to delete any “data, models, or algorithms” derived in whole or in part from an AI facial recognition and analysis system that the FTC alleged misidentified shoppers as having shoplifted in the past. Going forward, businesses may argue that the remedy of algorithmic disgorgement unduly hampers innovation, and that less burdensome remedies are available.

Ensure that Frontier AI Protects Free Speech and American Values

A central feature of the plan is a new federal procurement policy requiring agencies to use only AI systems, including large language models (LLMs), that are “truthful” and “ideologically neutral.” This recommended policy action was put into effect via the July 23, 2025, executive order “Preventing Woke AI in the Federal Government.” This EO prohibits the federal government from procuring LLMs that were not developed in accordance with the following two principles:

  1. “Truth-seeking. LLMs shall be truthful in responding to user prompts seeking factual information or analysis. LLMs shall prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory.”
  2. “Ideological Neutrality. LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI. Developers shall not intentionally encode partisan or ideological judgments into an LLM’s outputs unless those judgments are prompted by or otherwise readily accessible to the end user.”

However, the EO provides some leeway, noting that the Office of Management and Budget (OMB) shall provide guidance within 90 days on how to implement this standard, which shall “account for technical limitations in complying with this order… [and] avoid over-prescription and afford latitude for vendors to comply with the Unbiased AI Principles and take different approaches to innovation.” In parallel, the plan directs the National Institute of Standards and Technology (NIST) to revise its AI Risk Management Framework to remove references to misinformation, DEI, climate change, and other social policy considerations.

Encourage Open-Source and Open-Weight AI

The plan encourages greater access to open-source and open-weight AI models as a path to greater AI adoption by academic, government, and commercial entities that cannot always rely on closed model vendors. Open-source and open-weight models often require large-scale computing power provided under expensive long-term hyperscaler contracts, which can be inaccessible to start-up companies or academics. Accordingly, the plan recommends improving the financial market for compute, expanding access to private sector computing through the National AI Research Resource pilot with guidance from the National Science Foundation (NSF), and convening stakeholders to drive open-source adoption among small- and medium-sized businesses. Companies developing AI models may wish to evaluate whether the current administration’s stance affects their decisions about whether to keep source code, model weights, or training data proprietary or to make them publicly available.

Invest in AI-Enabled Science, Interpretability, Control, and Robustness Breakthroughs

The AI Action Plan recommends developing infrastructure through public-private partnerships to advance scientific development. It also incentivizes researchers to generate and publicly release more high-quality scientific datasets to support the training and use of AI models across industries. This would be accompanied by the promulgation of minimum data quality standards for the use of biological, materials science, chemical, physical, and other scientific data modalities in AI model training. Lastly, to enable safer deployment of AI systems and LLMs in high-stakes defense and national security applications, the plan proposes Defense Advanced Research Projects Agency-led research into interpretability and control systems alongside multi-agency and academic partnerships to test AI transparency and security.

Build an AI Evaluations Ecosystem

The plan emphasizes that businesses, particularly in heavily regulated industries, should proactively evaluate their AI systems’ performance and reliability. It encourages regulators to publish evaluation guidelines through NIST for assessing AI compliance with existing laws and specific use cases. More broadly, the plan supports advancing AI measurement and evaluation science through coordinated efforts by NIST, the NSF, the Department of Energy, and other federal science agencies. As these evaluation standards mature and become widely adopted, regular testing requirements may become the norm.

Accelerate AI Adoption in Government

In addition to supporting private sector expansion in AI, the plan explicitly calls for greater integration of AI tools across the federal government. Specifically, the plan mandates that all federal agencies ensure that employees whose work could benefit from access to frontier language models have access to, and appropriate training for, such tools. The plan consolidates interagency AI adoption efforts under the Chief Artificial Intelligence Officer Council, with OMB and the General Services Administration managing an AI procurement toolbox that will allow federal agencies to easily choose among multiple models and to customize and compare them.

Pillar 2: Building American AI Infrastructure

Streamlined Permitting for Data Centers, Chip Manufacturing, and Energy Infrastructure while Prioritizing Security

To accommodate the increasing demands of AI systems, the plan proposes significant investment in domestic infrastructure, including by making federal lands available for data center construction. Many of these recommended policy actions were put into effect via the July 23, 2025, executive order “Accelerating Federal Permitting of Data Center Infrastructure.” Policy actions focus on expediting federal permitting processes for data centers, semiconductor fabrication plants, energy transmission lines, and other critical facilities. These changes are designed to streamline approval processes under laws such as the National Environmental Policy Act, the Clean Water Act, and the Fixing America’s Surface Transportation Act. The plan also aims to ensure the domestic AI computing stack is built on American products and to maintain security guardrails that prevent adversaries from compromising the infrastructure.

Restore American Semiconductor Manufacturing

A major part of the infrastructure push involves revitalizing the domestic semiconductor industry. The plan aims to increase U.S. chip production by streamlining regulations on domestic semiconductor manufacturing. By removing certain policy requirements for CHIPS Act-funded semiconductor manufacturing and offering semiconductor grant and research programs, the administration says it will generate American jobs and protect supply chains from foreign disruption.

Bolster Critical Infrastructure Cybersecurity

The plan highlights the need for strengthened cybersecurity response and remediation guidance across the AI supply chain, with a particular focus on sectors such as defense, healthcare, and energy. By taking advantage of existing cyber vulnerability sharing mechanisms, federal agencies may collaborate with the private sector on the sharing of known AI vulnerabilities. A newly established AI Information Sharing and Analysis Center (AI-ISAC) would further promote the sharing of AI-security threat information across U.S. critical infrastructure sectors.

Pillar 3: Lead in International AI Diplomacy and Security

Export American AI to Allies and Partners

The plan outlines a strategy for strengthening the United States’ leadership in international AI diplomacy. It promotes the export of U.S.-developed AI tools and systems to allied nations. The Department of State (DOS) and the Department of Commerce (DOC) will coordinate with the private sector to develop comprehensive export packages that include software, hardware, and supporting standards. This will include the launch of a program within the DOC aimed at gathering proposals from industry consortia for full-stack AI export packages. Many aspects of the recommended policy actions were put into effect via the July 23, 2025, executive order “Promoting the Export of the American AI Technology Stack.”

Strengthen AI Compute Export Control Enforcement

To limit foreign adversaries’ access to AI, the administration aims to leverage new and existing location-verification features on advanced AI compute to ensure that the chips are not located in countries of concern. The DOC will lead collaborations with intelligence community officials to monitor emerging technological developments in AI compute and to achieve full coverage of countries and regions to which chips may be diverted. These measures are part of the administration’s efforts to manage the export of AI compute and related technologies, and to encourage international alignment on such export controls.

Invest in Biosecurity

AI tools enable great advances in biology and disease treatment, but they can also help malicious actors synthesize harmful pathogens, making careful guardrails necessary. The plan will require that all federally funded research involving nucleic acids include robust nucleic acid sequence screening and customer verification procedures. The administration further plans to convene government and industry actors to develop a mechanism to facilitate data sharing between nucleic acid synthesis providers to screen for potentially fraudulent or malicious customers. Furthermore, in addition to establishing AI evaluation guidelines more broadly, the administration plans to build, maintain, and update biosecurity-related AI evaluations through collaboration between NIST’s Center for AI Standards and Innovation, national security agencies, and relevant research institutions.

Takeaways for Technology and Life Sciences Companies

The AI Action Plan makes clear that deregulation, private-sector partnerships, and “winning the AI race” will guide the administration’s AI policy, signaling a significant shift toward market-driven innovation over government oversight. The plan’s approach prioritizes accelerating AI deployment across all sectors of the economy.

Companies should concentrate efforts on identifying the specific policy suggestions and compliance measures that will impact their operations. While details regarding these measures are forthcoming and may evolve as the administration refines the plan, companies may begin formulating their approach in light of the plan’s objectives.

For example:

  • Businesses involved in government contracting can prepare for changes in procurement standards, particularly those related to demonstrating ideological neutrality in sophisticated AI models.
  • Companies operating in multiple states may assess whether their compliance strategies could be affected by conflicts between state laws and the federal funding provisions outlined in the plan.
  • Firms involved in AI workforce training or educational partnerships may benefit from new funding opportunities and expanded federal engagement.
  • Infrastructure developers may find new opportunities through accelerated permitting and public-private collaboration. However, they should remain alert to possible legal and environmental risks as the regulatory landscape evolves.
  • Organizations engaged in AI exports or working with international partners can expect greater scrutiny of outbound technology transfers, along with increased expectations for traceability and safety.

Company business teams should collaborate with counsel to determine how to capitalize on the AI Action Plan’s developments and initiatives while staying informed and agile in the continuously evolving AI regulatory landscape.