The US Approach to AI Regulation: Deregulation and Innovation

In 2025, the United States continues to take a very different approach to AI regulation than many other parts of the world. Rather than passing a single, comprehensive law, the U.S. relies on a mix of frameworks, voluntary guidelines, state-level legislation, and targeted federal actions. America's AI Action Plan, released in July 2025, reflects this philosophy, emphasizing economic competitiveness, national security, and technological leadership over prescriptive mandates. Its three guiding pillars, accelerating innovation, building infrastructure, and leading internationally, set the direction for how the U.S. intends to remain at the forefront of artificial intelligence.

The first pillar, accelerating innovation, is all about making it easier and faster to build new AI systems. The idea is to cut through red tape and let companies and researchers move quickly. That means creating “regulatory sandboxes,” where organizations can test new AI tools in areas like healthcare or finance without getting bogged down by strict rules, and speeding up approvals for new data centers, which are the backbone of AI computing. The plan also puts a spotlight on open-source AI projects so that more people and smaller players can contribute. In short, the goal is to give startups and researchers the freedom and resources to innovate without too many restrictions slowing them down.

The second pillar, building infrastructure, is about making sure the U.S. has the foundations in place to keep AI progress going over the long term. One of the most important pieces here is the National AI Research Resource pilot, a public toolkit for AI development that provides access to powerful computers, large datasets, and advanced software that normally only big tech companies can afford. By opening this up to universities, startups, and non-profits, the program levels the playing field and makes sure innovation isn’t concentrated in just a handful of corporations. Alongside this, the plan invests in training programs to grow the next generation of AI talent and includes safeguards like cybersecurity protections and biosecurity measures to prevent advanced technologies from being misused.

The third pillar, leading internationally, reflects the U.S. strategy to shape the rules of the game globally. This involves using export controls to limit rival countries’ access to advanced AI chips and systems, while also working with allies to promote shared values and principles for AI use. Unlike the European Union, which is pushing ahead with binding laws like the AI Act, the U.S. prefers to promote voluntary standards and partnerships, aiming to influence the direction of AI worldwide without locking itself into rigid regulations.

The America’s AI Action Plan embodies the view that innovation should lead regulation, not the reverse. Its lighter-touch model promises rapid progress, but it also carries risks. Reliance on voluntary standards may leave gaps in privacy and fairness protections, uneven state laws could create fragmented oversight, and the prioritization of national security might come at the expense of civil liberties.

America's AI Action Plan: https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf
