The 99-1 vote that saved state AI laws
In a rare show of unity, Republicans and Democrats in the U.S. Senate stood together last week to block a proposed federal moratorium on state-level AI regulation—an issue that has stirred debate from Silicon Valley to state capitals across the country.
In this week’s newsletter, we explore how a moratorium on state AI laws went from likely to dead in the span of a few days, the arguments for and against limiting state-level AI regulation, and what the moratorium’s defeat tells us about the future of tech policy.
// A last-minute reversal
The Republican-led Senate voted 99–1 to strike a provision from President Trump’s “Big Beautiful Budget Bill” that would have imposed a 10-year ban on state-level AI regulation, threatening more than 130 existing state laws and stalling dozens more in development.
Support for the moratorium eroded rapidly. Senator Marsha Blackburn (R-TN) led the opposition; though at one point she negotiated a last-minute compromise, she ultimately reversed her stance over concerns about child safety, deepfakes, and the risk of leaving consumers unprotected in an AI-powered world.
If passed, the moratorium would have banned states from regulating artificial intelligence for a decade. Initially proposed by Republicans in May, the moratorium was later tied to federal funding: Billions of dollars in subsidies for broadband internet would be available only to states that backed off AI regulation.
The vast majority of the action surrounding AI policy in the U.S. is taking place at the state level:
- In 2016, only one state-level AI-related law was passed (Vermont). By 2023, 49 laws had been passed. In the past year alone, the number of laws more than doubled to 131.
- In California, the moratorium would have threatened 20 existing AI laws and 30 more proposals related to AI that are before the California Legislature.
State laws generally regulate the application of AI rather than the development of the foundational models themselves. For example, some states have passed laws expanding their civil rights laws in hiring and housing to the application of AI. Others have applied consumer protection and tort laws to deepfakes and AI chatbots that cause harm, especially to young people.
For now, states retain the right to enact AI-related regulations.
// The arguments for a moratorium
Executives at Microsoft, Google, and OpenAI testified before Congress in May to advocate for fewer, more streamlined regulations. The case for the moratorium centered on two arguments:
- A patchwork of state-level AI laws and introduced bills would make compliance challenging for tech companies.
- A byzantine regulatory environment would put the U.S. AI industry at a comparative disadvantage in the global AI race against China. The pace of U.S. AI development would slow and likely move abroad.
// The arguments against a moratorium
The case against the moratorium centered on two arguments that received enthusiastic and widespread bipartisan support:
- Preventing states from passing AI regulations would weaken consumer protections and leave everyday Americans exposed to the most dangerous aspects of this rapidly evolving technology.
- Preventing states from passing laws—whether they’re related to AI or not—was a violation of the U.S. Constitution’s 10th Amendment and a “usurpation of federalism” and states’ rights.
Senator Maria Cantwell (D-WA) said, “We can't just run over good state consumer protection laws. States can fight robocalls, deepfakes and provide safe autonomous vehicle laws.”
Blackburn agreed. “Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens,” she said.
Opposition to the moratorium had been building for weeks.
- A coalition of 260 lawmakers from all 50 states—half Republicans and half Democrats—signed a letter in opposition. In May, 40 attorneys general wrote a letter opposing the moratorium.
- In June, over 140 organizations, including Project Liberty and many members of the Alliance, signed on to another letter opposing the moratorium.
// The vacuum of federal policy
The lack of comprehensive federal AI legislation fueled resistance to blocking states from enacting their own laws. With Congress stalled, states have stepped in and begun passing measures independently.
Federal legislation has been hard to come by. The pace of AI development, the lack of political consensus, and concerns about overregulation have contributed to the absence of comprehensive AI legislation at the federal level. There is, however, other federal activity related to AI, including proposed bills, executive orders, and agency guidelines on the technology. Notably, earlier this year President Trump signed the Take It Down Act, bipartisan legislation that enacted stricter penalties on revenge pornography and deepfakes created by AI.
Carl DeMaio, a Republican member of the California State Assembly, expressed a sentiment shared on both sides of the aisle. “My opposition to the moratorium is based more on an absence of federal action. When Congress doesn’t act, then a state needs to look for ways to act.”
Globally, other regions are moving in different directions. The European Union’s AI Act establishes guardrails for high-risk AI use cases across member states. Meanwhile, China continues to roll out centralized regulations, with a strong focus on content controls and algorithmic transparency. The U.S. approach remains fragmented by comparison, heightening the stakes for federal leadership.
// The false dichotomy
The moratorium debate revealed a false choice: either accept an assortment of varied state AI laws or impose a decade-long freeze on regulation. In reality, more balanced alternatives exist, like federal laws that promote innovation while safeguarding the public from AI’s most harmful effects. Europe is also searching for this middle ground, as discussed in a recent newsletter.
The future need not be defined by over-regulated “Responsible Tech” or a return to laissez-faire “Big Tech.” Project Liberty envisions a different path—one where AI can play a key role in advancing a more human-centered internet. In this vision of a “People’s Internet,” individuals have agency over their data, and AI serves as a tool for empowerment, with protective agents that enhance both privacy and control.
We’ve developed several blueprints that offer a way forward.
// What's next
A moratorium on state-level AI laws is dead for now, but Republicans have said they plan to try again as part of an effort to pass a more comprehensive federal law regulating certain aspects of AI. Federal legislation may have more support: a recent poll found that most Americans support a national AI standard, and another poll revealed that the public is worried the government won't go far enough in regulating AI.
With public will and bipartisan support, it’s time to craft better policy and start again.