In a last-minute reversal, the U.S. Senate voted 99-1 to kill a 10-year ban on state AI regulations after bipartisan opposition mobilized

July 8th, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here.

Image by Project Liberty

The 99-1 vote that saved state AI laws

 

In a rare show of unity, Republicans and Democrats in the U.S. Senate stood together last week to block a proposed federal moratorium on state-level AI regulation—an issue that has stirred debate from Silicon Valley to state capitals across the country.

 

In this week’s newsletter, we explore how a moratorium on state AI laws went from likely to dead in the span of a few days, the arguments for and against limiting state-level AI regulation, and what the moratorium’s defeat tells us about the future of tech policy.

 

// A last-minute reversal

The Republican-led Senate voted 99–1 to strike a provision from President Trump’s “Big Beautiful Budget Bill” that would have imposed a 10-year ban on state-level AI regulation, threatening more than 130 existing state laws and stalling dozens more in development.

 

Support for the moratorium eroded rapidly. Senator Marsha Blackburn (R-TN) led the opposition; though at one point she negotiated a last-minute compromise, she reversed her stance following concerns about child safety, deepfakes, and the risk of leaving consumers unprotected in an AI-powered world.

 

If passed, the moratorium would have banned states from regulating artificial intelligence for a decade. Initially proposed by Republicans in May, the moratorium was later tied to federal funding: billions of dollars in broadband internet subsidies would be available only to states that backed off AI regulation.

 

The vast majority of the action surrounding AI policy in the U.S. is taking place at the state level:

  • In 2016, only one state-level AI-related law was passed (Vermont). By 2023, 49 laws had been passed. In the past year alone, the number of laws more than doubled to 131.
  • In California, the moratorium would have threatened 20 existing AI laws and 30 more proposals related to AI that are before the California Legislature.

State laws generally regulate the application of AI rather than the development of the foundational models themselves. For example, some states have passed laws expanding their civil rights laws in hiring and housing to the application of AI. Others have applied consumer protection and tort laws to deepfakes and AI chatbots that cause harm, especially to young people.

 

For now, states retain the right to enact AI-related regulations.

 

// The arguments for a moratorium

Executives at Microsoft, Google, and OpenAI testified before Congress in May to advocate for fewer, more streamlined regulations. The case for the moratorium centered on two arguments:

  1. The large number of state-level AI laws and introduced bills would make it challenging for tech companies to comply.
  2. A byzantine regulatory environment would put the U.S. AI industry at a comparative disadvantage in the global AI race against China. The pace of U.S. AI development would slow and likely move abroad.

    // The arguments against a moratorium

    The case against the moratorium centered on two arguments that received enthusiastic and widespread bipartisan support:

    1. Preventing states from passing AI regulations would weaken consumer protections and leave everyday Americans exposed to the most dangerous aspects of this rapidly evolving technology.
    2. Preventing states from passing laws—whether they’re related to AI or not—was a violation of the U.S. Constitution’s 10th Amendment and a “usurpation of federalism” and states’ rights.

    Senator Maria Cantwell (D-WA) said, “We can't just run over good state consumer protection laws. States can fight robocalls, deepfakes and provide safe autonomous vehicle laws.”

     

    Blackburn agreed. “Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens,” she said.

     

Opposition to the moratorium had been building for weeks.

• A coalition of 260 lawmakers from all 50 states—half Republicans and half Democrats—signed a letter in opposition. In May, 40 attorneys general wrote a letter opposing the moratorium.
    • In June, over 140 organizations, including Project Liberty and many members of the Alliance, signed on to another letter opposing the moratorium.

    // The vacuum of federal policy

    The lack of comprehensive federal AI legislation fueled resistance to blocking states from enacting their own laws. With Congress stalled, states have stepped in and begun passing measures independently.

     

    Federal legislation has been hard to come by. The pace of AI development, the lack of political consensus, and concerns about overregulation have contributed to the absence of comprehensive AI legislation at the federal level. There is, however, other federal activity related to AI, including proposed bills, executive orders, and agency guidelines on the technology. Notably, earlier this year President Trump signed the Take It Down Act, bipartisan legislation that enacted stricter penalties on revenge pornography and deepfakes created by AI.

     

Carl DeMaio, a Republican member of the California State Assembly, expressed a sentiment shared on both sides of the aisle: “My opposition to the moratorium is based more on an absence of federal action. When Congress doesn’t act, then a state needs to look for ways to act.”

     

    Globally, other regions are moving in different directions. The European Union’s AI Act establishes guardrails for high-risk AI use cases across member states. Meanwhile, China continues to roll out centralized regulations, with a strong focus on content controls and algorithmic transparency. The U.S. approach remains fragmented by comparison, heightening the stakes for federal leadership.

     

    // The false dichotomy

The moratorium debate revealed a false choice: either accept a patchwork of inconsistent state AI laws or impose a decade-long freeze on regulation. In reality, more balanced alternatives exist, such as federal laws that promote innovation while safeguarding the public from AI’s most harmful effects. Europe is searching for this middle ground as well, as discussed in a recent newsletter.

     

    The future need not be defined by over-regulated “Responsible Tech” or a return to laissez-faire “Big Tech.” Project Liberty envisions a different path—one where AI can play a key role in advancing a more human-centered internet. In this vision of a “People’s Internet,” individuals have agency over their data, and AI serves as a tool for empowerment, with protective agents that enhance both privacy and control.

     

    We’ve developed several blueprints that offer a way forward:

• A detailed Policy Blueprint for a People’s Internet that helps people reclaim a voice, a choice, and a stake in a better internet.
    • A comprehensive Blueprint for a Fair Data Economy that enumerates a plan for an economy that embraces both innovation and regulation.

    // What's next

    A moratorium on state-level AI laws might be stalled for now, but Republicans have said they plan to try again in an effort to pass a more comprehensive federal law regulating certain aspects of AI. Federal legislation may have more support; a recent poll found that most Americans support a national AI standard, and another poll revealed that the public is worried the government won't go far enough in regulating AI.

     

    With public will and bipartisan support, it’s time to craft better policy and start again.

    Project Liberty news & updates

// Join Project Liberty's Alliance for the launch of “How Can Data Cooperatives Help Build a Fair Data Economy?” on July 9th at 10am EDT. This interactive session will explore how data cooperatives can offer a fair, democratic model for data governance, drawing lessons from legacy co-ops, emerging global use cases, and practical pathways to scale. Register here.

     

    // On July 1-2, Project Liberty Institute co-organized the Global Digital Collaboration Conference in Geneva, which brought together over 1,000 stakeholders to advance digital identity, credentials, and trusted infrastructure. See highlights here.

    📰 Other notable headlines

// 🕵 A group of young cybercriminals poses the ‘most imminent threat’ of cyberattacks right now, according to an article in WIRED. Researchers warn that the group’s flexible structure makes it difficult to defend against. (Paywall).

     

// 📱 As today’s platforms become all-powerful, an article in Noema Magazine explored how the metaphors we use to describe our digitally infused world exemplify a new, stealthier form of domination. (Free).

     

    // 🤳 Kids are making deepfakes of each other, and laws aren’t keeping up. A report by The Markup examined how schools and lawmakers are grappling with how to address a new form of peer-on-peer image-based sexual abuse that disproportionately targets girls. (Free).

     

    // 📄 An article in MIT Technology Review asked, What comes next for AI copyright lawsuits? Remarkably little has been settled by recent rulings in favor of Anthropic and Meta. (Paywall).

     

    // 🌐 In a fractured geopolitical landscape, tech sovereignty is no longer a choice but a strategic imperative. As governments around the world race to develop and deploy new technologies, an article in Project Syndicate argued that the primary challenge lies in addressing national-security risks without resorting to digital protectionism. (Paywall).


    // 🛡 Mobile apps and social media platforms now let companies gather much more fine-grained information about people at a lower cost. An article in Fast Company explored how your data is collected and what you can do about it. (Free).

    Partner news & opportunities

    // Future of Digital Finance Conference

    September 24-25 | Virtual 

    CIGI’s Future of Digital Finance conference will explore how India, China, and Africa are reshaping digital payments—from CBDCs to cross-border rails—and what it means for global finance and inclusion. Researchers can submit 300-word policy-brief abstracts for a chance to steer the discussion. Register today.

     

    // Stanford Trust & Safety Research Conference

    September 25-26 | In-person | Stanford University, CA

    The Cyber Policy Center’s Annual Trust and Safety Research Conference brings together scholars, industry professionals, civil society representatives, and policymakers for lightning talks, workshops, and networking sessions on the future of online governance. Early-bird registration is open until July 31. Register here.

    What did you think of today's newsletter?

    We'd love to hear your feedback and ideas. Reply to this email.

    // Project Liberty builds solutions that help people take back control of their lives in the digital age by reclaiming a voice, choice, and stake in a better internet.

     

    Thank you for reading.

    Facebook
    LinkedIn
    Instagram

    10 Hudson Yards, Fl 37,
    New York, New York, 10001
    Unsubscribe  Manage Preferences

    © 2025 Project Liberty LLC