A Republican-led House committee has triggered a political firestorm by proposing a sweeping 10-year ban on state-level AI regulation. The provision, buried in a newly submitted budget reconciliation bill, would prevent states from passing or enforcing any laws governing a wide array of AI and automated decision-making systems, effectively freezing all local oversight of artificial intelligence until 2035.
The bill, introduced by Rep. Brett Guthrie (R-KY), defines “automated decision systems” in such broad terms that it would cover everything from AI chatbots and search algorithms to risk analysis tools in healthcare and criminal justice. That includes systems that issue a score, classification, or recommendation that influences or replaces human judgment. In other words, this isn’t just about AI models—it’s about the infrastructure of the internet itself.
According to Travis Hall of the Center for Democracy & Technology, the scope of the legislation would “permeate digital services,” blocking oversight of technologies that now shape everything from how we navigate to how courts decide sentences. More than 500 AI-related bills introduced by U.S. states during the 2025 legislative session could be rendered meaningless overnight. Some, like California’s performer likeness law and Colorado’s forthcoming AI anti-discrimination rules, would likely be nullified if this federal preemption becomes law.
The move has drawn sharp criticism from Democrats and AI accountability advocates. Rep. Jan Schakowsky (D-IL) accused the Republican AI regulation ban of giving companies free rein to violate privacy and mislead consumers. Sen. Ed Markey (D-MA) warned the ban would usher in a “Dark Age” for vulnerable communities, children, and the environment.
A Decade of Deregulation and a Gift to Big Tech?
The bill's route through the budget reconciliation process adds urgency to the debate. Because reconciliation requires only a simple majority in the Senate, it could allow Republicans to bypass the filibuster and other traditional hurdles. That prospect especially concerns critics who say AI companies, including OpenAI and Google, have quietly lobbied for this kind of federal shield to avoid what they call a "patchwork" of state laws.
Advocacy groups argue that blocking states from protecting their residents is a massive mistake. Brad Carson, president of Americans for Responsible Innovation (ARI), drew parallels to the federal government’s delayed response to social media. “Lawmakers stalled on social media safeguards for a decade, and we are still dealing with the fallout. Now apply those same harms to a technology moving as fast as AI,” he said. “This is a giveaway to Big Tech that will come back to bite us.”
Already, states like Utah have passed rules requiring companies to disclose when users are interacting with AI. California nearly passed a landmark AI safety law that would have imposed liability on major tech players for harmful AI deployments. But now, those efforts—and many others aimed at preventing algorithmic bias, deepfake abuse, and deceptive political ads—may be stalled for years.
The proposed Republican AI regulation ban has ignited one of the most heated debates over tech oversight in recent memory. And with little federal regulation currently in place, critics warn that stifling state action now could leave consumers, voters, and marginalized communities exposed to unchecked AI-driven harm.
Whether the bill survives the Senate remains uncertain. The Byrd Rule restricts reconciliation bills to budget-related matters, and this sweeping preemption of state authority could be challenged on that basis. But for now, the message from critics is clear: in the absence of federal safeguards, silencing the states may put democracy, privacy, and fairness at risk.