Threat of AI law preemption has states on edge again

The moratorium on state-level AI regulations, which failed to make it into the federal reconciliation bill this summer, has been floated for a return as an executive order or in must-pass defense legislation.
Months after a proposed federal preemption of states’ artificial intelligence regulations died in Congress, the issue appears to have reared its head again, to the chagrin of state officials.
A leaked draft of a White House executive order proposed withholding funds from states for various programs, including the Broadband Equity, Access and Deployment program, if they have AI regulations deemed too onerous or that “threaten to undermine that innovative culture.”
President Donald Trump has not yet formally released or signed the executive order amid reports it may have been put on pause. Meanwhile, other reports have suggested that House Republicans might try to preempt states’ AI laws in the National Defense Authorization Act. Even the threat of federal preemption of states’ AI laws has policymakers worried they will have to deal with this issue once again.
“These attempts undermine the democratic process and disregard the extensive bipartisan work already underway in state legislatures,” Illinois State Rep. Marcus Evans Jr. and Montana Sen. Barry Usher, who are also the president and president-elect of the National Conference of State Legislatures, respectively, said in a joint statement. “The best path forward is partnership, not preemption.”
A chorus of state lawmakers and other groups has joined the call to oppose any moratorium on state AI regulation. A bipartisan group of nearly 300 state lawmakers from across the country sent a joint letter to Congress saying they hear regularly from their constituents about the potential harms of AI and the need to protect consumers, procurement and intellectual property. Removing states’ ability to regulate those potential harms and others would be dangerous, they said.
“A blanket prohibition on state and local AI and automated decision-system regulation would abruptly cut off active democratic debate in statehouses and impose a sweeping pause on policymaking at the very moment when communities are seeking responsive solutions,” the lawmakers wrote in the letter. A coalition of faith groups raised similar concerns in a joint letter of their own, as has the National Association of State Chief Information Officers.
In a letter to Congressional leadership, NASCIO Executive Director Doug Robinson said preempting AI regulation in a bid to protect children online would have the opposite effect. He said such preemption would “in effect strip states of the ability to address real AI risks in their communities and provide needed protection for children.”
A moratorium on state AI laws has been in the offing for some time, in various forms. Republicans initially tried to insert one into the so-called “One Big, Beautiful Bill” that contained many of their legislative priorities. The provision passed the House but was stripped in the Senate, though lawmakers signaled it could return in some form, arguing that some state laws are too onerous, could stifle innovation and would result in a patchwork of regulations.
Meanwhile, Federal Communications Commission Chair Brendan Carr argued the agency has a role to play in blocking what he called “heavy handed” regulation of the technology. And Trump’s AI Action Plan has suggested the Office of Management and Budget could “consider a state’s AI regulatory climate” and limit funding to states that it deems have gone too far.
Tying AI laws to BEAD funding has raised the hackles of Democrats on the House Energy and Commerce Committee. In a letter to Arielle Roth, administrator of the National Telecommunications and Information Administration, which runs the program, they railed against a potential threat “to impound tens of billions of dollars that Congress authorized and appropriated in full to achieve specific policy outcomes, including universal connectivity, affordability, scalable infrastructure, and broadband adoption.”
In the meantime, numerous states have acted to regulate the technology, either with sweeping legislation or on an issue-by-issue basis. Colorado has taken the former approach but has since delayed implementing its law pending further amendments, while California Gov. Gavin Newsom signed legislation to that end in September. Targets of regulation have included AI’s use in chatbots, healthcare, elections and financial services, and NCSL estimated that state legislators have filed around 1,000 bills on various AI-related topics.
Not letting states regulate AI while Congress fails to address the technology would be “kind of disastrous,” said Bruce Schneier, a cybersecurity specialist at Harvard.
“Congress is dysfunctional, and I'm not sure an authoritarian government is the best system to produce sensible regulation,” Schneier, who is also a co-author of the book Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, said in an email. “That leaves states, which are closest to the people and are in the best position to regulate the harms of AI. And, even better, as states experiment with different regulations we can see what works and what doesn't.”
State lawmakers said they are better placed than federal policymakers to confront the challenges of AI and should be allowed to do their work unimpeded.
“States serve as laboratories of democracy, directly accountable to their residents, and must retain the flexibility to confront new digital challenges as they arise,” they wrote in their letter. “State experimentation and varied approaches to AI governance help build a stronger national foundation for sound policymaking. And as AI evolves rapidly, state and local governments may be better positioned than Congress or federal agencies to respond in real time. Freezing state action now would stifle needed innovation in policy design at a moment when it is most needed.”