Lawmaker warns of ‘patchwork of indecision’ from state AI laws

A 10-year ban on states regulating the technology was stricken from the reconciliation package this year, but speakers at a House hearing said the federal government must lead the way.
A year after California Gov. Gavin Newsom vetoed expansive legislation regulating the developers of large artificial intelligence systems, a new version of the bill is back on his desk, awaiting either his signature or veto.
Once again sponsored by state Sen. Scott Wiener, the bill requires large AI companies to disclose their safety and security efforts, report safety incidents and protect whistleblowers. It also creates “CalCompute,” a public cloud computing network to provide startups and researchers with AI infrastructure.
Newsom may still veto this legislation, as he did last year’s bill from Wiener, but the senator said he is hopeful it will become law, given its bipartisan support and the effort to incorporate recommendations from a working group of AI experts. The bill comes on the heels of Colorado lawmakers delaying the implementation of that state’s AI law, and amid a continued lack of national regulation of the technology despite some interest from lawmakers.
The lack of national regulations and the growing patchwork of state AI laws prompted Congress to try to insert a 10-year moratorium on state AI regulation into this year’s Republican-led “Big, Beautiful Bill.” And while that moratorium was removed from the final law, speakers at a House subcommittee hearing last week said some sort of national preemption effort will be the only way to support innovation and prevent companies from suffering crippling compliance costs.
“Congress needs to act promptly to formulate a clear national policy framework for artificial intelligence to ensure our nation is prepared to win the computational revolution,” Adam Thierer, resident senior fellow for technology and innovation at the R Street Institute, said in written testimony to the House Judiciary Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet. “If we get this wrong, the consequences could be profound in terms of geopolitical competitiveness, national security, economic growth, small business innovation, and human flourishing.”
In the absence of federal action, states have stepped up to regulate AI for their residents. The National Conference of State Legislatures, which tracks legislation, found that every state and territory introduced AI-related bills this year. Of those, 38 states collectively adopted or enacted around 100 laws, covering topics including ownership, risk management, deepfakes and whistleblower protections.
But California has been among the highest-profile states to try to regulate AI, given its status as a technology hub home to many AI companies and the desire of some of its lawmakers to regulate the technology more heavily than their peers elsewhere. Rep. Darrell Issa, a California Republican and the chair of the subcommittee, argued that too much regulation would mean the U.S. falling behind China and Europe in adopting AI, and said California is “part of the problem.”
“All 50 states have implemented some form of AI regulation,” Issa said. “And in fact, there are in the neighborhood of a thousand pieces of legislation spread over 50 states that will create, if allowed to continue, a patchwork of indecision by the AI industry given conflicting regulations… Let there be no doubt though. Either we win in innovation, and we win in AI, or we lose our edge on the international stage.”
These competing state laws get in the way of that progress, Issa argued, so the federal government should act. For some, that argument has a constitutional basis. Kevin Frazier, an AI innovation and law fellow at the University of Texas School of Law, said the federal government alone is “responsible for matters that implicate the economic and political stability of the country,” while states should tend only to local issues within their own borders.
“The founders centralized those matters that make or break the nation’s economic and political stability, reserved to the states the authority to govern local conduct, and rejected any arrangement that let one state rule another by virtue of the size of its economy or its voting power,” Frazier said in written testimony. “Applied here, that design yields a clear rule of decision: the development of frontier AI models is a national undertaking; the uses of those systems within a state are a proper subject of deployment rules tailored to local concerns.”
Others rejected that view, saying that denying states the ability to regulate new technologies would be a mistake and would run counter to the country’s federalist history. Barring states from regulating AI would also leave residents open to tremendous harm, said Neil Richards, the Koch Distinguished Professor in Law at Washington University School of Law and co-director of the university’s Cordell Institute for Policy in Medicine and Law.
“Contrary to the general myth that regulations ‘stifle innovation,’ to deprive states of their ability to regulate AI would be harmful both to innovation and to the public,” Richards said in written testimony. “In fact, law creates and enables innovation by stabilizing the marketplace and ensuring the consumer trust that is the essential precondition before they become willing to adopt emerging technologies.”