California senator dings ‘ham-handed’ approach to AI preemption

Scott Wiener, a key architect of major tech legislation in the state, said efforts to stop others from doing the same are overly broad, especially as Congress has not legislated on the issue.
A California legislator who has been at the heart of that state’s artificial intelligence regulations slammed congressional Republicans for what he called a “ham-handed” approach to preempting state-level laws.
State Sen. Scott Wiener, a Democrat who represents San Francisco, said in a speech this week in Washington, D.C., that in the absence of Congress acting on AI or any other tech issue, states should be allowed to do so without the threat of preemption. He called efforts to revive the 10-year moratorium that was removed from the reconciliation package, or those taken under the executive order tying AI laws to broadband funding, “shameful.”
“It was not a good look to try to preempt deepfake revenge porn laws, which is what they were trying to do,” Wiener said on stage at the State of the Net conference. “I don't think that was the thing they were targeting, but the preemptions they were proposing were so broad it would have prevented all sorts of public safety measures. You can support AI strongly and still say there are bad things that AI can do that we should not allow to happen. It was so ham-handed.”
Wiener noted that Congress has also failed to pass a federal privacy law, net neutrality legislation to protect internet traffic, or regulations to protect minors on social media, which he said shows the states must step up instead.
“The idea that the federal government would say or the Congress would say, ‘We're not going to do anything about it, but you're not allowed to do anything about it either,’ that is absurd, and it's actually outrageous,” Wiener said. “It shows that this administration and the current leadership, Republicans in Congress, do not seem particularly interested in protecting the public. They just want to protect the companies that are helping them and supporting them, and I think that is outrageous.”
California has been one of the early movers on regulating AI. Lawmakers floated one bill that made it through the legislative process, only for Gov. Gavin Newsom to veto it. That legislation would have required those developing large AI models to have safeguards in place to prevent harm and would have appointed a state board to oversee them. In his veto message, Newsom said the regulatory framework “could give the public a false sense of security about controlling this fast-moving technology.”
Lawmakers returned to the drawing board with a state task force, and came back with new legislation that Newsom signed. That bill, which took effect at the start of this year, requires large AI companies to disclose their safety and security efforts, report safety incidents and protect whistleblowers. It also creates “CalCompute,” a public cloud computing network to provide startups and researchers with AI infrastructure.
Wiener said it is a “transparency bill” that requires company disclosure, and he rejected claims that it allows those companies and AI developers to “grade their own homework.” It is too soon to judge its success, he said, although he noted that AI companies Anthropic and OpenAI have issued compliance frameworks. The latter has reportedly had to deny claims it has violated the law.
“It's not grading our own work, because the public and experts and everyone else gets to grade it and say, ‘You have really good safety protocols, you have cruddy protocols, and you have no protocols,’ and the world will know,” Wiener said. “[I'm] not going to claim that everything that you could possibly want to do is in this bill, but we tried one approach, the governor vetoed it, and now we try another approach.”
Wiener said California’s new AI law would be a “good national standard,” but that “doesn’t mean it’ll be the only AI safety standard.” The work is never done, he said, and lawmakers cannot rest on their laurels given how quickly AI is evolving.
“I don't think we should ever say, ‘OK, we did this. We're done,’” Wiener said. “AI is not a frozen-in-amber kind of thing. It's constantly evolving. We're constantly learning new things. I think it's a good thing to do nationally, but I don't think that is the end or the exclusive approach that should be taken.”
Many questions lie ahead about the impact of AI on everyday life, including on work. Wiener said that while he does not plan to introduce legislation regulating the technology this year, other colleagues will as they seek to get a handle on what it will mean for the future.
“My concern is that we're seeing so much wealth being generated via AI that we don't want to have a tiny group of people receive unbelievable benefit from AI, and you have a bunch of other people who are eating cat food in the gutter,” Wiener said. “I want to make sure that the benefits society sees from AI are being spread, so that society as a whole is benefiting.”