As White House blocks Utah AI bill, other chatbot and deepfake regulations advance

The race to develop AI includes a race to regulate it that is dividing Republicans.
This story was originally published by the Utah News Dispatch.
Utah has faced a major setback in its push to further regulate artificial intelligence this year, especially after receiving a letter from the White House strongly opposing a legislative proposal that would impose requirements on AI developers. However, some bills are sticking, including two tackling non-consensual deepfakes and chatbots that simulate relationships and feelings.
A bill Herriman Republican Rep. Doug Fiefia drafted this session to require safety plans from chatbot developers started off strong with unanimous support from a House committee and the endorsement of actor Joseph Gordon-Levitt, an advocate for more AI regulation. But the bill has been circled on the House floor, where it could potentially remain for the rest of the session after the White House called it “unfixable,” according to Axios.
But the Companion Chatbot Safety Act is still advancing swiftly in the Legislature, having cleared all House hurdles and now heading to the Senate for consideration.
“When I say companions, these are chatbots specifically designed to replace relationships, mimic even sometimes romance or intimacy,” Fiefia told the House on Friday. “This isn’t something that’s rare. Early studies show that 72% of teens have used this once, and over half of them have used them or continue to use them regularly.”
Under Fiefia’s bill, minors must be reminded hourly that they aren’t speaking with a human. Those alerts would also have to remind minors to take a break, and that a companion chatbot may not be suitable for underage users.
It would also ban chatbots from producing content promoting suicide, self-harm, illegal activity or sexual behavior when interacting with minors. And, it would require chatbots to provide crisis resources for users expressing suicidal ideation.
A Thin Line on Development
AI regulation has been a priority on Utah Republican Gov. Spencer Cox’s agenda, representing a slight break from federal policy plans on the technology. Ahead of President Donald Trump’s executive order preempting states from governing AI, Cox had argued it was the states’ job to protect children from the technology’s negative impacts.
The line, Republican leaders say, is drawn at legislating technology development itself, since doing so could push companies to leave the country in search of laxer laws.
During a committee presentation of the bill that caught the attention of the White House, Fiefia said his proposal would not touch on development or micromanage algorithms.
“What we’re asking them to do is just tell us how you’re going to keep our kids safe. Tell us how you’re going to keep the public safe, and then if there’s an incident, you report it,” Fiefia said.
Still, the Trump administration maintains the bill runs counter to its AI agenda.
House Speaker Mike Schultz told reporters last week he understood the federal government’s concerns. The House will “for sure” move forward with the Companion Chatbot Safety Act, he said, and will continue studying the legislation the Trump administration opposed.
“When you start putting into code in the states, telling the federal government around what to do on national security and different things, I can understand some of the concerns the Trump administration had with that bill,” Schultz said.
However, some are still calling for change, especially parents who have lost their children to the harms of online technologies.
ParentsRISE! — a group advocating for online safety reforms in the country — sent a letter to the governor and legislative leaders pleading to move Fiefia’s bill forward, since, they wrote, “the risks posed by AI are not hypothetical.”
“We know exactly what it looks like when a powerful industry moves fast and dismisses concern because they are counting on no one being held responsible. We know where that road ends for families,” a group of parents from the organization wrote. “And when we look at what is happening with AI, and at who is trying to stop HB 286, we are watching the same deadly cycle begin again.”
The advocates described the bill as “the bare minimum” on accountability for AI platforms, and reiterated that the proposal would not tell companies how to develop their systems. They also criticized the White House official who opposed the legislation for not offering any specific objections, legal arguments or potential amendments to advance the bill.
“Unelected officials in DC who are unwilling to engage with this bill on its merits, unwilling to sit with the families paying the price for the status quo, have not earned the authority to kill it,” the advocates wrote.
Protections Against Deepfakes
Another policy advancing through the legislative process is the Voyeurism Prevention Act, a proposal sponsored by Kaysville Republican Rep. Ariel Defay.
“The bill requires large technology platforms to obtain consent if an intimate image is created using generative or deepfake tools,” Defay told the House Economic Development and Workforce Services Committee in mid-February.
That consent must be given by the individual depicted in the deepfakes before they are generated, and can be revoked at any time.
After receiving unanimous approval on the House floor, Defay’s bill is headed to the Senate. If the Legislature passes it, the bill would expand on SB271, a 2025 law that banned using a person’s name or image without their consent to endorse products or campaigns.
This year’s legislation, Defay said, was a product of collaboration between Utah’s Office for AI Policy, Regulation and Innovation, and industry players, and included discussions on liability protections to avoid frivolous lawsuits.
With deepfakes becoming more realistic, Defay’s proposal may also ease the now-constant online problem of distinguishing which images are real and which are digitally altered, by requiring platforms to disclose provenance data. That information, Defay said, would show the history of how an image or video has been changed.
“This is an important bill. It’s important for protecting children, for protecting each other, and helping us gain trust back in the things that we’re seeing online,” Defay told the House on Friday. “And also to protect from intimate images that can be created that can be severely harmful for families, for individuals, and can have lifelong effects.”
A bill by Senate Majority Leader Kirk Cullimore, R-Sandy, to clarify that defamation law applies to content created through AI or other technologies also passed the Senate unanimously on Wednesday and is up for consideration in the House.
“In the fast-moving pace that we have online that might be difficult to do and somebody could put up a deepfake and use somebody else’s name, image or likeness and it can cause serious damage, reputational harm,” Cullimore told the Senate. “But you might not be able to actually put a dollar figure to that, which undermines your defamation claim.”
Cullimore’s bill also recognizes an exclusive right in people’s own identity for non-commercial content and includes a process to demonstrate damages if the content isn’t taken down within 10 days of receiving a notice of defamatory content.
Utah News Dispatch is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Utah News Dispatch maintains editorial independence. Contact Editor McKenzie Romero for questions: info@utahnewsdispatch.com.




