Backers urge Ohio lawmakers to pass AI restrictions

The measure focuses on prohibiting deepfake child pornography but also requires watermarks and punishes AI identity fraud.
This article was originally published by Ohio Capital Journal.
Ohio senators heard testimony Wednesday from supporters of a proposal establishing guardrails around media produced with artificial intelligence. The proposal would prohibit the use of AI to create deepfake porn, particularly involving minors. But with provisions requiring watermarks and punishing identity fraud, the bill’s impact could extend far beyond the creation of pornography.
Senate Bill 163
The bill’s sponsors, state Sens. Louis Blessing, R-Colerain Twp., and Terry Johnson, R-McDermott, argue the restrictions will “prevent potentially harmful uses” of an emerging technology while protecting Ohioans’ “safety and privacy.”
The bill goes after AI-generated child porn by expanding the definition of obscenity to include an “artificially generated depiction.” Blessing explained that “current laws against child sexual abuse material require an actual real photo of a child to be able to prosecute someone.”
“With AI not being a real photo,” he added, “this leads to issues of prosecuting someone generating these photos. Senate bill 163 will give attorneys the ability to prosecute these people.”
The sponsors argue AI can also be used to engage in fraud for financial, political and reputational purposes. So, the proposal extends identity fraud statutes to include a “replica” of an individual’s voice or likeness. It prohibits the use of a replica persona to defraud, damage a person’s reputation, or depict a person in a state of nudity or engaged in a sexual act.
Beyond its prohibitions, the bill aims to get ahead of deceptive uses by requiring any media created with artificial intelligence to include a watermark identifying it as such. Anyone who removes a watermark can be sued for damages and faces the presumption that they caused the alleged harm.
Proponents’ Testimony
Ohio Attorney General Dave Yost praised the measure’s “three-pronged approach.” He argued the watermark requirement “would provide a minimum level of transparency and notice” when an individual encounters AI-generated content.
Speaking about the bill’s identity fraud provisions, Yost brought up a case from his time as state auditor. A scammer successfully mimicked a school district’s email system and then sent a fake funds transfer request to the accounts payable department posing as the district’s financial controller. Best practice, Yost said, would be to call the sender for confirmation.
“But now, in the era of deepfakes with audio,” Yost explained, “you can send that fake email, call up (accounts payable) using the controller’s voice and say, ‘Hey, I just sent you an email asking you to do a wire transfer. This is really important. We need to move it. I wanted to follow up with phone calls so you didn’t have any questions.’”
As for the restrictions on child sexual abuse material, Yost urged lawmakers to ensure “these powerful tools are not used for evil,” and added that “these are the kinds of things that keep me up at night.”
Sen. Kent Smith, D-Euclid, pressed Yost on how useful state legislation can be when it comes to addressing a “borderless” crime.
Yost acknowledged he’d prefer to see federal laws and even international treaties governing the use of AI-generated images. But “possession or use within Ohio can still be proscribed by this body and it ought to be.” He added that one way to push Congress to act is for states to pass an array of legislation.
Lou Tobin, speaking on behalf of the Ohio Prosecuting Attorneys Association, noted many states have passed bills to prohibit AI-generated child sexual abuse material, or CSAM.
“As of last month,” he said, “thirty-eight states, including every state surrounding Ohio, have enacted laws that criminalize the creation, possession and distribution of artificially generated CSAM.”
But while many states have taken action, it’s not clear those laws will hold up in court.
“I think a federal district court has found one of these statutes to be in violation of the Ashcroft decision,” Tobin told lawmakers. “The Ashcroft decision was a U.S. Supreme Court decision from the early 2000s that said you could not criminalize artificially generated images of child pornography because there wasn’t a real victim.”
In February, a federal judge in Wisconsin threw out one charge related to possession of “virtual child pornography,” but allowed three others to go forward. Prosecutors in that case have appealed the decision to dismiss the charge.
Tobin explained his office and the AG’s worked with state lawmakers to narrowly tailor S.B. 163 to avoid problems with the First Amendment. Regardless of how the Wisconsin case or others play out, Tobin argued, “We think that’s a fight worth having.”