The bill would expand identity theft statutes to criminalize deceit aided by artificial intelligence.
Lawmakers are targeting AI-assisted frauds with a bill that would expand identity theft statutes to cover deceit aided by synthetic media.
The bipartisan bill sponsored by Sens. Doug Steinhardt, a Republican, and Brian Stack, a Democrat, would allow prosecutors to charge individuals who use deepfakes—synthetic media created with machine learning techniques that appears real but is not—to commit identity theft.
“Identity theft isn’t the old ‘steal somebody’s birth date and Social Security or forge their name or copy a document.’ Thieves and criminals are a lot more sophisticated, and now it only takes a couple seconds of a sample of your voice from your social media account or a screenshot or a picture, and they can impersonate you,” said Steinhardt, the bill’s prime sponsor.
Synthetic video, image, and audio technology has advanced rapidly alongside a recent boom in generative artificial intelligence that has made once-complicated tools increasingly available and accessible to the public, said Siwei Lyu, a professor of computer science and engineering at the University at Buffalo who studies deepfakes.
The widespread availability of this technology has eliminated the need for the technical know-how once required to create synthetic media, said Lyu, who began studying deepfakes in 2018.
“Back then, you didn’t have tools that are easy to use for end users. You need someone to know programming, AI systems, and you needed a lot of computing power to be able to train the model and then create fake videos,” he said. “Right now, there are tools available online where you can deepfake somebody’s face.”
Steinhardt said he introduced the bill after a friend produced and showed him synthetic audio of a mutual friend generated by short audio clips pulled from social media pages.
“When I asked him how one would defend themselves or prove that was an electronic copy and not the real person … his answer to me was he didn’t have any idea, and I thought that was just a terrifying revelation,” Steinhardt said.
The bill would add false depiction through such manipulated media to the state’s identity theft statutes, with the severity of the charge tied to how many individuals an identity thief defrauds using synthetic media.
Those who defraud a single person could face up to 18 months in jail. That rises to five years if up to four people are defrauded, and ten years if five or more people are made victims.
The bill would allow victims to bring civil suits against a perpetrator even if that perpetrator is already facing criminal charges in the same case.
“The law needs to keep up to date,” Steinhardt said. “Criminals always seem to be a couple steps ahead of us.”
Though lawmakers have attempted to enact numerous safeguards on the use of deepfakes in recent years, those bills seldom reached floor votes, and the law remains largely unchanged.
Sen. Kristin Corrado and Assemblyman Herb Conaway in March introduced bills that would subject deepfake pornography to criminal and civil penalties under state revenge porn statutes, but they have yet to see a committee vote.
A separate bill backed by Assemblyman Lou Greenwald (D-Camden), the body’s majority leader, would criminalize the creation of deepfakes that are later used to aid the commission of certain crimes, such as extortion and harassment. That bill is expected to get a hearing before the Assembly’s judiciary committee on Thursday, along with another bill that would provide $10 million to create a deepfake technology unit within the state Department of Law and Public Safety.
In the previous legislative session, Assemblywoman Pam Lampitt sponsored a bill that would have required all deepfakes to carry a disclosure about their synthetic nature.
Another introduced last session and sponsored by Sen. Andrew Zwicker, then an assemblyman, would have barred deepfakes of political candidates within 60 days of an election, but like the others, it never reached a vote on the floor of either chamber.
Despite the proliferation of AI-created images, video, and audio in recent years, research about human detection of manipulated media remains mixed.
Though deepfakes are not yet sophisticated enough to pass muster upon close inspection—synthetic audio of human speech often produces strange tone and diction—they rarely come under such scrutiny when encountered on fast-moving social media feeds, Lyu said.
“The best defense against deepfakes is actually increased user awareness. We need to tell the users that fake media exists. I don’t have the exact number, but I know a lot of the cases happen because we simply are not aware,” Lyu said.