AI’s elections impact likely to grow next year, report warns
Experts warned that 2025 was just the “tip of the iceberg” in how the technology could be used, especially in deepfakes and misinformation campaigns.
In 2025’s notable off-year elections, several campaigns embraced artificial intelligence in noteworthy and public ways, which could signal an onslaught of the technology in next year’s midterms.
In Virginia, Republican lieutenant governor candidate John Reid debated an AI-generated version of his Democratic opponent and eventual victor in that race, state Sen. Ghazala Hashmi, after she declined all his debate requests. And in New York, former Gov. Andrew Cuomo briefly posted and then deleted a deepfake ad that targeted his opponent and the eventual winner in the New York City mayoral race, Zohran Mamdani, with racist stereotypes.
Meanwhile, Utah Lt. Gov. Deirdre Henderson shared a post on X, formerly Twitter, warning voters not to be fooled by an AI-generated image of fake state election results posted before polls closed.
Given the number of high-profile races next year, including at the gubernatorial level, experts are worried that the 2026 midterm elections will be filled with AI-generated deepfakes and misinformation.
“We’ve only seen the tip of the iceberg when it comes to AI’s impact on elections,” Isabel Linzer, a policy analyst on the elections and democracy team at the Center for Democracy and Technology, said in a statement. “The tech is getting better and politicians — and bad actors — are getting more comfortable using it. Anyone who thought the danger had passed after last year’s U.S. election avoided any major AI incidents needs to wake up.”
State lawmakers have tried to fight back against the negative effects of AI in campaigns and elections. The National Conference of State Legislatures found that 26 states have enacted laws regulating how political deepfakes are used, either banning them or requiring disclosure. Chelsea Canada, a program principal for financial services, technology and communications at NCSL, said regulating deepfakes will continue to be a “huge trend” at the state level as states think about how to regulate AI in a targeted way and protect the integrity of their elections.
“I'll note that that's top of mind of legislators, really trying to think of the harms of what they're hearing from constituents right now,” Canada said during a panel discussion at the National Association of State Chief Information Officers’ annual conference in Denver earlier this year. “A lot of the conversations are thinking long-term; what could be those long-term effects? We do see legislators taking action right now in this targeted phase.”
Recent CDT research warned of the risks of generative AI’s use in elections, especially in amplifying disinformation, facilitating foreign interference and automating voter suppression campaigns. And while the group said the technology could be used in positive ways — whether to help with data analysis, draft communications or even prepare for debates and other public events — the risks are tremendous.
While the potential drawbacks of using generative AI maliciously during a campaign are enormous, CDT said recent history suggests voters are less likely to punish candidates for doing so than campaigns once believed.
“What we found is that the primary inhibitor on the harmful use of generative AI in 2024 was self-imposed and primarily due to norms: campaign staff and consultants believe that the voters would penalize candidates that made deep fakes at the polls,” said Tim Harper, a report co-author and CDT’s senior policy analyst for elections and democracy. “What we're seeing is that those norms, those self-imposed beliefs that voters would penalize candidates are crumbling, and we've seen that in a few ways this year already.”
Harper pointed to various AI-generated videos posted by the White House and President Donald Trump on their social media channels, including ones that denigrate protesters and political opponents, which have drawn condemnation from some quarters but not others. Regulation alone cannot solve the problem, Harper said; campaigns must also appeal to their own better angels.
“I will emphasize that it's not purely a question of law, but also a question of building societal resilience,” he said. “It's a challenging thing to rely on people being good. Campaigns have incentives to do this sort of behavior. And as the disincentives decline, those incentives become more prominent.”
But in the absence of good behavior, those norms will continue to “erode” next year, Harper said, given that many laws around AI-generated political content are not focused on stopping it but instead around requiring disclosure. Social media and AI companies could also step in, but Harper said the “political incentives for the companies to act in this space are not strong right now.”
“The norms that we saw in 2024 definitely are beginning to erode, and that's happening in a bipartisan way right now,” Harper said. “We expect this to continue to escalate into 2026, given that the guardrails that exist for political campaigns… These things are existing primarily because those laws are focused on transparency. That's not to say that the laws regulating the use of AI are the only solution here. They are a piece of the puzzle.”