Generative AI use ticks up in state government

States have found more than 100 uses for the technology, according to a recent survey, and while there is excitement and momentum for its continued use, plenty of pitfalls lie ahead.
DENVER — Since generative artificial intelligence burst onto the scene a few years ago, state governments have found more than 100 uses for it, according to a recent survey.
The National Association of State Chief Information Officers found in its 2025 State CIO Survey that states are using the technology to draft and summarize documents, contracts and legislation; provide translation services; streamline employee onboarding; generate code; analyze cybersecurity events; and handle myriad other tasks. NASCIO said in its report that most states “are certainly giving GenAI a try with low-risk approaches.”
And those uses look set to grow in the coming years, especially as more states integrate generative AI tools and policies into their workflows. A joint survey by NASCIO and professional services company Accenture found that more than 90% of state CIOs are optimistic that the technology can enhance the citizen experience and reduce administrative backlogs for their staff, freeing them up to focus on other tasks.
NASCIO found in its annual survey that 82% of employees in state CIOs’ organizations are using generative AI, up from 53% a year ago. “This represents a big leap in the willingness of the CIO organization to allow the use of GenAI as well as an acknowledgement that this technology is highly accessible to anyone who wants to use it — sanctioned or otherwise,” the report says.
Experts warned that full adoption will take time, however.
“The optimism is there,” said Eyal Darmon, Accenture’s managing director for U.S. public service data and AI, and its agentic AI lead, during a panel discussion at NASCIO’s annual conference last week. “But the reality is, adoption is evolving.”
But there is evidence that state CIOs want to understand generative AI and govern its use, especially among employees. NASCIO’s CIO survey found that most states have implemented responsible use guidelines and have taken inventory of generative AI uses across agencies and applications. More than 80% have also created advisory committees or task forces.
The biggest challenge state leaders will face in the coming years around generative AI, though, is adoption, Darmon said, especially among employees who may be reluctant to embrace new technology. Enthusiasts for generative AI were likely early adopters, he said, and those who have been slower may be tougher nuts to crack. Change management, then, is key.
“Some people, you had them at hello, and they went home, and they created a bunch of agents for themselves,” Darmon said. “But those are the same people that would have done it, regardless of the technology. Then you've got other folks that need an extra nudge, and then some folks that you got to think about how you will figure out the capabilities to change the way that you work. It's a spectrum.”
One of the biggest obstacles for state governments right now is turning generative AI pilot projects into permanent, scaled-up programs. Many states have already proven that their ideas work but struggle to move them into full production. Massachusetts CIO Jason Snyder said during a panel discussion that the state created an AI center of excellence to work through the concerns and risks of making a pilot permanent. The state has since moved 12 generative AI pilots into production, Snyder said.
Other state leaders noted that having enough workers is critical to scaling generative AI projects. It is one thing to run a small pilot for a few months, but quite another to scale it, make it permanent and support it, especially with limited staff.
“If we take our best and brightest [developers] and put them all on AI pilots and then say, ‘OK, great, now scale it and then support it,’ we now no longer have capacity in those devs,” said Josiah Raiche, Vermont’s chief data and AI officer. “Building out a support model that preserves the capacity for continued innovation, I think, is really crucial.”
Despite the excitement around AI and its use in government, some observers warned there are still not enough discussions around how it can be used safely and securely. A lot of work lies ahead in making sure the technology does not use sensitive personal data, and in ensuring that so-called “shadow IT,” where employees use their own tools that may be infused with AI, does not create new security risks.
“You need to know what's happening on your networks. It's not even shadow IT. In some cases, it's just browsing and submitting data,” said Eric Trexler, senior vice president for U.S. public sector at cybersecurity company Palo Alto Networks, in an interview at NASCIO’s annual conference. “You’ve got to know what's going on, so you've got to be able to inspect that, and then you've got to be able to decide how to control it, what to do with it based on your policies, your programs, whatever it may be. Those are relatively simple concepts. They're hard to implement.”