Report: States are taking a cautious approach to agentic AI

The new phase of AI has piqued tech leaders’ interest, but a recent report indicates that they are still trying to understand the technology first.
Just a couple of years after generative artificial intelligence became popular for public use, states are already dipping their toes into the next stage of the technology: agentic AI. But a new report suggests that tech leaders are taking a cautious approach to the budding technology.
Agentic AI refers to advanced large language models that “can actually start doing the work for us,” said Amy Glasscock, program director for innovation and emerging issues at the National Association of State Chief Information Officers.
Unlike generative AI, which creates content such as policy summaries or translations, agentic AI can facilitate “decision making, planning and executing steps without human intervention [and] doing a lot more things automatically,” she explained.
As government agencies constantly strive to do more work with fewer resources, agentic AI has piqued the interest of many state tech leaders as a force multiplier and a way to address ongoing workforce shortages, according to a report released last week by NASCIO. But state chief information officers recognize that the transition to leveraging more agentic AI tools will be gradual, particularly as the technology poses greater security and privacy risks than its predecessors.
“Like with any AI, we always recommend starting small with pilot projects [or] doing things internally before using it externally on citizen services … before it’s scaled and could cause widespread problems,” Glasscock said.
In Virginia, for example, former Gov. Glenn Youngkin signed an executive order in July to launch the nation’s first agentic AI pilot program aimed at improving agencies’ efficiency, and ultimately, residents’ experience with government services, the report stated. The same month, Delaware lawmakers launched an AI sandbox initiative to offer startups and industry leaders a controlled environment to develop tech solutions, like agentic AI, before they are deployed in real-world situations.
A gradual expansion into agentic AI capabilities also gives states the opportunity to more effectively address the tech’s security risks, as an AI agent’s task “might include accessing sensitive data, misusing trusted systems or escalating privileges, and before a human realizes it, the damage is done,” the report stated.
Glasscock pointed to Tennessee’s Department of Finance and Administration, which released a request for information late last year for a modernized enterprise resource planning system as the state’s current ERP is set to expire in 2035. The RFI includes inquiries about how generative AI could influence the technology, including how agentic capabilities could be leveraged for the ERP’s purpose and its potential impacts.
Tennessee officials, for instance, are seeking more information on how AI-enabled ERP platforms can operate in compliance with government security and privacy requirements, how such technology is designed to ensure client data security and confidentiality and how other security controls are being leveraged, like encryption or zero-trust principles.
States can implement security mitigation efforts, such as monitoring and logging records of the agentic AI’s actions to track any unexpected behaviors over time. The report also suggests establishing limits on how often or how long AI agents can be used with automatic throttling or system shutdowns, based on recommendations from OWASP, a nonprofit dedicated to software security.
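The monitoring-and-throttling controls described above can be sketched in code. The following is a minimal, hypothetical illustration (the class name, limits, and structure are assumptions, not anything from the NASCIO or OWASP documents): every agent action is logged for later review, and the agent is automatically shut down when it exceeds a rate limit within a sliding time window.

```python
import time
from collections import deque

class AgentActionLimiter:
    """Hypothetical sketch of the report's suggested controls: log every
    agent action and shut the agent down automatically when it exceeds
    a rate limit within a sliding time window."""

    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window_seconds = window_seconds
        self.timestamps = deque()  # times of recently allowed actions
        self.log = []              # audit trail of every attempted action
        self.shut_down = False

    def record(self, action: str) -> bool:
        """Return True if the action may proceed, False if blocked."""
        now = time.monotonic()
        self.log.append((now, action))  # always log, even blocked attempts
        if self.shut_down:
            return False
        # Drop timestamps that have aged out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_actions:
            self.shut_down = True  # automatic shutdown on a limit breach
            return False
        self.timestamps.append(now)
        return True

limiter = AgentActionLimiter(max_actions=3, window_seconds=60.0)
results = [limiter.record(f"step-{i}") for i in range(5)]
print(results)  # [True, True, True, False, False]
```

In this sketch the fourth action trips the limit and every later action is refused, while the audit log still records all five attempts for human review.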
Another way agencies can mitigate security threats is to give AI agents “minimum permissions” for the tools and tasks they are completing, which might include read-only access or specific APIs, according to the report. States should also require human oversight and approval to guide AI agent actions, particularly for “destructive or sensitive actions like deleting data, transferring funds or publishing content.”
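The least-privilege and human-oversight ideas above can also be sketched briefly. This is an illustrative toy, not an implementation from the report; the tool names, the `SENSITIVE` set, and the `ScopedAgent` class are all hypothetical. The agent gets an explicit allowlist of tools, and sensitive actions are held until a human approves them.

```python
from dataclasses import dataclass, field

# Hypothetical set of "destructive or sensitive" actions that, per the
# report's recommendation, should require human sign-off.
SENSITIVE = {"delete_data", "transfer_funds", "publish_content"}

@dataclass
class ScopedAgent:
    # Least-privilege allowlist: the agent can only call tools granted here.
    allowed_tools: set = field(default_factory=set)

    def invoke(self, tool: str, human_approved: bool = False) -> str:
        if tool not in self.allowed_tools:
            return f"denied: '{tool}' is outside this agent's permissions"
        if tool in SENSITIVE and not human_approved:
            return f"held: '{tool}' awaits human approval"
        return f"executed: {tool}"

agent = ScopedAgent(allowed_tools={"read_records", "delete_data"})
print(agent.invoke("read_records"))                    # runs: read-only access
print(agent.invoke("publish_content"))                 # denied: never granted
print(agent.invoke("delete_data"))                     # held: needs sign-off
print(agent.invoke("delete_data", human_approved=True))  # runs after approval
```

The key design point is that both checks are enforced outside the model: the permission gate and the approval gate run in ordinary code, so a misbehaving agent cannot talk its way past them.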
The report also identifies five phases that states may experience as they continue to explore and implement AI solutions. Those phases include characteristics of AI tools currently in use that indicate where states are in maturing systems from generative to agentic AI.
The five phases can help tech leaders “think about the possibilities, especially when it comes to citizen services, [and] how agentic AI might be super helpful,” Glasscock said. For example, tech leaders could draw on the phases to see how agentic AI might assist with benefits applications, such as reviewing and approving an application automatically to yield a “quicker turnaround time,” she said.
The first phase entails the use of AI to complete tasks, such as writing computer code and answering employee or citizen questions, while human users remain the “primary doers and decision makers,” according to the report. The second phase of agentic AI refers to the tech’s ability to remember context within a task, like a chatbot that can output content based on previous inquiries during a user’s session.
A third phase of agentic AI refers to the technology being able to take some form of action independently, which is “the subtle tipping point into agentic AI,” the report stated. For agencies deploying AI solutions, the third phase could look like an AI tool that prefills a form using collected data or automatically routes a document for approval.
More advanced uses of agentic AI are likely to fall in the fourth phase, which is described as the AI agent being able to manage a task “across time, steps and systems,” like processing a new employee’s onboarding procedures or managing an application from intake to a final decision, according to the report.
The fifth phase, which Glasscock said nobody has fully entered yet, advances the technology to where an AI agent initiates work without being prompted. This phase could include, for example, a tool that conducts outreach to citizens when an action is needed to complete an application or flagging policy changes before a compliance deadline.
While states are far from adopting and implementing agentic AI regularly, tech leaders are starting to consider where they are in their AI journeys now to lay the groundwork for its eventual rollout, Glasscock said.
Indeed, states “are moving in this direction,” particularly as agentic AI holds promise to offer governments the ability to give residents a “smoother experience [to get] their questions answered more quickly and their needs met more quickly,” she said.