How to seamlessly integrate AI into state HHS systems

COMMENTARY | The technology offers great rewards in this area of state government, but also presents great risks. Agencies must adopt it thoughtfully to avoid those pitfalls.
To date, state governments have been hesitant to go all in on artificial intelligence in the health and human services space, and with good reason.
These agencies house high volumes of sensitive data and are responsible for a wide range of initiatives critical to everyday life, including healthcare and social services. Therefore, if AI is applied improperly without sufficient oversight, training and governance, the results could be disastrous.
Agencies can unlock real benefits for constituents by integrating AI — but first, IT leaders must clearly understand the unique risks and rewards this technology brings to HHS systems.
The Unique Benefits of AI for State HHS Systems
The benefits that can be realized from AI in the HHS space are perhaps the greatest anywhere in state government.
Improved Outcomes: AI implementations in state HHS systems have improved outcomes in rehabilitation treatments and holistic care, two areas particularly important for the most vulnerable residents, such as children.
AI can play a transformative role in supporting child welfare by enhancing early detection, decision-making, and resource allocation. By analyzing vast amounts of data from social services, schools, healthcare systems, and law enforcement, AI can identify patterns that signal potential risks to a child's well-being, such as signs of neglect, abuse, or chronic instability, enabling caseworkers to intervene earlier and more effectively.
The technology can also assist in matching children with the most suitable foster care placements, while predictive models can help agencies prioritize cases and allocate limited resources more efficiently.
Efficiency: AI facilitates faster processing and quicker responses for constituents during transactions like applications, eligibility determinations, benefits distribution and appeals. In turn, this reduces the administrative burdens faced by state staff.
One such example is the promise of AI to perform predictive eligibility in integrated systems. AI can proactively identify applicants who are likely qualified for services, often before a formal application is completed, by pre-populating applications with data held by other state systems. This helps ensure constituents receive every benefit to which they are entitled, without needing to know these programs exist or enter their information multiple times.
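The cross-system pre-population described above can be sketched in a few lines. This is a minimal illustration only: the program names, record fields and income threshold below are hypothetical assumptions, not rules from any real benefits system.

```python
# Minimal sketch: pre-populating one program's application from records
# already held by another state system. All program names, fields and the
# income threshold are illustrative assumptions.
snap_records = {
    "C123": {"name": "A. Doe", "household_size": 3, "annual_income": 18000},
}

def prefill_application(constituent_id, income_limit=25000):
    """Return a pre-filled application dict, or None if no record exists."""
    rec = snap_records.get(constituent_id)
    if rec is None:
        return None  # no cross-system data; fall back to a blank form
    return {
        "name": rec["name"],
        "household_size": rec["household_size"],
        "likely_eligible": rec["annual_income"] <= income_limit,
    }

print(prefill_application("C123"))
```

In practice the matching step, not the form-filling, is the hard part: records must be linked across agencies with high confidence before any field is pre-populated.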
Access and Experience: AI can streamline access to programs for those who currently face barriers to enrolling in and using public benefits. The technology can interpret constituent feedback and sentiment to personalize support and recommendations, offer more convenient user authentication and access options, and generate rapid, accurate responses to important questions, such as those about potential program or eligibility changes.
Reduced Fraud, Waste, and Abuse: AI can help ensure the right people get the right benefits, alleviating the burden of the "pay and chase" model. Through AI-powered technologies such as advanced identity proofing, agencies can verify the person requesting benefits is who they claim to be, while simultaneously streamlining and accelerating eligibility verification.
This enables constituents to access benefits faster while reducing manual errors and ensuring only qualified individuals are considered. Additionally, machine learning algorithms can identify trends linked to known fraud cases, allowing agencies to proactively prevent improper payments.
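One common way to operationalize the trend-spotting described above is anomaly detection, which flags unusual claims for human review rather than auto-denying them. The sketch below uses scikit-learn's IsolationForest on synthetic data; the claim features and thresholds are illustrative assumptions, not from any real HHS system.

```python
# Minimal sketch: flagging anomalous benefit claims for caseworker review.
# Data is synthetic; feature choices and the contamination rate are
# illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Features per claim: [claim_amount, claims_per_month, account_age_days]
normal = rng.normal(loc=[500, 1, 700], scale=[100, 0.5, 200], size=(500, 3))
suspicious = np.array([[5000.0, 12.0, 3.0],
                       [4800.0, 15.0, 5.0]])  # outliers by construction
claims = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(claims)
flags = model.predict(claims)  # -1 = anomalous, 1 = normal

flagged = np.where(flags == -1)[0]
print(f"{len(flagged)} claims flagged for caseworker review")
```

Crucially, the model only prioritizes cases for review; the eligibility or fraud determination itself should remain with a human, for the oversight reasons discussed below.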
Potential Pitfalls of AI Implementation
While the positive impacts of implementing AI in state HHS systems could be enormous, and even life-changing for constituents, the risks associated with poor AI integration could be catastrophic.
Algorithmic Bias: Algorithmic bias is perhaps the biggest concern in the implementation of AI in state HHS systems. If these systems' datasets lack adequate representation from across the population, the effects could be detrimental, including the under-representation or under-serving of specific groups.
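A basic safeguard against this kind of disparity is to routinely compare outcomes across groups. The sketch below computes per-group approval rates and a disparate-impact ratio; the group labels, data and the four-fifths (0.8) review threshold are illustrative assumptions, not regulatory guidance.

```python
# Minimal sketch: checking decisions for group-level disparity.
# Group labels, data and the 0.8 "four-fifths" threshold are illustrative.
from collections import Counter

def selection_rates(records):
    """records: iterable of (group, approved) pairs -> per-group approval rate."""
    totals, approved = Counter(), Counter()
    for group, ok in records:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group rate divided by highest; values below ~0.8 warrant review."""
    return min(rates.values()) / max(rates.values())

# Synthetic decisions: urban applicants approved 80%, rural only 50%.
records = ([("urban", True)] * 80 + [("urban", False)] * 20
           + [("rural", True)] * 50 + [("rural", False)] * 50)
rates = selection_rates(records)
ratio = disparate_impact_ratio(rates)
print(rates, round(ratio, 2))  # ratio 0.62, below the 0.8 review threshold
```

A check like this catches disparities in outputs, but it does not fix unrepresentative training data; both need attention.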
Hallucinations: Hallucinations, instances in which AI returns fabricated or incorrect information, can compromise agencies' ability to make sound decisions regarding access to, eligibility for, or revocation of critical resources or services. If such hallucinations were to occur, the results could be devastating for the affected individuals.
Security and Privacy: AI-related cybersecurity and privacy issues, such as data leakage, excessive agency, and training data poisoning, could be particularly problematic in HHS systems due to the sensitive nature of the data they process.
Securing AI is more complex than traditional IT systems, as the technology introduces new dimensions of risk and complexity. For example, AI behavior is non-deterministic, meaning outputs can vary in unpredictable ways, and models can “learn” vulnerabilities if they're exposed to harmful inputs or compromised data.
Attack surfaces are also larger, particularly as many large language models are connected to external data sources or integrated with enterprise systems. On top of this, security teams must now account for new models, prompts, generated data, application programming interfaces (APIs) and feedback loops.
AI Governance: Clear governance frameworks need to be coupled with accountability and transparency in any AI implementation, but these concepts are even more important in HHS systems to ensure strong ethical and regulatory oversight. Without these guidelines, there is a risk of inconsistent implementation, misuse of AI tools, or a lack of recourse for individuals affected by AI-driven decisions.
Unlocking the Power of AI to Transform State HHS Systems
To successfully integrate AI, state HHS agencies must prioritize transparent and clear governance, consistent human oversight, and hands-on training to enable effective implementation.
Transparent Governance: Clear governance frameworks should be published to ensure AI tools are consistently and safely implemented. These guardrails can also explain how more complex AI systems make decisions, helping to increase transparency, accountability, and trust in agencies.
Human Oversight: As state HHS agencies integrate AI solutions, IT experts must remain in the loop to ensure accurate decision-making, correct errors and hallucinations, and create more empathetic experiences for constituents.
Proper Training: Regardless of whether agencies have access to AI now, state HHS employees should receive hands-on training on the technology to empower them to swiftly integrate it into their workflow once available.
HHS is perhaps the place where AI offers the greatest rewards but also the greatest risk for state governments. It is essential that agencies embrace AI, but do so in a thoughtful and measured way, with proper oversight, training, and governance, to ensure that the enormous benefits are realized, while avoiding the equally large pitfalls.
John Evans is chief technology officer for state and local government at World Wide Technology.