Michigan’s use of AI to process SNAP applications draws concerns about past automation failures

Given the state’s track record with an algorithmic fraud detection system, the Michigan Department of Health and Human Services’ use of AI in SNAP determinations gives ample reason for caution and concern, an expert says.
This story was originally published by Michigan Advance.
The Michigan Department of Health and Human Services has begun using artificial intelligence to help boost the number of Supplemental Nutrition Assistance Program cases it can review, a department official told members of the Senate Appropriations Subcommittee on DHHS last week.
While discussing efforts to comply with new federal requirements, David Knezek, the department’s chief operating officer, said the agency has deployed an AI case reading tool to help employees go through cases line by line to ensure the department is making accurate determinations on payments before that money goes out the door.
Under H.R. 1, also known as the One Big Beautiful Bill Act, states are required to pay for a portion of SNAP benefits based on their payment error rate, or how accurately states make eligibility and benefit determinations among households participating in the program. In analyzing the changes, the nonpartisan Brookings Institution notes that wrongly rejecting an applicant is not considered an error under the measure, and that the rate is not a measure of fraud.
Knezek told members of the committee that the department is only able to review a relatively small number of cases manually.
“Using this AI case reading tool, we’re now not only going to be able to scan every single case in a perfect environment before that money goes out the door, we’re also going to be able to target it to the cases that have the highest likelihood of resulting in a payment error rate,” Knezek said.
He noted that the largest number of errors comes from single- and dual-person households, while the largest dollar errors come from larger households.
“Using that AI case reading tool, we’re able to target the ones that are most likely for fraud,” Knezek said.
Knezek said the department is also deploying an optical character recognition tool to scan documents and input information such as pay stubs submitted to the department, to avoid human error up front, while allowing for human verification on the back end.
On Monday the Michigan Advance asked the department when the AI case reading tool and character recognition tool were deployed, what programs are being used, whether applicants were given any disclosure that AI was being used to review their cases, and what safeguards were in place.
After two days, Erin Stover, a public information officer for the department, said that the agency had used optical character recognition tools for several years, and had more recently begun using AI-assisted case reading to support case review.
Eligibility staff remains responsible for all case decisions while using the tools to flag discrepancies, Stover said in an emailed statement.
“AI-assisted case reading capability is part of our broader efforts to strengthen accuracy and prepare for federal policy changes under H.R. 1, which increase the importance of accurate eligibility determinations,” Stover said.
The department uses tools approved by the Department of Technology, Management and Budget within its secure system, and does not use public-facing generative AI to process cases, Stover said.
“Safeguards are in place to protect applicant data, which is only accessible to authorized personnel and is handled in accordance with state and federal privacy requirements,” Stover said.
Applicants are also informed that their information may be verified through data matching and review processes as part of determining eligibility, Stover said, with all applications subject to review to determine eligibility, consistent with federal requirements.
Stover later told Michigan Advance the state’s case reader uses Google Vertex AI, which the company describes as a “unified, open platform for building, deploying, and scaling generative AI and machine learning models and AI applications.”
New AI Tools Aim To Reduce Errors, but Raise Familiar Concerns
The agency’s decision to incorporate artificial intelligence into its case determinations calls to mind the state’s 2013 effort to automate review of its unemployment cases through the Michigan Integrated Data Automated System, or MiDAS, which led to multiple lawsuits and settlements providing repayments and damages to many individuals wrongfully accused of fraud.
According to reporting from Undark Magazine, more than 40,000 individuals were charged with misrepresentation within the first two years of the system’s rollout, with the agency demanding payments of roughly five times what those individuals had received in benefits.
The Michigan Auditor General later reviewed 22,000 cases marked as fraudulent, determining that 93% did not actually involve fraud.
Given the state’s track record with an algorithmic fraud detection system, the Department of Health and Human Services’ use of AI in SNAP determinations gives ample reason for caution and concern, Michele Gilman, the Venable professor of law at the University of Baltimore Law School, told Michigan Advance.
One key question on Gilman’s mind: How well has the case reader tool been tested and vetted?
“We see too many times with these AI systems that they’re rolled out without adequate testing, and then it turns recipients into guinea pigs in an AI experiment, and that is not acceptable,” Gilman said.
One of the challenges of working with fraud detection systems is that actual rates of fraud are low, whether you’re looking at public benefits, banks or credit cards, Gilman noted. As a result, developers struggle to build tools that detect fraud reliably, because they lack robust data, leading to high rates of false positives and false negatives, she said.
According to the Benefits Technology Advocacy Hub, the MiDAS system would flag any data discrepancy – no matter how minor – as fraud, requiring follow-up from the applicant within 10 days. The system also averaged an applicant’s entire income rather than looking at individual paychecks, creating discrepancies in system-determined income, which led to more fraud determinations.
Given the false positives and negatives that arise in these systems, it’s all the more important to have some layer of human review, Gilman said. However, those reviewers need to be knowledgeable about the limits of AI systems, to avoid deferring too much to the system’s determinations.
While there is a role technology can play in tandem with staff, Gilman said, the ultimate accountability has to lie with the agency.
“It can’t be ‘our vendor screwed up’ or ‘the AI went haywire’ like the actual accountability has to ultimately be with agency officials,” Gilman said.
She pointed to the AI risk management framework released by the National Institute of Standards and Technology, which emphasizes the broad integration of humans at all phases of the AI lifecycle.
Jennifer Lord represented individuals falsely accused of fraud in a class action lawsuit against the Michigan Unemployment Insurance Agency. While working on Bauserman v. Unemployment Insurance Agency, Lord said she also advocated for guardrails on the use of AI in government services, though those efforts have yet to bear fruit.
Under former President Joe Biden’s administration, Gilman said, there was a lot of attention to the ways that AI can go wrong. In 2023, Biden issued an executive order placing guardrails on AI development and tasking the U.S. Department of Agriculture and the Department of Health and Human Services with issuing guidelines on the use of AI in programs like SNAP and Medicaid. The guidelines discussed ways AI could affect civil rights and safety and acknowledged data privacy concerns.
Due Process Concerns Loom for Benefit Recipients
However, the Biden administration’s emphasis on fairness, equity and accountability has been thrown out the window, Gilman said, with the Trump administration placing its faith in AI companies and putting less emphasis on consumer rights.
“There’s a lot of faith in AI for cost savings and efficiency that is unwarranted,” Gilman said.
As a lawyer representing low-income people receiving public benefits, Gilman said she has few legal hooks to rely on outside of the due process rights guaranteed by the U.S. Constitution.
“As a Constitutional matter, you’re entitled to human review at some point,” Gilman said, explaining that the problem with the state’s unemployment system was that the only way to get human eyes on a case was through filing an appeal and appearing before an administrative law judge. However, the system’s determinations could not be explained, creating a “black box” problem and rendering that human review meaningless, she explained.
Lord noted that programs written to detect fraud typically overcorrect, raising further concerns about the role program developers play in public benefits determinations.
“We’ve got private companies who are now basically writing regulations, implementing the law, and their goal is ‘save us as much money as possible,’” Lord said.
If the state turns over a government function to a private entity designing and implementing a system without checks and balances, it will have another disaster like the MiDAS system on its hands, Lord said.
Additionally, the individuals who rely on public benefits are the ones who have the least access to legal assistance, Lord said.
“They are already in dire financial straits, otherwise they wouldn’t be applying for the benefits,” Lord said, noting that some individuals may not have a computer, or the ability to meet tight timelines for challenging administrative decisions.
Michigan Advance is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Michigan Advance maintains editorial independence. Contact Editor Jon King with questions: info@michiganadvance.com.