AI use in financial services could add to bias risks, GAO warns

A new Government Accountability Office report suggests federal regulators offer updated guidance to weed out bias in financial institutions' artificial intelligence systems.
The use of artificial intelligence in financial institutions’ business operations can pose data privacy and bias risks that demand new risk management guidance from regulatory bodies, according to a new Government Accountability Office report released Monday.
In documenting the benefits AI software can offer financial institutions in their operations — such as improved customer service, investment decisions and detection of threats or fraud — GAO authors found that the same AI tools can “amplify” existing risks: harm to fair lending practices, threats to privacy, conflicts of interest and false information.
GAO also found that AI could contribute to bias in lending and other credit decisions.
“Bias in credit decisions is a risk inherent in lending, and AI models can perpetuate or increase this risk, leading to credit denials or higher-priced credit for borrowers, including those in protected classes,” the report said. “According to one consumer advocate, AI could steer borrowers, including those in protected classes, toward inferior or costlier credit products.”
A given algorithm’s potential to reinforce or contribute to discriminatory decisions based on biased data has been well-documented over the years. GAO noted that this poor-quality data could then end up training a problematic algorithm.
“Factors that can increase risk include complex and dynamic AI models, poor-quality data, and reliance on third parties,” the report stated. “The function and outputs of AI can be negatively affected by data quality issues, such as incomplete, erroneous, unsuitable, or outdated data; poorly labelled data; data reflecting underlying human prejudices; or data used in the wrong context.”
AI models deployed in financial institutions’ environments also run the risk of producing misleading information, hallucinating entirely made-up answers and exposing sensitive consumer data. Cybersecurity risks — namely novel cyberthreats — could also result in other failures within the environment’s information technology architecture.
GAO recommended that financial oversight groups offer more AI-specific guidance, saying that the National Credit Union Administration should update its model risk management guidance to include a broader array of AI models with more details on their use and risks.
Notably, the report said the NCUA lacked the authority to examine financial institutions’ technology service providers, citing GAO’s previous recommendation that Congress grant such authority to the NCUA.
“Some regulators have issued AI-specific guidance, such as on AI use in lending, or conducted AI-focused examinations,” the report said. “Regulators told GAO they continue to assess AI risks and may refine guidance and update regulations to address emerging vulnerabilities.”