4 questions to ask before turning to AI for translation services

Minnesota officials developed a framework to help users determine when and how to use large language models and artificial intelligence for language translation.
CHICAGO — As artificial intelligence increasingly piques the interest of state and local leaders looking to leverage it to improve government services like language translation, it’s important to consider whether AI is the right solution in the first place, a team of Minnesota officials said.
Language accessibility is a growing priority for leaders across the U.S. as state and local officials increasingly turn to AI-enabled translation services and tools in an effort to connect more non-English speakers to government resources, like public assistance or emergency services.
“There is an increasing presence of [generative artificial intelligence] in the tools we use, and frankly, given the scope and breadth of work that we are doing, these tools are often necessary,” said Alyssa Lawrence, content strategist and user experience writer at the Minnesota Department of Human Services, during a breakout session at Code for America’s 2026 Summit last week in Chicago.
“We can, however, assess to what extent, how [and] why we engage with these tools,” she added. “In other words, there are conditions of our work that constrain us, and within those constraints, we have choices.”
That’s where a framework designed by an interdisciplinary design team at DHS comes into play. The framework helps staff assess why and how to leverage large language models for delivering the state’s language access services.
The guide first prompts government users to consider whether the material or content in question warrants the effort to adopt an LLM for translations, Lawrence said. The framework also suggests users consider the frequency and quantity of required translations to determine if an LLM is necessary.
In Minnesota’s case, officials were looking to offer AI-enabled translations for content related to the state’s Provider Hub, an online platform for social service providers — child care, mental health or substance use disorder service providers and the like — to apply for and manage their licenses.
The hub also serves as a resource for social service providers to receive training and instructional design materials, including maintenance alerts, newsletters, brochures and other communications. Such content needs to be updated and translated four to 12 times a year, depending on the specific product, according to Lawrence.
Based on those criteria, the design team determined that the hub was an appropriate use case for a translation LLM because “for us, support for providers offering critical services equals support for residents seeking [those] services,” Lawrence said.
The next consideration for users exploring LLM solutions for translations is to evaluate what resources and tools are at their agency’s disposal, according to the framework. For instance, users should assess the speed, accuracy and cost of various translation options against their organization’s funding and staffing capacity.
As an example, Minnesota officials determined that the state’s options were to rely solely on an LLM translation product, on in-house or contracted human translators, or on a combination of the two, said Katie Lane, lead instructional designer at DHS.
To meet the state’s translation needs, Minnesota lawmakers ultimately created the Enterprise Translation Office in 2024, which comprises six multilingual staff members who complete translation requests for executive branch agencies in the state. The office leverages ChatGPT to assist with generating translations, and staff conduct quality assurance of translated material, Lane said.
The ETO has effectively cut translation time in half for some languages, she said. Spanish and Somali translations, for example, previously took two hours to complete when only human staff were available and now take around one, and the time to translate Hmong content dropped from four hours to two, according to Lane.
Those impacts helped the design team confirm why they wanted to leverage an LLM and which product to implement for state workers, a third key consideration the framework recommends users evaluate, Lane said.
A final, critical topic to contemplate under the framework is how to prepare the LLM and the content being fed into it to ensure generated translations are accurate and high quality, said Melissa Landin, instructional design coordinator at DHS.
“We find it's really important to have a very good understanding of the languages that you are actually translating in … [because] there are different nuances to languages, and because of that, it's super important to understand what you might need to do differently with your LLM because of the language that you're translating in,” she said.
For instance, users must consider how to train an LLM to recognize that the same thought is conveyed differently in English than in another language, Landin said. One way to do that is to distill English training content or LLM prompts into plain, common language so that outputs are translated more directly.
Users should also consider which additional content should, or should not, undergo translation, Landin said. For instance, she pointed to alt text on graphics as an effective use case for optimizing the language accessibility of content.
However, official or proper names, like the title of the Provider Hub, could forgo translation so that the public can recognize the resource across languages and avoid confusion, she said.
The framework also suggests that users develop a translation quality assurance plan, such as repeated human reviews, to ensure content is as accurate and culturally appropriate as possible, Landin added.
Quality assurance “is probably the most important step of your process,” she said. “That time spent [doing quality assurance] makes for better translation. It takes a little bit more time, but in the end, people understand our translations better. They don't come back to us as much with revisions, and it really does help the end user.”
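The framework’s four considerations can be read as a simple pre-adoption checklist. A minimal, hypothetical sketch in Python (the class, field names, and frequency threshold are illustrative, not DHS’s actual tooling or criteria):

```python
# Illustrative sketch of the four framework questions as a checklist.
# All names and the min_frequency default are assumptions for this example.

from dataclasses import dataclass


@dataclass
class TranslationUseCase:
    warrants_llm_effort: bool   # 1. Does the content justify adopting an LLM?
    translations_per_year: int  # 1. Frequency/quantity of required translations
    has_staff_reviewers: bool   # 2. Resources: is human QA capacity available?
    impact_confirmed: bool      # 3. Why: is there a measurable benefit (e.g., time saved)?
    qa_plan_in_place: bool      # 4. Preparation: does a quality assurance plan exist?


def ready_to_adopt(case: TranslationUseCase, min_frequency: int = 4) -> bool:
    """Return True only if every framework question is answered favorably."""
    return (
        case.warrants_llm_effort
        and case.translations_per_year >= min_frequency
        and case.has_staff_reviewers
        and case.impact_confirmed
        and case.qa_plan_in_place
    )


# Example: a Provider Hub-like use case updated 4 to 12 times a year
hub = TranslationUseCase(True, 12, True, True, True)
print(ready_to_adopt(hub))  # True
```

A use case that fails any one question, such as content translated only once or twice a year, would fall below the threshold and suggest human translation alone may suffice.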




