State officials should use ‘safety goggles’ when implementing AI, Minnesota official says

As state legislators' interest in AI expands, so too should their efforts to establish guardrails and policies that ensure the technology's responsible use.
State policymakers are grappling with a technology revolution as artificial intelligence proves itself a game changer for streamlining government operations, optimizing workflows and improving service delivery, one Minnesota official says.
More and more, leaders see AI as a critical lever to “improve efficiency, to build better applications and … to better support the legislative staff members and the public in general,” said Chris Cantey, information systems manager at the Minnesota Legislative Coordinating Commission.
But policymakers must remember that successful AI outcomes require a responsible approach, Cantey said during a session at the National Conference of State Legislatures' Legislative Summit in Boston last week.
The Minnesota Office of Diversity, Inclusion, Accessibility and Language Services, for instance, tasked Cantey’s team with developing language translation services, so that legislative materials could be more accessible to non-English speakers in the state.
Last year, DIAL launched a pilot program leveraging ChatGPT for translation services, which Cantey said helped improve the speed and accuracy of machine-translated materials.
The AI helps translate content and documents into plain language while maintaining specific grammatical and semantic nuances throughout translations. For instance, the AI model is trained to recognize that, in the context of government, the term chamber refers to the legislative houses or the physical space in which those bodies meet, Cantey explained.
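Minnesota has not published the pilot's code, but the terminology handling Cantey describes can be approximated by carrying a legislative glossary in a system prompt. The sketch below assumes the OpenAI Python SDK; the model choice, glossary entries and `translate` helper are illustrative assumptions, not the state's actual implementation.

```python
# Hypothetical sketch: domain-aware translation via a system prompt.
# The model name, glossary entries and helper are illustrative
# assumptions, not Minnesota's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Legislative glossary: terms the model should interpret in a
# government context rather than translating them generically.
GLOSSARY = {
    "chamber": "a legislative house (House or Senate) or the physical "
               "space in which that body meets",
    "engrossment": "the formally corrected version of a bill",
}

def translate(text: str, target_language: str) -> str:
    """Translate a legislative document into plain-language prose."""
    glossary_notes = "\n".join(
        f"- '{term}': {meaning}" for term, meaning in GLOSSARY.items()
    )
    system_prompt = (
        f"Translate legislative documents into plain-language "
        f"{target_language}. Preserve grammatical and semantic nuance. "
        "In this government context:\n" + glossary_notes
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```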
Such AI tools have also significantly reduced application development time for Cantey's team, he said. "To develop an application, it might have taken four months. Now it takes four weeks. A script that might take four weeks, now takes four days," he explained.
While AI has transformed the translation process for Minnesota officials, human translators and interpreters remain an essential part of the process, providing quality control and assurance for translations, Cantey said.
Ultimately, technologists should “treat AI as a power tool and use safety goggles,” he said.
Indeed, a cautious approach to AI could help policymakers better get a handle on the “considerable risks with AI that we need to be aware of,” said Chad Dahl, group infrastructure manager for the Washington State Legislature. “I don’t think we should allow fear to stop us from an excellent opportunity, but we do need to tread lightly, carefully and thoughtfully into what we do [with AI].”
Officials must consider, for example, how to track and record the prompts used with AI tools, who owns AI outputs, and how to protect the data and privacy of content fed into AI services, he explained.
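Dahl did not detail Washington's tooling, but prompt tracking of the kind he describes can start as a simple audit log written before each model call. The sketch below is a hypothetical illustration; the `log_prompt` helper, field names and JSON-lines format are assumptions, not the legislature's system.

```python
# Hypothetical sketch of prompt audit logging; the record fields and
# JSON-lines format are illustrative assumptions.
import datetime
import json
from pathlib import Path

AUDIT_LOG = Path("ai_prompt_audit.jsonl")

def log_prompt(user: str, tool: str, prompt: str) -> None:
    """Append one audit record per AI request, before the call is made."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,      # who issued the prompt
        "tool": tool,      # which AI service was used
        "prompt": prompt,  # the exact text sent
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```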
Dahl pointed to the state's effort to create a legislature-wide acceptable use policy for generative AI, which urges leaders to consider appropriate use cases for the technology, such as summarizing meetings or assisting with writing.
“The approach is education, not prohibition,” he said.
Such education is imperative, as the adoption of AI across states shows little sign of stopping, said Will Clark, program principal at NCSL.
In 2024, 20% of legislative staff said they use generative AI in their work, according to an NCSL survey that drew 79 responses from 40 states. This year's survey results, which are not yet public, show that figure has grown to 44%, Clark said.
But the number of respondents saying their legislative office had a policy regarding the use of AI held steady at 16 from 2024 to 2025, he said.
“If you don’t have a policy, you should really start thinking about that,” Clark said.
Among legislatures that reported having an AI policy, Clark said, guardrails range from prohibiting any AI use by legislative staff, to allowing free but cautious use, to restricting staff to certain approved AI tools.
Legislators also report policies that require staff to obtain a manager's consent before using AI, or that limit what information and data staff may enter into AI services, he added.
Whether legislators like it or not, artificial intelligence has found a place in government and is not likely to leave, particularly as agencies see AI's potential to be a force multiplier for small, under-resourced teams, Dahl said in an interview with Route Fifty.
“For us, [AI] has become a need-to-have,” he said, which is why it’s crucial for legislatures to establish security and safety policies and procedures for AI adoption and use in government. “It’s gone beyond [being] a shiny tool to a partner in our program.”