A good AI policy needs to consider these 12 factors
As states develop guidelines and roadmaps for the new technology in the year ahead, the National Association of State Chief Information Officers has suggested a dozen priorities it says are key to a successful AI blueprint.
Since this summer, one after the other, governors have issued executive orders and policy documents on artificial intelligence, as they’ve tried to prepare their agencies and their residents for the technology.
California Gov. Gavin Newsom looked to preserve the state’s role as a “global hub” for generative AI by issuing an executive order in September that included various mandates for departments to establish the best use cases while protecting residents from AI’s pitfalls and risks. A follow-up report recently identified a number of beneficial uses for California agencies.
Pennsylvania Gov. Josh Shapiro signed a similar order to take what he called a “proactive approach” to the technology, while many other states, like New Jersey, Oklahoma and Wisconsin, have created task forces to study AI further. And some states, including Kansas, South Dakota and Utah, have issued policy guidance on generative AI in a bid to ensure employees use the technology ethically and in the right cases.
Given this flurry of activity in state executive offices and the fast-moving nature of the technology, the National Association of State Chief Information Officers has issued a dozen considerations it says state leaders must think about when crafting their own AI blueprints. NASCIO said such a roadmap “will emerge as an indispensable tool for states in the months and years ahead.”
“An AI roadmap not only facilitates the seamless adoption of AI but also enhances efficiency for an already strained state government workforce,” the organization continued.
Among the first considerations is for government leaders to determine how AI can help their organizations achieve their strategic goals. IT leaders are urged to think about how the technology fits into those goals, and not to “assume AI will solve every problem or help you reach every goal.” Business cases and overall goals must be identified first, NASCIO said.
The group also warned of the issues that an absence of proper AI governance and oversight can create, including data breaches, privacy violations and the erosion of citizen trust. NASCIO urged states to adopt or at least be inspired by existing governance frameworks, like the National Institute of Standards and Technology’s AI Risk Management Framework, the Organisation for Economic Co-operation and Development’s Recommendations on Artificial Intelligence or the European Union’s AI Act.
Another consideration is for states to take inventory and document existing applications that use AI, as they may have employed the technology “knowingly and unknowingly,” NASCIO said. Leaders should be mindful of the quality and source of the data their AI models will rely on and be aware of potential biases.
Once those steps have been taken, NASCIO recommended states create an advisory board or task force that includes relevant agency heads, as well as lawyers and others with expertise in AI ethics. The group also suggested forming industry partnerships through those task forces to tap outside expertise and innovation.
NASCIO called on governments to assess the privacy and cybersecurity risks of adopting AI—leaning on the National Institute of Standards and Technology’s guidance—and assess the current state of technology infrastructure. Legacy tech has been a “common roadblock” for adopting AI, the group said.
States should create acquisition and development guidelines so they can properly procure AI systems, making sure the language is easy to update to cover any concerns and developments in the technology. And they must work to expand their employees’ AI expertise, identify where it already exists and partner with local education institutions for training and other opportunities. Staff with ethical, legal and policy expertise also must be empowered, NASCIO said.
Government leaders should create their own guidelines for the responsible, ethical and transparent use of AI, and by doing so ensure that any users are informed about risks of discrimination or bias. NASCIO said states should “prioritize transparency measures to foster trust among citizens.” And governments will want to have clear metrics in place to measure the progress and success of any AI initiatives, as well as be able to communicate those properly.
“While each state’s AI roadmap will be unique to its specific needs, strategic plans and priorities, including these important considerations ensures the establishment of a solid foundation for the seamless integration of AI into state IT initiatives,” NASCIO said.
The group clearly believes that AI will be one of the most important issues that state CIOs will face next year. The organization ranked the technology—along with machine learning and robotic process automation—at No. 3 on its list of top 10 priorities for 2024. The first two slots on that list went to cybersecurity and risk management, and digital government and digital services.
In a separate priority list, NASCIO also named AI and robotic process automation as top 10 priority technologies, applications and tools, with chatbots and virtual assistants mentioned as some of the most promising use cases. It is the first time AI has been included on NASCIO’s priority lists.