A report from California on the technology found that it could improve performance and make services more accessible to residents, but risks must be mitigated.
Generative artificial intelligence has several “significant, beneficial use cases” for state governments. It could help public employees work more efficiently, improve how a state communicates with residents and help design better services.
But generative AI could also be inaccurate, unreliable and susceptible to manipulation, according to a new report published by the California Government Operations Agency. It is the first of several reports required under a September executive order from Gov. Gavin Newsom that looks to create guidelines and policies for the technology’s use in state government.
The report takes a preliminary dive into the “economic and transformative” benefits of generative AI, as well as the potential risks that policymakers will need to address.
“California can pioneer thoughtful and innovative approaches to GenAI adoption in state government,” said Amy Tong, California’s secretary of government operations, in a statement. “Through careful use and well-designed trials, we will learn how to deploy this technology effectively to make the work of government employees easier and improve services we provide to the people of California,” Tong, who led the team that issued the report, added.
The report examines the potential of generative AI to help improve employees’ performance, capacity and efficiency. Since the technology can synthesize hundreds of millions of data points simultaneously and summarize and classify them, the report says it could be a powerful tool in improving the speed of work. It could, for instance, be useful when analyzing public feedback on state policies, or it could be used to summarize meetings, work and other documents.
Generative AI could also help provide meaningful insights and predict outcomes in complex datasets, and then help explain those findings in plain language. Cybersecurity staff could use it to help analyze network activity logs, detect anomalies, explain those anomalies and propose remediation.
The report gives several specific use cases. It says generative AI technology could help analyze data collected by drones, satellites and other sensors to detect damage and deterioration in public infrastructure, helping improve maintenance forecasts. Generative AI could also help optimize software coding and explain unfamiliar code.
And the technology could help promote environmental sustainability by optimizing workloads, as it helps prioritize the allocation of certain resources, maximize energy efficiency and promote eco-friendly policies like going paper free.
Enabling better communication with California residents is a potential benefit of generative AI, according to the report. The technology could help convert educational materials into formats like audio books, large print text or braille. It could help translate government websites, documents, policies, forms and other materials into multiple languages, meaning external content is “more accessible to and inclusive of all Californians.”
The report says generative AI could make it easier for the state to design services and products that are more responsive to residents’ needs and better reflect California’s diverse geography and demographics.
In practice, the report says that generative AI could be used to identify specific groups from government service data who need more outreach or support services. Alternatively, it could identify groups that are disproportionately not accessing services and suggest ways to break down any barriers.
“Through responsible planning and implementation, Gen AI has the potential to enhance the lives of Californians,” said Liana Bailey-Crimmins, director of the California Department of Technology. “The state is excited to be at the forefront of this work. With streamlined services and the ability to predict needs, the deployment of Gen AI can make it easier for people to access government services they rely on, saving them time and money.”
But the report also warns of generative AI’s potential risks and pitfalls. It says the technology could be inaccurate or unreliable, so its conclusions need validation before being released publicly. The report also warned that AI could impact people’s safety if not used correctly, and that AI systems must be accountable and transparent, so that everyone has access to information at each stage of a tool’s life cycle.
The report also warned of the potential for data breaches and privacy violations involving AI tools and noted that they can be “susceptible to unique attacks and manipulations, such as poisoning of AI training datasets, evasion attacks, and interference attacks.”
The technology’s potential workforce impacts must also be fully assessed, including whether staff may need upskilling or retraining to use generative AI as part of their daily responsibilities. The report also called on the private and public sectors to provide “proactive and thoughtful” support for anyone whose job is displaced by generative AI.
More initiatives on AI are set to follow as part of Newsom’s wide-ranging executive order. These will include a joint risk analysis report from agencies and departments on potential threats to California’s energy infrastructure from the technology; guidelines for agencies and departments to analyze the impacts generative AI tools may have on vulnerable communities; a blueprint for public procurement of generative AI tools; training for state employees; and partnerships with academic institutions, culminating in a joint summit on the technology next year.