Report: How to protect public sector workers against AI’s rise in government

A new report points to how governments and unions can work together to ensure AI is rolled out responsibly and effectively among staff.
Artificial intelligence is bringing hopes of streamlined workflows and enhanced service delivery to the workplace, but the technology has also stirred concerns about its impact on workers’ job security and on the public services that rely on it.
The development and implementation of AI in public services requires a human-centered approach to ensure the technology is leveraged responsibly, according to a new report released last week by the American Federation of Labor and Congress of Industrial Organizations.
Collaboration between state and local leaders and unions is one lever that can help maintain trust and transparency as AI is adopted among public sector workers, according to the report. Its findings highlight how government leaders and public sector unions can enable a responsible rollout of AI through both practice and policy.
Adopting tech like AI in government operations and services “affects the workers who perform the services, and it affects the people who rely on the services that the public service performs,” said Ed Wytkind, the interim director of the AFL-CIO Technology Institute. “It is important to be good stewards of taxpayer dollars and not [implement] AI tools that are harmful either to the employees or to the public.”
One way to protect public sector workers from potential AI harm is through AI training and digital literacy programs, according to the report. Without such support, the advancement of AI could outpace workers’ skills, leading to displacement.
It’s also crucial for public servants to be trained on AI tools as they are the ones directly implementing services that impact the public, Wytkind said.
The American Federation of Teachers, for instance, is partnering with major tech companies to offer a training program for educators to prepare them for using AI in the classroom. The National Academy for AI Instruction, announced in July, will offer workshops, online courses and technical assistance for educators to learn how to incorporate AI into their curricula and build AI tools for their classrooms.
Workers and unions should also be part of the research and development process as governments explore new AI solutions, particularly as “working people know what is needed day to day better than a distant engineer, software developer or senior executive,” the report stated.
Ensuring frontline staff are included in discussions about how AI-enabled services are built and used can help mitigate inefficiencies or errors in a new tech application, Wytkind explained.
“The chances of having a better, smarter, human-centered technology that takes into account the frontline workforce is much, much higher if the unions and their members are involved in [research and development] work,” he said.
Similarly, the procurement process is another way that governments and unions can shape responsible AI use, Wytkind said. Unions can help public servants negotiate with government employers to have a role in evaluating a potential AI tool’s impact on their work and service to residents.
For instance, public servants could help identify bias or discrimination in an AI service that unfairly denies certain people assistance services like public housing, he said. Such insights can also help shape the development of AI oversight and regulations that impact the private sector as well.
“In the public sector, there’s a certain responsibility with implementing tech … which is you establish patterns in the marketplace as you begin to implement new tools,” Wytkind explained.
To further protect staff amid AI’s expansion in the workplace, officials should also consider union negotiations that restrict public sector employers from using AI services to monitor staff activity or productivity on the job, or to make decisions related to hiring, firing or compensation, he said.
Some people fear that “AI can be a tool that can be weaponized against working people,” Wytkind said, but with proper guardrails and policies for its implementation in the public sector, “AI could be a technology that creates new and better opportunities for lots of people.”