The Connecticut legislation calls for an inventory of the technology’s use in government and establishes an artificial intelligence working group to make recommendations.
Connecticut Gov. Ned Lamont last week signed a bill that governs the state’s use of artificial intelligence and tasks the legislature with building an AI “bill of rights.”
The law, which passed both chambers of the Connecticut General Assembly by the end of May, requires the legislature to form a working group to make recommendations on how AI should be regulated and on a potential bill of rights based on the blueprint released last year by the White House Office of Science and Technology Policy.
The legislation also requires the Department of Administrative Services to undertake an inventory of and provide impact assessments for the state’s use of AI systems by the end of this year. The department then would provide ongoing assessments of the technology’s use. The state’s Judicial Department must conduct a similar inventory and develop policies on AI’s use to prevent discrimination and disparate impacts.
Separately, the state’s Office of Policy and Management must produce by Feb. 1, 2024, “policies and procedures concerning the development, procurement, implementation, utilization and ongoing assessment of systems that employ artificial intelligence” and are used by state agencies, per the bill text.
State Sen. James Maroney, Senate chair of the General Law Committee, where the bill originated, cited testimony from earlier this year that algorithms are often trained on biased data. Police departments, for example, rely on predictive policing tools to decide where to deploy officers and resources. When those tools draw on historic crime rates, and many communities of color have been overpoliced in the past, AI can perpetuate racial profiling and other biases.
“We owe it to our residents to ensure that as a government we do not discriminate in providing or have disparate impacts through the provision of services that our constituents need and deserve,” Maroney said in a statement in May after the bill passed the Connecticut State Senate.
As the legislation worked its way through the General Assembly, it received strong support from the American Civil Liberties Union. In a statement, the organization’s Connecticut branch said that while AI can have “incredible benefits,” it also poses “threats to our civil rights and civil liberties if misused.” In urging its passage, the group also noted that algorithms and AI “can perpetuate racial bias and inequity and deeply change how people interact with the government.”
The legislation was proposed partly as a result of a study conducted last year by the Connecticut Council on Freedom of Information and the Media Freedom and Information Access Clinic at Yale Law School.
Researchers found that while state agencies had begun using AI and other automated systems to make decisions that affect residents’ lives, algorithmic decision-making was not transparent. The public was not being told whether AI tools had been properly and equitably developed or how they were being used, the report said.
Agencies are using AI and algorithms “in ways neither the public nor the agencies themselves fully understand,” said Kelsey Eberly, clinical lecturer and Abrams fellow at the MFIA Clinic.
“When that happens, we don’t know why certain children find seats at magnet schools, certain job seekers’ applications filter to the top, or certain families are flagged for child welfare visits, decisions far too weighty to be made by black box technology,” Eberly continued. “This legislation brings much-needed ‘sunshine.’”
State lawmakers have introduced a slew of legislation to regulate AI this year, with the nonpartisan National Conference of State Legislatures noting that many bills are designed to study the impact of AI or algorithms and the role policymakers could play.