AI Is Used Widely, but Lawmakers Have Set Few Rules

Connecticut is the latest state to legislate new guardrails for artificial intelligence.

This story was first published by Stateline. Read the original article here.

In the fall of 2016, the Connecticut Department of Children and Families began using a predictive analytics tool that promised to help identify kids in imminent danger.

The tool used more than two dozen data points to compare open cases in Connecticut’s system against previous welfare cases with poor outcomes. Then each child received a predictive score that flagged some cases for faster intervention.
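The pipeline described above — a couple dozen data points compared against past cases with poor outcomes, distilled into a single score that flags some cases for faster intervention — can be illustrated with a deliberately simplified sketch. Everything here is invented for illustration (the feature names, weights and threshold are hypothetical); the real tool's inputs and model were never made public, which is the opacity researchers would later criticize.

```python
# Toy sketch of a child-welfare risk-scoring pipeline. All feature names
# and weights are invented; a real system would learn weights from
# historical cases with poor outcomes.

# Hypothetical weights over a handful of risk indicators.
WEIGHTS = {
    "prior_reports": 0.4,
    "caregiver_substance_use": 0.3,
    "child_age_under_3": 0.2,
    "prior_removals": 0.5,
}

def risk_score(case: dict) -> float:
    """Weighted sum of a case's risk indicators, scaled to the 0-1 range."""
    raw = sum(WEIGHTS[k] * case.get(k, 0) for k in WEIGHTS)
    return raw / sum(WEIGHTS.values())

def flag_for_fast_intervention(cases: list, threshold: float = 0.6) -> list:
    """Return the IDs of open cases whose score exceeds the threshold."""
    return [c["id"] for c in cases if risk_score(c) > threshold]

open_cases = [
    {"id": "A", "prior_reports": 1, "prior_removals": 1},
    {"id": "B", "child_age_under_3": 1},
]
print(flag_for_fast_intervention(open_cases))  # prints ['A']
```

Even this toy version shows the accountability problem the article goes on to describe: unless the weights and inputs are disclosed, a caseworker seeing only the final score has no way to know why a case was, or was not, flagged.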

Even as more states began to adopt the tool, however, some agencies found that it seemed to miss urgent cases and inaccurately flag less serious ones. A study published in the journal Child Abuse & Neglect later found it didn’t improve child outcomes. Connecticut and several other states abandoned the tool, which was developed by a private company in Florida. In 2021—five years after Connecticut’s Department of Children and Families first used the tool, and two years after the state junked it—researchers at Yale University requested information about the mechanics of how it worked and concluded that the agency had never understood it.

“This is a huge, huge public accountability problem,” said Kelsey Eberly, a clinical lecturer at Yale Law School. “Agencies are getting these tools, they’re using them, they’re trusting them—but they don’t even necessarily understand them. And the public certainly doesn’t understand these tools, because they don’t know about them.”

Connecticut is the latest state to pass explicit regulations for artificial intelligence and other automated systems, thanks in part to the legacy of its tool for screening at-risk kids. A bipartisan bill passed May 30, which Democratic Gov. Ned Lamont is expected to sign into law, would require state agencies to inventory and assess any government systems that use artificial intelligence and create a permanent working group to recommend further rules.

Many states already regulate aspects of these technologies through anti-discrimination, consumer protection and data privacy statutes. But since 2018, at least 13 states have established commissions to study AI specifically—and since 2019, at least seven states have passed laws aimed at mitigating bias, increasing transparency or limiting the use of automated systems, both in government agencies and the private sector.

In 2023 alone, lawmakers in 27 states, plus Washington, D.C., and Puerto Rico, considered more than 80 bills related to AI, according to the National Conference of State Legislatures.

Artificial intelligence tools—defined broadly as technologies that can perform complex analysis and problem-solving tasks once reserved for humans—now frequently determine what Americans see on social media, which students get into college, and whether job candidates score interviews.

More than a quarter of all American businesses used AI in some form in 2022, according to the IBM Global AI Adoption Index. In one striking illustration of AI’s growing ubiquity, a recent bill to regulate the technology in California drew comment from organizations as diverse as a trade association for the grocery industry and a state nurses union.

But federal legislation has stalled, leaving regulation to local governments and creating a patchwork of state and municipal laws.

“The United States has been very liberal on technology regulation for many years,” said Darrell M. West, a senior fellow in the Center for Technology Innovation at the Brookings Institution think tank and the author of a book on artificial intelligence. “But as we see the pitfalls of no regulation — the spam, the phishing, the mass surveillance — the public climate and the policymaking environment have changed. People want to see this regulated.”

Lawmakers’ interest in regulating technology surged during this legislative session, and is likely to grow further next year, thanks to the widespread adoption of ChatGPT and other consumer-facing AI tools, said Jake Morabito, the director of the Communications and Technology Task Force at the conservative American Legislative Exchange Council (ALEC), which favors less regulation.

‘Tremendous’ Potential and Dangers

Once the stuff of science fiction, artificial intelligence now surfaces in virtually every corner of American life. Experts and policymakers have often defined the term broadly, to include systems that mimic human decision-making, problem-solving or creativity by analyzing large troves of data.

AI already fuels a suite of speech and image recognition tools, search engines, spam filters, digital map and navigation programs, online advertising and content recommendation systems. Local governments have used artificial intelligence to identify lead water lines for replacement and speed up emergency response. A machine-learning algorithm deployed in 2018 slashed sepsis deaths at five hospitals in Washington, D.C., and Maryland.

But even as some AI applications yield new and unexpected social benefits, experts have documented countless automated systems with biased, discriminatory or inaccurate outcomes. Facial recognition services used by law enforcement, for instance, have repeatedly been found to falsely identify people of color more often than white people. Amazon scrapped an AI recruiting tool after it discovered the system consistently penalized female job-seekers.

Critics sometimes describe AI bias and error as a “garbage in, garbage out” problem, said Mark Hughes, the executive director of the Vermont-based racial justice organization Justice for All. In several appearances before a state Senate committee last year, Hughes testified that lawmakers would have to intervene to prevent automated systems from perpetuating the bias and systemic racism often embedded in their training data.

“We know that technology, especially something like AI, is always going to replicate that which already exists,” Hughes told Stateline. “And it’s going to replicate it for mass distribution.”

More recently, the advent of ChatGPT and other generative AI tools—which can create humanlike writing, realistic images and other content in response to user prompts—has raised new concerns among industry and government officials. Such tools could, policymakers fear, displace workers, undermine consumer privacy and aid in the creation of content that violates copyright, spreads disinformation and amplifies hate speech or harassment. In a recent Reuters/Ipsos poll, more than two-thirds of Americans said they were concerned about the negative effects of AI—and 3 in 5 said they feared it could threaten civilization.

“I think that there’s tremendous potential for AI to revolutionize how we work and make us more efficient—but there are also potential dangers,” said Connecticut state Sen. James Maroney, a Democrat and champion of that state’s AI law. “We just need to be cautious as we move forward.”

Connecticut’s new AI regulations provide one early, comprehensive model for tackling automated systems, said Maroney, who hopes to see the regulations expand from state government to the private sector in future legislative sessions.

The law creates a new Office of Artificial Intelligence in the state executive branch, tasked with developing new standards and policies for government AI systems. By the end of the year, the office must also create an inventory of automated systems used by state agencies to make “critical decisions,” like those regarding housing or health care, and document that they meet certain requirements for transparency and nondiscrimination.

The law draws from recommendations by scholars at Yale and other universities, Maroney said, as well as from a similar 2021 law in Vermont. The model will likely surface in other states too: Lawmakers from Colorado, Minnesota and Montana are now working with Connecticut to develop parallel AI policies, Maroney said, and several states—including Maryland, Massachusetts, Rhode Island and Washington—have introduced similar measures.

In Vermont, the law has already yielded a new advisory task force and a state Division of Artificial Intelligence. In his first annual inventory, Josiah Raiche, who heads the division, found “around a dozen” automated systems in use in state government. Those included a computer-vision project in the Department of Transportation that uses AI to evaluate potholes and a common antivirus software that detects malware in the state computer system. Neither tool poses a discrimination risk, Raiche said.

But emerging technologies might require more vigilance, even as they improve government services, he added. Raiche has recently begun experimenting with ways that state agencies could use generative AI tools, such as ChatGPT, to help constituents fill out complex paperwork in different languages. In a preliminary, internal trial, however, Raiche found that ChatGPT generated higher-quality answers to sample questions in German than it did in Somali.

“There’s a lot of work to do to make sure equity is maintained,” he said. But if done right, automated systems “could really help people navigate their interactions with the government.”

A Regulatory Patchwork

Like Connecticut, Vermont also plans to expand its AI oversight to the private sector in the future. Raiche said the state will likely accomplish that through a consumer data privacy law, which can govern the data sets underlying AI systems and thus serve as a sort of backdoor to wider regulation. California, Connecticut, Colorado, Utah and Virginia have also passed comprehensive data privacy laws, while a handful of jurisdictions have adopted narrower regulations targeting sensitive or high-risk uses of artificial intelligence.

By early July, for instance, New York City employers who use AI systems as part of their hiring process will have to audit those tools for bias and publish the results. Colorado, meanwhile, requires that insurance companies document their use of automated systems and demonstrate that they do not result in unfair discrimination.

The emerging patchwork of state and local laws has vexed technology companies, which have begun calling for federal regulation of AI and automated systems. Most technology companies cannot customize their systems to different cities and states, said West, of the Brookings Institution, meaning that—absent federal legislation—many will instead have to adopt the most stringent local regulations across their entire geographic footprint.

That is a situation many companies hope to avoid. In April, representatives from a wide range of business and technology groups lined up to oppose a California AI bill that would have required private companies to monitor AI tools for bias and report the results—or face hefty fines and consumer lawsuits. The bill survived two committee votes in April before dying in the Assembly Appropriations Committee.

“Governments should collaborate with industry and not come at it with this adversarial approach,” said Morabito, of ALEC. “Allow the market to lead here … a lot of private sector players want to do the right thing and build a trustworthy AI ecosystem.”

ALEC has proposed an alternative, state-based approach to AI regulation. Called a “regulatory sandbox,” the program allows businesses, in collaboration with state attorneys general’s offices, to try out emerging technologies that might otherwise conflict with state laws. Such sandboxes encourage innovation, Morabito said, while still protecting consumers and educating policymakers on industry needs before they draft legislation. Arizona and Utah, as well as the city of Detroit, have recently created regulatory sandboxes where companies can conduct AI experiments.

Those programs have not prevented lawmakers in those states from also pursuing AI regulations, however. In 2022, a Republican-sponsored bill in Arizona sought to bar AI from infringing on residents’ “constitutional rights,” and the Utah legislature recently convened a working group to consider possible AI legislation.

Policymakers no longer consider AI a vague or future concern, Yale’s Eberly said—and they aren’t waiting for the federal government to act.

“AI is here whether we want it or not,” she added. “It’s part of our lives now … and lawmakers are just trying to get ahead of it.”
