How preemption worsens the AI accountability gap

COMMENTARY | The federal government wants to stop states from regulating the technology, but doing so risks real harm and does not guarantee international leadership.
America absolutely needs to compete in — and win — the global artificial intelligence race. But federal preemption of state and local AI rules before we have a strong, enforceable national safety-and-accountability framework is troubling: it shifts more power to a few federal chokepoints and a handful of dominant vendors, while shrinking the practical tools communities are trying to use to prevent harm.
That risk isn’t theoretical. In December 2025, the White House issued an executive order aimed at discouraging or challenging state AI laws, including by directing federal agencies to review state measures, establishing litigation strategies, and signaling that federal funding could be withheld as leverage.
Regardless of where you stand politically, the policy direction is clear: reduce the ability of states and localities to set guardrails even as AI is rapidly embedded into public-facing decisions — benefits eligibility, child services, housing, workforce screening, licensing, education and public safety.
Why Does Preemption Raise Public Risk?
It Removes the “Closest” Layer of Democratic Accountability
When AI harms people, the fastest and most tangible accountability often happens locally: city councils that can demand answers, state attorneys general who can investigate, state courts that can order remedies, and state lawmakers who can quickly tighten rules after a scandal.
Preemption replaces this multi-layered system with a single, often slower, and more political process. When something goes wrong, residents ask, “Who do we call?” and, too often, the answer is a federal agency with limited staff, limited bandwidth and competing priorities.
It Encourages a “Minimum Viable Compliance” Culture
A single national standard sounds appealing, especially to companies facing a patchwork of rules. But if the federal standard is weak — or lags behind real-world harms — it becomes a ceiling rather than a floor.
Preemption can turn “one-size-fits-all” into “one-size-fits-none,” creating a compliance shield that vendors can cite while communities carry the consequences.
It Undermines the U.S. Tradition of States as Policy Laboratories
States are where we often pilot protections and learn what works. In privacy and digital governance, state action has repeatedly filled gaps when federal action stalls.
With AI, many states are already experimenting with targeted guardrails — especially around transparency, consumer deception, biometric data, deepfakes, and discriminatory automated decision-making. The National Conference of State Legislatures tracks extensive state AI activity, illustrating just how rapidly this policy surface is evolving.
It Narrows Enforcement Pathways That Actually Change Behavior
In practice, accountability comes from enforceable rules and credible consequences. State laws can create additional enforcement routes — state agencies, AG actions, and sometimes private rights of action — that materially affect how vendors design, test, and monitor systems.
Consider Illinois’ Biometric Information Privacy Act, which requires informed consent and retention policies for biometric data — an area deeply implicated by face recognition and identity-related AI systems.
Preemption that neutralizes state enforcement tools doesn’t just “simplify compliance”; it can reduce deterrence, which is another way of saying it increases public exposure.
The argument that we need uniformity to compete is real, but incomplete.
Yes: an inconsistent regulatory landscape can slow adoption and raise costs, especially for smaller firms. And yes: the United States needs a coherent national approach so we’re not ceding innovation leadership. But competitiveness does not require immunity.
A better framing is this: the U.S. will lead in AI because we build systems that are trusted by citizens, by businesses, and by democratic allies abroad. Trust is not a marketing slogan; it’s an outcome of testing, transparency, and consequences.
If federal preemption is used to block state safeguards while the federal government simultaneously lacks clear, enforceable accountability rules for high-risk uses, we risk a familiar pattern: rapid deployment, headline harms, public backlash, and then overcorrection. That cycle is far more damaging to “AI supremacy” than smart guardrails ever will be.
What’s Lost When States and Localities Can’t Act
Look at where public-sector AI risk clusters:
- Automated eligibility and case management can produce wrongful denials or delays in benefits and services
- Hiring and workforce tools can encode bias and create discriminatory outcomes at scale
- Education AI can mislead students, compromise privacy, or widen inequities
- Policing and public safety tools can introduce false positives, due process concerns, and surveillance creep
- Misinformation and deepfakes can distort local elections, emergency response, and community trust
States have been moving — sometimes imperfectly, but importantly — toward rules tailored to these risks. Utah, for example, enacted an AI policy act focused on disclosure obligations and accountability boundaries for certain uses. Other states have advanced measures around transparency and discrimination controls, aiming to create “pressure” for more responsible design and deployment.
If preemption cuts off that pressure, the burden shifts to a smaller set of federal levers — often the Federal Trade Commission’s unfair/deceptive authority, sector regulators and procurement language. Those tools matter, but they are not enough on their own, especially given how fast AI is advancing.
A More Innovative Alternative
This doesn’t have to be a binary choice between “50 states of chaos” and “one federal rule to bind them all.”
A pragmatic, pro-innovation model looks like this:
- Federal baseline standards for high-risk AI, which include transparency, testing, documentation, incident reporting, red-teaming, and clear liability lines
- Room for states and localities to go further in defined domains — civil rights, consumer protection, biometrics, youth protection, election integrity, and government service delivery
- Shared definitions and interoperable compliance artifacts so vendors can comply once in a meaningful way — through model cards, audit reports, data governance attestations and the like — rather than reinvent paperwork in every jurisdiction
- Procurement-driven accountability, so if you sell AI to government, you must meet auditable requirements, provide logs, support independent evaluation and accept contractual penalties for harmful misrepresentation
This “cooperative federalism” approach preserves national competitiveness while keeping accountability close to where harms are felt. It also aligns with the common-sense idea that innovation accelerates when rules are clear and trust is high.
The Bottom Line
Preempting state and local AI laws may seem like a shortcut to competitiveness, but it can also lead to public harm — especially if it weakens the accountability mechanisms that compel AI developers and deployers to act responsibly. America can lead the world in AI.
But leadership is not just speed. It’s responsible outcomes, enforceable accountability, and public trust — and we shouldn’t preempt the safeguards that help deliver all three.
Alan R. Shark is an associate professor at the Schar School of Policy and Government at George Mason University, where he also serves as a faculty member in the Center for Human AI Innovation in Society. He is also a senior fellow and former executive director of the Public Technology Institute, a fellow of the National Academy of Public Administration, and founder and co-chair of its Standing Panel on Technology Leadership. He is the host of the podcast series Sharkbytes.net.