Human review, responsibility should be the ‘core feature’ of AI solutions, official says

Keeping human judgment at the center of AI tools like automated parking enforcement can help improve the accuracy of citations and defuse community backlash, experts say.
Artificial intelligence has emerged as a tool to help agencies issue parking fines and tickets more efficiently, particularly as many cities have understaffed enforcement teams, but well-trained human reviewers remain critical to the approval process, experts say.
Across the U.S., cities and towns are expanding, but, in many cases, their parking and curb real estate is not, said Subhash Challa, CEO of SenSen, an AI platform provider. As more residents and visitors pass through communities where apartments, businesses and other facilities vie for curb space, AI-enabled camera systems and sensors are helping traffic authorities more efficiently catch people who stay parked past their meter time or drop their vehicle in a restricted area.
Indeed, cities like Philadelphia, Boston and Santa Monica, California, have recently installed surveillance systems on street signs or government vehicles to enforce parking regulations and reduce traffic buildup on public streets. For many municipalities grappling with declining budgets, more streamlined parking enforcement can also serve as an additional revenue stream.
It can be tempting to incorporate an AI solution into parking enforcement, like an automated ticketing or fine system, but “everybody adopting any of these [artificial intelligence] technologies needs to address the risks … and develop appropriate risk reduction or mitigation strategies,” said Marc Pfeiffer, senior policy fellow at Rutgers University’s Center for Urban Policy Research.
“That’s where subject matter expertise becomes important. AI seems so confident, and the language [the tech] uses is intended to build confidence in you,” he said. That’s where being trained on how AI works and its limitations can help agency staff be more attuned to double-checking AI-enabled results or identifying potential errors that need further evaluation.
Without that expertise, more mistakes, like incorrectly issuing fines to drivers, can occur, “and there will be times when there is an egregious error made, and it’s going to snap back into the agency’s face,” Pfeiffer said.
Indeed, New York’s Metropolitan Transportation Authority faced backlash over its use of AI-enabled cameras on certain public buses, which mistakenly flagged and ticketed approximately 3,800 vehicles for blocking bus lanes in 2024; more than 870 of those tickets went to vehicles that were legally parked. Similar challenges unfolded in Alameda, California, after reports emerged that the Alameda-Contra Costa Transit District had incorrectly issued $110 tickets to cars parked in legal spaces away from a bus stop.
Both agencies said the tickets were reviewed by human staff, but such incidents underscore the value of proactive risk assessment and management before deploying AI for enforcement purposes.
Upfront prevention and risk management could have spared agencies from consequences like lawsuits and negative pushback from the community, Pfeiffer explained.
“Technology alone does not determine success,” Maria Tamayo-Soto, parking services manager for Las Vegas, said in an email to Route Fifty. “Implementation strategy, staff training, clear public communication and well-defined processes play an equally important role.”
Since 2020, Las Vegas has used a platform from SenSen to enable AI-driven parking enforcement through license plate reader units deployed throughout the city. Drawing on GPS data, the system flags vehicles violating parking regulations in designated zones and generates an “evidence package” that includes images of the vehicle, its license plate, nearby signage and relevant geolocation data, Tamayo-Soto said.
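For illustration only, the “evidence package” described above could be modeled as a simple record that a human reviewer inspects before any citation decision. This is a hypothetical sketch; the field names and the readiness check are assumptions, not SenSen’s actual schema.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class EvidencePackage:
    """Hypothetical record for an AI-flagged parking case.

    Fields mirror the items the article lists: vehicle images,
    license plate, nearby signage and geolocation data.
    """
    plate: str
    zone_id: str
    latitude: float
    longitude: float
    vehicle_images: List[str] = field(default_factory=list)
    signage_images: List[str] = field(default_factory=list)

    def ready_for_review(self) -> bool:
        # A reviewer needs the plate plus photographic evidence of both
        # the vehicle and the posted signage before deciding to issue,
        # downgrade to a warning, or dismiss the citation.
        return bool(self.plate and self.vehicle_images and self.signage_images)
```

The point of such a structure is that the AI only assembles evidence; the issue/warn/dismiss decision stays with the officer, as the city describes below.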
Officers also receive training so they can responsibly use the platform and determine whether a case flagged by AI is a valid violation, at which point “the decision to issue [a ticket], change [it] to a warning or dismiss the citation remains entirely human,” she said.
Ultimately, “the officer is responsible for the outcome,” Tamayo-Soto said.
Officers may also be required to evaluate the scene in person and take additional pictures to validate citations, particularly since “AI cannot fully interpret unique circumstances such as temporary signage, unusual conditions or exceptions which fall outside typical patterns,” Tamayo-Soto explained.
The city’s cautious approach to AI-enabled parking enforcement could be paying off, she said. “To date, we have not received community concerns regarding the use of the technology. What the community does see is more consistent and equitable enforcement, which is exactly what the system was designed to support.”
“Human oversight prevents errors and ensures each citation is accurate and defensible,” Tamayo-Soto said, which is why “human review should be treated as a core feature rather than a safeguard for AI limitations. It is fundamental to legal, public and operational defensibility.”