AI framework aims to help criminal justice agencies adopt the tech responsibly

Concerns about using AI in criminal justice operations are valid, according to a Council on Criminal Justice report, and agency leaders may just need a helping hand to unlock the technology’s potential.
Artificial intelligence is significantly changing government operations. While the technology can offer numerous benefits for agency efficiency and service delivery, its impacts are often unclear and uncertain, which is why helping agencies establish AI basics for assessing, procuring and adopting the tech is critical, one expert says.
Indeed, while AI has the potential to help courts and other legal professionals sift through or draft documents, it can also generate false information, quotes or cases. Such was the case in Illinois after a judge from Williams County last year realized a legal brief he was reviewing referenced a case that never existed.
Growing exploration of AI’s place in the criminal justice system has pushed several states to consider laws and policies aimed at regulating AI’s safe and responsible use in legal, law enforcement and court systems.
But there remains “a hunger for reliable information and aid to guide decision-making and implementation,” particularly as the stakes of using AI in criminal justice operations “are very high … everybody wants to do a good job,” said Jesse Rothman, director of the Council on Criminal Justice’s task force on AI.
That’s why the Council on Criminal Justice has released a framework to help criminal justice agencies and professionals evaluate purpose-built AI systems before adopting and deploying them into their workflows.
The user-decision framework aims to actualize the benefits of AI for criminal justice agencies while also helping them mitigate the technology’s risks, Rothman said. The framework includes five phases for agencies to refer to as they assess and implement AI systems.
Criminal justice leaders should first define a specific problem or opportunity for improvement within their agency that they want to resolve, according to the first phase. From there, officials should determine how AI could address an issue — such as reducing document backlogs — better than non-AI solutions, according to CCJ.
“Technology should not be a solution looking for a problem,” the report states.
During this phase, criminal justice agencies should also conduct an internal assessment to determine if their organization has the capacity to adopt an AI-based system. For instance, agency leaders should consider if data governance policies exist or need to be established, and whether they need to bring in additional resources, such as technical expertise, to deploy AI, according to the report.
Next, the second phase suggests that criminal justice agencies assess the risks and opportunities of an AI system. The framework, for example, prompts users to consider the risk level of a particular AI tool, including how it could impact a resident’s procedural or legal rights, create errors in legal proceedings and documents, or negatively influence decisions like arrests, sentencing and parole determinations.
To help evaluate AI systems, the report also suggests that criminal justice agencies establish a review team with a diverse array of staff, such as legal experts, IT employees and operational managers, to develop a comprehensive assessment.
Collaboration among staff, from law enforcement to technologists, is vital to creating an agencywide understanding of an AI system, Rothman said. For example, some staff may support the use of AI-enabled surveillance tools in public spaces, while others could raise the security and privacy risks of such tools. This communication and idea sharing is critical for shaping agencies’ decisions on which AI use cases are acceptable and which are prohibited in their jurisdiction, Rothman said.
A diversified team approach to AI can also strengthen a criminal justice agency’s approach to procuring the tech, according to the report. The third phase of the framework underscores how “procurement is a key safety point for agencies to make sure they really understand what they’re getting into” because the procurement process is where they “have leverage” to set standards and requirements for vendors’ AI solutions, Rothman said.
Indeed, “the procurement phase establishes the contractual foundation that protects your agency, ensures accountability, and maintains compliance throughout the system’s lifecycle,” the report states.
Criminal justice agencies should, for example, consider including contractual provisions that require vendors to provide documentation of their AI system testing and validation, comply with accuracy and reliability standards, adhere to relevant privacy laws and regulations, and accept liability for system errors or failures, among other factors.
The fourth and fifth phases of the framework offer guidance for responsible implementation and monitoring of AI systems once agencies are ready to leverage functional AI tools. In the former phase, criminal justice leaders should “pay careful attention to how the system will function in your environment and how you’ll ensure it performs as intended,” the report states.
That means agency leaders can first deploy the AI system under a pilot program to test the tech in a realistic environment. Criminal justice users can, for example, more closely evaluate an AI system’s usability in areas like its interface design and how that impacts staff’s ability to fully leverage the tech.
The implementation phase also offers leaders the chance to establish AI training for staff, which can help agency officials better understand the system's functionality, limitations and other characteristics, according to the report.
The fifth and final phase of the AI framework suggests that criminal justice agencies establish ongoing monitoring and periodic reassessments to ensure the AI systems they adopt continue to function properly and accurately, the report states. The framework recommends that agencies evaluate high-risk systems annually, while lower-risk AI tools can be reassessed as contract renewals occur.
However, more comprehensive assessments should be conducted if an AI system undergoes any major changes or updates, is applied to a use case beyond its original purpose, creates performance issues or spawns other significant challenges, according to the report.
The complexity of AI systems, and of the steps needed to ensure criminal justice agencies leverage them properly, can create doubt among potential users, but such hesitation carries “a risk of the perfect being the enemy of the good,” Rothman said.
Resources like CCJ’s assessment framework can help remove perceived barriers to exploring and implementing AI solutions in criminal justice and create “a really good basis for ongoing engagement,” he said.
