Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer-range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


CCC responds to National Institute of Justice on Safe, Secure, and Trustworthy Artificial Intelligence

June 10th, 2024 / in AI, CCC / by Petruce Jean-Charles

Imagine an AI system that labels someone as high risk without taking into account important context, such as their low income or their family and parental responsibilities. Most defendants, especially those from marginalized backgrounds, are unlikely to pose a serious threat to society, and past infractions, especially ones that occurred years ago, should not be the sole basis for predicting a person’s future behavior. Judges need to be careful about relying solely on AI algorithms, which cannot fully capture the complexities of a human life or weigh all the relevant details. Human judgment, which considers every aspect of a person’s situation, should always guide legal decisions.

Last week, the CCC responded to the National Institute of Justice’s Request for Input on Section 7.1(b) of Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The RFI response was written by CCC Council members, members of the CRA’s Socially Responsible Computing working group, members of the computing community, and CCC staff. The authors are Nadya Bliss (Arizona State University, CCC Vice Chair), Kevin Butler (University of Florida, CCC), David Danks (University of California, San Diego, CCC), Stephanie Forrest (Arizona State University, CRA), Catherine Gill (CCC), Daniel Lopresti (Lehigh University, CCC Chair), Mary Lou Maher (CCC), Helena Mentis (University of Maryland, Baltimore County, CRA), Cris Moore (Santa Fe Institute), Shashi Shekhar (University of Minnesota, CRA), Amanda Stent (Colby College, CRA), and Matthew Turk (Toyota Technological Institute at Chicago, CCC).

“Due to the federal government’s obligation to ensure fair and impartial justice for all, AI should be transparent and augment, not replace, legal professionals in making judicial arguments and decisions,” says Shashi Shekhar, one of the authors of the response. 

In the quest for justice, transparency is the foundation on which fairness and trust are built. As society navigates the integration of artificial intelligence into legal proceedings, it faces a dual challenge: harnessing AI’s power to bolster efficiency without compromising the fundamental principles of accountability and transparency. At the same time, there is both an opportunity and an obligation to ensure that AI in the justice system operates with maximum transparency. Every citizen, every defendant, has the right to understand why a decision was made and what factors influenced it. Transparency is not optional; it is essential for fairness and accountability.

According to author Cris Moore, policymakers need computer scientists to cut through the hype and explain what the strengths and weaknesses of AI really are.

“We need to give words like ‘transparency’ clear, effective meanings, so that they’re not just vague aspirations,” Moore said.

But transparency alone isn’t enough. There is also a need to confront the biases inherent in AI systems, such as those against marginalized communities. Just as judges and attorneys meticulously examine evidence, they must scrutinize the algorithms underpinning AI-driven decisions. Careful scrutiny is key to mitigating bias and ensuring that AI supplements human judgment rather than replacing it.

Oversight also plays a critical role. Local audits, conducted regularly and collaboratively, serve as a vital safeguard against disparities and injustices. By illuminating how AI systems perform across different jurisdictions, such audits identify areas for improvement and help uphold accuracy and fairness.

Another author, Stephanie Forrest, believes the CCC’s response illuminates important points to consider when using AI in the justice system.

“The success of a democracy rests squarely on the fairness of its justice system. As AI and related technologies are integrated into law enforcement and the courts, it is imperative that we preserve the basic principles of transparency and human decision-making in our justice system,” Forrest said.

Embracing transparent AI systems isn’t just about efficiency; it’s about reaffirming a commitment to a society where every individual is treated with dignity and respect. Fostering openness and accountability shapes a more just and equitable society, one where all rights are upheld and trust in the rule of law remains unshakable.

“The criminal justice system literally transforms people’s lives, and so it is critical that we have trustworthy and responsible AI to support the decisions made there,” says David Danks, another author of the response.

Read the full CCC response here.
