Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer-range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for disseminating visioning concepts and for community discussion and debate about them.

Archive for the ‘AI’ category

CCC responds to National Institute of Justice on Safe, Secure, and Trustworthy Artificial Intelligence
June 10th, 2024 / in AI, CCC / by Petruce Jean-Charles
Imagine a situation where an AI system labels someone as high risk without taking into account important factors such as low income and family or parental responsibilities. Most defendants, particularly those from marginalized backgrounds, are unlikely to pose a serious threat to society, and past infractions, especially those that occurred years ago, should not be the sole basis for predicting future behavior. Judges need to be careful about relying solely on AI algorithms, which cannot fully grasp the complexities of human life and may overlook relevant details. Human judgment, which weighs all aspects of a person’s situation, should always guide […]

CCC’s Weekly Computing News: Using AI to Understand Dog Barks
June 7th, 2024 / in AI, CCC / by Petruce Jean-Charles
In another installment of CCC’s Weekly Computing News, we are highlighting a fascinating article from University of Michigan News. The article explores how artificial intelligence is being used to develop tools that can interpret what a dog’s bark means.

Using AI to decode dog vocalizations
Researchers at the University of Michigan, in collaboration with Mexico’s National Institute of Astrophysics, Optics and Electronics (INAOE), achieved a breakthrough in animal communication research by repurposing AI models originally trained for human speech analysis to understand dog barks. Led by CCC Council Member Rada Mihalcea, the team adapted the Wav2Vec2 machine-learning model to interpret a dataset of dog vocalizations collected by the INAOE. […]
CCC’s Weekly Computing News: Confidential Computing
May 31st, 2024 / in AI, CCC, Privacy / by Petruce Jean-Charles
This week we discovered an interesting article from ACM Queue, the bimonthly magazine of the Association for Computing Machinery (ACM). The article, written by researchers Jinnan Guo, Peter Pietzuch, Andrew Paverd, and Kapil Vaswani, explores how, as the demand for trustworthy AI systems grows, the confluence of Federated Learning (FL) and Confidential Computing emerges as a promising solution.

Trustworthy AI Using Confidential Federated Learning
The article emphasizes the crucial need to ensure the trustworthiness of AI systems, particularly in safeguarding personal information. It highlights two key methodologies, Federated Learning (FL) and Confidential Computing, as effective approaches to achieving this goal. While FL addresses privacy concerns by enabling collaborative model training […]
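The core federated learning idea the article builds on can be sketched in a few lines: each client trains on its own data locally, and only model parameters, never raw data, are sent to the server for aggregation. Below is a minimal toy illustration of federated averaging (FedAvg) on a linear model; it is our own sketch for intuition, not the confidential-computing protocol the ACM Queue authors describe, and all names (`local_update`, `federated_averaging`) are hypothetical.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=50):
    """One client's local training: gradient descent on squared error.
    Only the resulting weights leave the client, never (X, y)."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_averaging(clients, w, rounds=10):
    """Server loop: broadcast w, collect each client's locally trained
    weights, and average them (the FedAvg aggregation step)."""
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in clients]
        w = np.mean(updates, axis=0)
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three clients, each holding its own private dataset drawn from the
# same underlying linear model.
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=40)
    clients.append((X, y))

w = federated_averaging(clients, w=np.zeros(2))
```

Confidential Computing then hardens this picture further by running the aggregation (and potentially the local training) inside hardware-protected enclaves, so even the server operator cannot inspect the individual updates.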
CCC’s Weekly Computing News: Artificial Intelligence
May 24th, 2024 / in AI, CCC / by Petruce Jean-Charles
Welcome to the first edition of CCC’s Weekly Computing News series. This week we are analyzing two recent reports covering current trends in artificial intelligence research and public perceptions of how generative AI will impact the upcoming election.

AI and Elections
The Elon University Poll, working with the Imagining the Digital Future Center, recently conducted a nationwide survey on Americans’ worries about potential misuse of artificial intelligence (AI) in the 2024 presidential election. The results reveal widespread concern: a striking 78% of adults fear AI could be exploited to influence the election. People are particularly worried about AI manipulating social media and spreading fake content such as videos and […]
Addressing the Unforeseen Deleterious Impacts of Technology
May 13th, 2024 / in AI, Announcements, CCC / by Haley Griffin
Recent years have seen increased awareness of the potential negative impacts of computing technologies, yet these harms are often unforeseen when a technology is first deployed. The CCC Council formed a task force on Addressing the Unforeseen Deleterious Impacts of Technology (AUDIT) to investigate possible harmful consequences of computing technology, the extent to which these outcomes could have been mitigated or avoided, and who should be responsible for negative impacts. The task force, composed of Nadya Bliss, Kevin Butler, David Danks, Ufuk Topcu, and Matthew Turk, brings together diverse expertise across cybersecurity, artificial intelligence, human-computer interaction, data science, philosophy/ethics, computer vision, and autonomous systems. The group also spoke with multiple […]
Opportunity to Respond to U.S. Air Force RFI on Countering Bias in AI/ML Datasets
April 17th, 2024 / in AI, Announcements / by Petruce Jean-Charles
Earlier this month, the U.S. Air Force Chief Scientist’s inter-agency working group issued a Request for Information (RFI) on unintended Artificial Intelligence (AI) bias. The group is examining the critical issue of bias within AI and Machine Learning (ML) algorithms, with a primary emphasis on datasets. It is seeking to learn about these biases from academic institutions such as minority-serving institutions (MSIs) and historically black colleges and universities (HBCUs), alongside industry, the federal government, and other academic institutions. Despite the development of several tools aimed at identifying and addressing bias, such as the Department of Defense (DoD) Responsible AI toolkit, significant challenges persist in combating bias within AI […]