Computing Community Consortium Blog
The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.
Archive for the ‘AI’ category
CSET Releases Reports to Help Organizations Implement Responsible AI
June 6th, 2023 / in AI, Announcements / by Maddy Hunter
With the rise of Artificial Intelligence (AI) and its increasingly ubiquitous role in society, the Biden administration, a multitude of government agencies, and nonprofits are turning their attention to the assurance and implementation of responsible AI practices. The Center for Security and Emerging Technology (CSET) is no exception and has contributed to the effort with three recent reports seeking to help organizations implement responsible AI. A Matrix for Selecting Responsible AI Frameworks, by Mina Narayanan and Christian Schoeberl. Synopsis: Process frameworks provide a blueprint for organizations implementing responsible artificial intelligence (AI). A new issue brief by CSET’s Mina Narayanan and Christian Schoeberl presents a matrix that organizes approximately 40 process […]
Biden-Harris Administration Takes New Steps to Advance Responsible Artificial Intelligence Research, Development, and Deployment
May 25th, 2023 / in AI, Announcements / by Maddy Hunter
The Biden-Harris Administration is continuing its recent efforts to advance the research, development, and deployment of responsible AI. With the rise of AI and its increasing capabilities, these initiatives are meant to protect American citizens’ rights and safety. Last week the CCC blog highlighted responsible AI efforts from the White House. Yesterday the White House announced three more initiatives, summarized below. An update to the National AI Research and Development Strategic Plan. This plan builds on plans issued in 2016 and 2019, and sets out key priorities and research goals to guide federal investments in AI research and development (R&D). It will focus federal investments in R&D to promote responsible […]
The Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety
May 16th, 2023 / in AI / by Maddy Hunter
The development and implementation of responsible artificial intelligence systems have come to the forefront of conversations and concerns in government, industry, and academia. Last week the Biden-Harris Administration introduced new actions to advance responsible AI. The actions include: New investments to power responsible American AI research and development (R&D). The National Science Foundation is announcing $140 million in funding to launch seven new National AI Research Institutes. This investment will bring the total number of Institutes to 25 across the country and extend the network of participating organizations into nearly every state. These Institutes catalyze collaborative efforts across institutions of higher education, federal agencies, industry, and others to pursue transformative AI […]
ACM Article Featuring CCC Council Member David Danks on AAAS Session
May 9th, 2023 / in AAAS, AI, Announcements / by Maddy Hunter
Computing Community Consortium (CCC) council member David Danks was recently featured on ACM News for his involvement in the CCC-sponsored scientific session at AAAS 2023, “Maintaining a Rich Breadth for Artificial Intelligence.” The session featured discussions highlighting the importance of incorporating a broad range of multidisciplinary research and expertise. Panelists recognized that neural networks and deep learning have driven progress in AI over the years, resulting in an imbalance and the dominance of these approaches in AI research. These silos can stunt the development of AI and lead to missed opportunities for growth in the field. Accompanied by panelists Melanie Mitchell and Bo Li, David Danks’ discussion topic: “Let a Thousand […]
CCC at AAAS Panel Recap: “Maintaining a Rich Breadth for Artificial Intelligence” Q&A
April 28th, 2023 / in AAAS, AI, CCC / by Catherine Gill
This blog post is a continuation of yesterday’s summary of the “Maintaining a Rich Breadth for Artificial Intelligence” panel at the 2023 AAAS meeting. The panel was moderated by Maria Gini (University of Minnesota) and comprised David Danks (University of California – San Diego), Bo Li (University of Illinois – Urbana-Champaign), and Melanie Mitchell (Santa Fe Institute). Following the panel, Dr. Gini opened the discussion up to the audience for Q&A. The first question came from a researcher in the audience: To what extent do you think homogeneity is an effect of cost in terms of the available hardware? Neural networks are cheap to create and […]
AAAS Panel Recap: Maintaining a Rich Breadth for Artificial Intelligence
April 27th, 2023 / in AAAS, AI, CCC / by Catherine Gill
The final CCC panel of AAAS 2023, “Maintaining a Rich Breadth for Artificial Intelligence”, was held on Sunday, March 5th, the last day of the conference. This panel was composed of David Danks (University of California – San Diego), Bo Li (University of Illinois – Urbana-Champaign), and Melanie Mitchell (Santa Fe Institute), and was moderated by Maria Gini (University of Minnesota). Dr. Bo Li began the panel by discussing the importance of conducting trustworthy machine learning (ML), and the ways in which we can ensure ML is safe, equitable, and inclusive. Machine learning is ubiquitous, Li said, and today is used in a significant number of everyday activities, such […]