Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


The Biden-⁠Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety

May 16th, 2023 / in AI / by Maddy Hunter

The development and implementation of responsible artificial intelligence systems have come to the forefront of conversations and concerns in government, industry, and academia. Last week the Biden-Harris Administration introduced new actions to advance responsible AI. The actions include:

  • New investments to power responsible American AI research and development (R&D). The National Science Foundation is announcing $140 million in funding to launch seven new National AI Research Institutes. This investment will bring the total number of Institutes to 25 across the country, and extend the network of organizations involved into nearly every state. These Institutes catalyze collaborative efforts across institutions of higher education, federal agencies, industry, and others to pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good. In addition to promoting responsible innovation, these Institutes bolster America’s AI R&D infrastructure and support the development of a diverse AI workforce. The new Institutes announced today will advance AI R&D to drive breakthroughs in critical areas, including climate, agriculture, energy, public health, education, and cybersecurity.

  • Public assessments of existing generative AI systems. The Administration is announcing an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems, consistent with responsible disclosure principles—on an evaluation platform developed by Scale AI—at the AI Village at DEFCON 31. This will allow these models to be evaluated thoroughly by thousands of community partners and AI experts to explore how the models align with the principles and practices outlined in the Biden-Harris Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models. Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation.

  • Policies to ensure the U.S. government is leading by example on mitigating AI risks and harnessing AI opportunities. The Office of Management and Budget (OMB) is announcing that it will be releasing draft policy guidance on the use of AI systems by the U.S. government for public comment. This guidance will establish specific policies for federal departments and agencies to follow in order to ensure their development, procurement, and use of AI systems centers on safeguarding the American people’s rights and safety. It will also empower agencies to responsibly leverage AI to advance their missions and strengthen their ability to equitably serve Americans—and serve as a model for state and local governments, businesses and others to follow in their own procurement and use of AI. OMB will release this draft guidance for public comment this summer, so that it will benefit from input from advocates, civil society, industry, and other stakeholders before it is finalized.

This initiative builds on the Administration’s recent, considerable efforts to mitigate the risks and consequences of AI development and to protect U.S. citizens’ rights and safety. Other government initiatives include the Blueprint for an AI Bill of Rights, various related executive actions, the AI Risk Management Framework, and a roadmap for standing up a National AI Research Resource. These efforts demonstrate the importance of incorporating responsible practices into the development of AI systems.

The Computing Community Consortium held a workshop last week on “Community-Driven Approaches to Research in Technology & Society,” working toward a goal similar to that of the recent government initiatives. The workshop focused on enabling conversations between computing researchers and those who are negatively impacted by AI systems, to better understand the consequences these systems are having on various underrepresented communities and to discuss potential solutions. The workshop stressed the importance of participatory research and outlined how community partners and researchers can effectively and ethically work together to conduct community-driven research. Be on the lookout for the upcoming report!

 

