Yesterday, December 5, CCC submitted a response to the Office of Management and Budget's (OMB) Request for Comments (RFC) on its draft memorandum, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. The following CCC Council Members and CCC staff authored the response: David Danks (University of California, San Diego), Haley Griffin (Computing Community Consortium), David Jensen (University of Massachusetts Amherst), Chandra Krintz (University of California Santa Barbara), Daniel Lopresti (Lehigh University), Rajmohan Rajaraman (Northeastern University), Matthew Turk (Toyota Technological Institute at Chicago), and Holly Yanco (University of Massachusetts Lowell).
OMB sought responses to a number of specific questions about its proposed memorandum, which would implement “new agency requirements in areas of AI governance, innovation, and risk management, and would direct agencies to adopt specific minimum risk management practices for uses of AI that impact the rights and safety of the public.”
In response to one question in the RFC, “How can OMB best advance responsible AI innovation?”, CCC responded:
- Bring in non-agency and non-vendor expertise, perhaps by including outside individuals, both technical experts and average citizens, on agency advisory boards. The work of such boards should be observable and transparent where possible, so that the public can give feedback as needed.
- Prioritize ensuring that the outcomes of using AI are trustworthy, fair, and reliable. While using good, clean data is important in developing AI systems, the results and impacts on individuals and communities need to be top of mind, especially since even good data can be misused.
- Implement monitoring mechanisms that recognize that even if many individuals have used an AI system over an extended period without reporting issues, that does not mean the system is safe or reliable. Rare but impactful edge cases are common across all fields of computing.
- Reevaluate an AI/ML system each time a software update is implemented or a dataset is changed. We recognize there are difficult questions about when a software or dataset update is significant enough to require re-evaluation. We do not propose a hard-and-fast rule, but we contend that any such criterion should err on the side of too much re-evaluation rather than too little.
- Encourage independent evaluation and testing of all AI/ML systems so as not to become dependent on vendors who may employ differing criteria.
- Actively plan and prepare for unforeseen deleterious impacts because the behavior of many AI systems remains unpredictably complex.
- Release metrics and reasons for determinations that an AI is neither “safety impacting” nor “rights impacting.” The public should have access to these measures, given the significant incentives for companies to claim their systems are neither safety- nor rights-impacting in order to minimize scrutiny.
- Use a more fluid characterization of an AI system, as the impact of such technology can change rapidly and needs to be reconsidered often.
- Encourage and support public scrutiny of the AI use case inventories that Agencies are releasing. There is always a possibility that Agencies will choose to showcase the work they know will please the public rather than a representative sample.
- Require that AI use cases be published with sufficient detail and, where appropriate, representative data so that outside independent experts can evaluate potential risks and provide constructive feedback for ongoing improvement of AI deployments.
Throughout the response, the authors also emphasize that non-generative AI solutions are sometimes more effective than generative ones, that there needs to be a transparent and robust structure for how the AI Governance Bodies at different Agencies will collaborate, and that there should be a clear redress mechanism for anyone who feels they have been negatively impacted by the use of AI. Read CCC’s full response here.