Last month, the National Telecommunications and Information Administration (NTIA) released a request for comment on Artificial Intelligence (AI) system accountability measures and policies. The request sought comments on potential and existing self-regulatory, regulatory, and other measures designed to provide reliable evidence to external stakeholders that AI systems are legal, effective, ethical, safe, and otherwise trustworthy.
Written by Nadya Bliss (Arizona State University), David Danks (University of California, San Diego), Maria Gini (University of Minnesota), Jamie Gorman (Arizona State University), William Gropp (University of Illinois), Madeline Hunter (Computing Community Consortium), Odest Chadwick Jenkins (University of Michigan), David Jensen (University of Massachusetts Amherst), Daniel Lopresti (Lehigh University), Bart Selman (Cornell University), Ufuk Topcu (University of Texas at Austin), Tammy Toscos (Parkview Health), and Pamela Wisniewski (Vanderbilt University), the Computing Community Consortium (CCC) submitted a response outlining methods of accountability, the different levels of audit, subjects of accountability, how to leverage existing resources and models, and lastly, the importance of transparency and its current barriers. You can read the full response here.
NTIA will rely on public input such as the CCC’s to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.