Last Friday, the CCC responded to a Request for Information (RFI) to assist NIST with executing the tasks assigned to it in the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued on October 30, 2023.
The CCC’s response raised critical points about the challenges associated with the widespread adoption of generative AI. The rapid advancement of AI tools poses risks, particularly in content authentication, where reliable detection methods are still lacking. The complexity of the problem demands a nuanced approach, one that acknowledges the limitations of current detection tools and emphasizes the need for flexible, adaptive solutions, especially as both AI detection tools and AI generation tools continue to improve.
The CCC advocated for cautious evaluation of the AI technologies suggested to NIST, such as watermarking programs. Despite ongoing advancements, content authentication efforts continue to be hindered by the possibility of removing or forging metadata and watermarks.
The CCC also emphasized the importance of accurate terminology in discussing AI concepts. Terms like “hallucination” can be misleading; AI technologies do not hallucinate as humans do. Rather, they produce inaccurate answers and sometimes invent false information in their efforts to best address a prompt. The CCC suggested a collaborative effort among stakeholders to establish precise terminology for describing the behavior of AI applications.
To effectively address the tasks assigned to NIST in the Executive Order, the CCC also recommended establishing a standardized feedback mechanism for reporting AI incidents and leveraging resources like the US Artificial Intelligence Safety Institute Consortium for testing and evaluating AI applications.
Read the CCC’s full RFI response here.