Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


Assessing Security Considerations for Artificial Intelligence

April 23rd, 2026 / in AI, CCC, resources / by Marla Mackoul

As artificial intelligence (AI) has become increasingly ubiquitous across domains, the need for it to be reliably secure has only grown. Yet in many ways, ensuring the security of AI agents is fundamentally different from the cybersecurity challenges of the past.

To address this growing challenge, the U.S. Center for AI Standards and Innovation (CAISI), housed within the National Institute of Standards and Technology (NIST) at the Department of Commerce, released a Request for Information (RFI) on practices and methodologies for measuring and improving the secure development and deployment of AI agent systems. The Computing Community Consortium (CCC) and Computing Research Association (CRA) recently submitted a response to this RFI, providing key insights and recommendations for the future of AI security.

Identifying New Roadblocks

AI agent systems pose unique challenges when it comes to security threats and vulnerabilities. One of the most significant difficulties is precisely what can make them so useful: their ability to adapt and learn. This quality means they are inherently more unpredictable, which can make debugging more difficult and can allow problematic actions caused by code errors to escalate quickly.

Some of the other questions current security models are grappling with include:

  • How to delegate credentials to an AI agent
  • How to identify risk levels around highly context-dependent and unpredictable AI agents and their users
  • How to determine accountability for security failures
  • How to avoid cascade effects when one compromised agent taints the decision-making of downstream agents
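The first question above, credential delegation, can be made concrete with a common mitigation pattern: issuing an agent a short-lived, narrowly scoped token rather than a human user's full credentials. The sketch below is a minimal, hypothetical illustration in Python using only the standard library; the signing key, scope names, and agent identifier are invented for this example and are not drawn from the RFI or the response.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # hypothetical key; a real system would use managed secrets


def mint_agent_token(agent_id, scopes, ttl_seconds):
    """Issue a short-lived, narrowly scoped credential to an AI agent."""
    claims = {"agent": agent_id, "scopes": scopes, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig


def authorize(token, required_scope):
    """Verify signature, expiry, and scope before permitting an agent action."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if time.time() > claims["exp"]:
        return False  # expired: limits the blast radius of a compromised agent
    return required_scope in claims["scopes"]


# A coding agent gets read-only repository access for five minutes.
token = mint_agent_token("code-review-agent", ["repo:read"], ttl_seconds=300)
print(authorize(token, "repo:read"))   # in-scope action is allowed
print(authorize(token, "repo:write"))  # out-of-scope action is denied
```

The design choice here, bounding both what the agent may do (scopes) and for how long (expiry), is one way to constrain an unpredictable agent without trying to anticipate every action it might take.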

The trend toward replacing human workers with AI programs in activities like coding poses its own particular security risks. Humans who understand the systems underlying AI agents, how they function, and how they generate their output are essential to tackling emerging threats. To that end, the RFI response recommends that agencies like NIST set expectations around terms like “human-in-the-loop” practices, building on the NIST AI Risk Management Framework and Playbook to provide implementation guidance.

Fostering Security in the Age of AI

Increasing the security of AI agents will also require fundamentally reevaluating typical approaches to cybersecurity. Some first approaches suggested in the RFI response include:

  • Examining other periods of major leaps in abstraction in computing for applicable lessons
  • Reevaluating the concepts of fuzz testing and input sanitization failures in the dynamic context of autonomous agents
  • Looking closely at the extent to which AI agents are capable of policing other AI agents
  • Assessing the particular risks of multi-agent systems, especially when the different agents have differing goals
  • Investing in research on effective guardrails that could be embedded within AI frameworks or could withstand attacks from such systems
  • Targeting research efforts on determining whether a human or another AI agent is interacting with an AI agent
  • Forging stronger connections between the computing research community and industry cybersecurity practitioners

Historically, much of cybersecurity work has taken place after an application is designed or deployed. The RFI response recommends a fundamental shift away from this practice, encouraging NIST to create incentives for industry to treat security as a design principle in AI systems rather than an afterthought.

Read the Full Response

For the full scope of the CCC and CRA findings and analysis on advancing the security of AI, access the full response below. You can also find more CCC responses here.

Read the Full RFI Response Here

Please note any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the authors’ affiliations, or of the National Science Foundation, which funds CCC.

