The Defense Advanced Research Projects Agency (DARPA) has launched a new program, Guaranteeing AI Robustness against Deception (GARD), to help Artificial Intelligence (AI) developers test their models' defenses against attacks. A team spanning academia and industry, including IBM, MITRE, the University of Chicago, and Google Research, collaborated to build a set of open source testing tools. The tools, which include a virtual evaluation testbed, a benchmark dataset, and "test dummies," help identify vulnerabilities in AI systems and make those systems more robust against an increasingly complex range of attacks.
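To give a concrete sense of what this kind of testing looks like in practice, the sketch below uses IBM's open source Adversarial Robustness Toolbox (ART), one of the libraries associated with the GARD toolset, to measure how a classifier's accuracy degrades under a simple evasion attack. The model and data here are hypothetical placeholders for illustration, not part of the GARD tooling itself.

```python
# Minimal sketch: probing a classifier with an adversarial evasion attack
# using the Adversarial Robustness Toolbox (ART). Model and data are
# placeholders; a real evaluation would use a trained model and test set.
import numpy as np
import torch.nn as nn

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# Placeholder model: a small classifier for 28x28 grayscale images.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# Wrap the model so ART can query and attack it.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Stand-in evaluation data (random, for illustration only).
x_test = np.random.rand(16, 1, 28, 28).astype(np.float32)
y_test = np.random.randint(0, 10, size=16)

# Craft adversarial examples with the Fast Gradient Sign Method, then
# compare accuracy on clean versus perturbed inputs.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y_test)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

A large gap between clean and adversarial accuracy is the kind of vulnerability these tools are designed to surface before a system is deployed.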
The growing field of Machine Learning (ML) opens up a wide range of opportunities for societal and technological progress, but wider adoption and greater capability also bring new vulnerabilities. GARD is designed to meet this growing need and arm developers with the tools they need to protect their systems against emerging threats.
"Currently, ML defenses tend to be highly specific and are effective only against particular attacks. GARD seeks to develop defenses capable of defending against broad categories of attacks. Furthermore, current evaluation paradigms of AI robustness often focus on simplistic measures that may not be relevant to security. To verify relevance to security and wide applicability, defenses generated under GARD will be measured in a novel testbed employing scenario-based evaluations." – Dr. Bruce Draper
All of the tools are available to developers on GitHub, and you can find out more on the GARD website.