Promoting AI security
DARPA’s GARD Program to Help AI Developers Protect Their Algorithms
The Defense Advanced Research Projects Agency has released a new “toolbox” intended to help artificial intelligence developers improve the security of their algorithms. Developed under the “Guaranteeing AI Robustness Against Deception,” or GARD, program, the tools have been made available to any researcher in the hope of keeping adversary states from accessing databases and code that could be used in weapons-making, FedScoop reported Thursday.
The GARD program includes several evaluation tools for developers, including a platform called Armory that tests code against a range of known attacks. Hackers who can alter training data, adjust the weights of a neural network or otherwise change an algorithm without being detected could wreak havoc on systems that rely on AI; the tools are meant to thwart such attacks.
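To make the kind of evaluation described above concrete, here is a minimal, self-contained sketch (not Armory's actual API, and using a deliberately toy linear model) of testing a classifier against the fast gradient sign method, a well-known evasion attack of the sort such test beds exercise:

```python
import random

random.seed(0)

# Toy binary classification data: two 2-D Gaussian clusters.
data = [((random.gauss(-1, 0.5), random.gauss(-1, 0.5)), 0) for _ in range(100)] + \
       [((random.gauss(1, 0.5), random.gauss(1, 0.5)), 1) for _ in range(100)]

# A fixed linear model w·x + b (assume it was trained elsewhere).
w, b = (1.0, 1.0), 0.0

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def fgsm(x, y, eps):
    # Perturb the input by eps in the gradient-sign direction that pushes
    # the score across the decision boundary; sign(w) = (1, 1) here.
    s = -1.0 if y == 1 else 1.0
    return (x[0] + eps * s, x[1] + eps * s)

clean_acc = sum(predict(x) == y for x, y in data) / len(data)
adv_acc = sum(predict(fgsm(x, y, 1.5)) == y for x, y in data) / len(data)
print(f"clean accuracy: {clean_acc:.2f}, accuracy under attack: {adv_acc:.2f}")
```

A harness in this spirit reports how far accuracy drops under each known attack, which is the basic robustness measurement an evaluation platform automates at scale.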
In a statement, Bruce Draper, GARD’s program manager, said that with the new toolbox, DARPA is “taking a page from cryptography and are striving to create a community to facilitate the open exchange of ideas, tools and technologies that can help researchers test and evaluate their machine learning defenses.” He added that the agency’s goal is to raise the bar on existing evaluation efforts, bringing more sophistication and maturation to the field.
Draper said the GARD program includes tools to test against data poisoning. One of them, the Adversarial Robustness Toolbox, started as an academic project but has since been picked up by DARPA for further research.
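Data poisoning, mentioned above, means corrupting a model's training set rather than its inputs at inference time. The following illustrative sketch (a toy nearest-centroid classifier, not code from the Adversarial Robustness Toolbox) shows how injecting a handful of mislabeled points into the training data can wreck a model that looks fine when trained on clean data:

```python
import random

random.seed(1)

# Clean 1-D training and test sets: class 0 centered at -1, class 1 at +1.
train = [(random.gauss(-1, 0.4), 0) for _ in range(50)] + \
        [(random.gauss(1, 0.4), 1) for _ in range(50)]
test = [(random.gauss(-1, 0.4), 0) for _ in range(50)] + \
       [(random.gauss(1, 0.4), 1) for _ in range(50)]

def train_centroids(data):
    # "Training" here is just computing the mean of each class.
    c0 = [x for x, y in data if y == 0]
    c1 = [x for x, y in data if y == 1]
    return sum(c0) / len(c0), sum(c1) / len(c1)

def accuracy(c0, c1, data):
    # Predict the class whose centroid is nearer.
    return sum((abs(x - c1) < abs(x - c0)) == (y == 1) for x, y in data) / len(data)

clean_acc = accuracy(*train_centroids(train), test)

# Poisoning: inject 25 far-away points mislabeled as class 1,
# dragging that class's centroid to the wrong side of the boundary.
poisoned = train + [(-8.0, 1)] * 25
poisoned_acc = accuracy(*train_centroids(poisoned), test)
print(f"clean: {clean_acc:.2f}, after poisoning: {poisoned_acc:.2f}")
```

Testing tools in this space work by simulating poisoning like this against a candidate model and measuring how much the corrupted training data degrades accuracy on clean test inputs.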
The GARD program draws on researchers from Two Six Technologies, IBM, MITRE, the University of Chicago and Google Research, whose combined efforts have made fresh resources available to the broader research community via a public repository, Draper added.
Category: Digital Modernization
Tags: Armory artificial intelligence Defense Advanced Research Projects Agency digital modernization FedScoop GARD program