DARPA looking for tools to test AI
The agency wants ways to assess the variety of threats to artificial intelligence-enabled systems deployed across DOD.
The Defense Advanced Research Projects Agency is looking for tools that can identify vulnerabilities in AI-enabled systems.
In a new sources sought notice, DARPA said it wants industry, academia and government labs to share information about techniques and tools that can assess weaknesses in artificial intelligence-based systems.
DARPA isn’t just interested in AI systems but also “vulnerabilities presented by the entire AI-enabled systems development and deployment pipeline,” according to the notice.
The agency identifies four key areas of interest:
- AI red teaming frameworks
- Cyber methods for affecting AI systems
- Electronic warfare techniques
- Physical manufacturing of adversarial effects
Physical manufacturing of adversarial effects refers to materials and printing techniques that produce objects that look normal to people but are designed to confuse artificial intelligence systems.
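Such objects typically start as digital adversarial perturbations. Below is a minimal sketch of the fast gradient sign method, one well-known way to compute a perturbation that barely changes an input yet can change a model's output; the toy classifier and random input here are illustrative assumptions, not anything from DARPA's notice.

```python
# Illustrative sketch: fast gradient sign method (FGSM) against a toy model.
# The model, input, and epsilon are placeholder assumptions for demonstration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
model.eval()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in sensor input
label = torch.tensor([3])                             # assumed true class

loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel in the direction that increases the loss. The change is
# small enough to look unremarkable to a person but can flip the prediction.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

Physical adversarial objects apply the same idea, but with the perturbation printed onto a patch, sticker or textured surface that survives real-world viewing conditions.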
DARPA is also interested in information on extracting model data, manipulating AI models and sensor inputs, and creating objects that deceive AI systems.
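Model extraction is the one of these that needs no physical access at all: an attacker with only query access can train a surrogate that mimics the deployed model. The sketch below shows the basic loop under assumed conditions; the victim model, synthetic data and query budget are all placeholders, not details from the notice.

```python
# Illustrative sketch of model extraction via black-box queries.
# Victim model, dataset, and query budget are assumptions for demonstration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X, y)  # the deployed "black box"

# The attacker queries the victim on inputs it controls and keeps the labels.
queries = np.random.RandomState(1).uniform(X.min(), X.max(), size=(1000, 10))
stolen_labels = victim.predict(queries)

# The surrogate is trained purely on those query/response pairs.
surrogate = DecisionTreeClassifier(random_state=0).fit(queries, stolen_labels)
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of inputs")
```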
The agency has been working on these issues since at least 2019, when it launched the Guaranteeing AI Robustness Against Deception (GARD) program to develop methods for defending AI models against such threats.
DARPA needs to evaluate systems the Defense Department has deployed in tactical, operational, and strategic operating environments.
Responses to the request for information are due Feb. 28.