Future Trends

DARPA Names Teams to Undertake Artificial Intelligence Assurance Projects

The Defense Advanced Research Projects Agency has selected four teams from private-sector and academic organizations to participate in Assured Neuro Symbolic Learning and Reasoning (ANSR), a three-phase program aimed at developing hybrid artificial intelligence algorithms and evidence-based techniques to support assurance decisions.

Assurance refers to the services that assess an AI system and its processes to determine whether it is trustworthy, DARPA said.

Alvaro Velasquez, ANSR program manager at DARPA, defined trust as an expression of confidence that an autonomous system can perform an underspecified task. He explained that the program will explore how combining data-driven neural learning with symbolic reasoning can establish trust in such systems.

A team comprising Rockwell Collins, SRI International and various universities is expected to develop neuro-symbolic AI algorithms and architectures.

Separately, the two companies will work with the University of California, Berkeley, and Vanderbilt University to craft an assurance framework for deriving correctness evidence.

Meanwhile, Systems and Technology Research will work on hybrid AI algorithm applications where assurance is necessary, and the Johns Hopkins University Applied Physics Laboratory will evaluate and demonstrate the technologies developed by the other performers.
