DARPA Names Teams to Undertake Artificial Intelligence Assurance Projects

The Defense Advanced Research Projects Agency has selected four teams of private-sector and academic organizations to participate in Assured Neuro Symbolic Learning and Reasoning (ANSR), a three-phase program aimed at developing hybrid artificial intelligence algorithms and evidence-based techniques to support assurance decisions.

Assurance refers to the services that assess an AI system's processes to determine whether it is trustworthy, DARPA said.

Alvaro Velasquez, ANSR program manager at DARPA, defined trust as an expression of confidence that an autonomous system can perform an underspecified task. He explained that the program will explore how combining data-driven neural learning with symbolic reasoning can achieve trust in such systems.

A team comprising Rockwell Collins, SRI International and several universities is expected to develop neuro-symbolic AI algorithms and architectures.

Separately, the two companies will work with the University of California, Berkeley, and Vanderbilt University to craft an assurance framework for deriving correctness evidence.

Meanwhile, Systems and Technology Research will work on hybrid AI algorithm applications where assurance is necessary, and the Johns Hopkins University Applied Physics Laboratory will evaluate and demonstrate technologies developed by the other performers.
