Governable AI Key to Military’s JADC2 Goals, Northrop Executives Say
The Department of Defense must ensure that its artificial intelligence systems are governable before the technology is applied to Joint All-Domain Command and Control, according to an AI expert at Northrop Grumman.
Governability means that operators can readily determine whether an AI system is performing as intended, Breaking Defense reported.
Amanda Muller, consulting AI systems engineer and technical fellow at Northrop, added that governability gives the human operator a chance to deactivate or disengage the misbehaving system.
“At that point, the human operator can either take over or make adjustments to the inputs, to the algorithm, or whatever needs to be done,” Muller said.
Governability is one of the five principles outlined in the Responsible AI Guidelines in Practice, which the Defense Innovation Unit published in November. The other four are responsibility, equitability, traceability and reliability.
Vern Boyle, vice president of advanced processing solutions for Northrop’s networked information solutions division, said that the military’s JADC2 concept will heavily rely on AI’s ability to determine how best to move information across different platforms and nodes.
He said the task of maintaining a communications link in contested environments cannot be left to human operators alone.
The DOD is also facing other technical gaps in its AI and machine learning ambitions, Boyle said, noting that many of the platforms that the military needs to support JADC2 are still running legacy technologies.
Boyle highlighted other prerequisites to JADC2 such as having a standard way of sharing information between different platforms.
Category: Digital Modernization
Tags: Amanda Muller artificial intelligence Breaking Defense Department of Defense digital modernization JADC2 Northrop Grumman Responsible AI Vern Boyle