Current AI Systems Not Trustworthy Enough, Marine Commandant Says
Current unmanned and artificial intelligence technologies are not trustworthy enough to be more widely implemented in the Marine Corps' arsenal, according to the service's top official.
Marine Corps Commandant David Berger said he plans to someday use AI to create better threat identification systems, self-updating logistics systems and unmanned supply and medevac vehicles, USNI News reported Tuesday.
“In the same way that a squad leader has to trust his or her Marines, the squad leader’s going to have to learn to trust the machine. Trust. In some instances today, I would offer we don’t trust the machine,” Berger said at the National Defense Industrial Association’s annual expeditionary warfare conference.
Berger said AI capabilities, including automatic sensor-to-shooter targeting, are already operational. The gap lies in the lack of trustworthy data and processes, he said.
The more humans intervene in the AI training process, the more opportunities there are for mistakes, Berger said.
In February 2020, the Pentagon's Defense Innovation Board laid out its principles for AI use in war, emphasizing responsibility, equity and governability.
The Defense Advanced Research Projects Agency has also sought more resources to improve the reliability and trustworthiness of AI-based systems.
AI development officials at the agency recently announced plans to train and educate personnel on ethical AI practices as the military begins operationalizing the technology's applications across the services.
Alka Patel, an official at DOD's Joint Artificial Intelligence Center and a past Potomac Officers Club event speaker, said the department will work with allies and industry partners to ensure alignment on the importance of trustworthy AI development.