Responsible AI use
New State Department Guidance Covers International Military AI Use
The Department of State has issued guidance on the responsible military use of artificial intelligence, outlining a set of best practices that the U.S. and other nations would agree to abide by in future development and implementation efforts. Titled “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” the guidance was unveiled Thursday at a summit in The Hague, the Netherlands.
The declaration is not legally binding, but it is meant to represent a consensus on the issue. According to a press email sent by the State Department, the guidance is intended to serve as a global basis for responsible AI practices.
One principle expressed in the guidance is that AI users in armed conflict should comply with international humanitarian law, maintain accountability, weigh risks and benefits, and mitigate unintended biases.
The State Department emphasized that humans should retain decision-making power over critical capabilities such as nuclear weapons. The declaration also calls on senior government officials to oversee the implementation of AI in weapon systems and other “high-consequence” military capabilities, Nextgov reported Thursday.
Other agencies have devised guidance for AI use in various contexts. The National Institute of Standards and Technology recently released its AI Risk Management Framework 1.0, a voluntary framework meant to improve the technology’s trustworthiness and promote its development while preserving civil rights and liberties.
Category: Digital Modernization
Tags: artificial intelligence, Department of State, digital modernization, Nextgov, technology guidance