NIST Working on Industry-Specific AI Guidance
Researchers at the National Institute of Standards and Technology are expected to provide recommendations on the proper use of artificial intelligence and machine learning algorithms, tailored to each sector that employs AI and ML in its technologies.
The recommendations would be included in future editions of the recently released Artificial Intelligence Risk Management Framework. Speaking at a George Washington University-hosted event, Elham Tabassi, the chief of staff in the Information Technology Laboratory at NIST, highlighted the importance of providing tailored AI risk management techniques, noting that each industry has different use cases and priorities, Nextgov reported.
The AI RMF 1.0, published in January, aims to guide organizations in developing low-risk AI systems. It catalogs the common types of risks found in AI and ML technology and provides system development recommendations applicable to all sectors, such as employing human judgment to ensure the trustworthiness of AI systems. NIST is accepting public comments on the document until Feb. 27. Tabassi said an updated version of the AI RMF will be released in spring 2023.
Like the NIST document, the Department of Defense’s revised autonomous weapon systems guidance requires human control over autonomous and semi-autonomous weapon platforms. According to the DOD, providing commanders and operators authority over AI systems will ensure compliance with the AI Ethical Principles and the Responsible AI Strategy and Implementation Pathway.
Kathleen Hicks, deputy secretary of defense and a 2023 Wash100 awardee, said the updates to the Autonomy in Weapon Systems directive were meant to address challenges brought by current technological advances.
Category: Digital Modernization
Tags: Artificial Intelligence Risk Management Framework digital modernization Elham Tabassi machine learning Nextgov NIST