NIST Seeks Feedback on Second Draft of AI Risk Management Framework
The National Institute of Standards and Technology has published the second draft of its artificial intelligence risk management framework.
The AI RMF is intended to serve as a voluntary guide for the development and use of the emerging technology. The document builds on the initial draft published in March 2022 and incorporates feedback provided since, NIST said.
The document provides guidance for managing the risks of AI, warning about the technology’s potential to exacerbate inequities if improperly implemented.
NIST stressed that the framework is not a checklist but rather a set of considerations that organizations should weigh as part of their enterprise risk management.
The guidance is best applied at the beginning of an AI system’s development to minimize adverse effects on individuals and communities, the agency said.
The framework’s target audience is AI actors that, as defined by the Organisation for Economic Co-operation and Development, “play an active role in the AI system lifecycle.”
The public may provide feedback on the latest draft until Sept. 29. Comments will be reviewed during a workshop to be held on Oct. 18 and 19. NIST plans to publish the framework’s version 1.0 in January 2023.
NIST is also seeking input on a draft companion AI RMF playbook, which emphasizes AI system trustworthiness. The playbook has the same deadlines as the framework.