NIST Nearing Completion of AI Trustworthiness Platform Plan
The platform, which will be called the Trustworthy and Responsible AI Resource Center, will provide support documents and guidance on the technology's characteristics and use cases. It will also contain data from other agencies and industry members about AI trustworthiness.
According to an NIST spokesperson, the center will also include a knowledge base of terms associated with responsible and trustworthy AI. The spokesperson added that NIST is soliciting third-party contributions of additional guidance, including categories and subcategories for specific AI applications.
The online repository complements the AI Risk Management Framework, FedScoop reported.
Elham Tabassi, chief of staff of NIST's Information Technology Laboratory, told members of Congress on Thursday that the AI Risk Management Framework will map and measure the technology's effects on organizations and on society. Tabassi said the framework is on track for release in January 2023.
According to the NIST official, the second draft of the AI RMF, which was made available for public comment in August, came with recommended actions to ensure that AI systems will be trustworthy throughout their life cycles.
The National AI Initiative Office said AI is trustworthy if it can mitigate bias and exhibit key characteristics, including reliability, interpretability and resilience to attacks. NAII believes trustworthy AI requires a multifaceted approach that includes research, the development of metrics and technical standards, and governance.
Category: Federal Civilian
Tags: AI Risk Management Framework, artificial intelligence, Elham Tabassi, federal civilian, FedScoop, National AI Initiative Office, National Institute of Standards and Technology, Trustworthy and Responsible AI Resource Center