NIST to Release Guidance on AI Vulnerabilities by 2021
NIST said it intends to release guidance on AI trustworthiness that the international AI community will welcome, but that it needs more time to understand how to measure bias in data and algorithms.
“While we understand the urgency, we want to take time to make sure that we build the needed scientific foundations. Otherwise, developing standards too soon can hinder AI innovations that allow for evaluations and conformity assessment programs,” said Elham Tabassi, chief of staff at NIST's Information Technology Laboratory.
NIST is also taking steps to identify the technical requirements for measuring AI trustworthiness, which so far fall into the categories of accuracy, security, robustness, explainability, objectivity and reliability.
In August, NIST held a workshop on AI bias, engaging with scientists, engineers, psychologists and lawyers. Tabassi said the findings could be published by early February 2021 at the latest.
“My wish list, how I see this program succeeding, is that we build a resource center — I call it a metrologist’s guide to AI — that talks about everything that you need to consider,” Tabassi said.
She said government agencies should also know the trade-offs associated with, among other things, improving explainability at the expense of accuracy.
NIST's guidance is expected to let agencies decide what levels of trustworthiness and risk they are comfortable with for each technical requirement. “Every user or implementer can make the right choice for themselves,” Tabassi said.