AI development and use
NIST Official Says AI Risk Assessment Must Tackle Fairness, Equitability of Technology’s Benefits
An official from the National Institute of Standards and Technology recently underscored the importance of carrying out risk assessments before an artificial intelligence system is deployed.
Elham Tabassi, the NIST Information Technology Laboratory’s chief of staff, said during a panel on March 6 that risk assessments must determine whether an AI system benefits everyone in a fair, responsible and equitable manner, though she noted that carrying out such an evaluation presents “major technical challenges.”
Tabassi also said that AI risk assessments, in order to validate the trustworthiness of a system, require metrics tailored to the specific use case in which the system will be deployed, since AI behaves differently depending on the data it processes, Nextgov reported.
During the same panel, the NIST official went on to say that the lack of formal U.S. legislation would not hamper the development of trustworthy AI, and that valid and technically sound standards would suffice as a foundation for future regulations.
In October last year, the White House released a document titled the Blueprint for an AI Bill of Rights, which aims to guide the development of AI technologies while protecting user privacy and security. Earlier this year, NIST released the first version of its AI Risk Management Framework and its accompanying playbook.
None of the documents, however, are legally binding.