Removing bias from AI
NIST Report: AI Developers Should Account for More Factors in Eliminating Bias
Artificial intelligence developers should look at more factors when rooting out potential bias in their algorithms, according to a report from the National Institute of Standards and Technology.
Reva Schwartz, the principal investigator for AI bias at NIST and one of the report’s authors, said developers should focus not only on bias in machine learning processes and the data used to train algorithms. Biases can also arise from broader societal factors that influence the development process, Schwartz said, according to the NIST website.
“AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives,” Schwartz said.
For instance, AI can drive decisions on whether a person is admitted to a school, approved for a bank loan or accepted as a rental applicant, NIST said.
The expanded scope of bias is a key element of the revised version of NIST’s Special Publication 1270, titled “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.”
The document includes guidance from the NIST AI Risk Management Framework, which addresses trustworthiness considerations in the design, development, use and evaluation of AI systems.
NIST added that the new version of SP 1270 accounts for “human and systemic biases.” Human biases, the agency said, relate to how people use data to fill in missing information about a person.
Systemic biases can stem from long-standing practices that disadvantage certain social groups or lead to racial discrimination, NIST added.