NIST Researchers Recommend Prioritizing Persons, Beneficence, Justice in AI Training

Researchers from the National Institute of Standards and Technology have recommended prioritizing “respect for persons, beneficence and justice” in training artificial intelligence systems.

The recommendation comes as concerns arise over the impact of AI on human lives, including how it can make biased judgments on hiring decisions, loan applications and welfare benefits. 

In a paper published in the February issue of Computer magazine, the researchers suggested adopting the core ideas of the 1979 Belmont Report, which has long guided the U.S. government in developing policies on research involving human subjects, NIST said.

NIST social scientist Kristen Greene said her team identified the principles by examining the field of human subjects research. She added that the 1979 report’s core values can be applied to ensure transparency for the people involved in training the technology.

The institute has been working to ensure the responsible development and deployment of AI. Among its efforts is a push for federal agencies to adopt its AI Risk Management Framework. In November 2023, the framework received bipartisan support in the Senate when Sens. Jerry Moran, R-Kan., and Mark Warner, D-Va., introduced a bill that would require federal agencies to adopt it.
