AI safety cooperation
New NIST Body Seeking International AI Safety Collaboration, Director Says
Developing international alliances on securing artificial intelligence use is among the goals that the new U.S. AI Safety Institute is pursuing, according to Elizabeth Kelly, the institute’s inaugural director.
The body, established under the National Institute of Standards and Technology in February, is working with countries that already have their own AI safety agencies, such as Japan and the United Kingdom, as well as with allies planning counterparts, Kelly said.
She outlined two forms of collaboration the institute is seeking among U.S. allies: the first focuses on developing interoperable guidelines that level the playing field for private-sector AI, while the second calls for joint efforts to advance the science underlying AI technology, Kelly added.
The USAISI director discussed the institute’s strategies during the launch of nonprofit Mitre’s new AI Assurance and Discovery Lab on Monday, Nextgov/FCW reported.
During the event’s panel discussion, Kelly also identified the institute’s three main tasks: creating AI test beds, developing protocols to pinpoint problems and offer solutions, and developing guidance for identifying AI-generated content, which is to be operationalized across the federal government.
The institute hosts a consortium of over 200 AI developers and researchers from government, industry and academia, organized by the Department of Commerce to help USAISI develop standards for trustworthy AI technologies.
Category: Future Trends