Researchers: New NIST Framework Should Account for AI-Triggered Global Catastrophes
The National Institute of Standards and Technology should account for the possibility of a global catastrophe caused by artificial intelligence, according to a research team from the University of California, Berkeley.
The UC Berkeley team issued the warning in an evidence paper responding to NIST’s draft framework for AI adoption across the federal government, FedScoop reported.
In June, NIST published a request for evidence to inform a planned special publication on managing bias in AI development.
NIST said people will trust AI only if it is characterized by accuracy, explainability and interpretability, privacy, reliability, robustness, safety and resilience.
The UC Berkeley researchers warned that “increasingly advanced and general AI models” could pose catastrophic risks if they suffer “robustness failures” in applications such as critical infrastructure.
NIST defines general AI as systems that can perform general or “human” intelligent action and can learn on their own based on their operating environment.
The academics highlighted OpenAI’s Generative Pre-trained Transformer 3, a general AI system that uses deep learning to mimic human writing and speech and to display apparent creativity.
The research team said policymakers should also account for AI risks given the technology’s ability to scale and its adoption across a range of critical applications.
They also cited arguments that introducing AI into nuclear forces could undermine their stability and increase the probability of nuclear war.
Beyond AI’s global catastrophic risks, the researchers also expressed concern about the technology’s impact on human rights and wellbeing and on democracy and security.