NIST Publishes Guidance on Data-Poisoning Tactics Against AI Systems
The National Institute of Standards and Technology has released a document describing how malicious actors manipulate artificial intelligence systems’ behaviors.
The document, titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” is designed to help AI developers and users understand the types of cyberattacks their systems might face and offers suggestions on how to mitigate them, NIST said Thursday.
Among the topics covered are data trustworthiness, the effect of the training environment on how AI systems learn and the different kinds of attacks AI systems can face.
Researchers said malicious actors rely on four kinds of attacks: evasion, poisoning, privacy violation and abuse. In a poisoning attack, for example, an adversary inserts corrupted samples into a model’s training data to degrade or steer its behavior. The guidance describes how each attack affects AI systems.
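To make the poisoning category concrete, the sketch below simulates a simple label-flipping attack on a toy classifier. The dataset, model and `poison_labels` helper are illustrative assumptions for this article, not examples drawn from the NIST document; the effect shown, declining test accuracy as more training labels are corrupted, is the general mechanism the taxonomy describes.

```python
# A minimal, hypothetical sketch of a label-flipping data-poisoning attack.
# Uses scikit-learn purely for illustration; not from the NIST guidance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def poison_labels(labels, fraction, rng):
    """Flip the labels of a random `fraction` of training samples,
    simulating an attacker who corrupts part of the data pipeline."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # binary labels: 0 <-> 1
    return poisoned

# Train on progressively more poisoned data and measure the damage
# on a clean, held-out test set.
for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison_labels(y_train, fraction, rng))
    acc = model.score(X_test, y_test)
    print(f"poisoned fraction: {fraction:.0%}  test accuracy: {acc:.3f}")
```

Running the loop typically shows test accuracy dropping as the poisoned fraction grows, which is why the guidance stresses vetting the trustworthiness of training data sources.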
The guidance was developed with partners from academia, industry and government.
The document was published weeks after NIST solicited feedback for separate guidance on trustworthy AI. The institute issued a request for information in mid-December to learn more about AI red-teaming, the risk management of generative AI and synthetic content, and global technical standards for AI development.