NIST Identifies Main Types of Adversarial Machine Learning Threats
A new National Institute of Standards and Technology (NIST) publication identifies general types of cyberattacks – so-called “adversarial machine learning” threats – that can be used to attack or manipulate the behavior of AI/ML systems.
The four main types, according to NIST's announcement, are:
- Evasion attacks, which attempt to alter an input after the AI is deployed in order to change how the system responds – for example, adding markings that cause an autonomous vehicle to misread a stop sign.
- Poisoning attacks, which occur in the training phase through the introduction of corrupted data that the model then learns from.
- Privacy attacks, which attempt to extract and misuse sensitive information about the AI or the data on which it was trained.
- Abuse attacks, which involve the malicious insertion of incorrect information into a legitimate source, such as a webpage, that the AI then absorbs.
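To make the first category concrete, here is a minimal, self-contained sketch of an evasion attack in the spirit of the fast-gradient-sign method, applied to a toy linear classifier. Everything in it (the model, weights, and `evade` helper) is illustrative and not taken from the NIST publication; real evasion attacks target trained neural networks, but the principle – nudging an input in the direction that most changes the model's output – is the same.

```python
import numpy as np

def predict(w, b, x):
    """Return class 1 if the linear score w.x + b is positive, else 0."""
    return int(w @ x + b > 0)

def evade(w, b, x, eps):
    """Shift x by eps in the direction that lowers the score.

    For a linear model, the gradient of the score with respect to the
    input is simply w, so the sign-of-gradient step is -eps * sign(w).
    """
    return x - eps * np.sign(w)

w = np.array([1.0, 2.0])
b = -1.0
x = np.array([1.0, 1.0])          # original input, score = 2.0 -> class 1

x_adv = evade(w, b, x, eps=1.5)   # small, crafted perturbation

print(predict(w, b, x), predict(w, b, x_adv))  # prints: 1 0
```

The perturbed input differs from the original by at most 1.5 per feature, yet flips the classification – the core risk the report's "evasion" category describes.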
The publication “is intended to help AI developers and users get a handle on the types of attacks they might expect along with approaches to mitigate them – with the understanding that there is no silver bullet.”
01/26/2024