NIST Releases Open Source Tool for Assessing Risk of AI Models

Dioptra aims to help users test the effects of attacks on models.

NIST has released new guidance and an open source tool called Dioptra “to help AI developers evaluate and mitigate risks stemming from generative AI and dual-use foundation models.”

Dioptra is designed to help users test the effects of adversarial attacks on machine learning models and measure how well their AI software stands up to a variety of such attacks.
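To make the idea concrete, the minimal sketch below runs a generic one-step fast gradient sign method (FGSM) robustness check in PyTorch, the kind of adversarial evaluation Dioptra is meant to support. It is purely illustrative and does not use Dioptra's API; the model, data loader, and epsilon value are hypothetical placeholders.

```python
# Illustrative only: a generic one-step FGSM robustness check in PyTorch.
# This does not use Dioptra's API; the model, data loader, and epsilon
# below are hypothetical placeholders.
import torch
import torch.nn as nn

def fgsm_robustness(model, loader, epsilon=0.03, device="cpu"):
    """Return (clean accuracy, adversarial accuracy) under an FGSM perturbation."""
    model = model.to(device).eval()
    loss_fn = nn.CrossEntropyLoss()
    clean_correct = adv_correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x.requires_grad_(True)

        logits = model(x)
        clean_correct += (logits.argmax(dim=1) == y).sum().item()

        # Gradient of the loss w.r.t. the inputs gives the attack direction.
        loss = loss_fn(logits, y)
        model.zero_grad()
        loss.backward()

        # One-step FGSM: nudge each input in the sign of its gradient,
        # assuming inputs are scaled to [0, 1].
        x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

        with torch.no_grad():
            adv_correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)

    return clean_correct / total, adv_correct / total
```

The gap between the clean and adversarial accuracies gives a rough measure of how much a small, worst-case perturbation degrades the model.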

The software tool, which is available for free download, “could help the community, including government agencies and small to medium-sized businesses, conduct evaluations to assess AI developers’ claims about their systems’ performance,” the announcement says.

Along with this tool, NIST’s AI Safety Institute has released the initial public draft of Managing Misuse Risk for Dual-Use Foundation Models, which outlines best practices to help AI developers protect their systems from misuse.

NIST has also finalized three other AI-related guidance documents.

Learn more at NIST’s AI Safety Institute.

08/08/2024
