NIST Releases Open Source Tool for Assessing Risk of AI Models
NIST has released new guidance and an open source tool called Dioptra “to help AI developers evaluate and mitigate risks stemming from generative AI and dual-use foundation models.”
Dioptra is aimed at helping users test the effects of adversarial attacks on machine learning models and determine how well their AI software stands up to a variety of such attacks.
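Dioptra itself is operated through a web interface, REST API, and experiment plugins rather than a single function call, but the kind of evaluation it automates can be illustrated with a short sketch. The PyTorch example below measures a model's accuracy on inputs perturbed by the fast gradient sign method (FGSM), a common adversarial attack; the model, data loader, and epsilon value are hypothetical stand-ins, not part of Dioptra's API.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb inputs x with the Fast Gradient Sign Method (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Nudge each pixel in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

def adversarial_accuracy(model, loader, epsilon=0.03):
    """Fraction of FGSM-perturbed inputs the model still classifies correctly."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total
```

Comparing this adversarial accuracy against clean-data accuracy gives a rough robustness score, which is the sort of evidence Dioptra-style evaluations are meant to produce.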
The software tool, which is available for free download, “could help the community, including government agencies and small to medium-sized businesses, conduct evaluations to assess AI developers’ claims about their systems’ performance,” the announcement says.
Along with this tool, NIST’s AI Safety Institute has released the initial public draft of Managing Misuse Risk for Dual-Use Foundation Models, which outlines best practices to help AI developers protect their systems from misuse.
Three other AI-related guidance documents have now been finalized:
- AI RMF Generative AI Profile can help organizations identify unique risks posed by generative AI.
- Secure Software Development Practices for Generative AI and Dual-Use Foundation Models augments the Secure Software Development Framework by providing practices and recommendations specific to AI model development.
- A Plan for Global Engagement on AI Standards is designed to drive the worldwide development and adoption of AI-related consensus standards, along with cooperation and information sharing.
Learn more at NIST’s AI Safety Institute.