The free tool, called Dioptra, is designed to help AI developers understand some of the unique data risks in AI models and help them “mitigate those risks while supporting innovation,” the NIST director says.
Nearly a year after the Biden administration issued its executive order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI), the National Institute of Standards and Technology has released a new open-source tool to help test the security of AI and machine learning models.
WHY IT MATTERS
The new platform, known as Dioptra, addresses a mandate in the White House executive order calling for NIST to play an active role in helping to test algorithms.
“One of the vulnerabilities of an AI system is the model at its core,” the NIST researchers explain. “By exposing the model to large amounts of training data, it learns to make decisions. But if adversaries poison the training data with inaccuracies – for example, by introducing data that could cause the model to misidentify stop signs as speed limit signs – the model could make erroneous, potentially catastrophic decisions.”
The goal is to help healthcare providers and other organizations better understand AI-based software and assess how it handles “a wide variety of attacks,” according to NIST.
The open-source tool – available for free download – could help healthcare providers, other enterprises, and government agencies evaluate and validate AI developers’ claims about their models’ performance.
“Dioptra does this by allowing the user to determine what types of attacks would make the model perform less effectively, and by quantifying the performance reduction so the user can understand how often and under what circumstances the system would fail.”
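Dioptra’s own interface is not shown in the article, but the core idea it describes – poisoning training data and measuring how much accuracy drops – can be illustrated with a toy sketch. This is purely an assumption-laden illustration (a tiny two-class nearest-centroid classifier with invented data, not Dioptra code): it injects mislabeled points into the training set, mirroring the stop-sign example above, and reports the resulting performance reduction.

```python
import random

random.seed(0)

def make_data(n):
    """Two Gaussian clusters: class 0 near (0, 0), class 1 near (4, 4)."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        c = 4.0 * label
        data.append(((c + random.gauss(0, 1), c + random.gauss(0, 1)), label))
    return data

def train_centroids(data):
    """'Train' a nearest-centroid classifier by averaging points per class."""
    sums = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}
    for (x, y), label in data:
        sums[label][0] += x
        sums[label][1] += y
        sums[label][2] += 1
    return {c: (sx / n, sy / n) for c, (sx, sy, n) in sums.items()}

def accuracy(centroids, data):
    """Classify each point by its nearest centroid; return the hit rate."""
    correct = 0
    for (x, y), label in data:
        pred = min(centroids, key=lambda c: (x - centroids[c][0]) ** 2
                                            + (y - centroids[c][1]) ** 2)
        correct += (pred == label)
    return correct / len(data)

def inject_poison(data, n_poison):
    """Poisoning attack: add points from class 1's region mislabeled as 0."""
    poison = [((4 + random.gauss(0, 1), 4 + random.gauss(0, 1)), 0)
              for _ in range(n_poison)]
    return data + poison

train, test = make_data(400), make_data(200)
clean_acc = accuracy(train_centroids(train), test)
print(f"clean training data: accuracy {clean_acc:.2f}")

poisoned_accs = []
for n_poison in (100, 300, 600):
    acc = accuracy(train_centroids(inject_poison(train, n_poison)), test)
    poisoned_accs.append(acc)
    print(f"{n_poison:3d} poisoned points: accuracy {acc:.2f} "
          f"(drop {clean_acc - acc:+.2f})")
```

As the dose of mislabeled points grows, the class-0 centroid drifts toward class 1’s region and test accuracy falls – the kind of “how often and under what circumstances” degradation curve a testbed like Dioptra is meant to surface, though real attacks and models are far more sophisticated than this sketch.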
THE LARGER TREND
In addition to unveiling the Dioptra platform, NIST’s AI Safety Institute last week also released new draft guidance, Managing Misuse Risk for Dual-Use Foundation Models.
Such models – known as dual-use because they have “the potential for both benefit and harm” – can pose security risks when used inappropriately or by the wrong people. The new proposed guidance describes “seven key approaches to mitigating the risk that models will be misused, along with recommendations on how to implement them and transparency around their implementation.”
Additionally, NIST published three finalized AI security documents, focused on: mitigating the risks of generative artificial intelligence, reducing threats to the data used to train AI systems, and global engagement on AI standards.
In addition to the AI Executive Order, there have been a number of recent efforts at the federal level to establish safeguards for AI in healthcare and other areas.
This includes a major agency reorganization within the Department of Health and Human Services to “focus the mission on policy and operations related to technology, data, and artificial intelligence.”
The White House also introduced new rules for the use of AI at federal agencies such as the CDC and VA hospitals.
Meanwhile, NIST has been working diligently on other AI and security initiatives, such as its privacy guidelines for AI research and the recent major update to its landmark Cybersecurity Framework.
ON THE RECORD
“Despite all of its potential transformative benefits, generative AI also carries risks that are very different from those we see with traditional software,” NIST Director Laurie E. Locascio said in a statement. “These guidance documents and testbed will inform software developers about these unique risks and help them develop ways to mitigate them while supporting innovation.”
“AI is the defining technology of our generation, and we are moving fast to keep pace and help ensure the safe development and deployment of AI,” added U.S. Secretary of Commerce Gina Raimondo. “[These] announcements demonstrate our commitment to providing AI developers, implementers, and users with the tools they need to safely leverage the power of AI while minimizing the risks associated with it. We’ve made great progress, but we still have a lot of work to do.”
