HITRUST this week unveiled its new AI Risk Management Assessment, which it describes as a comprehensive approach to assessing and mitigating the risks of implementing AI in healthcare and other organizations.
WHY IT MATTERS
The assessment aims to help ensure that organizations have appropriate policies in place to govern the deployment of AI tools, and that companies can effectively communicate this information to management teams and boards of directors.
HITRUST says its approach aligns with standards issued by NIST and ISO/IEC and is supported by an assessment framework and SaaS platform that help users demonstrate that AI risk management expectations are met.
“The total effort of managing risk at scale can take weeks or months of work to design and maintain an assessment approach, socialize that approach, and prepare for the assessment work itself,” said Bimal Sheth, executive vice president of standards development and quality assurance operations at HITRUST. “Even then, there can be questions about completeness and quality, and the work can be exhausting if the organization is trying to comply with multiple industry standards.”
Designed for any organization using these types of tools—including machine learning algorithms and large language models for generative AI—the framework aims to help leaders in healthcare and other sectors validate their risk management approaches in the context of these rapidly evolving technologies.
“The AI RM solution can be used as a self-assessment and benchmarking tool, or companies can engage one of HITRUST’s more than 100 third-party assessment firms to review and verify their implementation,” Jeremy Huval, HITRUST’s chief innovation officer, said in a statement.
BIGGER TREND
The new risk management tool comes less than a year after HITRUST announced its AI Assurance Program in October 2023. That project aims to offer an approach, informed by the HITRUST Common Security Framework, to help healthcare organizations develop strategies for safe, sustainable, and trustworthy AI models.
HITRUST said it also plans to release a new AI Security Certification Program later this year, which will include AI-specific control specifications included in the HITRUST CSF and enhancements to the company’s quality assurance methodology, systems and ecosystem.
Earlier this month, NIST unveiled an open-source AI security assessment framework. The free tool, called Dioptra, aims to help developers understand and mitigate some of the data risks unique to AI and machine learning models.
ON THE RECORD
“AI risk management standards are evolving rapidly, and it is critical for companies to approach these principles in a thoughtful and comprehensive manner,” said Robert Booker, chief strategy officer at HITRUST, in a statement announcing the AI Risk Management Assessment. “Governing this important and powerful capability is critical to unlocking the potential that AI offers, and managing risk is critical to responsible AI deployment.”
