Friday, January 10, 2025

CHAI releases an open-source AI "nutrition label" model card


The Coalition for Health AI (CHAI) has announced the availability of an open-source version of its applied AI model card on GitHub. The healthcare industry-led coalition said Thursday that it designed the card to enable healthcare AI developers to provide key information about how their AI systems are trained.

According to CHAI CEO Brian Anderson, sections of the open-source draft model card – the healthcare AI "nutrition label" – go beyond the U.S. Department of Health and Human Services' Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) final rule governing certification of healthcare IT systems.

The nutrition label also allows alignment with other voluntary standards, such as the National Academy of Medicine's Artificial Intelligence Code of Conduct.

"This is an important step in starting a conversation between a customer and a vendor that helps build trust, rather than leaving it to a PowerPoint slide and anecdotal stories," Anderson said Wednesday.

Built by consensus

Driven by growing demand from both startups and health systems, CHAI says its mission is to ensure that everyone who builds and uses AI in healthcare can make informed decisions, and that the open availability of the nutrition label provides greater transparency and trust in selected artificial intelligence tools.

If we want doctors, nurses and patients "to trust the artificial intelligence models that will be used in increasingly important use cases in healthcare, we need to provide greater transparency about how these models are built and how they perform," he said. "The model card achieves that level of transparency."

Any health IT company or health system can apply the CHAI model card however it chooses, which the coalition says can help streamline procurement processes and improve implementation at scale.

"We want the model card to be widely available and widely used by the vendor and customer community," Anderson said.

The CHAI nutrition label was created through a collaborative effort among multiple stakeholders to produce a "consensus set of definitions for what responsible AI looks like," Anderson explained. That includes agreed-upon evaluation metrics, performance considerations, and fairness and bias assessments.

The coalition sought to bring regulators and developers together to develop AI standards.

“It’s a real challenge when you bring together vendors, AI modelers and their customers and try to build consensus on the minimum level of transparency that we can agree on and that we need from creators,” he said of the work over the past eight months.

CHAI said in October that certification rubrics and final model cards could be expected by the end of April 2025, after incorporating stakeholder feedback; for now, the coalition is asking for feedback on the open-source release to be submitted via GitHub no later than January 22.

Standards and alignment

While the organization has nearly 3,000 member organizations, making the nutrition label freely available ultimately helps health systems evaluate the hundreds, if not thousands, of artificial intelligence tools on offer. The open-source, digitally encoded version of the model card is a standard that can be reused again and again.

“You want to have scalable solutions that meet the challenge of managing and monitoring multiple AI systems and tools,” Anderson said.
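To make the idea of a reusable, digitally encoded model card concrete, here is a minimal sketch of how a health system might validate vendor-supplied cards against a shared schema. The field names and the validation function are illustrative assumptions for this article, not CHAI's actual published schema.

```python
# Hypothetical sketch of a machine-readable AI model card.
# Field names below are illustrative assumptions, NOT CHAI's real schema.

REQUIRED_FIELDS = {
    "model_name",
    "developer",
    "intended_use",
    "training_data_summary",
    "evaluation_metrics",
    "bias_assessment",
}

def missing_fields(card: dict) -> list:
    """Return a sorted list of required fields absent from a model card."""
    return sorted(REQUIRED_FIELDS - card.keys())

# Example vendor-supplied card (entirely fictional values).
card = {
    "model_name": "sepsis-risk-v2",
    "developer": "Example Health AI, Inc.",
    "intended_use": "Early warning of sepsis risk in inpatient settings",
    "training_data_summary": "De-identified EHR data, 2018-2023",
    "evaluation_metrics": {"auroc": 0.87},
    "bias_assessment": "Performance stratified by age, sex and race",
}

# An empty result means every required disclosure is present.
print(missing_fields(card))
```

Because every vendor would fill out the same fields, a buyer could run the same check across hundreds of tools, which is the kind of scalable governance Anderson describes.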

Another thing Anderson spends a lot of time thinking about is alignment.

For example, many in the industry are focused on making patients part of the development process.

“We think it’s really important because our patient community groups believe in it, and I think if we want to put patients at the center, developers should do it from the beginning” in a way “that meets patients where they are.”

The CHAI model card also includes a section on the National Academy of Medicine's AI Code of Conduct, which is not part of the HTI-1 rule.

"We believe in giving vendors the opportunity to share their perspective on whether they believe their models were developed in alignment with the AI code of conduct outlined by NAM," he said.
