Saturday, March 14, 2026

CHAI publishes a responsible AI framework for public comment


The Coalition for Health AI (CHAI) has published a draft framework for the responsible development and implementation of artificial intelligence in health care.

It is now open for a 60-day public review and comment period.

WHY IT IS IMPORTANT

CHAI, which launched in December 2021, previously released a roadmap for trustworthy AI in April 2023, developed through a consensus of experts from leading academic medical centers, regional health systems, patient advocates, federal agencies and other stakeholders from healthcare and technology.

In its Wednesday announcement, CHAI said the new guide combines the principles of the roadmap with guidance from federal agencies, while its checklists provide practical steps for applying assurance standards to everyday operational processes.

Functionally, the Assurance Standards Guide outlines industry-agreed standards for implementing AI in healthcare, and the assurance reporting checklists can help organizations identify use cases, develop AI products for healthcare, and then deploy and monitor them.

The principles underlying the design of these documents are consistent with the National Academy of Medicine’s Artificial Intelligence Code of Conduct, the White House’s draft AI Bill of Rights, several National Institute of Standards and Technology guidelines, and the cybersecurity framework developed by the Department of Health and Human Services’ Administration for Strategic Preparedness and Response.

Dr. Brian Anderson, CEO of CHAI, emphasized the importance of the public review and comment period to ensure AI is effective, useful, secure, fair and equitable.

“This step will demonstrate that a consensus-based approach across the health ecosystem can both support innovation in health care and build confidence that AI can serve us all,” he said in a statement.

The guide would provide a common language and understanding of the health AI lifecycle and explore best practices in the design, development and implementation of AI in health care workflows. The draft checklists would assist in the independent review of health AI solutions throughout their lifecycle to ensure that they are effective, valid, secure and minimize bias.

The framework presents six use cases to demonstrate considerations and best practices:

  1. Predictive EHR risk (childhood asthma exacerbation)
  2. Diagnostic imaging (mammography)
  3. Generative AI (EHR query and extraction)
  4. Claims-based outpatient care (care management)
  5. Clinical operations and administration (prior authorization with medical coding)
  6. Genomics (precision oncology with genomic markers)

Public reporting of checklist results would provide transparency, CHAI noted.

The coalition’s editorial team reviewed the guide and checklists, which were presented in May at a public forum at Stanford University.

Ysabel Duron, a CHAI participant and the founder and executive director of the Latino Cancer Institute, said in a statement that collaboration with and engagement of diverse, multisectoral patient voices are needed to ensure “protection against bias, discrimination, and unintended harmful effects.”

“Artificial intelligence can be a powerful tool in overcoming barriers and closing gaps in health care access for Latino patients and health care workers, but it can also cause harm if we are not present at the table,” Duron said.

A BIGGER TREND

Questions about AI oversight, first raised by the House Energy and Commerce Subcommittee last month during a hearing on the U.S. Food and Drug Administration’s regulation of medical devices and other biologics, are now being put to the FDA and the Centers for Medicare and Medicaid Services by a growing number of lawmakers concerned with the agencies’ use and oversight of AI in healthcare.

It was announced Tuesday that more than 50 lawmakers in both the House and Senate have called for increased oversight of artificial intelligence in Medicare Advantage coverage decisions, while a letter from Republicans criticized the FDA’s partnership with CHAI.

Rep. Mariannette Miller-Meeks, R-Iowa, asked the FDA during a May 22 hearing whether it would outsource AI certification to CHAI, a group she said lacked diversity and showed “clear signs of trying to take control.”

“It doesn’t pass the smell test,” she said.

Dr. Jeff Shuren, director of the Center for Devices and Radiological Health, told Miller-Meeks that CDRH works with CHAI and other AI industry coalitions as a federal liaison and does not engage the organization to review applications.

“We also told CHAI that it needs more representation on the health technology side,” Shuren added.

ON THE RECORD

“Collaborative ways to quantify the usefulness of AI algorithms will help us realize the full potential of AI for patients and healthcare systems,” said Dr. Nigam Shah, co-founder and board member of CHAI and chief data scientist at Stanford Health Care, in a statement. “The guide represents the collective consensus of our CHAI community of 2,500 people, including patient advocates, clinicians and technologists.”
