The goal of UCSF Health and the UCSF Department of Clinical Informatics and Digital Transformation, thanks to a $5 million gift from Ken and Kathy Hao, is to ensure the effectiveness and safety of artificial intelligence used in clinical care by creating a platform that continuously reports whether a tool is achieving intended results or if it requires improvement.
The researchers said the platform will flag a tool if it is potentially dangerous or threatens to worsen health disparities, calling for immediate action if necessary.
WHY IT MATTERS
UCSF Health and DoC-IT will design the platform, called the AI in Clinical Care Impact Monitoring Platform, or IMPACC, to report on the effectiveness of AI in clinical decision making and patient care, as well as potential risks to patient health and the widening of health disparities.
Once IMPACC is developed, UCSF Health will test it using a suite of artificial intelligence tools currently used in clinical care, according to the researchers' announcement.
Julia Adler-Milstein, head of UCSF DoC-IT, and Dr. Sara Murray, director of AI for health at UCSF Health, will lead a collaborative effort to improve patient care at UCSF while advancing the science of evaluating AI tools in real-world use, they said.
“By creating IMPACC, we will take a huge step forward in how we analyze the performance of artificial intelligence in healthcare,” Murray said in a statement.
“As new AI technologies are deployed, this innovative, scalable platform will provide our healthcare system with direct and actionable insight into ongoing performance, ensuring not only the effectiveness of these new tools, but also system-wide safety and patient benefits.”
Researchers say the platform will also be used to support healthcare leaders in deciding whether to scale up or discontinue the use of certain artificial intelligence tools. We reached out to UCSF to ask if IMPACC would be available to other health systems once it is tested and ready for use. We will update this story with any response.
A BIGGER TREND
The researchers pointed out that healthcare lacks established protocols for continuous AI monitoring; healthcare systems therefore need a way to identify issues in tools' real-world performance in real time, using longitudinal monitoring and defined criteria for escalation and human intervention.
Healthcare organizations, physicians, patients, policymakers and many others are concerned that, if healthcare AI malfunctions, adverse outcomes for patients and providers could go undetected.
“There are many valid concerns about what appears to be a new normal built on this powerful and rapidly changing technology,” Dr. Sonya Makhni, chief medical officer of Mayo Clinic Platform and senior associate consultant in the hospital’s Department of Internal Medicine, said earlier this year.
While Mayo Clinic Platform has developed a risk-classification system to qualify AI before its use, it advises healthcare leaders thinking through the safe use of artificial intelligence that healthcare systems using AI algorithms “should use the AI development lifecycle as a framework for understanding where bias could potentially occur.”
“Both solution developers and end users are responsible for assessing an AI solution for risk to the best of their ability.”
ON THE RECORD
“This philanthropic gift is transformative in so many ways,” Adler-Milstein said in a statement. “This comes at a critical time as the healthcare industry more broadly integrates artificial intelligence into clinical practice.”
“This is the first partnership between UCSF and UCSF Health for artificial intelligence monitoring,” added Suresh Gunasekaran, president and CEO of UCSF Health.
“Together, we are in a unique position to create the first effective model platform for U.S. healthcare systems that will provide real-time insight into AI tool performance and clinical impact.”
