BOSTON – When asked about the evolution of AI and when AI intelligence will become an integral part of healthcare, Dr. Patrick Thomas, director of digital innovation for pediatric surgery at the University of Nebraska Medical Center, said clinical training needs to be modernized.
“We need to prepare clinical students for the real world they’re going to be in,” he said Thursday at the HIMSS AI in Healthcare forum.
As host of a panel on how to support healthcare professionals in trusting and adopting AI—which emphasized the importance of governance, keeping people informed about progress, and sharing responsibility with developers to maintain the clinical quality of AI—Thomas asked the other panelists how they deal with physician skepticism and concerns.
Thomas was joined in the discussion by Dr. Sonya Makhni, medical director of the Mayo Clinic Platform, Dr. Peter Bonis, chief medical officer of Wolters Kluwer Health, and Dr. Antoine Keller of Ochsner Lafayette General Hospital.
Expanding clinical resources
The use of large language models to alleviate the intense cognitive load clinicians face is still fraught with complications, ranging from data bias and hallucinations to cost.
Bonis noted that app developers will likely face development costs on top of the costs of the underlying models themselves.
Keller added that health systems have ever more information for clinical staff to digest.
“We don’t have enough staff to make” correct clinical decisions in a timely manner, he said. As physicians focus on mitigating risk by building safeguards to address concerns about AI use, giving them a level of comfort with the technology is vital.
Keller, a cardiac surgeon, described how Louisiana-based Ochsner Health is providing its healthcare partners with an AI-powered tool called Heart Sense to guide interventions.
He added that by diagnosing underserved communities using low-cost technology, the AI-based tool “geometrically increases the workforce.”
Improving access in underserved communities
Using the heart screening tool not only improves Ochsner’s utilization, he said, but also allows the health system to focus attention on the patients who need it most.
Thomas asked what it means for AI to impact healthcare when there are no data scientists in the healthcare setting, how Ochsner’s community partners in healthcare are learning about the AI tool, and how that tool is being used.
Keller said much of the uncertainty in the communities they serve is being eased by the support they provide.
“You have to be present and aware of the obstacles and problems people face when using technology,” he explained.
However, people who use this technology in areas with a shortage of medical personnel are grateful for it, he added.
One key diagnostic criterion, the presence of a heart murmur, is required before a patient can have an aortic valve replacement. Using the AI-powered screening tool, the health system found that 25% of people over 60 in its communities have a pathologic murmur, which can be treated surgically.
“The prevalence data shows that a significant portion of the population remains undiagnosed,” Keller said.
Using AI to identify which patients are at risk and can be treated is a significant advantage, allowing intervention before patients develop irreversible dysfunction.
But acceptance depends on an educated workforce, he added.
“Visually – using something concrete” that is easy to understand even with a low level of education, he added.
Stakes and shared responsibility
“The stakes are so high in clinical care,” Bonis acknowledged, noting that always keeping humans in the loop on clinical decisions is a guiding principle in developing trustworthy AI across a range of nuanced situations.
While primary care physicians aren’t always interested in the “sausage-making” behind AI, “from a provider perspective, caution is key,” he said.
Makhni added that the question she asks herself is how to connect expertise across the AI lifecycle.
The Mayo Clinic platform is working directly with AI developers and Mayo Clinic to look at ways to implement clinical AI in a way that is also user-centric — “and then communicate that information in a transparent way to enable the end user.”
Such multidisciplinary analysis could, for example, determine whether the creator of an AI was attempting to assess for bias, and the resulting information could be communicated to doctors in a way they can understand.
“We meet [developers] where they are in their journey,” but the goal is to provide a framework for safety, honesty and accuracy, she added.
Considering the digital divide and asking clinical staff what worries them is key to ensuring safe AI systems. That burden cannot fall solely on users.
“Sometimes it should also fall on the creator of the solution,” she said. “We need to have shared responsibility.”
While healthcare won’t solve all the AI-related problems quickly, “we can move forward in steps,” she added.
