While some clinicians and healthcare workers are ready and eager to welcome AI into their daily routines, others are a bit more hesitant.
“We’re becoming less trusting of AI results as we use it more and more and see how unreliable it is,” says Sarah M. Worthy, CEO of DoorSpace, an employee relationship management technology and services company that includes AI.
“Rather than building AI to replace key healthcare workers, we should be thinking about how we can use AI to make their jobs easier and more productive.”
While many healthcare executives are eager to allocate money to AI investments, Worthy believes there are ideal ways to invest that kind of money in AI in healthcare.
“We’re not going to be able to replace physicians with AI anytime soon, which is why it pains me that so much money and attention is being put into these types of technologies,” she says. “There are so many areas on the administrative side of healthcare where AI could have a huge impact on lowering costs, reducing delays in care, and improving the overall experience for both patients and physicians, without putting patients’ lives at risk.”
We spoke with Worthy about where and how she thinks AI funding in healthcare should be allocated and what the outcomes might be.
Q. You suggest that some clinicians and other healthcare workers are unsure about AI in healthcare. Why?
A. I hear a lot of skepticism from both the clinical and administrative sides of healthcare about the use of AI, and their skepticism is justified. What most people think of as “AI” and what we see in the media is a large language model, or as I like to say, “AI talking to us.”
LLMs have a number of known issues that can cause them to provide false information and reinforce social biases, which can have a serious impact on patients.
In a life-or-death situation where a person’s well-being is literally on the line, we want our healthcare providers to be hesitant to bring this technology to the patient’s bedside. Our clinicians are already overworked and exhausted, and expecting them to incorporate unpredictable technology into their practice of care is unreasonable and hazardous.
Q. How can this barrier to AI adoption be overcome?
A. Healthcare leaders need to raise their voices and demand ethical AI from the tech sector, and ensure their RFPs list AI requirements that have safeguards in place. This will take time, and it means stepping back from investing in most AI tools.
In the short term, while we wait for these AI tools to become safer and more reliable, executives may want to consider investing in AI for the nonpatient side of the healthcare business. There are many great and proven ways that AI is helping to automate and manage operational and workforce data to save time and money while accelerating better decision-making across the business in ways that don’t directly involve patient care.
Most importantly, virtually every healthcare organization in the U.S. today has a data management crisis. Their data is siloed, fragmented across departments, paper, and spreadsheets. There is a saying: “Bad data in, bad reports out.”
Healthcare organizations need to get their data in order and have documented data lifecycle management processes in place, or any AI investment will have a negative ROI.
Q. You say that AI work in healthcare today should focus on administrative tasks. What exactly is your vision for AI today?
A. I don’t have a vision for AI in healthcare; I have a vision for a better workplace experience in healthcare that includes AI and other technologies to get us there. This distinction is crucial because I often see executives approach AI from a position of, “I have this AI, so how should I use it?” A better way to approach it is to say, “I have this problem; what’s the best way to solve it?”
I think that’s one of our unique strengths and what really sets our work apart at DoorSpace. There’s a common trend among healthcare leaders to try to add new things to the process in an attempt to solve a problem. But often the best way to solve those problems is to remove things from the process.
When I look at administrative problems in healthcare, most of them have their origins in exactly this pattern: a problem arose, and new rules were created in response.
Over time, with each new problem, they added more CME, more compliance rules, more forms, more paperwork. All of this led to a situation where doctors were spending nine hours a week just on non-patient paperwork. That’s a whole workday.
We’re looking at how we can use data to better measure and understand how to remove things from the process while also increasing quality and efficiency. We think AI is one way we can automate a lot of the data management that currently takes up valuable physician and management time on low-value data entry and reporting.
Q. What are your thoughts on where AI fits into the clinical side of healthcare in the coming years?
A. Recently I saw a product that I really like that brings AI into the doctor’s office as a documentation tool. It listens in the background as the doctor and patient discuss issues and writes everything down in EHR notes that the doctor can then review and edit in a matter of minutes.
This allows the doctor and patient to be face to face during the examination, rather than the doctor having to stare at a computer and enter data into the EHR throughout the examination.
We’re also seeing a lot of success in radiology, where AI is speeding up radiologists’ scan evaluations and supporting more accurate diagnoses. In all the test cases I’ve seen so far, the only ones that have yielded positive results have one thing in common: the AI was a tool that supported the clinician’s work.
So I think we’ll continue to see AI being used to help healthcare workers do their jobs more accurately, more efficiently, and more quickly, giving them time back in their days. I don’t foresee AI replacing doctors and nurses anytime soon without catastrophic consequences.
