Friday, March 13, 2026

Patients want clarity on the use of AI in healthcare


BOSTON – The opportunity to transform how healthcare providers communicate with patients through the use of artificial intelligence is not just a matter of accuracy, transparency, fairness and data-model maintenance, but also of addressing the challenges of personalization.

What patients want to know and when is adding complexity — challenging the healthcare AI industry to incorporate both expected and unexpected patient perspectives, according to panelists at Thursday’s HIMSS AI in Healthcare Forum.

By contextualizing and giving doctors the freedom to input data so they can communicate with patients in a more human way, AI has the potential to transform patient-doctor interactions.

“To some extent, these impressive tools, which are evolving much faster than the healthcare system can even consider how to embed them, present a huge opportunity to personalize that dialogue, to identify what is important to the individual, and to advise and support them in making decisions that are important to them,” said Anne Snowdon, director of research at HIMSS.

While the usability of AI technologies is an essential part of the trust discussion, mapping transparency, choice, autonomy, and decision-making is critical for patients.

“From this perspective, you start to redefine and rethink care,” said Snowdon, the panel’s moderator. Snowdon has a doctorate in nursing.

Improving communication with patients

Snowdon was joined by Alexandra Wright, patient advocate and director of research at HIMSS, Dr. Chethan Sarabu, director of clinical innovation at the Health Tech Hub of Cornell Tech, Mark Polyak, president of analytics at IPSOS, and Dr. Lukasz Kowalczyk, physician at Peak Gastroenterology Associates, to explore what patients want from AI-enabled healthcare.

“While healthcare is still grappling with the challenges of AI hallucinations, AI has the potential to elevate conversations and build greater trust,” said Sarabu, a board member of Lithe Collective, a nonprofit that seeks to advance the collective rights, interests, and voices of the patient community in health technology.

Sarabu said that while working with the collective, he heard during a patient panel discussion about one patient who believed she had been communicating with a very helpful nurse named Jessica through her clinic’s patient portal, and who lost trust when she visited the doctor’s office and asked to see the nurse in person.

“She just said she wished she had been told earlier it was a chatbot,” he said.

“You shouldn’t deceive your patients,” joked Kowalczyk, a gastroenterologist and consultant at Cliexa, a digital health platform based in Denver.

However, when patients know that healthcare chatbots like Jessica are not real people, the AI’s ability to communicate empathetically can actually ease their burden.

“Compassion fatigue is a real thing in health care,” Kowalczyk said. “Sometimes it’s really hard, especially when you’re going through the day and it takes one or two patients to really make it hard to take on the next one.”

He added that large language models are good at transforming and translating information and relaying patients’ concerns to doctors, giving physicians a “moment to catch their breath” and regain empathy.

“I think those are the opportunities where patients feel like the AI is acting as their advocate and helping me better understand who they are as a person.”

The dynamics of personalization

AI may contribute nothing to a patient’s vision of their care. In other scenarios – in predictive analytics, for example – it may offer information that patients do not want.

“Some patients may want more information, others less, and someone may want less information during an in-person visit but want more material to review later,” Sarabu said.

From a physician’s perspective, “It’s difficult to truly personalize all the information, context, and content for each patient.”

According to Polyak, care has three components – access to care, access to the right information, and the speed of information flow.

He noted that 16% of patients using ChatGPT asked health care questions to reduce health care costs.

“[They] asked ChatGPT to give them different scenarios of how their doctors should approach their care based on what they had – in order to reduce costs.”

“It wasn’t something I expected, but it was mostly about generating scripts that were printed out and taken” to appointments.

The sense of control also varies between patients.

For patients and their families facing a health crisis, Wright said, “information is power.”

“Often in situations like this you can feel like you’re losing control,” she added.

“And if you don’t fully understand your condition or fully understand what’s happening, you can really feel like you have no control over what’s happening to you.”

When the doctor is no longer in the office and patients have questions, they look for information in search engines and on ChatGPT, she added.

Context also plays a role in choosing which information patients want to control.

“When I first went into the hospital, would I have liked them to have told me my chances of survival? Probably not, because I don’t think it would have helped in this situation,” Wright said.

“But now, if someone told me about my risk of, say, future cancer, would I want to know if there was anything I could do to prevent it? Probably.”

What the discussion suggests turns the use of AI in healthcare on its head, Snowdon said: “How can we help people make decisions for themselves, give them information with confidence and [discover] what is most important to them?”
