Monday, December 23, 2024

Yale study shows how AI bias worsens health care disparities


A new study from the Yale School of Medicine takes a closer look at how biased artificial intelligence can affect clinical outcomes. The study focuses on the different stages of AI model development and shows how data integrity issues can undermine health equity and quality of care.

WHY IT’S IMPORTANT
Published earlier this month in PLOS Digital Health, the study provides both real-world and hypothetical illustrations of how AI biases adversely impact health care delivery – not only at the point of care, but at every stage of medical AI development: training data, model development, publication and implementation.

“Bias in, bias out,” said the study’s senior author, John Onofrey, assistant professor of radiology and biomedical imaging and urology at the Yale School of Medicine.

“After many years of working in the field of machine learning/artificial intelligence, the idea of bias in algorithms is not surprising,” he said. “But listing all the potential ways that bias can enter the AI learning process is amazing. This makes mitigating bias seem like a daunting task.”

As the study notes, bias can appear at almost any stage of algorithm development.

This can be the case for “data features and labels, model development and evaluation, implementation and publication,” the researchers say. “Insufficient sample sizes for certain patient groups may result in suboptimal performance, algorithm underestimation and clinically irrelevant predictions. Missing patient data can also cause biased model behavior – including data that is capturable but missing non-randomly, such as diagnosis codes, and data that is not typically or easily captured, such as social determinants of health.”

Meanwhile, “expert annotation labels used to train supervised learning models may reflect hidden cognitive biases or substandard care practices. Overreliance on performance metrics during model development may obscure bias and reduce the clinical utility of the model. When applied to data outside the training cohort, model performance may deteriorate relative to previous validation, and this may occur differentially across subgroups.”
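To make that subgroup point concrete, here is a minimal sketch – our illustration, not code from the study, using made-up labels, scores and group markers – of how an aggregate metric can look acceptable while performance for one patient subgroup lags well behind:

```python
# Minimal sketch (not from the study): an aggregate AUC can mask large gaps
# between patient subgroups, so metrics should also be reported per group.
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical toy data: true labels, model scores and a subgroup marker
df = pd.DataFrame({
    "y_true":   [1, 0, 1, 0, 1, 0, 1, 0],
    "y_score":  [0.9, 0.2, 0.8, 0.3, 0.55, 0.5, 0.45, 0.4],
    "subgroup": ["A", "A", "A", "A", "B", "B", "B", "B"],
})

print("overall AUC:", roc_auc_score(df["y_true"], df["y_score"]))  # ~0.94
for name, grp in df.groupby("subgroup"):
    print(f"AUC, subgroup {name}:", roc_auc_score(grp["y_true"], grp["y_score"]))
# Subgroup A scores 1.00 while subgroup B scores 0.75 under the same model.
```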

Of course, the way clinical end users interact with AI models can itself introduce bias.

Ultimately, where AI models are developed and published – and by whom – shapes the trajectories and priorities of future medical AI development, the Yale researchers say.

They note that any efforts to mitigate this bias – “collection of large and diverse data sets, statistical debiasing methods, thorough model evaluation, emphasis on model interpretability, and standard error reporting and transparency requirements” – must be implemented carefully, with an astute eye on how these guardrails will work to prevent an adverse impact on patient care.

“Rigorous validation through clinical trials is necessary before implementation in real-world clinical settings to demonstrate unbiased applicability,” they said. “Addressing bias in the model development stages is critical to ensuring that all patients equitably benefit from the future of medical AI.”

The report, “Bias in medical artificial intelligence: Implications for clinical decision making,” offers some suggestions for tempering this bias and improving health equity.

For example, previous research has shown that factoring race into assessments of kidney function can sometimes lead to longer waits for Black patients to be placed on transplant lists. The Yale researchers offer several recommendations to help future AI algorithms employ more precise measures, such as zip code and other socioeconomic factors.
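As an illustration of what removing a race-based adjustment can look like in practice – a sketch of ours, not code or a recommendation from the study – the 2021 refit of the CKD-EPI creatinine equation estimates kidney function from creatinine, age and sex without a race coefficient:

```python
def egfr_ckd_epi_2021(scr_mg_dl: float, age: float, female: bool) -> float:
    """Race-free 2021 CKD-EPI creatinine equation (mL/min/1.73 m^2).
    Illustrative sketch only – not clinical software."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    ratio = scr_mg_dl / kappa
    egfr = 142 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.2 * 0.9938 ** age
    if female:
        egfr *= 1.012
    return egfr

# Example: a 55-year-old woman with serum creatinine of 1.1 mg/dL -> ~59
print(round(egfr_ckd_epi_2021(1.1, 55, female=True), 1))
```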

ON THE RECORD
“Increased capture and use of social determinants of health in medical AI models to predict clinical risk will be of paramount importance,” James L. Cross, a first-year medical student at Yale School of Medicine and first author of the study, said in a statement.
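A minimal sketch of what that capture could look like in practice – our illustration, with hypothetical column names and index values – is linking an area-level deprivation score to patient records by zip code so it can feed a clinical risk model as a feature:

```python
# Minimal sketch (ours, not the study's): attach a hypothetical zip-code-level
# deprivation score to patient records for use as a social-determinants feature.
import pandas as pd

patients = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "zip_code":   ["06510", "06511", "06519"],
})

# Hypothetical area-level index keyed by zip code
area_sdoh = pd.DataFrame({
    "zip_code":          ["06510", "06511", "06519"],
    "deprivation_index": [0.42, 0.65, 0.81],
})

features = patients.merge(area_sdoh, on="zip_code", how="left")
print(features)  # each patient row now carries a deprivation_index feature
```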

“Bias is a human problem,” added Yale associate professor of radiology and biomedical imaging and study co-author Dr. Michael Choma. “When we talk about ‘bias in artificial intelligence,’ we must remember that computers learn from us.”
