Wednesday, December 25, 2024

Safe and fair AI needs guardrails from legislation and humans in the loop

Healthcare organizations have sometimes been slow to adopt new AI tools and other cutting-edge innovations due to legitimate concerns about security and transparency. But to improve quality of care and patient outcomes, healthcare needs these innovations.

However, it is necessary to use them correctly and ethically. Just because a generative AI application can pass a medical school exam doesn’t mean it’s ready to work as a practicing physician. Healthcare should leverage the latest advances in artificial intelligence and large language models to put the power of these technologies in the hands of medical experts so they can deliver better, more precise and safer care.

Dr. Tim O’Connell is a practicing radiologist and CEO and co-founder of emtelligent, a developer of artificial intelligence technology that transforms unstructured data.

We spoke with him to better understand the importance of guardrails for AI in healthcare as the technology helps modernize medical practice. We also talked about how algorithmic discrimination can perpetuate health inequities, legislative efforts to establish AI safety standards, and why a human in the loop is vital.

Q. How essential are guardrails for artificial intelligence in healthcare as the technology helps modernize medical practice?

A. Artificial intelligence technologies have opened up exciting opportunities for healthcare providers, payers, researchers and patients, offering the potential for better outcomes and lower healthcare costs. However, to fully realize the potential of artificial intelligence, particularly in the case of medical artificial intelligence, we must ensure that healthcare professionals understand both the capabilities and limitations of these technologies.

This includes being aware of risks such as non-determinism, hallucinations, and problems with reliably referencing source data. Healthcare professionals must understand not only the benefits of AI but also its potential pitfalls, so they can use these tools safely and effectively in a variety of clinical settings.

Developing a set of well-thought-out principles and following them is essential to ensuring the safe and ethical use of AI. These policies should address issues related to privacy, security and bias, and must be grounded in transparency, accountability and fairness.

Reducing bias requires training artificial intelligence systems on more diverse datasets that account for historical disparities in diagnoses and health outcomes, while shifting training priorities to ensure AI systems are aligned with real health care needs.

A focus on diversity, transparency and robust oversight, including the development of guardrails, allows AI to be a highly effective tool that guards against error and helps achieve significant improvements in healthcare outcomes.

This is where guardrails, in the form of well-designed regulations, ethical guidelines and operational safeguards, become crucial. These safeguards help ensure the responsible and effective use of AI tools by addressing concerns about patient safety, data privacy and algorithmic bias.

They also provide accountability mechanisms so that any errors or unintended consequences of AI systems can be traced back to specific decision points and corrected. In this context, guardrails act as both protective and enabling measures, allowing healthcare workers to trust AI systems while guarding against potential harms.

Q. How can algorithmic discrimination perpetuate health inequities, and what can be done to address this problem?

A. If the artificial intelligence systems we rely on in healthcare are not properly developed and trained, there is a very real risk of algorithmic discrimination. AI models trained on datasets that are not large or diverse enough to represent the full spectrum of patient populations and clinical characteristics can and do produce biased results.

This means that AI may provide less accurate or less effective care recommendations for underserved populations, including racial or ethnic minorities, women, people from lower socioeconomic backgrounds, and people with rare or uncommon conditions.

For example, if a medical language model is trained primarily on data from a specific demographic, it may struggle to accurately extract relevant information from clinical notes that reflect different medical conditions or cultural contexts. This can lead to missed diagnoses, misinterpretation of patient symptoms, or ineffective treatment recommendations for populations the model was not properly trained to recognize.

As a result, an AI system may perpetuate the very inequities it is intended to alleviate, especially for racial minorities, women and lower-socioeconomic patients who are often underserved by traditional healthcare systems.

To solve this problem, it is essential to ensure that AI systems are built on large, highly diverse datasets that cover a wide range of patient demographics, clinical presentations and health outcomes. The data used to train these models must be representative of different races, ethnicities, genders, ages and socioeconomic statuses to avoid skewing the system’s results toward a narrow view of healthcare.

This diversity enables models to perform accurately across a variety of populations and clinical scenarios, minimizing the risk of perpetuating bias and ensuring that AI is safe and effective for everyone.
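
To make that concrete, here is a minimal Python sketch of the kind of representativeness audit a team might run before training: it compares the demographic mix of a hypothetical training set against target population shares. The field names, target values and tolerance are illustrative assumptions, not a standard method.

from collections import Counter

# Hypothetical, tiny training set; in practice these records would come
# from a de-identified clinical dataset (field names are illustrative).
records = [
    {"sex": "F", "race": "Black", "age_band": "65+"},
    {"sex": "M", "race": "White", "age_band": "40-64"},
    {"sex": "F", "race": "White", "age_band": "40-64"},
    {"sex": "M", "race": "Asian", "age_band": "18-39"},
]

# Target shares for the population the model is meant to serve, e.g.
# drawn from census or registry data (these numbers are made up).
target_race_shares = {"White": 0.60, "Black": 0.13, "Asian": 0.06, "Other": 0.21}

def audit(records, attribute, targets, tolerance=0.05):
    # Flag attribute values whose share of the training data deviates
    # from the target population share by more than `tolerance`.
    counts = Counter(r.get(attribute, "Other") for r in records)
    total = sum(counts.values())
    flags = []
    for value, target in targets.items():
        observed = counts.get(value, 0) / total
        if abs(observed - target) > tolerance:
            flags.append((value, observed, target))
    return flags

for value, observed, target in audit(records, "race", target_race_shares):
    print(f"{value}: {observed:.0%} of training data vs {target:.0%} target")

A real audit would also examine intersections of attributes and outcome labels, since aggregate balance can hide subgroup gaps.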

Q. Why are humans in the loop vital to AI in healthcare?

A. While AI can process massive amounts of data and generate insights at speeds far beyond human capabilities, it lacks the nuanced understanding of complex medical concepts that is necessary to provide high-quality care. Humans in the loop are vital to AI in the healthcare context because they provide the clinical expertise, oversight and context necessary to ensure algorithms operate accurately, safely and ethically.

Consider one use case: the extraction of structured data from clinical notes, lab reports and other healthcare documents. Without clinicians involved in the development, training and ongoing validation of AI models, there is a risk of missing essential information or misinterpreting medical jargon, abbreviations, or context-specific nuances in clinical language.

For example, the system may incorrectly flag a symptom as significant or miss critical information included in a doctor’s order. Human experts can help refine these models, ensuring that complex medical language is correctly captured and interpreted.
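
As a sketch of how that oversight can be wired into a pipeline, the hypothetical Python example below routes low-confidence extractions to a clinician review queue instead of auto-accepting them. The threshold, field names and sample data are assumptions for illustration, not a description of emtelligent’s actual system.

from dataclasses import dataclass

@dataclass
class Extraction:
    # One structured finding pulled from a clinical note.
    # All names and values below are illustrative, not a real schema.
    note_id: str
    field: str
    value: str
    confidence: float  # model confidence in [0, 1]

REVIEW_THRESHOLD = 0.90  # assumed cutoff; would be tuned on validation data

def triage(extractions):
    # Auto-accept only high-confidence extractions; everything else
    # goes to a clinician review queue rather than into the record.
    auto_accepted, review_queue = [], []
    for e in extractions:
        if e.confidence >= REVIEW_THRESHOLD:
            auto_accepted.append(e)
        else:
            review_queue.append(e)
    return auto_accepted, review_queue

extractions = [
    Extraction("note-001", "medication", "metformin 500 mg", 0.97),
    Extraction("note-001", "symptom", "SOB", 0.62),  # ambiguous abbreviation
]

accepted, to_review = triage(extractions)
for e in to_review:
    print(f"Needs clinician review: {e.field}={e.value!r} ({e.confidence:.0%})")

Clinician corrections from the review queue can then be fed back as labeled training data, which is one practical way the human in the loop improves the model over time.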

From a workflow standpoint, humans in the loop can help interpret and respond to AI-driven insights. Even when AI systems generate accurate predictions, healthcare decisions often require a level of personalization that only physicians can provide.

Human experts can combine AI results with their clinical experience, knowledge of a patient’s unique situation, and understanding of broader healthcare trends to make informed and compassionate decisions.

Q. What is the status of legislative action to establish AI safety standards in health care, and what do lawmakers need to do?

A. Regulations establishing AI safety standards in healthcare are still in the early stages of development, although there is increasing recognition of the need for comprehensive guidelines and regulations to ensure the safe and ethical use of AI technologies in clinical settings.

Several countries have begun rolling out AI regulatory frameworks, many of them built on foundational trustworthy-AI principles that emphasize safety, fairness, transparency and accountability, and those principles are starting to shape these conversations.

In the United States, the Food and Drug Administration has introduced a regulatory framework for artificial intelligence-based medical devices, specifically software as a medical device (SaMD). The FDA’s proposed framework is based on a “total product lifecycle” approach that is consistent with the principles of trustworthy AI, emphasizing continuous monitoring, updates and real-time assessment of AI performance.

However, while this framework covers AI-based devices, it has not yet fully addressed the challenges posed by non-device AI applications that process complex clinical data.
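
As a rough illustration of the continuous-monitoring piece of that total product lifecycle approach, the hypothetical Python sketch below tracks rolling agreement between a deployed model’s outputs and clinician-adjudicated labels and raises an alert on drift. The window size and alert floor are assumptions, not values prescribed by the FDA framework.

from collections import deque

class PerformanceMonitor:
    # Minimal post-deployment monitor: track rolling agreement between
    # model outputs and clinician-adjudicated labels, alert on drift.
    def __init__(self, window=200, floor=0.90):
        self.results = deque(maxlen=window)  # True where model agreed
        self.floor = floor

    def record(self, model_output, clinician_label):
        self.results.append(model_output == clinician_label)

    def check(self):
        if len(self.results) < self.results.maxlen:
            return "Collecting data; window not yet full."
        agreement = sum(self.results) / len(self.results)
        if agreement < self.floor:
            return f"ALERT: agreement {agreement:.1%} below floor {self.floor:.0%}"
        return f"OK: rolling agreement {agreement:.1%}"

monitor = PerformanceMonitor(window=3, floor=0.90)
for model_out, label in [("flu", "flu"), ("flu", "covid"), ("copd", "copd")]:
    monitor.record(model_out, label)
print(monitor.check())  # ALERT: agreement 66.7% below floor 90%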

Last November, the American Medical Association published proposed guidelines for the use of artificial intelligence in an ethical, fair, responsible and transparent manner.

In its “Principles for the Development, Implementation, and Use of Augmented Intelligence,” the AMA reinforces its position that artificial intelligence enhances rather than replaces human intelligence, and argues that “it is important for the medical community to help guide the development of these tools in ways that best meet both physician and patient needs, and help determine your organization’s risk tolerance, particularly where AI impacts direct patient care.”

By fostering collaboration between policymakers, healthcare professionals, AI developers and ethicists, we can create regulations that promote both patient safety and technological progress. Policymakers must strike a balance to create an environment in which AI innovation can thrive, while ensuring that these technologies meet the highest safety and ethical standards.

This includes developing regulations that can adapt nimbly to new developments in AI, ensuring that AI systems remain flexible, transparent and responsive to changing healthcare needs.
