Friday, March 13, 2026

As the rush towards AI in healthcare continues, explainability becomes key


AI is becoming increasingly popular in healthcare, with many hospitals and healthcare systems already implementing the technology—most often in administrative settings—with great success.

However, the success of AI in healthcare, especially in clinical settings, will depend on addressing growing concerns about the transparency and explainability of models.

In an industry where decisions can mean life or death, the ability to understand and trust AI decisions is not just a technical need; it is an ethical necessity.

Neeraj Mainkar is vice president of software engineering and advanced technologies at Proprio, which develops immersive tools for surgeons. He has extensive experience in applying algorithms to healthcare. Healthcare IT News spoke with him about explainability, the need for patient safety and trust, error identification, regulatory compliance, and ethical standards in AI.

Q. What does explainability mean in the context of AI?

A. Explainability refers to the ability to understand and clearly articulate how an AI model arrives at a particular decision. In simpler AI models, such as decision trees, this process is relatively straightforward because the decision paths can be easily traced and interpreted.
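To make the contrast concrete, here is a minimal sketch of why a simple model is explainable: a hand-rolled decision "tree" that returns not only its prediction but the exact path of comparisons that produced it. The feature names, thresholds, and triage labels are hypothetical, invented purely for illustration; they are not drawn from any real clinical model.

```python
def triage_decision(age, systolic_bp):
    """Return a hypothetical triage label plus the decision path that produced it."""
    path = []
    if systolic_bp < 90:
        path.append(f"systolic_bp={systolic_bp} < 90 -> hypotensive branch")
        label = "urgent"
    else:
        path.append(f"systolic_bp={systolic_bp} >= 90 -> stable branch")
        if age >= 65:
            path.append(f"age={age} >= 65 -> elevated-risk branch")
            label = "priority"
        else:
            path.append(f"age={age} < 65 -> low-risk branch")
            label = "routine"
    return label, path

label, path = triage_decision(age=72, systolic_bp=120)
print(label)  # priority
for step in path:
    print(" -", step)
```

Every prediction comes with a human-readable trace, which is exactly what a deep network with millions of parameters cannot offer directly.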

However, as we move into deep learning models, with their many layers and intricate neural network architectures, understanding the decision-making process becomes far more difficult.

Deep learning models operate with a huge number of parameters and intricate architectures, making it nearly impossible to directly trace their decision paths. Reverse engineering these models or investigating specific issues in the code is extremely challenging.

When a prediction falls short of expectations, pinpointing the exact cause of the discrepancy is challenging due to the complexity of the model. This lack of transparency means that even the creators of these models can have difficulty fully explaining their behavior or results.

The opacity of complex AI systems poses significant challenges, especially in areas such as healthcare, where understanding the reasoning behind decisions is crucial. As AI becomes more integrated into our lives, the need for explainable AI grows. Explainable AI aims to make AI models more interpretable and transparent, ensuring that their decision-making processes are understandable and trustworthy.

Q. What are the technical and ethical implications of AI explainability?

A. The pursuit of explainability has both technical and ethical implications to consider. From a technical perspective, simplifying models to improve explainability can reduce performance, but it also helps AI engineers debug and improve algorithms by giving them a clear understanding of where their results come from.

From an ethical perspective, explainability helps identify biases in AI models and promotes fairness in treatment, preventing discrimination against smaller, underrepresented groups. Explainable AI also gives end users insight into how decisions are made while protecting confidential information, in compliance with HIPAA.

Q. Please discuss error identification in the context of explainability.

A. Explainability is a critical part of effectively identifying and correcting errors in AI systems. The ability to understand and interpret how an AI model makes decisions or generates results is essential for detecting and fixing those errors.

By tracing the decision paths, we can determine where the model may have gone wrong, allowing us to understand the “why” behind the incorrect prediction. This understanding is crucial to making the necessary adjustments to improve the model.

Continuous improvement of AI models depends largely on understanding their errors. In healthcare, where patient safety is paramount, the ability to quickly and accurately debug and refine models is crucial.

Q. Please discuss regulatory compliance in the context of explainability.

A. Healthcare is a highly regulated industry with stringent standards and guidelines that AI systems must meet to ensure safety, effectiveness, and ethical use. Explainability is critical for compliance because it addresses several key requirements, including:

  • Transparency. Explainability ensures that every decision made by AI can be traced and understood. This transparency is needed to maintain trust and ensure that AI systems operate within ethical and legal boundaries.
  • Validation. Explainable AI makes it easier to demonstrate that models have been thoroughly tested and validated to perform as expected in a variety of scenarios.
  • Reducing bias. Explainability allows for the identification and mitigation of biased patterns of decision-making, ensuring that models do not unfairly discriminate against any particular group.
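The bias-reduction point above can be illustrated with one common, simple audit: the disparate impact ratio, which compares positive-outcome rates between two groups. The group labels, data, and the widely cited 0.8 rule-of-thumb threshold are illustrative assumptions; real fairness audits use richer metrics and statistical testing.

```python
def positive_rate(outcomes):
    """Fraction of 1s (positive outcomes, e.g. treatment recommended) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of positive-outcome rates; values well below 1.0 flag possible bias
    against group_a relative to group_b."""
    return positive_rate(group_a) / positive_rate(group_b)

# Hypothetical model outputs: 1 = treatment recommended, 0 = not recommended
underrepresented_group = [1, 0, 0, 1, 0, 0, 0, 0]   # 25% positive
majority_group         = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% positive

ratio = disparate_impact(underrepresented_group, majority_group)
print(round(ratio, 2))  # 0.33 -- far below the common 0.8 rule-of-thumb threshold
```

A ratio this low would prompt a closer look at which features drive the model's recommendations for the underrepresented group, which is exactly where explainability tools come in.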

As AI evolves, the emphasis on explainability will remain a key aspect of the regulatory framework to ensure the responsible and effective use of advanced technologies in healthcare.

Q. Where do ethical standards come into play in the context of explainability?

A. Ethical standards play a fundamental role in the development and deployment of responsible AI systems, especially in sensitive, high-stakes domains such as healthcare. Explainability is inherently linked to these standards, ensuring that AI systems operate transparently, fairly, and responsibly, in accordance with fundamental ethical principles in healthcare.

Responsible AI means operating within ethical boundaries. Striving for greater explainability in AI increases trust and reliability, ensuring that AI decisions are transparent, justifiable, and ultimately beneficial to patient care. Ethical standards guide responsible disclosure of information, protect user privacy, support adherence to regulatory requirements such as HIPAA, and encourage public trust in AI systems.
