Responsible AI is essential for anyone implementing AI technology in a hospital or health system. AI in healthcare is intricate and vital, so it is critical that it also be trustworthy.
Anand Rao is a professor at Heinz College at Carnegie Mellon University. He is an expert in responsible artificial intelligence, the economics of artificial intelligence and generative artificial intelligence. Over his 35-year consulting and academic career, he has focused on innovation and the business and societal uses of data, analytics and artificial intelligence.
Previously, Rao was the global artificial intelligence leader at consulting giant PwC, a partner in its data, analytics and artificial intelligence practice, and the artificial intelligence innovation leader in PwC’s product and technology segment.
We interviewed Rao to discuss responsible AI, how it is being applied in healthcare, how to connect responsible AI with generative AI in particular, and what society needs to understand about implementing responsible AI.
Q. Please define what responsible artificial intelligence is from your point of view.
A. Responsible AI is the research, design, development and implementation of artificial intelligence that is safe, protects or enhances privacy, and is transparent, accountable, interpretable, explainable, bias-aware and fair. You can think of it as three successive levels of artificial intelligence:
- Safe artificial intelligence. This is the minimum bar, where "AI does no harm." It includes not causing physical or emotional harm, stating facts when required, and protecting itself against adversarial attacks.
- Trustworthy artificial intelligence. This is the next level, where "AI does well." It involves artificial intelligence that is accountable, interpretable and explainable, and it covers both how AI systems are built and how they are governed.
- Beneficial artificial intelligence. This is the highest level, where "AI does good for everyone." It includes artificial intelligence that is bias-aware and built to be equitable along one or more dimensions of fairness.
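The "bias-aware" requirement in the third level can be made concrete with a simple fairness check. The sketch below is illustrative, not a standard: the group names, toy decisions and the 0.1 tolerance are all assumptions, and demographic parity is just one of many fairness metrics a team might choose.

```python
# Minimal sketch of a bias-awareness check: compare a model's positive-decision
# rates across patient groups. All names and thresholds are illustrative.

def selection_rate(outcomes):
    """Fraction of positive model decisions (1s) within one group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in selection rates across groups (0 = parity)."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Toy decisions (1 = model recommends treatment) for two patient groups.
decisions = {
    "group_a": [1, 1, 0, 1, 0],  # selection rate 0.6
    "group_b": [1, 0, 0, 1, 0],  # selection rate 0.4
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance, set by the governing team
    print("Flag model for fairness review before deployment")
```

A real review would examine several metrics and dimensions at once, since satisfying one fairness criterion can worsen another.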
Q. How should responsible AI be applied in healthcare? Healthcare is unlike other industries: lives are constantly at stake.
A. Given the high stakes in healthcare, responsible AI should be used primarily to augment human decision-making, rather than to replace human tasks or decisions. "Human-in-the-loop" must be a core feature of most, if not all, AI implementations in healthcare.
Additionally, AI-based healthcare systems must comply with applicable privacy regulations and be thoroughly tested, evaluated, verified and validated using the latest techniques before being deployed at scale.
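The human-in-the-loop principle above can be sketched as a simple gate: the model only produces suggestions, and a clinician's decision is always the one recorded and acted on. This is a hypothetical sketch with invented names, not a reference design.

```python
# Hypothetical human-in-the-loop gate: the AI suggests, a clinician decides.
# The dataclass and function names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Suggestion:
    patient_id: str
    recommendation: str
    confidence: float

def triage(suggestion, clinician_review):
    """Route every AI suggestion through a clinician; the AI never acts alone."""
    decision = clinician_review(suggestion)  # the human decision is final
    return {
        "patient_id": suggestion.patient_id,
        "ai_recommendation": suggestion.recommendation,
        "final_decision": decision,
        "decided_by": "clinician",  # audit trail supports accountability
    }

# Usage: a clinician overrides a low-confidence suggestion.
s = Suggestion("p-001", "order chest X-ray", confidence=0.62)
record = triage(s, clinician_review=lambda sug: "order CT instead")
print(record["final_decision"])  # the clinician's choice, not the model's
```

Keeping both the AI recommendation and the human decision in the record is one way to support the accountability and auditability Rao describes.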
Q. Generative artificial intelligence is one of your specializations. How do you combine responsible AI with generative AI?
A. Generative AI is a more powerful and more intricate technology that has the potential to cause more harm than classic AI. It can produce erroneous results and deliver them in a confident tone.
This can result in harmful and toxic language that is harder to explain or justify. As a result, responsible AI for generative AI must include broader governance and oversight, as well as rigorous testing in a wide variety of contexts.
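Testing "in a wide variety of contexts" can start as a small harness that runs the same safety policy over many prompts. The sketch below is a toy: `fake_model` stands in for a real generative model API, and the required-disclaimer rule is an invented example policy, not an industry standard.

```python
# Toy broad-context test harness for a generative model. `fake_model` is a
# stand-in; a real harness would call an actual model and apply many checks.

REQUIRED_DISCLAIMER = "consult a clinician"  # illustrative policy

def fake_model(prompt):
    """Stand-in generative model that happens to append the disclaimer."""
    return f"General information about {prompt}. Please consult a clinician."

def passes_safety_check(response):
    """One simple check: medical answers must point users to a clinician."""
    return REQUIRED_DISCLAIMER in response.lower()

# Run the same check across varied medical contexts before deployment.
contexts = ["chest pain", "drug interactions", "pediatric dosing"]
results = {c: passes_safety_check(fake_model(c)) for c in contexts}
print(results)
```

In practice such harnesses combine many automated checks with human red-teaming, since no single rule catches confident-but-wrong output.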
Q. One area you are focusing on is the societal adoption of artificial intelligence. What does society need to understand about adopting responsible AI, especially when people go to the doctor?
A. With the widespread use of generative AI, society is increasingly turning to it for medical advice. Given that it is hard to determine when generative AI is accurate and when it is not, patients or caregivers who act on it without consulting their doctors could face disastrous consequences.
Educating the public and caregivers about the potential negative consequences of generative AI is crucial to ensuring its responsible use.
