It can be argued that one of a physician's primary responsibilities is to constantly evaluate and reassess the odds: What are the chances that a medical procedure will succeed? Is the patient at risk of developing severe symptoms? When should the patient return for further tests? Amid these critical deliberations, artificial intelligence can help reduce risk in clinical settings and help physicians prioritize care for high-risk patients.
Despite its potential, researchers from MIT’s Department of Electrical Engineering and Computer Science (EECS), Equality AI, and Boston University are calling for greater regulatory oversight of AI in a new commentary published in the October issue of NEJM AI, after the U.S. Office for Civil Rights (OCR) at the Department of Health and Human Services (HHS) issued a new rule under the Affordable Care Act (ACA).
In May, OCR published a final rule under the ACA that prohibits discrimination based on race, color, national origin, age, disability, or sex in “patient care decision support tools” – a newly established term that covers both artificial intelligence and non-automated tools used in medicine.
Developed in response to President Joe Biden’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the final rule builds on the Biden-Harris Administration’s commitment to advancing health equity by focusing on preventing discrimination.
According to senior author and EECS associate professor Marzyeh Ghassemi, “this rule represents an important step forward.” Ghassemi, who is affiliated with the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), the Computer Science and Artificial Intelligence Laboratory (CSAIL), and the Institute for Medical Engineering and Science (IMES), adds that the rule “should mandate equity-based improvements to the non-AI algorithms and clinical decision support tools already in use across clinical specialties.”
The number of AI-enabled devices approved by the U.S. Food and Drug Administration (FDA) has increased dramatically over the past decade, following the approval of the first AI-enabled device in 1995 (PAPNET Test System, a cervical screening tool). As of October, the FDA has approved nearly 1,000 AI-enabled devices, many of which are designed to support clinical decision-making.
However, researchers note that there is no regulatory body overseeing clinical risk scores generated by clinical decision support tools, even though the majority of U.S. physicians (65 percent) use these tools monthly to determine next steps in patient care.
“Clinical risk assessments are less opaque than artificial intelligence algorithms because they typically involve just a few variables combined into a simple model,” comments Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard Medical School and editor-in-chief of NEJM AI. “However, even these results are only as good as the datasets used to ‘train’ them and the variables selected by experts or studied in a specific cohort. If they influence clinical decision-making, they should be held to the same standards as their newer and much more complex AI relatives.”
Moreover, while many decision support tools do not use artificial intelligence, the researchers note that these tools are just as capable of perpetuating bias in health care, and likewise require oversight.
“Regulating clinical risk assessments poses significant challenges due to the proliferation of clinical decision support tools embedded in electronic health records and their widespread use in clinical practice,” says co-author Maia Hightower, CEO of Equality AI. “Such regulations remain necessary to ensure transparency and non-discrimination.”
But Hightower adds that under the new administration, regulating clinical risk metrics may prove “particularly complex given the emphasis on deregulation and opposition to the Affordable Care Act and some nondiscrimination policies.”