To support the continued development and commercialization of safe and effective AI-enabled medical devices, the U.S. Food and Drug Administration said Tuesday that it will provide recommendations on the contents of marketing submissions, including the documentation and information needed throughout the product lifecycle for regulatory oversight of safety and effectiveness.
WHY IT’S IMPORTANT
Following last month’s release of its final predetermined change control plan guidance for AI and machine learning-enabled devices – defining what is required to maintain AI/ML components and submit changes for regulatory review without the need for an entirely new marketing application – the FDA is providing medical device developers with key recommendations for product design, development and documentation in initial submissions.
The draft guidance, to be published in the Federal Register on Jan. 7, would be the first to provide total product lifecycle recommendations for AI-enabled devices, bringing together all design, development, maintenance and documentation recommendations if and when they are finalized, the FDA said in its Monday announcement.
The agency said it generally encourages developers and innovators to engage with it early and often as they conduct activities throughout the device’s lifecycle – planning, development, testing and continuous monitoring.
After authorizing more than 1,000 AI-enabled devices through established premarket pathways, the FDA developed the recommendations, along with findings shared by the agency, to provide “a first reference point for specific recommendations applicable to these devices, from the earliest stages of development through the device’s lifecycle,” Troy Tazbaz, director of the Digital Health Center of Excellence in the FDA’s Center for Devices and Radiological Health, said in a statement.
The agency said the new draft guidance will also address transparency and bias strategies, with detailed advice on managing the risk of bias and suggestions for thoughtfully designing and evaluating artificial intelligence.
The FDA said it will accept public comments on the draft guidance through April 7 and is specifically soliciting comments on AI lifecycle alignment, the adequacy of its generative AI recommendations, its approach to performance monitoring, and the types of information that should be provided to users of AI-enabled medical devices.
CDRH said it would also host webinars on February 18 to discuss the new draft guidance and on January 14 to discuss its final PCCP guidance released in December.
A BIGGER TREND
In a blog post last year, Tazbaz and John Nicol, a digital health specialist at the FDA’s Digital Health Center of Excellence, wrote that lifecycle management principles can help manage the complexities and risks associated with AI software in healthcare.
Because AI continually learns and adapts to real-world conditions, that adaptability poses significant risks, “such as exacerbating biases in data or algorithms, potentially harming patients and further disadvantaging underrepresented populations,” they wrote.
To address the evolving regulatory risks of AI-enabled medical devices, the FDA first proposed establishing PCCPs for AI/ML devices.
“The approach proposed by FDA in this draft guidance would ensure that essential performance considerations, including race, ethnicity, disease severity, gender, age and geographic considerations, are taken into account in the ongoing development, validation, deployment and monitoring of AI/ML-enabled devices,” said the center’s then-deputy director, Brendan O’Leary.
ON THE RECORD
“As we continue to see exciting developments in this field, it is important to recognize that there are specific considerations unique to AI-enabled devices,” Tazbaz said in a statement.