The integration of Artificial Intelligence (AI) into medical devices is revolutionizing healthcare, offering innovative diagnostics and therapeutic tools. Yet, this advancement is met with regulatory complexities under the EU’s Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR), compounded by the emerging European AI Act (AIA).
The MDR and IVDR do not currently provide definitions for “AI” or “software,” leaving a gap in regulatory clarity. Software intended for medical purposes falls under these regulations but lacks specific guidance for AI. In contrast, the draft AIA offers a detailed definition of an “AI system” as software employing specified techniques to generate outputs that influence their environments.
AI-integrated medical devices, particularly software designed to serve a medical purpose using techniques specified in Annex I of the draft AIA, find themselves at the intersection of the MDR/IVDR and the AIA. Notably, many AI-integrated medical devices defined under MDR/IVDR could also qualify as “high-risk AI systems” under the AIA. According to Article 6(1) of the draft AIA, high-risk AI systems include those under specific harmonization regulations listed in Annex II or those that are crucial components of regulated products. The MDR and IVDR are included in these harmonization regulations. Consequently, if an AI-integrated medical device falls into risk class IIa or higher and requires a Notified Body’s involvement in the conformity assessment, it also constitutes a high-risk AI system under the draft AIA.
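The dual-qualification rule just described can be sketched as a simple decision function. This is an illustrative simplification, not legal advice: it assumes that Notified Body involvement is triggered by MDR risk class IIa or higher (ignoring special cases such as class I devices with a measuring function), and the function and class names are invented for this example.

```python
# Illustrative sketch of the Article 6(1) dual-qualification rule:
# a device regulated under Annex II harmonisation legislation (MDR/IVDR)
# that requires third-party conformity assessment is also a
# "high-risk AI system" under the draft AIA.

# Simplifying assumption: Notified Body involvement is required for
# risk classes IIa and above.
MDR_CLASSES_REQUIRING_NOTIFIED_BODY = {"IIa", "IIb", "III"}

def is_high_risk_ai_system(risk_class: str, is_ai_system: bool) -> bool:
    """Return True if an AI-integrated medical device in the given MDR
    risk class would also qualify as a high-risk AI system under the
    draft AIA's Article 6(1)."""
    requires_notified_body = risk_class in MDR_CLASSES_REQUIRING_NOTIFIED_BODY
    return is_ai_system and requires_notified_body

# A class IIa diagnostic AI device: also high-risk under the draft AIA.
print(is_high_risk_ai_system("IIa", True))  # True
# A class I AI device: no Notified Body, so not high-risk via Article 6(1).
print(is_high_risk_ai_system("I", True))    # False
```

The sketch makes explicit that the AIA does not re-classify devices itself; it piggybacks on the MDR/IVDR conformity-assessment route.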
Vigilance requirements necessitate enhanced data validation and continuous monitoring to mitigate AI errors. Regulations must enforce the use of diverse datasets and ensure accountability and traceability in AI development to prevent bias and ensure fairness. Because an AI system is only as unbiased as its training data, errors arising from discrepancies in clinical data inputs, or from variations between training data and real-world data, would require mandated transparency.
Furthermore, the Agency may also need to clarify misalignments between the AIA, the MDR, and the GDPR, since the AIA requires the use of demographic data for training and validating AI systems.
Another controversial aspect is that the AIA establishes dual pre-market and post-market controls conducted by two (potentially unconnected) supervisory authorities, one for medical devices and one for AI systems. Both pieces of legislation implement their own control framework, which could lengthen market-approval procedures and discourage manufacturers from producing AI-integrated devices.