The integration of artificial intelligence into medical devices upends traditional regulatory logic. Between the requirements of the medical device framework, the constraints of the AI Act, the governance of algorithmic risk and the management of data (GDPR, Data Act), companies must rethink their compliance strategy within a complex European framework that combines legal, ethical and technological constraints at once.
A technological and regulatory upheaval
The integration of artificial intelligence (AI) into medical devices (MD) is transforming industrial, regulatory and clinical practices. From assisted imaging to predictive monitoring, these devices embed algorithms capable of learning, adjusting and taking part in medical decision-making. Faced with this technological revolution, European law provides a dual framework: Regulation (EU) 2017/745 on medical devices (MDR) and Regulation (EU) 2024/1689 on artificial intelligence (AI Act), which entered into force on August 1, 2024. For organizations, the challenge is twofold: to guarantee compliance in a complex regulatory landscape, and to turn that compliance into a strategic lever of trust.
Evolving devices under reinforced regulatory surveillance
Unlike traditional software, AI profoundly changes the nature of the medical device. It introduces behavior that is not fixed and can evolve after the device is placed on the market, through updates or retraining of the model. This algorithmic dynamic requires continuous compliance monitoring. The medical device framework, built on safety and clinical performance requirements, provides for post-market surveillance and exhaustive technical documentation.
The AI Act specifically classifies medical devices subject to third-party conformity assessment as "high-risk AI systems," subjecting them to dedicated obligations: transparency, human oversight, management of training data, traceability of decisions, risk governance and, here too, post-market surveillance.
The challenge of regulatory synergy
The two texts are complementary, but not perfectly superimposable. The medical device framework targets the product as a whole, while the AI Act specifically targets the AI component. A regulatory bridge allows, under certain conditions, compliance with one to be recognized through assessment under the other. This articulation requires coherent documentation, interdisciplinary compliance teams, and notified bodies that are designated and trained for the task. To this is added the Data Act, which requires connected medical devices to be designed with interoperability features built in, and, eventually, the new regulation establishing the European Health Data Space.
Three areas of vigilance should in particular guide organizations' action.
First, traceability: the ability to reconstruct the context in which the algorithm made a decision, through technical logs and metadata.
Next, explainability: making the system's internal logic intelligible to the user, even when it relies on complex models.
Finally, responsibility: clarifying the chain of operators and documenting algorithmic choices to guard against litigation over bias, errors or a lack of human oversight, particularly in the context of the recently revised Product Liability Directive, which considerably tightens the applicable regime.
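The traceability requirement above (reconstructing the context of an algorithmic decision through technical logs and metadata) can be illustrated with a minimal sketch. All names, fields and the log format here are illustrative assumptions, not a schema prescribed by the MDR or the AI Act; each manufacturer must define its own record structure in its quality system.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_decision(model_version: str, input_data: dict, output: dict,
                 software_env: dict, log_path: str = "decision_log.jsonl") -> dict:
    """Append one traceability record per algorithmic decision.

    Captures the metadata needed to reconstruct the decision context:
    timestamp, model version, a fingerprint of the input, the output,
    and the runtime environment. Illustrative only, not a normative schema.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store the raw input: the log stays usable for
        # reconstruction without duplicating (possibly personal) health data.
        "input_sha256": hashlib.sha256(
            json.dumps(input_data, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "software_env": software_env,  # e.g. library versions, OS, hardware
    }
    with open(log_path, "a") as f:  # append-only JSON Lines log
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only, hash-based log of this kind supports both the documentation duties of the medical device framework and the record-keeping obligations the AI Act attaches to high-risk systems, while limiting the personal data retained.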
The other major difficulty lies in the evolution of the model after certification. Reflecting technological reality, the AI Act allows, under certain conditions, post-market adaptation of the system, notably through the notion of "substantial modification." This implies active monitoring, an alert mechanism and internal processes for continuous follow-up. The company must therefore build dynamic compliance, based on regulatory watch, regular audits and robust quality management.

Beyond compliance with the texts, the governance of AI applied to medical devices must also meet ethical requirements: avoiding bias, operationalizing human oversight, ensuring clinical transparency. Some manufacturers are already choosing to go further, setting up algorithmic ethics committees or internal charters, in line with the French specificity of the "human guarantee" referred to in the Public Health Code for digital medical devices. This responsible approach can be a differentiating factor in a market increasingly sensitive to digital trust.
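The active monitoring and alert mechanism described above can be sketched as a simple drift check against the performance recorded at certification. The metric (AUC) and the tolerance threshold are illustrative assumptions, not values taken from the AI Act: what counts as a "substantial modification" must be defined in the manufacturer's own change-management plan.

```python
def check_substantial_modification(baseline_auc: float,
                                   current_auc: float,
                                   tolerance: float = 0.05) -> dict:
    """Flag whether post-market performance drift may require re-assessment.

    Compares a current performance metric against the baseline documented
    at conformity assessment. Threshold and metric are hypothetical.
    """
    delta = current_auc - baseline_auc
    alert = abs(delta) > tolerance
    return {
        "delta": round(delta, 4),
        "alert": alert,
        # Illustrative escalation logic: a breach feeds the internal
        # alert mechanism; otherwise routine surveillance continues.
        "action": ("trigger conformity re-assessment review" if alert
                   else "log and continue routine surveillance"),
    }
```

In practice such a check would run continuously on post-market data, feeding the quality management system and the post-market surveillance reports required by both regulations.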
Legal innovation as corporate strategy
The challenge is therefore not only legal; it is strategic. AI in medical devices is not merely a technological feature: it redefines the product, its value, its regulation and its relationship with the user, who is most often a patient or healthcare professional. For the company, this means treating compliance as a living, cross-functional, evolving process, and making regulation not a brake but a vector of sustainable innovation.
Key points
The integration of AI into medical devices requires companies to rethink compliance within a European framework now structured around three texts: the MDR, the AI Act and the Data Act. Five major issues must be addressed:
- The necessary articulation of the three regulations;
- Traceability, explainability and interoperability of systems;
- Allocation of responsibilities in the chain of actors;
- Continuous management of algorithmic modifications;
- Ethics as a lever of strategic differentiation.
AI requires compliance to be conceived no longer as a fixed milestone but as a living process. Applied to medical devices, this is a genuine paradigm shift and a critical challenge for market access and continued market presence.
By Nathalie BESLAY and Olivia RIME