
EU AI Act Compliance for Healthcare AI: A Complete Guide
Healthcare AI Faces a Dual Regulatory Burden
Healthcare artificial intelligence is one of the most heavily regulated domains under the EU AI Act, and for good reason. AI systems that assist in diagnosing disease, recommending treatments, triaging patients, or managing clinical workflows directly affect patient safety and fundamental rights. The European legislators recognised this by classifying most healthcare AI as high-risk, subjecting it to the full weight of the regulation's compliance obligations.
But the AI Act does not operate in isolation. Healthcare AI providers and deployers must simultaneously comply with the Medical Devices Regulation (MDR, Regulation 2017/745) and, in some cases, the In Vitro Diagnostic Medical Devices Regulation (IVDR, Regulation 2017/746). This creates a dual regulatory framework that is more demanding than what companies in most other sectors face.
The high-risk obligations for Annex III systems take effect on August 2, 2026; for AI systems that are medical devices subject to third-party conformity assessment under the MDR or IVDR, the transition period runs until August 2, 2027. This guide explains exactly what healthcare AI companies need to do, how the AI Act intersects with the MDR, and what practical steps will get you to compliance.
How Healthcare AI Is Classified Under the AI Act
The AI Act classifies healthcare AI as high-risk through two distinct pathways, depending on the nature of the system.
Pathway 1 (Annex I): AI as a Safety Component of a Medical Device
Annex I, Section A of the AI Act lists the EU harmonisation legislation that triggers high-risk classification under Article 6(1) when an AI system serves as a safety component of a product covered by that legislation, or when the AI system is itself such a product. The Medical Devices Regulation (2017/745) and the In Vitro Diagnostic Medical Devices Regulation (2017/746) are both explicitly listed in Annex I.
This means any AI system that is a medical device, or a safety component of a medical device, is automatically classified as high-risk under the AI Act, provided it requires a third-party conformity assessment under the MDR or IVDR.
Examples include:
- AI-powered diagnostic imaging systems (radiology AI, pathology AI, dermatology screening)
- AI-driven clinical decision support software that qualifies as a medical device under the MDR
- AI systems embedded in medical devices (such as AI algorithms in patient monitors, ventilators, or surgical robots)
- AI-based in vitro diagnostic tools (genetic analysis, laboratory result interpretation)
Pathway 2 (Annex III): Standalone High-Risk AI in Healthcare
Annex III, point 5(d) classifies as high-risk AI systems intended to evaluate and classify emergency calls, to dispatch or establish priority in the dispatching of emergency first response services (including medical aid), as well as emergency healthcare patient triage systems.
Additionally, AI systems used by or on behalf of public authorities to evaluate eligibility for essential public services and benefits, including healthcare services, fall under Annex III, point 5(a) (access to essential public services).
The practical result is that the vast majority of AI systems used in clinical settings, whether they qualify as medical devices or not, will be classified as high-risk under at least one of these pathways.
For a complete overview of how risk classification works, see our EU AI Act Risk Assessment Guide.
The MDR and AI Act Overlap: What You Need to Understand
One of the most complex aspects of healthcare AI compliance is navigating the intersection between the AI Act and the MDR. These are not alternative frameworks. They apply simultaneously.
Conformity Assessment Coordination
Under Article 43(3) of the AI Act, for high-risk AI systems that are medical devices or safety components of medical devices, the AI Act conformity assessment is integrated into the existing MDR conformity assessment procedure. This means:
- The notified body that conducts your MDR conformity assessment will also assess compliance with the AI Act requirements
- You do not conduct two separate conformity assessments; the AI Act requirements are evaluated as part of the MDR assessment
- However, you must still meet all the substantive requirements of both regulations
This integration is intended to reduce regulatory burden, but in practice it means your MDR notified body must have the competence to assess AI-specific requirements, a capability gap that some notified bodies are still working to close.
Where the Two Frameworks Diverge
While the conformity assessment is integrated, several AI Act requirements go beyond what the MDR demands:
| Requirement | MDR | AI Act |
|---|---|---|
| Risk management system | Required (ISO 14971) | Required (Article 9), with specific AI risks |
| Clinical evaluation | Required | Not specifically required, but performance testing is |
| Data governance | General requirements | Detailed requirements (Article 10): bias detection, representativeness |
| Transparency | Labelling and IFU | Extended transparency (Article 13): interpretability of AI outputs |
| Human oversight | Implied through IFU | Explicit requirement (Article 14): override capability |
| Post-market surveillance | Required (MDR Article 83) | Required (Article 72): AI-specific monitoring |
| Fundamental rights impact | Not required | Required for certain deployers (Article 27) |
Quality Management Systems
Both the MDR and the AI Act require quality management systems (QMS). Under Article 17 of the AI Act, providers of high-risk AI systems must establish a QMS that includes policies and procedures for regulatory compliance, design and development processes, testing and validation, data management, and post-market monitoring. If you already have an MDR-compliant QMS (aligned with ISO 13485), you can extend it to incorporate AI Act-specific elements rather than building a parallel system.
Provider Obligations for Healthcare AI
Providers of healthcare AI systems must comply with the full set of high-risk requirements in Chapter III, Section 2 of the AI Act (Articles 8-15), together with the provider obligations in Section 3 (Article 16 onwards). Here is what each obligation means in the healthcare context.
Risk Management (Article 9)
Your risk management system must address AI-specific risks that the MDR's ISO 14971-based approach may not fully capture. These include:
- Algorithmic bias: Does your diagnostic AI perform differently across patient demographics (age, sex, ethnicity, body composition)? Clinical studies have documented significant performance disparities in dermatology AI, radiology AI, and other diagnostic tools
- Distribution shift: Medical imaging equipment varies across facilities. Lab values have different reference ranges across populations. Your risk management must account for performance degradation when the system encounters data that differs from its training distribution
- Automation bias: Clinicians may over-rely on AI recommendations, reducing their independent clinical judgement. Your risk management system must assess this risk and define mitigation measures
- Cascading errors: In clinical workflows, an erroneous AI recommendation can propagate through subsequent clinical decisions. Your risk assessment must map these dependency chains
Data Governance (Article 10)
The data governance requirements for healthcare AI are particularly demanding. Under Article 10, you must ensure:
- Training data representativeness: Your training datasets must be sufficiently representative of the patient populations for whom the system is intended. A diagnostic AI trained predominantly on data from one ethnic group will not meet this standard if the system is marketed for use across diverse populations
- Bias examination: You must systematically examine training, validation, and testing data for biases that could affect patient safety or lead to discrimination. For healthcare AI, this includes examining performance across demographic subgroups
- Data quality controls: The data used to train clinical AI must meet standards of accuracy, completeness, and relevance. This includes ensuring that ground truth labels (e.g., pathology-confirmed diagnoses used to train imaging AI) are reliable
- Special categories of personal data: Under Article 10(5), providers of high-risk AI systems may process special categories of personal data (including health data) to the extent strictly necessary for bias detection and correction, subject to appropriate safeguards. This provision is critical for healthcare AI providers who need access to demographic data to conduct bias audits
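As a concrete illustration of the bias-examination duty, the sketch below computes sensitivity and specificity per demographic subgroup and flags any subgroup that trails the best-performing one. The function names, record format, and the 0.05 gap threshold are illustrative assumptions on our part, not anything Article 10 prescribes.

```python
from collections import defaultdict

def subgroup_metrics(records):
    """Compute sensitivity and specificity per demographic subgroup.

    records: iterable of (subgroup, y_true, y_pred) with binary labels,
    e.g. pathology-confirmed ground truth vs. the model's prediction.
    """
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true == 1:
            c["tp" if y_pred == 1 else "fn"] += 1
        else:
            c["tn" if y_pred == 0 else "fp"] += 1
    metrics = {}
    for group, c in counts.items():
        pos, neg = c["tp"] + c["fn"], c["tn"] + c["fp"]
        metrics[group] = {
            "sensitivity": c["tp"] / pos if pos else None,
            "specificity": c["tn"] / neg if neg else None,
            "n": pos + neg,
        }
    return metrics

def flag_disparities(metrics, max_gap=0.05):
    """Flag subgroups whose sensitivity trails the best-performing group
    by more than max_gap -- a starting point for the bias-examination record."""
    sens = {g: m["sensitivity"] for g, m in metrics.items()
            if m["sensitivity"] is not None}
    best = max(sens.values())
    return sorted(g for g, s in sens.items() if best - s > max_gap)
```

In practice the flagged subgroups would feed into the Article 9 risk file (why the disparity exists, what mitigation is planned) rather than being a pass/fail gate on their own.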
Technical Documentation (Article 11)
Your technical documentation must cover all elements specified in Annex IV, including:
- A detailed description of the AI system's intended purpose and the clinical contexts in which it is designed to operate
- The design specifications of the system, including its architecture, computational resources, and development methodology
- A description of the training, validation, and testing processes, including the data used, the metrics applied, and the results achieved
- Information about the system's performance across relevant patient subgroups
- The risk management measures adopted and their rationale
For AI systems that are also medical devices, this documentation must be coordinated with the MDR technical documentation requirements. Many companies find it practical to create a single integrated technical file that addresses both frameworks.
Transparency and Instructions for Use (Article 13)
Healthcare AI transparency obligations require you to provide deployers (hospitals, clinics, healthcare systems) with:
- Clear information about the system's intended clinical purpose and any limitations on its use
- The level of accuracy, sensitivity, specificity, and other relevant performance metrics, broken down by clinically relevant subgroups where appropriate
- Known circumstances that may adversely affect the system's performance (e.g., image quality requirements, patient populations for which the system has not been validated)
- Human oversight measures, including when and how a clinician should review or override the system's outputs
- Input data specifications, including image format requirements, minimum resolution, and data preprocessing steps
Human Oversight (Article 14)
Healthcare AI must be designed to allow effective human oversight. For clinical AI, this means:
- Clinicians must be able to understand the system's outputs well enough to make informed decisions about whether to follow or override them
- The system must include mechanisms that allow clinicians to override, reverse, or disregard the AI's outputs
- Where the AI system operates in a time-critical context (such as emergency triage), the human oversight design must balance the need for rapid decision-making with meaningful clinical review
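One way to make the override requirement concrete is to route every AI output through an explicit clinician decision that is written to an audit log, so the override capability is both real and demonstrable. The class and field names below are hypothetical; this is a minimal sketch of the pattern, not a certified design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OversightRecord:
    """Audit entry pairing one AI output with the clinician's decision."""
    ai_output: str
    clinician_id: str
    action: str            # "accepted" | "overridden" | "deferred"
    rationale: str = ""
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class ReviewQueue:
    """Minimal sketch: no AI recommendation reaches the patient record
    until a clinician explicitly accepts, overrides, or defers it."""
    def __init__(self):
        self.audit_log = []

    def review(self, ai_output, clinician_id, action, rationale=""):
        if action not in ("accepted", "overridden", "deferred"):
            raise ValueError("unknown oversight action")
        rec = OversightRecord(ai_output, clinician_id, action, rationale)
        self.audit_log.append(rec)   # retained for post-market review
        return rec
```

For time-critical settings such as emergency triage, the same log structure can be kept while the review step is performed retrospectively within a defined window, which is one way to strike the balance described above.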
Post-Market Monitoring (Article 72)
Providers must establish post-market monitoring systems that actively and systematically collect, document, and analyse relevant data about the performance of their AI systems throughout their lifetime. For healthcare AI, this includes:
- Monitoring clinical outcomes associated with the system's recommendations
- Tracking performance metrics across facilities and patient populations
- Detecting algorithmic drift, meaning gradual degradation in system performance as real-world data distributions shift over time
- Documenting and reporting serious incidents to national competent authorities under Article 73
This obligation aligns with the MDR's post-market surveillance requirements but adds AI-specific monitoring dimensions.
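Algorithmic drift detection can start with a simple distribution comparison. The sketch below computes a Population Stability Index (PSI) between a reference sample (for example, the data the model was validated on) and recent production inputs; the commonly cited PSI > 0.2 "investigate" threshold is an industry rule of thumb, not anything the AI Act specifies.

```python
import math

def psi(reference, production, edges):
    """Population Stability Index between a reference sample and recent
    production inputs, over bins defined by shared edge values.

    Larger values indicate a bigger shift in the input distribution;
    PSI > 0.2 is a common (informal) trigger for investigation.
    """
    def proportions(sample):
        counts = [0] * (len(edges) + 1)
        for x in sample:
            i = sum(1 for e in edges if x >= e)   # index of the bin x falls in
            counts[i] += 1
        eps = 1e-6  # floor empty bins so log() stays defined
        return [max(c / len(sample), eps) for c in counts]

    p, q = proportions(reference), proportions(production)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

A monitoring pipeline would run this per input feature (or per model confidence score) on a rolling window, and escalate sustained elevations into the post-market surveillance file.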
Deployer Obligations for Healthcare Organisations
Hospitals, clinics, and healthcare systems that deploy AI tools carry their own set of obligations under Article 26.
Appropriate Use and Oversight
Healthcare deployers must:
- Use AI systems in accordance with the provider's instructions for use
- Assign human oversight to clinicians with appropriate competence, training, and authority
- Ensure that input data is relevant and sufficiently representative of the patient population being served
- Monitor the AI system for risks and report any serious incidents to the provider and relevant authorities
Fundamental Rights Impact Assessment (Article 27)
Deployers that are public bodies, as well as private entities providing public services (a category that covers many healthcare providers), must conduct a fundamental rights impact assessment before putting a high-risk AI system into use. This assessment must evaluate:
- The impact on patients' rights to health, non-discrimination, and privacy
- The risks of the system producing biased or inequitable clinical recommendations across patient groups
- The measures in place to address identified risks, including clinical governance protocols and patient complaint mechanisms
Informing Patients (Article 26(11))
Under Article 26(11), deployers of Annex III high-risk AI systems that make decisions, or assist in making decisions, about natural persons must inform those persons that an AI system is being used. Separate transparency duties under Article 50 apply where patients interact directly with an AI system, such as a symptom-checking chatbot. For healthcare deployers, this means implementing clear communication processes (in intake forms, patient portals, and clinical consultations) that disclose the role of AI in their care.
Practical Compliance Roadmap for Healthcare AI
Phase 1: Inventory and Classification (Now)
- Map all AI systems in use or under development across your clinical and operational workflows
- Determine classification under both the AI Act (Annex I vs. Annex III) and the MDR (device class)
- Identify your role (provider, deployer, or both) for each system
- Assess notified body readiness: confirm that your MDR notified body has the competence to assess AI Act requirements
Use our EU AI Act Compliance Checklist to structure this initial assessment.
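To structure the inventory step, a minimal triage helper can record which pathway each system likely falls under. Everything here (the inputs, the rules, and the wording) is a simplified assumption for illustration; actual classification decisions need legal review against the full Annexes.

```python
def classification_pathway(is_medical_device: bool,
                           needs_notified_body: bool,
                           affects_public_service_access: bool,
                           is_emergency_triage: bool) -> list[str]:
    """Illustrative first-pass triage of one AI system's likely AI Act
    classification pathway. A starting point for legal review, not a
    determination."""
    pathways = []
    if is_medical_device and needs_notified_body:
        pathways.append(
            "high-risk via Annex I (MDR/IVDR product or safety component)")
    if affects_public_service_access:
        pathways.append(
            "high-risk via Annex III, point 5(a) (access to essential public services)")
    if is_emergency_triage:
        pathways.append(
            "high-risk via Annex III, point 5(d) (emergency healthcare triage)")
    return pathways or ["no high-risk pathway matched; verify against the full Annexes"]
```

Running every system in the inventory through a checklist like this produces the classification column of the Phase 1 map and makes it obvious which systems face both frameworks at once.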
Phase 2: Gap Analysis and Planning (Q1-Q2 2026)
- Conduct gap analysis comparing current MDR compliance artifacts against AI Act requirements
- Extend your QMS to incorporate AI Act-specific elements (data governance, bias monitoring, human oversight protocols)
- Audit training data for representativeness, bias, and quality
- Plan conformity assessment updates with your notified body
Phase 3: Implementation (Q2-Q3 2026)
- Update technical documentation to address all Annex IV requirements
- Implement or enhance bias detection and monitoring across patient demographics
- Develop transparency materials for both deployers and patients
- Train clinical staff on human oversight responsibilities and AI system limitations
- Establish post-market monitoring systems with AI-specific performance metrics
Phase 4: Validation and Deployment (Before August 2, 2026)
- Complete conformity assessment through your MDR notified body, incorporating AI Act requirements
- Conduct fundamental rights impact assessments for public healthcare deployments
- Test incident reporting procedures to ensure readiness for Article 73 obligations
- Document everything. Thorough documentation is your primary defence in any regulatory inquiry
Penalties and Market Access Risks
Non-compliance with the AI Act's high-risk requirements can result in administrative fines of up to 15 million EUR or 3% of worldwide annual turnover, whichever is higher. For healthcare AI companies, the market access implications may be even more consequential. A non-compliant AI system can be withdrawn from the EU market, and a provider can be prohibited from placing new products until compliance is demonstrated.
Given that the EU represents one of the world's largest healthcare markets, losing market access is a strategic risk that far exceeds the financial penalties. For more on enforcement mechanisms, see our guide on EU AI Act Fines and Enforcement.
The Strategic Opportunity in Compliance
Healthcare AI compliance is expensive and demanding. But it also represents a competitive advantage. Healthcare systems and hospitals are increasingly making procurement decisions based on regulatory compliance and the ability to demonstrate trustworthy AI. Providers who achieve full AI Act compliance early will be better positioned to win contracts, build clinical trust, and scale across the EU market.
The companies that treat compliance as a core product quality attribute, rather than a regulatory burden, will lead the next generation of healthcare AI.
To understand which AI practices are banned outright under the AI Act (including certain biometric categorisation systems that may be relevant in healthcare contexts), see our guide on EU AI Act Prohibited Practices.
For a comparative review of compliance management tools, see Best EU AI Act Compliance Tools Compared.
Start Your Free Compliance Assessment
Healthcare AI regulation is complex, but the path to compliance is clear. Start now, work systematically, and invest in the processes and documentation that will sustain compliance over the long term.