
EU AI Act Enforcement Begins: What Health Companies Need to Know

Mandatory conformity assessments, bias audits, and post-market surveillance reshape the regulatory landscape for clinical AI

Published 2025-05-12 · Policy

On May 1, 2025, the European Union began enforcing the high-risk provisions of Regulation (EU) 2024/1689 — the AI Act — marking the first comprehensive regulatory regime for artificial intelligence adopted by any major jurisdiction. For companies developing, deploying, or distributing AI systems in healthcare, the implications are immediate and far-reaching. Most clinical AI tools are now classified as high-risk systems subject to conformity assessments, mandatory documentation, and ongoing post-market surveillance obligations.

The enforcement deadline had been anticipated since the regulation's publication in the Official Journal of the European Union in July 2024, but the practical reality of compliance has proven more complex than many organisations expected. National competent authorities in Germany, France, Italy, and Spain have already begun issuing guidance documents, and the first wave of market surveillance actions is expected before the end of 2025.

Which Health AI Systems Are Affected

The AI Act classifies AI systems into four risk tiers: unacceptable, high, limited, and minimal. Most healthcare applications fall into the high-risk category, either through Annex III's list of sensitive use cases or through Article 6(1), which covers AI systems used as safety components of products, or as products themselves, that are subject to existing EU product safety legislation. In practice, any AI system that qualifies as a medical device under the Medical Device Regulation (MDR) or the In Vitro Diagnostic Regulation (IVDR) and undergoes third-party conformity assessment is automatically high-risk.

This encompasses a wide range of technologies: diagnostic imaging algorithms, clinical decision support systems, AI-powered drug discovery tools used to inform regulatory submissions, robotic surgery controllers, patient monitoring platforms, and triage tools. Even AI systems that do not themselves make clinical decisions but influence clinical workflows — such as hospital resource allocation optimisers or appointment scheduling algorithms that prioritise patients by predicted acuity — may fall within scope if they affect patient safety.

The regulation also captures AI systems used in the administration of healthcare, including insurance eligibility assessment, claims processing, and benefits determination. Companies providing these services to European healthcare payers and insurers must now demonstrate compliance with the same quality management, documentation, and transparency requirements that apply to clinical AI.

Conformity Assessment Requirements

High-risk AI systems must undergo a conformity assessment before being placed on the EU market. For most health AI systems, this means engagement with a notified body — an independent conformity assessment organisation designated by an EU member state. The process is broadly analogous to the CE marking process for medical devices under the MDR, but with additional AI-specific requirements.

The conformity assessment must demonstrate that the system satisfies requirements across several domains: risk management, data governance, technical documentation, transparency and provision of information to users, human oversight, accuracy, robustness, and cybersecurity. Providers must maintain a quality management system covering all stages of the AI lifecycle, from design and development through deployment and post-market monitoring.

Dr. Elisabeth Steinhauer, a regulatory affairs consultant at the Berlin-based firm MedTech Compliance, described the documentation burden as "significant but manageable" for companies with existing MDR quality management systems. "The AI Act layers additional requirements on top of what medical device companies are already doing, but the underlying framework is familiar. The challenge is mostly in the AI-specific provisions — bias testing, data provenance documentation, and the requirement to describe how the model reaches its outputs in terms users can understand."

Bias Auditing and Data Governance

One of the AI Act's most consequential provisions for health AI is the requirement for bias auditing. Providers must test their systems for biases that could lead to discrimination against persons or groups on the basis of race, ethnicity, gender, disability, age, or other protected characteristics. The testing must be performed using validated methodologies and documented in the technical file submitted to the notified body.

This requirement directly addresses a well-documented problem in clinical AI. Studies have shown that dermatology algorithms trained predominantly on lighter skin tones perform significantly worse on darker skin, that pulse oximeters systematically overestimate oxygen saturation in patients with darker skin, and that chest X-ray triage systems trained on data from one healthcare system may not generalise to populations with different disease prevalence patterns.

The regulation requires providers to describe the training, validation, and testing datasets used, including their size, scope, provenance, and known limitations. Where feasible, providers must demonstrate that their datasets are "relevant, sufficiently representative, and to the best extent possible, free of errors and complete" — a standard that is easier to articulate than to satisfy, particularly for rare disease applications where diverse training data is inherently scarce.
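
As a sketch of what such a dataset description might look like when kept as a structured record, the example below uses hypothetical field names and values rather than any official schema:

```python
# Illustrative dataset description covering the fields the AI Act asks
# providers to document: size, scope, provenance, and known limitations.
# Field names and example values are hypothetical, not an official template.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetDescription:
    name: str
    role: str                      # "training" | "validation" | "testing"
    size: int                      # number of records or images
    scope: str                     # population and clinical setting covered
    provenance: str                # how and where the data was collected
    known_limitations: list[str] = field(default_factory=list)

training_set = DatasetDescription(
    name="derm-images-v3",
    role="training",
    size=48_211,
    scope="Adult dermatology outpatients, three EU hospital networks",
    provenance="Retrospective collection under ethics approval 2021-044",
    known_limitations=[
        "Fitzpatrick skin types V-VI under-represented (4% of images)",
        "No paediatric cases",
    ],
)
print(json.dumps(asdict(training_set), indent=2))
```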

Transparency and Explainability

The AI Act requires that high-risk systems be designed to be sufficiently transparent to enable users to interpret the system's output and use it appropriately. For healthcare providers, this means that clinicians using AI-assisted diagnostic tools must be able to understand, at least at a functional level, why the system produced a particular recommendation.

The regulation stops short of mandating full algorithmic explainability — a requirement that would be technically infeasible for many deep learning systems. Instead, it requires that the system's output be accompanied by "clear information on its intended purpose, accuracy, and the level of confidence in its predictions," and that the system be designed to allow for "appropriate human oversight measures."
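
One way to read this in engineering terms is that each prediction should travel with its context. The payload below is a hypothetical internal convention for illustration; the field names and values are not taken from the regulation.

```python
# Sketch of a prediction payload carrying the contextual information
# described above: intended purpose, reported accuracy, and confidence.
prediction = {
    "finding": "suspicious lesion detected",
    "confidence": 0.87,  # calibrated probability shown alongside the output
    "intended_purpose": "adjunct triage of dermoscopic images, not diagnosis",
    "reported_accuracy": {"sensitivity": 0.91, "specificity": 0.84},
    "oversight_note": "requires dermatologist review before clinical action",
}
print(prediction["finding"], f"(confidence {prediction['confidence']:.0%})")
```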

In practice, this is likely to accelerate the adoption of explainability techniques such as saliency maps, attention visualisation, and counterfactual explanations in clinical AI products. Several notified bodies have indicated that they will expect to see some form of interpretability mechanism in the technical documentation, even if the regulation does not prescribe a specific method.
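
As one concrete example, a basic gradient saliency map can be produced in a few lines. The toy network and random input below stand in for a real diagnostic model and scan; this is a minimal sketch of the technique, not a production interpretability pipeline.

```python
# Minimal gradient-based saliency sketch: the gradient of the predicted
# class score with respect to the input marks the pixels the score is
# most sensitive to. Toy CNN and random input are stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(            # hypothetical stand-in classifier
    nn.Conv2d(1, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)  # stand-in scan
logits = model(image)
predicted = logits.argmax(dim=1)

# Backpropagate the predicted-class score to the input pixels.
logits[0, predicted.item()].backward()
saliency = image.grad.abs().squeeze()
print(saliency.shape)  # (64, 64) heatmap to overlay on the input image
```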

Post-Market Surveillance and Reporting

Once a high-risk AI system is placed on the market, the provider must establish a post-market surveillance system that continuously monitors the system's performance, collects data on adverse events and near-misses, and feeds this information back into the risk management process. Providers must submit periodic safety update reports to the relevant national authority, with frequencies determined by the system's risk profile.

Serious incidents, defined to include events that result in death, serious deterioration of health, or serious harm to patients, must be reported to the national competent authority within 15 days of the provider becoming aware of them; deaths and widespread infringements carry even shorter deadlines. These compressed timelines reflect the EU's concern about the speed at which AI systems can propagate errors at scale.
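
Operationally, providers need to track these deadlines from the moment of awareness. A minimal sketch, assuming a simple internal incident record (the field names and default window are an illustrative convention, not an official schema):

```python
# Illustrative deadline tracker for serious-incident reporting. The 15-day
# default reflects the window described above; everything else here is a
# hypothetical internal convention.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class SeriousIncident:
    incident_id: str
    aware_on: date            # date the provider became aware
    description: str

    def reporting_deadline(self, window_days: int = 15) -> date:
        return self.aware_on + timedelta(days=window_days)

    def is_overdue(self, today: date) -> bool:
        return today > self.reporting_deadline()

incident = SeriousIncident("INC-2025-007", date(2025, 5, 2),
                           "Missed triage alert")
print(incident.reporting_deadline())           # 2025-05-17
print(incident.is_overdue(date(2025, 5, 20)))  # True
```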

The post-market surveillance requirement also extends to performance drift — the gradual degradation of model accuracy over time as the input data distribution shifts away from the training distribution. Providers must implement mechanisms to detect performance drift and must have procedures in place to retrain or recalibrate models when drift exceeds predefined thresholds.
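
One common drift check is the population stability index (PSI), which compares the distribution of a live input feature against its training-time reference. The sketch below uses the widely cited rule-of-thumb alert threshold of 0.2; that value is an industry convention, not one mandated by the Act.

```python
# Illustrative performance-drift check using the population stability
# index (PSI) on a single input feature. The 0.2 alert threshold is a
# common rule of thumb, not a regulatory requirement.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample and a live sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions, avoiding zero bins before the log.
    exp_frac = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_frac = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)   # training-time feature values
live = rng.normal(0.4, 1.2, 10_000)        # shifted production values

psi = population_stability_index(reference, live)
if psi > 0.2:  # rule-of-thumb threshold for material drift
    print(f"PSI={psi:.3f}: drift threshold exceeded, trigger review")
```

In a deployed system the same check would run on a schedule over every monitored feature, with threshold breaches feeding the retraining and recalibration procedures the regulation requires.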

Implications for Non-EU Companies

The AI Act applies to any AI system placed on the EU market or whose output is used within the EU, regardless of where the provider is established. American, Chinese, and Indian health AI companies selling into European markets must comply with the same requirements as EU-based firms. This extraterritorial reach is modelled on the GDPR's approach and is expected to have a similar global regulatory ripple effect.

Several major US health AI companies — including Tempus, PathAI, and Viz.ai — have already announced EU compliance programmes. Smaller companies face a steeper climb. Dr. Raj Chandra, founder of a Mumbai-based radiology AI startup, told the Financial Times that conformity assessment costs alone could exceed $300,000, a figure that is "existential for a seed-stage company trying to enter the European market."

The European Commission has acknowledged this concern and is exploring mutual recognition agreements with the US FDA and other regulators to reduce duplicate assessment burdens. No agreement has been finalised, but preliminary discussions are underway.

For ongoing coverage of AI regulation and policy, visit our AI Ethics and Policy research repository. Related articles include the WHO's global guidelines on AI in health and Woebot Health's FDA breakthrough designation, which explores the contrasting US regulatory pathway for digital therapeutics.
