Liability under the AI Act in healthcare: an exploration

An increasing number of healthcare providers use Artificial Intelligence (AI). Alongside clear benefits, such as reducing administrative burdens, the use of AI also introduces risks. What happens if a patient suffers harm during a medical treatment in which an AI tool is used? It is a common question and a complex one, and unfortunately it cannot be answered in the abstract: the outcome depends heavily on the specific circumstances. It is also too broad for a single blog post, given the highly fact-specific nature of liability law. Yet a colleague had such a good idea that I could not resist: “Why not explore this through a case study?”

Case study

Hospital De Linde uses an AI system called Radio AI. This system analyses radiology images and can detect breast cancer. A healthcare professional at De Linde uses the system for patient X. After analysis, Radio AI concludes that patient X does not have a tumour. Under the AI Act, human oversight is required. The healthcare professional reviews the scan and also does not identify any abnormalities. Patient X is sent home.

Several months later, patient X starts to worry and returns to De Linde. A new scan is taken and assessed by the AI system. This time, the news is bad: a tumour is visible. The old images are retrieved and, on closer examination, even these show a small abnormality. A breast-conserving operation is no longer possible, although it might have been feasible at an earlier stage. In short, patient X has suffered harm.

The crux of this case is that the harm may stem from a mistake, that is, negligent conduct by the healthcare professional and/or the hospital, or from a defect in the AI system. Let us explore these options. For clarity, this blog does not aim to provide an exhaustive overview of liability grounds and their requirements; it offers a first exploration based on the case study above.

Negligence (human)

The healthcare professional reviewed the system’s output but did not spot the false negative. Is there a basis for liability? Patient X will first need to show that, for example, the healthcare professional or the hospital made a mistake. Let us zoom in on the potential liability of the healthcare professional.

A medical treatment agreement was concluded between the healthcare professional and patient X. Whether there has been a mistake is therefore assessed under the duty of care standard in the Dutch Medical Treatment Agreement Act (WGBO) [1]. This requires the healthcare professional to exercise the care of a good healthcare professional and to act in line with the responsibility arising from professional and quality standards [2]. The standard of care that applies when using an AI system is still developing. One useful reference is the Guideline for qualitative diagnostic and prognostic applications of AI in healthcare.

Negligent conduct can also arise from breaching a statutory duty or violating a statutory prohibition. This may occur when the AI Act is not followed, for example if the requirement for human oversight is ignored, or if the AI system’s CE marking, required under the Medical Device Regulation (MDR), is not checked.

So, did the healthcare professional at De Linde act with sufficient care? This must be assessed against all facts and circumstances and is highly complex. The case study above does not provide enough information to draw firm conclusions. However, we can consider potentially relevant aspects, such as the failure to identify the false negative during human oversight. If, for example, the protocol or policy governing use of the AI system was not followed, this could suggest a lack of due care [3]. Liability also becomes more likely if Radio AI was not used in accordance with its instructions for use.

Auxiliary tools

The healthcare professional might also be liable under strict liability for the use of an unsuitable auxiliary tool [4]. A tool is unsuitable when it does not contribute to fulfilling the obligation at issue, in this case the medical treatment agreement. Here, for instance, the system’s accuracy could be relevant. Accuracy may indicate the extent to which the system can contribute to the performance of the treatment agreement [5]. A lower accuracy rate does not automatically mean unsuitability. Again, all facts and circumstances must be considered.

It is also important to note that the manufacturer must generally be addressed first when a product defect is involved (see the section on defects below).

Central liability of the healthcare institution

An employer can in principle be held liable when an employee causes harm to third parties in the course of their work [6]. In healthcare, an additional concept applies: central liability. Where a medical treatment agreement exists and a healthcare professional makes a mistake, the hospital may also be liable in certain cases. This depends in part on whether the agreement is concluded with the individual professional or with the healthcare institution.

Defect (the system)

If Hospital De Linde has developed Radio AI itself and places it on the market, the hospital may also be liable under product liability rules. Note: a hospital will also qualify as a manufacturer when it uses a purchased AI system under its own name or brand or makes substantial modifications to it.

The Product Liability Directive is being revised so that AI also falls within its scope [7]. The directive still needs to be finalised and transposed into national law. The future AI Liability Directive will also be relevant [8]. That directive introduces certain presumptions of proof; see our earlier blog for more details.

A manufacturer is liable for a defect in its product. A product is defective when it does not provide the safety that a person is entitled to expect [9]. Relevant factors include the product’s presentation, its reasonably expected use and the time at which it was placed on the market. If a product does not meet statutory requirements, liability for any resulting harm will be readily established. For example, if De Linde has failed to comply with requirements under the AI Act or other legislation such as the MDR, including post-market monitoring, and a bug therefore went undetected and unresolved, causing harm, liability will be likely.

Conclusion

Determining who is liable when an AI system is used can be a complex puzzle. It is essential for healthcare institutions to understand the liability risks associated with AI. Below are some takeaways to help you get started:

  • Be aware that if you develop an AI system in-house, use an existing system under your own name or brand, or make substantial modifications to an AI system, you qualify as a manufacturer under the Product Liability Directive and can be held liable for defects. Reflect this risk in your contracts with customers and make clear arrangements about liability.

  • As a healthcare institution, check your insurance coverage and whether it extends to harm caused by AI systems.

  • Follow the instructions for use provided by the AI system’s supplier. If the system is not used in line with these instructions, it becomes harder to demonstrate that you acted with due care.

  • When contracting with AI suppliers, be alert to liability limitations.

  • When procuring an AI system, ensure that the supplier holds the correct technical documentation and that the system has a valid CE marking. Failing to verify this may amount to a breach of your duty of care as a healthcare institution and could lead to liability if a defect arises.

  • Under the AI Act, healthcare institutions using AI must comply with specific obligations such as ensuring human oversight. Make sure your organisation meets these requirements. Also establish internal policies for responsible AI use and ensure that staff are aware of and follow these policies. Check, among other things, whether these policies align with the instructions for use of the AI systems in operation.

If you have any questions about liability relating to AI systems, feel free to get in touch. We are happy to help.



[1] Article 7:453 WGBO.

[2] Article 1 paragraph 1 Wkkgz. See also Supreme Court 9 November 1990, ECLI:NL:HR:1990:AC1103.

[3] Supreme Court 2 March 2001, ECLI:NL:HR:2001:AB0377. Here, the Supreme Court held that if a departure from the protocol is not explained, this may, depending on the circumstances, constitute an attributable breach.

[4] Article 6:77 Dutch Civil Code.

[5] J. van Staalduinen, “Medical liability and AI: Which roads lead to Rome? Overview, analysis and issues”, Nederlands Juristenblad no. 32, p. 2576.

[6] This is laid down in Article 6:170 Dutch Civil Code. Where there is a contractor–client relationship, Article 6:76 Dutch Civil Code applies (liability for auxiliary persons).

[7] Proposal for a Directive of the European Parliament and of the Council on liability for defective products, COM(2022) 495 final.

[8] Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), COM(2022) 496 final.

[9] Article 6:186 Dutch Civil Code.
