“AI Liability in Health” - A Moot Court organised by UNIVIE at the CPDP Conference 2024

by Clara Saillant, Theresa Henne & Lorraine Maisnier-Boché

On 22nd May 2024, UNIVIE returned to the Computers, Privacy and Data Protection (CPDP) Conference in Brussels for a new moot court edition. Last year at CPDP, UNIVIE organised a moot court on the topic “Value of Health Data”, based on the (now adopted) European Health Data Space regulation (EHDS)[1] and the question of how to reward data providers for their contribution to the development of AI models. This year’s edition focused on “AI Liability in Health” and the question whether a company, named AI4MIND, can be held liable for damage suffered by patients because of an incorrect diagnosis. At the centre of the discussion were the intricacies of the (soon to be published) AI Act[2], in particular its transparency obligations, and the (Revised) Product Liability Directive[3].

In the workshop, the moot court participants split into two groups, one representing the patients and the other the AI company. Additionally, one participant joined our judge, Lorraine Maisnier-Boché, as assisting judge. The question at hand, for which the participants needed to argue their case, was whether there was a causal link between the AI company’s actions and the damage suffered by the patients.

To help the participants in this fictional court case prepare the arguments they would present to the judges, UNIVIE provided “evidence” consisting of a detailed dataset description and the data governance and transparency measures of AI4MIND. The participants were also provided with key provisions of the (Revised) Product Liability Directive 2022/0302.

The details of the scenario were as follows: this fictional court case takes place in the EU, where an AI model is developed by the company AI4MIND based on MRI images, with the aim of assisting radiologists in detecting prostate cancer and providing treatment suggestions. Eighteen patients discover that they have been misdiagnosed and suffer immaterial damage as a result. The group of misdiagnosed patients therefore sues both the clinic and the AI provider. The clinic’s liability, and the question of the doctors’ malpractice, is being litigated in a separate trial. Hence, the moot court focused exclusively on the liability of the AI provider.

In 2019, Thomas Davenport and Ravi Kalakota wrote in the Future Healthcare Journal that “[t]here are already a number of research studies suggesting that AI can perform as well as or better than humans at key healthcare tasks, such as diagnosing disease.”[4] Even if algorithms are indeed becoming more and more performant, zero risk does not exist and mistakes, such as misdiagnoses, can happen. But when is a company providing an AI model that assists a clinician liable if something goes wrong?

The patients argued that they had been wrongly treated, which led them to suffer psychological distress requiring medical treatment. As all eighteen patients share similar Middle Eastern origins, they suspected that the AI company had not sufficiently tested for bias and had not communicated transparently about the limitations of the model regarding ethnic origin.

The AI company dismissed these allegations, arguing that its model had been rigorously tested and validated. Furthermore, the company argued that, as there is no such thing as zero risk, adverse outcomes resulting from unforeseeable factors or from errors made by healthcare professionals are inherent to AI models themselves. It stressed that this is precisely why the AI only assists medical personnel, who remain responsible for the final decision.

The participants, in their respective groups, carefully discussed every aspect of the case and the evidence provided, and argued how the provisions of the Revised Product Liability Directive would best support their case. After two rounds of arguments and rebuttals, the judge and the assisting judge drafted the verdict.

The decision found the defendant liable for the damage suffered by the patients because of the defective nature of the AI system. The defect consisted of an insufficient level of information in the product documentation about the limitations of the AI system, limitations that arose from the absence of training and testing on ethnic data. The causal link between the damage and the defect is based not only on the existence of an autonomous decision by the AI system (a medical device) but also on the presumption established by the new Product Liability Directive in contexts of technical or scientific complexity.

The full reasoning of the judgment can be found below.

We are very grateful to our fantastic participants who brought this fictional court case to life, and we are extremely grateful to our wonderful judge, for a second year in a row, Lorraine Maisnier-Boché!


Court decision by the “Judge” Lorraine Maisnier-Boché

Summary

The defendant has been found liable for the damage sustained by the patients because of the defective nature of its AI system, characterized by an insufficient level of information in the product documentation on the limitations of the AI system (absence of training and testing on ethnic data). The causal link between the damage and the defect is demonstrated not only by the existence of an autonomous decision by the AI system (a medical device) but also by applying the presumption allowed by the new Product Liability Directive in the context of technical or scientific complexity (in addition to the defect demonstrated by the plaintiffs).

Scope

The AI system is software that is considered a product falling within the scope of the “New” Product Liability Directive.

Criteria

Necessity to demonstrate that:

- the product was defective;

- damage was sustained by the plaintiffs;

- there is a causal link between such defect and damage.

Reasoning

The damage appears to be demonstrated by the misdiagnosis of the patients, which led to unnecessary medical treatment (incidentally causing risk for the patients’ health, as such treatments could prove harmful to patients who do not suffer from the diagnosed pathology), psychological harm through the emotional distress and severe depression caused by the diagnosis, and financial damage resulting from the cost of the medical treatments.

The existence of the defect is strongly debated between the parties: the defendant considers the misdiagnosis to have been caused by unforeseeable factors and end-user errors, while the plaintiffs contend that the AI system was biased because it was not trained on ethnic populations and was thus not subject to sufficient testing, and that, in any case, the AI system developer should have disclosed the limitations of the AI system regarding ethnicity in a clear and transparent manner.

On whether the AI system should have been trained on ethnic data:

Under art. 7 of the PLD, “A product shall be considered defective if it does not provide the safety that a person is entitled to expect or that is required under Union or national law. In assessing the defectiveness of a product, all circumstances shall be taken into account, including: (…) reasonably foreseeable use of the product”. Thus, the specific needs of the users should be taken into account, in this case the practice of the health professionals. In this regard, the variety of ethnic backgrounds needed to ensure a proper diagnosis could be considered a reasonably foreseeable use.

However, under established case law regarding computer programs, in the absence of an expression of needs by the customer or of a request for customization of the AI system to the customer’s needs (e.g. through a specific calibration phase of the AI system), the developer cannot be considered to be in breach of its obligations because of the absence of training on ethnic data.

On whether the level of information on the AI system’s limitations was sufficient

Under art. 10(2)(b) of the PLD, “Member States shall ensure that a claimant is required to prove the defectiveness of the product, the damage suffered and the causal link between that defectiveness and that damage. 2. The defectiveness of the product shall be presumed where any of the following conditions are met (…) the claimant demonstrates that the product does not comply with mandatory product safety requirements laid down in Union law or national law that are intended to protect against the risk of the damage suffered by the injured person”.

As a potential medical device (likely class IIa), the AI system would be subject to specific requirements regarding the information provided in the instructions for use and the language used. The absence of adequate information on the limitations of the AI system and/or on the indications of such a medical device could qualify as a breach of such obligations.

Under the new AI Act, such an AI system would qualify as a high-risk system, subject to additional requirements in terms of transparency as well as human oversight. The absence of adequate information on the limitations of the AI system could likewise qualify as a breach of such obligations.

Thus, the presumption of defectiveness triggered by non-compliance with product safety regulations, in this case the Medical Device Regulation and AI Act, could apply.

The link of causality between the defect and the damage is of the essence in this case.

The scope of liability of the healthcare professionals who made a diagnosis based on a faulty result from the AI system should be analyzed in separate proceedings.

In addition, regardless of the liability of the healthcare provider, the output of the AI system can be considered the result of an autonomous decision-making process. In this regard, the court may rely by analogy on the CJEU SCHUFA case C-634/21, which confirmed that a score must be regarded as an ‘automated individual decision’ in so far as the clients of the score provider attribute to it a determining role in the final decision.

In addition, under art. 10(4) of the PLD “A national court shall presume the defectiveness of the product or the causal link between its defectiveness and the damage, or both, where, despite the disclosure of evidence in accordance with Article 9 and taking into account all the relevant circumstances of the case: (a) the claimant faces excessive difficulties, in particular due to technical or scientific complexity, in proving the defectiveness of the product or the causal link between its defectiveness and the damage, or both; and (b) the claimant demonstrates that it is likely that the product is defective or that there is a causal link between the defectiveness and the damage, or both”.


[1] European Health Data Space Regulation, st07553-en24.pdf (europa.eu)

[2] Texts adopted - Artificial Intelligence Act - Wednesday, 14 June 2023 (europa.eu)

[3] Revised Product Liability Directive, TA (europa.eu)

[4] Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019 Jun;6(2):94-98. doi: 10.7861/futurehosp.6-2-94. PMID: 31363513; PMCID: PMC6616181.