Accountability in Medical Autonomous Expert Systems: Ethical and Epistemological Challenges for Explainable AI

Year Group 2020/21

Medical centres and general health-care systems are making a rapid and irreversible shift toward incorporating autonomous AI decision-making systems (traditionally known as ‘expert systems’) into their practice. The main aim of this NIAS-Lorentz theme-group project is to address the epistemic and normative dimensions of explainable expert systems.

About the Topic

Medical centres and general health-care systems are making a rapid and irreversible shift toward incorporating autonomous decision-making expert systems into their practice. Their importance for the future of health care is very concrete, for they promise to advance the analysis of medical evidence, provide fast treatment recommendations, and render reliable diagnoses. Being able to explain the results of expert systems, as well as their reliability and trustworthiness, is of paramount importance for a morally admissible medical practice. Unfortunately, there is little understanding of how such an explanation is possible, and there is effectively no account of its structure. The best researchers have produced are classifications and weak predictions, none of which offer the epistemological virtues (e.g., understanding) that underpin right moral action. Furthermore, if these expert systems were used in actual medical practice, they would be in flagrant violation of Recital 71 and Article 22 of the GDPR, which establish the “right to explanation”.

This project aims to change the current state of our understanding of expert systems. We propose a novel approach that combines epistemological studies with their ethical implications. Concretely, we propose to study the structure of explanation in expert systems (including notions such as opacity, trustworthiness, auditability, and reliability) and the ethical implications that follow from bona fide explanations as well as from incomplete, poor, or bad ones. Such ethical analysis includes the study of notions such as accountability, bias, and discrimination in the context of explainable expert systems.

NIAS-Lorentz Program

The NIAS-Lorentz Program, set up in 2006, is a collaboration between NIAS and the Lorentz Center. The program promotes cutting-edge interdisciplinary research that brings together perspectives from the humanities and/or social sciences with the natural and/or technological sciences. We hold that important and exciting advances are to be expected from research at the interface of different disciplines.