Accountability in Medical Autonomous Expert Systems: ethical and epistemological challenges for explainable AI
What do we know when we explain the results of medical AI? And what are the ethical implications of such explanations?
Medical centres are making a rapid and irreversible shift toward incorporating medical AI systems into their practice. Their importance for the future of healthcare is concrete: they will advance the analysis of medical evidence, provide fast treatment recommendations, and render reliable diagnoses. Being able to explain the results of medical AI systems is of paramount importance for a morally admissible medical practice. This project therefore combines epistemological studies with their ethical implications. We propose to study the structure of explanation in medical AI systems and the ethical implications that follow from bona fide explanations. This ethical analysis includes the study of notions such as accountability, bias, and discrimination in the context of explainable medical AI.
- Juan M. Durán and Nico Formanek (2018) Grounds for Trust: Essential Epistemic Opacity and Computational Reliabilism. Minds and Machines, 28(4):645–666.
- Juan M. Durán (2018) Computer Simulations in Science and Engineering: Concepts, Practices, Perspectives. Springer.
- Juan M. Durán (2017) Varying the explanatory span: scientific explanation for computer simulations. International Studies in the Philosophy of Science, 31(1):27–45.