The Epistemic Condition for Responsibility and Accountability as Capacity in Medical AI
Which skills and capacities are required of physicians to interact responsibly with emerging medical AI systems? Are those skills necessary to sustain accountability for those systems?
It is often argued that “AI experts and operators [must be] able and willing to communicate, explain and give reasons for what they are doing […].” In my project, I will scrutinize how this requirement for explanation relates to the concept of responsibility in the context of medical AI. It is usually held that one cannot be accountable for things one did not know or could not have foreseen – the so-called epistemic condition for responsibility. At the same time, it is widely acknowledged that willful ignorance does not undermine accountability. I will therefore investigate whether and how this translates into a requirement for physicians to sufficiently understand and explain medical AI systems, and to learn how to interact with those systems responsibly.
- Sand, M. (2020) Did Alexander Fleming deserve the Nobel Prize? In: Science and Engineering Ethics, 26, pp. 899–919, DOI: 10.1007/s11948-019-00149-5
- Sand, M. (2019) On “not having a future”. In: Futures, 109, pp. 98–106, DOI: 10.1016/j.futures.2019.01.002
- Van de Poel, I.; Sand, M. (2018) Varieties of responsibility: Two problems of responsible innovation. In: Synthese, DOI: 10.1007/s11229-018-01951-7