Beckers, Sander

NIAS-Lorentz Theme-Group Fellow

Accountability in Medical Autonomous Expert Systems: ethical and epistemological challenges for explainable AI

Research Question

How can we rely on automated decision-making procedures in a field as important as medicine unless we understand how the underlying AI system reaches its decisions? To gain that understanding, we need causal models that explain, at the right level of detail, which features contributed to a given outcome.
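
To make the idea of "features contributing to an outcome" concrete, here is a minimal sketch, not taken from the project itself: a toy decision rule stands in for an AI system, and a "but-for" test intervenes on one feature at a time to check whether the outcome changes. All names (fever, rash, but_for_cause) are hypothetical.

```python
def model(fever: bool, rash: bool) -> bool:
    """Toy diagnostic rule standing in for an AI system's decision."""
    return fever and not rash

def but_for_cause(model, inputs: dict, feature: str, outcome) -> bool:
    """Intervene by flipping `feature`; report whether the outcome changes."""
    intervened = dict(inputs, **{feature: not inputs[feature]})
    return model(**intervened) != outcome

inputs = {"fever": True, "rash": False}
outcome = model(**inputs)
for feature in inputs:
    print(feature, "contributed:", but_for_cause(model, inputs, feature, outcome))
```

In this toy case both features turn out to be but-for causes of the positive diagnosis; a causal model of the kind described above would additionally let us choose the level of detail at which such contributions are reported.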

Project Description

Medical practice is becoming increasingly reliant on automated, AI-driven decision-making systems. Although such a system can deliver excellent results, too often it functions as an incomprehensible black box: it tells us which output is best for a given input, but it does not explain how it reached that decision. This is disconcerting, because medical decisions have enormous consequences, and we need to be able to trust and understand the decision-making process. My goal is to use recent advances in causal modeling to interpret the workings of such AI systems in terms of understandable cause-and-effect relationships. In time, this approach could lead to a generic framework for assessing compliance with legal requirements concerning bias, fairness, and transparency.
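
A hedged sketch of this interpretive move, not the project's actual method: treat a trained classifier as a black box and probe it with single-feature interventions, asking which inputs are causally relevant to the decision for a particular case. The scikit-learn model and all synthetic data here are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                    # three synthetic patient features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # feature 2 is irrelevant by design
black_box = LogisticRegression().fit(X, y)       # stand-in for an opaque AI system

patient = X[:1].copy()
decision = black_box.predict(patient)[0]
for i in range(patient.shape[1]):
    intervened = patient.copy()
    intervened[0, i] = -intervened[0, i]         # do(feature_i := flipped sign)
    changed = black_box.predict(intervened)[0] != decision
    print(f"feature {i} causally relevant for this patient: {changed}")
```

Such interventional probes are only a starting point; the causal-abstraction work cited below is concerned with lifting raw input-output behavior like this to cause-effect descriptions at a more understandable level of detail.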

Selected Publications

  • Beckers, S., Eberhardt, F., and Halpern, J.Y. (2019). Approximate Causal Abstraction. UAI 2019.
  • Beckers, S., and Halpern, J.Y. (2019). Abstracting Causal Models. AAAI 2019.
  • Beckers, S. (2018). AAAI: an Argument Against Artificial Intelligence. In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2017.
