Human-centered tools for coping with imperfect algorithms during medical decision-making
📜 Abstract
There is growing interest in using machine learning algorithms to assist experts in application areas ranging from criminal justice to medicine. But as we look to integrate such systems into human decision-making processes, we must ask how people respond to imperfect algorithmic advice in high-stakes situations. We need to carefully design interfaces that enable human decision makers to scrutinize their algorithmic counterparts and correct their errors. In this paper, we study the case of experts (medical professionals) using machine learning algorithms to make predictions. Specifically, we ask whether such experts face cognitive difficulty in adjusting their decisions in response to imperfect algorithmic advice. We develop a simple model of human decision-making in the presence of imperfect algorithmic predictions. Based on our model, we run a series of experiments with medical professionals, which reveal that systematically adjusting one's decisions in response to algorithmic advice so as to maximize decision performance is not trivial. Lastly, we offer design recommendations for decision-making tools that can improve individuals' ability to account for algorithmic imperfection.
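The abstract does not spell the model out, but the setting it describes can be made concrete with a toy simulation: a decision-maker blends their own noisy estimate with an imperfect algorithmic prediction through a single reliance weight, and decision accuracy depends on how that weight is chosen. The sketch below is a hypothetical illustration of that idea, not the authors' actual model; all names and parameters (`simulate`, `w`, `human_noise`, `algo_noise`) are assumptions made for this example.

```python
# Hypothetical sketch (not the paper's model): a decision-maker combines
# their own risk estimate with an imperfect algorithmic prediction via a
# single "reliance" weight w, and we measure accuracy as w varies.
import random

random.seed(0)

def simulate(n_cases=10_000, human_noise=0.25, algo_noise=0.15, w=0.5):
    """Accuracy when the final score is w * algorithm + (1 - w) * human,
    thresholded at 0.5 against the ground-truth label."""
    correct = 0
    for _ in range(n_cases):
        truth = random.random()                      # latent risk in [0, 1]
        label = truth > 0.5                          # ground-truth outcome
        human = truth + random.gauss(0, human_noise) # noisy human estimate
        algo = truth + random.gauss(0, algo_noise)   # noisy algorithmic estimate
        decision = (w * algo + (1 - w) * human) > 0.5
        correct += decision == label
    return correct / n_cases

# Sweep the reliance weight: in this toy setting, neither ignoring the
# algorithm (w=0) nor deferring to it entirely (w=1) is optimal.
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"w={w:.2f}  accuracy={simulate(w=w):.3f}")
```

Even in this simplified setup the best weight sits strictly between full reliance and full dismissal, which hints at why the paper's experiments find that adjusting appropriately to imperfect advice is cognitively hard.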
✨ Summary
This paper, presented at CHI 2019, examines the challenges and design considerations that arise when integrating imperfect machine learning algorithms into high-stakes medical decision-making processes. The authors, Cai et al., focus on the cognitive difficulties medical professionals face when adjusting their decisions in response to algorithmic predictions. The paper presents a model for studying these interactions and reports experimental findings indicating that making good decisions with imperfect algorithmic advice is far from straightforward. The authors propose design recommendations for decision-support tools that improve the interaction between human decision-makers and algorithms.
The paper is frequently cited in the literature on human-AI interaction and has influenced subsequent research on user-friendly decision-support systems in healthcare and other critical domains. It is referenced by works such as “Fairness and Abstraction in Sociotechnical Systems” by Selbst et al. (2019) [https://dl.acm.org/doi/10.1145/3287560.3287598] and “How Do Humans Collaborate with AI? Towards Systematic Design of AI Systems” by Maheshwari et al. (2018) [https://arxiv.org/abs/1810.08836]. These and other citations underscore the paper’s impact in promoting user-centered approaches to integrating AI in high-stakes environments.