Abstract
In AI-assisted decision-making, a central promise of putting a human in
the loop is that they should be able to complement the AI system by adhering to
its correct and overriding its mistaken recommendations. In practice, however, we
often see that humans tend to over- or under-rely on AI recommendations, mean-
ing that they either adhere to wrong or override correct recommendations. Such re-
liance behavior is detrimental to decision-making accuracy. In this work, we artic-
ulate and analyze the interdependence between reliance behavior and accuracy in
AI-assisted decision-making, which has been largely neglected in prior work. We
also propose a visual framework to make this interdependence more tangible. This
framework helps us interpret and compare empirical findings, as well as obtain a
nuanced understanding of the effects of interventions (e.g., explanations) in AI-
assisted decision-making. Finally, we infer several interesting properties from the
framework: (i) when humans under-rely on AI recommendations, there may be no
possibility for them to complement the AI in terms of decision-making accuracy;
(ii) when humans cannot discern correct and wrong AI recommendations, no such
improvement can be expected either; (iii) interventions may lead to an increase in
decision-making accuracy that is solely driven by an increase in humans’ adher-
ence to AI recommendations, without any ability to discern correct and wrong. Our
work emphasizes the importance of measuring and reporting both effects on accu-
racy and reliance behavior when empirically assessing interventions.
Original language | English |
---|---|
Title of host publication | Frontiers in Artificial Intelligence and Applications |
Subtitle of host publication | HHAI 2023: Augmenting Human Intellect |
Publisher | IOS Press |
Pages | 46-59 |
Number of pages | 14 |
Volume | 368 |
DOIs | |
Publication status | Published - 22-Jun-2023 |
Externally published | Yes |