AI says I’m better: evaluating the effect of AI defer on users. A study protocol

Beretta A.
2025

Abstract

The integration of AI into decision support systems raises concerns about overreliance and distrust. To address this, we propose an experimental protocol combining Learning to Defer (LtD)—where AI delegates decisions to humans when appropriate—and Explainable AI (XAI), which provides users with decision rationales. Our study investigates how these approaches impact human decision-making, particularly in high-stakes contexts. Participants will classify noisy images from ImageNet under three between-subjects conditions: Defer (AI defers to user), Defer + XAI (AI provides an explanation), and Hidden Delegation (AI involvement is concealed). Each condition will be tested in neutral and high-stakes scenarios, the latter framed through narratives emphasizing the danger of misclassification. We will assess decision accuracy and reaction times, as well as psychological measures exploring the influence of individual differences (i.e., intolerance of uncertainty and cognitive styles) and emotions (e.g., emotion regulation and AI-related anxiety). We hypothesize that Defer may prompt more analytical thinking, improving accuracy over Hidden Delegation, while Defer + XAI may further enhance performance. In contrast, Hidden Delegation could promote reliance on intuitive processing. We expect higher accuracy and longer response times in high-stakes conditions. Findings will inform the design of human-AI systems that optimize user engagement and reliability, particularly in domains like clinical decision-making.
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
Cognitive Style
Decision Support System (DSS)
Decision-making
Emotions
Explainable AI (XAI)
High-stakes scenarios
Human-AI collaboration
Individual differences
Learning to Defer (LtD)
Files in this record:

Beretta et al_AI Says I’m Better_2025.pdf
Access: open access
Description: AI Says I’m Better: Evaluating the Effect of AI Defer on Users. A Study Protocol
Type: Published version (PDF)
License: Other license type
Size: 274.35 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/563763
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: n/a