
T-REX: a framework to build trustworthy recommenders of evidence explanation

Landi C.;
2025

Abstract

The initial enthusiasm for eXplainable Artificial Intelligence (XAI) has been tempered by concerns about the effectiveness and reliability of its explanations. Studies show that some explanations are no more reliable than random ones. Tim Miller suggests a paradigm shift in XAI to address cognitive biases, such as automation bias, which can distort decision-making processes. He advocates for hypothesis-driven support systems that align AI explanations with human cognitive processes. Addressing these issues, we propose the Trustworthy Recommenders of Evidence eXplanations (T-REX) framework. This approach aims to enhance XAI by moving from statistical explanations to those based on trustworthy scientific evidence, enabling AI systems to tackle complex tasks more effectively.
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
Explainable AI
Human-Machine Interaction
Trustworthy AI
Files in this record:
  • Fedele et al_T-REX_2024.pdf
    Access: open access
    Description: T-REX: A Framework to Build Trustworthy Recommenders of Evidence Explanation
    Type: Published version (PDF)
    License: Creative Commons
    Size: 2.02 MB
    Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/549103
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science (ISI): n/a