Transparent Models in Healthcare: Enhancing Decision Support through Explainability

Alfredo Cuzzocrea; Francesco Folino; Luigi Pontieri; Pietro Sabatino
2025

Abstract

This paper introduces a framework employing inherently explainable Machine Learning models to provide intuitive global explanations for medical diagnostics. Our proposal uses a transparent approach based on \emph{Soft Decision Trees} (SDTs) that offers direct insights into its decision-making processes, eliminating the need for post-hoc explanation methods. SDTs combine the hierarchical structure of decision trees with the representational power of neural networks, allowing them to capture complex data patterns while maintaining interpretability. In the context of healthcare analytics, we apply SDTs to a classification task. The model is trained end-to-end using mini-batch gradient descent to minimize a cross-entropy loss, encouraging balanced hierarchical data partitioning. An ANOVA-based feature selection technique is applied to reduce model complexity and enhance interpretability. We tested our methodology on a dataset from a healthcare scenario, demonstrating that SDTs effectively provide precise and understandable diagnostic predictions. The findings emphasize the role of explainable AI in enhancing trust and cooperation in healthcare decision-making.
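To make the training recipe in the abstract concrete (sigmoid routing nodes, leaf class distributions, mini-batch gradient descent on a cross-entropy loss), the following is a minimal illustrative sketch, not the chapter's implementation: a depth-1 soft decision tree on synthetic two-blob data, with all architectural choices (toy data, learning rate, leaf initialization) assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data: two Gaussian blobs.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Depth-1 soft decision tree: one sigmoid gate routes each sample
# (softly) to two leaves; each leaf holds a logit whose sigmoid is
# its probability for class 1.
w = rng.normal(0, 0.1, 2)        # gate weights
b = 0.0                          # gate bias
leaf = np.array([-2.0, 2.0])     # leaf logits (initialized apart)

lr = 0.1
for epoch in range(300):
    idx = rng.permutation(len(X))
    for batch in np.array_split(idx, 10):     # mini-batch gradient descent
        xb, yb = X[batch], y[batch]
        g = sigmoid(xb @ w + b)               # P(route to right leaf)
        q = sigmoid(leaf)                     # per-leaf P(class 1)
        p = (1 - g) * q[0] + g * q[1]         # mixture over the two leaves
        p = np.clip(p, 1e-7, 1 - 1e-7)
        # Gradient of mean cross-entropy loss w.r.t. p
        dp = (p - yb) / (p * (1 - p)) / len(xb)
        # Backpropagate through the mixture and the sigmoid gate
        dz = dp * (q[1] - q[0]) * g * (1 - g)
        w -= lr * (xb.T @ dz)
        b -= lr * dz.sum()
        leaf -= lr * np.array([
            (dp * (1 - g)).sum() * q[0] * (1 - q[0]),
            (dp * g).sum() * q[1] * (1 - q[1]),
        ])

# Predict with the learned tree: soft routing, then threshold at 0.5.
g = sigmoid(X @ w + b)
pred = ((1 - g) * sigmoid(leaf[0]) + g * sigmoid(leaf[1])) > 0.5
acc = (pred == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Because every split is a soft (probabilistic) sigmoid test on the inputs, the learned gate weights `w` can be read off directly as a global explanation of how the tree partitions the data, which is the interpretability property the abstract relies on.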
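The ANOVA-based feature selection mentioned above amounts to scoring each feature with a one-way ANOVA F-statistic against the class label and keeping the top-ranked features (scikit-learn's `f_classif` computes the same statistic). A minimal sketch, where the two synthetic features (one informative, one pure noise) are assumptions for illustration:

```python
import numpy as np

def anova_f_scores(X, y):
    """One-way ANOVA F-statistic for each column of X against labels y:
    between-class variance divided by within-class variance."""
    classes = np.unique(y)
    n, k = len(X), len(classes)
    overall = X.mean(axis=0)
    ss_between = np.zeros(X.shape[1])
    ss_within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        ss_between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        ss_within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 100)
informative = rng.normal(y, 1.0)       # mean shifts with the class label
noise = rng.normal(0.0, 1.0, 200)      # unrelated to the class label
X = np.column_stack([informative, noise])

scores = anova_f_scores(X, y)
ranking = np.argsort(scores)[::-1]     # features ranked by F-score
print(scores, ranking)
```

Features with low F-scores vary as much within a class as between classes, so dropping them shrinks the tree's input dimension without discarding discriminative signal, which is how this step reduces model complexity while aiding interpretability.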
Istituto di Calcolo e Reti ad Alte Prestazioni - ICAR
ISBN: 978-3-031-91378-5
Keywords: Soft Decision Trees; Explainable AI; Clinical DSSs; Healthcare Analytics; Predictive Modeling; Machine Learning

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/548303