
A novel explainable AI framework for medical image classification integrating statistical, visual, and rule-based methods

De Falco I.; Sannino G.
2025

Abstract

Artificial intelligence and deep learning are powerful tools for extracting knowledge from large datasets, particularly in healthcare. However, their black-box nature raises interpretability concerns, especially in high-stakes applications. Existing eXplainable Artificial Intelligence methods often focus solely on visualization or rule-based explanations, limiting the depth and clarity of interpretability. This work proposes a novel explainable AI method specifically designed for medical image analysis, integrating statistical, visual, and rule-based explanations to improve transparency in deep learning models. Statistical features are derived from features extracted using a custom MobileNetV2 model. A two-step feature selection method, combining mutual information filtering with importance-based selection, ranks and refines these features. Decision tree and RuleFit algorithms are employed to classify the data and extract human-readable rules. Additionally, a novel statistical feature overlay visualization generates heatmap-like representations of three key statistical measures, including the mean and entropy, providing both localized and quantifiable visual explanations of model decisions. The method has been validated on five medical imaging datasets (COVID-19 radiography, breast ultrasound, brain tumor magnetic resonance imaging, lung and colon cancer histopathology, and glaucoma images), with results confirmed by medical experts, demonstrating its effectiveness in enhancing interpretability in medical image classification tasks.
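The two-step feature selection described above (mutual information filtering followed by importance-based refinement) can be sketched in a few lines. This is not the paper's implementation; it is a minimal illustration using scikit-learn, with synthetic data and arbitrary cutoff sizes (15 filtered, 8 final) chosen purely for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for deep features extracted from medical images.
X, y = make_classification(n_samples=200, n_features=30,
                           n_informative=8, random_state=0)

# Step 1: filter features by mutual information with the class label.
mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[-15:]  # retain the 15 highest-MI features (arbitrary cutoff)

# Step 2: rank the surviving features by model-based importance.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X[:, keep], y)
order = np.argsort(rf.feature_importances_)[::-1]
selected = keep[order][:8]  # final refined subset (arbitrary size)
print(selected)
```

The filter step cheaply discards weakly relevant features; the importance step then ranks the survivors with a model that accounts for feature interactions.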
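The human-readable rules mentioned in the abstract can be produced from a fitted decision tree. Again a sketch rather than the authors' code: it uses scikit-learn's `export_text` on a public tabular dataset in place of the paper's selected image features.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Public tabular dataset standing in for selected image features.
data = load_breast_cancer()
X, y = data.data, data.target
names = data.feature_names.tolist()

# A shallow tree keeps the extracted rule set short and readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
rules = export_text(tree, feature_names=names)
print(rules)
```

Each root-to-leaf path in the printed tree reads as an if-then rule over feature thresholds, which is the kind of explanation RuleFit generalizes by weighting many such rules in a sparse linear model.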
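The statistical feature overlay can be approximated by computing local statistics over a sliding window and blending the normalized map onto the image. This is a hypothetical sketch using SciPy (window size 5, 8 histogram bins, and equal blend weights are all illustrative choices, not the paper's), shown for two of the measures named in the abstract, local mean and local entropy.

```python
import numpy as np
from scipy.ndimage import uniform_filter, generic_filter

def local_entropy(window):
    # Shannon entropy of the grey-level histogram in a local window.
    hist, _ = np.histogram(window, bins=8, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
img = rng.random((32, 32))  # stand-in for a normalized grey-scale medical image

mean_map = uniform_filter(img, size=5)               # local mean map
entropy_map = generic_filter(img, local_entropy, size=5)  # local entropy map

def norm(m):
    # Rescale a statistic map to [0, 1] so it can act as a heatmap layer.
    return (m - m.min()) / (m.max() - m.min())

# Alpha-blend the entropy heatmap over the image (equal weights, illustrative).
overlay = 0.5 * img + 0.5 * norm(entropy_map)
print(overlay.shape)
```

Unlike saliency maps, each pixel of such an overlay carries a quantifiable statistic (e.g. local entropy in bits), which is what makes the explanation both localized and measurable.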
Istituto di Calcolo e Reti ad Alte Prestazioni - ICAR
Explainable artificial intelligence
Feature engineering
Medical image classification
Rule-based interpretability
Files in this record:
A novel explainable AI framework for medical image classification integrating statistical, visual, and rule-based methods.pdf
Open access
Type: Publisher's version (PDF)
License: Creative Commons
Size: 4.26 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/559702
Citations
  • PMC: ND
  • Scopus: 4
  • Web of Science: 4