A novel explainable AI framework for medical image classification integrating statistical, visual, and rule-based methods
De Falco I.;Sannino G.
2025
Abstract
Artificial intelligence and deep learning are powerful tools for extracting knowledge from large datasets, particularly in healthcare. However, their black-box nature raises interpretability concerns, especially in high-stakes applications. Existing eXplainable Artificial Intelligence methods often focus solely on visualization or rule-based explanations, limiting the depth and clarity of interpretability. This work proposes a novel explainable AI method specifically designed for medical image analysis, integrating statistical, visual, and rule-based explanations to improve transparency in deep learning models. Statistical features are derived from features extracted using a custom MobileNetV2 model. A two-step feature selection method, combining mutual information filtering with importance-based selection, ranks and refines these features. Decision tree and RuleFit models are employed to classify data and extract human-readable rules. Additionally, a novel statistical feature overlay visualization generates heatmap-like representations of three key statistical measures, including mean and entropy, providing both localized and quantifiable visual explanations of model decisions. The method has been validated on five medical imaging datasets (COVID-19 radiography, breast ultrasound, brain tumor magnetic resonance imaging, lung and colon cancer histopathology, and glaucoma images), with results confirmed by medical experts, demonstrating its effectiveness in enhancing interpretability in medical image classification tasks.

| File | Size | Format | |
|---|---|---|---|
| A novel explainable AI framework for medical image classification integrating statistical, visual, and rule-based methods.pdf (published version, open access, Creative Commons license) | 4.26 MB | Adobe PDF | View/Open |
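The statistical feature overlay described in the abstract renders local statistics of the image as heatmap-like maps. The following is a minimal stdlib sketch of that idea, computing sliding-window mean and Shannon-entropy maps over a grayscale image; the function name, window size, and border handling are illustrative assumptions, not the authors' implementation.

```python
import math

def window_stats(image, win=3):
    """Slide a win x win window over a 2-D grayscale image (list of lists of
    integer pixel values) and return two same-shaped maps: local mean and
    local Shannon entropy. Illustrative sketch only."""
    h, w = len(image), len(image[0])
    r = win // 2
    mean_map = [[0.0] * w for _ in range(h)]
    entropy_map = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Collect pixels inside the window, clipped at the image borders.
            vals = [image[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            mean_map[y][x] = sum(vals) / len(vals)
            # Shannon entropy (in bits) of the window's intensity histogram.
            counts = {}
            for v in vals:
                counts[v] = counts.get(v, 0) + 1
            n = len(vals)
            entropy_map[y][x] = -sum(
                (c / n) * math.log2(c / n) for c in counts.values())
    return mean_map, entropy_map

# A flat region has zero local entropy; a region containing an edge does not.
flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
edge = [[0, 0, 255], [0, 0, 255], [0, 0, 255]]
mean_flat, ent_flat = window_stats(flat)
mean_edge, ent_edge = window_stats(edge)
```

Overlaying such maps on the input image localizes where intensity is uniform versus textured, which is what makes the explanation both visual and quantifiable.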
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
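The filtering half of the two-step feature selection mentioned in the abstract can be sketched as an empirical mutual-information score between each discrete feature and the class labels; features scoring near zero are dropped before the importance-based refinement step. The function below is a stdlib illustration under that assumption, not the authors' code.

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    """Empirical mutual information I(X;Y) in bits between a discrete
    feature column and class labels. Illustrative filter-stage sketch."""
    n = len(feature)
    px = Counter(feature)               # marginal counts of the feature
    py = Counter(labels)                # marginal counts of the labels
    pxy = Counter(zip(feature, labels)) # joint counts
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint * log2( p_joint / (p(x) * p(y)) ), rearranged with counts.
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# A feature identical to the labels carries maximal information (1 bit for a
# balanced binary label); a constant feature carries none.
labels = [0, 1, 0, 1, 0, 1, 0, 1]
informative = labels[:]     # copies the label exactly
constant = [7] * 8          # tells us nothing about the label
mi_informative = mutual_information(informative, labels)
mi_constant = mutual_information(constant, labels)
```

Ranking features by this score and keeping the top fraction gives the filtered subset that a subsequent importance-based selector (and ultimately the decision tree or RuleFit model) would operate on.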


