Explainable Deep Learning for Chest X-Ray Classification
Vocaturo, Eugenio; Zumpano, Ester
2024
Abstract
The lungs are vital organs essential for the exchange of oxygen and carbon dioxide, and their health is critical for sustaining life. Chest X-rays are a key non-invasive tool for assessing lung health, enabling the identification of abnormalities such as pulmonary infiltrations, which appear as opaque areas in radiographs and may signal underlying diseases. Early detection of such abnormalities is essential for timely intervention and disease management. Artificial intelligence (AI) has recently emerged as a valuable tool for analyzing large volumes of medical data and producing accurate predictions. However, the black-box nature of deep learning models poses challenges in interpretability, limiting their adoption in critical domains like healthcare. To address these issues, this study introduces a three-level explainable framework using Grad-CAM, LIME, and SHAP techniques to enhance the interpretability of AI-driven solutions. The methodology involves an in-depth analysis of classification results through explainable AI techniques, tackling challenges such as medical artifacts that occur naturally in X-ray images. This dual perspective combines clinical relevance with technical rigor to ensure actionable insights. The study highlights the complexities of applying AI to this domain while providing a comprehensive, explainable analysis, offering a foundation for improving both diagnostic accuracy and trust in AI systems. Future research directions are also discussed, emphasizing the importance of advancing explainable solutions in medical AI applications.
File: Explainable_Deep_Learning_for_Chest_X-Ray_Classification.pdf
Type: Editorial version (PDF)
License: NOT PUBLIC - Private/restricted access (authorized users only)
Size: 2.1 MB
Format: Adobe PDF