
MultiExCam: A multi approach and explainable artificial intelligence architecture for skin lesion classification

Caroprese, Luciano; Vocaturo, Eugenio; Zumpano, Ester
2025

Abstract

Background and Objective: Cutaneous melanoma remains the most lethal form of skin cancer. Although often incurable at advanced stages, it has a remarkably high five-year survival rate when diagnosed at an early, localized stage. Recent advances in artificial intelligence have paved the way for early skin lesion diagnosis, turning digital imaging pipelines into effective diagnostic solutions. Most of these, however, apply Machine Learning and Deep Learning techniques in a compartmentalized way, without combining the predictions they produce. Methods: This paper introduces MultiExCam, a novel multi-approach and explainable architecture for skin cancer detection that integrates both machine and deep learning. Three heterogeneous data sources are used: dermatoscopic images, features extracted by deep learning techniques, and hand-crafted statistical features. A convolutional neural network serves both as a deep feature extractor and as an initial classifier; the extracted features are combined with the handcrafted ones to train four additional machine learning models. An advanced ensemble model, implemented as a Feed Forward Neural Network with gating and attention mechanisms, produces the final classification. To enhance interpretability, the architecture employs GradCAM to visualize the critical regions of input images and SHAP to evaluate the contribution of individual features to predictions. Results: MultiExCam demonstrates robust performance across three diverse datasets (HAM10000, ISIC, MED-NODE), achieving AUC scores of 97%, 91%, and 98%, respectively, with corresponding F1-scores of 92%, 87%, and 94%. Comprehensive ablation studies validate the importance of the preprocessing pipeline and ensemble integration, with the hybrid approach consistently outperforming baseline deep learning models by 1–3 percentage points.
Unlike existing compartmentalized hybrid solutions, MultiExCam’s adaptive ensemble architecture learns personalized decision strategies for individual lesions, mimicking expert dermatological workflows that integrate multiple evidence sources. The explainability analysis reveals clinically meaningful activation patterns corresponding to established diagnostic criteria, including asymmetry, border irregularity, and color variation. Conclusion: MultiExCam establishes a new paradigm for AI-assisted dermatological diagnosis by demonstrating that true hybrid integration of deep learning and machine learning, combined with comprehensive explainability techniques, can achieve both superior diagnostic performance and clinical interpretability. The architecture’s ability to provide accurate classifications while explaining its prediction rationale addresses critical requirements for medical AI adoption, offering a promising foundation for clinical decision support systems in melanoma detection.
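The abstract's central idea — fusing deep and hand-crafted features and letting a gating network weight several base models per lesion — can be sketched in miniature. This is an illustrative NumPy mock-up only: the feature extractors, the random base classifiers, and the gating weights are stand-ins invented here, not the paper's actual CNN, ML models, or trained Feed Forward ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_deep_features(images):
    # Stand-in for a CNN backbone: flatten each image and apply a fixed
    # random projection to a 16-dimensional "deep" feature vector.
    flat = images.reshape(len(images), -1)
    proj = rng.standard_normal((flat.shape[1], 16))
    return flat @ proj

def extract_handcrafted_features(images):
    # Simple per-image statistics as stand-ins for colour/texture descriptors.
    flat = images.reshape(len(images), -1)
    return np.stack([flat.mean(1), flat.std(1), flat.min(1), flat.max(1)], axis=1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def base_predictions(features, n_models=5):
    # Mock base models (the paper uses one CNN plus four ML classifiers):
    # each is a random logistic scorer producing a melanoma probability.
    preds = []
    for _ in range(n_models):
        w = rng.standard_normal(features.shape[1])
        preds.append(1 / (1 + np.exp(-(features @ w))))
    return np.stack(preds, axis=1)              # shape (N, K)

def gated_ensemble(features, preds):
    # Per-sample gating: a linear layer scores each base model from the
    # sample's own features; softmax turns scores into attention weights,
    # so every lesion gets its own mixture of the K base predictions.
    gate_w = rng.standard_normal((features.shape[1], preds.shape[1]))
    weights = softmax(features @ gate_w)        # (N, K), rows sum to 1
    return (weights * preds).sum(axis=1)        # one probability per sample

images = rng.random((8, 6, 6))                  # 8 tiny fake greyscale "lesions"
feats = np.hstack([extract_deep_features(images),
                   extract_handcrafted_features(images)])
probs = gated_ensemble(feats, base_predictions(feats))
print(probs.shape)                              # one fused score per image
```

Because the gating weights depend on each sample's features, the mixture is adaptive per lesion — the property the abstract contrasts with fixed, compartmentalized ensembles. In the real architecture these weights are learned, not random.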
Istituto di Nanotecnologia - NANOTEC - Sede Secondaria Rende (CS)
Transfer learning; Ensemble learning; Melanoma; Skin lesion; Explainable AI;
Files in this record:
File: 1-s2.0-S0169260725004985-main.pdf (open access)
Type: Publisher's Version (PDF)
License: Creative Commons
Size: 3.67 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/554357