Exploiting causality signals in medical images: a pilot study with empirical results

Carloni, Gianluca; Colantonio, Sara
2024

Abstract

We present a novel technique to discover and exploit weak causal signals directly from images via neural networks for classification purposes. In this way, we model how the presence of a feature in one part of the image affects the appearance of another feature in a different part of the image. Our method consists of a convolutional neural network backbone and a causality-factors extractor module, which computes weights to enhance each feature map according to its causal influence in the scene. We develop different architecture variants and empirically evaluate all the models on two public datasets of prostate MRI images and breast histopathology slides for cancer diagnosis. We study the effectiveness of our module in both fully supervised and few-shot learning, assess its addition to existing attention-based solutions, conduct ablation studies, and investigate the explainability of our models via class activation maps. Our findings show that our lightweight block extracts meaningful information and improves the overall classification, while producing more robust predictions that focus on relevant parts of the image. This is crucial in medical imaging, where accurate and reliable classifications are essential for effective diagnosis and treatment planning.
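To make the idea concrete, below is a minimal PyTorch sketch of one way such a causality-factors extractor could be realised. This is an illustrative reading of the abstract, not the authors' released code: the class name, the co-activation estimator of the conditionals P(F_i | F_j), and the residual re-weighting are all assumptions.

import torch
import torch.nn as nn

class CausalityFactorsExtractor(nn.Module):
    """Illustrative sketch (hypothetical, not the paper's implementation).

    Treats normalised activations as pseudo-probabilities that a feature
    is present, estimates conditionals from spatial co-activation, and
    uses the asymmetry P(F_i | F_j) != P(F_j | F_i) as a weak causal
    signal to re-weight each feature map.
    """

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feats.shape
        # Assume post-ReLU feature maps; flatten spatial dims: (B, C, H*W).
        x = feats.clamp(min=0).flatten(2)
        # Normalise each map to [0, 1] so activations act as presence scores.
        x = x / (x.amax(dim=2, keepdim=True) + 1e-8)
        marg = x.mean(dim=2)                               # P(F_i), shape (B, C)
        # Joint presence from spatial co-activation: (B, C, C).
        joint = torch.bmm(x, x.transpose(1, 2)) / (h * w)
        cond = joint / (marg.unsqueeze(1) + 1e-8)          # cond[b, i, j] = P(F_i | F_j)
        # Directional asymmetry: how much more F_i "follows" F_j than vice versa.
        asym = torch.relu(cond - cond.transpose(1, 2))
        # Causality factor of map i: total outgoing influence, rescaled to [0, 1].
        factors = asym.sum(dim=2)
        factors = factors / (factors.amax(dim=1, keepdim=True) + 1e-8)
        # Enhance feature maps by their causal influence (residual weighting).
        return feats * (1.0 + factors.view(b, c, 1, 1))

# Usage: apply to backbone features before the classifier head.
extractor = CausalityFactorsExtractor()
feats = torch.relu(torch.randn(2, 64, 16, 16))   # e.g. output of the last conv block
enhanced = extractor(feats)                      # same shape, causally re-weighted

In the architecture the abstract describes, such a block would sit between the backbone's final convolutional stage and the classifier head; the choice of estimator for the conditional probabilities is precisely where the paper's different variants would be expected to differ.
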
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI

Keywords

Visual attention module
Attention
Causality
Convolutional neural network
Deep learning
Medical imaging
Files in this record:

1-s2.0-S0957417424002987-main.pdf (open access)
Description: publishedVersion
Type: Published version (PDF)
License: Creative Commons
Size: 5.87 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/468047
Citations
  • Scopus: 0