
Boosting few-shot learning with disentangled self-supervised learning and meta-learning for medical image classification

Pachetti E.; Colantonio S.
2024

Abstract

Background and objective: Employing deep learning models in critical domains such as medical imaging poses challenges associated with the limited availability of training data. We present a strategy for improving the performance and generalization capabilities of models trained in low-data regimes.

Methods: The proposed method starts with a pre-training phase, in which features learned in a self-supervised learning setting are disentangled to improve the robustness of the representations for downstream tasks. We then introduce a meta-fine-tuning step that leverages related classes between the meta-training and meta-testing phases while varying the granularity level. This approach aims to enhance the model's generalization capabilities by exposing it to more challenging classification tasks during meta-training and evaluating it on easier, but clinically more relevant, tasks during meta-testing. We demonstrate the effectiveness of the proposed approach through a series of experiments exploring several backbones, as well as diverse pre-training and fine-tuning schemes, on two distinct medical tasks, i.e., classification of prostate cancer aggressiveness from MRI data and classification of breast cancer malignancy from microscopic images.

Results: Our results indicate that the proposed approach consistently yields superior performance with respect to ablation experiments, remaining competitive even when a distribution shift between training and evaluation data occurs.

Conclusion: Extensive experiments demonstrate the effectiveness and wide applicability of the proposed approach. We hope that this work will add another solution to the arsenal of approaches for addressing learning issues in data-scarce imaging domains.
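The meta-fine-tuning idea described above (harder, finer-grained episodes during meta-training; easier but clinically more relevant episodes during meta-testing) can be illustrated with a minimal episodic training sketch. The code below is an illustrative assumption in PyTorch, not the authors' implementation: the names `encoder`, `prototypical_episode_loss`, and `fine_grained_episodes`, as well as the prototypical-style distance loss, are hypothetical placeholders standing in for whatever backbone, loss, and episode sampler the paper actually uses.

```python
# Hypothetical sketch of one episodic (meta-)fine-tuning step, in the spirit of
# the abstract. Not the authors' implementation.
import torch
import torch.nn.functional as F


def prototypical_episode_loss(encoder, support_x, support_y, query_x, query_y, n_classes):
    """One episode: build class prototypes from the support set and
    classify query samples by their distance to the prototypes."""
    z_support = encoder(support_x)  # (n_support, d) embeddings
    z_query = encoder(query_x)      # (n_query, d) embeddings

    # Class prototypes: mean embedding of each class' support samples.
    prototypes = torch.stack(
        [z_support[support_y == c].mean(dim=0) for c in range(n_classes)]
    )  # (n_classes, d)

    # Negative squared Euclidean distances act as class logits.
    logits = -torch.cdist(z_query, prototypes) ** 2
    return F.cross_entropy(logits, query_y)


# Illustrative usage: the encoder is assumed to be pre-trained with
# disentangled self-supervision, then fine-tuned on fine-grained episodes.
# encoder = ...  # pre-trained backbone (hypothetical)
# optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
# for support_x, support_y, query_x, query_y in fine_grained_episodes:
#     loss = prototypical_episode_loss(encoder, support_x, support_y,
#                                      query_x, query_y, n_classes=n_fine_classes)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```

In such a scheme, only the episode sampler changes between phases: meta-training draws episodes over the finer-grained classes, while meta-testing episodes use the coarser, clinically relevant binary labels.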
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
Few-shot learning
Self-supervised learning
Disentangled representation learning
Files in this product:
2403.17530v1.pdf (open access)
Description: ArXiv pre-print
Type: Pre-print document
License: Creative Commons
Format: Adobe PDF
Size: 2.4 MB
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/498821