CROCODILE: Causality aids RObustness via COntrastive DIsentangled LEarning
Carloni G.; Colantonio S.
2024
Abstract
Due to domain shift, deep learning image classifiers perform poorly when applied to a domain different from the training one. For instance, a classifier trained on chest X-ray (CXR) images from one hospital may not generalize to images from another hospital due to variations in scanner settings or patient characteristics. In this paper, we introduce our CROCODILE framework, showing how tools from causality can foster a model's robustness to domain shift via feature disentanglement, contrastive learning losses, and the injection of prior knowledge. This way, the model relies less on spurious correlations, better learns the mechanism mapping images to predictions, and outperforms baselines on out-of-distribution (OOD) data. We apply our method to multi-label lung disease classification from CXRs, utilizing over 750,000 images from four datasets. Our bias-mitigation method improves domain generalization and fairness, broadening the applicability and reliability of deep learning models for safer medical image analysis. Find our code at: https://github.com/gianlucarloni/crocodile.
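To make the abstract's "contrastive disentangled learning" idea concrete, below is a minimal, illustrative PyTorch sketch. All names (CausalSpuriousNet, disentanglement_loss, feature sizes) are hypothetical and assumed for illustration; this is not the authors' CROCODILE implementation, only one plausible instantiation of splitting features into a causal and a spurious branch and using contrastive terms to keep them apart while classifying from the causal branch only.

```python
# Illustrative sketch (hypothetical names), not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalSpuriousNet(nn.Module):
    """Backbone with two projection heads: causal (disease-related) and spurious (domain-related)."""

    def __init__(self, feat_dim: int = 128, num_labels: int = 14):
        super().__init__()
        # Tiny CNN backbone as a stand-in for a real CXR encoder (e.g., a DenseNet).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.causal_head = nn.Linear(32, feat_dim)    # features meant to drive the prediction
        self.spurious_head = nn.Linear(32, feat_dim)  # features meant to absorb domain/scanner cues
        self.classifier = nn.Linear(feat_dim, num_labels)  # multi-label logits from causal features only

    def forward(self, x):
        h = self.backbone(x)
        z_c = F.normalize(self.causal_head(h), dim=1)
        z_s = F.normalize(self.spurious_head(h), dim=1)
        return self.classifier(z_c), z_c, z_s


def disentanglement_loss(z_c, z_s, labels, temperature: float = 0.1):
    """Contrastive term: pull causal features of samples sharing a label together,
    and push each image's causal features away from its own spurious features."""
    sim = z_c @ z_c.t() / temperature                 # pairwise causal-feature similarities
    pos = (labels @ labels.t() > 0).float()           # positives: images sharing a positive label
    pos.fill_diagonal_(0)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pull = -(pos * log_prob).sum() / pos.sum().clamp(min=1)
    # Orthogonality-style push between causal and spurious features of the same image.
    push = (z_c * z_s).sum(dim=1).pow(2).mean()
    return pull + push


# Toy usage on random tensors standing in for a CXR batch.
model = CausalSpuriousNet()
x = torch.randn(8, 1, 64, 64)              # batch of 8 single-channel "X-rays"
y = (torch.rand(8, 14) > 0.8).float()      # random multi-label targets
logits, z_c, z_s = model(x)
loss = F.binary_cross_entropy_with_logits(logits, y) + disentanglement_loss(z_c, z_s, y)
loss.backward()
```

The design choice sketched here (classifying only from the causal branch while a contrastive loss separates it from the spurious branch) is one common way to reduce reliance on spurious, domain-specific correlations; the actual CROCODILE losses and prior-knowledge injection are described in the paper and repository linked above.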
File | Access | Type | License | Size | Format
---|---|---|---|---|---
2408.04949v1.pdf | Open access | Pre-print | Creative Commons | 2.3 MB | Adobe PDF
Carloni-Colantonio_LNCS-2024.pdf | Authorized users only | Publisher's version (PDF) | Not public (private/restricted access) | 1.78 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.