
Zero-shot semi-supervised learning for pansharpening

Vivone, Gemine (last author)
2024

Abstract

Pansharpening refers to fusing a low-resolution multispectral image (LRMS) and a high-resolution panchromatic (PAN) image to generate a high-resolution multispectral image (HRMS). Traditional pansharpening methods use a single pair of LRMS and PAN images to generate the HRMS at full resolution, but they fail to produce high-quality fused products due to the assumption of an (often inaccurate) linear relationship among the involved products. Convolutional neural network methods, i.e., supervised and unsupervised learning approaches, can model arbitrary non-linear relationships among data, but they perform even worse than traditional methods when the testing data are not consistent with the training data. Moreover, supervised methods rely on simulating reduced-resolution data for training, causing information loss at full resolution. Unsupervised pansharpening suffers from distortion due to the lack of reference images and inaccuracies in the estimation of the degradation process. In this paper, we propose a zero-shot semi-supervised method for pansharpening (ZS-Pan), which requires only a single pair of PAN/LRMS images for training and testing the networks, combining the pros of both supervised and unsupervised methods. To face the challenges of limited training data and no reference images, the ZS-Pan framework is built as a two-phase, three-component model, i.e., the reduced-resolution supervised pre-training (RSP), the spatial degradation establishment (SDE), and the full-resolution unsupervised generation (FUG) stages. Specifically, a special parameter initialization technique, a data augmentation strategy, and a non-linear degradation network are proposed to improve the representation ability of the network. In our experiments, we evaluate the performance of the proposed framework on different datasets, using several state-of-the-art (SOTA) pansharpening approaches for comparison. The results show that our ZS-Pan outperforms these SOTA methods, both visually and quantitatively. The code is available at https://github.com/coder-qicao/ZS-Pan.
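
The abstract describes a two-phase, three-component workflow (RSP, SDE, FUG) trained on a single PAN/LRMS pair. The sketch below illustrates how such a flow could be wired together; it is not the authors' implementation (that is at https://github.com/coder-qicao/ZS-Pan), and the network designs, loss terms, sampling ratio, and names such as FusionNet, DegradationNet, and zs_pan_single_pair are illustrative assumptions. The special parameter initialization and data augmentation mentioned in the abstract are omitted.

```python
# Minimal sketch of the three-stage ZS-Pan workflow summarized in the abstract.
# Everything here (network designs, losses, hyper-parameters, helper names) is an
# illustrative assumption; the authors' code is at https://github.com/coder-qicao/ZS-Pan.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionNet(nn.Module):
    """Toy fusion network: maps an upsampled LRMS plus the PAN to an HRMS estimate."""

    def __init__(self, ms_bands: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ms_bands + 1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, ms_bands, 3, padding=1),
        )

    def forward(self, lrms_up, pan):
        # Residual learning on top of the upsampled LRMS (a common choice, assumed here).
        return lrms_up + self.body(torch.cat([lrms_up, pan], dim=1))


class DegradationNet(nn.Module):
    """Toy non-linear spatial degradation model (SDE stage): high resolution -> low resolution."""

    def __init__(self, ms_bands: int = 4, ratio: int = 4):
        super().__init__()
        self.ratio = ratio
        self.blur = nn.Sequential(
            nn.Conv2d(ms_bands, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, ms_bands, 3, padding=1),
        )

    def forward(self, x):
        return F.interpolate(self.blur(x), scale_factor=1 / self.ratio, mode="bicubic")


def zs_pan_single_pair(pan, lrms, ratio=4, iters=200):
    """Run RSP -> SDE -> FUG on one pair (assumed shapes: pan 1x1xHxW, lrms 1xCx(H/r)x(W/r))."""
    fusion = FusionNet(lrms.shape[1])
    degrade = DegradationNet(lrms.shape[1], ratio)
    opt_f = torch.optim.Adam(fusion.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(degrade.parameters(), lr=1e-3)

    # Reduced-resolution inputs (Wald-style protocol): the observed LRMS becomes the reference.
    pan_lr = F.interpolate(pan, scale_factor=1 / ratio, mode="bicubic")
    lrms_lr = F.interpolate(lrms, scale_factor=1 / ratio, mode="bicubic")
    lrms_lr_up = F.interpolate(lrms_lr, scale_factor=ratio, mode="bicubic")

    # Stage 1 - RSP: supervised pre-training of the fusion network at reduced resolution.
    for _ in range(iters):
        opt_f.zero_grad()
        F.l1_loss(fusion(lrms_lr_up, pan_lr), lrms).backward()
        opt_f.step()

    # Stage 2 - SDE: fit the non-linear degradation at reduced scale (LRMS -> further degraded LRMS).
    for _ in range(iters):
        opt_d.zero_grad()
        F.l1_loss(degrade(lrms), lrms_lr).backward()
        opt_d.step()

    # Stage 3 - FUG: unsupervised fine-tuning at full resolution via degradation consistency.
    lrms_up = F.interpolate(lrms, scale_factor=ratio, mode="bicubic")
    for _ in range(iters):
        opt_f.zero_grad()
        hrms = fusion(lrms_up, pan)
        spectral = F.l1_loss(degrade(hrms), lrms)              # consistency with the observed LRMS
        spatial = F.l1_loss(hrms.mean(1, keepdim=True), pan)   # crude spatial term (assumption)
        (spectral + 0.1 * spatial).backward()
        opt_f.step()

    with torch.no_grad():
        return fusion(lrms_up, pan)
```

Calling zs_pan_single_pair(pan, lrms) on tensors from a single scene would return the fused product; in the actual method the fusion network is additionally warm-started with the special initialization and trained on augmented patches mentioned in the abstract, which this sketch skips.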
Istituto di Metodologie per l'Analisi Ambientale - IMAA
Keywords

  • Convolutional neural network
  • Zero-shot learning
  • Semi-supervised learning
  • Multispectral image fusion
  • Remote sensing
  • Pansharpening
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14243/509695
