PSRT: Pyramid Shuffle-and-Reshuffle Transformer for Multispectral and Hyperspectral Image Fusion
Gemine Vivone
2023
Abstract
Transformers have received a lot of attention in computer vision. Because of global self-attention, the computational complexity of a Transformer is quadratic in the number of tokens, which limits practical applications. This computational complexity issue can be efficiently mitigated by computing self-attention within groups of smaller fixed-size windows. In this article, we propose a novel pyramid Shuffle-and-Reshuffle Transformer (PSRT) for the task of multispectral and hyperspectral image fusion (MHIF). Considering the strong correlation among different patches in remote sensing images and the complementary information among patches with high similarity, we design Shuffle-and-Reshuffle (SaR) modules that enable efficient information interaction among global patches. Besides, pyramid structures based on window self-attention support detail extraction. Extensive experiments on four widely used benchmark datasets demonstrate the superiority of the proposed PSRT, with few parameters, compared with several state-of-the-art approaches. The related code is available at https://github.com/Deng-shangqi/PSRT.
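To make the window-based idea concrete, the following is a minimal NumPy sketch of partitioning a feature map into non-overlapping fixed-size windows, shuffling the windows so that later window self-attention mixes information from distant regions, and reshuffling them back. Function names, the random permutation scheme, and the toy sizes are illustrative assumptions for this sketch, not the paper's exact SaR module.

```python
import numpy as np

def window_partition(x, ws):
    """Split an (H, W, C) map into non-overlapping (ws, ws) windows.

    Returns an array of shape (num_windows, ws, ws, C).
    Assumes H and W are divisible by ws.
    """
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws, ws, C)

def window_reverse(windows, ws, H, W):
    """Inverse of window_partition: reassemble windows into (H, W, C)."""
    C = windows.shape[-1]
    x = windows.reshape(H // ws, W // ws, ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

# Toy demo: shuffle windows, then reshuffle with the inverse
# permutation to restore the original spatial layout.
H = W = 8
ws = 4
C = 3
x = np.arange(H * W * C, dtype=np.float32).reshape(H, W, C)

wins = window_partition(x, ws)              # (4, 4, 4, 3)
perm = np.random.default_rng(0).permutation(wins.shape[0])
shuffled = wins[perm]                       # attention here would mix distant windows
restored = window_reverse(shuffled[np.argsort(perm)], ws, H, W)
assert np.allclose(restored, x)             # reshuffle undoes the shuffle
```

In a real SaR block, a self-attention layer would operate on each (shuffled) window between the shuffle and reshuffle steps, so cross-window information is exchanged while each attention call stays cheap.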