
MeSAM: Multiscale Enhanced Segment Anything Model for Optical Remote Sensing Images

Vivone, Gemine (penultimate author)
2024

Abstract

Segment anything model (SAM) has been widely applied to various downstream tasks for its excellent performance and generalization capability. However, SAM exhibits three limitations in the remote sensing (RS) semantic segmentation task: 1) the image encoder excessively loses high-frequency information, such as object boundaries and textures, resulting in coarse segmentation masks; 2) having been trained on natural images, SAM struggles to accurately recognize objects with large scale variations and uneven distribution in RS images; and 3) the output tokens used for mask prediction are trained on natural images and are not applicable to RS image segmentation. In this article, we explore an efficient paradigm for applying SAM to the semantic segmentation of RS images. Furthermore, we propose multiscale enhanced SAM (MeSAM), a new SAM fine-tuning method better suited to RS images that adapts it to semantic segmentation tasks. Our method first introduces an inception mixer into the image encoder to effectively preserve high-frequency features. Second, by designing a mask decoder with RS correction and incorporating multiscale connections, we bridge the gap between the natural images SAM was trained on and RS images. Experimental results demonstrate that our method significantly improves the segmentation accuracy of SAM on RS images, outperforming some state-of-the-art (SOTA) methods. The code will be available at https://github.com/Magic-lem/MeSAM.
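The abstract's first contribution, an inception mixer that preserves high-frequency features inside the image encoder, follows the general high/low-frequency channel-split idea popularized by the Inception Transformer. The sketch below is a minimal PyTorch illustration of that general pattern, not the authors' released implementation: the class name, the channel split ratio, the kernel sizes, and the 2x downsampling of the attention branch are all assumptions made for the example.

```python
# Hypothetical sketch of an inception-style token mixer: high-frequency
# channels go through local max-pool and depthwise-conv branches, while
# low-frequency channels get global self-attention on a downsampled map.
# Not MeSAM's actual code; split ratio and kernel sizes are assumptions.
import torch
import torch.nn as nn


class InceptionMixer(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4, high_ratio: float = 0.5):
        super().__init__()
        dim_high = int(dim * high_ratio)      # channels for local detail
        self.dim_low = dim - dim_high         # channels for global context
        half = dim_high // 2
        self.split_sizes = [half, dim_high - half, self.dim_low]
        # High-frequency branch 1: max-pool then pointwise conv.
        self.pool_branch = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(half, half, 1),
        )
        # High-frequency branch 2: depthwise 3x3 convolution.
        self.dw_branch = nn.Sequential(
            nn.Conv2d(dim_high - half, dim_high - half, 3,
                      padding=1, groups=dim_high - half),
            nn.GELU(),
        )
        # Low-frequency branch: self-attention on a 2x-downsampled map
        # (assumes even H and W), then bilinear upsampling back.
        self.down = nn.AvgPool2d(2)
        self.attn = nn.MultiheadAttention(self.dim_low, num_heads,
                                          batch_first=True)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.fuse = nn.Conv2d(dim, dim, 1)    # merge all branches

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        x_hi1, x_hi2, x_lo = torch.split(x, self.split_sizes, dim=1)
        hi = torch.cat([self.pool_branch(x_hi1), self.dw_branch(x_hi2)], dim=1)
        b, c, h, w = x_lo.shape
        seq = self.down(x_lo).flatten(2).transpose(1, 2)  # (B, N, C_low)
        seq, _ = self.attn(seq, seq, seq)
        lo = self.up(seq.transpose(1, 2).reshape(b, c, h // 2, w // 2))
        return self.fuse(torch.cat([hi, lo], dim=1))


# Example: mix a 256-channel feature map from a ViT-style encoder.
if __name__ == "__main__":
    mixer = InceptionMixer(dim=256)
    out = mixer(torch.randn(1, 256, 64, 64))
    print(out.shape)  # torch.Size([1, 256, 64, 64])
```

In this reading, the local branches keep boundary and texture detail that plain self-attention tends to smooth away, which matches the abstract's motivation; how MeSAM actually wires such a mixer into SAM's encoder is specified only in the full paper.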
Istituto di Metodologie per l'Analisi Ambientale - IMAA
Keywords: High-frequency; Multiscale; Remote sensing (RS); Segment anything model (SAM); Semantic segmentation
Files in this record:
MeSAM_Multiscale_Enhanced_Segment_Anything_Model_for_Optical_Remote_Sensing_Images.pdf (Adobe PDF, 4.5 MB). License: non-public, private/restricted access; available to authorized users only, a copy may be requested.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/509764
Citations
  • PMC: ND (not available)
  • Scopus: 13
  • Web of Science (ISI): ND (not available)