
Spectral-Spatial Attention Transformer with Dense Connection for Hyperspectral Image Classification

Lanxue Dang, Libo Weng, Weichuan Dong, Shenshen Li, Yane Hou

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, deep learning has been widely used in hyperspectral image (HSI) classification and has shown strong capability. In particular, convolutional neural networks (CNNs) have achieved attractive performance in HSI classification. However, HSI contains substantial redundant information, and CNN-based models are limited by the receptive field of the convolution and struggle to balance performance against model depth. Moreover, since HSI can be regarded as sequence data, CNN-based models cannot mine sequence features well. In this paper, we propose a model named SSA-Transformer to address these problems and extract the spectral-spatial features of HSI more efficiently. The SSA-Transformer combines a modified CNN-based spectral-spatial attention mechanism with a self-attention-based transformer that uses dense connections, allowing the model to fuse the local and global features of HSI and improve classification performance. A series of experiments showed that the SSA-Transformer achieved competitive classification accuracy compared with other CNN-based classification methods on three HSI datasets: University of Pavia (PU), Salinas (SA), and Kennedy Space Center (KSC).
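The two ingredients named in the abstract, self-attention over a token sequence and DenseNet-style dense connections between blocks, can be illustrated together. The following is a minimal NumPy sketch, not the authors' implementation: the random projection weights stand in for learned parameters, and the function names (`self_attention`, `dense_transformer`) and dimensions are hypothetical choices for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d_k, seed=0):
    # tokens: (n_tokens, d) -- e.g. spectral bands or spatial patch pixels
    # Random Q/K/V projections stand in for learned weights (illustration only)
    rng = np.random.default_rng(seed)
    d = tokens.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) * 0.1 for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(Q @ K.T / np.sqrt(d_k))  # scaled dot-product attention
    return scores @ V                          # (n_tokens, d_k)

def dense_transformer(tokens, n_blocks=2, d_k=16):
    # A "dense connection" concatenates each block's output onto its input,
    # so every later block sees the features of all earlier blocks.
    feats = tokens
    for b in range(n_blocks):
        out = self_attention(feats, d_k, seed=b)
        feats = np.concatenate([feats, out], axis=-1)  # DenseNet-style concat
    return feats

# Example: 25 tokens (a 5x5 spatial patch) with 8-dim features
patch_tokens = np.random.default_rng(1).standard_normal((25, 8))
features = dense_transformer(patch_tokens, n_blocks=2, d_k=16)
# Feature width grows by d_k per block: 8 + 16 + 16 = 40
```

In the real model the projections are trained, attention is multi-head, and the CNN-based spectral-spatial attention module would reweight the input cube before tokenization; the sketch only shows how dense concatenation lets the global (attention) features accumulate alongside the original local features.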

Original language: English (US)
Article number: 7071485
Journal: Computational Intelligence and Neuroscience
Volume: 2022
DOIs
State: Published - 2022

ASJC Scopus subject areas

  • General Computer Science
  • General Neuroscience
  • General Mathematics
