Spectralformer: Rethinking hyperspectral image classification with transformers

Overview

Danfeng Hong, Zhu Han, Jing Yao, Lianru Gao, Bing Zhang, Antonio Plaza, Jocelyn Chanussot


The code in this toolbox implements the paper "Spectralformer: Rethinking hyperspectral image classification with transformers". More specifically, it is detailed as follows.


Citation

Please kindly cite the following paper if this code is useful for your research.

Danfeng Hong, Zhu Han, Jing Yao, Lianru Gao, Bing Zhang, Antonio Plaza, Jocelyn Chanussot. Spectralformer: Rethinking hyperspectral image classification with transformers. arXiv preprint, arXiv:2107.02988, 2021.

@article{hong2021spectralformer,
  title={Spectralformer: Rethinking hyperspectral image classification with transformers},
  author={Hong, Danfeng and Han, Zhu and Yao, Jing and Gao, Lianru and Zhang, Bing and Plaza, Antonio and Chanussot, Jocelyn},
  journal={arXiv preprint arXiv:2107.02988},
  year={2021}
}

System-specific notes

The data were generated with Matlab R2016a or higher, and the network code was tested with PyTorch 1.6 (CUDA 10.1) and Python 3.7 on Ubuntu.

How to use it?

This toolbox consists of two proposed modules, i.e., group-wise spectral embedding (GSE, enabled by setting band_patches larger than 1) and cross-layer adaptive fusion (CAF, enabled by setting mode to CAF), which can be plugged into both pixel-wise and patch-wise hyperspectral image classification in a plug-and-play fashion. For more details, please refer to the paper.
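To make the group-wise spectral embedding idea concrete, below is a minimal conceptual sketch in PyTorch: each band is embedded together with its neighboring bands (zero-padded at the borders of the spectrum) rather than in isolation. This is only an illustration under assumed names and shapes (GroupWiseSpectralEmbedding, band_patch, and dim are invented here); it is not the implementation in this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupWiseSpectralEmbedding(nn.Module):
    """Conceptual sketch: embed each spectral band together with its
    `band_patch - 1` neighbors (zero-padded at the spectrum borders),
    instead of embedding every band in isolation."""
    def __init__(self, band_patch: int, dim: int):
        super().__init__()
        self.band_patch = band_patch
        self.proj = nn.Linear(band_patch, dim)  # one token per band group

    def forward(self, spectra: torch.Tensor) -> torch.Tensor:
        # spectra: (batch, num_bands) pixel-wise spectral vectors
        pad = self.band_patch // 2
        x = F.pad(spectra, (pad, pad))                                  # (batch, num_bands + 2*pad)
        groups = x.unfold(dimension=1, size=self.band_patch, step=1)    # (batch, num_bands, band_patch)
        return self.proj(groups)                                        # (batch, num_bands, dim)

# Example: 200-band spectra (as in Indian Pines), groups of 3 neighboring bands
tokens = GroupWiseSpectralEmbedding(band_patch=3, dim=64)(torch.randn(8, 200))
print(tokens.shape)  # torch.Size([8, 200, 64])
```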

An example experiment on the Indian Pines hyperspectral data is provided. Run demo.py directly with different network parameter settings (e.g., band_patches and mode) to produce the results. Please note that, due to the randomness of parameter initialization, the experimental results may differ slightly from those reported in the paper.

You may need to manually download IndianPine.mat into the folder Codes_SpectralFormer/data/ (the file is too large to be included in the repository) from the following Google Drive or Baiduyun links:

Google drive: https://drive.google.com/drive/folders/1nRphkwDZ74p-Al_O_X3feR24aRyEaJDY?usp=sharing

Baiduyun: https://pan.baidu.com/s/1rY9hj7Ku1Un4PPOjEFpEfQ (access code: 6dme)

If you want to run the code on your own data, change the inputs (e.g., data, labels) accordingly and tune the parameters.
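As a reference point for preparing your own data, demo.py expects a single .mat file with the hyperspectral cube and the training/test label maps under the keys input, TR, and TE (see the code fragment quoted in the comments below). The sketch below loads such a file with scipy and rebuilds label and num_classes the same way; the file name my_dataset.mat is a placeholder, and the reading of 0 as "unlabeled" is an assumption on my part, not a statement from the paper.

```python
import numpy as np
from scipy.io import loadmat

# Placeholder path: a .mat file laid out like IndianPine.mat (keys: input, TR, TE)
data = loadmat('./data/my_dataset.mat')

input = data['input']   # hyperspectral cube, e.g. (height, width, bands)
TR = data['TR']         # training label map (assumed: 0 = unlabeled, 1..C = class)
TE = data['TE']         # test label map, disjoint from TR

label = TR + TE                 # full label map, as in demo.py
num_classes = int(np.max(TR))   # as in demo.py

print(input.shape, label.shape, num_classes)
```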

If you encounter bugs while using this code, please do not hesitate to contact us.

Licensing

Copyright (C) 2021 Danfeng Hong

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 3 of the License.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program.

Contact Information:

Danfeng Hong: [email protected]
Danfeng Hong is with the Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France.

In case of emergency, you can also reach me via QQ: 345088114.

Comments
  • On the test accuracy with 3% of the Indian Pines dataset used for training

    Dear Prof. Hong,

    Our paper would like to include yours as a comparison method. Based on your open-source code, taking the patch-wise SpectralFormer as an example, we randomly select 3% of the Indian Pines samples for training and use PCA to reduce the bands to 3, obtaining a test accuracy of about 61%. To avoid any mistake on our side, we would like to confirm with you whether this result is accurate. We will cite your paper in ours.

    opened by Ritatanz 1
  • Question about dataset preparation

    Dear Prof. Hong, thank you very much for your research, which provides useful ideas for follow-up work. When looking at the data preprocessing, I noticed that your IndianPine.mat file differs from the official Indian_pines_corrected.mat and Indian_pines_gt.mat files. I opened it in MATLAB, but the data are too large to inspect completely. My question is: how did you merge the two official files into one, and how did you turn the original workspace variable indian_pines_corrected into the variables input, TE, and TR in your dataset? Could you explain the concrete steps or workflow? Thank you again for your contribution, and I look forward to your reply. Best wishes!

    opened by DWBSIC 3
  • About the .pt file from the training stage

    Dear Prof. Hong, I am glad to be able to use your code. When running the test stage, I found that it loads an already trained model, i.e., the VIT_indian.pt file. If I want to run the code on my own dataset, for training I only need to split my data into TE, TR, and label, but for testing I need a checkpoint produced by training on my own data. How can I generate the corresponding .pt file for testing after training on my own dataset? (A generic checkpoint-saving sketch is given after this list.)

    opened by handsomezhuo 3
  • On the processing of the Indian Pines dataset

    Dear Prof. Hong, thank you very much for open-sourcing the code of your work, which helps us understand your approach more deeply. When looking at the dataset processing, I found that your Indian Pines data differ from the official release. Your IndianPine.mat contains three matrices, and I do not quite understand TR and TE: the official release packs the 16 classes into a single Indian_pines_gt.mat, whereas in your code the label map is the sum of TR and TE, yet TR alone is used to determine the number of classes. Inspecting the file, the maximum value in TR is 13, i.e., only 14 classes (including 0). May I ask two questions? First, what do TR and TE represent in IndianPine.mat, and why do they differ from the official data; was there something in the official ground truth you found unreasonable that led you to split it into TR and TE? Second, why is num_classes 14 in your code rather than the 16 classes of the official dataset? TR = data['TR'] TE = data['TE'] input = data['input'] # (145,145,200) label = TR + TE num_classes = np.max(TR) These may be naive questions due to my limited reading; I hope you will bear with me. Looking forward to your reply, and wishing you good health and success in your research!

    opened by chentiancai 3
  • Training problem

    Dear Prof. Hong, I trained with the command python demo.py --dataset='Indian' --epoches=300 --patches=7 --band_patches=3 --mode='CAF' --weight_decay=5e-3, but the resulting accuracy is only about 75%. Could you help me figure out where the problem might be?

    opened by banxianerr 5
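Regarding the question above about producing a .pt file for testing on a custom dataset: the snippet below is only a generic PyTorch checkpoint pattern, sketched under assumptions (the placeholder model and the file name VIT_mydata.pt are invented for illustration); it does not claim to reproduce how demo.py builds or names its checkpoints.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the network built in demo.py; replace with your own.
model = nn.Sequential(nn.Linear(200, 64), nn.ReLU(), nn.Linear(64, 16))

checkpoint_path = './VIT_mydata.pt'  # name chosen by analogy with VIT_indian.pt

# After training on your own TR split, save the learned weights.
torch.save(model.state_dict(), checkpoint_path)

# At test time, rebuild the same architecture and load the saved weights.
model.load_state_dict(torch.load(checkpoint_path, map_location='cpu'))
model.eval()
```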
Owner
Danfeng Hong
Research Scientist, DLR, Germany / Adjunct Scientist, GIPSA-Lab, France / Machine and Deep Learning in Earth Vision