Deep learning toolbox based on PyTorch for hyperspectral data classification.

Overview

DeepHyperX

A Python tool to perform deep learning experiments on various hyperspectral datasets.

https://www.onera.fr/en/research/information-processing-and-systems-domain

https://www-obelix.irisa.fr/

Reference

This toolbox was used for our review paper in IEEE Geoscience and Remote Sensing Magazine:

N. Audebert, B. Le Saux and S. Lefevre, "Deep Learning for Classification of Hyperspectral Data: A Comparative Review," in IEEE Geoscience and Remote Sensing Magazine, vol. 7, no. 2, pp. 159-173, June 2019.

BibTeX format:

@article{8738045,
  author={N. {Audebert} and B. {Le Saux} and S. {Lefèvre}},
  journal={IEEE Geoscience and Remote Sensing Magazine},
  title={Deep Learning for Classification of Hyperspectral Data: A Comparative Review},
  year={2019},
  volume={7},
  number={2},
  pages={159-173},
  doi={10.1109/MGRS.2019.2912563},
  ISSN={2373-7468},
  month={June},
}

Requirements

This tool is compatible with Python 2.7 and Python 3.5+.

It is based on the PyTorch deep learning and GPU computing framework and uses the Visdom visualization server.

Setup

The easiest way to install this code is to create a Python virtual environment and to install dependencies using: pip install -r requirements.txt

(on Windows you should use pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html)
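For example, on Linux or macOS the whole setup could look like this (assuming Python 3 with the built-in venv module; adapt the paths to your system):

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt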

Docker

Alternatively, it is possible to run the Docker image.

Grab the image using:

docker pull registry.gitlab.inria.fr/naudeber/deephyperx:preview

And then run the image using:

docker run -p 9999:8097 -ti --rm -v `pwd`:/workspace/DeepHyperX/ registry.gitlab.inria.fr/naudeber/deephyperx:preview

This command:

  • starts a Docker container from the image registry.gitlab.inria.fr/naudeber/deephyperx:preview
  • starts an interactive shell session (-ti)
  • mounts the current folder at the /workspace/DeepHyperX/ path inside the container
  • binds the local port 9999 to the container port 8097 (for Visdom)
  • removes the container when the user has finished (--rm).

All data and products are stored in the current folder.

Users can also build the Docker image locally from the Dockerfile with the command docker build .

Hyperspectral datasets

Several public hyperspectral datasets are available on the UPV/EHU wiki. Users can download those beforehand or let the tool download them. The default dataset folder is ./Datasets/, although this can be modified at runtime using the --folder argument.
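For example, to point the tool at datasets stored elsewhere (the path below is hypothetical): python main.py --model SVM --dataset IndianPines --folder /data/hyperspectral/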

At this time, the tool automatically downloads the following public datasets:

  • Pavia University
  • Pavia Center
  • Kennedy Space Center
  • Indian Pines
  • Botswana

The Data Fusion Contest 2018 hyperspectral dataset is also preconfigured, although users need to download it from the DASE website and store it in the dataset folder under DFC2018_HSI.

An example dataset folder has the following structure:

Datasets
├── Botswana
│   ├── Botswana_gt.mat
│   └── Botswana.mat
├── DFC2018_HSI
│   ├── 2018_IEEE_GRSS_DFC_GT_TR.tif
│   ├── 2018_IEEE_GRSS_DFC_HSI_TR
│   ├── 2018_IEEE_GRSS_DFC_HSI_TR.aux.xml
│   └── 2018_IEEE_GRSS_DFC_HSI_TR.HDR
├── IndianPines
│   ├── Indian_pines_corrected.mat
│   └── Indian_pines_gt.mat
├── KSC
│   ├── KSC_gt.mat
│   └── KSC.mat
├── PaviaC
│   ├── Pavia_gt.mat
│   └── Pavia.mat
└── PaviaU
    ├── PaviaU_gt.mat
    └── PaviaU.mat

Adding a new dataset

Adding a custom dataset can be done by modifying the custom_datasets.py file. Developers should add a new entry to the CUSTOM_DATASETS_CONFIG variable and define a specific data loader for their use case.
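For illustration, a new entry could look like the sketch below. The dictionary keys and the loader's return values are assumptions modelled on the bundled DFC2018 entry and may differ between versions, so compare against the existing code in custom_datasets.py before relying on them.

# custom_datasets.py -- illustrative sketch only; the dictionary keys and the
# loader's return values are assumptions, mirror the bundled DFC2018 entry for
# the exact contract expected by your version of the toolbox.
from scipy import io

def my_dataset_loader(folder):
    # Load the hyperspectral cube and the ground truth from .mat files
    # (the file and variable names below are hypothetical).
    img = io.loadmat(folder + "MyDataset.mat")["my_dataset"]
    gt = io.loadmat(folder + "MyDataset_gt.mat")["my_dataset_gt"].astype("uint8")
    rgb_bands = (55, 41, 12)            # bands used for the RGB preview
    label_values = ["Undefined", "Class 1", "Class 2"]
    ignored_labels = [0]                # label 0 is usually "undefined"
    palette = None                      # let the tool pick a colour palette
    return img, gt, rgb_bands, ignored_labels, label_values, palette

CUSTOM_DATASETS_CONFIG = {
    "MyDataset": {
        "img": "MyDataset.mat",         # expected image file in the dataset folder
        "gt": "MyDataset_gt.mat",       # expected ground truth file
        "download": False,              # no automatic download for this dataset
        "loader": lambda folder: my_dataset_loader(folder),
    }
}

The dataset would then be selected with --dataset MyDataset once its files are present in the dataset folder.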

Models

Currently, this tool implements several SVM variants from the scikit-learn library and many state-of-the-art deep networks implemented in PyTorch.

Adding a new model

Adding a custom deep network can be done by modifying the models.py file. This implies creating a new class for the custom deep network and altering the get_model function.
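For illustration, a minimal per-spectrum classifier could be added along the lines of the sketch below. The constructor arguments and the get_model registration shown in the trailing comment are assumptions, not the toolbox's actual API; mirror one of the existing classes in models.py for the exact interface.

# models.py -- illustrative sketch only; the constructor arguments and the
# hyperparameter keys in the trailing comment are assumptions.
import torch
import torch.nn as nn

class MyNet(nn.Module):
    """Small fully connected baseline classifying each spectrum independently."""

    def __init__(self, input_channels, n_classes):
        super(MyNet, self).__init__()
        self.fc1 = nn.Linear(input_channels, 256)
        self.fc2 = nn.Linear(256, n_classes)

    def forward(self, x):
        # x: (batch, input_channels) spectra -> (batch, n_classes) class scores
        x = torch.relu(self.fc1(x.view(x.size(0), -1)))
        return self.fc2(x)

# get_model would then need a new branch, roughly (hypothetical variable names):
#   elif name == "mynet":
#       model = MyNet(n_bands, n_classes)
#       lr = kwargs.setdefault("learning_rate", 0.001)
#       optimizer = torch.optim.Adam(model.parameters(), lr=lr)
#       criterion = nn.CrossEntropyLoss(weight=kwargs["weights"])

The new model would then be selected with --model mynet (or whatever string the new branch matches).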

Usage

Start a Visdom server: python -m visdom.server and go to http://localhost:8097 to see the visualizations (or http://localhost:9999 if you use Docker).

Then, run the script main.py.

The most useful arguments are:

  • --model to specify the model (e.g. 'svm', 'nn', 'hamida', 'lee', 'chen', 'li'),
  • --dataset to specify which dataset to use (e.g. 'PaviaC', 'PaviaU', 'IndianPines', 'KSC', 'Botswana'),
  • the --cuda switch to run the neural networks on the GPU. The tool falls back to the CPU if this switch is not specified.

More parameters are available to control the behaviour of the tool more finely. See python main.py -h for more information.

Examples:

  • python main.py --model SVM --dataset IndianPines --training_sample 0.3 This runs a grid search with an SVM on the Indian Pines dataset, using 30% of the samples for training and the rest for testing. Results are displayed in the Visdom panel.
  • python main.py --model nn --dataset PaviaU --training_sample 0.1 --cuda This runs on GPU a basic 4-layer fully connected neural network on the Pavia University dataset, using 10% of the samples for training.
  • python main.py --model hamida --dataset PaviaU --training_sample 0.5 --patch_size 7 --epoch 50 --cuda This runs on GPU the 3D CNN from Hamida et al. on the Pavia University dataset with a patch size of 7, using 50% of the samples for training and optimizing for 50 epochs.

Say Thanks!

Comments
  • Defining Train Set and Test Set

    Hi,

    Anyone know how to define the training and test set?

    I have the GT defined in a mat file and I put the path in but it comes back with the following error:

    python main.py --model nn --dataset Selene1TestX --train_set C:\Users\bbop1\hsi-toolbox-master\DeepHyperX\Datasets\Selene1TrainX\Sub1TargetMapPyTrainX.mat --cuda 0

    Setting up a new session...
    Image has dimensions 1250x1596 and 134 channels
    Traceback (most recent call last):
      File "main.py", line 275, in <module>
        test_gt[(train_gt > 0)[:w,:h]] = 0
    TypeError: '>' not supported between instances of 'dict' and 'int'

    I have a feeling the train gt needs to be defined as a dictionary. Has anyone done this?

    Cheers,

    Bop

    opened by bbop1983 5
  • /bin/sh: 0: Can't open start.sh

    Hello,

    I tried running the container on both Docker and Singularity; however, I am getting this error "/bin/sh: 0: Can't open start.sh". Is there a way to fix this?

    This is the command I ran on Docker: docker run -p 9999:8097 -ti --rm -v `pwd`:/workspace/DeepHyperX/ registry.gitlab.inria.fr/naudeber/deephyperx:rc2, and on Singularity: singularity run ./deephyperxtest_latest.sif.

    Thank you!

    opened by 5MI7th3MI6 3
  • How to get stable results?

    Thanks for your great work! I ran the code with "--run 10", but the results of each run fluctuate wildly. I adjusted the learning rate, batch size, etc., but it didn't help. Could you give me some advice? Thank you!

    opened by likyoo 2
  • pip install torch error - Windows 10

    Had trouble doing pip install torch using the requirements.txt; it seems to be a Windows issue (https://github.com/pytorch/pytorch/issues/29395). You might want to make a note of this? Thank you!

    opened by sbaber1 2
  • I run python main.py --model nn --dataset PaviaU --training_sample 0.1 --cuda 1, then this happens

    Network :
    Traceback (most recent call last):
      File "main.py", line 301, in <module>
        summary(model.to(hyperparams['device']), input.size()[1:], device=hyperparams['device'])
      File "/home/xj/anaconda3/lib/python3.6/site-packages/torchsummary/torchsummary.py", line 44, in summary
        device = device.lower()
    AttributeError: 'torch.device' object has no attribute 'lower'

    opened by JoJo-ops 2
  • problem for weights with ignore_label

    Describe the bug
    When I run this demo, the result of the confusion matrix is not correct: cm[0,0] always equals zero. So I checked the weights parameter:
    'weights': tensor([0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,])
    weights[0] is always 0. The get_model function instantiates the weights:
    weights = torch.ones(n_classes)
    weights[torch.LongTensor(kwargs["ignored_labels"])] = 0.0
    I think it may be caused by ignored_labels=[0], so I replaced ignored_labels=[0] with ignored_labels=[], but it didn't work. When I delete or comment out weights[torch.LongTensor(kwargs["ignored_labels"])] = 0.0, the initial weights are:
    'weights': tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,])

    When I replace ignored_labels=[0] with ignored_labels=[2], the weights are:
    'weights': tensor([0., 1., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,])
    Either way, weights[0] always equals zero.

    To Reproduce
    Steps to reproduce the behavior (e.g. the command that you used).

    Expected behavior A clear and concise description of what you expected to happen.

    Desktop (please complete the following information):

    • OS: Windows
    • CUDA : yes
    opened by bixiu 1
  • support image normalization

    In this commit, common normalization methods are implemented in the normalise_image function of utils.py and a normalization argument is added to main.py.

    opened by mengxue-rs 1
  • Salinas loading

    I suffered a loading failure before examining the name of the data. The code now runs after the following change:

    #img = open_file(folder + 'Salinas.mat')['Salinas_corrected']
    img = open_file(folder + 'Salinas_corrected.mat')['salinas_corrected']
    #gt = open_file(folder + 'Salinas_gt.mat')['Salinas_gt']
    gt = open_file(folder + 'Salinas_gt.mat')['salinas_gt']

    bug 
    opened by gxwangupc 1
  • OSError: [Errno 22] Invalid argument

    Traceback (most recent call last):
      File "main.py", line 312, in <module>
        display=viz)
      File "C:\Users\yang6\PycharmProjects\DeepHyperX\models.py", line 1059, in train
        save_model(net, camel_to_snake(str(net.__class__.__name__)), data_loader.dataset.name, epoch=e, metric=abs(metric))
      File "C:\Users\yang6\PycharmProjects\DeepHyperX\models.py", line 1068, in save_model
        torch.save(model.state_dict(), model_dir + filename + '.pth')
      File "C:\Users\yang6\Anaconda3\envs\DeepHyperX\lib\site-packages\torch\serialization.py", line 260, in save
        return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
      File "C:\Users\yang6\Anaconda3\envs\DeepHyperX\lib\site-packages\torch\serialization.py", line 183, in _with_file_like
        f = open(f, mode)
    OSError: [Errno 22] Invalid argument: './checkpoints/hamida_et_al/PaviaU/2020-07-03 10:52:46.094626_epoch2_0.78.pth'

    I am very confused about this error. How to solve it?

    opened by GreatBruceYoung 1
  • Inverse Median Frequency Weights functioning?

    Hi!

    I have been using this framework (thank you), but I cannot tell whether the inverse median frequency weights are used at all. It seems to me that they are created after the loss function is constructed and thus never passed anywhere. Is this assumption wrong on my part? Where do they come into play? I assumed the weights would be passed to the get_model function through the hyperparameters, then used to construct the loss function and not be overwritten by the new initialization of the weight vector. Thank you!

    opened by Oscared 1
  • About the model "Sharma(2DCNN)"

    Hi, thank you very much for your great work. I have a question about the model "sharma" and would like to ask for your help.

    I can't understand the input "x" of the network:
    why x = torch.zeros((1, 1, self.input_channels, self.patch_size, self.patch_size))? What is b, t, c, w, h = x.size()?

    I am looking forward to your reply, thanks!

    opened by g185211 0
  • Bump joblib from 0.14.1 to 1.2.0

    Bumps joblib from 0.14.1 to 1.2.0.

    Changelog

    Sourced from joblib's changelog.

    Release 1.2.0

    • Fix a security issue where eval(pre_dispatch) could potentially run arbitrary code. Now only basic numerics are supported. joblib/joblib#1327

    • Make sure that joblib works even when multiprocessing is not available, for instance with Pyodide joblib/joblib#1256

    • Avoid unnecessary warnings when workers and main process delete the temporary memmap folder contents concurrently. joblib/joblib#1263

    • Fix memory alignment bug for pickles containing numpy arrays. This is especially important when loading the pickle with mmap_mode != None as the resulting numpy.memmap object would not be able to correct the misalignment without performing a memory copy. This bug would cause invalid computation and segmentation faults with native code that would directly access the underlying data buffer of a numpy array, for instance C/C++/Cython code compiled with older GCC versions or some old OpenBLAS written in platform specific assembly. joblib/joblib#1254

    • Vendor cloudpickle 2.2.0 which adds support for PyPy 3.8+.

    • Vendor loky 3.3.0 which fixes several bugs including:

      • robustly forcibly terminating worker processes in case of a crash (joblib/joblib#1269);

      • avoiding leaking worker processes in case of nested loky parallel calls;

      • reliability spawn the correct number of reusable workers.

    Release 1.1.0

    • Fix byte order inconsistency issue during deserialization using joblib.load in cross-endian environment: the numpy arrays are now always loaded to use the system byte order, independently of the byte order of the system that serialized the pickle. joblib/joblib#1181

    • Fix joblib.Memory bug with the ignore parameter when the cached function is a decorated function.

    ... (truncated)

    Commits
    • 5991350 Release 1.2.0
    • 3fa2188 MAINT cleanup numpy warnings related to np.matrix in tests (#1340)
    • cea26ff CI test the future loky-3.3.0 branch (#1338)
    • 8aca6f4 MAINT: remove pytest.warns(None) warnings in pytest 7 (#1264)
    • 067ed4f XFAIL test_child_raises_parent_exits_cleanly with multiprocessing (#1339)
    • ac4ebd5 MAINT add back pytest warnings plugin (#1337)
    • a23427d Test child raises parent exits cleanly more reliable on macos (#1335)
    • ac09691 [MAINT] various test updates (#1334)
    • 4a314b1 Vendor loky 3.2.0 (#1333)
    • bdf47e9 Make test_parallel_with_interactively_defined_functions_default_backend timeo...
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight

    When I run "python main.py --model hamida --dataset PaviaU --training_sample 0.5 --patch_size 7 --epoch 10 --cuda 1", it raises RuntimeError: expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'

    opened by Heboxian 0
  • can't reproduce the results

    Hi, thanks for your great work. Recently I have been trying to reproduce the experiments from the paper, but I could not find the exact experiment settings in the experimental section. I am wondering if you could give me some suggestions on the initialization of the parameters, such as the learning_rate, training_sample, etc.

    Here is the result of running the command python main.py --model hamida --dataset IndianPines --training_sample 0.1 --cuda 0. I found its accuracy falls short of the result reported in the paper.

    Looking forward to your early reply ~ Thanks for your consideration.

    opened by challow0 0
Owner
Nicolas
Assistant professor in Computer Science. Researcher on computer vision and deep learning.
PyTorch implementation of our method for adversarial attacks and defenses in hyperspectral image classification.

Self-Attention Context Network for Hyperspectral Image Classification PyTorch implementation of our method for adversarial attacks and defenses in hyp

null 22 Dec 2, 2022
FPGA: Fast Patch-Free Global Learning Framework for Fully End-to-End Hyperspectral Image Classification

FPGA & FreeNet Fast Patch-Free Global Learning Framework for Fully End-to-End Hyperspectral Image Classification by Zhuo Zheng, Yanfei Zhong, Ailong M

Zhuo Zheng 92 Jan 3, 2023
Danfeng Hong, Lianru Gao, Jing Yao, Bing Zhang, Antonio Plaza, Jocelyn Chanussot. Graph Convolutional Networks for Hyperspectral Image Classification, IEEE TGRS, 2021.

Graph Convolutional Networks for Hyperspectral Image Classification Danfeng Hong, Lianru Gao, Jing Yao, Bing Zhang, Antonio Plaza, Jocelyn Chanussot T

Danfeng Hong 154 Dec 13, 2022
Semi-Supervised Graph Prototypical Networks for Hyperspectral Image Classification, IGARSS, 2021.

Semi-Supervised Graph Prototypical Networks for Hyperspectral Image Classification, IGARSS, 2021. Bobo Xi, Jiaojiao Li, Yunsong Li and Qian Du. Code f

Bobo Xi 7 Nov 3, 2022
Spectralformer: Rethinking hyperspectral image classification with transformers

The code in this toolbox implements the "Spectralformer: Rethinking hyperspectral image classification with transformers". More specifically, it is detailed as follow.

Danfeng Hong 104 Jan 4, 2023
Graph Regularized Residual Subspace Clustering Network for hyperspectral image clustering

Graph Regularized Residual Subspace Clustering Network for hyperspectral image clustering

Yaoming Cai 5 Jul 18, 2022
Paddle-Adversarial-Toolbox (PAT) is a Python library for Deep Learning Security based on PaddlePaddle.

Paddle-Adversarial-Toolbox Paddle-Adversarial-Toolbox (PAT) is a Python library for Deep Learning Security based on PaddlePaddle. Model Zoo Common FGS

AgentMaker 17 Nov 8, 2022
Hl classification bc - A Network-Based High-Level Data Classification Algorithm Using Betweenness Centrality

A Network-Based High-Level Data Classification Algorithm Using Betweenness Centr

Esteban Vilca 3 Dec 1, 2022
A graph adversarial learning toolbox based on PyTorch and DGL.

GraphWar: Arms Race in Graph Adversarial Learning NOTE: GraphWar is still in the early stages and the API will likely continue to change. Installat

Jintang Li 54 Jan 5, 2023
mmfewshot is an open source few shot learning toolbox based on PyTorch

OpenMMLab FewShot Learning Toolbox and Benchmark

OpenMMLab 514 Dec 28, 2022
Ludwig is a toolbox that allows to train and evaluate deep learning models without the need to write code.

Translated in Korean / Ludwig is a toolbox that allows users to train and test deep learning models without the need to write code. It is built on

Ludwig 8.7k Jan 5, 2023
Ludwig is a toolbox that allows to train and evaluate deep learning models without the need to write code.

Translated in Korean / Ludwig is a toolbox that allows users to train and test deep learning models without the need to write code. It is built on

Ludwig 8.7k Dec 31, 2022
deep-table implements various state-of-the-art deep learning and self-supervised learning algorithms for tabular data using PyTorch.

deep-table implements various state-of-the-art deep learning and self-supervised learning algorithms for tabular data using PyTorch.

null 63 Oct 17, 2022
MMDetection3D is an open source object detection toolbox based on PyTorch

MMDetection3D is an open source object detection toolbox based on PyTorch, towards the next-generation platform for general 3D detection. It is a part of the OpenMMLab project developed by MMLab.

OpenMMLab 3.2k Jan 5, 2023
A semantic segmentation toolbox based on PyTorch

Introduction vedaseg is an open source semantic segmentation toolbox based on PyTorch. Features Modular Design We decompose the semantic segmentation

null 407 Dec 15, 2022
LaneDet is an open source lane detection toolbox based on PyTorch that aims to pull together a wide variety of state-of-the-art lane detection models

LaneDet is an open source lane detection toolbox based on PyTorch that aims to pull together a wide variety of state-of-the-art lane detection models. Developers can reproduce these SOTA methods and build their own methods.

TuZheng 405 Jan 4, 2023
MMFlow is an open source optical flow toolbox based on PyTorch

Documentation: https://mmflow.readthedocs.io/ Introduction English | 简体中文 MMFlow is an open source optical flow toolbox based on PyTorch. It is a part

OpenMMLab 688 Jan 6, 2023
An open source object detection toolbox based on PyTorch

MMDetection is an open source object detection toolbox based on PyTorch. It is a part of the OpenMMLab project.

Bo Chen 24 Dec 28, 2022
Mmdetection3d Noted - MMDetection3D is an open source object detection toolbox based on PyTorch

MMDetection3D is an open source object detection toolbox based on PyTorch

Jiangjingwen 13 Jan 6, 2023