
Overview

Deep learning for Earth Observation

http://www.onera.fr/en/dtim https://www-obelix.irisa.fr/

This repository contains code, network definitions and pre-trained models for working on remote sensing images using deep learning.

We build on the SegNet architecture (Badrinarayanan et al., 2015) to provide a semantic labeling network able to perform dense prediction on remote sensing data. The implementation uses the PyTorch framework.
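For readers unfamiliar with SegNet, the snippet below is a minimal sketch of the idea behind it: the encoder records its max-pooling indices and the decoder reuses them for non-linear upsampling before producing per-pixel class scores. It is illustrative only — the layer widths, class count and helper names are assumptions, not the exact network shipped in this repository.

    import torch
    import torch.nn as nn

    class MiniSegNet(nn.Module):
        """Toy SegNet-style encoder/decoder (a sketch, not the repository's full network)."""
        def __init__(self, in_channels=3, n_classes=6):
            super().__init__()
            # Encoder: conv + batch norm + ReLU, then max-pooling that keeps its indices
            self.enc = nn.Sequential(
                nn.Conv2d(in_channels, 64, 3, padding=1),
                nn.BatchNorm2d(64),
                nn.ReLU(inplace=True),
            )
            self.pool = nn.MaxPool2d(2, 2, return_indices=True)
            # Decoder: unpool with the saved indices, then convolve down to class scores
            self.unpool = nn.MaxUnpool2d(2, 2)
            self.dec = nn.Sequential(
                nn.Conv2d(64, 64, 3, padding=1),
                nn.BatchNorm2d(64),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, n_classes, 3, padding=1),
            )

        def forward(self, x):
            x = self.enc(x)
            x, indices = self.pool(x)    # remember where the maxima were
            x = self.unpool(x, indices)  # upsample back to the input resolution
            return self.dec(x)           # dense (per-pixel) class scores

    # Dense prediction on a random 128x128 IRRG-like patch -> (1, 6, 128, 128) scores
    scores = MiniSegNet()(torch.randn(1, 3, 128, 128))

The real SegNet stacks several such blocks and initializes its encoder from VGG-16; the full definition used here lives in the notebook.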

Motivation

Earth Observation consists of visualizing and understanding our planet using airborne and satellite data. With the release of large amounts of both satellite imagery (e.g. Sentinel and Landsat) and airborne imagery, Earth Observation has entered the Big Data era. Many applications could benefit from the automatic analysis of these datasets: cartography, urban planning, traffic analysis, biomass estimation and so on. Consequently, much progress has been made on using machine learning to better understand Earth Observation data.

In this work, we show that deep learning allows a computer to parse and classify objects in an image, and can be used for automatic cartography from remote sensing data. In particular, we provide examples of deep fully convolutional networks that can be trained for semantic labeling of airborne images of urban areas.

Content

Deep networks

We provide a deep neural network based on the SegNet architecture for semantic labeling of Earth Observation images.

All the pre-trained weights can be found on the OBELIX team website (backup link).

Data

Our example models are trained on the ISPRS Vaihingen dataset and ISPRS Potsdam dataset. We use the IRRG tiles (8-bit format) and build 8-bit composite images using the DSM, nDSM and NDVI.
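As an illustration of how such a composite can be assembled (a sketch only: the file names and the per-band rescaling to 8 bits are assumptions, and the exact channel order used in the repository may differ), one can stack the DSM, nDSM and NDVI into a single 3-channel, 8-bit image:

    import numpy as np
    from skimage import io

    def to_uint8(band):
        """Rescale a single band to the 0-255 range (assumes finite values)."""
        band = band.astype(np.float32)
        band = (band - band.min()) / (band.max() - band.min() + 1e-8)
        return (255 * band).astype(np.uint8)

    # Hypothetical file names -- adapt them to your own tiles
    dsm = io.imread('dsm_area1.tif')
    ndsm = io.imread('ndsm_area1.tif')
    ndvi = io.imread('ndvi_area1.tif')

    # Stack the three bands into one (H, W, 3) 8-bit composite
    composite = np.dstack([to_uint8(dsm), to_uint8(ndsm), to_uint8(ndvi)])
    io.imsave('composite_area1.png', composite)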

You can either use our script from the OSM folder (based on the Maperitive software) to generate OpenStreetMap rasters from the images, or download the OSM tiles from Potsdam here.

The nDSM for the Vaihingen dataset is available here (courtesy of Markus Gerke, see also his webpage). The nDSM for the Potsdam dataset is available here.

How to start

Just run the SegNet_PyTorch_v2.ipynb notebook using Jupyter!

Requirements

Find the right version for your setup and install PyTorch.

Then, you can use pip or any package manager to install the packages listed in requirements.txt, e.g. by using:

pip install -r requirements.txt

References

If you use this work for your projects, please take the time to cite our ISPRS Journal paper:

Nicolas Audebert, Bertrand Le Saux and Sébastien Lefèvre, Beyond RGB: Very High Resolution Urban Remote Sensing With Multimodal Deep Networks, ISPRS Journal of Photogrammetry and Remote Sensing, 2017. (preprint: https://arxiv.org/abs/1711.08681)

@article{audebert_beyond_2017,
title = "Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks",
journal = "ISPRS Journal of Photogrammetry and Remote Sensing",
year = "2017",
issn = "0924-2716",
doi = "https://doi.org/10.1016/j.isprsjprs.2017.11.011",
author = "Nicolas Audebert and Bertrand Le Saux and Sébastien Lefèvre",
keywords = "Deep learning, Remote sensing, Semantic mapping, Data fusion"
}

License

The code (scripts and Jupyter notebooks) is released under the GPLv3 license for non-commercial and research purposes only. For commercial purposes, please contact the authors.

The network weights are released under the Creative Commons BY-NC-SA license (https://creativecommons.org/licenses/by-nc-sa/3.0/). For commercial purposes, please contact the authors.

See LICENSE.md for more details.

Acknowledgements

This work has been conducted at ONERA (DTIM) and IRISA (OBELIX team), with the support of the joint Total-ONERA research project NAOMI.

The Vaihingen data set was provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF).


Comments
  • 'LayerParameter' object has no attribute 'scale'

    I was having some problems when running the following lines:

    # Write the train prototxt in a file
    f = open(MODEL_FOLDER + 'train_segnet.prototxt', 'w')
    f.write(str(net_arch.to_proto()))

    I got the error: 'LayerParameter' object has no attribute 'scale'

    I don't know what causes the error; how can I resolve this problem? The following error message is shown:

      File "training.py", line 213, in
        f.write(str(net_arch.to_proto()))
      File "/home/tukrin1/caffe/python/caffe/net_spec.py", line 189, in to_proto
        top._to_proto(layers, names, autonames)
      File "/home/tukrin1/caffe/python/caffe/net_spec.py", line 97, in _to_proto
        return self.fn._to_proto(layers, names, autonames)
      File "/home/tukrin1/caffe/python/caffe/net_spec.py", line 158, in _to_proto
        assign_proto(layer, k, v)
      File "/home/tukrin1/caffe/python/caffe/net_spec.py", line 64, in assign_proto
        is_repeated_field = hasattr(getattr(proto, name), 'extend')
    AttributeError: 'LayerParameter' object has no attribute 'scale'

    Could anyone help me with this? I appreciate your help!

    opened by zht3344 15
  • error during model training

    Hello Sir, I would like to replace the ISPRS Potsdam dataset with another one, but I always receive this error during training. Concerning the error, please refer to the attached screenshot. Thank you in advance for your help. Best regards. (screenshot attached)

    opened by Lionel2021-belgique 7
  • create_lmdb error

    I use the Potsdam data. When I run create_lmdb.py I get this error:

    Traceback (most recent call last):
      File "create_lmdb.py", line 143, in <module>
        create_image_lmdb(target_folder, samples, bgr=True)
      File "create_lmdb.py", line 92, in create_image_lmdb
        sample = sample[:,:,::-1]
    IndexError: too many indices for array
    

    But when I test the image independently, it runs well:

    sample = io.imread('/home/tukrin1/Breeze/rs/Potsdam/RELEASE_FOLDER/Potsdam/potsdam_128_128_32/irrg_train/5411.png')
    sample = sample[:,:,::-1]
    print sample.shape
    >> (128, 128, 3)
    

    I'm confused about this error. Any idea what to do? Thanks.
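    A common cause of this error is that one tile in the sample list is read as a 2-D, single-band array, so the BGR flip sample[:,:,::-1] has one index too many. A small guard such as the following (a sketch, not the repository's create_lmdb.py) makes this easier to diagnose:

    import numpy as np
    from skimage import io

    def load_bgr(path):
        """Read an image and return it as a 3-channel BGR array (sketch)."""
        sample = io.imread(path)
        if sample.ndim == 2:
            # Single-band tile: replicate it to 3 channels (or report/skip it instead)
            sample = np.dstack([sample] * 3)
        return sample[:, :, ::-1]  # RGB -> BGR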

    opened by tongyl 7
  • Operation on cpu

    Hi @nshaud, what changes should be made to use the CPU rather than the GPU? The calls to .cuda() definitely have to be removed, but what should replace them? Could you explicitly list all the lines that need to be modified to run on the CPU?

    Thanks in advance
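    For reference, a minimal device-agnostic pattern (a sketch with placeholder tensors, not a patch against the notebook) replaces hard-coded .cuda() calls with .to(device), which falls back to the CPU when no GPU is available:

    import torch

    # Fall back to the CPU automatically when no GPU is available
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    net = torch.nn.Conv2d(3, 6, 3, padding=1).to(device)  # stand-in for the real network
    data = torch.randn(1, 3, 128, 128).to(device)         # stand-in for an input batch
    output = net(data)                                     # runs on GPU or CPU transparently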

    opened by FideliusC 5
  • run the code with  ISPRS Vaihingen data

    I downloaded the data. When I create the label LMDB I get an error, because the code assumes the labels are 1-dimensional while mine have 3 dimensions. Did you do something to the original labels? I use this path for the labels - "~/ISPRS/Vaihingen/ISPRS_semantic_labeling_Vaihingen/gts_for_participants" - so I changed the code that reads the labels into the LMDB. But of course I then have a problem in the loss layer, because again the net expects 1 dimension and not 3. Do you have a solution for me? Thank you.
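    For reference, the ground truth in gts_for_participants is color-coded RGB; a sketch of converting it to a single-channel index map using the standard ISPRS color code (the helper name here is hypothetical) looks like this:

    import numpy as np

    # Standard ISPRS color code -> class index
    PALETTE = {
        (255, 255, 255): 0,  # impervious surfaces
        (0, 0, 255): 1,      # building
        (0, 255, 255): 2,    # low vegetation
        (0, 255, 0): 3,      # tree
        (255, 255, 0): 4,    # car
        (255, 0, 0): 5,      # clutter
    }

    def rgb_to_label(gt_rgb):
        """Convert a (H, W, 3) color-coded ground truth to a (H, W) index map."""
        labels = np.zeros(gt_rgb.shape[:2], dtype=np.uint8)
        for color, idx in PALETTE.items():
            labels[np.all(gt_rgb == np.array(color), axis=-1)] = idx
        return labels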

    opened by oneOfThePeople 5
  • accuracy of your SegNet model

    @nshaud
    I just ran the default PyTorch code SegNet_PyTorch_v2.ipynb. However, the accuracy on Vaihingen is about 86%, not as good as in your paper. Then I read your paper again and found that you have made some changes. Do you have any plan to share the code from the Beyond RGB paper?

    enhancement 
    opened by hurricane2018 4
  • Using images with nodata pixels

    Hello, I am using the pytorch version of your code to detect wetlands from a 3-band geotiff made up of topographic indices. My verification data is made up of manually surveyed wetland delineations, so it has an irregular shape. I use the wetland surveys as ground truth wetland and nonwetland areas for training and testing purposes by splitting the surveyed area into separate tiles and randomly splitting these into the two groups. Of course, some of the training tiles end up having no data pixels, representing land cover that is not known to be either wetland or nonwetland because it extends further than the limits of the original wetland surveys. If I limit my training set to only those tiles without nodata pixels, I get reasonable predictions. However, I'm already VERY limited in my training data size, relative to most deep learning applications, and would rather include all training data possible by letting the code just ignore these sets of pixels.

    So far I've tried:

    1. setting the nodata pixels to a value outside of the range(len(LABELS)) i.e., nonwetlands = 0, wetlands = 1 and nodata >1 or < 0
    2. including nodata value in the list of target labels i.e., nodata=0, nonwetlands=1, wetlands=2 and then adjusting the WEIGHTS parameter to give nodata a weight of 0.
    3. Both 1 and 2 PLUS setting ignore_index = [nodata value] when calling cross_entropy()

    These attempts either cause loss = nan or produce poor results after a couple of epochs (predict everything as nonwetland)

    Do you have other ideas for signaling to pytorch to ignore a certain type of pixel? I'm very new to deep learning and pytorch so any insight would be very helpful! Thanks! Example training image and corresponding label tile that have nodata pixels

    (attachments: training_EX, image_EX)
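    As a general note (a sketch with made-up class indices, not a fix verified on this repository), PyTorch's cross_entropy takes ignore_index as a single class index, not a list, and the nodata pixels must carry exactly that index in the label tiles:

    import torch
    import torch.nn.functional as F

    NODATA = 255  # hypothetical index reserved for unknown pixels

    # Fake scores (N, C, H, W) and labels (N, H, W) with some nodata pixels
    scores = torch.randn(2, 2, 64, 64, requires_grad=True)
    labels = torch.randint(0, 2, (2, 64, 64))
    labels[:, :8, :8] = NODATA  # pretend this corner falls outside the survey

    # Pixels labeled NODATA contribute neither to the loss nor to its gradient
    loss = F.cross_entropy(scores, labels, ignore_index=NODATA)
    loss.backward()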

    question 
    opened by GinaONeil 4
  • ImportError: No module named _caffe

    After the installation, I tried to run inference_patches.py, and an "ImportError: No module named _caffe" error occurred at:

    sys.path.insert(0,CAFFE_ROOT+"/python")
    import caffe
    

    I read some other notes about similar problems, which say that _caffe.so should be in the /python folder after running make pycaffe.

    I can find libcaffe.so and libcaffe.so.1.0.0-rc4 in /caffe/build/lib, but in the /python folder I couldn't find any _caffe.so. To make _caffe.so as suggested by others, I tried 'make pycaffe' inside the /python folder, but a 'no rule to make target pycaffe' error occurred.

    opened by helxsz 4
  • segnet_isprs_vaihingen_irrg.prototxt

    Dear authors, in segnet_isprs_vaihingen_irrg.prototxt the number of outputs is set to 6, but the real number of classes in the dataset is 7 ('imp_surfaces', 'building', 'low_vegetation', 'tree', 'car', 'clutter', 'unclassified').

    So I think I am misunderstanding something, because when I try to adapt your example to my dataset with 3 labels (A, B and 'unclassified'), setting the number of outputs to 2, I get the error: 'error == cudaSuccess (77 vs. 0) an illegal memory access was encountered'.

    If I set the number of outputs to 3, it works... but this is not what I want, because in that case I am training the net on the unclassified values, and I don't want to do that. I only want to train the net on the A and B labels.

    I tried to create the dataset with NaN values, but it crashes during the LMDB creation.

    Is there any way to avoid the unclassified values in the net? Can you help me? Thanks in advance.

    opened by jorgenaya 4
  •  nDSM DATA of  Vaihingen Dataset

    Dear Dr. Audebert @nshaud:

    Thank you for your wonderful work "Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks" [1]. However, I can't find the nDSM of the Vaihingen dataset. Could you provide it? Thank you very much!

    [1] Audebert, Nicolas, Bertrand Le Saux, and Sébastien Lefèvre. "Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks." ISPRS Journal of Photogrammetry and Remote Sensing 140 (2018): 20-32.

    Best,

    opened by hurricane2018 3
  • Typos to load Potsdam data

    Hi, in the notebook, the parser that loads the Potsdam data must be modified.

    Original:
        all_ids = ["".join(f.split('_')[3:5]) for f in all_files]
    What works:
        all_ids = ["".join(f.split('_')[5:7]) for f in all_files]

    And there is a typo in DATA_FOLDER for Potsdam: it begins with a "3", not with a "Y".

    bug 
    opened by gaslen 3
  • the use of another dataset

    Hello Sir, I tried to apply the SegNet-PyTorch code you developed on the SEMCITY TOULOUSE dataset (http://rs.ipb.uni-bonn.de/data/semcity-toulouse/).

    I want to inform you that I have only changed the classes in the code. Please find below the results I obtained:

        impervious surface = nan
        building = 0.99
        pervious surface = nan
        high vegetation = nan
        car = nan
        water = nan
        sport venues = nan

    I would like to know why all the classes except Building are nan. Thank you for your help.

    opened by Lionel2021-belgique 0
  • PyTorch 0.4 compliance

    Since PyTorch 0.4, some things have been deprecated and should be fixed (see the sketch after this list):

    • [ ] Variable does not exist anymore
    • [ ] loss.data[0] should now be loss.item()
    • [ ] volatile flag does not do anything, use no_grad instead
    • [ ] .to() method allows us to write device-agnostic code
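    A before/after sketch of these changes (illustrative only; the model and tensors below are placeholders, not the repository's code):

    import torch

    net = torch.nn.Linear(10, 2)             # placeholder model
    x = torch.randn(4, 10)                   # plain tensors replace Variable wrappers
    target = torch.randint(0, 2, (4,))
    criterion = torch.nn.CrossEntropyLoss()

    # Device-agnostic code via .to()
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    net, x, target = net.to(device), x.to(device), target.to(device)

    loss = criterion(net(x), target)
    print(loss.item())                       # instead of loss.data[0]

    with torch.no_grad():                    # instead of the volatile flag
        preds = net(x).argmax(dim=1)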
    enhancement 
    opened by nshaud 0