ChangeFormer: A Transformer-Based Siamese Network for Change Detection

Overview

ChangeFormer: A Transformer-Based Siamese Network for Change Detection (Under review at IGARSS-2022)

Wele Gedara Chaminda Bandara, Vishal M. Patel

Here we provide the PyTorch implementation of the paper: A Transformer-Based Siamese Network for Change Detection.

For more information, please see our paper on arXiv.


Requirements

Python 3.8.0
PyTorch 1.10.1
torchvision 0.11.2
einops 0.3.2

Please see requirements.txt for the remaining dependencies, which can be installed with pip.
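
To quickly verify that your environment matches the versions above, you can run a small check like the following (a minimal sketch; the expected version strings are the ones listed above):

# Environment check: print installed versions for comparison with the list above.
import torch
import torchvision
import einops

print("PyTorch:", torch.__version__)            # expect 1.10.1
print("torchvision:", torchvision.__version__)  # expect 0.11.2
print("einops:", einops.__version__)            # expect 0.3.2
print("CUDA available:", torch.cuda.is_available())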

Installation

Clone this repo:

git clone https://github.com/wgcban/ChangeFormer.git
cd ChangeFormer

Quick Start on LEVIR dataset

We have some samples from the LEVIR-CD dataset in the folder samples_LEVIR for a quick start.

First, download our pretrained ChangeFormerV6 model from Dropbox. After downloading the pretrained model, put it in checkpoints/ChangeFormer_LEVIR/.

Then, run a demo to get started as follows:

python demo_LEVIR.py

After that, you can find the prediction results in samples/predict_LEVIR.
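
Under the hood, the demo performs a standard Siamese inference pass over a pre-change/post-change image pair. The sketch below illustrates the idea; it is a minimal, hypothetical version (the constructor arguments, the checkpoint key model_G_state_dict, and the sample file name test_1.png are assumptions), so see demo_LEVIR.py for the actual script:

# Minimal inference sketch (assumptions noted above; not the official demo code).
import torch
from PIL import Image
from torchvision import transforms

from models.ChangeFormer import ChangeFormerV6

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = ChangeFormerV6(embed_dim=256).to(device)

# Assumed checkpoint layout: generator weights stored under "model_G_state_dict".
ckpt = torch.load("checkpoints/ChangeFormer_LEVIR/best_ckpt.pt", map_location=device)
model.load_state_dict(ckpt["model_G_state_dict"])
model.eval()

to_tensor = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
img_A = to_tensor(Image.open("samples_LEVIR/A/test_1.png").convert("RGB")).unsqueeze(0).to(device)
img_B = to_tensor(Image.open("samples_LEVIR/B/test_1.png").convert("RGB")).unsqueeze(0).to(device)

with torch.no_grad():
    preds = model(img_A, img_B)           # multi-scale predictions; the last is full resolution
    change_map = preds[-1].argmax(dim=1)  # 0 = no change, 1 = change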

Quick Start on DSIFN dataset

We have some samples from the DSIFN-CD dataset in the folder samples_DSIFN for a quick start.

First, download our pretrained ChangeFormerV6 model from Dropbox. After downloading the pretrained model, put it in checkpoints/ChangeFormer_DSIFN/.

Then, run a demo to get started as follows:

python demo_DSIFN.py

After that, you can find the prediction results in samples/predict_DSIFN.

Train on LEVIR-CD

You can find the training script run_ChangeFormer_LEVIR.sh in the folder scripts. Run it from the command line with sh scripts/run_ChangeFormer_LEVIR.sh.

The detailed script file run_ChangeFormer_LEVIR.sh is as follows:

#!/usr/bin/env bash

#GPUs
gpus=0

#Set paths
checkpoint_root=/media/lidan/ssd2/ChangeFormer/checkpoints
vis_root=/media/lidan/ssd2/ChangeFormer/vis
data_name=LEVIR


img_size=256    
batch_size=16   
lr=0.0001         
max_epochs=200
embed_dim=256

net_G=ChangeFormerV6        #ChangeFormerV6 is the finalized version

lr_policy=linear
optimizer=adamw                 #Choices: sgd (set lr to 0.01), adam, adamw
loss=ce                         #Choices: ce, fl (Focal Loss), miou
multi_scale_train=True
multi_scale_infer=False
shuffle_AB=False

#Initializing from pretrained weights
pretrain=/media/lidan/ssd2/ChangeFormer/pretrained_segformer/segformer.b2.512x512.ade.160k.pth

#Train and Validation splits
split=train         #trainval
split_val=test      #test
project_name=CD_${net_G}_${data_name}_b${batch_size}_lr${lr}_${optimizer}_${split}_${split_val}_${max_epochs}_${lr_policy}_${loss}_multi_train_${multi_scale_train}_multi_infer_${multi_scale_infer}_shuffle_AB_${shuffle_AB}_embed_dim_${embed_dim}

CUDA_VISIBLE_DEVICES=1 python main_cd.py --img_size ${img_size} --loss ${loss} --checkpoint_root ${checkpoint_root} --vis_root ${vis_root} --lr_policy ${lr_policy} --optimizer ${optimizer} --pretrain ${pretrain} --split ${split} --split_val ${split_val} --net_G ${net_G} --multi_scale_train ${multi_scale_train} --multi_scale_infer ${multi_scale_infer} --gpu_ids ${gpus} --max_epochs ${max_epochs} --project_name ${project_name} --batch_size ${batch_size} --shuffle_AB ${shuffle_AB} --data_name ${data_name}  --lr ${lr} --embed_dim ${embed_dim}
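
The loss option fl above refers to Focal Loss. As a reference point, here is a minimal sketch of the standard multi-class focal loss (shown for illustration; the repository's own implementation may differ in details):

# Standard multi-class focal loss (illustrative sketch, not the repo's code).
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0, weight=None):
    # logits: (N, C, H, W); target: (N, H, W) with integer class indices.
    log_prob = F.log_softmax(logits, dim=1)
    prob = log_prob.exp()
    # Cross-entropy re-weighted by (1 - p_t)^gamma to down-weight easy pixels.
    return F.nll_loss(((1 - prob) ** gamma) * log_prob, target, weight=weight)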

Train on DSIFN-CD

Follow a procedure similar to the one described for LEVIR-CD. Use run_ChangeFormer_DSIFN.sh in the scripts folder to train on DSIFN-CD.

Evaluate on LEVIR

You can find the evaluation script eval_ChangeFormer_LEVIR.sh in the folder scripts. Run it from the command line with sh scripts/eval_ChangeFormer_LEVIR.sh.

The detailed script file eval_ChangeFormer_LEVIR.sh is as follows:

#!/usr/bin/env bash

gpus=0

data_name=LEVIR
net_G=ChangeFormerV6 #This is the best version
split=test
vis_root=/media/lidan/ssd2/ChangeFormer/vis
project_name=CD_ChangeFormerV6_LEVIR_b16_lr0.0001_adamw_train_test_200_linear_ce_multi_train_True_multi_infer_False_shuffle_AB_False_embed_dim_256
checkpoints_root=/media/lidan/ssd2/ChangeFormer/checkpoints
checkpoint_name=best_ckpt.pt
img_size=256
embed_dim=256 #Make sure to change the embedding dim (best and default = 256)

CUDA_VISIBLE_DEVICES=0 python eval_cd.py --split ${split} --net_G ${net_G} --embed_dim ${embed_dim} --img_size ${img_size} --vis_root ${vis_root} --checkpoints_root ${checkpoints_root} --checkpoint_name ${checkpoint_name} --gpu_ids ${gpus} --project_name ${project_name} --data_name ${data_name}
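
Evaluation for binary change detection typically reports precision, recall, F1, and IoU of the change class. The following sketch shows the standard formulas (the function name change_metrics is ours for illustration, not the repository's API):

# Standard binary change-detection metrics (illustrative sketch).
import numpy as np

def change_metrics(pred, gt, eps=1e-10):
    # pred, gt: binary arrays of the same shape, where 1 marks changed pixels.
    tp = np.logical_and(pred == 1, gt == 1).sum()
    fp = np.logical_and(pred == 1, gt == 0).sum()
    fn = np.logical_and(pred == 0, gt == 1).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)
    return precision, recall, f1, iou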

Evaluate on DSIFN

Follow the same evaluation procedure described for LEVIR-CD. You can find the evaluation script eval_ChangeFormer_DSIFN.sh in the folder scripts. Run it from the command line with sh scripts/eval_ChangeFormer_DSIFN.sh.

Dataset Preparation

Data structure

"""
Change detection data set with pixel-level binary labels;
├─A
├─B
├─label
└─list
"""

A: images of t1 phase;

B: images of t2 phase;

label: label maps;

list: contains train.txt, val.txt, and test.txt; each file records the image names (XXX.png) in the change detection dataset.
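
A loader for this layout can be as simple as the following sketch (hypothetical; the repository ships its own dataset classes, which may differ in details such as augmentation):

# Minimal dataset for the A/B/label/list layout above (illustrative sketch).
import os
from PIL import Image
from torch.utils.data import Dataset

class ChangeDetectionDataset(Dataset):
    def __init__(self, root, split="train", transform=None):
        # list/<split>.txt holds one image name (XXX.png) per line.
        with open(os.path.join(root, "list", split + ".txt")) as f:
            self.names = [line.strip() for line in f if line.strip()]
        self.root = root
        self.transform = transform

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        img_A = Image.open(os.path.join(self.root, "A", name)).convert("RGB")
        img_B = Image.open(os.path.join(self.root, "B", name)).convert("RGB")
        label = Image.open(os.path.join(self.root, "label", name)).convert("L")
        if self.transform is not None:
            img_A, img_B, label = self.transform(img_A, img_B, label)
        return img_A, img_B, label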

Data Download

LEVIR-CD: https://justchenhao.github.io/LEVIR/

WHU-CD: https://study.rsgis.whu.edu.cn/pages/download/building_dataset.html

DSIFN-CD: https://github.com/GeoZcx/A-deeply-supervised-image-fusion-network-for-change-detection-in-remote-sensing-images/tree/master/dataset

License

Code is released for non-commercial and research purposes only. For commercial purposes, please contact the authors.

Citation

If you use this code for your research, please cite our paper:

@Article{
}

References

We appreciate the work from the following repositories:

Comments
  • How could I use this?

    Hi, I'm very interested in using your code in my project. I was able to run the demo scripts. Now I want to use your code on my own data, but unfortunately I do not know how. I put my images in the folders A and B, but I think I need guidance for testing. Please tell me how I can find the differences between my images. (I am a newcomer, thank you.)

    help wanted 
    opened by p00uya 23
  • How to train with more than two class labels?

    Hi, I really appreciate your work. Now I want to train on ChangeSim, a dataset with 4 class labels, such as "missing", "new", "rotation", and "replaced object". I tried changing n_class, but it didn't work. What should I do to train on a dataset with more than 2 classes? Thanks.

    help wanted 
    opened by xgyyao 14
  • Question about training on LEVIR-CD

    Hi, I found that when I load the pretrained model (trained on the ADE 160k dataset), the keys of the checkpoint do not match those of self.net_G, so all of the pretrained model's keys are reported as missing. (Screenshots of the pretrained checkpoint keys and the self.net_G keys omitted.)

    question 
    opened by Youskrpig 10
  • About using a new dataset

    opened by SnycradJuice 8
  • DSIFN accuracy

    Hi wgcban, I notice that ChangeFormer's accuracy on the DSIFN-CD dataset is much higher than BIT [4] (IoU 52.97% for BIT vs. 76.48% for ChangeFormer). On the other dataset, LEVIR-CD, the difference is not as large as on DSIFN-CD. Could you please explain the main source of the large improvement on DSIFN-CD, e.g. training strategy, data augmentation, model structure? Thanks, Wesley

    question 
    opened by WesleyZhang1991 8
  • Some Questions about Code and Paper Details

    Hi! Thanks for your great work :clap: it has been really inspiring for building Siamese Transformer networks.

    However, I still have some questions about the code implementation and the details of the paper.

    1. In the code, Sequence Reduction is implemented through a Conv2d that cuts the feature map into non-overlapping patches before MHSA. https://github.com/wgcban/ChangeFormer/blob/9025e26417cf8f10f29a48f34a05758498216465/models/ChangeFormer.py#L316 The effect is similar to the reshape-then-linear-projection described in the paper, but this implementation reduces the sequence length by R^2 times (similar to the idea of cutting the image into 16x16 patches at the beginning of ViT). However, Eq. (2) in the paper shows that the reduction is R times. Is there an error here?

    2. In this code: https://github.com/wgcban/ChangeFormer/blob/9025e26417cf8f10f29a48f34a05758498216465/models/ChangeFormer.py#L507 two skip connections are actually implemented. In the pink Transformer Block diagram in the upper right corner of Fig. 1 of the paper, shouldn't two skip connections be drawn? (That is, add a skip bypass connecting the Sequence Reduction input and the MHSA output.)

    3. About the depth-wise Conv used as PE in the Transformer Block: why do you do this? How does it realize position encoding (can you explain it)? Why is it effective?

    4. patch_block1 is not used in the code. What is this module for? Why is its dimension inconsistent with the preceding block1 (dim=embed_dims[1] vs. dim=embed_dims[0])? https://github.com/wgcban/ChangeFormer/blob/9025e26417cf8f10f29a48f34a05758498216465/models/ChangeFormer.py#L52

    Looking forward to your early reply! :smiley:

    opened by zafirshi 6
  • about multi classes

    Hi, I tried to train on my personal data with 4 classes = {0, 1, 2, 3}.

    The pixels look like:

    0000000002220000 0000000002200000 0000000002000000 0010000000000000 0111110000000000 1111000000000000

    (grayscale)

    When I train on this data, the accuracy converges to 0.5 and never changes. Is there any problem that I missed?

    The only thing I changed is n_classes = 4.

    Thanks.

    opened by g7199 5
  • The code runs too long

    Hello, it's nice code. It takes me a lot of time to run the program using the LEVIR-CD-256 dataset you have processed. Is this normal? How long did it take you to train the model? Looking forward to your early reply!

    opened by Mengtao-ship 5
  • How to

    Hi, I really appreciate your work. I have a few questions about the model. First of all, is it possible to modify the size of the input images? Second, how can we retrain the model with our own data? I also noticed that the model detects changes that appear in image B; how can we generate a map of elements that disappear from A? Thanks. I remain open to your answers and suggestions.

    question 
    opened by choumie 5
  • A question about the difference module

    Hi, after reading your paper, I still can't understand your design of the Difference Module, which consists of Conv2D, ReLU, and BatchNorm2d. What is the reason for this design? In the equation: F_i^diff = BN(ReLU(Conv2D_3x3(Cat(F_i^pre, F_i^post)))); in the code: nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1), nn.ReLU(), nn.BatchNorm2d(out_channels), nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1), nn.ReLU(). Looking forward to your early reply!

    opened by Herxinsasa 4
  • F1 values of WHU-CD and DSIFN-CD

    Hello, I am trying to reproduce the BIT-CD model code, and the results look incorrect when using the WHU-CD and DSIFN-CD datasets: they come out much higher than your results. But they look normal when using the LEVIR-CD dataset. Do I need to change the pre-training?

    opened by yuwanting828 4
  • A question about visualization

    Hello again! I'm training the net on the dataset mentioned before, and a problem happens when the trainer saves visualizations: the vis_pred and vis_gt saved images do not look normal. Although I found your visualization method here https://github.com/wgcban/ChangeFormer/blob/9025e26417cf8f10f29a48f34a05758498216465/models/trainer.py#L220-L232 I still don't know how to adapt it to save multi-class gt and pred images.

    opened by SnycradJuice 0
  • multi class

    I want to perform multiclass classification. Even if I change the code to args.n_class = 9, only 2 classes are predicted. What should I do? (Sorry, I'm Korean.) [screenshot omitted] Is it not enough to modify only n_class? Do the labels also need to contain multiple classes?

    opened by taemin6697 20