Transformer-based SAR image despeckling

Overview

A PyTorch implementation of a transformer-based network (TransSARV2) for despeckling synthetic aperture radar (SAR) images.

Using the code:

The code is stable with Python 3.6.13 and CUDA >= 10.1.

  • Clone this repository:
git clone https://github.com/malshaV/sar_transformer
cd sar_transformer

To install all the dependencies using conda:

conda env create -f environment.yml
conda activate sar

If you prefer pip, install the following versions:

timm==0.3.2
mmcv-full==1.2.7
torch==1.7.1
torchvision==0.8.2
opencv-python==4.5.1.48
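
For example, as a single command (note that mmcv-full may require a prebuilt wheel matching your CUDA and PyTorch versions; see the MMCV installation docs):

pip install timm==0.3.2 mmcv-full==1.2.7 torch==1.7.1 torchvision==0.8.2 opencv-python==4.5.1.48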

Creating synthetic data:

This network was trained on synthetic SAR images generated from the BSD500 dataset. To create the synthetic data, use the create_synthetic_data.py script.
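
For reference, single-look SAR intensity speckle is commonly modeled as multiplicative, unit-mean gamma noise applied to a clean image. The sketch below illustrates that general model only; it is not the logic of create_synthetic_data.py, and the input path is hypothetical:

import numpy as np
import cv2

def add_speckle(clean, looks=1, rng=None):
    # Multiplicative speckle: unit-mean gamma noise whose shape parameter is
    # the number of looks (looks=1 gives exponentially distributed speckle).
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=clean.shape)
    return clean * noise

# Hypothetical BSD500 image path, for illustration only.
clean = cv2.imread("BSD500/images/train/100075.jpg", cv2.IMREAD_GRAYSCALE)
clean = clean.astype(np.float32) / 255.0
speckled = add_speckle(clean, looks=1)
cv2.imwrite("speckled.png", np.clip(speckled * 255.0, 0, 255).astype(np.uint8))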

To train the network:

python train.py --batch_size 1 --epoch 400 --modelname "TransSARV2" --learning_rate 0.0002 --train_dataset "path_to_training_data" --val_dataset "path_to_validation_data" --direc "path_to_save_results" --crop 256

To test the network:

python test.py --loadmodel "./pretrained_models/model.pth" --save_path "./test_images/" --model "TransSARV2"

Comments
  • Error in test.py

    When using test.py I get the following error:

    model = TransSARV2()
    model.load_state_dict(torch.load("./pretrained_models/model.pth"))
    model.eval()
    
    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
       1603         if len(error_msgs) > 0:
       1604             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    -> 1605                                self.__class__.__name__, "\n\t".join(error_msgs)))
       1606         return _IncompatibleKeys(missing_keys, unexpected_keys)
       1607 
    
    RuntimeError: Error(s) in loading state_dict for TransSARV2:
    	Missing key(s) in state_dict: "Tenc.patch_embed1.proj.weight", "Tenc.patch_embed1.proj.bias", "Tenc.patch_embed1.norm.weight", "Tenc.patch_embed1.norm.bias", ...
    	Unexpected key(s) in state_dict: "module.Tenc.patch_embed1.proj.weight", "module.Tenc.patch_embed1.proj.bias", "module.Tenc.patch_embed1.norm.weight", "module.Tenc.patch_embed1.norm.bias", ...
    

    So the checkpoint keys do not match the model as defined in the repository.
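
    The missing and unexpected keys differ only by a "module." prefix, which usually means the checkpoint was saved from a model wrapped in torch.nn.DataParallel. A common workaround (not verified against this repository's checkpoint) is to strip the prefix before loading:

    model = TransSARV2()
    state_dict = torch.load("./pretrained_models/model.pth", map_location="cpu")
    # Drop the "module." prefix that nn.DataParallel adds to every key.
    state_dict = {k[len("module."):] if k.startswith("module.") else k: v
                  for k, v in state_dict.items()}
    model.load_state_dict(state_dict)
    model.eval()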

    opened by azhanmohammed 2
  • About train and test on real SAR image

    Thank you for your nice work!

    I am a bit confused about the training setup for real SAR data. You mentioned that real SAR images do not have clean ground truth. In that case, did you train the model on synthetically generated speckled images and then test it directly on real SAR images? Thank you very much!

    opened by longbai1006 2