Code for the paper "Seamless Satellite-image Synthesis"

Overview

Seamless Satellite-image Synthesis

by Jialin Zhu and Tom Kelly.

Project site. The code for our models borrows heavily from the BicycleGAN and SPADE repositories. Any missing descriptions can be found in those original repositories.

Watch the video on YouTube.

Web UI system

Watch the video

  • The UI system is built with the Django web framework.
  • Clone the code and cd web_ui.
  • Install the required packages (mainly Django 3.1 and PyTorch 1.7.1).
    • These are easy to install, so we do not provide a requirements.txt file.
    • Packages other than Django and PyTorch can be installed one by one, following the error messages in the output.
  • Download the pre-trained weights and put them in web_ui/sss_ui/checkpoints.
  • Run python manage.py migrate and python manage.py makemigrations.
  • Run python runserver.py.
  • Access 127.0.0.1/index through a web browser.
  • Start playing with the UI system (a consolidated command sketch follows below).
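For reference, the steps above boil down to roughly the following shell commands. This is a sketch, not a tested script: it assumes the repository has already been cloned, that pip is used for installation, and that the pre-trained weights from the Mega link below have been downloaded into web_ui/sss_ui/checkpoints by hand.

    cd web_ui
    # install the two main dependencies; add further packages as import errors appear
    pip install django==3.1 torch==1.7.1
    # the downloaded weights must already sit in sss_ui/checkpoints
    python manage.py migrate
    python manage.py makemigrations
    python runserver.py
    # then open 127.0.0.1/index in a web browser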

Pre-trained weights are available here: Mega link

We provide some preset map data; if you want more extensive or different map data, you will need to supply it yourself. Some features have not yet been implemented. Please report bugs as GitHub issues.

SSS pipeline

The full SSS pipeline lets users generate a set of satellite images from map data at three different scale levels.

  • Clone the code and cd SPADE.
  • Install the required packages (mainly PyTorch 1.7.1).
  • Run bash scit_m.sh [level_1_dataset_dir] [raw_data_dir] [results_output_dir] (see the example below).
  • The generated satellite images are written to the [results_output_dir] folder.
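A concrete invocation might look like the following; the three directory names are hypothetical placeholders for your own level-1 dataset, raw map data, and output location.

    cd SPADE
    bash scit_m.sh ./datasets/level_1 ./datasets/raw_maps ./results
    # the generated satellite images end up under ./results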

We provide some preset map data; if you want more extensive or different map data, you will need to supply it yourself.

Training

You can also re-train the whole pipeline, or train it with your own data. For copyright reasons, we do not provide download links for the data we use, but it is easy to obtain, especially for academic institutions such as universities. Our training data comes from Digimap: we use the OS MasterMap® Topography Layer, rendered to map images with GDAL and GeoPandas, and satellite images from the Aerial collection via Getmapping.
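The rendering scripts are not described here, but as an illustration of the kind of preprocessing involved, a single 256 x 256 label tile can be rasterized from a MasterMap vector extract with GDAL's gdal_rasterize. This is a hypothetical one-liner, not the exact preprocessing used for the paper; the layer name, attribute, and extent are placeholders that depend on how you export the data from Digimap.

    # burn an integer class attribute into a 256x256 label tile (names and extent are placeholders)
    gdal_rasterize -a class_code -l TopographicArea \
        -te 430000 433000 430256 433256 -ts 256 256 -ot Byte \
        mastermap_extract.gml label_tile.tif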

To train map2sat for level 1:

  • Clone the code and cd SPADE.
  • Run python train.py --name [z1] --dataset_mode ins --label_dir [label_dir] --image_dir [image_dir] --instance_dir [instance_dir] --label_nc 13 --load_size 256 --crop_size 256 --niter_decay 20 --use_vae --ins_edge --gpu_ids 0,1,2,3 --batchSize 16.
  • We recommend using a larger batch size so that the encoder can produce results with greater style variation. A filled-in example follows below.
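Filled in with hypothetical paths and an experiment name, the level-1 training command looks like this; the three directories must point at your own rendered label, image, and instance maps.

    cd SPADE
    python train.py --name z1 --dataset_mode ins \
        --label_dir ./datasets/z1/labels --image_dir ./datasets/z1/images \
        --instance_dir ./datasets/z1/instances \
        --label_nc 13 --load_size 256 --crop_size 256 --niter_decay 20 \
        --use_vae --ins_edge --gpu_ids 0,1,2,3 --batchSize 16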

To train map2sat for level z (z > 1):

  • Clone the code and cd SPADE.
  • Run python trainCG.py --name [z2_cg] --dataset_mode insgb --label_dir [label_dir] --image_dir [image_dir] --instance_dir [instance_dir] --label_nc 13 --load_size 256 --crop_size 256 --niter_decay 20 --ins_edge --cg --netG spadebranchn --cg_size 256 --gbk_size 8. A filled-in example follows below.
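Again with hypothetical paths and an experiment name, a level-2 run might look like the following.

    cd SPADE
    python trainCG.py --name z2_cg --dataset_mode insgb \
        --label_dir ./datasets/z2/labels --image_dir ./datasets/z2/images \
        --instance_dir ./datasets/z2/instances \
        --label_nc 13 --load_size 256 --crop_size 256 --niter_decay 20 \
        --ins_edge --cg --netG spadebranchn --cg_size 256 --gbk_size 8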

To train seam2cont:

  • Clone the code and cd BicycleGAN.
  • Run python train.py --dataroot [dataset_dir] --name [z1sn] --model sn --direction AtoB --load_size 256 --save_epoch_freq 201 --lambda_ml 0 --input_nc 8 --dataset_mode sn --seams_map --batch_size 1 --ndf 32 --conD --forced_mask. A filled-in example follows below.
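With a hypothetical dataset directory and experiment name, the seam2cont training command looks like this.

    cd BicycleGAN
    python train.py --dataroot ./datasets/z1_seams --name z1sn --model sn \
        --direction AtoB --load_size 256 --save_epoch_freq 201 --lambda_ml 0 \
        --input_nc 8 --dataset_mode sn --seams_map --batch_size 1 --ndf 32 \
        --conD --forced_mask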

Citation

@inproceedings{zhu2021seamless,
  title={Seamless Satellite-image Synthesis},
  author={Zhu, Jialin and Kelly, Tom},
  booktitle={Computer Graphics Forum},
  year={2021},
  organization={Wiley}
}

Acknowledgements

We would like to thank Nvidia Corporation for hardware and Ordnance Survey Mapping for map data which made this project possible. This work was undertaken on ARC4, part of the High Performance Computing facilities at the University of Leeds, UK. This work made use of the facilities of the N8 Centre of Excellence in Computationally Intensive Research (N8 CIR) provided and funded by the N8 research partnership and EPSRC (Grant No. EP/T022167/1).
