CoANet: Connectivity Attention Network for Road Extraction From Satellite Imagery

This paper (CoANet) has been published in IEEE TIP 2021.

This code is licensed for non-commercial research purposes only.

Introduction

Extracting roads from satellite imagery is a promising approach to updating the dynamic changes of road networks efficiently and in a timely manner. However, the task is challenging due to occlusions caused by other objects and the complex traffic environment, and pixel-based methods often generate fragmented roads and fail to predict topological correctness. In this paper, motivated by the shapes and connections of roads in a graph network, we propose a connectivity attention network (CoANet) to jointly learn the segmentation and pair-wise dependencies. Since strip convolutions are better aligned with the shape of roads, which are long-span, narrow, and continuously distributed, we develop a strip convolution module (SCM) that leverages four strip convolutions to capture long-range context information from different directions and avoid interference from irrelevant regions. Besides, considering the occlusions in road regions caused by buildings and trees, a connectivity attention module (CoA) is proposed to explore the relationship between neighboring pixels. The CoA module incorporates graphical information and enables the connectivity of roads to be better preserved. Extensive experiments on popular benchmarks (the SpaceNet and DeepGlobe datasets) demonstrate that our proposed CoANet establishes new state-of-the-art results.
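For intuition about the SCM, a strip convolution uses a 1 x k or k x 1 kernel so the receptive field stretches along one direction. Below is a minimal PyTorch sketch of the horizontal/vertical pair with names of our own choosing; it is not the repository's SCM implementation, which additionally uses two diagonal strip convolutions:

import torch.nn as nn

class StripConvBlock(nn.Module):
    """Sketch: 1 x k and k x 1 kernels capture long-range context along
    rows and columns while avoiding interference from off-axis regions.
    CoANet's SCM also includes two diagonal strips, omitted here."""
    def __init__(self, in_ch, out_ch, k=9):
        super().__init__()
        self.horizontal = nn.Conv2d(in_ch, out_ch, (1, k), padding=(0, k // 2))
        self.vertical = nn.Conv2d(in_ch, out_ch, (k, 1), padding=(k // 2, 0))
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Sum the directional responses, then normalize and activate.
        return self.relu(self.bn(self.horizontal(x) + self.vertical(x)))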


Citations

If you are using the code/model provided here in a publication, please consider citing:

@article{mei2021coanet,
title={CoANet: Connectivity Attention Network for Road Extraction From Satellite Imagery},
author={Mei, Jie and Li, Rou-Jing and Gao, Wang and Cheng, Ming-Ming},
journal={IEEE Transactions on Image Processing},
volume={30},
pages={8540--8552},
year={2021},
publisher={IEEE}
}

Requirements

The code is built with the following dependencies:

  • Python 3.6 or higher
  • CUDA 10.0 or higher
  • PyTorch 1.2 or higher
  • tqdm
  • matplotlib
  • pillow
  • tensorboardX
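
The pure-Python dependencies can be installed with pip, assuming a CUDA-enabled PyTorch build is installed separately following the official instructions:

pip install tqdm matplotlib pillow tensorboardX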

Data Preparation

Preprocess SpaceNet Dataset

  • Convert SpaceNet 11-bit images to 8-bit images (a conversion sketch follows this list).
  • Create road masks (3 m), country-wise.
  • Move all data to a single folder.
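
Since the conversion script itself is not shown here, below is a minimal sketch of one common percentile-based 11-bit to 8-bit conversion, assuming rasterio is available; the authors' exact rescaling may differ:

import numpy as np
import rasterio

def convert_11bit_to_8bit(src_path, dst_path, low=2, high=98):
    # Rescale an 11-bit SpaceNet GeoTIFF to 8-bit via percentile clipping.
    with rasterio.open(src_path) as src:
        img = src.read().astype(np.float32)  # (bands, H, W)
        profile = src.profile
    out = np.zeros(img.shape, dtype=np.uint8)
    for b in range(img.shape[0]):
        lo, hi = np.percentile(img[b], (low, high))
        band = np.clip(img[b], lo, hi)
        out[b] = ((band - lo) / max(hi - lo, 1e-6) * 255.0).astype(np.uint8)
    profile.update(dtype=rasterio.uint8)
    with rasterio.open(dst_path, 'w', **profile) as dst:
        dst.write(out)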

SpaceNet dataset tree structure after preprocessing.

spacenet
|
└───gt
│   └───AOI_2_Vegas_img1.tif
└───images
│   └───RGB-PanSharpen_AOI_2_Vegas_img1.tif

Download DeepGlobe Road dataset in the following tree structure.

deepglobe
│
└───train
│   └───gt
│   └───images

Create Crops and Connectivity Cubes

python create_crops.py --base_dir ./data/spacenet/ --crop_size 650 --im_suffix .png --gt_suffix .png
python create_crops.py --base_dir ./data/deepglobe/train --crop_size 512 --im_suffix .png --gt_suffix .png
python create_connection.py --base_dir ./data/spacenet/crops 
python create_connection.py --base_dir ./data/deepglobe/train/crops 

The spacenet tree after creating crops and connectivity cubes:

spacenet
|   train.txt
|   val.txt
|   train_crops.txt   # created by create_crops.py
|   val_crops.txt     # created by create_crops.py
|
└───gt
│   
└───images
│   
└───crops       
│   └───connect_8_d1	# created by create_connection.py
│   └───connect_8_d3	# created by create_connection.py
│   └───gt		# created by create_crops.py
│   └───images	# created by create_crops.py

Testing

The pretrained model of CoANet can be downloaded:

Run the following scripts to evaluate the model.

  • SpaceNet
python test.py --ckpt='./run/spacenet/CoANet-resnet/CoANet-spacenet.pth.tar' --out_path='./run/spacenet/CoANet-resnet' --dataset='spacenet' --base_size=1280 --crop_size=1280 
  • DeepGlobe
python test.py --ckpt='./run/DeepGlobe/CoANet-resnet/CoANet-DeepGlobe.pth.tar' --out_path='./run/DeepGlobe/CoANet-resnet' --dataset='DeepGlobe' --base_size=1024 --crop_size=1024

Evaluate APLS
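
APLS (Average Path Length Similarity) measures topological quality by comparing shortest-path lengths between corresponding node pairs in the ground-truth and predicted road graphs; the SpaceNet challenge's apls tools are the commonly used reference implementation.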

Training

Follow the steps below to train your model:

  1. Configure your dataset path in [mypath.py] (a minimal sketch appears after this list).
  2. Review the input arguments (see the full list via python train.py --help):
usage: train.py [-h] [--backbone resnet]
                [--out-stride OUT_STRIDE] [--dataset {spacenet,DeepGlobe}]
                [--workers N] [--base-size BASE_SIZE]
                [--crop-size CROP_SIZE] [--sync-bn SYNC_BN]
                [--freeze-bn FREEZE_BN] [--loss-type {ce,con_ce,focal}] [--epochs N]
                [--start_epoch N] [--batch-size N] [--test-batch-size N]
                [--use-balanced-weights] [--lr LR]
                [--lr-scheduler {poly,step,cos}] [--momentum M]
                [--weight-decay M] [--nesterov] [--no-cuda]
                [--gpu-ids GPU_IDS] [--seed S] [--resume RESUME]
                [--checkname CHECKNAME] [--ft] [--eval-interval EVAL_INTERVAL]
                [--no-val]
    
  3. To train CoANet using the SpaceNet dataset and ResNet as the backbone:
python train.py --dataset=spacenet
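To train on the DeepGlobe dataset instead:

python train.py --dataset=DeepGlobe

As referenced in step 1, a minimal mypath.py might look like the sketch below; the actual file may use different keys, and the paths assume the directory layout shown in Data Preparation:

class Path(object):
    @staticmethod
    def db_root_dir(dataset):
        # Hypothetical paths; adjust to where the prepared data actually lives.
        if dataset == 'spacenet':
            return './data/spacenet/'
        elif dataset == 'DeepGlobe':
            return './data/deepglobe/train/'
        raise NotImplementedError('Unknown dataset: ' + dataset)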

Contact

For any questions, please contact me via e-mail: [email protected].

Acknowledgment

This code is based on the pytorch-deeplab-xception codebase.

Comments
  • No module named 'prefetch_generator'

    When running test.py, the following error occurs:

    Traceback (most recent call last):
      File "test.py", line 6, in <module>
        from dataloaders import custom_transforms as tr
      File "/data0/wangxue/CoANet-main/dataloaders/__init__.py", line 3, in <module>
        from prefetch_generator import BackgroundGenerator
    ModuleNotFoundError: No module named 'prefetch_generator'

    opened by zuzi2015 2
  • This work is hard to repeat

    I have followed this work for a long time, and the results claimed in the paper cannot be reproduced! I have several questions about the code and paper:

    1. As we know, before the data are fed into the model, we need to apply some transforms. But why do you pad the ignore region with 0 and then use it in the loss calculation? Why not mask this region out?
    2. About the connection branch: in your preprocessing code, create_connection.py, we can see that it is created from the ground truth using some pixel interval. On page 7, the paper claims that "Besides, we devise an upper bound to our proposed method CoANet by using the ground truth of connectivity branch during inference". That doesn't make sense: CoANet-UB scores so high only because it gets the ground truth!
    3. Continuing with the connection branch, the paper claims that "The upper bound results indicate that there is still large room for the connectivity branch to improve, and one possibility is to use a larger pixel interval in the future work." (page 7, right). But in the ablation study (Table V, page 10) we can see that a larger pixel interval does not help. This is inconsistent with your discussion.

    Could you answer my questions?

    opened by Dawn-bin 2
  • getting error while loading pretrained model

    I'm unable to load the pretrained model file that you have provided. Can you please check whether the file was uploaded correctly? I'm getting the following error while loading the model:

    [enforce fail at inline_container.cc:144] . PytorchStreamReader failed reading zip archive: failed finding central directory

    I look forward to your response. Thanks.

    opened by Alizah-Masood 2
  • Question about Figure 4

    In Figure 4 of the paper, it looks to me like the diagram has left out some items. I've drawn what I think is missing in red. Can you please correct me if this is wrong?

    https://imgur.com/a/leR5lpT

    opened by ajtao 1
  • Wanna know more experiment details used in your paper

    Hi, I've been doing research on road extraction from remote sensing images recently. I sincerely appreciate your work and was surprised by the remarkable performance improvement over previous architectures. I would like to know more details about the experiments in your paper:

    1. Are the experiment results obtained from the partitioned and resized validation dataset, or from the whole dataset at its original size?
    2. When you calculated the training and inference time of those models, did you keep the image batch size the same?
    3. Did you use the same data augmentation and post-processing published in the other models' code when testing their performance?
    4. What kind of skeletonize function is used when you compute the APLS metric? Could you publish the code or name the function you used?

    I sincerely hope you can resolve my confusion!

    opened by WangZX-0630 1