PyTorch Remote Sensing (torchrs)
(WIP) PyTorch implementation of popular datasets and models for remote sensing tasks (Change Detection, Image Super Resolution, Land Cover Classification/Segmentation, Image-to-Image Translation, etc.) across various optical (Sentinel-2, Landsat, etc.) and Synthetic Aperture Radar (SAR) (Sentinel-1) sensors.
Installation
# pypi
pip install torch-rs
# latest
pip install git+https://github.com/isaaccorley/torchrs
Table of Contents
Datasets
- PROBA-V Super Resolution
- ETCI 2021 Flood Detection
- Onera Satellite Change Detection (OSCD)
- Remote Sensing Visual Question Answering (RSVQA) Low Resolution (LR)
- Remote Sensing Image Captioning Dataset (RSICD)
- Remote Sensing Image Scene Classification (RESISC45)
- EuroSAT
PROBA-V Super Resolution
The PROBA-V Super Resolution Challenge dataset is a Multi-image Super Resolution (MISR) dataset of images taken by the ESA PROBA-Vegetation satellite. The dataset contains sets of unregistered 300m low resolution (LR) images which can be used to generate single 100m high resolution (HR) images for both the Near Infrared (NIR) and Red bands. In addition, Quality Masks (QM) for each LR image and Status Masks (SM) for each HR image are provided. The PROBA-V satellite carries sensors which capture imagery at 100m and 300m spatial resolutions with 5 and 1 day revisit rates, respectively. Generating high resolution estimates from the more frequent low resolution acquisitions would effectively increase the rate at which HR imagery is available for vegetation monitoring.
The dataset can be downloaded (0.83GB) using scripts/download_probav.sh
and instantiated below:
from torchrs.transforms import Compose, ToTensor
from torchrs.datasets import PROBAV
transform = Compose([ToTensor()])
dataset = PROBAV(
root="path/to/dataset/",
split="train", # or 'test'
band="RED", # or 'NIR'
lr_transform=transform,
hr_transform=transform
)
x = dataset[0]
"""
x: dict(
lr: low res images (t, 1, 128, 128)
qm: quality masks (t, 1, 128, 128)
hr: high res image (1, 384, 384)
sm: status mask (1, 384, 384)
)
t varies by set of images (minimum of 9)
"""
ETCI 2021 Flood Detection
The ETCI 2021 Dataset is a Flood Detection segmentation dataset of SAR images taken by the ESA Sentinel-1 satellite. The dataset contains pairs of VV and VH polarization images processed by the Hybrid Pluggable Processing Pipeline (hyp3) along with corresponding binary flood and water body ground truth masks.
The dataset can be downloaded (5.6GB) using scripts/download_etci2021.sh
and instantiated below:
from torchrs.transforms import Compose, ToTensor
from torchrs.datasets import ETCI2021
transform = Compose([ToTensor()])
dataset = ETCI2021(
root="path/to/dataset/",
split="train", # or 'val', 'test'
transform=transform
)
x = dataset[0]
"""
x: dict(
vv: (3, 256, 256)
vh: (3, 256, 256)
flood_mask: (1, 256, 256)
water_mask: (1, 256, 256)
)
"""
Onera Satellite Change Detection (OSCD)
The Onera Satellite Change Detection (OSCD) dataset, proposed in "Urban Change Detection for Multispectral Earth Observation Using Convolutional Neural Networks", Daudt et al., is a Change Detection dataset of 13 band multispectral (MS) images taken by the ESA Sentinel-2 satellite. The dataset contains 24 registered image pairs acquired across multiple continents between 2015 and 2018, along with binary change masks.
The dataset can be downloaded (0.73GB) using scripts/download_oscd.sh
and instantiated below:
from torchrs.transforms import Compose, ToTensor
from torchrs.datasets import OSCD
transform = Compose([ToTensor(permute_dims=False)])
dataset = OSCD(
root="path/to/dataset/",
split="train", # or 'test'
transform=transform,
)
x = dataset[0]
"""
x: dict(
x: (2, 13, h, w)
mask: (1, h, w)
)
"""
Remote Sensing Visual Question Answering (RSVQA) Low Resolution (LR)
The RSVQA LR dataset, proposed in "RSVQA: Visual Question Answering for Remote Sensing Data", Lobry et al., is a visual question answering (VQA) dataset of RGB images taken by the ESA Sentinel-2 satellite. Each image is annotated with a set of questions and their corresponding answers. Among other applications, this dataset can be used to train VQA models to perform scene understanding of medium resolution remote sensing imagery.
The dataset can be downloaded (0.2GB) using scripts/download_rsvqa_lr.sh
and instantiated below:
import torchvision.transforms as T
from torchrs.datasets import RSVQALR
transform = T.Compose([T.ToTensor()])
dataset = RSVQALR(
root="path/to/dataset/",
split="train", # or 'val', 'test'
transform=transform
)
x = dataset[0]
"""
x: dict(
x: (3, 256, 256)
questions: List[str]
answers: List[str]
types: List[str]
)
"""
Remote Sensing Image Captioning Dataset (RSICD)
The RSICD dataset, proposed in "Exploring Models and Data for Remote Sensing Image Caption Generation", Lu et al., is an image captioning dataset with 5 captions per image for 10,921 RGB images extracted using Google Earth, Baidu Map, MapABC and Tianditu. While it is one of the larger remote sensing image captioning datasets, its captions use very repetitive language with little detail, and many captions are duplicated.
The dataset can be downloaded (0.57GB) using scripts/download_rsicd.sh
and instantiated below:
import torchvision.transforms as T
from torchrs.datasets import RSICD
transform = T.Compose([T.ToTensor()])
dataset = RSICD(
root="path/to/dataset/",
split="train", # or 'val', 'test'
transform=transform
)
x = dataset[0]
"""
x: dict(
x: (3, 224, 224)
captions: List[str]
)
"""
Remote Sensing Image Scene Classification (RESISC45)
The RESISC45 dataset, proposed in "Remote Sensing Image Scene Classification: Benchmark and State of the Art", Cheng et al., is an image classification dataset of 31,500 RGB images extracted from Google Earth. The dataset contains 45 scene classes with 700 images per class, drawn from over 100 countries, and was selected to maximize variability in image conditions (spatial resolution, occlusion, weather, illumination, etc.).
The dataset can be downloaded (0.47GB) using scripts/download_resisc45.sh
and instantiated below:
import torchvision.transforms as T
from torchrs.datasets import RESISC45
transform = T.Compose([T.ToTensor()])
dataset = RESISC45(
root="path/to/dataset/",
transform=transform
)
x, y = dataset[0]
"""
x: (3, 256, 256)
y: int
"""
dataset.classes
"""
['airplane', 'airport', 'baseball_diamond', 'basketball_court', 'beach', 'bridge', 'chaparral',
'church', 'circular_farmland', 'cloud', 'commercial_area', 'dense_residential', 'desert', 'forest',
'freeway', 'golf_course', 'ground_track_field', 'harbor', 'industrial_area', 'intersection', 'island',
'lake', 'meadow', 'medium_residential', 'mobile_home_park', 'mountain', 'overpass', 'palace', 'parking_lot',
'railway', 'railway_station', 'rectangular_farmland', 'river', 'roundabout', 'runway', 'sea_ice', 'ship',
'snowberg', 'sparse_residential', 'stadium', 'storage_tank', 'tennis_court', 'terrace', 'thermal_power_station', 'wetland']
"""
EuroSAT
The EuroSAT dataset, proposed in "EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification", Helber et al., is a land cover classification dataset of 27,000 images taken by the ESA Sentinel-2 satellite. The dataset contains 10 land cover classes with 2-3k images per class from 34 European countries. The dataset is available in RGB-only form or with all Multispectral (MS) Sentinel-2 bands. This dataset is relatively easy, with ~98.6% accuracy achievable using a ResNet-50.
The dataset can be downloaded (0.13GB and 2.8GB, respectively) using scripts/download_eurosat_rgb.sh
or scripts/download_eurosat_ms.sh
and instantiated below:
import torchvision.transforms as T
from torchrs.transforms import ToTensor
from torchrs.datasets import EuroSATRGB, EuroSATMS
transform = T.Compose([T.ToTensor()])
dataset = EuroSATRGB(
root="path/to/dataset/",
transform=transform
)
x, y = dataset[0]
"""
x: (3, 64, 64)
y: int
"""
transform = T.Compose([ToTensor()])
dataset = EuroSATMS(
root="path/to/dataset/",
transform=transform
)
x, y = dataset[0]
"""
x: (13, 64, 64)
y: int
"""
dataset.classes
"""
['AnnualCrop', 'Forest', 'HerbaceousVegetation', 'Highway', 'Industrial',
'Pasture', 'PermanentCrop', 'Residential', 'River', 'SeaLake']
"""
Models
RAMS
Residual Attention Multi-image Super-resolution Network (RAMS) from "Multi-Image Super Resolution of Remotely Sensed Images Using Residual Attention Deep Neural Networks", Salvetti et al. (2021)
RAMS is currently one of the top performers on the PROBA-V Super Resolution Challenge. This Multi-image Super Resolution (MISR) architecture utilizes attention-based methods to extract spatial and spatiotemporal features from a set of low resolution images and form a single high resolution image. Note that the attention methods are effectively Squeeze-and-Excitation blocks from "Squeeze-and-Excitation Networks", Hu et al.
import torch
from torchrs.models import RAMS
# increase resolution by factor of 3 (e.g. 128x128 -> 384x384)
model = RAMS(
scale_factor=3,
t=9,
c=1,
num_feature_attn_blocks=12
)
# Input should be of shape (bs, t, c, h, w), where t is the number
# of low resolution input images and c is the number of channels/bands
lr = torch.randn(1, 9, 1, 128, 128)
sr = model(lr) # (1, 1, 384, 384)
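Since RAMS expects (bs, t, c, h, w) inputs, it pairs naturally with the PROBAV dataset above. The sketch below feeds a single sample through the model, assuming only the first 9 LR frames are used and the tensor is cast to float; batching over samples with varying t is left to the reader.
import torch
from torchrs.datasets import PROBAV
from torchrs.models import RAMS
from torchrs.transforms import Compose, ToTensor

transform = Compose([ToTensor()])
dataset = PROBAV(root="path/to/dataset/", split="train", band="RED",
                 lr_transform=transform, hr_transform=transform)
model = RAMS(scale_factor=3, t=9, c=1, num_feature_attn_blocks=12)

x = dataset[0]
lr = x["lr"][:9].unsqueeze(0).float()  # (1, 9, 1, 128, 128), first 9 LR frames as a batch of 1
sr = model(lr)                         # (1, 1, 384, 384)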
Tests
$ pytest -ra