SatelliteSfM - A library for solving the satellite structure from motion problem

Satellite Structure from Motion

Maintained by Kai Zhang.

Overview

  • This is a library dedicated to solving the satellite structure from motion problem.
  • It's a wrapper of the VisSatSatelliteStereo repo for easier use.
  • The outputs are png images and OpenCV-compatible pinhole cameras readily deployable to multi-view stereo pipelines targeting ground-level images.

Installation

We assume you are on a Linux machine with at least one GPU and have conda installed. To install this library, simply run:

. ./env.sh

Inputs

We assume the inputs are a set of .tif images encoding 3-channel uint8 RGB colors, together with metadata such as RPC cameras. This data format aligns with the public satellite benchmark TRACK 3: MULTI-VIEW SEMANTIC STEREO. Download an example dataset from this Google Drive; the folder structure looks like below:

- examples/inputs
    - images/
        - *.tif
        - *.tif
        - *.tif
        - ...
    - latlonalt_bbx.json

Here, latlonalt_bbx.json specifies the bounding box for the site of interest in the global (latitude, longitude, altitude) coordinate system.

If you are not sure what a reasonable altitude range is, you can put arbitrary numbers in the JSON file, but you then have to enable the --use_srtm4 option below.
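For reference, below is a minimal sketch of creating such a bounding-box file from Python. The key names (lat_min, lat_max, lon_min, lon_max, alt_min, alt_max) and the numeric values are assumptions for illustration only; check latlonalt_bbx.json in the example data for the exact schema this repo expects.

import json

# Hypothetical site bounding box; key names and values are assumptions --
# inspect the example data's latlonalt_bbx.json for the exact schema.
bbx = {
    "lat_min": 32.74, "lat_max": 32.77,      # degrees latitude
    "lon_min": -117.18, "lon_max": -117.15,  # degrees longitude
    "alt_min": -30.0, "alt_max": 120.0,      # meters; rough guesses are fine with --use_srtm4
}

with open("examples/inputs/latlonalt_bbx.json", "w") as f:
    json.dump(bbx, f, indent=2)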

Run Structure from Motion

python satellite_sfm.py --input_folder examples/inputs --output_folder examples/outputs --run_sfm [--use_srtm4] [--enable_debug]

The --enable_debug option outputs some visualizations that are helpful for debugging the structure-from-motion quality. The square brackets in the command above mark optional flags; drop the brackets when you actually pass --use_srtm4 or --enable_debug.
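If you have several sites to process, a small driver like the sketch below can loop over input folders and invoke the command above; the sites/ layout is hypothetical, and only the flags documented in this README are used.

import subprocess
from pathlib import Path

# Hypothetical layout: one subfolder per site under ./sites, each following the
# input format described above (an images/ folder plus latlonalt_bbx.json).
for site in sorted(Path("sites").iterdir()):
    if not site.is_dir():
        continue
    cmd = [
        "python", "satellite_sfm.py",
        "--input_folder", str(site),
        "--output_folder", str(Path("outputs") / site.name),
        "--run_sfm",
        "--use_srtm4",  # note: no square brackets when actually passing the flag
    ]
    subprocess.run(cmd, check=True)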

Outputs

  • {output_folder}/images/ folder contains the png images
  • {output_folder}/cameras_adjusted/ folder contains the bundle-adjusted pinhole cameras; each camera is represented by a pair of 4x4 matrices, K and W2C, that are OpenCV-compatible (see the loading sketch after this list).
  • {output_folder}/enu_bbx_adjusted.json contains the scene bounding box in the local ENU Euclidean coordinate system.
  • {output_folder}/enu_observer_latlonalt.json contains the observer coordinate that defines the local ENU frame; this observer coordinate is only needed for converting between local ENU and global latitude-longitude-altitude coordinates (see the conversion sketch further below).
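As a quick sanity check on the exported cameras, the sketch below shows how one might load a camera and project a local-ENU 3D point into an image. The per-camera file format, the JSON key names ("K", "W2C"), and the file name are assumptions for illustration; inspect the files in {output_folder}/cameras_adjusted/ for the actual layout.

import json
import numpy as np

def load_camera(path):
    # Assumed layout: one JSON file per image holding 4x4 "K" and "W2C"
    # matrices, as described in the bullet above. Adjust to the actual files.
    with open(path) as f:
        cam = json.load(f)
    return np.array(cam["K"]).reshape(4, 4), np.array(cam["W2C"]).reshape(4, 4)

def project(K, W2C, xyz_enu):
    # Project a 3D point (local ENU, meters) to pixel coordinates, OpenCV convention.
    p = np.append(xyz_enu, 1.0)   # homogeneous world point
    cam_pt = W2C @ p              # world -> camera
    pix = K @ cam_pt              # camera -> image plane
    return pix[:2] / pix[2]       # perspective divide

K, W2C = load_camera("examples/outputs/cameras_adjusted/some_image.json")  # hypothetical file name
print(project(K, W2C, np.array([0.0, 0.0, 50.0])))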

If you turn on the --enable_debug option, you might want to dig into the folder {output_folder}/debug_sfm for visuals, etc.
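To convert between the local ENU frame and global latitude-longitude-altitude, one option outside this repo is the pymap3d package; the sketch below assumes enu_observer_latlonalt.json stores the observer's latitude, longitude, and altitude (the key names are assumptions).

import json
import pymap3d  # third-party: pip install pymap3d

# Key names are assumptions; check enu_observer_latlonalt.json for the real ones.
with open("examples/outputs/enu_observer_latlonalt.json") as f:
    obs = json.load(f)
lat0, lon0, alt0 = obs["latitude"], obs["longitude"], obs["altitude"]

# Convert a point from the local ENU frame (meters) back to geodetic coordinates.
east, north, up = 120.0, -45.0, 30.0
lat, lon, alt = pymap3d.enu2geodetic(east, north, up, lat0, lon0, alt0)
print(lat, lon, alt)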

Citations

@inproceedings{VisSat-2019,
  title={Leveraging Vision Reconstruction Pipelines for Satellite Imagery},
  author={Zhang, Kai and Sun, Jin and Snavely, Noah},
  booktitle={IEEE International Conference on Computer Vision Workshops},
  year={2019}
}

Example results

Input images

Sparse point cloud output by SfM

Homography-warp one view, then average with another over a plane sequence

Plane-sweep high-res video

Inspect epipolar geometry

python inspect_epipolar_geometry.py

inspect epipolar
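For intuition about what this check involves, the fundamental matrix between two pinhole views can be derived directly from their K and W2C matrices; below is a self-contained sketch with synthetic cameras (not the repo's inspect_epipolar_geometry.py) using NumPy and OpenCV's computeCorrespondEpilines.

import numpy as np
import cv2

def fundamental_from_cameras(K1, W2C1, K2, W2C2):
    # F such that x2^T F x1 = 0, for two OpenCV-style pinhole cameras.
    K1_, K2_ = K1[:3, :3], K2[:3, :3]        # 3x3 intrinsics from the 4x4 form
    T = W2C2 @ np.linalg.inv(W2C1)           # camera-1 frame -> camera-2 frame
    R, t = T[:3, :3], T[:3, 3]
    tx = np.array([[0, -t[2], t[1]],
                   [t[2], 0, -t[0]],
                   [-t[1], t[0], 0]])        # skew-symmetric matrix of t
    E = tx @ R                               # essential matrix
    return np.linalg.inv(K2_).T @ E @ np.linalg.inv(K1_)

# Two synthetic cameras purely for illustration; in practice load them from
# {output_folder}/cameras_adjusted/ instead.
def make_K(fx, fy, cx, cy):
    K = np.eye(4)
    K[0, 0], K[1, 1], K[0, 2], K[1, 2] = fx, fy, cx, cy
    return K

K1, K2 = make_K(2000.0, 2000.0, 640.0, 480.0), make_K(2100.0, 2100.0, 640.0, 480.0)
W2C1, W2C2 = np.eye(4), np.eye(4)
W2C2[:3, 3] = [1.0, 0.0, 0.0]                # second camera shifted 1 unit along x

F = fundamental_from_cameras(K1, W2C1, K2, W2C2)
pts1 = np.array([[100.0, 200.0], [640.0, 480.0]], dtype=np.float32).reshape(-1, 1, 2)
lines2 = cv2.computeCorrespondEpilines(pts1, 1, F).reshape(-1, 3)  # rows: a*x + b*y + c = 0 in image 2
print(lines2)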

Get zero-skew intrinsics matrix

python skew_correct.py --input_folder ./examples/outputs ./examples/outputs_zeroskew

skew correct
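Conceptually, removing skew amounts to replacing the intrinsics K with a zero-skew K' and warping each image by the homography H = K' K^-1. The sketch below illustrates the idea on a synthetic image; it is not the repo's skew_correct.py, which may also update the exported camera files and handle image bounds differently.

import numpy as np
import cv2

def remove_skew(K3, image):
    # K3 is a 3x3 intrinsics matrix whose skew term sits at K3[0, 1].
    K_new = K3.copy()
    K_new[0, 1] = 0.0                      # zero out the skew entry
    H = K_new @ np.linalg.inv(K3)          # pixel-space homography
    h, w = image.shape[:2]
    warped = cv2.warpPerspective(image, H, (w, h))
    return K_new, warped

# Synthetic example: a slightly skewed intrinsics matrix and a random image.
K3 = np.array([[2000.0,    5.0, 640.0],
               [   0.0, 2000.0, 480.0],
               [   0.0,    0.0,   1.0]])
img = np.random.randint(0, 255, (960, 1280, 3), dtype=np.uint8)
K_zero_skew, img_corrected = remove_skew(K3, img)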

More handy scripts are coming

Stay tuned :-)

Comments
  • How to enable --use_srtm4 option in command?

    @Kai-46 How can I enable the --use_srtm4 option in the command? When running the command below I get an error.

    Command written:

    python3 satellite_sfm.py --input_folder /Users/jaskiratsingh/IIIT-Hyderabad-Research/SatelliteSfM_Input_Images --output_folder /Users/jaskiratsingh/IIIT-Hyderabad-Research/SatelliteSfM_Output_Image --run_sfm [--use_srtm4] [--enable_debug]
    

    Error:

    zsh: no matches found: [--use_srtm4]
    

    Can you help me figure out how to resolve this?

    Thanks!

    opened by jaskiratsingh2000 8
  • Not able to install adapted Colmap

    Hello,

    I am running into some problems when calling install_colmapforvissat.sh

    At first, everything seems fine and it starts building and downloading different components, like ceres and such. However, when it reaches the moment of building colmap_cuda, it runs into the following error:

    [ 25%] Built target colmap_cuda
    [ 26%] Building C object lib/VLFeat/CMakeFiles/vlfeat.dir/scalespace.c.o
    In file included from /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.h:21,
                     from /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c:363:
    /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c: In function ‘_vl_kmeans_quantize_f’:
    /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/mathop.h:92:37: error: ‘vl_infinity_d’ not specified in enclosing ‘parallel’
       92 | #define VL_INFINITY_D (vl_infinity_d.value)
    /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c:685:34: note: in expansion of macro ‘VL_INFINITY_D’
      685 | TYPE bestDistance = (TYPE) VL_INFINITY_D ;
    In file included from /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c:1782:
    /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c:672:9: error: enclosing ‘parallel’
      672 | #pragma omp parallel default(none)
    In file included from /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c:1788:
    /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c: In function ‘_vl_kmeans_quantize_d’:
    /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c:685:27: error: ‘vl_infinity_d’ not specified in enclosing ‘parallel’
      685 | TYPE bestDistance = (TYPE) VL_INFINITY_D ;
    /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c:672:9: error: enclosing ‘parallel’
      672 | #pragma omp parallel default(none)
    make[2]: *** [lib/VLFeat/CMakeFiles/vlfeat.dir/build.make:258: lib/VLFeat/CMakeFiles/vlfeat.dir/kmeans.c.o] Error 1
    make[2]: *** Waiting for unfinished jobs....
    make[1]: *** [CMakeFiles/Makefile2:902: lib/VLFeat/CMakeFiles/vlfeat.dir/all] Error 2
    make[1]: *** Waiting for unfinished jobs....
    [ 56%] Built target colmap
    make: *** [Makefile:141: all] Error 2
    Command failed: cmake --build . --target install --config Release -- -j8

    I have tried to build it myself from source by running cmake and then make, and it ran into the same problem. My cmake configuration is able to detect my CUDA setup and seems to work fine.

    My specs are:

    • Ubuntu 20.04
    • gcc/g++ version 9
    • nvcc version 11.3
    • cmake version 3.10
    • libboost version 1.71.0

    Many thanks in advance for your help and this great project !!

    opened by A-guridi 5
  • ColmapForVisSatPatched

    Hi Kai,

    I had some time and implemented ColmapForVisSat using git patch. I was able to re-create some satellite reconstructions using the latest Colmap version.

    Best regards, Sebastian

    opened by SBCV 3
  • What downstream algorithm was adopted to achieve the effect of JAX_166_compressed.mp4?

    Hi Kai, thanks for providing this awesome repo! We recently wanted to use satellite images for large-scale remote-sensing city reconstruction, and the result in JAX_166_compressed.mp4 meets our needs very well. However, we can't find any description of the downstream algorithm you adopted. Is it NeuS or MVS? Can you give us some help? We would like to obtain similar results.

    opened by zs670980918 2
  • more data?

    Hi, thanks for this amazing repo, but I do have some more questions:

    1. How can I get more TIFF data? I want to generate more images.
    2. How do I get the latlonalt_bbx parameters; are they known in advance? It seems you use these values to create the camera parameters.
    3. How do you generate videos like "./readme_resources/novel_view.gif" and "https://user-images.githubusercontent.com/21653654/153779703-36b50265-ae3b-41ac-8139-2e0bf081f28d.mp4"? I hope you can add more scripts.

    thanks!

    opened by jeannotes 0
  • Applying the code to my own PAN dataset

    Hi Kai,

    I've noticed that the input dataset consists of .tif images encoding 3-channel uint8 RGB colors. However, I have my own PAN dataset that has only one channel; the details are as follows:

    • PAN_SEN_PWOI_000004990_1_2_F_1_RPC.TXT
    • DIM_PAN_SEN_PWOI_000004990_1_2_F_1.XML
    • PAN_SEN_PWOI_000004990_1_2_F_1_P_R2C1.TFW
    • IMG_PAN_SEN_PWOI_000004990_1_2_F_1_P_R2C1.TIF
    • PREVIEW_PAN_SEN_PWOI_000004990_1_2_F_1.jpg
    • RPC_PAN_SEN_PWOI_000004990_1_2_F_1.XML

    The .TIF files are PAN images with one channel. I tried to revise the code with np.expand_dims to expand the channels from 1 to 3 and to comment out date_time. When I run python satellite_sfm.py --input_folder examples/my_own_data --output_folder examples/outputs_my_own_data --run_sfm --use_srtm4, I get the following bug:

    bug.txt

    So I wonder: how can I apply the code to my own PAN dataset, or at least get something to run? I don't know how to set latlonalt_bbx.json except by using the --use_srtm4 flag.

    Thank you so much

    opened by VictorZoo 0
  • File name changes when using ColmapForVisSatPatched

    Hey,

    I used ColmapForVisSatPatched during installation, but encountered some errors that I believe are due to a newer Colmap version being used than in the original ColmapForVisSat.

    In detail:

    1. The Python scripts are contained inside Colmap/scripts/python, but SatelliteSfM expects them to be inside preprocess_sfm/colmap/.
    2. The read_model.py script has been renamed to read_write_model.py (relevant commit). SatelliteSfM still expects this file to be named read_model.py, so I had to manually rename it.

    After these changes everything seems to work as expected; the provided sample runs through without any issues.

    Best regards, Valentin

    opened by wagnva 0