DSAC* for Visual Camera Re-Localization (RGB or RGB-D)

Introduction

DSAC* is a learning-based visual re-localization method. After being trained for a specific scene, DSAC* is able to estimate the camera rotation and translation from a single, new image of the same scene. DSAC* is versatile w.r.t. what data is available at training and test time. It can be trained from RGB images and ground truth poses alone, or additionally utilize depth maps (measured or rendered) or sparse scene reconstructions for training. At test time, it supports pose estimation from both RGB and RGB-D inputs.

DSAC* is a combination of Scene Coordinate Regression with CNNs and Differentiable RANSAC (DSAC) for end-to-end training. This code extends and improves our previous re-localization pipeline, DSAC++, with support for RGB-D inputs, support for data augmentation, a leaner network architecture, reduced training and test time, and other improvements for increased accuracy.

For more details, we kindly refer to the paper. A BibTeX reference to the paper can be found at the end of this readme.

Installation

DSAC* is based on PyTorch, and includes a custom C++ extension which you have to compile and install (but it's easy). The main framework is implemented in Python, including data processing and setting parameters. The C++ extension encapsulates robust pose optimization and the respective gradient calculation for efficiency reasons.

DSAC* requires the following Python packages; we tested it with the package versions given in parentheses:

pytorch (1.6.0)
opencv (3.4.2)
scikit-image (0.16.2)

Note: The code does not support OpenCV 4.x at the moment.

You compile and install the C++ extension by executing:

cd dsacstar
python setup.py install

Compilation requires access to OpenCV header files and libraries. If you are using Conda, the setup script will look for the OpenCV package in the current Conda environment. Otherwise (or if that fails), you have to set the OpenCV library directory and include directory yourself by editing the setup.py file.

If compilation succeeds, you can import dsacstar in your Python scripts. The extension provides four functions: dsacstar.forward_rgb(...), dsacstar.backward_rgb(...), dsacstar.forward_rgbd(...) and dsacstar.backward_rgbd(...). Check our Python scripts or the documentation in dsacstar/dsacstar.cpp for reference on how to use these functions.
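
As a quick sanity check after installation, a minimal snippet along the following lines (an illustration, not part of the repository) verifies that the extension can be imported and lists the functions it exposes:

import dsacstar
# list the public functions exported by the C++ extension;
# the output should include forward_rgb, backward_rgb, forward_rgbd and backward_rgbd
print([name for name in dir(dsacstar) if not name.startswith('_')])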

Data Structure

The datasets folder is expected to contain one sub-folder per self-contained scene (e.g. one indoor room or one outdoor area). We do not provide any data with this repository. However, the datasets folder comes with a selection of Python scripts that will download and set up the datasets used in our paper (Linux only; please adapt the scripts for other operating systems). In the following, we describe the data format expected in each scene folder, but we advise looking at the provided dataset scripts for reference.

Each sub-folder of datasets should be structured by the following sub-folders that implement the training/test split expected by the code:

datasets/<scene_name>/training/
datasets/<scene_name>/test/

Training and test folders contain the following sub-folders:

rgb/ -- image files
calibration/ -- camera calibration files
poses/ -- camera transformation matrices (ground truth poses)
init/ -- (optional for training) pre-computed ground truth scene coordinates
depth/ -- (optional for training) can be used to compute ground truth scene coordinates on the fly
eye/ -- (optional for RGB-D inputs) pre-computed camera coordinates (i.e. back-projected depth maps)

Correspondences of files across the different sub-folders will be established by alphabetical ordering.
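
As an illustration of this convention, a small helper along the following lines (hypothetical, not part of the repository) checks that the sub-folders of a scene contain matching numbers of files in a consistent order:

import os

scene = 'datasets/7scenes_chess/training'  # example scene folder, adjust as needed
for sub in ['rgb', 'calibration', 'poses']:
    files = sorted(os.listdir(os.path.join(scene, sub)))  # alphabetical order defines the correspondence
    print(sub, len(files), files[:2])  # the counts should match across sub-folders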

Details for image files: Any format supported by scikit-image.

Details for pose files: Text files containing the camera pose h as a 4x4 matrix following the 7Scenes/12Scenes convention. The pose transforms camera coordinates e to scene coordinates y, i.e. y = he.
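
For illustration, the convention can be written out with NumPy as follows (a minimal sketch; the file name is hypothetical and h is assumed to be a 4x4 camera-to-scene transform as described above):

import numpy as np

h = np.loadtxt('datasets/my_scene/training/poses/frame-000000.pose.txt')  # 4x4 pose matrix, hypothetical file name
e = np.array([0.1, 0.2, 1.5, 1.0])  # a point in camera coordinates (homogeneous)
y = h @ e                           # the same point in scene coordinates, y = he
print(y[:3])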

Details for calibration files: Text files. At the moment, we only support the camera focal length (one value, shared for the x- and y-direction, in px). The principal point is assumed to lie in the image center.
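
In other words, the full calibration matrix is implied by the focal length in the calibration file and the size of the corresponding image; a sketch (hypothetical file name, and a 640x480 image assumed):

import numpy as np

focal_length = float(open('datasets/my_scene/training/calibration/frame-000000.calibration.txt').read())
width, height = 640, 480  # size of the corresponding RGB image (assumed here)
K = np.array([[focal_length, 0.0, width / 2.0],
              [0.0, focal_length, height / 2.0],
              [0.0, 0.0, 1.0]])  # principal point assumed at the image center
print(K)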

Details for init files: (3xHxW) tensors (standard PyTorch file format via torch.save/torch.load) where H and W are the dimensions of the output of our network. Since we rescale input images to 480px height, and our network predicts an output that is sub-sampled by a factor of 8, our init files have a height of 60px. Invalid scene coordinate values should be set to zeros, e.g. when generating scene coordinate ground truth from a sparse SfM reconstruction. For reference on how to generate these files, see datasets/setup_cambridge.py, where they are generated from sparse SfM reconstructions, or dataset.py, where they are generated from dense depth maps and ground truth poses.
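
As a minimal, hypothetical example of the expected file format (not the actual generation code; see the scripts mentioned above for that), an init file is simply a 3xHxW float tensor stored with torch.save:

import torch

H, W = 60, 80  # network output resolution, assuming a 640x480 input and a sub-sampling factor of 8
scene_coords = torch.zeros((3, H, W))  # zeros mark invalid scene coordinates
scene_coords[:, 30, 40] = torch.tensor([1.2, 0.5, 3.0])  # example: one valid 3D scene coordinate
torch.save(scene_coords, 'datasets/my_scene/training/init/frame-000000.dat')  # hypothetical file name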

Details for depth files: Any format supported by scikit-image. They should have the same size as the corresponding RGB image and contain a depth measurement per pixel in millimeters. Invalid depth values should be set to zero.

Details for eye files: Same format, size and conventions as init files, but they should contain camera coordinates instead of scene coordinates. For reference on how to generate these files, see dataset.py, where the associated scene coordinate tensors are generated from depth maps. Just adapt that code to store camera coordinates directly, instead of transforming them with the ground truth pose.
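
The back-projection itself is standard pinhole geometry; below is a hedged sketch (hypothetical file names and focal length; for the actual code path, adapt dataset.py as described above) that turns a depth map into a camera coordinate tensor:

import numpy as np
import torch
from skimage import io

depth = io.imread('datasets/my_scene/training/depth/frame-000000.png').astype(np.float64)  # depth in mm
focal_length = 525.0  # value taken from the corresponding calibration file (assumed here)
sub = 8  # sub-sampling factor of the network output
depth = depth[::sub, ::sub] / 1000.0  # sub-sample and convert to meters
h, w = depth.shape
x, y = np.meshgrid(np.arange(w), np.arange(h))
# back-project each sub-sampled pixel to camera coordinates, principal point at the image center
cam_x = (x * sub - w * sub / 2.0) * depth / focal_length
cam_y = (y * sub - h * sub / 2.0) * depth / focal_length
eye = torch.from_numpy(np.stack([cam_x, cam_y, depth])).float()  # 3xHxW; stays zero where depth is invalid
torch.save(eye, 'datasets/my_scene/training/eye/frame-000000.dat')  # hypothetical file name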

Supported Datasets

Prior to using these datasets, please check their original licenses (see the website links at the beginning of each section).

7Scenes

7Scenes (MSR) is a small-scale indoor re-localization dataset. The authors provide training/test split information, a dense 3D scan of each scene, RGB and depth images, as well as ground truth poses. We provide the Python script setup_7scenes.py to download the dataset and convert it into our format.

Note that the provided depth images are not yet registered to the RGB images, and using them directly will lead to inferior results. As an alternative, we provide rendered depth maps here. Just extract the archive inside datasets/ and the depth maps should be merged into the respective 7Scenes sub-folders.

For RGB-D experiments we provide pre-computed camera coordinate files (eye/) for all training and test scenes here. We generated them from the original depth maps after doing a custom registration to the RGB images. Just extract the archive inside datasets/ and the coordinate files should be merged into the respective 7Scenes sub-folders.

12Scenes

12Scenes (Stanford) is a small-scale indoor re-localization dataset. The authors provide training/test split information, a dense 3D scan of each scene, RGB and depth images, as well as ground truth poses. We provide the Python script setup_12scenes.py to download the dataset and convert it into our format.

The provided depth images are registered to the RGB images and can be used directly. However, we provide rendered depth maps here, which we used in our experiments. Just extract the archive inside datasets/ and the depth maps should be merged into the respective 12Scenes sub-folders.

For RGB-D experiments we provide pre-computed camera coordinate files (eye/) for all training and test scenes here. We generated them from the original depth maps after doing a custom registration to the RGB images. Just extract the archive inside datasets/ and the coordinate files should be merged into the respective 12Scenes sub-folders.

Cambridge Landmarks

Cambridge Landmarks is an outdoor re-localization dataset. The dataset comes with a set of RGB images of five landmark buildings in the city of Cambridge (UK). The authors provide training/test split information, and a structure-from-motion (SfM) reconstruction containing a 3D point cloud of each building, and reconstructed camera poses for all images. We provide the Python script setup_cambridge.py to download the dataset and convert it into our format. The script will generate ground-truth scene coordinate files from the sparse SfM reconstructions. This dataset is not suitable for RGB-D based pose estimation since measured depth maps are not available.

Note: The Cambridge Landmarks dataset contains a sixth scene, Street, which we omitted in our experiments due to the poor quality of the SfM reconstruction.

Training DSAC*

We train DSAC* in two stages: Initializing scene coordinate regression, and end-to-end training. DSAC* supports several variants of camera re-localization, depending on what information about the scene is available at training and test time, e.g. a 3D reconstruction of the scene, or depth measurements for images.

Note: We provide pre-trained networks for 7Scenes, 12Scenes, and Cambridge, each trained for the three main scenarios investigated in the paper: RGB only (RGB), RGB + 3D model (RGBM) and RGB-D (RGBD). Download them here.

You may call all training scripts with the -h option to see a listing of all supported command line arguments. The default settings of all parameters correspond to our experiments in the paper.

Each training script will create a *.txt log file which contains the training iteration and training loss in each line. The initialization script will additionally log the percentage of valid predictions w.r.t. the various constraints described in the paper.
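
Since each log line starts with the iteration and the loss, the training progress can be inspected with a few lines of Python (a hypothetical helper, not part of the repository; the log file name and the use of matplotlib are assumptions):

import numpy as np
import matplotlib.pyplot as plt

log = np.loadtxt('log_init_7scenes_chess_.txt', usecols=(0, 1))  # first two columns: iteration, loss
plt.plot(log[:, 0], log[:, 1])
plt.xlabel('training iteration')
plt.ylabel('training loss')
plt.show()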

Initialization

RGB only (mode 0)

If only RGB images and ground truth poses are available (minimal setup), initialize a network by calling:

python train_init.py <scene_name> <network_output_file> --mode 0

Mode 0 triggers the RGB-only mode, which requires neither pre-computed ground truth scene coordinates nor depth maps. You specify a scene via <scene_name>, which should correspond to a sub-directory of the datasets folder, e.g. 'Cambridge_GreatCourt'. <network_output_file> specifies under which file name the script should store the resulting new network.

RGB + 3D Model (mode 1)

When a 3D model of the scene is available, it may be utilized during the initialization stage, which usually leads to improved accuracy. You may utilize the 3D model in two ways: Either you use it together with the ground truth poses to render dense depth maps for each RGB image (see the depth/ folder description in the Data Structure section above), as we did for 7Scenes/12Scenes. Note that we provide such rendered depth maps for 7Scenes/12Scenes, see the Supported Datasets section above.

In this case, the training script will generate ground truth scene coordinates from the depth maps and ground truth poses (implemented in dataset.py).

python train_init.py <scene_name> <network_output_file> --mode 1

Alternatively, you may pre-compute ground truth scene coordinate files directly (see the init/ folder description in the Data Structure section above), as we did for Cambridge Landmarks. Note that the datasets/setup_cambridge.py script will generate these files for you. To utilize pre-computed scene coordinate ground truth, append the -sparse flag.

python train_init.py <scene_name> <network_output_file> --mode 1 -sparse

RGB-D (mode 2)

When (measured) depth maps for each image are available, you call:

python train_init.py <scene_name> <network_output_file> --mode 2

This uses the depth/ dataset folder, similar to mode 1, to generate ground truth scene coordinates, but optimizes a different loss for initialization (3D distance instead of reprojection error).

Note: The 7Scenes depth maps are not registered to the RGB images, and hence are not directly usable for training. The 12Scenes depth maps are registered properly and may be used as is. However, in our experiments, we used rendered depth maps for both 7Scenes and 12Scenes to initialize scene coordinate regression.

End-To-End Training

End-to-end training supports two modes: RGB (mode 1) and RGB-D (mode 2), depending on whether depth maps are available.

python train_e2e.py <scene_name> <network_input_file> <network_output_file> --mode <1 or 2>

<network_input_file> points to a network which has already been initialized for this scene. <network_output_file> specifies under which file name the script should store the resulting new network.

Mode 2 (RGB-D) requires pre-computed camera coordinate files (see Data Structure section above). We provide these files for 7Scenes/12Scenes, see Supported Datasets section.

Testing DSAC*

Testing supports two modes: RGB (mode 1) and RGB-D (mode 2), depending on whether depth maps are available.

To evaluate on a scene, call:

python test.py <scene_name> <network_input_file> --mode <1 or 2>

This will estimate poses for the test set and compare them to the respective ground truth. You specify a scene via <scene_name>, which should correspond to a sub-directory of the datasets folder, e.g. 'Cambridge_GreatCourt'. <network_input_file> points to a network which has already been initialized for this scene. Running the script creates two output files:

test_<scene_name>_.txt -- Contains the median rotation error (deg), the median translation error (cm), and the average processing time per test image (s).

poses_<scene_name>_.txt -- Contains, for each test image, the corresponding file name, the estimated pose as a 4D quaternion (wxyz) and a 3D translation vector (xyz), followed by the rotation error (deg) and the translation error (m).
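
For downstream use, the estimated poses can be parsed back with a few lines of Python; the sketch below assumes the whitespace-separated layout described above (the output file name is an example) and converts each quaternion back into a rotation matrix:

import numpy as np

def quat_to_rotmat(w, x, y, z):
    # standard conversion of a unit quaternion (wxyz) to a 3x3 rotation matrix
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)]])

with open('poses_7scenes_chess_.txt') as f:  # example output file name
    for line in f:
        name, qw, qx, qy, qz, tx, ty, tz, r_err, t_err = line.split()
        R = quat_to_rotmat(*(float(v) for v in (qw, qx, qy, qz)))
        t = np.array([float(tx), float(ty), float(tz)])
        print(name, r_err, t_err)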

Mode 2 (RGB-D) requires pre-computed camera coordinate files (see the Data Structure section above). We provide these files for 7Scenes/12Scenes, see the Supported Datasets section. Note that these files have to be generated from the measured depth maps (but ensure proper registration to the RGB images). You should not utilize rendered depth maps here, since rendering would use the ground truth camera pose, which means that ground truth test information would leak into your input data.

Call the test script with the -h option to see a listing of all supported command line arguments.

Publications

Please cite the following paper if you use DSAC* or parts of this code in your own work.

@article{brachmann2020dsacstar,
  title={Visual Camera Re-Localization from {RGB} and {RGB-D} Images Using {DSAC}},
  author={Brachmann, Eric and Rother, Carsten},
  journal={arXiv},
  year={2020}
}

This code builds on our previous camera re-localization pipelines, namely DSAC and DSAC++:

@inproceedings{brachmann2017dsac,
  title={{DSAC} - {Differentiable RANSAC} for Camera Localization},
  author={Brachmann, Eric and Krull, Alexander and Nowozin, Sebastian and Shotton, Jamie and Michel, Frank and Gumhold, Stefan and Rother, Carsten},
  booktitle={CVPR},
  year={2017}
}

@inproceedings{brachmann2018lessmore,
  title={Learning less is more - {6D} camera localization via {3D} surface regression},
  author={Brachmann, Eric and Rother, Carsten},
  booktitle={CVPR},
  year={2018}
}

Comments
  • Train Cambridge

    Hi!

    I want to confirm the parameters used to train on the Cambridge Landmarks dataset. Did you use the default parameters to achieve the reported accuracies (RGB only)? I am using dsacstar to train on a custom outdoor dataset with RGB only (it did not work well using the default parameters). I would appreciate your suggestions on which direction to take when tuning the parameters. Thanks so much!

    opened by Song-Jingyu 4
  • Problem with the result about cambridge datasets

    I tested the pre-trained networks on the Cambridge datasets. The results for OldHospital, ShopFacade and StMarysChurch deviate noticeably from the expected values, while the results for KingsCollege and GreatCourt are normal. I guess that the pre-trained networks do not correspond to the datasets. A screenshot of the ShopFacade results was attached (omitted here).

    opened by aruba01 3
  • Getting Runtime Error when running on Windows 10

    I was able to build the code on Windows with:

    • conda python 3.7.10
    • MSVC c++ compiler
    • requirements from README.md

    I got this error when running train_init.py on the chess dataset.

    I used: python .\train_init.py 7scenes_chess chess --mode 0

    RuntimeError:
            An attempt has been made to start a new process before the
            current process has finished its bootstrapping phase.
            This probably means that you are not using fork to start your
            child processes and you have forgotten to use the proper idiom
            in the main module:
    
                if __name__ == '__main__':
                    freeze_support()
                    ...
    
            The "freeze_support()" line can be omitted if the program
            is not going to be frozen to produce an executable.
    

    I think this is a problem caused by not linking OpenMP with the -fopenmp flag, or some other Windows-specific error.

    opened by RameshKamath 3
  • Does re-scaling damage the unknown scene coordinate masks?

    Hi,

    Thanks for the wonderful open-sourced project (again)!

    I have a question about the potentially harmful effects of label re-scaling. Re-scaling the image is generally fine, but re-scaling the 3D labels may change the 0 value used for the invalid scene coordinate masks: https://github.com/vislearn/dsacstar/blob/3ffbcb1d4d7b0cae68902560b5a2296d8c1b77e6/dataset.py#L187-L199

    In the loss function, the mask is used as follows: https://github.com/vislearn/dsacstar/blob/3ffbcb1d4d7b0cae68902560b5a2296d8c1b77e6/train_init.py#L191-L192

    We are concerned about this in our project, as the training labels might become inaccurate after augmentation. I wondered whether you have some insights on this issue.

    Many thanks!

    opened by qiyan98 2
  • Generate learned geometry

    Hi!

    Could you please clarify how to generate the learned geometry? I've tried to run test.py while saving the predictions with scene_coordinates = scene_coordinates.cpu() and cv2.imwrite(str(file), scene_coordinates.permute([0,2,3,1]).numpy()[0]). So I got the network predictions, but it is not clear how to generate a point cloud from them. I took all these images and reshaped them to (N, 3), but when I visualize them I get the following (image omitted).

    opened by YaroslavShchekaturov 2
  • Dataset preparation (camera pose)

    Hi!

    Am I right that if I want to use a camera pose extracted from Blender (obj.matrix_world), I need to multiply it by diag(1, -1, -1, 1) to follow the OpenCV convention?

    Thank you!

    opened by YaroslavShchekaturov 2
  • How did you generate depth maps and precomputed camera coordinate files?

    The instruction says to download generated depth maps and precomputed camera coordinate files.

    Did you generate those from the 7Scenes sequence images?

    If yes, how did you compute/generate them?

    Is there any code for that?

    opened by chudur-budur 2
  • What does it mean by "opencv (3.4.2)"?

    Which one is it?

    opencv 3.4.2 (opencv.org): https://github.com/opencv/opencv/releases/tag/3.4.2
    opencv-python 3.4.2.16 (pypi.org): https://pypi.org/project/opencv-python/3.4.2.16/#history
    opencv 3.4.2 (anaconda.org): https://anaconda.org/conda-forge/opencv

    opened by chudur-budur 2
  • Do I need to compile and install opencv with CUDA support for DSAC* to work?

    The README says I need to have opencv (3.4.2). Do I need to compile and install opencv (3.4.2) with CUDA enabled?

    Like the process described here?

    https://www.pyimagesearch.com/2016/07/11/compiling-opencv-with-cuda-support/

    opened by chudur-budur 2
  • Problem with data downloading again

    I can't download 7scenes_rendered_depth and 12scenes_rendered_depth from https://heidata.uni-heidelberg.de/. I have tried several times, but each time the download fails halfway. Is there another way to get access to the data? Sorry to bother you. I would appreciate your help.

    opened by Chicolor 1
  • Problem with data downloading

    Hello there, I have a problem with data downloading. Now I cannot access or download any data from https://heidata.uni-heidelberg.de. Can you provide another way to get 7scenes rendered depth and pre-trained networks?

    Thanks for sharing your work.

    opened by MisEty 1
  • mean coordinates of the scene

    Hello,

    I would like to ask about the effects of computing the mean coordinates and applying these means to the final scene map. I only see that you applied this step in the initialization phase, but not in the end-to-end training phase or the test phase. Is it important to perform this step, and what was the motivation behind it? Thank you in advance.

    opened by sontung 0
  • Questions on Pre-training and pose matrix h

    Hi Eric, thanks for your wonderful work on pose estimation. (1) I was checking the performance improvement on the test set of the Cambridge_GreatCourt scene, starting with your provided initial network and then the network trained in an end-to-end fashion. Surprisingly, the error rates are pretty much the same before and after end-to-end learning. In particular, I'm running test.py with python test.py Cambridge_GreatCourt models\rgb\cambridge\Cambridge_GreatCourt.net --mode 1 (results screenshot omitted). However, when I train the network for a few epochs in the second phase with python train_e2e.py Cambridge_GreatCourt models\rgb\cambridge\Cambridge_GreatCourt.net output\output.net --mode 1, the results (screenshot omitted) are essentially unchanged, which means there is no benefit from the end-to-end learning stage. Is that what you observed before, or am I doing something wrong?

    (2) I'm a bit confused about the notation in the paper versus what has been implemented in the code. In the paper, h (a 4x4 matrix) is defined as the pose matrix transforming from the camera to the 3D scene coordinates. However, the ground truth pose is the matrix transforming from the environment to the camera coordinates. So direct differencing is not correct, right? I do not know whether the C++ code computes the inverse or not, but I can see that, at the end, you save the inverse pose in the output text file. Can you please explain this?

    Best regards Aref

    opened by arekavandi 3
  • Problems with run mode 2 with my data

    Hello, I am trying to train DSAC* on my own data captured with a Kinect v2, and I have some questions:

    1. Because the intrinsics of the RGB camera and the depth camera are different, some depth pixels do not have corresponding RGB pixels (I use white color instead). The input RGB image may look like this (image omitted). Can the network handle such a case?
    2. I trained a model using my RGB-D images. The training succeeded, but when I run test.py in mode 2 I get errors (screenshot omitted). It seems that dsacstar.forward_rgbd fails on some frames. When I run test.py in mode 1, everything is OK.
    opened by MisEty 1
  • Problem with testing on mode 2

    Hello there,

    I have a problem with testing in mode 2. I can train models using rendered depth maps with mode 2, but when I try to test them I get the results below (screenshot omitted). To be sure, I also tried your pre-trained model, which gives the same results. When I test in mode 1, everything seems correct for both models.

    Additionally, in the rendered depth maps of seq04, the frames between 450 and 500 do not match the original images.

    Thanks for sharing your work.

    opened by mhmtsarigul 1
  • Problems executing setup.py

    Hi, I made a conda environment with all the packages that you mentioned, with the correct versions, but I constantly get this error while running setup.py.

    dsacstar/dsacstar/dsacstar_util.h:193:5: error: ‘SOLVEPNP_P3P’ is not a member of ‘cv’

    I am using OpenCV 3.4.2 and I manually set the paths for the OpenCV header and library files in setup.py.

    opened by basit-7 6