Overview

PICCOLO: Point-Cloud Centric Omnidirectional Localization

Official PyTorch implementation of PICCOLO: Point-Cloud Centric Omnidirectional Localization (ICCV 2021) [Paper] [Video].


PICCOLO is a simple, efficient algorithm for omnidirectional localization that estimates camera pose from a single query omnidirectional image and a point cloud: no additional preprocessing or learning is required!


In this repository, we provide the implementation and instructions for running PICCOLO, along with the accompanying OmniScenes dataset. If you have any questions regarding the dataset or the baseline implementations, please leave an issue or contact [email protected].
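
For intuition, here is a heavily simplified sketch of the idea behind PICCOLO: project the point cloud into the panorama under a candidate pose, sample the panorama colors at the projected locations, and minimize the discrepancy to the point colors by gradient descent (the paper's sampling loss). The coordinate conventions, pose parameterization, and hyperparameters below are illustrative assumptions, not the repository's actual implementation.

import math
import torch
import torch.nn.functional as F

def axis_angle_to_matrix(w):
    # Rodrigues' formula: rotation matrix from an axis-angle vector w of shape (3,).
    theta = (w * w).sum().add(1e-12).sqrt()
    k = w / theta
    K = torch.zeros(3, 3, dtype=w.dtype)
    K[0, 1], K[0, 2], K[1, 2] = -k[2], k[1], -k[0]
    K[1, 0], K[2, 0], K[2, 1] = k[2], -k[1], k[0]
    return torch.eye(3, dtype=w.dtype) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

def sampling_loss(img, xyz, rgb, w, t):
    # img: (3, H, W) panorama in [0, 1]; xyz: (N, 3) points; rgb: (N, 3) point colors.
    pts = (xyz - t) @ axis_angle_to_matrix(w).T  # world -> camera (assumed convention)
    lon = torch.atan2(pts[:, 1], pts[:, 0])      # longitude in [-pi, pi]
    lat = torch.asin((pts[:, 2] / pts.norm(dim=1).clamp(min=1e-8)).clamp(-1.0, 1.0))
    # Equirectangular image coordinates, normalized to [-1, 1]^2 for grid_sample
    # (wrap-around at the longitude seam is ignored for brevity).
    grid = torch.stack([lon / math.pi, 2 * lat / math.pi], dim=-1)
    sampled = F.grid_sample(img[None], grid[None, None], align_corners=True)  # (1, 3, 1, N)
    return (sampled[0, :, 0].T - rgb).pow(2).mean()  # sampled color vs. point color

def localize(img, xyz, rgb, t_init, steps=200, lr=0.1):
    # PICCOLO optimizes from many candidate starting points; a single start is shown here.
    w = torch.zeros(3, requires_grad=True)
    t = t_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([w, t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = sampling_loss(img, xyz, rgb, w, t)
        loss.backward()
        opt.step()
    return w.detach(), t.detach()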

Running PICCOLO

Dataset Preparation

First, download the Stanford 2D-3D-S dataset and place the data in the directory structure below.

piccolo/data
└── stanford (Stanford2D-3D-S Dataset)
    ├── pano (panorama images)
    │   ├── area_1
    │   │  └── *.png
    │   ⋮
    │   │
    │   └── area_6
    │       └── *.png
    ├── pcd_not_aligned (point cloud data)
    │   ├── area_1
    │   │   └── *.txt
    │   ⋮
    │   │
    │   └── area_6
    │       └── *.txt
    └── pose (json files containing ground truth camera pose)
        ├── area_1
        │   └── *.json
        ⋮
        │
        └── area_6
            └── *.json
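
As an optional sanity check (my own suggestion, not part of the codebase), the short script below verifies that every panorama has a matching ground-truth pose file. The rgb-to-pose filename mapping mirrors the 2D-3D-S naming scheme shown in the comments below and is an assumption; adjust it if your filenames differ.

from pathlib import Path

root = Path("data/stanford")
for area_dir in sorted((root / "pano").iterdir()):
    pose_dir = root / "pose" / area_dir.name
    for pano in sorted(area_dir.glob("*.png")):
        # Assumption: pose files share the panorama's stem with "_rgb" -> "_pose".
        pose = pose_dir / (pano.stem.replace("_rgb", "_pose") + ".json")
        if not pose.exists():
            print(f"missing pose for {area_dir.name}/{pano.name}")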

Installation

To run the codebase, you need Anaconda. Once Anaconda is installed, run the following commands to create and set up a conda environment.

conda create --name omniloc python=3.7
conda activate omniloc
pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html 
conda install cudatoolkit=10.1

In addition, you must install pytorch_scatter. Follow the instructions provided in the pytorch_scatter GitHub repo, and make sure to install the version built for torch 1.7.0 and CUDA 10.1.
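
As a quick sanity check (my own suggestion, not part of the repo), the snippet below, run inside the omniloc environment, confirms that the expected torch / CUDA / torch_scatter combination is importable.

import torch
import torch_scatter

print(torch.__version__)          # expect 1.7.0 (a +cu101 build)
print(torch.version.cuda)         # expect 10.1
print(torch.cuda.is_available())  # True once the GPU driver is visible
print(torch_scatter.__version__)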

Running

To obtain results on the Stanford 2D-3D-S dataset, run the following command from the terminal:

python main.py --config configs/stanford.ini --log logs/NAME_OF_LOG_DIRECTORY

The config above performs gradient descent sequentially for each candidate starting point. We also provide a parallel implementation of PICCOLO that performs gradient descent over all candidates in parallel. While this version is faster, its accuracy is slightly lower than that of the sequential version. To run the parallel implementation, use the following command:

python main.py --config configs/stanford_parallel.ini --log logs/NAME_OF_LOG_DIRECTORY

Output

After running, four kinds of output will be present in the log directory.

  • Config file used for PICCOLO
  • Images in NAME_OF_LOG_DIRECTORY/results, created by projecting the point cloud with the pose estimated by PICCOLO
  • CSV file containing the following information (see the parsing sketch after this list)
    • Panorama image name
    • Ground truth translation
    • Ground truth rotation
    • Whether the image was skipped (an image is skipped when its ground truth translation falls outside the point cloud bounds)
    • Translation obtained by running PICCOLO
    • Rotation obtained by running PICCOLO
    • Translation error
    • Rotation error
    • Runtime
  • TensorBoard file containing the accuracy
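
As an illustration of consuming the CSV, the sketch below computes an accuracy number from it. The CSV filename and column names are placeholders, not the repository's actual headers; substitute the names found in your log directory.

import pandas as pd

df = pd.read_csv("logs/NAME_OF_LOG_DIRECTORY/result.csv")  # placeholder filename
df = df[~df["skipped"]]  # placeholder column: drop out-of-bounds images
# Count an image as localized if translation error < 0.1 m and rotation
# error < 5 degrees, a common omnidirectional localization criterion.
acc = ((df["t_error"] < 0.1) & (df["r_error"] < 5.0)).mean()
print(f"accuracy: {acc:.3f} over {len(df)} images")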

Downloading OmniScenes

OmniScenes is our newly collected dataset for evaluating omnidirectional localization in diverse scenarios, such as robot-mounted or handheld cameras and scenes with changes.


The dataset comprises images and point clouds captured from 7 scenes ranging from wedding halls to hotel rooms. We are currently removing regions of the dataset that contain private information and cannot be released publicly. We will announce further updates through this GitHub repository.

Comments
  • Instructions on preparing the 2D-3D-S dataset

    According to their project repository, the original directory structure provided by the 2D-3D-S dataset is as follows:

    README.md
    /assets
      semantic_labels.json
      utils.py
    /area_1
      /3d
        pointcloud.mat
        rgb.obj    # The raw 3d mesh with rgb textures
        rgb.mtl      # The textures for the raw 3d mesh
        semantic.obj # Semantically-tagged 3d mesh
        semantic.mtl # Textures for semantic.obj
        /rgb_textures
          {uuid_{i}}.jpg  # Texture images for the rgb 3d mesh
      /data   # all of the generated data
        /pose
          camera_{uuid}__{room}_{i}_frame_{j}_domain__pose.json
        /rgb
          camera_{uuid}__{room}_{i}_frame_{j}_domain__rgb.png
        /depth
        /global_xyz
        /normal
        /semantic
        /semantic_pretty
      /pano    # equirectangular projections
        /pose
          camera_{uuid}__{room}_{i}_frame_equirectangular_domain__pose.json
        /rgb
        /depth
        /global_xyz
        /normal
        /semantic
        /semantic_pretty
      /raw     # Raw data from Matterport
        {uuid}_pose_{pitch_level}_{yaw_position}.txt # RT matrix for raw sensor
        {uuid}_intrinsics_{pitch_level}.txt      # Camera calibration for sensor at {pitch_level}
        {uuid}_i{pitch_level}_{yaw_position}.jpg # Raw RGB image from sensor
        {uuid}_d{pitch_level}_{yaw_position}.jpg # Raw depth image from sensor
    /area_2
    /area_3
    /area_4
    /area_5a
    /area_5b
    /area_6
    

    However, the dataset preparation section above organizes the 2D-3D-S data differently. After setting up the environment, I tried the following steps to run on area_3 of the 2D-3D-S dataset:

    1. Symlinking area_3/data/rgb/ (in the original 2D-3D-S dataset) to piccolo/data/stanford/pano/area_3
    2. Symlinking area_3/data/raw/*.txt (in the original 2D-3D-S dataset) under piccolo/data/stanford/pcd_not_aligned/area_3 directory
    3. Symlinking area_3/pano/pose (in the original 2D-3D-S dataset) to piccolo/data/stanford/pose

    Then when I run python main.py --config configs/stanford.ini --log log/test, it gives the following traceback:

    Traceback (most recent call last):
      File "main.py", line 94, in <module>
        localize.localize_stanford(cfg, writer, log_dir)
      File "/path/to/piccolo/localize.py", line 209, in localize_stanford
        xyz_np, rgb_np = data_utils.read_stanford(pcd_name, sample_rate)
      File "/path/to/piccolo/data_utils.py", line 35, in read_stanford
        data = read_table(filepath, header=None, delim_whitespace=True).values
      File "/path/to/.conda/envs/piccolo/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper
        return func(*args, **kwargs)
      File "/path/to/.conda/envs/piccolo/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 683, in read_table
        return _read(filepath_or_buffer, kwds)
      File "/path/to/.conda/envs/piccolo/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 482, in _read
        parser = TextFileReader(filepath_or_buffer, **kwds)
      File "/path/to/.conda/envs/piccolo/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 811, in __init__
        self._engine = self._make_engine(self.engine)
      File "/path/to/.conda/envs/piccolo/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 1040, in _make_engine
        return mapping[engine](self.f, **self.options)  # type: ignore[call-arg]
      File "/path/to/.conda/envs/piccolo/lib/python3.7/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 51, in __init__
        self._open_handles(src, kwds)
      File "/path/to/.conda/envs/piccolo/lib/python3.7/site-packages/pandas/io/parsers/base_parser.py", line 229, in _open_handles
        errors=kwds.get("encoding_errors", "strict"),
      File "/path/to/.conda/envs/piccolo/lib/python3.7/site-packages/pandas/io/common.py", line 707, in get_handle
        newline="",
    FileNotFoundError: [Errno 2] No such file or directory: './data/stanford/pcd_not_aligned/area_3/WC_1.txt'
    

    It seems that the code tries to load a file called WC_1.txt under the data/stanford/pcd_not_aligned/area_3 directory, while the filenames I symlinked there look like 04a287849657478ea774727e5bff5202_pose_1_0.txt and 04a287849657478ea774727e5bff5202_intrinsics_0.txt.
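
    For reference, the read_stanford call in the traceback suggests the code expects one whitespace-delimited txt file per room. A simplified version (the xyz/rgb column split and the role of sample_rate are my guesses) would be:

    import pandas as pd

    def read_stanford(filepath, sample_rate=1):
        # Mirrors the pd.read_table call at data_utils.py line 35 in the traceback.
        data = pd.read_table(filepath, header=None, delim_whitespace=True).values
        data = data[::sample_rate]        # subsample points (my guess at sample_rate's role)
        return data[:, :3], data[:, 3:6]  # per-point xyz position and rgb color (my guess)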

    Could you please point out anything missing in my dataset preparation steps?

    opened by blurgyy 5