Symmetry and Uncertainty-Aware Object SLAM for 6DoF Object Pose Estimation

Overview

SUO-SLAM

This repository hosts the code for our CVPR 2022 paper "Symmetry and Uncertainty-Aware Object SLAM for 6DoF Object Pose Estimation". ArXiv link.

Citation

If you use any part of this repository in an academic work, please cite our paper as:

@inproceedings{Merrill2022CVPR,
  Title      = {Symmetry and Uncertainty-Aware Object SLAM for 6DoF Object Pose Estimation},
  Author     = {Nathaniel Merrill and Yuliang Guo and Xingxing Zuo and Xinyu Huang and Stefan Leutenegger and Xi Peng and Liu Ren and Guoquan Huang},
  Booktitle  = {2022 Conference on Computer Vision and Pattern Recognition (CVPR)},
  Year       = {2022},
  Address    = {New Orleans, USA},
  Month      = jun,
}

Installation

This codebase was tested on Ubuntu 18.04. To use the BOP rendering (e.g. for keypoint labeling), install

$ sudo apt install libfreetype6-dev libglfw3

You will also need a Python environment that contains the required packages. To see which packages we used, check the list of requirements in requirements.txt. They can be installed via

$ pip install -r requirements.txt

Preparing Data

Datasets

To run the training and testing (i.e. single view or with SLAM), first decide on a place to download the data. The disk will need a few hundred GB of space for all the data (at least 150GB for the download and more to extract it). All of our code expects the data to be in a local directory ./data, but you can of course symlink this to another location (perhaps with more disk space). So, first of all, in the root of this repo run

$ mkdir data

or to symlink to an external location

$ ln -s /path/to/drive/with/space/ ./data

You can pick and choose which data to download (for example, just T-LESS or YCBV). Note that all YCBV and T-LESS downloads have our keypoint labels packaged along with the data. Download the following Google Drive links into ./data and extract them.

When all is said and done, the tree should look like this

$ cd ./data && tree --filelimit 3
.
├── bop_datasets
│   ├── tless 
│   └── ycbv 
├── saved_detections
└── VOCdevkit
    └── VOC2012

Pre-trained models

You can download the pretrained models anywhere, but we like to keep them in the results directory that is written to during training.

Training

First set the default arguments in ./lib/args.py for your username if desired, then execute

$ ./train.py

with the appropriate arguments for your filesystem. You can also run

$ ./train.py -h

for a full list of arguments and their meanings. One important argument is batch_size, the number of images loaded for each training batch. Note that there may be a variable number of objects in each image, and the objects are all stacked together into one big batch to run the network -- so the actual batch size being run may be several times batch_size. To keep batch_size reasonably large, we provide another argument, truncate_obj, which, as the help says, truncates the object batch to this number if it exceeds it. We recommend starting with a large batch size to find the maximum truncate_obj for your GPUs, then reducing the batch size until there are few to no warnings about too many objects being truncated.
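The stacking and truncation behavior described above can be sketched roughly as follows (a hypothetical NumPy sketch; the real implementation lives in the training code and ./lib/args.py, and its details may differ):

```python
import numpy as np

def stack_and_truncate(images_objs, truncate_obj, rng=None):
    """Stack per-image object crops into one flat batch, truncating if needed.

    images_objs: list (length batch_size) of arrays, each (n_i, C, H, W).
    truncate_obj: cap on the total number of objects run through the network.
    """
    rng = rng or np.random.default_rng(0)
    flat = np.concatenate(images_objs, axis=0)  # (sum n_i, C, H, W)
    if flat.shape[0] > truncate_obj:
        # Too many objects for the GPU: keep a random subset and warn.
        print(f"Warning: truncating {flat.shape[0]} objects to {truncate_obj}")
        keep = rng.choice(flat.shape[0], size=truncate_obj, replace=False)
        flat = flat[keep]
    return flat

# Three images with 2, 5, and 4 objects each -> 11 objects total,
# even though the image-level batch size is only 3.
batch = [np.zeros((n, 3, 64, 64)) for n in (2, 5, 4)]
out = stack_and_truncate(batch, truncate_obj=8)
print(out.shape[0])  # 8
```

This is why a warning about truncated objects means the network saw fewer objects than the images actually contain.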

Evaluation

Before you can evaluate in a single-view or SLAM fashion, you will need to build the third-party libraries for PnP and graph optimization. First make sure that you have the Ceres Solver installed, then run

$ ./build_thirdparty.sh

Reproducing Results

To reproduce the results of the paper with the pretrained models, check out the scripts under the scripts directory:

eval_all_tless.sh  eval_all_ycbv.sh  make_video.sh

These will reproduce most of the results in the paper, as well as any video clips you want. You may have to change the first few lines of each script. Note that these scripts also show the proper arguments if you want to run from the command line alone.

Note that for the T-LESS dataset, we use the third-party BOP toolkit to compute the VSD error recall, which shows up in the final terminal output as "Mean object recall" among other numbers.
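As a rough, hypothetical illustration of what such a recall measures (the BOP toolkit's actual thresholds and averaging are more involved), the idea is to count the fraction of pose estimates whose VSD error falls below a sweep of thresholds, then average over objects:

```python
import numpy as np

# Hypothetical per-estimate VSD errors, grouped by object id.
vsd_errors = {
    1: np.array([0.12, 0.45, 0.08, 0.30]),
    5: np.array([0.55, 0.20, 0.10]),
}
thresholds = np.arange(0.05, 0.51, 0.05)  # threshold sweep, BOP-style

# Per-object recall: fraction of estimates below each threshold,
# averaged over thresholds.
per_object = {
    obj: np.mean([(errs < t).mean() for t in thresholds])
    for obj, errs in vsd_errors.items()
}

# "Mean object recall" then averages the per-object recalls.
mean_object_recall = float(np.mean(list(per_object.values())))
```

The toolkit reports this final average, so a single badly-estimated object drags the headline number down regardless of how many instances it has.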

Labeling

Overview

We manually label keypoints on the CAD models so that some keypoints carry semantic meaning. For the full list of keypoint meanings, see the specific README.

We provide our landmark labeling tool in the script manual_keypoints.py. The same script can also produce a visualization of the keypoints, as shown below, with the --viz option.

The script shows a panel of renderings of the same object in slightly different orientations. The idea is that you pick the same keypoint multiple times to ensure correctness and to get a better label by averaging the samples.

The script also prints the following directions in the terminal:

============= Welcome ===============
Select the keypoints with a left click!
Use the "wasd" to turn the objects.
Press "i" to zoom in and "o" to zoom out.
Make sure that the keypoint colors match between all views.
Messed up? Just press 'u' to undo.
Press "Enter" to finish and save the keypoints
Press "Esc" to just quit

Once you have pressed "Enter", you will get to an inspection pane, where the unscaled mean keypoints are on the left and the ones scaled by covariance are on the right; the ellipses are the 3-sigma Gaussian bounds projected onto the image. If the covariance is too large, or the mean is out of place, then you may have messed up. Again, the program will print these directions to the terminal:

Inspect the results!
Use the "wasd" to turn the object.
Press "i" to zoom in and "o" to zoom out.
Press "Esc" to go back, "Enter" to accept (saving keypoints and viewpoint for vizualization).
Please pick a point on the object!

So if you are done and the result looks good, press "Enter"; if not, press "Esc" to go back. Also make sure that when you are done, you rotate and scale the object into the best "view pose" (with the front facing the camera and the top facing up), as this pose is used by both the above visualization and the actual training code to determine the best symmetry to pick for an initial detection.
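The averaging and covariance scaling behind the inspection pane can be sketched roughly as follows (a minimal NumPy sketch with made-up sample coordinates; the tool's actual internals may differ):

```python
import numpy as np

# Hypothetical: several clicked samples of the same 3D keypoint (meters),
# one per object rendering in the labeling panel.
samples = np.array([
    [0.051, -0.020, 0.100],
    [0.049, -0.019, 0.102],
    [0.050, -0.022, 0.099],
    [0.052, -0.021, 0.101],
])

mean = samples.mean(axis=0)          # the averaged keypoint label
cov = np.cov(samples, rowvar=False)  # 3x3 sample covariance of the clicks

# 3-sigma extent along each principal axis of the covariance; projecting
# these onto the image gives the ellipses shown in the inspection pane.
eigvals, eigvecs = np.linalg.eigh(cov)
three_sigma = 3.0 * np.sqrt(np.clip(eigvals, 0.0, None))
```

A tight cluster of clicks yields small ellipses; a stray click inflates the covariance, which is exactly what the inspection pane is there to catch.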

Labeling Tips

Even though there are 8 panels, you don't need to fill out all 8. Each keypoint just needs at least 3 samples to estimate the covariance.

We recommend that you label the same keypoint (say keypoint i) on all the object renderings first, then go to the inspection panel each time before moving on, so that you can easily undo a mistake for keypoint i with the "u" key without losing any work. Otherwise, if you label each object rendering completely, you may have to undo a lot of labels that were not mistakes.

Also, if you want to place a keypoint in a void of the CAD model, like the top center of the bowl, you can use the multiple samples to your advantage and choose samples that will average to the desired result, since each label is required to land on the actual CAD model surface in the labeling tool.
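For instance, under the (hypothetical) assumption of a bowl of radius 0.06 m whose rim sits at height 0.04 m, evenly spaced samples on the rim average to the top center of the opening, even though no mesh surface exists at that point:

```python
import numpy as np

# Eight evenly spaced sample points around the bowl's rim circle.
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
rim = np.stack([0.06 * np.cos(angles),
                0.06 * np.sin(angles),
                np.full_like(angles, 0.04)], axis=1)

# Every sample lies on the mesh (the rim), but their mean is the
# desired keypoint in the void: the top center of the opening.
top_center = rim.mean(axis=0)   # approx [0, 0, 0.04]
```

Symmetric sample placement is the trick: the on-surface clicks cancel out in every direction except the one you care about.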
