SE(3)-eSCOPE
video | paper | website
Leveraging SE(3) Equivariance for Self-Supervised Category-Level Object Pose Estimation
Xiaolong Li, Yijia Weng, Li Yi, Leonidas Guibas, A. Lynn Abbott, Shuran Song, He Wang
NeurIPS 2021
SE(3)-eSCOPE is a self-supervised learning framework that estimates category-level 6D object pose from a single 3D point cloud, with no ground-truth pose annotations, no ground-truth CAD models, and no multi-view supervision during training. The key to our method is to disentangle shape and pose through an invariant shape reconstruction module and an equivariant pose estimation module, built on SE(3)-equivariant point cloud networks and a reconstruction loss.
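To make the invariance/equivariance property concrete, the toy numpy sketch below fakes both modules with a Procrustes oracle: for any rotation of the input, the "invariant" shape output stays fixed and the "equivariant" pose output tracks the applied rotation. This only illustrates the property the networks are trained to satisfy; it is not part of the repo.

import numpy as np

def random_rotation():
    # sample a random rotation in SO(3) via QR decomposition
    Q, _ = np.linalg.qr(np.random.randn(3, 3))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return Q

canonical = np.random.randn(128, 3)      # toy canonical (pose-free) shape
R = random_rotation()
observed = canonical @ R.T               # the network only ever sees this rotated copy

# Stand-in for the learned modules: recover the pose by Procrustes alignment
# against the canonical template, then undo it to get the canonical shape back.
U, _, Vt = np.linalg.svd(observed.T @ canonical)
R_pred = U @ Vt                          # "equivariant" pose output: equals R
shape_pred = observed @ R_pred           # "invariant" shape output: equals canonical

assert np.allclose(R_pred, R)
assert np.allclose(shape_pred, canonical)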
News
[2021-11] We release the training code for 5 categories.
Prerequisites
The code is built and tested with the following libraries:
- Python>=3.6
- PyTorch/1.7.1
- gcc>=6.1.0
- cmake
- cuda/11.0.1, or cuda/11.1 for newer GPUs
- cudnn
Recommended Installation
# 1. install python environments
conda create --name equi-pose python=3.6
source activate equi-pose
pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
# 2. compile extra CUDA libraries
bash build.sh
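After installation you can sanity-check the environment from a Python shell; the expected version string assumes the pip command above was used.

import torch
print(torch.__version__)           # expect something like 1.7.1+cu110
print(torch.cuda.is_available())   # should print True before training on GPU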
Data Preparation
You can find the ModelNet40 subset we use at [drive_link] and our rendered depth point cloud dataset at [drive_link]; download them and put them into your own 'data' folder. Check global_info.py for code and data paths (an illustrative path sketch follows this list):
- project_path should contain the equi-pose code folder;
- second_path is set to store logs, checkpoints, results, etc.;
- check configs/dataset/modelnet40_complete.yaml to set dataset_path;
- check configs/dataset/modelnet40_partial.yaml to set dataset_path.
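The snippet below is only an illustrative sketch of the paths described above; the exact contents of global_info.py and the dataset YAMLs may differ, and the directories are placeholders.

# hypothetical example of the paths to set (adjust to your machine)
project_path = '/home/user/workspace'            # parent folder holding the equi-pose code
second_path  = '/home/user/equi-pose-outputs'    # logs, checkpoints, and results go here
# in configs/dataset/modelnet40_complete.yaml and modelnet40_partial.yaml:
# dataset_path: /home/user/data/modelnet40       # where the downloaded data was extracted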
Training
You can run the following command to train the model from scratch:
python main.py exp_num=[experiment_id] training=[name_training] dataset=[name_dataset] category=[name_category]
For example, to train the model on the complete airplane category, you may run:
python main.py exp_num='1.0' training="complete_pcloud" dataset="modelnet40_complete" category='airplane' use_wandb=True
Testing Pretrained Models
Some of our pretrained checkpoints have been released; see [drive_link]. Put them in the 'second_path/models' folder. You can then run the following command to test performance:
python main.py exp_num=[experiment_id] training=[name_training] dataset=[name_dataset] category=[name_category] eval=True save=True
For example, to test the model on the complete or partial airplane category, you may run:
python main.py exp_num='0.813' training="complete_pcloud" dataset="modelnet40_complete" category='airplane' eval=True save=True
python main.py exp_num='0.913r' training="partial_pcloud" dataset="modelnet40_partial" category='airplane' eval=True save=True
Note: add use_fps_points=True to get slightly better results. For your own dataset, add pre_compute_delta=True and use example canonical shapes to compute the pose misalignment first.
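If you want to post-process saved predictions yourself, the standard geodesic rotation-error metric is easy to compute; the snippet below is self-contained and makes no assumption about the repo's output format.

import numpy as np

def rotation_error_deg(R_pred, R_gt):
    # geodesic distance between two rotation matrices, in degrees
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

theta = np.radians(10.0)                          # 10-degree rotation about z as a check
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
print(rotation_error_deg(np.eye(3), Rz))          # prints ~10.0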
Visualization
Check out the scripts demo.py or teaser.py for some hints.
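If you only need a quick, repo-independent look at a point cloud (e.g. a saved reconstruction), a minimal matplotlib viewer like the sketch below also works; the random points are placeholders for any (N, 3) array.

import numpy as np
import matplotlib.pyplot as plt

points = np.random.randn(1024, 3)                 # placeholder: load your own (N, 3) array here
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=2)
ax.set_box_aspect((1, 1, 1))                      # keep axes to scale (matplotlib >= 3.3)
plt.show()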
Citation
If you use this code for your research, please cite our paper:
@inproceedings{li2021leveraging,
title={Leveraging SE(3) Equivariance for Self-supervised Category-Level Object Pose Estimation from Point Clouds},
author={Li, Xiaolong and Weng, Yijia and Yi, Li and Guibas, Leonidas and Abbott, A Lynn and Song, Shuran and Wang, He},
booktitle={Thirty-Fifth Conference on Neural Information Processing Systems},
year={2021}
}
We thank Haiwei Chen for helpful discussions on equivariant neural networks.