template-pose
PyTorch implementation of "Templates for 3D Object Pose Estimation Revisited: Generalization to New Objects and Robustness to Occlusions" (accepted to CVPR 2022)
Van Nguyen Nguyen, Yinlin Hu, Yang Xiao, Mathieu Salzmann and Vincent Lepetit
Check out our paper and webpage for details!
If our project is helpful for your research, please consider citing:
@inproceedings{nguyen2022template,
    title={Templates for 3D Object Pose Estimation Revisited: Generalization to New Objects and Robustness to Occlusions},
    author={Nguyen, Van Nguyen and Hu, Yinlin and Xiao, Yang and Salzmann, Mathieu and Lepetit, Vincent},
    booktitle={Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
    year={2022}}
Table of Content
- Methodology
- Installation
- Datasets
- T-LESS
- LINEMOD and Occlusion-LINEMOD
- Acknowledgement
- Contact
Methodology

We introduce template-pose, which estimates the 3D pose of new objects (which can be very different from the training ones, e.g. objects from the LINEMOD dataset) given only their 3D models. Our method requires neither a training phase on these objects nor images depicting them.
Two settings are considered in this work:

| Dataset             | Predict object ID | In-plane rotation |
|---------------------|-------------------|-------------------|
| (Occlusion-)LINEMOD | Yes               | No                |
| T-LESS              | No                | Yes               |
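As an illustration of the retrieval idea (a minimal sketch only; the model, renderer and similarity measure below are placeholders, not the exact implementation of this repository): templates of the 3D model are rendered from many viewpoints, the query crop and the templates are embedded with a shared network, and the pose of the most similar template is returned.

```python
# Minimal, illustrative sketch of template retrieval (not the exact code of this repo).
import torch
import torch.nn.functional as F

@torch.no_grad()
def retrieve_template(model, query_img, template_imgs, template_poses):
    """Return the pose of the template most similar to the query crop."""
    query_feat = F.normalize(model(query_img.unsqueeze(0)), dim=1)  # (1, D)
    template_feats = F.normalize(model(template_imgs), dim=1)       # (N, D)
    similarity = (template_feats @ query_feat.T).squeeze(1)         # (N,) cosine similarities
    best = similarity.argmax().item()
    return template_poses[best], similarity[best].item()
```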
Installation

We recommend creating a new Anaconda environment to use template-pose. Use the following commands to set up a new environment:
conda env create -f environment.yml
conda activate template
Optional: Installation of BlenderProc is required to render synthetic images. It can be ignored if you use our provided templates. More details can be found in Datasets.
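Optional: a quick sanity check after activating the environment (a minimal sketch; it only assumes that the environment provides PyTorch):

```python
# Check that PyTorch is importable and whether a CUDA device is visible.
import torch
print(torch.__version__, "| CUDA available:", torch.cuda.is_available())
```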
Datasets

Before downloading the datasets, you may change this line to define the $ROOT folder (to store data and results).
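For illustration only (the actual file and variable name are defined in the repository; this snippet is a hypothetical example), that line is simply an assignment of the root path:

```python
# Hypothetical example: root folder used to store datasets, weights and results.
ROOT = "/path/to/template-pose-data"
```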
There are two options:
- To download our pre-processed datasets (15GB) + SUN397 dataset (37GB)
./data/download_preprocessed_data.sh
Optional: You can download with the following gdrive links and unzip them manually. We recommend keeping the $DATA folder structure as detailed in ./data/README to keep the pipeline simple:
- LINEMOD and Occlusion-LINEMOD (3GB), then run:
python -m data.crop_image_linemod
- T-LESS (11GB)
- Templates (both T-LESS and LINEMOD) (1.7GB)
- Dataframes including the query-template pairs used for training (11MB)
- SUN397, randomized backgrounds for training on T-LESS (37GB)
- To download the original datasets and process them from scratch (process GT poses, render templates, compute nearest neighbors; a small sketch of this last step follows the command below). All the main steps are detailed in ./data/README.
./data/download_and_process_from_scratch.sh
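One of these processing steps assigns each query image its nearest template viewpoint. A minimal sketch of such a pose-based nearest-neighbor search is given below (illustrative only; the distance measure and the actual scripts in ./data may differ):

```python
# Illustrative sketch: nearest template selected by rotation geodesic distance.
import numpy as np

def geodesic_distance(R1, R2):
    """Angle in radians between two 3x3 rotation matrices."""
    cos = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.arccos(np.clip(cos, -1.0, 1.0))

def nearest_template(query_R, template_Rs):
    """Index of the template whose rotation is closest to the query rotation."""
    distances = [geodesic_distance(query_R, R) for R in template_Rs]
    return int(np.argmin(distances))
```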
For any training with a ResNet50 backbone, we initialise it with the pretrained features of MoCo v2, which can be downloaded with the following command:
python -m lib.download_weight --model_name MoCov2
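For reference, MoCo v2 checkpoints store the backbone weights under a module.encoder_q. prefix, so initialising a plain torchvision ResNet50 from them usually looks like the sketch below (the checkpoint path is an assumption; lib.download_weight and the training scripts handle this internally):

```python
# Illustrative sketch: initialise a torchvision ResNet50 with MoCo v2 features.
import torch
import torchvision.models as models

resnet = models.resnet50()
ckpt = torch.load("MoCov2.pth", map_location="cpu")  # path is an assumption
state_dict = ckpt.get("state_dict", ckpt)

# Keep only the query-encoder backbone, drop the MLP projection head.
backbone = {k.replace("module.encoder_q.", ""): v
            for k, v in state_dict.items()
            if k.startswith("module.encoder_q.") and "fc" not in k}
missing, unexpected = resnet.load_state_dict(backbone, strict=False)
print("Missing keys (expected: the fc layer):", missing)
```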
T-LESS

1. To launch a training on T-LESS:
python train_tless.py --config_path ./config_run/TLESS.json
2. To reproduce the results on T-LESS:
To download the pretrained weights (by default, they are saved at $ROOT/pretrained/TLESS.pth):
python -m lib.download_weight --model_name TLESS
Optional: You can download manually with this link
To evaluate the model with the pretrained weights:
python test_tless.py --config_path ./config_run/TLESS.json --checkpoint $ROOT/pretrained/TLESS.pth
LINEMOD and Occlusion-LINEMOD

1. To launch a training on LINEMOD:
python train_linemod.py --config_path config_run/LM_$backbone_$split_name.json
For example, with the "base" backbone and split #1:
python train_linemod.py --config_path config_run/LM_baseNetwork_split1.json
2. To reproduce the results on LINEMOD:
To download the pretrained weights (by default, they are saved at $ROOT/pretrained):
python -m lib.download_weight --model_name LM_$backbone_$split_name
Optional: You can download manually with this link
To evaluate the model with a checkpoint_path:
python test_linemod.py --config_path config_run/LM_$backbone_$split_name.json --checkpoint checkpoint_path
For example, with the "base" backbone and split #1:
python -m lib.download_weight --model_name LM_baseNetwork_split1
python test_linemod.py --config_path config_run/LM_baseNetwork_split1.json --checkpoint $ROOT/pretrained/LM_baseNetwork_split1.pth
Acknowledgement
The code is adapted from PoseContrast, DTI-Clustering, CosyPose and BOP Toolkit. Many thanks to them!
The authors thank Martin Sundermeyer, Paul Wohlhart and Shreyas Hampali for their fast replies and helpful feedback!
Contact
If you have any questions, feel free to create an issue or contact the first author at [email protected].