GDR-Net: Geometry-Guided Direct Regression Network for Monocular 6D Object Pose Estimation. (CVPR 2021)

GDR-Net

This repo provides the PyTorch implementation of the work:

Gu Wang, Fabian Manhardt, Federico Tombari, Xiangyang Ji. GDR-Net: Geometry-Guided Direct Regression Network for Monocular 6D Object Pose Estimation. In CVPR 2021. [Paper][ArXiv][Video][bibtex]

Overview

Requirements

  • Ubuntu 16.04/18.04, CUDA 10.1/10.2, Python >= 3.6, PyTorch >= 1.6, torchvision (a quick sanity check of the environment is sketched after this list)
  • Install detectron2 from source
  • sh scripts/install_deps.sh
  • Compile the C++ extension for farthest point sampling (FPS):
    sh core/csrc/compile.sh
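
A quick way to confirm the environment before training (a minimal sanity-check sketch, not part of the repo; it only assumes PyTorch, torchvision, and detectron2 are installed):

import torch
import torchvision
import detectron2

# Print the library versions and the CUDA setup PyTorch was built against.
print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
print("detectron2:", detectron2.__version__)
print("CUDA available:", torch.cuda.is_available(), "| built for CUDA:", torch.version.cuda)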
    

Datasets

Download the 6D pose datasets (LM, LM-O, YCB-V) from the BOP website and VOC 2012 for background images. Please also download the image_sets and test_bboxes from here (BaiduNetDisk, OneDrive, password: qjfk).

The structure of the datasets folder should look like this:

# recommend using soft links (ln -sf)
datasets/
├── BOP_DATASETS/
│   ├── lm
│   ├── lmo
│   └── ycbv
├── lm_imgn  # the OpenGL rendered images for LM, 1k/obj
├── lm_renders_blender  # the Blender rendered images for LM, 10k/obj (pvnet-rendering)
└── VOCdevkit
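
If the layout is in doubt, a short check like the one below can confirm that the expected folders exist (a minimal sketch; the paths simply mirror the tree above):

import os

# Expected top-level layout, mirroring the tree above.
expected = [
    "datasets/BOP_DATASETS/lm",
    "datasets/BOP_DATASETS/lmo",
    "datasets/BOP_DATASETS/ycbv",
    "datasets/lm_imgn",
    "datasets/lm_renders_blender",
    "datasets/VOCdevkit",
]

for path in expected:
    print(("ok      " if os.path.isdir(path) else "MISSING ") + path)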

Training GDR-Net

./core/gdrn_modeling/train_gdrn.sh <config_path> <gpu_ids> (other args)

Example:

./core/gdrn_modeling/train_gdrn.sh configs/gdrn/lm/a6_cPnP_lm13.py 0  # multiple gpus: 0,1,2,3
# add --resume if you want to resume from an interrupted experiment.

Our trained GDR-Net models can be found here (BaiduNetDisk, OneDrive, password: kedv).
(Note that the models for the BOP setup in the supplement were trained with a refactored version of this repo (not compatible with this one); they are slightly better than the models provided here.)

Evaluation

./core/gdrn_modeling/test_gdrn.sh <config_path> <gpu_ids> <ckpt_path> (other args)

Example:

./core/gdrn_modeling/test_gdrn.sh configs/gdrn/lmo/a6_cPnP_AugAAETrunc_BG0.5_lmo_real_pbr0.1_40e.py 0 output/gdrn/lmo/a6_cPnP_AugAAETrunc_BG0.5_lmo_real_pbr0.1_40e/gdrn_lmo_real_pbr.pth

Citation

If you find this useful in your research, please consider citing:

@InProceedings{Wang_2021_GDRN,
    title     = {{GDR-Net}: Geometry-Guided Direct Regression Network for Monocular 6D Object Pose Estimation},
    author    = {Wang, Gu and Manhardt, Fabian and Tombari, Federico and Ji, Xiangyang},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {16611-16621}
}
Comments
  • Question about the CUDA version

    Hello, can I train with CUDA 11 or later? I only have A6000 and A100 GPUs, which are not compatible with CUDA versions below 11. When I train with CUDA 11.1 and torch 1.8 or 1.9, I always get double free or corruption (!prev) and RuntimeError: DataLoader worker (pid(s) xxxxx) exited unexpectedly.

    help wanted 
    opened by fn6767 9
  • Pipeline to print inferred 3D bounding boxes on images

    Hello! I find this work really interesting. After successfully testing inference (LMO and YCB), I would like to plot the inference results as 3D bounding boxes on RGB images. While inspecting the code I came across:

    https://github.com/THU-DA-6D-Pose-Group/GDR-Net/blob/5fb30c3dc53f46bac24a8a83a373eac7a8038556/core/gdrn_modeling/gdrn_evaluator.py#L516

    This is the function used for inference; it reports the results in terms of the different metrics but does not produce the graphical output I am looking for.

    In the same file I noticed the function: https://github.com/THU-DA-6D-Pose-Group/GDR-Net/blob/5fb30c3dc53f46bac24a8a83a373eac7a8038556/core/gdrn_modeling/gdrn_evaluator.py#L634

    which seems structurally similar but takes slightly different inputs. In particular, I would like to ask whether the input dataloader for gdrn_inference_on_dataset can be built in the same way as for save_result_of_dataset, as in

    https://github.com/THU-DA-6D-Pose-Group/GDR-Net/blob/5fb30c3dc53f46bac24a8a83a373eac7a8038556/core/gdrn_modeling/engine.py#L135-L137

    since from preliminary debugging it seems it is not possible to access the "image" field of the input sample in

    https://github.com/THU-DA-6D-Pose-Group/GDR-Net/blob/5fb30c3dc53f46bac24a8a83a373eac7a8038556/core/gdrn_modeling/gdrn_evaluator.py#L678

    Possibly related issue: https://github.com/THU-DA-6D-Pose-Group/GDR-Net/issues/56
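
    For reference, a minimal sketch (not from this repo) of how an inferred pose could be drawn as a 3D bounding box on the image, assuming the estimated rotation R (3x3), translation t (3,), camera matrix K (3x3), and the object's half-extents are available:

    import cv2
    import numpy as np

    def draw_3d_bbox(img, half_extents, R, t, K, color=(0, 255, 0)):
        # 8 corners of the object's 3D bounding box from its half-extents (dx, dy, dz).
        dx, dy, dz = half_extents
        corners = np.array([[sx * dx, sy * dy, sz * dz]
                            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)], dtype=np.float64)
        # Project the corners into the image with the estimated pose.
        rvec, _ = cv2.Rodrigues(R.astype(np.float64))
        pts, _ = cv2.projectPoints(corners, rvec, t.astype(np.float64).reshape(3, 1), K.astype(np.float64), None)
        pts = [tuple(int(v) for v in p) for p in pts.reshape(-1, 2)]
        # Connect corners that differ in exactly one axis (the 12 box edges).
        edges = [(0, 1), (2, 3), (4, 5), (6, 7), (0, 2), (1, 3), (4, 6), (5, 7),
                 (0, 4), (1, 5), (2, 6), (3, 7)]
        for i, j in edges:
            cv2.line(img, pts[i], pts[j], color, 2)
        return img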

    opened by AlbertoRemus 8
  • Some questions about the paper

    Hello, regarding the conversion from M_XYZ to M_2D-3D, the paper says: "$M_{2D-3D}$ can then be derived by stacking $M_{XYZ}$ onto the corresponding 2D pixel coordinates". But I am still not clear why the $3\times64\times64$ $M_{XYZ}$ becomes a $2\times64\times64$ $M_{2D-3D}$. And why is this conversion needed at all? Would it not work to simply normalize the predicted XYZ and concatenate it with $M_{SRA}$?
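
    For illustration, a minimal sketch of that stacking as I read it, assuming a 64x64 ROI and that the 2D pixel coordinates contribute two extra channels (the channel counts are my reading of the paper, not taken from the repo):

    import torch

    # Hypothetical predicted XYZ map for a 64x64 ROI (3 channels).
    m_xyz = torch.rand(1, 3, 64, 64)

    # Normalized 2D pixel coordinates of the ROI (2 channels: x and y).
    xs = torch.linspace(0, 1, 64).view(1, 1, 1, 64).expand(1, 1, 64, 64)
    ys = torch.linspace(0, 1, 64).view(1, 1, 64, 1).expand(1, 1, 64, 64)

    # Stacking M_XYZ onto the 2D coordinates yields the dense 2D-3D correspondence map.
    m_2d_3d = torch.cat([xs, ys, m_xyz], dim=1)
    print(m_2d_3d.shape)  # torch.Size([1, 5, 64, 64])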

    opened by Mr2er0 8
  • Question about training on custom data

    Hello Dr. Wang! Your work has helped me a lot, and thank you very much for open-sourcing this project. I now want to train your model on my own data. In earlier issues you only mentioned how to process and organize custom data, but not which parts of the code need to be modified to use it. Since training on the lm dataset required generating some files first, I suspect there may be quite a few places to change when applying the model to custom data. Could you please explain this in detail? Looking forward to your reply, and thanks again!

    opened by micki-37 7
  • Questions about LM-O evaluation results

    Hi! Thanks for your great work. I ran the following command to get the LM-O evaluation results ('GDR-Net-DATA' is the folder where I put the trained models): ./core/gdrn_modeling/test_gdrn.sh configs/gdrn/lmo/a6_cPnP_AugAAETrunc_BG0.5_lmo_real_pbr0.1_40e.py 1 GDR-Net-DATA/gdrn/lmo/a6_cPnP_AugAAETrunc_BG0.5_lmo_real_pbr0.1_40e/gdrn_lmo_real_pbr.pth (screenshot of the results attached). Is 'ad_10' the 'Average Recall (%) of ADD(-S)' reported in Table 2 of the paper?

    opened by Liuchongpei 7
  • Zero recall value while evaluating on LMO dataset

    Hello @wangg12

    I tried to evaluate the GDR-Net model on the LMO dataset using the pretrained models you shared on OneDrive. I used the following command to run the evaluation:

    python core/gdrn_modeling/main_gdrn.py --config-file configs/gdrn/lmo/a6_cPnP_AugAAETrunc_BG0.5_lmo_real_pbr0.1_40e.py \
        --num-gpus 1 \
        --eval-only \
        --opts MODEL.WEIGHTS=output/gdrn/lmo/a6_cPnP_AugAAETrunc_BG0.5_lmo_real_pbr0.1_40e/gdrn_lmo_real_pbr.pth
    

    However, it is showing zero recall values. Please see the screenshot below. Could you please help?

    Thank you, Supriya

    opened by supriya-gdptl 6
  • evaluation failed for lmoSO

    Hi,

    When I train GDR-Net on the ape object of the LMO dataset with

    ./core/gdrn_modeling/train_gdrn.sh configs/gdrn/lmoSO/a6_cPnP_AugAAETrunc_BG0.5_lmo_real_pbr0.1_80e_SO/a6_cPnP_AugAAETrunc_BG0.5_lmo_real_pbr0.1_80e_ape.py 1
    

    I get the unexpected output at the end of log.txt:

    core.gdrn_modeling.test_utils WARNING@70: evaluation failed.
    core.gdrn_modeling.test_utils INFO@274: =====================================================================
    core.gdrn_modeling.test_utils WARNING@316: output/gdrn/lmoSO/a6_cPnP_AugAAETrunc_lmo_real_pbr0.1_80e_SO/ape/inference_model_final/lmo_test/a6-cPnP-AugAAETrunc-BG0.5-lmo-real-pbr0.1-80e-ape-test-iter0_lmo-test-bb8/error:ad_ntop:1 does not exist.
    

    Could you suggest how to fix it? Thanks!

    opened by RuyiLian 6
  • OneDrive link not working

    Hi, unfortunately the OneDrive link for the pretrained models gives the following error on different browsers. Do you have any insight into this?

    Thanks in advance,

    Alberto

    (screenshot attached)

    opened by AlbertoRemus 5
  • Problem generating xyz_crop

    Hello Dr. Wang, I ran into the following problem while generating the xyz_crop files with tools/lm/lm_pbr_1_gen_xyz_crop.py.

    Traceback (most recent call last):
      File "tools/lm/lmo_pbr_1_gen_xyz_crop.py", line 228, in <module>
        xyz_gen.main()
      File "tools/lm/lmo_pbr_1_gen_xyz_crop.py", line 137, in main
        bgr_gl, depth_gl = self.get_renderer().render(render_obj_id, IM_W, IM_H, K, R, t, near, far)
      File "tools/lm/lmo_pbr_1_gen_xyz_crop.py", line 98, in get_renderer
        self.renderer = Renderer(
      File "/data/hsm/gdr/tools/lm/../../lib/meshrenderer/meshrenderer_phong.py", line 26, in __init__
        self._fbo = gu.Framebuffer(
      File "/data/hsm/env/gdrn2/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 415, in __call__
        return self( *args, **named )
    ctypes.ArgumentError: argument 1: <class 'TypeError'>: wrong type

    I think the error may be caused by a type mismatch of the k passed in at https://github.com/THU-DA-6D-Pose-Group/GDR-Net/blob/main/lib/meshrenderer/gl_utils/fbo.py#L19. The attached screenshot shows the argument types that glNamedFramebufferTexture expects, as displayed while debugging. I did not find a similar problem in the existing issues; does anyone have any suggestions for solving this?

    need-more-info 
    opened by hellohaley 5
  • CUDA out of memory

    We run the training with the PBR-rendered data on eight GPUs in parallel (NVIDIA 2080 Ti with 12 GB of memory each); it barely starts training with batch size 8 (the original is 24). But when we resume training, CUDA runs out of memory.

    We'd like to know the author's training configuration...

    opened by GabrielleTse 5
  • Loss_region unable to converge

    The other losses decline significantly, but Loss_region barely drops. My training uses the config configs/gdrn/lm/a6_cPnP_lm13.py. Choosing 4, 16, or 64 regions does not bring any improvement.

    opened by lu-ming-lei 5
  • Generating test_bboxes/faster_R50_FPN_AugCosyAAE_HalfAnchor_lmo_pbr_lmo_fuse_real_all_8e_test_480x640.json file

    Hello @wangg12,

    Sorry to bother you again.

    Could you please tell me how to generate faster_R50_FPN_AugCosyAAE_HalfAnchor_lmo_pbr_lmo_fuse_real_all_8e_test_480x640.json in the lmo/test/test_bboxes folder?

    Which code did you run to obtain this file?

    Thank you, Supriya

    opened by supriya-gdptl 1