PyTorch implementation of GPV-Pose

Overview

GPV-Pose

PyTorch implementation of GPV-Pose: Category-level Object Pose Estimation via Geometry-guided Point-wise Voting (link).

[pipeline figure]

UPDATE

A new version of the code that integrates shape prior information will be added to the shape-prior-integrated branch of this repo soon.

Required environment

  • Ubuntu 18.04
  • Python 3.8
  • PyTorch 1.10.1
  • CUDA 11.3

Installing

  • Install the main requirements in 'requirement.txt'.
  • Install Detectron2.

Data Preparation

To generate your own dataset, use the data preprocessing code provided in this git. Download the detection results from this git.

Trained model

Download the trained model from this link.

Training

Please note that some details differ from the original paper to make training more efficient.

Specify the dataset directory and run the following command.

python -m engine.train --data_dir YOUR_DATA_DIR --model_save SAVE_DIR

Detailed configurations are in 'config/config.py'.
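
The flag definitions quoted in the issue threads below suggest that 'config/config.py' is built on absl flags, so loss weights and paths can be overridden directly on the command line. Below is a minimal sketch of that pattern, assuming config.py follows it; the flag names are taken from the issues further down, and the exact contents of config.py may differ.

    from absl import app, flags

    FLAGS = flags.FLAGS
    flags.DEFINE_string('data_dir', '', 'dataset root directory')
    flags.DEFINE_string('model_save', 'output/model', 'directory for saving checkpoints')
    flags.DEFINE_float('rot_1_w', 8.0, 'rotation loss weight (name quoted in the issues below)')
    flags.DEFINE_float('prop_sym_w', 1.0, 'symmetry point-matching loss weight (quoted below)')

    def main(_argv):
        # With absl, every flag can be overridden on the command line, e.g.
        #   python -m engine.train --data_dir YOUR_DATA_DIR --model_save SAVE_DIR --rot_1_w 4.0
        print(FLAGS.data_dir, FLAGS.model_save, FLAGS.rot_1_w, FLAGS.prop_sym_w)

    if __name__ == '__main__':
        app.run(main)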

Evaluation

python -m evaluation.evaluate --data_dir YOUR_DATA_DIR --detection_dir DETECTION_DIR --resume 1 --resume_model MODEL_PATH --model_save SAVE_DIR

Acknowledgment

Our implementation leverages the code from 3dgcn, FS-Net, DualPoseNet, SPD.

Comments
  • Question about the training on CAMERA dataset

    Thanks for the code sharing and the impressive work.

    I am trying to train the code on the REAL275 and CAMERA datasets by myself. I trained two networks by setting the dataset to Real and CAMERA in config.py. The result trained on REAL275 is good. However, the one trained on the CAMERA dataset does not look as good. The following are the results on CAMERA:

    average mAP:
    3D IoU at 25: 95.0
    3D IoU at 50: 92.8
    3D IoU at 75: 85.1
    5 degree, 2cm: 61.9
    5 degree, 5cm: 73.0
    10 degree, 2cm: 71.1
    10 degree, 5cm: 85.8
    10 degree, 10cm: 87.3
    

    I am wondering whether the hyperparameters should be adjusted for training on the CAMERA dataset, or whether there are any other clues about this. Thank you so much!

    opened by LSerene 2
  • About obtaining pose from point clouds

    Hi Yan Di, I see the input of your network is P = points - points.mean(dim=1, keepdim=True), where points is obtained from the back-projection of the depth map.

    To my limited knowledge, two pieces of information are needed to obtain the pose (considering only R): the points P = R @ P_ori after the R transformation, and the original points P_ori.

    The input of the network only contains P = R @ P_ori, so how does the network recover the rotation R without knowing the original points P_ori?
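
    A small, self-contained PyTorch sketch of the normalization quoted above; the helper name and the toy point count are illustrative, not taken from the repo:

    import torch

    def center_points(points: torch.Tensor) -> torch.Tensor:
        """points: (B, N, 3) cloud back-projected from the depth map.
        Subtracting the per-sample mean removes the centroid (translation);
        the rotation and shape of the cloud are left untouched."""
        return points - points.mean(dim=1, keepdim=True)

    # toy usage with a random cloud standing in for the back-projected depth points
    points = torch.rand(2, 1024, 3)   # (batch, num_points, xyz)
    P = center_points(points)         # zero-mean per sample, fed to the network as P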

    opened by vtasStu 2
  • Please give me some guide for "Paper and Code loss name matching"

    Hello. I feel quite confused while trying to read your code. My current guess at matching the paper's losses to the code is:

    3.1 L_Basic_rc == cal_loss_R_con in fsnet_loss.py (fairly sure, from the k1 value)
    3.2 L_Basic_Sym == prop_sym_matching_loss in prop_loss.py
    3.3 L_Basic_pc == cal_recon_loss_point in recon_loss.py (fairly sure, from the k2 value)
    3.4 L_PC_(R,t) == cal_geo_loss_point in geometry_loss.py; L_PC_(s) == cal_geo_loss_face in geometry_loss.py
    3.5 L_BB_(R,t,s) == ??

    But, maybe I'm wrong.

    Also, can you tell me how you obtained the specific values (13.7, 303.5)? Please give me some advice!

    Thank you!

    opened by dedoogong 2
  • How long is the training?

    I'm very interested in your work. I found that it takes about an hour to train one epoch on a single 3090, so fully training GPV_Pose takes a very long time. How many GPUs and how much time did you use to train GPV_Pose?

    opened by swords123 2
  • your requirements may contain some @, I can not know its current version

    for example

    blessings @ file:///tmp/build/80754af9/blessings_1614076441300/work
    

    Entries like this appear in your requirements file, so the actual package versions cannot be determined.

    This may solve the problem:

    pip list --format=freeze > requirements.txt
    
    opened by yuheyuan 1
  • why different hyperparameters are used?

    Hello! Thanks for your kind sharing! I noticed differences between your implementation and the paper regarding the hyperparameters (lambda 1 through 8, and so on):

    flags.DEFINE_float('rot_1_w', 8.0, '')
    flags.DEFINE_float('rot_2_w', 8.0, '')
    flags.DEFINE_float('rot_regular', 4.0, '')
    flags.DEFINE_float('tran_w', 8.0, '')
    flags.DEFINE_float('size_w', 8.0, '')
    flags.DEFINE_float('recon_w', 8.0, '')
    flags.DEFINE_float('r_con_w', 1.0, '')

    flags.DEFINE_float('recon_n_w', 3.0, 'normal estimation loss')
    flags.DEFINE_float('recon_d_w', 3.0, 'dis estimation loss')
    flags.DEFINE_float('recon_v_w', 1.0, 'voting loss weight')
    flags.DEFINE_float('recon_s_w', 0.3, 'point sampling loss weight, important')
    flags.DEFINE_float('recon_f_w', 1.0, 'confidence loss')
    flags.DEFINE_float('recon_bb_r_w', 1.0, 'bbox r loss')
    flags.DEFINE_float('recon_bb_t_w', 1.0, 'bbox t loss')
    flags.DEFINE_float('recon_bb_s_w', 1.0, 'bbox s loss')
    flags.DEFINE_float('recon_bb_self_w', 1.0, 'bb self')

    flags.DEFINE_float('mask_w', 1.0, 'obj_mask_loss')

    flags.DEFINE_float('geo_p_w', 1.0, 'geo point mathcing loss')
    flags.DEFINE_float('geo_s_w', 10.0, 'geo symmetry loss')
    flags.DEFINE_float('geo_f_w', 0.1, 'geo face loss, face must be consistent with the point cloud')

    flags.DEFINE_float('prop_pm_w', 2.0, '')
    flags.DEFINE_float('prop_sym_w', 1.0, 'importtannt for symmetric objects, can do point aug along reflection plane')
    flags.DEFINE_float('prop_r_reg_w', 1.0, 'rot confidence must be sum to 1')

    These are different from the paper (1/8.0, 1/8.0, 1/8.0, 1.0, ..., 8.0, 1.0, 1.0), and it also seems there are some unseen, newly added ones like recon_n_w, recon_d_w, recon_s_w, and prop_pm_w.

    Could you please give me some advice about it?

    Plus, I think I should add a 'Geo_face' loss term for 3.5 Bounding Box - Pose Geometric Consistency. Right?

    Thank you very much!

    opened by dedoogong 1
  • where/how can I find/generate "mug_handle.pkl"?

    Hello! Thank you very much for sharing. I have already worked through the SPD, FS-Net, and DualPoseNet repos, but I did not find "mug_handle.pkl", which is required for training your model. Please let me know where to get it!

    Thanks!

    opened by dedoogong 1
  • weird evaluation results

    Hello, I tried to run the evaluation and obtained the following:

    average mAP (per-category values in the order bottle / bowl / camera / can / laptop / mug):
    3D IoU at 25:    48.1  (40.1 / 83.0 / 52.1 / 49.3 / 0.0 / 64.3)
    3D IoU at 50:     0.0  (0.0 for every category)
    3D IoU at 75:     0.0  (0.0 for every category)
    5 degree, 2cm:    0.0  (0.0 for every category)
    5 degree, 5cm:    0.0  (0.0 for every category)
    10 degree, 2cm:   0.0  (0.0 for every category)
    10 degree, 5cm:   0.0  (0.0 for every category)
    10 degree, 10cm:  0.0  (0.0 for every category)
    

    I think something is wrong here, since most of the values are zero. Any suggestions on how to correct it? And may I know what the expected evaluation results are, please?

    opened by rpc-group 0
  • Question regarding evaluation

    So for evaluation a different dataset is used compared to training; how are the values for the dict obtained?

    eval:

    network(rgb=data['roi_img'].to(device), depth=data['roi_depth'].to(device),
                              depth_normalize=data['depth_normalize'].to(device),
                              obj_id=data['cat_id_0base'].to(device), 
                              camK=data['cam_K'].to(device),
                              gt_mask=data['roi_mask'].to(device),
                              gt_R=None, gt_t=None, gt_s=None, mean_shape=mean_shape,
                              gt_2D=data['roi_coord_2d'].to(device), sym=sym,
                              def_mask=data['roi_mask'].to(device))
    

    train:

    network(rgb=data['roi_img'].to(device), depth=data['roi_depth'].to(device),
                                  depth_normalize=data['depth_normalize'].to(device),
                                  obj_id=data['cat_id'].to(device), 
                                  camK=data['cam_K'].to(device), gt_mask=data['roi_mask'].to(device),
                                  gt_R=data['rotation'].to(device), gt_t=data['translation'].to(device),
                                  gt_s=data['fsnet_scale'].to(device), mean_shape=data['mean_shape'].to(device),
                                  gt_2D=data['roi_coord_2d'].to(device), sym=data['sym_info'].to(device),
                                  aug_bb=data['aug_bb'].to(device), aug_rt_t=data['aug_rt_t'].to(device), aug_rt_r=data['aug_rt_R'].to(device),
                                  def_mask=data['roi_mask_deform'].to(device),
                                  model_point=data['model_point'].to(device), nocs_scale=data['nocs_scale'].to(device), do_loss=True)
    

    RGB             --> same
    depth           --> same
    depth_normalize --> same
    obj_id          --> eval: data['cat_id_0base'], train: data['cat_id']
    camK            --> same
    gt_mask         --> same
    gt_R            --> not used in eval
    gt_t            --> not used in eval
    gt_s            --> not used in eval
    mean_shape      --> not used in eval
    gt_2D           --> same
    sym             --> same
    def_mask        --> eval: data['roi_mask'], train: data['roi_mask_deform']

    required for eval: pred_RT --> obtained from line 84, generate_RT([p_green_R_vec, p_red_R_vec], [f_green_R, f_red_R], p_T, mode='vec', sym=sym)
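
    For illustration only, a minimal sketch of what assembling such a predicted pose into a homogeneous transform looks like; this just shows the shape of pred_RT and is not the repo's generate_RT implementation:

    import torch

    def assemble_RT(R: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        """R: (3, 3) predicted rotation, t: (3,) predicted translation -> (4, 4) pose."""
        RT = torch.eye(4, dtype=R.dtype, device=R.device)
        RT[:3, :3] = R
        RT[:3, 3] = t
        return RT

    # toy usage: identity rotation, half a metre of translation along z
    pred_RT = assemble_RT(torch.eye(3), torch.tensor([0.0, 0.0, 0.5]))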

    Information not present in GPV-Pose: how to obtain gt_handle_visibility. In mentian/object-deformnet (https://github.com/mentian/object-deformnet/search?q=gt_handle_visibility) it is read as gt_handle_visibility = nocs['gt_handle_visibility'].

    So my question is: why is the category id definition different? And for a custom dataset, how can the value of gt_handle_visibility be obtained?

    opened by HannahHaensen 3
  • Questions about the code

    Hey :) I am sorry, I have another question.

    In recon_loss.py:

    res_vote = res_vote / 6 / bs

    All the divisions by 6: is this because of the 6 distance parameters, or is it related to the number of categories?
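
    For what it's worth, a tiny sketch of the arithmetic under the reading suggested above, i.e. assuming the 6 refers to the six face-distance parameters (this is an assumption, not a statement about what res_vote actually accumulates in the repo):

    import torch

    def average_vote_residual(pred_face_dis: torch.Tensor, gt_face_dis: torch.Tensor) -> torch.Tensor:
        """Hypothetical example: pred/gt distances to the 6 bounding-box faces, shape (bs, 6).
        Summing the residual, then dividing by 6 and by bs, gives a per-face, per-sample average."""
        bs = pred_face_dis.shape[0]
        res_vote = torch.abs(pred_face_dis - gt_face_dis).sum()
        return res_vote / 6 / bs

    avg = average_vote_residual(torch.rand(4, 6), torch.rand(4, 6))  # toy usage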

    and another question https://github.com/lolrudy/GPV_Pose/blob/5ac3307ac3af5892f09812a04621fdae415deec8/losses/recon_loss.py#L234

    Why is this line not calculated for laptop?

    opened by shangbuhuan13 4
  • No script to visualize results

    Hi there,

    This is not really an issue. I'm just wondering whether there should be a script that projects the estimated pose onto an RGB image to generate something like Figure 5 in the paper.

    I found draw_detections in eval_utils.py and vis_utils.py, but it does not seem to be used anywhere. Would it be possible to add such a script?

    Thanks in advance!

    opened by kannwism 0
  • RuntimeError: Function 'DotBackward0' returned nan values in its 0th output.

    Hi! Thank you for releasing the code. When I run the training code, the loss becomes NaN after several epochs. I have tried three times and encountered the same problem, without modifying any parameters. Can you give me some advice?

    [Screenshot from 2022-06-27 12-00-23]

    opened by Bingo-1996 9