KAPAO is an efficient multi-person human pose estimation model that detects keypoints and poses as objects and fuses the detections to predict human poses.

Overview

KAPAO (Keypoints and Poses as Objects)

KAPAO is an efficient single-stage multi-person human pose estimation model that models keypoints and poses as objects within a dense anchor-based detection framework. When not using test-time augmentation (TTA), KAPAO is much faster and more accurate than previous single-stage methods like DEKR and HigherHRNet:

[Figure: accuracy vs. inference speed comparison with DEKR and HigherHRNet on COCO]

This repository contains the official PyTorch implementation for the paper:
Rethinking Keypoint Representations: Modeling Keypoints and Poses as Objects for Multi-Person Human Pose Estimation.

Our code was forked from ultralytics/yolov5 at commit 5487451.
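
The fusion step mentioned above pairs each detected pose object (a person box carrying a full set of keypoints) with nearby keypoint object detections, which tend to localize individual keypoints more precisely. The snippet below is a minimal sketch of one plausible reading of that idea; the array layouts, names, and distance threshold are hypothetical illustrations, not the repository's actual API.

import numpy as np

def fuse_keypoints(pose_kps, kp_objects, dist_thresh=50.0):
    # pose_kps: (num_people, num_kps, 3) array of (x, y, conf) from pose objects.
    # kp_objects: iterable of (kp_type, x, y, conf) from keypoint object detections.
    fused = pose_kps.copy()
    for kp_type, x, y, conf in kp_objects:
        # Distance from this keypoint object to the same keypoint of every pose.
        d = np.linalg.norm(fused[:, kp_type, :2] - np.array([x, y]), axis=1)
        i = int(np.argmin(d))
        # Refine the pose keypoint when a close, more confident keypoint object exists.
        if d[i] < dist_thresh and conf > fused[i, kp_type, 2]:
            fused[i, kp_type] = (x, y, conf)
    return fused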

Setup

  1. If you haven't already, install Anaconda or Miniconda.
  2. Create a new conda environment with Python 3.6: $ conda create -n kapao python=3.6
  3. Activate the environment: $ conda activate kapao
  4. Clone this repo: $ git clone https://github.com/wmcnally/kapao.git
  5. Install the dependencies: $ cd kapao && pip install -r requirements.txt
  6. Download the trained models: $ sh data/scripts/download_models.sh

Inference Demos

Note: FPS calculations include all processing: inference, plotting/tracking, image resizing, etc. See the demo script arguments for inference options.
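
For reference, end-to-end FPS can be approximated with a simple wall-clock loop like the one below (a hypothetical sketch; the demo scripts implement their own timing):

import time

def measure_fps(frames, process_frame):
    # process_frame should perform the full per-frame pipeline:
    # resizing, inference, plotting/tracking, etc.
    t0 = time.time()
    for frame in frames:
        process_frame(frame)
    return len(frames) / (time.time() - t0)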

Flash Mob Demo

This demo runs inference on a 720p dance video (native frame rate of 25 FPS).

[GIF: flash mob demo inference results]

To display the inference results in real-time:
$ python demos/flash_mob.py --weights kapao_s_coco.pt --display --fps

To create the GIF above:
$ python demos/flash_mob.py --weights kapao_s_coco.pt --start 188 --end 196 --gif --fps

Squash Demo

This demo runs inference on a 1080p slow-motion squash video (native frame rate of 25 FPS). It uses a simple player tracking algorithm based on frame-to-frame pose differences; a sketch of the idea follows the commands below.

[GIF: squash demo inference results with player tracking]

To display the inference results in real-time:
$ python demos/squash.py --weights kapao_s_coco.pt --display --fps

To create the GIF above:
$ python demos/squash.py --weights kapao_s_coco.pt --start 42 --end 50 --gif --fps
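
The tracking heuristic is not detailed here beyond "frame-to-frame pose differences"; a minimal sketch of that idea, with hypothetical array shapes, might look like this:

import numpy as np

def assign_tracks(prev_poses, new_poses):
    # prev_poses, new_poses: (num_people, num_kps, 2) arrays of keypoint coordinates.
    # Greedily match each previous pose to the new pose whose keypoints moved the least.
    assignments = []
    for prev in prev_poses:
        diffs = np.linalg.norm(new_poses - prev, axis=2).mean(axis=1)
        assignments.append(int(np.argmin(diffs)))
    return assignments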

COCO Experiments

Download the COCO dataset: $ sh data/scripts/get_coco_kp.sh

Validation (without TTA)

  • KAPAO-S (63.0 AP): $ python val.py --rect
  • KAPAO-M (68.5 AP): $ python val.py --rect --weights kapao_m_coco.pt
  • KAPAO-L (70.6 AP): $ python val.py --rect --weights kapao_l_coco.pt

Validation (with TTA)

  • KAPAO-S (64.3 AP): $ python val.py --scales 0.8 1 1.2 --flips -1 3 -1
  • KAPAO-M (69.6 AP): $ python val.py --weights kapao_m_coco.pt \
    --scales 0.8 1 1.2 --flips -1 3 -1
  • KAPAO-L (71.6 AP): $ python val.py --weights kapao_l_coco.pt \
    --scales 0.8 1 1.2 --flips -1 3 -1
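
Each value in --scales is paired with the --flips value at the same position; -1 appears to disable flipping, while 3 requests a horizontal flip of the (N, C, H, W) image tensor along dimension 3, so only the unit-scale pass is mirrored. Below is a hedged sketch of what such a TTA forward pass could look like; the mapping of detections back to the original frame and their fusion, done in val.py, is omitted.

import torch
import torch.nn.functional as F

def tta_forward(model, img, scales=(0.8, 1.0, 1.2), flips=(-1, 3, -1)):
    # img: (1, 3, H, W) tensor. Run the model once per (scale, flip) pair.
    outputs = []
    for s, f in zip(scales, flips):
        x = F.interpolate(img, scale_factor=s, mode='bilinear', align_corners=False)
        if f != -1:
            x = torch.flip(x, dims=[f])  # f == 3: horizontal flip
        outputs.append(model(x))
    # The real pipeline rescales/unflips detections and merges them (e.g., via NMS).
    return outputs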

Testing

  • KAPAO-S (63.8 AP): $ python val.py --scales 0.8 1 1.2 --flips -1 3 -1 --task test
  • KAPAO-M (68.8 AP): $ python val.py --weights kapao_m_coco.pt \
    --scales 0.8 1 1.2 --flips -1 3 -1 --task test
  • KAPAO-L (70.3 AP): $ python val.py --weights kapao_l_coco.pt \
    --scales 0.8 1 1.2 --flips -1 3 -1 --task test

Training

The following commands were used to train the KAPAO models on 4 V100s with 32GB memory each.

KAPAO-S:

python -m torch.distributed.launch --nproc_per_node 4 train.py \
--img 1280 \
--batch 128 \
--epochs 500 \
--data data/coco-kp.yaml \
--hyp data/hyps/hyp.kp-p6.yaml \
--val-scales 1 \
--val-flips -1 \
--weights yolov5s6.pt \
--project runs/s_e500 \
--name train \
--workers 128

KAPAO-M:

python train.py \
--img 1280 \
--batch 72 \
--epochs 500 \
--data data/coco-kp.yaml \
--hyp data/hyps/hyp.kp-p6.yaml \
--val-scales 1 \
--val-flips -1 \
--weights yolov5m6.pt \
--project runs/m_e500 \
--name train \
--workers 128

KAPAO-L:

python train.py \
--img 1280 \
--batch 48 \
--epochs 500 \
--data data/coco-kp.yaml \
--hyp data/hyps/hyp.kp-p6.yaml \
--val-scales 1 \
--val-flips -1 \
--weights yolov5l6.pt \
--project runs/l_e500 \
--name train \
--workers 128

Note: DDP is usually recommended, but we found training to be less stable for KAPAO-M/L when using DDP. We are investigating this issue.

CrowdPose Experiments

  • Install the CrowdPose API to your conda environment:
    $ cd .. && git clone https://github.com/Jeff-sjtu/CrowdPose.git
    $ cd CrowdPose/crowdpose-api/PythonAPI && sh install.sh && cd ../../../kapao
  • Download the CrowdPose dataset: $ sh data/scripts/get_crowdpose.sh

Testing

  • KAPAO-S (63.8 AP): $ python val.py --data crowdpose.yaml \
    --weights kapao_s_crowdpose.pt --scales 0.8 1 1.2 --flips -1 3 -1
  • KAPAO-M (67.1 AP): $ python val.py --data crowdpose.yaml \
    --weights kapao_m_crowdpose.pt --scales 0.8 1 1.2 --flips -1 3 -1
  • KAPAO-L (68.9 AP): $ python val.py --data crowdpose.yaml \
    --weights kapao_l_crowdpose.pt --scales 0.8 1 1.2 --flips -1 3 -1

Training

The following commands were used to train the KAPAO models on 4 V100s with 32GB memory each. Training was performed on the trainval split with no validation. The test results above were generated using the last model checkpoint.

KAPAO-S:

python -m torch.distributed.launch --nproc_per_node 4 train.py \
--img 1280 \
--batch 128 \
--epochs 300 \
--data data/crowdpose.yaml \
--hyp data/hyps/hyp.kp-p6.yaml \
--val-scales 1 \
--val-flips -1 \
--weights yolov5s6.pt \
--project runs/cp_s_e300 \
--name train \
--workers 128 \
--noval

KAPAO-M:

python train.py \
--img 1280 \
--batch 72 \
--epochs 300 \
--data data/crowdpose.yaml \
--hyp data/hyps/hyp.kp-p6.yaml \
--val-scales 1 \
--val-flips -1 \
--weights yolov5m6.pt \
--project runs/cp_m_e300 \
--name train \
--workers 128 \
--noval

KAPAO-L:

python train.py \
--img 1280 \
--batch 48 \
--epochs 300 \
--data data/crowdpose.yaml \
--hyp data/hyps/hyp.kp-p6.yaml \
--val-scales 1 \
--val-flips -1 \
--weights yolov5l6.pt \
--project runs/cp_l_e300 \
--name train \
--workers 128 \
--noval

Acknowledgements

This work was supported in part by Compute Canada, the Canada Research Chairs Program, the Natural Sciences and Engineering Research Council of Canada, a Microsoft Azure Grant, and an NVIDIA Hardware Grant.

If you find this repo helpful in your research, please cite our paper:

@article{mcnally2021kapao,
  title={Rethinking Keypoint Representations: Modeling Keypoints and Poses as Objects for Multi-Person Human Pose Estimation},
  author={McNally, William and Vats, Kanav and Wong, Alexander and McPhee, John},
  journal={arXiv preprint arXiv:2111.08557},
  year={2021}
}

Please also consider citing our previous works:

@inproceedings{mcnally2021deepdarts,
  title={DeepDarts: Modeling Keypoints as Objects for Automatic Scorekeeping in Darts using a Single Camera},
  author={McNally, William and Walters, Pascale and Vats, Kanav and Wong, Alexander and McPhee, John},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4547--4556},
  year={2021}
}

@article{mcnally2021evopose2d,
  title={EvoPose2D: Pushing the Boundaries of 2D Human Pose Estimation Using Accelerated Neuroevolution With Weight Transfer},
  author={McNally, William and Vats, Kanav and Wong, Alexander and McPhee, John},
  journal={IEEE Access},
  volume={9},
  pages={139403--139414},
  year={2021},
  publisher={IEEE}
}
Comments
  • How to train my own data?

    Thank you for your splendid work! But I have some questions about training my own data using your model. For example, what should I do with my label files (.json)?

    opened by Richard-wang85 28
  • Inference time: 16-bit vs. 32-bit

    In the video.py file I hardcoded half = False so as to avoid any conversion to half precision, but the inference time was still the same.

    Is it because the weights are half precision by default, and therefore I cannot measure how much time your model would take in 32-bit?

    opened by nikhilchh 12
  • How to fuse the results of different scales?

    Thanks for your great work. During inference, how do you combine the results of the 4 different output grids? Is there some special fusion? Looking forward to your reply.

    opened by liqikai9 10
  • Animal pose estimation using KAPAO

    @Kanav123 thanks for the open-source code base. I had a few queries:

    1. Can we train KAPAO for animal pose estimation on the Animal Pose dataset? If so, what changes need to be made?
    2. Can we get hand pose, facial landmarks, and body pose from the same architecture? Can we modify the KAPAO architecture for these things, and if so, what are your suggestions?

    Thanks in advance.

    opened by abhigoku10 10
  • The result of KAPAO-S

    I trained KAPAO-S, but the result is not good. Is this a difference caused by the GPU?

    YOLOv5 2021-12-14, torch 1.9.1+cu102, CUDA:0 (GeForce RTX 2080 SUPER, 7982.3 MB)
    Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.587
    Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.840
    Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.644
    Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.542
    Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.663
    Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.662
    Average Recall    (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.890
    Average Recall    (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.715
    Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.610
    Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.737
    Speed: 0.518 ms pre-process, 12.620 ms inference, 5.793 ms NMS per image at shape (1, 3, 1280, 1280)

    Also, can you tell me what "keypoint object fused" means?

    opened by sayoko17 9
  • 'NoneType' object has no attribute 'span'

    When I run any of the demos, I get an error. Any suggestions?

    (kapao) Me:~/kapao$ python demos/squash.py --display --fps
    Traceback (most recent call last):
      File "demos/squash.py", line 83, in <module>
        stream = [s for s in yt.streams if s.itag == 137][0]  # 1080p, 25 fps
      File "/home/media/miniconda3/envs/kapao/lib/python3.6/site-packages/pytube/__main__.py", line 292, in streams
        return StreamQuery(self.fmt_streams)
      File "/home/media/miniconda3/envs/kapao/lib/python3.6/site-packages/pytube/__main__.py", line 177, in fmt_streams
        extract.apply_signature(stream_manifest, self.vid_info, self.js)
      File "/home/media/miniconda3/envs/kapao/lib/python3.6/site-packages/pytube/extract.py", line 409, in apply_signature
        cipher = Cipher(js=js)
      File "/home/media/miniconda3/envs/kapao/lib/python3.6/site-packages/pytube/cipher.py", line 44, in __init__
        self.throttling_array = get_throttling_function_array(js)
      File "/home/media/miniconda3/envs/kapao/lib/python3.6/site-packages/pytube/cipher.py", line 323, in get_throttling_function_array
        str_array = throttling_array_split(array_raw)
      File "/home/media/miniconda3/envs/kapao/lib/python3.6/site-packages/pytube/parser.py", line 158, in throttling_array_split
        match_start, match_end = match.span()
    AttributeError: 'NoneType' object has no attribute 'span'

    opened by amirhosk 9
  • Nearly 50% of images are missing in training and validation

    Scanning data/datasets/coco/kp_labels/img_txt/train2017.cache images and labels... 64115 found, 54172 missing, 0 empty, 0 corrupted
    Scanning data/datasets/coco/kp_labels/img_txt/val2017.cache images and labels... 2693 found, 2307 missing, 0 empty, 0 corrupted

    I wonder if you are using an ad-hoc way of data preprocessing/filtering. If so, the numbers you reported in the paper would not be comparable to DEKR.

    opened by wuzhenyusjtu 5
  • Question about tau_{ck}

    How can I change the value of tau_{ck} in the code? I want to set it to zero so that a higher number of keypoint confidences is returned, as described in Section 3.4 of the paper.

    opened by Ramt111 5
  • Can you provide the right utils/datasets.py file?

    I got an error when running the training process: assert nf > 0 or not augment, f'{prefix}No labels in {cache_path}. Can not train without labels. See {HELP_URL}'. It seems the provided utils/datasets.py file was not changed to load the keypoints JSON file. Could you please provide the right utils/datasets.py file?

    opened by hnuzhy 5
  • KAPAO model structure

    @Kanav123 @wmcnally I have a framework that contains both YOLOv5 and KAPAO in the same folder structure. When I run KAPAO inference, it tries to load the yolo.py file from the models folder. I tried renaming the file and loading KAPAO, but it gives an error. Looking deeper, during training the model node type properties are model.yolo_. Can we change the model node properties while retraining? Can you please share your thoughts?

    Thanks in advance.

    opened by abhigoku10 5
  • Not working on RTX 3060 and RTX 3090

    I used the installation instructions with conda on an RTX 3060 and an RTX 3090 (CUDA 11.4, Ubuntu 20 and 21) and get this error:

    ~/kapao$ python demos/video.py --yt-id nrchfeybHmw --imgsz 1024 --weights kapao_l_coco.pt --conf-thres-kp 0.01 --kp-obj --face --start 56 --end 72 --display
    Downloading demo video... Done.
    UserWarning: NVIDIA GeForce RTX 3060 Laptop GPU with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70. If you want to use the NVIDIA GeForce RTX 3060 Laptop GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
    Using device: cuda:0
    Traceback (most recent call last):
      File "demos/video.py", line 115, in <module>
        model = attempt_load(args.weights, map_location=device)  # load FP32 model
      File "/home/beltech/kapao/models/experimental.py", line 96, in attempt_load
        model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval())  # FP32 model
      File "/home/beltech/anaconda3/envs/kapao2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 692, in float
        return self._apply(lambda t: t.float() if t.is_floating_point() else t)
      (repeated _apply frames omitted)
    RuntimeError: CUDA error: no kernel image is available for execution on the device
    CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

    opened by aafaqin 5
  • pytube not found during installation

    After running pip install -r requirements.txt, pip fails to find the pytube repo:

    Collecting git+https://github.com/baxterisme/pytube (from -r requirements.txt (line 36))
      Cloning https://github.com/baxterisme/pytube to /tmp/pip-req-build-yu8sm76y
      Running command git clone --filter=blob:none --quiet https://github.com/baxterisme/pytube /tmp/pip-req-build-yu8sm76y
      remote: Repository not found.
      fatal: repository 'https://github.com/baxterisme/pytube/' not found
    error: subprocess-exited-with-error

    × git clone --filter=blob:none --quiet https://github.com/baxterisme/pytube /tmp/pip-req-build-yu8sm76y did not run successfully.
    │ exit code: 128
    ╰─> See above for output.

    note: This error originates from a subprocess, and is likely not a problem with pip.

    Thanks.

    opened by ShuangjunLiu 0
  • Get Started with the demo

    Hey,

    I am trying to get the demo running, but I get errors when running pip install -r requirements.txt:

    [screenshot of pip errors]

    opened by lathinharoon 0
  • Where is the inference code in the project?

    Hi! I cannot find the inference code, specifically the code for "Keypoint Object Fusion (ϕ)", in the project. Can you help me? Thank you!

    opened by suijua 3
  • val.py PermissionError

    After epoch 0/299 finished training and the val images were processed, I get:

    Traceback (most recent call last):
      File "train.py", line 603, in <module>
        main(opt)
      File "train.py", line 501, in main
        train(opt.hyp, opt, device)
      File "train.py", line 352, in train
        results, maps, _ = val.run(data_dict,
      File "D:\anaconda3\envs\pytorch-gpu\lib\site-packages\torch\autograd\grad_mode.py", line 26, in decorate_context
        return func(*args, **kwargs)
      File "D:\RunPrograms\kapao-master\val.py", line 283, in run
        with open(json_path, 'w') as f:
    PermissionError: [Errno 13] Permission denied: 'C:\Users\Reza\AppData\Local\Temp\tmpt7tpns4i'

    opened by Flashan 1