Official PyTorch implementation of "Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video", CVPR 2021

Overview

TCMR: Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video

Qualitative result | Paper teaser video

Introduction

This repository is the official PyTorch implementation of Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video. The base code is largely borrowed from VIBE. Find more qualitative results here.

Installation

TCMR is tested on Ubuntu 16.04 with PyTorch 1.4 and Python 3.7.10. You may need sudo privileges for the installation.

source scripts/install_pip.sh
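
A quick optional sanity check that the environment matches the tested setup:

import torch

# Should print a 1.4.x version and True if CUDA is configured correctly.
print(torch.__version__, torch.cuda.is_available())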

Quick demo

  • Download the pre-trained demo TCMR weights and required data with the command below, and download the SMPL layers from here (male & female) and here (neutral). Put the SMPL layers (pkl files) under ${ROOT}/data/base_data/.
source scripts/get_base_data.sh
  • Run the demo with options (e.g. render on a plain background). See more option details at the bottom of demo.py.
  • A video overlaid with the rendered meshes will be saved in ${ROOT}/output/demo_output/.
python demo.py --vid_file demo.mp4 --gpu 0 
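
The demo also saves the raw per-frame predictions next to the rendered video. Assuming a VIBE-style joblib pickle (an assumption carried over from the VIBE code base this repository borrows from; check demo.py for the actual output path and keys), you could inspect it like this:

import joblib

# Hypothetical output file name and layout -- verify against demo.py.
output = joblib.load('output/demo_output/tcmr_output.pkl')
for person_id, data in output.items():
    print(person_id, list(data.keys()))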

Results

Here I report the performance of TCMR.


See our paper for more details.

Running TCMR

Download the pre-processed data (except the InstaVariety dataset) from here. You may also download the datasets from the original sources and pre-process them yourself; refer to this. Put the SMPL layers (pkl files) under ${ROOT}/data/base_data/.

The data directory structure should follow the hierarchy below.

${ROOT}  
|-- data  
|   |-- base_data  
|   |-- preprocessed_data  
|   |-- pretrained_models

Evaluation

  • Download pre-trained TCMR weights from here.
  • Run the evaluation code with a corresponding config file to reproduce the performance in the tables of our paper.
# dataset: 3dpw, mpii3d, h36m 
python evaluate.py --dataset 3dpw --cfg ./configs/repr_table4_3dpw_model.yaml --gpu 0 
  • You may test options such as average filtering and rendering; see the bottom lines of ${ROOT}/lib/core/config.py and the sketch after this list.
  • We checked the rendering results of TCMR on the 3DPW validation and test sets.
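
For reference, "average filtering" here just means temporal smoothing of the per-frame outputs. A minimal NumPy sketch of such a filter (an illustration of the idea, not the exact filter behind the config options):

import numpy as np

def moving_average(poses, window=5):
    # poses: (num_frames, dim) array of per-frame predictions; window should be odd.
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(poses, ((pad, pad), (0, 0)), mode='edge')
    return np.stack(
        [np.convolve(padded[:, d], kernel, mode='valid') for d in range(poses.shape[1])],
        axis=1,
    )

# e.g. moving_average(np.random.randn(100, 72)) smooths 72 pose parameters over time.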

Reproduction (Training)

  • Run the training code with a corresponding config file to reproduce the performance in the tables of our paper.
# training outputs are saved in `experiments` directory
# mkdir experiments
python train.py --cfg ./configs/repr_table4_3dpw_model.yaml --gpu 0 
  • Evaluate the trained TCMR (either checkpoint.pth.tar or model_best.pth.tar) on a target dataset.
  • You may test the motion discriminator introduced in VIBE by uncommenting the code marked with exclude motion discriminator comments.
  • We have not released the NeuralAnnot SMPL annotations of Human3.6M used in our paper yet, so the performance in Table 6 may differ slightly from the paper.

Reference

@InProceedings{choi2020beyond,
  title={Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video},
  author={Choi, Hongsuk and Moon, Gyeongsik and Lee, Kyoung Mu},
  booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}
Comments
  • Reproduction of the numbers in the paper

    Thanks for sharing your great work.

    When I train TCMR with the code (default training configs) and datasets provided here, I get

    {'mpjpe': 98.79354, 'mpjpe_pa': 56.348724, 'accel_err': 7.208395533521851, 'mpvpe': 119.89002}

    which is worse than the numbers reported in Table 4 (3DPW test set).

    Can you please provide suggestions or any additional clarifications? Thank you.

    opened by Seethevoice 22
  • Use 3D datasets only

    Hi, first of all, thanks for your great contribution.

    I want to use the 3D datasets only; I don't want to use the Insta dataset. How can I turn off the 2D datasets?

    1. I tried commenting out DATASETS_2D in the config file (doesn't work):
    #  DATASETS_2D:
    #    - 'Insta'
      DATASETS_3D:
        - 'ThreeDPW'
    #    - 'MPII3D'
    #    - 'Human36M'
    
    2. I tried setting DATA_2D_RATIO to 0 (doesn't work):
      DATA_2D_RATIO: 0.
      OVERLAP: true
      DATASETS_2D:
        - 'Insta'
      DATASETS_3D:
        - 'ThreeDPW'
    #    - 'MPII3D'
    #    - 'Human36M'
    

    I got a dataloader error:

    Traceback (most recent call last):
      File "train.py", line 138, in <module>
        main(cfg)
      File "train.py", line 48, in main
        data_loaders = get_data_loaders(cfg)
      File "/home/inpyosong/tcmr/lib/dataset/_loaders.py", line 92, in get_data_loaders
        num_workers=cfg.NUM_WORKERS,
      File "/home/inpyosong/anaconda3/envs/tcmr-mm/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 359, in __init__
        batch_sampler = BatchSampler(sampler, batch_size, drop_last)
      File "/home/inpyosong/anaconda3/envs/tcmr-mm/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 226, in __init__
        "but got batch_size={}".format(batch_size))
    ValueError: batch_size should be a positive integer value, but got batch_size=0
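
    For context, the error is consistent with the loader splitting each batch between the 2D and 3D datasets by DATA_2D_RATIO, so a ratio of 0 (or 1) hands one DataLoader a zero batch size. A hypothetical sketch of that failure mode, not the repository's exact code:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # A ratio of 0 gives one of the two loaders batch_size=0,
    # which torch's DataLoader rejects.
    batch_size = 32
    data_2d_ratio = 0.0
    batch_size_2d = int(round(batch_size * data_2d_ratio))  # -> 0
    batch_size_3d = batch_size - batch_size_2d              # -> 32

    dataset_2d = TensorDataset(torch.zeros(8, 3))
    loader_2d = DataLoader(dataset_2d, batch_size=batch_size_2d)
    # ValueError: batch_size should be a positive integer value, but got batch_size=0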
    

    I look forward to your response. Thank you!

    opened by Songinpyo 4
  • Details about data preprocessing

    Hi, thanks for your excellent work.

    In addition to the feature map, I want to take the RGB image as input to the network. However, the provided preprocessed data do not include the RGB images. I wonder if you could provide preprocessed data that contain the RGB images.

    Or could you please tell me how to preprocess the data? You mention in 'data.md' that "You may need to change details (ex. scale), so check comments in {dataset_name}_utils.py files.", so I worry that I might not preprocess the data correctly if I get a detail wrong.

    Thank you very much!

    opened by linjing7 4
  • Training: required memory size

    Hi, my computer has 23 GB of memory. When I start training repr_table6_h36m_model.yaml, I can barely load the required 2D and 3D data. What can I do? Can I split the 3D data and train stage by stage? Do you remember how much memory your experiments used?

    opened by amituofo1996 3
  • Question about the parameter 'scale' in 3D datasets

    Hello. Thanks for your great work! I want to apply some different processing to the original data. To keep the data consistent, I need to confirm the parameter 'scale' used in the different 3D datasets. I use repr_table4_3dpw_model.yaml as the config file, and the corresponding datasets are listed here:

    https://github.com/hongsukchoi/TCMR_RELEASE/blob/8078b3c39c22cae39eb19c0e1eb70e09c60ecea7/configs/repr_table4_3dpw_model.yaml#L31-L35
    https://github.com/hongsukchoi/TCMR_RELEASE/blob/8078b3c39c22cae39eb19c0e1eb70e09c60ecea7/lib/dataset/_dataset_3d.py#L92-L99

    Does the scale of h36m_train_25fps_occ_nosmpl_db.pt correspond to 1.2, that of 3dpw_train_occ_db.pt to 1.2, and that of 3dpw_val_db.pt to 1.2 (i.e. are all the 3D datasets used scaled to 1.2)? Is anything I said wrong? I'm not quite sure, because in threedpw_utils.py the default scale is set to 1.3:

    https://github.com/hongsukchoi/TCMR_RELEASE/blob/8078b3c39c22cae39eb19c0e1eb70e09c60ecea7/lib/data_utils/threedpw_utils.py#L157-L158

    Thanks!

    opened by MooreManor 3
  • Difference in the number of frames between TCMR and SPIN

    Hi, thanks for your excellent work. I found that in SPIN the total number of frames in the 3DPW dataset is 35515, but in TCMR the 3DPW frame count is 34561. Could you please tell me what causes the difference?

    opened by linjing7 2
  • About reproducing repr_table4_3dpw_model.yaml

    The train.log is as follows:

    InstaVariety number of dataset objects 2086896
    3DPW Dataset overlap ratio:  0.9375
    Loaded 3dpw dataset from data/preprocessed_data/3dpw_train_occ_db.pt
    is_train:  True
    3dpw - number of dataset objects 22448
    MPII3D Dataset overlap ratio:  0.9375
    Loaded mpii3d dataset from data/preprocessed_data/mpii3d_train_scale12_occ_db.pt
    is_train:  True
    mpii3d - number of dataset objects 958944
    Human36M Dataset overlap ratio:  0.9375
    Loaded h36m dataset from data/preprocessed_data/h36m_train_25fps_occ_db.pt
    is_train:  True
    h36m - number of dataset objects 775296
    3DPW Dataset overlap ratio:  0
    Loaded 3dpw dataset from data/preprocessed_data/3dpw_val_db.pt
    is_train:  False
    3dpw - number of dataset objects 53
    => loaded pretrained model from 'data/base_data/spin_model_checkpoint.pth.tar'
    Epoch 1/30
    => no checkpoint found at ''
    (500/500) | Total: 0:03:40 | ETA: 0:00:01 | loss: 9.99 | 2d: 3.91 | 3d: 5.62  | loss_kp_2d: 2.671 | loss_kp_3d: 3.043 | data: 0.01 | forward: 0.05 | loss: 0.00 | backward: 0.06 | batch: 0.12
    Traceback (most recent call last):
      File "/media/xf/F/code/TCMR_RELEASE-master/train.py", line 141, in <module>
        main(cfg)
      File "/media/xf/F/code/TCMR_RELEASE-master/train.py", line 131, in main
        debug_freq=cfg.DEBUG_FREQ,
      File "/media/xf/F/code/TCMR_RELEASE-master/lib/core/trainer.py", line 343, in fit
        self.validate()
      File "/media/xf/F/code/TCMR_RELEASE-master/lib/core/trainer.py", line 291, in validate
        for i, target in enumerate(self.valid_loader):
      File "/home/xf/miniconda3/envs/tcmr/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
        data = self._next_data()
      File "/home/xf/miniconda3/envs/tcmr/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
        return self._process_data(data)
      File "/home/xf/miniconda3/envs/tcmr/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
        data.reraise()
      File "/home/xf/miniconda3/envs/tcmr/lib/python3.7/site-packages/torch/_utils.py", line 394, in reraise
        raise self.exc_type(msg)
    ValueError: Caught ValueError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/home/xf/miniconda3/envs/tcmr/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
        data = fetcher.fetch(index)
      File "/home/xf/miniconda3/envs/tcmr/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/xf/miniconda3/envs/tcmr/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/media/xf/F/code/TCMR_RELEASE-master/lib/dataset/_dataset_3d.py", line 86, in __getitem__
        return self.get_single_item(index)
      File "/media/xf/F/code/TCMR_RELEASE-master/lib/dataset/_dataset_3d.py", line 243, in get_single_item
        kp_3d_tensor[idx] = kp_3d[idx]
    ValueError: could not broadcast input array from shape (49,3) into shape (14,3)
    

    Only repr_table6_h36m_model.yaml is OK; all the other experiments face this problem. They all have the same DATA_EVAL=ThreeDPW, which confuses me. Could you help me? Thanks!

    opened by amituofo1996 2
  • Reproduction when facing a memory error

    Hi, when I reproduce repr_table6_h36m, I face the following problem. My computer has 55 GB of memory, and I disabled vertex prediction because it uses too much memory.

    InstaVariety number of dataset objects 130431
    MPII3D Dataset overlap ratio:  0
    Loaded mpii3d dataset from data/preprocessed_data/mpii3d_train_scale1_db.pt
    is_train:  True
    mpii3d - number of dataset objects 59934
    Human36M Dataset overlap ratio:  0
    Loaded h36m dataset from data/preprocessed_data/h36m_train_25fps_tight_db.pt
    is_train:  True
    h36m - number of dataset objects 48456
    Human36M Dataset overlap ratio:  0.9375
    Loaded h36m dataset from data/preprocessed_data/h36m_test_front_25fps_tight_db.pt
    is_train:  False
    h36m - number of dataset objects 68416
    => loaded pretrained model from 'data/base_data/spin_model_checkpoint.pth.tar'
    => no checkpoint found at ''
    Epoch 1/45
    (500/500) | Total: 0:03:51 | ETA: 0:00:01 | loss: 9.57 | 2d: 3.23 | 3d: 4.53  | loss_kp_2d: 1.615 | loss_kp_3d: 0.990 | loss_shape: 0.022 | loss_pose: 0.682 | data: 1.17 | forward: 0.03 | loss: 0.01 | backward: 0.07 | batch: 1.28
    (2138/2138) | batch: 1.201e+03ms | Total: 0:02:00 | ETA: 0:00:01
    Evaluating on 68416 number of poses...
    Learning rate 5e-05
    Learning rate 0.0001
    Epoch 0, MPJPE: 84.9076, PA-MPJPE: 57.8166, ACCEL: 2.5355, ACCEL_ERR: 3.3904,
    Epoch 1 performance: 57.8166
    Best performance achived, saving it!
    Epoch 2/45
    (500/500) | Total: 0:03:59 | ETA: 0:00:01 | loss: 3.98 | 2d: 2.00 | 3d: 1.38  | loss_kp_2d: 2.273 | loss_kp_3d: 1.975 | loss_shape: 0.020 | loss_pose: 0.572 | data: 0.56 | forward: 0.03 | loss: 0.01 | backward: 0.08 | batch: 0.68
    (2138/2138) | batch: 1.197e+03ms | Total: 0:01:59 | ETA: 0:00:01
    Evaluating on 68416 number of poses...
    Epoch 1, MPJPE: 77.5908, PA-MPJPE: 49.9525, ACCEL: 2.5912, ACCEL_ERR: 3.3052,
    Epoch 2 performance: 49.9525
    Learning rate 5e-05
    Learning rate 0.0001
    Best performance achived, saving it!
    Epoch 3/45
    (500/500) | Total: 0:03:56 | ETA: 0:00:01 | loss: 3.34 | 2d: 1.70 | 3d: 1.18  | loss_kp_2d: 3.205 | loss_kp_3d: 1.627 | loss_shape: 0.018 | loss_pose: 0.582 | data: 0.01 | forward: 0.03 | loss: 0.01 | backward: 0.08 | batch: 0.12
    (2138/2138) | batch: 1.238e+03ms | Total: 0:02:03 | ETA: 0:00:01
    Evaluating on 68416 number of poses...
    Epoch 2, MPJPE: 72.0071, PA-MPJPE: 48.5966, ACCEL: 2.8021, ACCEL_ERR: 3.3457,
    Epoch 3 performance: 48.5966
    Learning rate 5e-05
    Learning rate 0.0001
    Best performance achived, saving it!
    Epoch 4/45
    (500/500) | Total: 0:03:48 | ETA: 0:00:01 | loss: 3.12 | 2d: 1.69 | 3d: 1.06  | loss_kp_2d: 1.572 | loss_kp_3d: 2.059 | loss_shape: 0.017 | loss_pose: 0.360 | data: 0.01 | forward: 0.03 | loss: 0.01 | backward: 0.07 | batch: 0.12
    (2138/2138) | batch: 1.199e+03ms | Total: 0:01:59 | ETA: 0:00:01
    Evaluating on 68416 number of poses...
    Epoch 3, MPJPE: 70.6277, PA-MPJPE: 47.0223, ACCEL: 2.6875, ACCEL_ERR: 3.2729,
    Epoch 4 performance: 47.0223
    Learning rate 5e-05
    Learning rate 0.0001
    Best performance achived, saving it!
    Epoch 5/45
    (500/500) | Total: 0:03:51 | ETA: 0:00:01 | loss: 2.92 | 2d: 1.61 | 3d: 0.98  | loss_kp_2d: 1.649 | loss_kp_3d: 1.081 | loss_shape: 0.012 | loss_pose: 0.321 | data: 0.50 | forward: 0.03 | loss: 0.01 | backward: 0.07 | batch: 0.61
    Traceback (most recent call last):
      File "/media/xf/F/code/TCMR_RELEASE-master/train.py", line 141, in <module>
        main(cfg)
      File "/media/xf/F/code/TCMR_RELEASE-master/train.py", line 131, in main
        debug_freq=cfg.DEBUG_FREQ,
      File "/media/xf/F/code/TCMR_RELEASE-master/lib/core/trainer.py", line 343, in fit
        self.validate()
      File "/media/xf/F/code/TCMR_RELEASE-master/lib/core/trainer.py", line 291, in validate
        for i, target in enumerate(self.valid_loader):
      File "/home/xf/miniconda3/envs/tcmr/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 279, in __iter__
        return _MultiProcessingDataLoaderIter(self)
      File "/home/xf/miniconda3/envs/tcmr/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 719, in __init__
        w.start()
      File "/home/xf/miniconda3/envs/tcmr/lib/python3.7/multiprocessing/process.py", line 112, in start
        self._popen = self._Popen(self)
      File "/home/xf/miniconda3/envs/tcmr/lib/python3.7/multiprocessing/context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "/home/xf/miniconda3/envs/tcmr/lib/python3.7/multiprocessing/context.py", line 277, in _Popen
        return Popen(process_obj)
      File "/home/xf/miniconda3/envs/tcmr/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
        self._launch(process_obj)
      File "/home/xf/miniconda3/envs/tcmr/lib/python3.7/multiprocessing/popen_fork.py", line 70, in _launch
        self.pid = os.fork()
    OSError: [Errno 12] Cannot allocate memory
    
    

    Could you give me some suggestions? Thanks!

    opened by amituofo1996 2
  • Too many frames

    Thanks for your work! When I tested my video with the demo, I found that the number of frames became very large, which makes the demo take a long time to run. I think it was caused by the following command (screenshot omitted).

    So I changed the code and found that the number of frames decreased and the demo ran much faster. I wonder if this will affect the accuracy of the demo? My change is here (screenshot omitted).

    opened by sulei1998 2
  • About preprocessing from videos to images in the Human3.6M dataset

    Hello. Thanks for your great work! Could you provide the preprocessing script that converts the Human3.6M videos into images named like s_01_act_02_subact_01_ca_01*.jpg and generates the corresponding JSON files like Human36M_subject*_data.json? Thanks!

    opened by MooreManor 1
  • How to handle the frames at the beginning and end of the video?

    Hi, first of all, thank you for the good research that inspires me.

    I have a question about the data loader.

    TCMR loads 16 frames and predicts the middle frame, but I don't see how we can load 16 frames with the target in the middle at the beginning or end of the video.

    So I was wondering if TCMR drops the 8 frames at the beginning and the end, but when I ran the demo, it seemed that predictions were produced for every frame, without any discarded frames.

    I wonder how you are handling this part. Is the loading method different between training and inference?
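
    One common way video models handle this, and plausibly what the demo does, is to clamp the window indices at the video boundaries so every frame can sit at the center of a 16-frame window. A hypothetical NumPy sketch of that strategy, not TCMR's verified loader code:

    import numpy as np

    def centered_window(num_frames, t, seqlen=16):
        # Indices of a seqlen-frame window whose middle frame is t.
        # Indices past the video boundary are clamped to the first/last frame,
        # so every frame can be a prediction target.
        half = seqlen // 2
        idx = np.arange(t - half, t - half + seqlen)
        return np.clip(idx, 0, num_frames - 1)

    # e.g. centered_window(100, 0) repeats frame 0 to fill the left context.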

    I look forward to your response. Thank you!

    opened by Songinpyo 3
  • TCMR mismatch??

    RuntimeError: Error(s) in loading state_dict for TCMR: size mismatch for regressor.smpl.shapedirs: copying a param with shape torch.Size([6890, 3, 10]) from checkpoint, the shape in current model is torch.Size([6890, 3, 300]).

    Anyone know about this?

    opened by codeHorasan 5
  • ValueError: Invalid device ID (0)

    Hi, I ran into this bug and don't know how to resolve it. I've tried changing egl.py, but it didn't work. It may be similar to #7, and I have tried many methods since then, but they don't work.

    I would very much appreciate any advice; this has been troubling me for a long time. My environment is Ubuntu 16.04 on a remote server, with only 1 GPU.


    Running "ffmpeg -i sample_demo.mp4 -r 30000/1001 -f image2 -v error /tmp/sample_demo_mp4/%06d.jpg" Images saved to "/tmp/sample_demo_mp4" Input video number of frames 122

    Running Multi-Person-Tracker 100%|███████████████████████████████████████████| 11/11 [00:04<00:00, 2.48it/s] Finished. Detection + Tracking FPS 27.49 => loaded pretrained model from 'data/base_data/spin_model_checkpoint.pth.tar' Load pretrained weights from './data/base_data/tcmr_demo_model.pth.tar'

    Running TCMR on each person tracklet... 100%|█████████████████████████████████████████████| 5/5 [00:22<00:00, 4.53s/it] TCMR FPS: 5.38 Total time spent: 36.98 seconds (including model loading time). Total FPS (including model loading time): 3.30. Get SMPL faces Traceback (most recent call last): File "/root/data/meilin/TCMR/demo.py", line 376, in main(args) File "/root/data/meilin/TCMR/demo.py", line 248, in main renderer = Renderer(resolution=(orig_width, orig_height), orig_img=True, wireframe=args.wireframe) File "/root/data/meilin/TCMR/lib/utils/renderer.py", line 47, in init point_size=1.0 File "/usr/local/lib/python3.6/dist-packages/pyrender/offscreen.py", line 31, in init self._create() File "/usr/local/lib/python3.6/dist-packages/pyrender/offscreen.py", line 137, in _create egl_device = egl.get_device_by_index(device_id) File "/usr/local/lib/python3.6/dist-packages/pyrender/platforms/egl.py", line 83, in get_device_by_index raise ValueError('Invalid device ID ({})'.format(device_id, len(devices))) ValueError: Invalid device ID (0)

    Process finished with exit code 1

    opened by Mirandl 2
  • DataLoader worker (pid 2991): Bus error.

    Hi, thank you for your great work! When running your code, I got this error:

    Running TCMR on each person tracklet...
      0%|          | 0/5 [00:00<?, ?it/s]
    ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
      0%|          | 0/5 [00:02<?, ?it/s]
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 779, in _try_get_data
        data = self._data_queue.get(timeout=timeout)
      File "/usr/lib/python3.6/multiprocessing/queues.py", line 104, in get
        if not self._poll(timeout):
      File "/usr/lib/python3.6/multiprocessing/connection.py", line 257, in poll
        return self._poll(timeout)
      File "/usr/lib/python3.6/multiprocessing/connection.py", line 414, in _poll
        r = wait([self], timeout)
      File "/usr/lib/python3.6/multiprocessing/connection.py", line 911, in wait
        ready = selector.select(timeout)
      File "/usr/lib/python3.6/selectors.py", line 376, in select
        fd_event_list = self._poll.poll(timeout)
      File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
        _error_if_any_worker_fails()
    RuntimeError: DataLoader worker (pid 2991) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/root/data/meilin/TCMR/demo.py", line 377, in <module>
        main(args)
      File "/root/data/meilin/TCMR/demo.py", line 157, in main
        for i, batch in enumerate(crop_dataloader):
      File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 363, in __next__
        data = self._next_data()
      File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 974, in _next_data
        idx, data = self._get_data()
      File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 941, in _get_data
        success, data = self._try_get_data()
      File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 792, in _try_get_data
        raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
    RuntimeError: DataLoader worker (pid(s) 2991) exited unexpectedly

    Process finished with exit code 1

    It seems num_workers needs to be adjusted, but I found it's no use... Can you guide me a little bit on this? Thank you!
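
    A common workaround for shm-related DataLoader crashes is num_workers=0, which keeps loading in the main process so no worker shared memory is needed, at some speed cost. A toy sketch of the idea, not the repository's code:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Toy dataset standing in for the demo's crop dataset.
    dataset = TensorDataset(torch.zeros(8, 3, 224, 224))

    # num_workers=0 avoids worker processes entirely, sidestepping the bus error.
    loader = DataLoader(dataset, batch_size=4, num_workers=0)
    for (batch,) in loader:
        print(batch.shape)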

    opened by Mirandl 3