3D Human Pose Machines with Self-supervised Learning

Overview

Keze Wang, Liang Lin, Chenhan Jiang, Chen Qian, and Pengxu Wei, “3D Human Pose Machines with Self-supervised Learning”. To appear in IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2019.

This repository implements a 3D human pose machine that generates 3D pose sequences from monocular frames, together with a concise self-supervised correction mechanism that enhances the model by retaining 3D geometric consistency. The main part is written in C++ and powered by the Caffe deep learning toolbox; the online refinement part is written in Python and powered by TensorFlow.

Results

We report results on the Human3.6M, KTH Football II, and MPII datasets.

License

This project is released for academic research use only.

Get Started

Clone the repo:

git clone https://github.com/chanyn/3Dpose_ssl.git

or directly download it from https://www.dropbox.com/s/qycpjinof2ishw9/3Dpose_ssl.tar.gz?dl=0 (includes the datasets and a pre-compiled Caffe built against CUDA 8.0).

Our code is organized as follows:

caffe-3dssl/: our customized Caffe, including the extra layers used by this project
models/: pretrained models and results
prototxt/: network architecture definitions
tensorflow/: code for online refinement
test/: scripts that run testing split by action
tools/: Python and MATLAB utilities

Requirements

  1. An NVIDIA GPU with cuDNN is required for reasonable speed. CUDA 8.0 with cuDNN 5.1 has been tested; other versions should also work.
  2. The Caffe Python wrapper (pycaffe) is required.
  3. TensorFlow 1.1.0
  4. Python 2.7.13
  5. MATLAB
  6. opencv-python

Installation

  1. Build 3Dssl Caffe

```shell
cd $ROOT/caffe-3dssl

# Follow the Caffe installation instructions here:
#   http://caffe.berkeleyvision.org/installation.html

# If you're experienced with Caffe and have all of the requirements installed
# and your Makefile.config in place, then simply do:
make all -j 8
make pycaffe
```

  2. Install TensorFlow

Datasets

  • Human3.6m

  We convert the Human3.6M annotations to 16 joints ('RFoot', 'RKnee', 'RHip', 'LHip', 'LKnee', 'LFoot', 'Hip', 'Spine', 'Thorax', 'Head', 'RWrist', 'RElbow', 'RShoulder', 'LShoulder', 'LElbow', 'LWrist') to keep them consistent with MPII.

  We provide the mean file and the Protocol #I / Protocol #III split lists for Human3.6M. Follow the Human3.6M website to download the videos and the API. We sample every 5th frame of each video; you can directly download the processed square data from this link. Each line of the 16skel_train/test_* lists has the format [img_path] [P1_2d_x, P1_2d_y, P2_2d_x, P2_2d_y, ..., P1_3d_x, P1_3d_y, P1_3d_z, P2_3d_x, P2_3d_y, P2_3d_z, ...] clip, where clip = 0 marks an LSTM state reset (a minimal parsing sketch is given at the end of this section).

```shell
# file structure
h36m
|_ gt                                      # 2D and 3D annotations split by action
|_ hg2dh36m                                # 2D estimates predicted by *Hourglass*; 'square' denotes predictions on square images
|_ ours_2d                                 # 2D predictions from our model
|_ ours_3d                                 # coarse 3D predictions of *Model Extension: mask3d*
|_ 16skel_train_2d3d_clip.txt              # training list for *Protocol I*
|_ 16skel_test_2d3d_clip.txt
|_ 16skel_train_2d3d_p3_clip.txt           # training list for *Protocol III*
|_ 16skel_test_2d3d_p3_clip.txt
|_ 16point_mean_limb_scaled_max_min.csv    # 16 points normalized by (x - min) / (max - min)
```

  After setting up the Human3.6M dataset following its instructions and downloading the training/testing lists above, update the "root_folder" paths in CAFFE_ROOT/examples/.../*.prototxt to point to your image and annotation directories.

  • MPII

  We crop a square region around the single person in every image and update the 2D annotations in train_h36m.txt (the joints are re-ordered to match the Human3.6M joint order).

```shell
mkdir data/MPII
cd data/MPII
wget -v https://drive.google.com/open?id=16gQJvf4wHLEconStLOh5Y7EzcnBUhoM-
tar -xzvf MPII_square.tar.gz
rm -f MPII_square.tar.gz
```
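For reference, the sketch below shows one way to parse a line of the 16skel_* lists into the image path, 16x2 2D joints, 16x3 3D joints, and the clip flag. It is only an illustration that assumes whitespace-separated values in the order described above (and that the coordinates are normalized with the provided max/min file); please verify the exact layout against the downloaded lists.

```python
import numpy as np

# Hypothetical parser for one line of 16skel_train_2d3d_clip.txt, assuming the
# layout described above: [img_path] [16 x (x, y)] [16 x (x, y, z)] clip
def parse_annotation_line(line):
    tokens = line.split()
    img_path = tokens[0]
    values = np.array(tokens[1:], dtype=np.float32)
    pose_2d = values[:32].reshape(16, 2)     # 16 joints, (x, y)
    pose_3d = values[32:80].reshape(16, 3)   # 16 joints, (x, y, z)
    clip = int(values[80])                   # 0 marks an LSTM state reset
    return img_path, pose_2d, pose_3d, clip

# Joints are normalized with (x - min) / (max - min); undoing it would look like:
def denormalize(pose, min_vals, max_vals):
    return pose * (max_vals - min_vals) + min_vals
```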

 

Training

Offline Phase

Our model consists of two cascaded modules, so the offline training phase is divided into the following steps:

```shell
cd CAFFE_ROOT
```
  1. Pre-train the 2D pose sub-network on MPII. You can follow CPM, Hourglass, or any other 2D pose estimation method. We provide a pretrained CPM caffemodel; please put it into CAFFE_ROOT/models/.

  2. Train the 2D-to-3D pose transformer module on Human3.6M while keeping the parameters of the 2D pose sub-network fixed. The corresponding prototxt file is examples/2D_to_3D/bilstm.prototxt.

```sh
sh examples/2D_to_3D/train.sh
```

  3. Train the 3D-to-2D pose projector module with the weights of the above modules fixed. An in-the-wild 2D pose dataset is needed to assist training (we choose MPII).

```sh
sh examples/3D_to_2D/train.sh
```

  4. Fine-tune the whole model jointly. We provide the trained models and coarse predictions for Protocol I and Protocol III.

```sh
sh examples/finetune_whole/train.sh
```

  5. Model extension: add a random mask to alleviate model bias. The corresponding model files are provided in examples/mask3d.

```sh
sh examples/mask3d/train.sh
```

Model Inference

The 3D-to-2D projector module is initialized from the well-trained model, and its weights are updated at test time by minimizing the difference between the predicted 2D pose and the projected 2D pose.
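To make this correction step concrete, below is a schematic NumPy sketch of the idea; the actual implementation lives in tensorflow/pred_v2.py. It assumes, purely for illustration, a single linear projector mapping 3D joints to 2D, whereas the real projector module is a learned network loaded from the .pkl file.

```python
import numpy as np

def refine_projector(pose_3d, pose_2d, W, lr=1e-3, steps=50):
    """Illustrative gradient descent on a linear 3D-to-2D projector.

    pose_3d: (16, 3) coarse 3D prediction
    pose_2d: (16, 2) predicted 2D pose
    W:       (3, 2) projector weights (stand-in for the projector module)
    The weights are updated to minimize ||pose_3d @ W - pose_2d||^2,
    i.e. the discrepancy between the projected and predicted 2D poses.
    """
    for _ in range(steps):
        residual = pose_3d @ W - pose_2d                   # (16, 2) projection error
        grad = 2.0 * pose_3d.T @ residual / len(pose_3d)   # d(loss)/dW
        W = W - lr * grad
    return W
```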

```shell
# Step 1: download the trained model
cd PROJECT_ROOT
mkdir models
cd models
wget -v https://drive.google.com/open?id=1dMuPuD_JdHuMIMapwE2DwgJ2IGK04xhQ
unzip model_extension_mask3d.zip
rm -r model_extension_mask3d.zip
cd ../

# Step 2: save the coarse 3D predictions
cd test
# change 'data_root' in test_human16.sh
# change 'root_folder' in template_16_merge.prototxt
# test_human16.sh [$1 deploy.prototxt] [$2 trained model] [$3 save dir] [$4 batchsize]
sh test_human16.sh . ../models/model_extension_mask3d/mask3d_iter_400000.caffemodel mask3d 5

# Step 3: online refinement of the 3D pose predictions
# protocol: 1/3, default is 1
# pose2d: ours/hourglass/gt, default is ours
# coarse_3d: the results saved in Step 2
python pred_v2.py --trained_model ../models/model_extension_mask3d/mask3d-400000.pkl --protocol 1 --data_dir /data/h36m/ --coarse_3d ../test/mask3d --save srr_results --pose2d hourglass
```

 

```shell
# Optional: predict the 2D poses yourself.
# The model we use to predict 2D poses is similar to our 3D prediction model without the SSL module.
# Alternatively, you can use Hourglass (https://github.com/princeton-vl/pose-hg-demo) to predict 2D poses.

# Step 1.1: download the trained merge model
cd PROJECT_ROOT
mkdir models && cd models
wget -v https://drive.google.com/open?id=19kTyttzUnm_1_7HEwoNKCXPP2QVo_zcK
unzip our2d.zip
rm -r our2d.zip
# move the 2D prototxt to PROJECT_ROOT/test/
mv our2d/2d ../test/
cd ../

# Step 1.2: save the 2D predictions
cd test
# change 'data_root' in test_human16.sh
# change 'root_folder' in 2d/template_16_merge.prototxt
# test_human16.sh [$1 deploy.prototxt] [$2 trained model] [$3 save dir] [$4 batchsize]
sh test_human16.sh 2d/ ../models/our2d/2d_iter_800000.caffemodel our2d 5
# move the predicted 2D poses into the data dir, or change data_dir in tensorflow/pred_v2.py
mv our2d /data/h36m/ours_2d/bilstm2d-p1-800000

# Step 2 is the same as above.

# Step 3: online refinement of the 3D pose predictions
# protocol: 1/3, default is 1
# pose2d: ours/hourglass/gt, default is ours
# coarse_3d: the results saved in Step 2
python pred_v2.py --trained_model ../models/model_extension_mask3d/mask3d-400000.pkl --protocol 1 --data_dir /data/h36m/ --coarse_3d ../test/mask3d --save srr_results --pose2d ours
```

 

  • Inference by yourself

  The only difference is that you need to convert the caffemodel of the 3D-to-2D projector module into a .pkl file. We provide gen_refinepkl.py in tools/ for this (a rough sketch of the idea is shown below).

```sh
# Follow Steps 1-2 above to produce the coarse 3D predictions and 2D poses.
# Convert the caffemodel of the SRR module to a Python .pkl file
python tools/gen_refinepkl.py CAFFE_ROOT CAFFEMODEL_DIR --pkl_dir model.pkl

# online refinement of the 3D pose predictions
python pred_v2.py --trained_model model.pkl
```

 
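For orientation, the hedged sketch below shows the general shape of such a caffemodel-to-.pkl conversion with pycaffe: load the net and pickle its weight blobs. The real tools/gen_refinepkl.py may select only the SRR / 3D-to-2D projector layers and use a different storage format, so treat this strictly as an illustration.

```python
import pickle
import caffe  # pycaffe built from caffe-3dssl

def caffemodel_to_pkl(prototxt, caffemodel, pkl_path):
    """Dump every layer's weight blobs into a {layer_name: [arrays]} pickle.

    Illustrative only: the actual tools/gen_refinepkl.py may keep only the
    projector (SRR) layers and store them in the format pred_v2.py expects.
    """
    net = caffe.Net(prototxt, caffemodel, caffe.TEST)
    params = {name: [blob.data.copy() for blob in blobs]
              for name, blobs in net.params.items()}
    with open(pkl_path, 'wb') as f:
        pickle.dump(params, f)
```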

  • Evaluation

```shell
# Print MPJPE
run tools/eval_h36m.m

# Visualization of the 2D pose / 3D GT pose / 3D coarse pose / 3D refined pose
# Please change data_root in visualization.m before running
run visualization.m
```
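For reference, MPJPE (mean per joint position error) is the mean Euclidean distance between predicted and ground-truth joints. A minimal NumPy version is sketched below, assuming the poses have already been mapped back from the (x - min) / (max - min) normalization; the provided MATLAB script tools/eval_h36m.m remains the authoritative implementation.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per Joint Position Error, in the same units as the inputs.

    pred, gt: arrays of shape (num_frames, 16, 3), already denormalized
    (e.g. via x * (max - min) + min using 16point_mean_limb_scaled_max_min.csv).
    """
    return np.mean(np.linalg.norm(pred - gt, axis=-1))
```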

Citation

@article{wang20193d,
  title={3D Human Pose Machines with Self-supervised Learning},
  author={Wang, Keze and Lin, Liang and Jiang, Chenhan and Qian, Chen and Wei, Pengxu},
  journal={IEEE transactions on pattern analysis and machine intelligence},
  year={2019},
  publisher={IEEE}
}
Comments
  • Step: Train 2D-to-3D pose

    Dear @chanyn, I ran that step and got the following error. Can you help me?

    (testcaffe) binh@binh:~/3Dpose_ssl-master/caffe-3dssl$ sh examples/2D_to_3D/train.sh
    examples/2D_to_3D/train.sh: 7: examples/2D_to_3D/train.sh: ../../build/tools/caffe: not found

    opened by NguyenDangBinh 10
  • ❌ `ParseError: 13:3 : Message type "caffe.LayerParameter" has no field named "image_data_lstm_param".`

    Attempts at loading a model from the prototxt file, caffemodel file, and proto file have failed in a manner very similar to Issue #22

    Expected Behavior

    Running sh test_human16.sh . should produce the behavior described in the README.md.

    Outside of this, given a .prototxt file, .caffemodel file, and .proto file for an arbitrary caffe model, it should be possible to use any of the various caffe2pytorch utilities (caffe2pytorch, pytorch-caffe, Caffe2Pytorch, deep-learning-model-convertor) to convert the caffe model to a pytorch model (as it does with the other caffe models that have been tested).

    Current Behavior

    In both cases (running the model per the README instructions and trying to convert it), the code returns the following error:

    ParseError: 13:3 : Message type "caffe.LayerParameter" has no field named "image_data_lstm_param".
    

    Possible Solution

    Only two results for image_data_lstm_param come up with a Google search. Both of them are from the 3d pose estimation repo. This suggests that these layer parameters are unique to this caffe model.

    Steps to Reproduce

    1. Download repo with git clone https://github.com/chanyn/3Dpose_ssl.git
    2. Build 3Dssl Caffe & Install Tensorflow
    3. Download the appropriate Dataset and/or provided pre-trained models
    4. Multiple possible options for this step:
    • Run sh test_human16.sh . ../models/model_extension_mask3d/mask3d_iter_400000.caffemodel mask3d 5 as suggested in the README
    • Try converting model to Pytorch with a tool like pytorch-caffe
    from caffenet import *
    
    net = CaffeNet('../our2d/deploy2d.prototxt')
    print(net)
    net.load_weights('../our2d/2d_iter_800000.caffemodel')
    net.eval()
    
    import torch
    import caffemodel2pytorch
    
    model = caffemodel2pytorch.Net(
            prototxt = 'our2d/2d/template_16_merge.prototxt',
    	weights = 'our2d/2d_iter_800000.caffemodel',
            caffe_proto = 'https://raw.githubusercontent.com/chanyn/3Dpose_ssl/master/caffe-3dssl/src/caffe/proto/caffe.proto'
    )
    model.cuda()
    model.eval()
    torch.set_grad_enabled(False)
    

    All will result in the same error: ParseError: 13:3 : Message type "caffe.LayerParameter" has no field named "image_data_lstm_param".

    Detailed Description

    In the latest comment in Issue #22, the topic of "downloading the entire zip file" is brought up.

    Not only does the protobuf file 3Dpose_ssl/caffe-3dssl/src/caffe/proto/caffe.proto appear to be out-of-date, but the following links from the README all seem to lack alternatives:

    For all the zip files that did contain .prototxt and .caffemodel files, no combinations worked with the available caffe.proto.

    Possible Implementation

    Aside from attempting to manually reconstruct and retrain the model architecture from the paper details (which would be highly at-risk of error and/or deviation from the reported results), updating the protobuf file seems like both the quickest and least-risky fix.

    Looking forward to the response, @kezewang & @chanyn

    Context (Environment)

    Both the README instructions and alternative Caffe-to-Pytorch conversions have been attempted in the following environments:

    Environment 1: (Google Colab (pre-Caffe-installation))

      System:
        OS: Linux 4.14 Ubuntu 18.04.3 LTS (Bionic Beaver)
        CPU: (2) x64 Intel(R) Xeon(R) CPU @ 2.20GHz
        Memory: 10.12 GB / 12.72 GB
        Container: Yes
        Shell: 4.4.20 - /bin/bash
      Binaries:
        Node: 8.11.3 - /tools/node/bin/node
        npm: 5.7.1 - /tools/node/bin/npm
      Managers:
        Apt: 1.6.12 - /usr/bin/apt
        pip2: 19.3.1 - /usr/local/bin/pip2
        pip3: 19.3.1 - /usr/local/bin/pip3
      Utilities:
        CMake: 3.12.0 - /usr/local/bin/cmake
        Make: 4.1 - /usr/bin/make
        GCC: 7.4.0 - /usr/bin/gcc
        Git: 2.17.1 - /usr/bin/git
        Clang: 6.0.0-1ubuntu2 - /usr/bin/clang
        FFmpeg: 3.4.6 - /usr/bin/ffmpeg
      Languages:
        Bash: 4.4.20 - /bin/bash
        Java: 1.8.0_242 - /usr/bin/javac
        Perl: 5.26.1 - /usr/bin/perl
        Python: 3.6.9 - /usr/local/bin/python
        Python3: 3.6.9 - /usr/bin/python3
        R: 3.6.2 - /usr/local/bin/R
    

    Environment 2: (Google Colab (post-Caffe-installation))

      System:
        OS: Linux 4.14 Ubuntu 18.04.3 LTS (Bionic Beaver)
        CPU: (2) x64 Intel(R) Xeon(R) CPU @ 2.20GHz
        Memory: 10.12 GB / 12.72 GB
        Container: Yes
        Shell: 4.4.20 - /bin/bash
      Binaries:
        Node: 8.11.3 - /tools/node/bin/node
        npm: 5.7.1 - /tools/node/bin/npm
      Managers:
        Apt: 1.6.12 - /usr/bin/apt
        pip2: 19.3.1 - /usr/local/bin/pip2
        pip3: 19.3.1 - /usr/local/bin/pip3
      Utilities:
        CMake: 3.12.0 - /usr/local/bin/cmake
        Make: 4.1 - /usr/bin/make
        GCC: 7.4.0 - /usr/bin/gcc
        Git: 2.17.1 - /usr/bin/git
        Clang: 6.0.0-1ubuntu2 - /usr/bin/clang
        FFmpeg: 3.4.6 - /usr/bin/ffmpeg
      Languages:
        Bash: 4.4.20 - /bin/bash
        Java: 1.8.0_242 - /usr/bin/javac
        Perl: 5.26.1 - /usr/bin/perl
        Python: 3.6.9 - /usr/local/bin/python
        Python3: 3.6.9 - /usr/bin/python3
        R: 3.6.2 - /usr/local/bin/R
    

    Environment 3: (Local Linux Laptop)

     System:
        OS: Linux 4.15 Ubuntu 18.04.4 LTS (Bionic Beaver)
        CPU: (12) x64 Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
        Memory: 19.04 GB / 31.27 GB
        Container: Yes
        Shell: 4.4.20 - /bin/bash
      Binaries:
        Node: 8.17.0 - /usr/bin/node
        Yarn: 1.21.1 - /usr/bin/yarn
        npm: 6.4.1 - /usr/local/bin/npm
        Watchman: 4.9.0 - /usr/local/bin/watchman
      Managers:
        Apt: 1.6.12 - /usr/bin/apt
        Cargo: 1.31.0 - ~/.cargo/bin/cargo
        CocoaPods: 1.7.3 - /usr/local/bin/pod
        Composer: 1.8.6 - /usr/local/bin/composer
        Gradle: 5.5 - /usr/local/bin/gradle
        Homebrew: 2.1.7 - /usr/local/bin/brew
        Maven: 3.6.1 - /usr/local/bin/mvn
        pip2: 18.1 - /usr/local/bin/pip2
        pip3: 20.0.1 - ~/anaconda3/bin/pip3
      Utilities:
        CMake: 3.10.2 - /usr/bin/cmake
        Make: 4.1 - /usr/bin/make
        GCC: 7.4.0 - /usr/bin/gcc
        Git: 2.17.1 - /usr/bin/git
      Virtualization:
        Docker: 19.03.5 - /usr/bin/docker
        VirtualBox: 5.2.34 - /usr/bin/vboxmanage
      IDEs:
        Android Studio: 3.1 AI-173.4907809
        Emacs: 25.2.2 - /usr/bin/emacs
        Nano: 2.9.3 - /bin/nano
        VSCode: 1.41.1 - /snap/bin/code
        Vim: 8.0 - /usr/bin/vim
      Languages:
        Bash: 4.4.20 - /bin/bash
        Java: 1.8.0_181 - /usr/bin/javac
        Perl: 5.26.1 - /usr/bin/perl
        Python: 3.7.2 - /home/mmcateer0/anaconda3/bin/python
        Python3: 3.7.2 - /home/mmcateer0/anaconda3/bin/python3
      Databases:
        MongoDB: 3.6.3 - /usr/bin/mongo
        SQLite: 3.26.0 - /home/mmcateer0/anaconda3/bin/sqlite3
      Browsers:
        Chrome: 80.0.3987.87
        Firefox: 72.0.2
    
    

    Given the consistency of the problem between environments, it does not seem that reconfiguring Caffe installation is a viable solution to this.

    opened by matthew-mcateer 3
  • Link expired


    Hello, the direct download link containing all datasets and trained models is expired: https://www.dropbox.com/s/szfkir7hgrc9kfw/3Dpose_ssl.tar.gz?dl=0 , could you please create a new one?

    Thank you.

    opened by aljafor 3
  •  skel_vector_loss_layer.cpp:24 Check failed: this->layer_param_.has_skel_vector_param()

    When trying to execute "sh train.sh" inside the caffe-3dssl/examples/3D_to_2D folder, the following error appears:

    I0329 16:58:35.306548 24296 layer_factory.hpp:77] Creating layer 3Dboneloss_h36m1
    I0329 16:58:35.306560 24296 net.cpp:101] Creating Layer 3Dboneloss_h36m1
    I0329 16:58:35.306565 24296 net.cpp:437] 3Dboneloss_h36m1 <- 3dpredict_h36m_slice_pred_0_split_2
    I0329 16:58:35.306572 24296 net.cpp:437] 3Dboneloss_h36m1 <- label3d_slice2_1_split_2
    I0329 16:58:35.306579 24296 net.cpp:411] 3Dboneloss_h36m1 -> 3Dboneloss_h36m1
    this->layer_param_.has_skel_vector_param() = 0
    F0329 16:58:35.306607 24296 skel_vector_loss_layer.cpp:24] Check failed: this->layer_param_.has_skel_vector_param()
    *** Check failure stack trace: ***
        @ 0x7fbdee8f15cd google::LogMessage::Fail()
        @ 0x7fbdee8f3433 google::LogMessage::SendToLog()
        @ 0x7fbdee8f115b google::LogMessage::Flush()
        @ 0x7fbdee8f3e1e google::LogMessageFatal::~LogMessageFatal()
        @ 0x7fbdeefeeafb caffe::SkelVectorLossLayer<>::LayerSetUp()
        @ 0x7fbdef00cc8d caffe::Net<>::Init()
        @ 0x7fbdef00e6e1 caffe::Net<>::Net()
        @ 0x7fbdef16497a caffe::Solver<>::InitTrainNet()
        @ 0x7fbdef164ee7 caffe::Solver<>::Init()
        @ 0x7fbdef16527a caffe::Solver<>::Solver()
        @ 0x7fbdef17a143 caffe::Creator_AdamSolver<>()
        @ 0x40b1f8 train()
        @ 0x407a14 main
        @ 0x7fbded890830 __libc_start_main
        @ 0x408329 _start
        @ (nil) (unknown)
    Aborted (core dumped)

    It seems that there is something wrong with the configuration of the "skel_vector_param" parameter, which is described in caffe/src/caffe.proto:

    message SkelVectorParameter {
      optional int32 dim = 1 [default = 3];
    }

    In the network proto file "examples/3D_to_2D/addsrr.prototxt", the layer description is:

    layer {
      name: "3Dboneloss_h36m1"
      type: "SkelVectorLoss"
      bottom: "3dpredict_h36m"
      bottom: "label3d"
      top: "3Dboneloss_h36m1"
      include { phase: TRAIN }
    }

    Do we need to add "skel_vector_param" to the "3Dboneloss_h36m1" layer definition, like this?

    layer {
      name: "3Dboneloss_h36m1"
      type: "SkelVectorLoss"
      bottom: "3dpredict_h36m"
      bottom: "label3d"
      top: "3Dboneloss_h36m1"
      skel_vector_param { dim: 3 }
      include { phase: TRAIN }
    }

    Thanks for any information

    opened by zhangwangzi2010 3
  • An error occurred when I ran test_human16.sh

    F0311 14:05:49.627907 23105 mpjpe_evalution_layer.cpp:26] Check failed: this->max_min_value_.size() >= 2 (0 vs. 2)
    *** Check failure stack trace: ***
        @ 0x7f6e4d4c85cd google::LogMessage::Fail()
        @ 0x7f6e4d4ca433 google::LogMessage::SendToLog()
        @ 0x7f6e4d4c815b google::LogMessage::Flush()
        @ 0x7f6e4d4cae1e google::LogMessageFatal::~LogMessageFatal()
        @ 0x7f6e4dc15230 caffe::MPJPEEvaluationLayer<>::LayerSetUp()
        @ 0x7f6e4dd53f4f caffe::Net<>::Init()
        @ 0x7f6e4dd56230 caffe::Net<>::Net()
        @ 0x409897 test()
        @ 0x4078f4 main
        @ 0x7f6e4bc5e830 __libc_start_main
        @ 0x408209 _start
        @ (nil) (unknown)
    Aborted (core dumped)

    opened by mingsjtu 2
  • How to project the 3D points onto the 2D square images?


    Thanks for sharing this wonderful work. The camera parameters provided by Human3.6M are for the original images, but how do you project the 3D points onto the cropped square images? It seems that some parameters in the intrinsic matrix, at least the image center, have changed.

    Looking forward to your reply, thanks.

    opened by huge123 1
  • Why does this happen?

    An error occurred when I ran train.sh in examples/2D_to_3D/:

    F0321 11:37:40.849474 8301 insert_splits.cpp:29] Unknown bottom blob 'clip' (layer 'lstm1', bottom index 1)
    *** Check failure stack trace: ***
        @ 0x7f1f11f135cd google::LogMessage::Fail()
        @ 0x7f1f11f15433 google::LogMessage::SendToLog()
        @ 0x7f1f11f1315b google::LogMessage::Flush()
        @ 0x7f1f11f15e1e google::LogMessageFatal::~LogMessageFatal()
        @ 0x7f1f1259725c caffe::InsertSplits()
        @ 0x7f1f125c0d3e caffe::Net<>::Init()
        @ 0x7f1f125c41f1 caffe::Net<>::Net()
        @ 0x7f1f1279905a caffe::Solver<>::InitTrainNet()
        @ 0x7f1f127995c7 caffe::Solver<>::Init()
        @ 0x7f1f1279995a caffe::Solver<>::Solver()
        @ 0x7f1f125cfcd3 caffe::Creator_AdamSolver<>()
        @ 0x40b018 train()
        @ 0x4078d4 main
        @ 0x7f1f106a9830 __libc_start_main
        @ 0x4081e9 _start
        @ (nil) (unknown)
    Aborted (core dumped)

    This is the LSTM part of bilstm.prototxt (I have changed nothing in this .prototxt besides assigning "..../data/h36m" to "root_folder"):

    layer {
      name: "lstm1"
      type: "LSTM"
      bottom: "fc-reshape"
      bottom: "clip"
      top: "lstm1"
      recurrent_param {
        num_output: 1024
        weight_filler { type: "uniform" min: -0.01 max: 0.01 }
        bias_filler { type: "constant" value: 0 }
      }
    }

    I have never used Caffe before. Can someone help with this? Thanks so much!

    opened by wzzz-zh 1
  • test_human16.sh error

    When I modified test_human16.sh, EE369_evaluation.log, and template_16_merge.prototxt, and ran sh test_human16.sh, I received the following error:

    I0317 13:23:44.770185 20271 net.cpp:411] evalution -> evalution
    *** Aborted at 1552800224 (unix time) try "date -d @1552800224" if you are using GNU date ***
    PC: @ 0x7f95b27acfec caffe::MPJPEEvaluationLayer<>::LayerSetUp()
    *** SIGSEGV (@0x0) received by PID 20271 (TID 0x7f95b326b780) from PID 0; stack trace: ***
        @ 0x7f95b080b4b0 (unknown)
        @ 0x7f95b27acfec caffe::MPJPEEvaluationLayer<>::LayerSetUp()
        @ 0x7f95b28ebe4f caffe::Net<>::Init()
        @ 0x7f95b28ee130 caffe::Net<>::Net()
        @ 0x409897 test()
        @ 0x4078f4 main
        @ 0x7f95b07f6830 __libc_start_main
        @ 0x408209 _start
        @ 0x0 (unknown)
    Segmentation fault (core dumped)

    I need your help, please.

    opened by mingsjtu 1
  • What does this mean?

    Dear authors, regarding "After setting up the Human3.6M dataset following its instructions and downloading the training/testing lists, update the "root_folder" paths in CAFFE_ROOT/examples/.../*.prototxt": does that mean I should replace root_folder: "Human3.6m_ROOT_DIR" in bilstm.prototxt, and how should I replace it?

    opened by NguyenDangBinh 1
  • Predict 2D Pose


    Hi!

    I want to predict 2D with the model you provided but I still cannot do it.

    Following your instructions, I'm using "template_16_merge.prototxt" and the file "2d_iter_800000.caffemodel", and I take the results from the layer named "2dpredict". I assumed the 32 values are organized as x1, y1, x2, y2, ..., xn, yn, and since the results are between 0 and 1, that they should be multiplied by max(img_width, img_height). But still, when I plot these points, they don't make sense.

    How should I read the 32 parameters that the model is outputting? Do I need some function to map it back to the input image? Is the pair "template_16_merge.prototxt" and "2d_iter_800000.caffemodel" correct to predict 2d?

    Thank you for your help!

    opened by BarbCoder 0
  • Error with Cublas_v2 on build

    I've been trying to build the Caffe library following the steps in the README, but I keep getting errors because the CUDA header cublas_v2.h cannot be found when I run make. How should I proceed? Here is the full output:

    CXX src/caffe/blob.cpp
    CXX src/caffe/common.cpp
    CXX src/caffe/data_reader.cpp
    CXX src/caffe/data_transformer.cpp
    In file included from ./include/caffe/common.hpp:19:0,
                     from ./include/caffe/blob.hpp:8,
                     from src/caffe/blob.cpp:4:
    ./include/caffe/util/device_alternate.hpp:34:10: fatal error: cublas_v2.h: No such file or directory
     #include <cublas_v2.h>
              ^~~~~~~~~~~~~
    compilation terminated.
    In file included from ./include/caffe/common.hpp:19:0,
                     from ./include/caffe/blob.hpp:8,
                     from ./include/caffe/data_transformer.hpp:6,
                     from src/caffe/data_transformer.cpp:8:
    ./include/caffe/util/device_alternate.hpp:34:10: fatal error: cublas_v2.h: No such file or directory
     #include <cublas_v2.h>
              ^~~~~~~~~~~~~
    compilation terminated.
    Makefile:575: recipe for target '.build_release/src/caffe/blob.o' failed
    make: *** [.build_release/src/caffe/blob.o] Error 1
    make: *** Waiting for unfinished jobs....
    Makefile:575: recipe for target '.build_release/src/caffe/data_transformer.o' failed
    make: *** [.build_release/src/caffe/data_transformer.o] Error 1
    In file included from ./include/caffe/common.hpp:19:0,
                     from src/caffe/data_reader.cpp:6:
    ./include/caffe/util/device_alternate.hpp:34:10: fatal error: cublas_v2.h: No such file or directory
     #include <cublas_v2.h>
              ^~~~~~~~~~~~~
    compilation terminated.
    Makefile:575: recipe for target '.build_release/src/caffe/data_reader.o' failed
    make: *** [.build_release/src/caffe/data_reader.o] Error 1
    In file included from ./include/caffe/common.hpp:19:0,
                     from src/caffe/common.cpp:7:
    ./include/caffe/util/device_alternate.hpp:34:10: fatal error: cublas_v2.h: No such file or directory
     #include <cublas_v2.h>
              ^~~~~~~~~~~~~
    compilation terminated.
    Makefile:575: recipe for target '.build_release/src/caffe/common.o' failed
    make: *** [.build_release/src/caffe/common.o] Error 1
    
    
    opened by skyler14 3
  • Error when running test_human16.sh: Check failed: FLAGS_weights.size() > 0 (0 vs. 0) Need model weights to score.


    Using the code located at the dropbox link in the README, I was able to successfully compile Caffe and PyCaffe. But when I try to run sh test_human16.sh . ../models/model_extension_mask3d/mask3d_iter_400000.caffemodel mask3d 5 I get the following error:

    mask3d/result1
    sed s#DATA_FOLDER#/home/3Dpose_ssl/data/h36m/gt/test/test1.txt#g ./template_16_merge.prototxt > ./test_tmp.prototxt
    F0623 23:22:46.469914 29140 caffe.cpp:263] Check failed: FLAGS_weights.size() > 0 (0 vs. 0) Need model weights to score.
    *** Check failure stack trace: ***
        @     0x7f5ce2a215cd  google::LogMessage::Fail()
        @     0x7f5ce2a23433  google::LogMessage::SendToLog()
        @     0x7f5ce2a2115b  google::LogMessage::Flush()
        @     0x7f5ce2a23e1e  google::LogMessageFatal::~LogMessageFatal()
        @           0x40a5b4  test()
        @           0x4078d4  main
        @     0x7f5ce11b7830  __libc_start_main
        @           0x4081e9  _start
        @              (nil)  (unknown)
    test_human16.sh: line 49: 29140 Aborted                 (core dumped) ../caffe-3dssl/build/tools/caffe test -gpu=1 -model=$tmp_proto_fn_2
    test_human16.sh: line 46: -weights=../models/model_extension_mask3d/mask3d_iter_400000.caffemodel: No such file or directory
    

    Any idea what could be causing this? The file mask3d_iter_400000.caffemodel is definitely in that location. I even tried using a complete path (not a relative one) and got the same error.

    opened by joshhaug 1
  • Why are the joint positions in the MPII and Human3.6M data the authors provided inconsistent?

    Why are the joint positions in the MPII and Human3.6M datasets the authors provided inconsistent? This is the result from the provided MPII and Human3.6M datasets (screenshot attached, 2019-03-28).

    opened by chaytonmin 4