Gait3D-Benchmark

This is the code for the paper "Jinkai Zheng, Xinchen Liu, Wu Liu, Lingxiao He, Chenggang Yan, Tao Mei: Gait Recognition in the Wild with Dense 3D Representations and A Benchmark. (CVPR 2022)". The official project page is here.

What's New

  • [Mar 2022] Another in-the-wild gait dataset, GREW, is now supported.
  • [Mar 2022] Our Gait3D dataset and SMPLGait method are released.

Model Zoo

Gait3D

Input size: 128×88 (numbers in parentheses are for 64×44)

| Method | Rank@1 | Rank@5 | mAP | mINP | Download |
|:---|:---|:---|:---|:---|:---|
| GaitSet (AAAI 2019) | 42.60 (36.70) | 63.10 (58.30) | 33.69 (30.01) | 19.69 (17.30) | model-128 (model-64) |
| GaitPart (CVPR 2020) | 29.90 (28.20) | 50.60 (47.60) | 23.34 (21.58) | 13.15 (12.36) | model-128 (model-64) |
| GLN (ECCV 2020) | 42.20 (31.40) | 64.50 (52.90) | 33.14 (24.74) | 19.56 (13.58) | model-128 (model-64) |
| GaitGL (ICCV 2021) | 23.50 (29.70) | 38.50 (48.50) | 16.40 (22.29) | 9.20 (13.26) | model-128 (model-64) |
| OpenGait Baseline* | 47.70 (42.90) | 67.20 (63.90) | 37.62 (35.19) | 22.24 (20.83) | model-128 (model-64) |
| SMPLGait (CVPR 2022) | 53.20 (46.30) | 71.00 (64.50) | 42.43 (37.16) | 25.97 (22.23) | model-128 (model-64) |

*Note that OpenGait Baseline is equivalent to SMPLGait w/o 3D in our paper.

Cross Domain

Datasets in the Wild (GaitSet, 64×44)

| Source | Target | Rank@1 | Rank@5 | mAP |
|:---|:---|:---|:---|:---|
| GREW (official split) | Gait3D | 15.80 | 30.20 | 11.83 |
| GREW (our split) | Gait3D | 16.50 | 31.10 | 11.71 |
| Gait3D | GREW (official split) | 18.81 | 32.25 | ~ |
| Gait3D | GREW (our split) | 43.86 | 60.89 | 28.06 |

Requirements

  • pytorch >= 1.6
  • torchvision
  • pyyaml
  • tensorboard
  • opencv-python
  • tqdm
  • py7zr
  • tabulate
  • termcolor

Installation

You can modify the second-to-last command below to install PyTorch for your CUDA version.

git clone https://github.com/Gait3D/Gait3D-Benchmark.git
cd Gait3D-Benchmark
conda create --name py37torch160 python=3.7
conda activate py37torch160
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch
pip install pyyaml tensorboard opencv-python tqdm py7zr tabulate termcolor
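
After installing, you can confirm that the pinned PyTorch build sees your GPU with a quick one-liner (this is just a sanity check, not part of the official setup):

python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"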

Data Preparation

Please download the Gait3D dataset by signing an agreement. We ask for your information only to make sure the dataset is used for non-commercial purposes. We will not give it to any third party or publish it publicly anywhere.

Data Pretreatment

Run the following command to preprocess the Gait3D dataset.

python misc/pretreatment.py --input_path 'Gait3D/2D_Silhouettes' --output_path 'Gait3D-sils-64-44-pkl' --img_h 64 --img_w 44
python misc/pretreatment.py --input_path 'Gait3D/2D_Silhouettes' --output_path 'Gait3D-sils-128-88-pkl' --img_h 128 --img_w 88
python misc/pretreatment_smpl.py --input_path 'Gait3D/3D_SMPLs' --output_path 'Gait3D-smpls-pkl'

Data Structure

After the pretreatment, the data structure under the directory should look like this:

├── Gait3D-sils-64-44-pkl
│   ├── 0000
│   │   ├── camid0_videoid2
│   │   │   ├── seq0
│   │   │   │   └── seq0.pkl
├── Gait3D-sils-128-88-pkl
│   ├── 0000
│   │   ├── camid0_videoid2
│   │   │   ├── seq0
│   │   │   │   └── seq0.pkl
├── Gait3D-smpls-pkl
│   ├── 0000
│   │   ├── camid0_videoid2
│   │   │   ├── seq0
│   │   │   │   └── seq0.pkl
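
As a quick sanity check of the pretreatment output, you can load one sequence as in the minimal sketch below. It assumes the OpenGait convention that each silhouette .pkl stores a pickled numpy array of shape [T, H, W] and each SMPL .pkl one of shape [T, 85] (camera + pose + shape parameters); treat these shapes as assumptions to verify against your own files.

import pickle

# Hypothetical sequence paths following the directory layout above.
sil_path = 'Gait3D-sils-64-44-pkl/0000/camid0_videoid2/seq0/seq0.pkl'
smpl_path = 'Gait3D-smpls-pkl/0000/camid0_videoid2/seq0/seq0.pkl'

with open(sil_path, 'rb') as f:
    sils = pickle.load(f)   # assumed: uint8 array of shape [T, 64, 44]
with open(smpl_path, 'rb') as f:
    smpls = pickle.load(f)  # assumed: float array of shape [T, 85]

print(sils.shape, smpls.shape)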

Train

Run the following command:

sh train.sh
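
For reference, train.sh in OpenGait-style codebases is typically a thin wrapper around a distributed launch of the main entry point. A hypothetical equivalent is sketched below; the entry-point path, config file, and GPU count are assumptions, so check the actual train.sh before relying on them (test.sh is analogous, with the test phase):

CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 \
    lib/main.py --cfgs ./config/smplgait.yaml --phase train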

Test

Run the following command:

sh test.sh

Citation

Please cite this paper in your publications if it helps your research:

@inproceedings{zheng2022gait3d,
  title={Gait Recognition in the Wild with Dense 3D Representations and A Benchmark},
  author={Jinkai Zheng and Xinchen Liu and Wu Liu and Lingxiao He and Chenggang Yan and Tao Mei},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

Acknowledgement

Here are some great resources we benefit from:

  • The codebase is based on OpenGait.
  • The 3D SMPL data is obtained by ROMP.
  • The 2D Silhouette data is obtained by HRNet-segmentation.
  • The 2D pose data is obtained by HRNet.
  • The ReID feature used to make Gait3D is obtained by FastReID.
Comments
  • lib/modeling/models/smplgait.py throwing error when training a new dataset

    Hi Jinkai,

    When I try to apply SMPLGait to another dataset, smplgait.py throws the following error during training: smpls = ipts[1][0] # [n, s, d] IndexError: list index out of range. Interestingly, I trained with 4 GPUs: 3 of them could detect the ipts[1][0] tensor with size 1, but the fourth one failed to do so. Could I know how I can solve this?

    opened by zhiyuann 7
  • I have a few questions about the Gait3D-Benchmark dataset

    Hi, I'm jjun. I read your paper with great interest.

    We don't currently live in China, so it is difficult for us to use a dataset hosted on Baidu disk.

    If you don't mind, is there a way to download the dataset from another service (e.g., Google Drive)?

    opened by jjunnii 6
  • Question about 3D SMPL skeleton topology diagram

    Your work promotes the application of gait recognition in real scenes. Could you provide the topology diagram of the 3D SMPL skeleton in Gait3D? The specific meaning of the 24 joint points is not stated in your data description document.

    opened by HL-HYX 4
  • ROMP SMPL transfer

    When I tried to use ROMP to generate the 3D meshes, I found a version conflict with the ROMP used by SMPLGait. Could I know which version of ROMP SMPLGait used? That way I could run SMPLGait on other ReID datasets.

    opened by zhiyuann 3
  • Question about iterations and epochs

    Hi! The total number of iterations in your code is set to 180000, while you report a total of 1200 epochs in your paper. What's the relationship between iterations and epochs?

    opened by yan811 2
  • About data generation

    Hi! I'd like to know some details about the data generation in the NPZ files.

    1. What's the order of "pose"? The SMPL pose parameter should have dimension [24, 3]; how did you convert it to [72,]? Is the order [keypoint1_angle1, keypoint1_angle2, keypoint1_angle3, keypoint2_angle1, keypoint2_angle2, keypoint2_angle3, ...] or [keypoint1_angle1, keypoint2_angle1, ..., keypoint1_angle2, keypoint2_angle2, ..., keypoint1_angle3, keypoint2_angle3, ...]?

    2. How did you generate the pose in SMPL format, SPIN format, and OpenPose format? What's the order of the second dimension? Is the keypoint order the same as in the SMPL model?

    3. In the pkl files: for example, the data in './0000/camid0_videoid2/seq0/seq0.pkl' has dimension [48, 85]. What's the order of the first dimension? Is it ordered by time or shuffled?

    opened by yan811 2
  • GREW pretreatment `to_pickle` has size 0

    I'm trying to run the GREW pretreatment code, but it generates no GREW-pkl folder at the end of the process. I debugged it myself, checking whether the --dataset flag was set properly and the size of the to_pickle list before saving the pickle file. The flag is set correctly, but the size of the list is always 0.

    I downloaded the GREW dataset from the link you sent me and made the GREW-rearranged folder using the code provided. I'll keep investigating what is causing this error, and if I find it I'll open a fixing PR.

    opened by gosiqueira 1
  • About the pose data

    Could you provide a detailed description of the pose data? This is the path of one frame's pose and the corresponding content of the txt file: Gait3D/2D_Poses/0000/camid9_videoid2/seq0/human_crop_f17279.txt '311,438,89.201164,62.87694,0.57074964,89.201164,54.322254,0.47146344,84.92382,62.87694,0.63443935,42.150383....' I have 3 questions. Q1: What does 'f17279' mean? Q2: What does the first number (e.g. 311) in the txt file mean? Q3: Which number ('f17279' or '311') should I use as the key when ordering the sequence? Thank you very much!

    opened by HiAleeYang 0