Inference code for the paper "StylePeople: A Generative Model of Fullbody Human Avatars". This code covers the part of the paper describing video-based avatars.

Overview

NeuralTextures

This repository contains inference code for the paper "StylePeople: A Generative Model of Fullbody Human Avatars" (CVPR 2021). It covers the part of the paper describing video-based avatars. For inference of the generative neural textures model, refer to this repository.

Getting started

Data

To use this repository, you first need to download the model checkpoints and some auxiliary files.

  • Download the archive with data from Google Drive and unpack it into NeuralTextures/data/. It contains:
    • checkpoints for the generative model and the encoder network (data/checkpoint)
    • SMPL-X parameters for samples from the AzurePeople dataset to run the inference script on (data/smplx_dicts)
    • some auxiliary data (data/uv_render and data/*.npy)
  • Download the SMPL-X models (SMPLX_{MALE,FEMALE,NEUTRAL}.pkl) from the SMPL-X project page and move them to data/smplx/ (a quick layout check is sketched below)
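
A quick sanity check, as a minimal sketch: it assumes you unpacked everything into the default locations listed above, so adjust the paths if you placed the data elsewhere.

import os

# Minimal sketch: check that the downloaded data sits where the scripts expect it.
expected = [
    'data/checkpoint',
    'data/smplx_dicts',
    'data/uv_render',
    'data/smplx/SMPLX_NEUTRAL.pkl',
]
for path in expected:
    print(path, 'ok' if os.path.exists(path) else 'MISSING')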

Docker

The easiest way to set up an environment for this repository is to use the provided Docker image. To build and use it, follow these steps:

  1. Build the image with the following command:
bash docker/build.sh
  2. Start a container:
bash docker/run.sh

This mounts the root directory of the host system to /mounted/ inside the container and sets the cloned repository path as the starting directory.

  3. Inside the container, install minimal_pytorch_rasterizer (unfortunately, Docker fails to install it during image building):
pip install git+https://github.com/rmbashirov/minimal_pytorch_rasterizer
  4. (Optional) You can then commit the changes to the image so that you don't need to install minimal_pytorch_rasterizer for every new container. See the Docker documentation.

Usage

For now, the only scenario in this repository is rendering an image of a person from the AzurePeople dataset with given SMPL-X parameters.

Example:

python render_azure_person.py --person_id=04 --smplx_dict_path=data/smplx_dicts/04.pkl --out_path=data/results/

will render the person with id='04' using the SMPL-X parameters from data/smplx_dicts/04.pkl and save the resulting images to data/results/04.

For the ids of all 56 people, consult this table.
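
If you want to inspect the SMPL-X parameters passed to the script, a minimal sketch is shown below. The exact keys and array shapes depend on how the dicts in data/smplx_dicts/ were saved, so print them rather than assume a particular layout.

import pickle

# Minimal sketch: print the contents of one SMPL-X parameter dict.
# Assumes the pickle holds a dict of arrays/tensors; adjust if it does not.
with open('data/smplx_dicts/04.pkl', 'rb') as f:
    smplx_dict = pickle.load(f)

for key, value in smplx_dict.items():
    print(key, type(value).__name__, getattr(value, 'shape', None))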

Comments
  • Adding Pose to the clothed SMPL-X Model


    How do I add a pose to the clothed SMPL-X model? I tried to do this in the following way, but did not succeed. I see that the inferer.make_rotation_images() function calls the load_smplx() function, in which the SMPL-X model is loaded and its mesh vertices (in the variable smpl_output.vertices) are obtained. The model output has the following attributes:

    vertices <class 'torch.Tensor'>, torch.Size([1, 10475, 3])
    joints <class 'torch.Tensor'>, torch.Size([1, 127, 3])
    full_pose <class 'NoneType'>, 
    global_orient <class 'torch.Tensor'>, torch.Size([1, 3])
    transl <class 'NoneType'>, 
    betas <class 'torch.Tensor'>, torch.Size([1, 10])
    body_pose <class 'torch.Tensor'>, torch.Size([1, 63])
    left_hand_pose <class 'torch.Tensor'>, torch.Size([1, 45])
    right_hand_pose <class 'torch.Tensor'>, torch.Size([1, 45])
    expression <class 'torch.nn.parameter.Parameter'>, torch.Size([1, 10])
    jaw_pose <class 'torch.Tensor'>, torch.Size([1, 3])
    
    

    I tried assigning values to body_pose, left_hand_pose, and right_hand_pose. However, this does not modify smpl_output.vertices, and consequently, the generated output images are not modified.

    opened by hshreeshail 2
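
    A likely explanation, sketched here as an assumption rather than a confirmed answer: smpl_output is only a container of results, so assigning new values to its fields does not recompute the vertices. The new pose has to go through the SMPL-X model's forward call itself. The snippet below assumes the standard smplx Python package and uses illustrative arguments:

    import torch
    import smplx

    # Load the SMPL-X body model; paths and arguments are illustrative.
    model = smplx.create('data/smplx', model_type='smplx', gender='neutral', use_pca=False)

    # Re-run the forward pass with the desired pose; this recomputes the vertices.
    new_body_pose = torch.zeros(1, 63)  # 21 body joints x 3 axis-angle values
    output = model(body_pose=new_body_pose, return_verts=True)
    print(output.vertices.shape)  # torch.Size([1, 10475, 3])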
  • OSError: Failed to interpret file 'data/smplx/SMPLX_NEUTRAL.pkl' as a pickle


    I downloaded the SMPL-X models from the SMPL-X project page and moved them to data/smplx/. But, unfortunately, I run into this error:

    "Failed to interpret file %s as a pickle" % repr(file)) OSError: Failed to interpret file 'data/smplx/SMPLX_NEUTRAL.pkl' as a pickle

    Could you give me a hand? Thanks~

    opened by hzhao1997 1
  • How to pretrain the rendering network?


    Thank you for your contributions!

    In the StylePeople paper (Sec. 3.2 and the supplementary material), you mention that the renderer network can be pretrained on 56 people from the AzurePeople dataset. Based on my understanding, both the neural textures and the weights of the renderer network are unknown, so jointly training the person-specific neural textures and a generalizable renderer network seems very hard and underdetermined. Could you please give more detailed instructions on how to pretrain the rendering network? Thank you so much!

    opened by ryancll 0
  • Issue with visualization output


    I ran the render_azure_person.py script. I loaded the pose values from a different image. While the pose transfer is correct, I don't understand why the visualization of certain parts such as fingers is suffering from artifacts. See the example pasted below.

    Left side: output on a standard SMPL-X model; right side: output on the clothed model (person_id=04). [image: pixie_vs_stylepeople] We can clearly see that the fingers are not well separated in the visualization. What is causing this, and how can it be solved?

    opened by hshreeshail 0
Owner
Visual Understanding Lab @ Samsung AI Center Moscow