Code for Neural-GIF: Neural Generalized Implicit Functions for Animating People in Clothing (ICCV 2021)

Overview

NeuralGIF

We present Neural Generalized Implicit Functions (Neural-GIF) to animate people in clothing as a function of body pose. Neural-GIF learns directly from scans, models complex clothing, and produces pose-dependent details for realistic animation. For four different characters, we show the query input pose on the left (illustrated with a skeleton) and our output animation on the right.

Dataset and Pretrained models

https://nextcloud.mpi-klsb.mpg.de/index.php/s/FweAP5Js58Q9tsq

Installation

1. Install kaolin: https://github.com/NVIDIAGameWorks/kaolin

2. conda env create -f neuralgif.yml

3. conda activate neuralgif
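
A quick sanity check that the environment is usable (a minimal sketch; it only assumes the packages installed above):

    # Sanity check: confirm that torch and kaolin import and that CUDA is visible.
    import torch
    import kaolin

    print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
    print("kaolin:", kaolin.__version__)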

Training NeuralGIF

 1. Edit configs/*.yaml with the correct paths (a config check is sketched after this list):
        a. data/data_dir: <path to data directory>
        b. data/split_file: <path to train/test split file> (see example in dataset folder)
        c. experiment/root_dir: <training output directory>
        d. experiment/exp_name: <exp_name>
 2. python trainer_shape.py --config=<path to config file>
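
A minimal sketch for verifying the edited config before training; the field names follow the list above, and clothseq.yaml is just an example file name:

    # Load a config and confirm the required fields are filled in.
    import yaml

    with open("configs/clothseq.yaml") as f:
        cfg = yaml.safe_load(f)

    for section, key in [("data", "data_dir"), ("data", "split_file"),
                         ("experiment", "root_dir"), ("experiment", "exp_name")]:
        value = cfg.get(section, {}).get(key)
        assert value, f"{section}/{key} is not set in the config"
        print(f"{section}/{key}: {value}")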

Generating meshes from NeuralGIF

1. python generator.py --config=<path to config file>
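
To inspect the generated meshes, something like the following works (trimesh and the output path are assumptions; any mesh library will do, and the actual output location depends on your config):

    # Load one generated mesh and print basic statistics.
    import trimesh

    mesh = trimesh.load_mesh("out/mesh_000000.obj")  # hypothetical output path
    print("vertices:", mesh.vertices.shape, "faces:", mesh.faces.shape)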

Data preparation

1. Obtain SMPL pose and shape parameters (e.g. via IPNet): https://github.com/bharat-b7/IPNet

2. Save the registration data and original scan data as follows (a loading sketch follows these steps):
    
    a. data_dir/scan_dir: contains the original scans
    b. data_dir/beta.npy: SMPL beta (shape) parameters of the subject
    c. data_dir/pose.npz: SMPL pose parameters for all frames of the scan

3. Prepare training data:
    python prepare_data/scan_data.py -data_dir=<path to data directory>
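
A minimal sketch of loading this layout, assuming standard NumPy files (the quoted array shape is the usual SMPL convention, not verified against this repo):

    # Load the prepared data layout described above.
    import numpy as np

    data_dir = "data_dir"                    # path set in the config
    beta = np.load(f"{data_dir}/beta.npy")   # SMPL shape parameters, typically (10,)
    poses = np.load(f"{data_dir}/pose.npz")  # per-frame SMPL pose parameters
    print("beta:", beta.shape, "| pose arrays:", poses.files)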

Visualisation

python visualisation/render_meshes.py -mesh_path=<folder containing meshes> -out_dir=<output dir>

Citation

@inproceedings{tiwari21neuralgif,
  title = {Neural-GIF: Neural Generalized Implicit Functions for Animating People in Clothing},
  author = {Tiwari, Garvita and Sarafianos, Nikolaos and Tung, Tony and Pons-Moll, Gerard},
  booktitle = {International Conference on Computer Vision ({ICCV})},
  month = {October},
  year = {2021},
}
Comments
  • Some obj files of the ShrugsPants sequence are broken or empty.

    Hi Garvita, I am trying to load ClothSeq's data with trimesh. My code looks like this:

    import trimesh as tr
    
    mesh = tr.load_mesh(path, 'obj')
    verts = mesh.vertices
    

    It works fine for JacketsPants and JacketsShorts, but reports errors for some frames of ShrugsPants:

    1. When loading 000447.obj, the error information is:
    invalid literal for int() with base 10: '20674\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x
    
    2. For some frames (e.g. 000304.obj, 000865.obj), I encounter:
    'Scene' object has no attribute 'vertices'
    

    I manually loaded one of these frames and found no geometry in the Scene object.

    3. I also see warnings like this:
    unable to load materials from: Jacket.001170.mtl
    

    I believe this can be worked around by removing the material reference in the header of each obj file:

    # 3dMD
    # 3D Surface file output, Multi-image Stereo - 4.1.5.3 - 190424
    # File type: ASCII OBJ
    # All enquiries to : [email protected]
    # COPYRIGHT: (c) 1998-2019 3dMD Technologies Ltd.
    # COMMERCIAL IN CONFIDENCE.
    
    mtllib Jacket.001170.mtl
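
    For reference, a hypothetical cleanup along these lines should drop the material references (the folder path is a placeholder):

    # Strip material references so trimesh loads the geometry directly.
    from pathlib import Path
    
    for obj_path in Path("ShrugsPants").glob("*.obj"):
        lines = obj_path.read_text(errors="ignore").splitlines()
        kept = [l for l in lines if not l.startswith(("mtllib", "usemtl"))]
        obj_path.write_text("\n".join(kept) + "\n")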
    

    Thanks for your time!

    opened by ShenhanQian 4
  • The pretrained ClothSeq models are unusable

    I'm trying to generate meshes with the pretrained clothseq model from this link: https://nextcloud.mpi-klsb.mpg.de/index.php/s/FweAP5Js58Q9tsq?path=%2Fpretrained_models%2Fsingle_shape%2Fclothseq_1. However, I got:

    Traceback (most recent call last):
      File "/public/home/xujl1/projects/human-animation/neuralgif/generator.py", line 34, in <module>
        train(opt)
      File "/public/home/xujl1/projects/human-animation/neuralgif/generator.py", line 18, in train
        gen = gen( opt=opt, checkpoint=args.checkpoint, resolution=resolution)
      File "/public/home/xujl1/projects/human-animation/neuralgif/models/generate_shape.py", line 47, in __init__
        self.load_checkpoint_path(checkpoint)
      File "/public/home/xujl1/projects/human-animation/neuralgif/models/generate_shape.py", line 178, in load_checkpoint_path
        self.model_occ.load_state_dict(checkpoint['model_state_occ_dict'])
      File "/public/home/xujl1/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1482, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for CanSDF:
            size mismatch for layers.0.weight: copying a param with shape torch.Size([960, 3075]) from checkpoint, the shape in current model is torch.Size([960, 75]).
    

    I modified config['model']['CanSDF']['num_parts'] from 24 to 1024, then I got:

    Traceback (most recent call last):
      File "/public/home/xujl1/projects/human-animation/neuralgif/generator.py", line 34, in <module>
        train(opt)
      File "/public/home/xujl1/projects/human-animation/neuralgif/generator.py", line 18, in train
        gen = gen( opt=opt, checkpoint=args.checkpoint, resolution=resolution)
      File "/public/home/xujl1/projects/human-animation/neuralgif/models/generate_shape.py", line 47, in __init__
        self.load_checkpoint_path(checkpoint)
      File "/public/home/xujl1/projects/human-animation/neuralgif/models/generate_shape.py", line 181, in load_checkpoint_path
        self.model_wgt.load_state_dict(checkpoint['model_state_wgt_dict'])
      File "/public/home/xujl1/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1482, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for WeightPred:
            size mismatch for layers.0.weight: copying a param with shape torch.Size([960, 837]) from checkpoint, the shape in current model is torch.Size([960, 27]).
    

    Then I modified config['model']['WeightPred']['num_parts'] from 24 to 834, then I got:

    File "/public/home/xujl1/projects/human-animation/neuralgif/generation_iterator.py", line 41, in gen_iterator
        logits, min, max,can_pt = gen.generate_mesh(data)
      File "/public/home/xujl1/projects/human-animation/neuralgif/models/generate_shape.py", line 109, in generate_mesh
        weight_pred = self.model_wgt(pointsf, body_enc_feat, pose_in)
      File "/public/home/xujl1/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/public/home/xujl1/projects/human-animation/neuralgif/models/network/net_modules.py", line 68, in forward
        x_net = self.actvn(self.layers[i](x_net))
      File "/public/home/xujl1/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/public/home/xujl1/anaconda3/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 103, in forward
        return F.linear(input, self.weight, self.bias)
      File "/public/home/xujl1/anaconda3/lib/python3.9/site-packages/torch/nn/functional.py", line 1848, in linear
        return torch._C._nn.linear(input, weight, bias)
    RuntimeError: mat1 and mat2 shapes cannot be multiplied (1000000x27 and 837x960)
    

    I think the problem is that the clothseq.yaml provided in this repo does not match the configuration you actually used for training. Could you fix this?
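
    For reference, a small sketch like this can read the expected first-layer shapes straight from the checkpoint before editing the yaml (the checkpoint path is a placeholder; the dict keys follow the tracebacks above):

    # Print the first-layer weight shapes stored in the checkpoint to recover
    # the input sizes the training config actually used.
    import torch
    
    ckpt = torch.load("checkpoint.pt", map_location="cpu")  # placeholder path
    for key in ("model_state_occ_dict", "model_state_wgt_dict"):
        for name, tensor in ckpt[key].items():
            if name.endswith("layers.0.weight"):
                print(key, name, tuple(tensor.shape))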

    opened by bluestyle97 2
  • prepare_data/scan_data.py is missing

    Hi, researchers,

    Thanks for releasing the code for this excellent work.

    It seems that prepare_data/scan_data.py is missing, and the data folder is empty as well.

    opened by StevenLiuWen 2
  • Missing files in load_data.py

    Hi, in models/load_data.py#36, skinning_body.npy and 000_tpose.npy are loaded, but I didn't find these two files in the released data. Where do they come from? Thanks.

    opened by bluestyle97 1
  • I'd like to run "Generating meshes from NeuralGIF"

    I am currently looking to try this program after reading through your paper.

    First, I wanted to run "Generating meshes from NeuralGIF". I built the environment according to "Installation" and downloaded the data and pretrained models from "https://nextcloud.mpi-klsb.mpg.de/index.php/s/FweAP5Js58Q9tsq". Then I tried to run generator.py, but it clearly does not work with just the downloaded files. So I would like to ask a few questions.

    Q.1 According to your answer in #3, it is not necessary to run IPNet registration for the ClothSeq dataset. However, the files generator.py needs as input are clearly missing, so do I still need to run "Data preparation"?

    Q.2 The paper says registration is not required, so why is it needed here? For pure inference, shouldn't the program generate a mesh from SMPL parameters as input?

    Q.3 It would be helpful if you could describe the data layout in detail in the README. I am also unsure about the meaning of each field in the config.

    opened by graywolf0918 0
  • About training loss in paper and implementation

    Hi,

    In the paper, you say that pretraining is conducted with supervision on the blend weights only (wgt in the code below). However, there are several losses besides the skinning-weight loss, named diff_can and spr_wgt. https://github.com/garvita-tiwari/neuralgif/blob/40a4a96c234aaaa795770e25ae10be958b56932f/models/train_shape.py#L156-L160

    The definition of diff_can is obvious, but spr_wgt, defined below, is hard for me to understand. Could you explain what this loss means, or point me to any reference related to it? https://github.com/garvita-tiwari/neuralgif/blob/40a4a96c234aaaa795770e25ae10be958b56932f/models/train_shape.py#L152-L153

    Thanks in advance :)

    opened by SuwoongHeo 0
  • Adding clothing capability to SMPL-X body model

    I am working on a problem which requires animating finger movements and facial expressions. Hence, I am using PIXIE, which uses the SMPL-X model. However, PIXIE works on the vanilla, undressed SMPL-X model. I was wondering what the easiest way is to add the clothing feature to PIXIE (i.e., I want PIXIE's output on a clothed SMPL-X model). It would suffice for the clothing/skin color to be a fixed value i.e., it doesn't have to be derived from the input image.

    opened by hshreeshail 2
  • Question about supervision for training on ClothSeq data.

    Hi, thanks for releasing your code!

    If I understand the following code correctly, in the first stage the skinning weights for both the SMPL and the clothed human meshes are predicted. My question is how the ground-truth weights for the clothed human (skin_bp) are calculated. https://github.com/garvita-tiwari/neuralgif/blob/5e602563d24559bffc16188f11ef0a36fba8c9db/models/train_shape.py#L138-L139

    https://github.com/garvita-tiwari/neuralgif/blob/5e602563d24559bffc16188f11ef0a36fba8c9db/models/train_shape.py#L150-L151

    Moreover, the clothed human in the canonical pose (can_pts_gt) is also used for supervision. https://github.com/garvita-tiwari/neuralgif/blob/5e602563d24559bffc16188f11ef0a36fba8c9db/models/train_shape.py#L148-L149

    I understand how these variables can be obtained for an SMPL mesh, but how are they obtained for a clothed human mesh?

    opened by GostInShell 0
  • Question about SMPL parameters.

    Dear Garvita, thanks for your amazing work.

    I am trying to use ClothSeq, but I find the SMPL fit not well aligned to the scan, particularly at the feet and hands (snapshot attached).

    Is this abnormal? I am using the SMPL body model v1.0 with 10 principal components.

    opened by ShenhanQian 7