《Learning an Intrinsic Garment Space for Interactive Authoring of Garment Animation》

Overview


This is the demo code for training a motion invariant encoding network. The following diagram provides an overview of the network structure.

For more information, please visit http://geometry.cs.ucl.ac.uk/projects/2019/garment_authoring/

(Figure: overview of the network structure)

Structure


The project's directory layout is shown below. The data set is in the data_set folder, including cloth meshes (generated by Maya Qualoth), the garment template, character animations, and skeletons. Some supporting files can be found in support. The shape feature descriptor and the motion invariant encoding network are saved in nnet.

├─data_set
│  ├─anim
│  ├─case
│  ├─garment
│  ├─skeleton
│  └─Maya
├─nnet
│  ├─basis
│  └─mie
├─support
│  ├─eval_basis
│  ├─eval_mie
│  ├─info_basis
│  └─info_mie
└─scripts
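
As a quick sanity check before training, one could verify that this layout is present. The helper below is hypothetical (not part of the repo); the directory names are taken from the tree above.

```python
import os

# Expected subdirectories, taken from the directory tree above.
EXPECTED_DIRS = [
    "data_set/anim", "data_set/case", "data_set/garment",
    "data_set/skeleton", "data_set/Maya",
    "nnet/basis", "nnet/mie",
    "support/eval_basis", "support/eval_mie",
    "support/info_basis", "support/info_mie",
    "scripts",
]

def missing_dirs(root="."):
    """Return every expected subdirectory that is absent under root."""
    return [d for d in EXPECTED_DIRS if not os.path.isdir(os.path.join(root, d))]
```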

In the scripts folder, there are several Python scripts that implement the training process. We also provide a data set for testing, generated from a dancing animation sequence and a skirt.

Data Set


The data set includes not only the meshes and the garment template, but also some supporting information. You can check the animation in the Maya folder. The animation information is saved in the anim folder. The case folder contains many meshes generated by Qualoth with different simulation parameters. The garment template is in the garment folder.
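
The animation information is stored as NumPy .npy arrays. The sketch below shows how such a file would be read; the array shape and contents here are synthetic stand-ins (my assumption, not taken from the repo), so the snippet is self-contained.

```python
import os
import tempfile
import numpy as np

# Stand-in animation array; the real files live under data_set/anim.
# The (frames, joints, 3) shape is an assumption for illustration only.
frames, joints = 800, 24
anim = np.random.rand(frames, joints, 3).astype(np.float32)

path = os.path.join(tempfile.gettempdir(), "example.anim.npy")
np.save(path, anim)

# This is how any of the repo's .npy files would be loaded back:
loaded = np.load(path)
```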


Installation


  • Clone the repo:
git clone https://github.com/YuanBoot/Intrinsic_Garment_Space.git

Model Training


Shape Descriptor

After all the preparation is done, you can start training the network. In the scripts folder, the scripts named basis_* are used to train the shape descriptor.

Run them as follows:

01.basis_prepare.py (data preparing)

02.basis_train.py (training)

03.basis_eval.py (evaluation)
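
The three steps above can be sketched as a small driver. This is a hypothetical convenience script, not part of the repo; the script names are the ones listed above, and with dry_run=True it only builds the commands without executing anything.

```python
import subprocess

STEPS = ["01.basis_prepare.py", "02.basis_train.py", "03.basis_eval.py"]

def run_stage(steps, dry_run=True):
    """Build (and optionally run) the per-step commands in order."""
    cmds = [["python", s] for s in steps]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)  # stop on the first failure
    return cmds
```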

After running scripts 01 and 02, there will be a *.net file in the nnet/basis folder. It is the shape feature descriptor.

The figure below shows the result for a specific frame after running the 03.basis_eval.py script. The yellow skirt is our output and the blue one is the ground truth. If the loss of the descriptor is low enough, the two skirts almost overlap.

(Figure: yellow = our output, blue = ground truth)

Motion Invariant Encoding

Then you can run the mie_*.py scripts to train the motion invariant encoding network.

04.mie_prepare.py (data preparing)

05.mie_train.py (training)

06.mie_eval.py (evaluation)

If everything goes well, the exported mesh should look like the following figure. The output from 06.mie_eval.py is painted red, and the green mesh is the ground truth.

(Figure: red = output of 06.mie_eval.py, green = ground truth)
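
Beyond the visual overlay, a simple quantitative check is the mean per-vertex error between the evaluated mesh and the ground truth. This is my own sketch with synthetic vertex arrays, not a metric the repo's scripts report:

```python
import numpy as np

def mean_vertex_error(pred, gt):
    """Mean Euclidean distance between corresponding vertices of (N, 3) arrays."""
    return float(np.linalg.norm(pred - gt, axis=1).mean())

# Synthetic stand-ins: pretend the prediction is off by 0.01 on every axis,
# so the per-vertex error is 0.01 * sqrt(3) for all vertices.
gt = np.zeros((100, 3))
pred = gt + 0.01
err = mean_vertex_error(pred, gt)
```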


Comments
  • Empty data_set directory?

    Hi there, I'm interested in giving this technique a try, but it looks like the data_set folder is empty. Are there more updates we can expect in order to run these examples?

    Thanks! -- Paul

    opened by pkanyuk 4
  • Mie_eval is not generalizing to any new motions.  Is more data needed?

    Hi there, it's me again. First, I wanted to thank you for providing this working implementation, I was able to run all the example scripts and produce very high quality results! Once I got the networks trained, I've been experimenting a bit to see how well they work on new inputs. Unfortunately, running 06.mie_eval.py on new data, even just a copy of data_set/anim/anim_01/Dude.anim.npy with slight perturbations, created very unstable results. I was digging in to debug the problem, when it occurred to me that 05.mie_train.py is only training on the 800 frames of animation in Dude.anim.py. This doesn't seem like enough data to allow much if any generalization to new motions. Checking the original paper, 30,000 frames of animation was used, which I imagine leads to much better results. Is there any chance you can provide that data set?

    Thanks! -- Paul

    opened by pkanyuk 2
  • Function not working as expected in 05.mie_train.py

    Hello, I am trying to use your code for my project, but I find something not working as expected in 05.mie_train.py. The function is get_same_material_list().

    def get_same_material_list(flist_train, batchsize):

        blst = np.random.permutation(len(flist_train))
        xlst = []
        t = 0

        for i in range(len(blst)):
            if flist_train[blst[i]][1] == flist_train[blst[0]][1]:
                xlst.append(blst[i])
                t = t + 1
                if t == batchsize:
                    break

        return xlst


    If I understand correctly, the file list is a list of strings of the form "case_01 XX 123" or "case_01 YY 234", where XX and YY correspond to the simulation parameters and the last number indicates the frame number. The line if flist_train[blst[i]][1] == flist_train[blst[0]][1] tries to find mesh files with the same simulation parameter. However, flist_train[blst[i]] is a whole string, so indexing it with [1] yields its second character (always 'a' in "case_..."), not XX or YY. In effect the function just returns batchsize random samples from the training set.
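
    A corrected version (my own sketch, not taken from the repo) would split each record and compare the parsed material token instead of a single character:

```python
import numpy as np

def get_same_material_list(flist_train, batchsize):
    """Pick up to batchsize indices whose records share one material token.

    Each record is assumed to be a whitespace-separated string of the
    form "case_01 XX 123" -> [case, material, frame].
    """
    blst = np.random.permutation(len(flist_train))
    ref_material = flist_train[blst[0]].split()[1]
    xlst = []
    for i in blst:
        if flist_train[i].split()[1] == ref_material:
            xlst.append(int(i))
            if len(xlst) == batchsize:
                break
    return xlst
```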

    Thanks for your attention.

    opened by non-void 0
  • When generating cloth with new human motions, what is the input?

    When generating cloth for new human motions, what is the input? I found that npy_path at line 98 of 06.mie_eval.py points to the eval mesh corresponding to the new motion, which is the ground truth. Is that a bug? So what is the input when testing new motions? Maybe a randomly selected model from the training data set with the same material as the ground truth (eval mesh), but different inputs may produce different results.

    opened by lanchen2019 0
Owner
YuanBo