
Overview

DeformingThings4D dataset

Video | Paper

DeformingThings4D is a synthetic dataset containing 1,972 animation sequences spanning 31 categories of humanoids and animals.


Animation file (.anime)

An animation example. Colors indicate dense correspondence.

We store each animation sequence in a .anime file. The first frame is the canonical frame, for which we store the triangle mesh. For the 2nd through the last frame, we store the 3D offsets of the mesh vertices relative to the canonical frame.

#length         #type       #contents
|1              |int32      |nf: number of frames in the animation
|1              |int32      |nv: number of vertices in the mesh (mesh topology is fixed across frames)
|1              |int32      |nt: number of triangle faces in the mesh
|nv*3           |float32    |vertex data of the 1st frame (3D positions in x, y, z order)
|nt*3           |int32      |triangle face data of the 1st frame
|(nf-1)*nv*3    |float32    |3D offset data from the 2nd frame to the last frame
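
This layout can be parsed with a few lines of NumPy. Below is a minimal sketch; the `read_anime` name and variable names are illustrative, not part of the official code:

```python
import numpy as np

def read_anime(path):
    """Parse a .anime file into its header, canonical mesh, and per-frame offsets."""
    with open(path, "rb") as f:
        # Header: number of frames, vertices, and triangle faces (one int32 each).
        nf, nv, nt = np.fromfile(f, dtype=np.int32, count=3)
        # Canonical (1st-frame) mesh.
        vertices = np.fromfile(f, dtype=np.float32, count=nv * 3).reshape(nv, 3)
        faces = np.fromfile(f, dtype=np.int32, count=nt * 3).reshape(nt, 3)
        # Per-vertex 3D offsets of frames 2..nf relative to the canonical frame.
        offsets = np.fromfile(f, dtype=np.float32, count=(nf - 1) * nv * 3)
        offsets = offsets.reshape(nf - 1, nv, 3)
    return nf, nv, nt, vertices, faces, offsets

# The vertices of frame i (0-based, i >= 1) are recovered as:
#   frame_i_vertices = vertices + offsets[i - 1]
```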

1,972 animations

Currently, we provide 200 animations for humanoids and 1,772 animations for animals. The following shows the structure of the dataset; each animation folder also contains screenshots of the animation.

|--humanoids (200 animations, 34,228 frames)
|   |--clarie_run                # an animation folder ([objectID]_[ActionID])
|   |   |--clarie_run.anime      # animation file, storing the canonical mesh and per-frame offsets
|   |   |--screenshots           # screenshots of the animation
|   |   |--clarie_run.fbx        # raw Blender animation file, only available for humanoids
|--animals (1,772 animations, 88,137 frames)
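
Given this layout, one possible way to iterate over all sequences is sketched below (`DeformingThings4D` is a placeholder for wherever the archive was extracted; folder names are split on the first underscore into object and action IDs):

```python
from pathlib import Path

root = Path("DeformingThings4D")  # placeholder path to the extracted dataset

for split in ("humanoids", "animals"):
    for anime_file in sorted((root / split).glob("*/*.anime")):
        # Folder names follow [objectID]_[ActionID], e.g. clarie_run or bear3EP_Death1.
        object_id, action_id = anime_file.stem.split("_", 1)
        print(split, object_id, action_id, anime_file)
```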


Use cases of the dataset

The dataset is designed for tackling non-rigid motion and deformation tasks with data-driven approaches.

The dataset generalizes well to real-world scans: models trained on it have been used for real-world scene flow estimation and for 4DComplete non-rigid reconstruction.


Download Data

Currently, we provide the .anime files for all 1,972 animations. If you would like to download the DeformingThings4D data, please fill out this Google form; once your request is accepted, we will send you the link to download the data.

We can also provide Blender-generated scene flow, RGB-D sequences, and volume data upon request. You can also generate this data from the .anime files yourself using the Blender scripts.
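
If you prefer to reconstruct the geometry yourself, the per-frame meshes follow directly from the .anime layout described above. A minimal sketch, reusing the illustrative `read_anime` helper from earlier and assuming `trimesh` for OBJ export (any mesh writer works):

```python
import trimesh  # assumed available for mesh export; any OBJ writer would do

nf, nv, nt, vertices, faces, offsets = read_anime("clarie_run/clarie_run.anime")

for i in range(nf):
    # Frame 0 is the canonical mesh; later frames add the stored per-vertex offsets.
    verts = vertices if i == 0 else vertices + offsets[i - 1]
    mesh = trimesh.Trimesh(vertices=verts, faces=faces, process=False)
    mesh.export(f"clarie_run_frame_{i:04d}.obj")
```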

Citation

If you use the DeformingThings4D data or code, please cite:

@inproceedings{li20214dcomplete,
    title={4DComplete: Non-Rigid Motion Estimation Beyond the Observable Surface},
    author={Li, Yang and Takehara, Hikari and Taketomi, Takafumi and Zheng, Bo and Nie{\ss}ner, Matthias},
    booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    year={2021}
}

Help

If you have any questions, please contact us at [email protected], or open an issue on GitHub.

License

The data is released under the DeformingThings4D Terms of Use, and the code is released under a non-commercial Creative Commons license.

Comments
  • Obtaining correspondence from scene flow?

    Hello, I'm trying to see if I can get dense (or even sparse) correspondence between two point clouds generated from RGBD using Blender. I get the scene flow using the example, but I'm unable to convert it to a point-to-point map. Can you please advise how to go about this? Thanks!

    opened by Sentient07 1
  • The name of the animal sequence

    Hi, thanks for sharing this great dataset.

    I'm generating my own dataset based on your dataset, and I would like to double-check that the name postfix, for example 3EP in bear3EP_Death1, is the individual ID within the bear category, right? That is to say, we have different bear individuals like 3EP, 84Q, PDD, etc. with different body shapes, but they are all bears, right (like different people in the humanoid category)? Thanks!

    opened by ray8828 0
  • How do we use the label(png file) in raw data?

    Hi, I want to generate point-to-point correspondence for the depth images. I think one way is to use the flow to transform the source point cloud to the target point cloud, which does not need the label file. So, how do we use the label (png file) in the raw data?

    opened by puhuajiang 0
  • RGB data

    I am Elia Bonetto, a Ph.D. student from the Max Planck Institute. I have just gained access to your dataset. I was planning to use the generated RGBD data, but I am able to get only the flow and the depth data from your script. Clearly, with the fbx I can generate everything on my own from Blender and a custom camera. Is there a way to get the data also for the animals?

    opened by eliabntt 3
  • Which sequences were used for the motion estimation in Table2?

    Hi @rabbityl, thanks for the great work. I am confused about which sequences were used for the motion estimation in Table 2. There are 6 test sequences in Table 2, ranging from Humanoids (Samba Dance) to Panthera Onca (Run), but I cannot find the exact corresponding sequences in the dataset. For example, for the sequence Fox (Jump) there are a number of different types of foxes, such as foxWDFS, foxXAT, foxYSY, and foxZED, and different animations, such as Jump0, Jump1, and Jump2. In order to reproduce the results reported in Table 2 of your paper, could you tell me the exact names of the 6 sequences used in Table 2? Thanks a lot!

    opened by wenbin-lin 0
  • Question about ray direction used in ray casting

    Hi @rabbityl, thanks a lot for this great work! I have a question about the ray direction used in the anime_renderer.py file and hope you can give me a hand if possible. My question is mainly about ray_direction used in the following line:

    https://github.com/rabbityl/DeformingThings4D/blob/aa5e38d3326347c10434a963e351459169954a1e/code/anime_renderer.py#L240

    I checked Blender's reference: the ray_cast function requires ray_direction to be defined in object space. However, it seems like ray_direction is set up in world space:

    https://github.com/rabbityl/DeformingThings4D/blob/aa5e38d3326347c10434a963e351459169954a1e/code/anime_renderer.py#L170-L173

    I checked some meshes and found that their raycast_mesh.matrix_world is the identity, which means that their object space is aligned with world space and there is no need to transform ray_direction. However, in general, I think we still need such a transformation, right?

    Not sure whether I am missing something, so any guidance would be appreciated. Thanks in advance.

    opened by Xiaoming-Zhao 0