Occupancy Flow

Overview

This repository contains the code for the ICCV 2019 paper "Occupancy Flow: 4D Reconstruction by Learning Particle Dynamics".

You can find detailed usage instructions for training your own models and using pre-trained models below.

If you find our code or paper useful, please consider citing

@inproceedings{OccupancyFlow,
    title = {Occupancy Flow: 4D Reconstruction by Learning Particle Dynamics},
    author = {Niemeyer, Michael and Mescheder, Lars and Oechsle, Michael and Geiger, Andreas},
    booktitle = {Proc. of the IEEE International Conf. on Computer Vision (ICCV)},
    year = {2019}
}

Installation

First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create and activate an Anaconda environment called oflow using

conda env create -f environment.yaml
conda activate oflow

Next, compile the extension modules. You can do this via

python setup.py build_ext --inplace

Demo

You can test our code on the provided input point cloud sequences in the demo/ folder. To this end, simply run

python generate.py configs/demo.yaml

This script should create a folder out/demo/ where the output is stored.
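
If you want to inspect the generated output programmatically, a small script like the following can help (a minimal sketch, assuming the demo writes .off or .obj mesh files somewhere below out/demo/; the exact layout and file extension may differ):

    import glob
    import trimesh

    # Collect whatever mesh files the demo produced; adjust the patterns
    # if your output layout differs.
    mesh_files = sorted(glob.glob('out/demo/**/*.off', recursive=True))
    mesh_files += sorted(glob.glob('out/demo/**/*.obj', recursive=True))

    for path in mesh_files:
        mesh = trimesh.load(path, force='mesh')
        print(f'{path}: {len(mesh.vertices)} vertices, {len(mesh.faces)} faces')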

Dataset

Point-based Data

To train a new model from scratch, you have to download the full dataset. You can download the pre-processed data (~42 GB) using

bash scripts/download_data.sh

The script will download the point-based data for the Dynamic FAUST (D-FAUST) dataset to the data/ folder.

Please note: Due to privacy regulations, we do not provide the renderings for the 4D reconstruction from image sequences experiment, nor the meshes for the interpolation and generative tasks. We outline below how you can download the mesh data.

Mesh Data

Please follow the instructions on the D-FAUST homepage to download the "female and male registrations" as well as the "scripts to load / parse the data". Next, follow the instructions in their scripts/README.txt file to extract the obj files of the sequences. Once completed, you should have a folder with the following structure:


your_dfaust_folder/
| 50002_chicken_wings/
    | 00000.obj
    | 00001.obj
    | ...
    | 00215.obj
| 50002_hips/
    | 00000.obj
    | ...
| ...
| 50027_shake_shoulders/
    | 00000.obj
    | ...


You can now run

bash scripts/migrate_dfaust.sh path/to/your_dfaust_folder

to copy the mesh data to the dataset folder. The argument has to be the folder to which you have extracted the mesh data (the your_dfaust_folder from the directory tree above).
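
Before running the script, you may want to sanity-check that the extracted folder matches the structure above (a quick sketch; pass the same path you would give to scripts/migrate_dfaust.sh):

    import os
    import sys

    root = sys.argv[1]  # e.g. path/to/your_dfaust_folder
    for seq in sorted(os.listdir(root)):
        seq_dir = os.path.join(root, seq)
        if not os.path.isdir(seq_dir):
            continue
        # Each sequence folder should contain numbered obj files (00000.obj, ...)
        n_objs = len([f for f in os.listdir(seq_dir) if f.endswith('.obj')])
        print(f'{seq}: {n_objs} obj files')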

Usage

When you have installed all dependencies and obtained the preprocessed data, you are ready to run our pre-trained models and train new models from scratch.

Generation

To start the normal mesh generation process using a trained model, use

python generate.py configs/CONFIG.yaml

where you replace CONFIG.yaml with the name of the configuration file you want to use.

The easiest way is to use a pretrained model. You can do this by using one of the config files

configs/pointcloud/oflow_w_correspond_pretrained.yaml
configs/interpolation/oflow_pretrained.yaml
configs/generative/oflow_pretrained.yaml

Our script will automatically download the model checkpoints and run the generation. You can find the outputs in the out/ folder.

Please note that the config files *_pretrained.yaml are only for generation, not for training new models: when these configs are used for training, the model will be trained from scratch, but during inference our code will still use the pretrained model.

Generation - Generative Tasks

For model-specific latent space interpolations and motion transfers, you first have to run

python encode_latent_motion_space.py configs/generative/CONFIG.yaml

Next, you can call

python generate_latent_space_interpolation.py configs/generative/CONFIG.yaml

or

python generate_motion_transfer.py configs/generative/CONFIG.yaml

Please note: Make sure that you use the appropriate model for the generation processes, e.g. the latent space interpolations and motion transfers can only be generated with a generative model (e.g. configs/generative/oflow_pretrained.yaml).

Evaluation

You can evaluate the generated output of a model on the test set using

python eval.py configs/CONFIG.yaml

The evaluation results will be saved to pickle and CSV files.
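
The file names depend on the config's output directory; assuming eval.py wrote a CSV with one row of metrics per test item, you could aggregate it roughly as follows (a sketch; the path and file name here are hypothetical):

    import pandas as pd

    # Hypothetical path -- substitute the CSV that eval.py wrote into
    # your model's output directory.
    df = pd.read_csv('out/your_model/eval_full.csv')
    print(df.mean(numeric_only=True))  # mean of each metric over the test set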

Training

Finally, to train a new network from scratch, run

python train.py configs/CONFIG.yaml

You can monitor the training process on http://localhost:6006 using TensorBoard:

cd OUTPUT_DIR
tensorboard --logdir ./logs --port 6006

where you replace OUTPUT_DIR with the respective output directory. For available training options, please have a look at configs/default.yaml.

Further Information

Implicit Representations

If you like the Occupancy Flow project, please check out our similar projects on inferring 3D shapes (Occupancy Networks) and texture (Texture Fields).

Neural Ordinary Differential Equations

If you enjoyed our approach using differential equations, check out Ricky Chen et al.'s awesome implementation of differentiable ODE solvers, which we used in our project.
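
To give a flavor of the general idea, the sketch below integrates a small neural velocity field with torchdiffeq's odeint to move a set of 3D points along continuous trajectories. This is only an illustration of the technique; it is not the exact model from the paper.

    import torch
    import torch.nn as nn
    from torchdiffeq import odeint

    class VelocityField(nn.Module):
        """Tiny illustrative stand-in for a learned velocity field v(t, x)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 3))

        def forward(self, t, x):
            # odeint calls this with the current time t (a scalar tensor)
            # and the current state x of shape (num_points, 3).
            t_col = t.expand(x.shape[0], 1)
            return self.net(torch.cat([x, t_col], dim=-1))

    points_t0 = torch.rand(100, 3)            # particle positions at t = 0
    timesteps = torch.linspace(0.0, 1.0, 17)  # times at which to evaluate
    trajectories = odeint(VelocityField(), points_t0, timesteps)
    print(trajectories.shape)                 # torch.Size([17, 100, 3])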

Dynamic FAUST Dataset

We applied our method to the cool Dynamic FAUST dataset which contains sequences of real humans performing various actions.

Comments
  • Demo fails

    I tried to run the demo, but it failed with this error:

        from .pykdtree.kdtree import KDTree
      File "__init__.pxd", line 885, in init pykdtree.kdtree
    ValueError: numpy.ufunc has the wrong size, try recompiling. Expected 192, got 216
    
    opened by MasahiroOgawa 4
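
    This numpy.ufunc size mismatch usually means the compiled extension was built against a different NumPy version than the one currently installed. A common remedy (a suggestion, not an official fix from the authors) is to reinstall NumPy and force a rebuild of the extensions:

        pip install --force-reinstall numpy
        python setup.py build_ext --inplace --force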
  • Error: freeze_support(), not going to be frozen to produce an executable

    Hi, when I run python generate.py configs/demo.yaml, I get the following error:

        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\multiprocessing\spawn.py", line 105, in spawn_main
            exitcode = _main(fd)
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\multiprocessing\spawn.py", line 114, in _main
            prepare(preparation_data)
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\multiprocessing\spawn.py", line 225, in prepare
            _fixup_main_from_path(data['init_main_from_path'])
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
            run_name="__mp_main__")
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\runpy.py", line 263, in run_path
            pkg_name=pkg_name, script_name=fname)
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\runpy.py", line 96, in _run_module_code
            mod_name, mod_spec, pkg_name, script_name)
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\runpy.py", line 85, in _run_code
            exec(code, run_globals)
          File "D:\0.unity_prj\Fusions\occupancy_flow-master\generate.py", line 78, in <module>
            for it, data in enumerate(tqdm(test_loader)):
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\site-packages\tqdm\_tqdm.py", line 979, in __iter__
            for obj in iterable:
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__
            return _DataLoaderIter(self)
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__
            w.start()
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\multiprocessing\process.py", line 105, in start
            self._popen = self._Popen(self)
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\multiprocessing\context.py", line 223, in _Popen
            return _default_context.get_context().Process._Popen(process_obj)
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\multiprocessing\context.py", line 322, in _Popen
            return Popen(process_obj)
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
            prep_data = spawn.get_preparation_data(process_obj._name)
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
            _check_not_importing_main()
          File "D:\ProgramData\Anaconda3\envs\oflow\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
            is not going to be frozen to produce an executable.''')
        RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase.

            This probably means that you are not using fork to start your
            child processes and you have forgotten to use the proper idiom
            in the main module:

                if __name__ == '__main__':
                    freeze_support()
                    ...

            The "freeze_support()" line can be omitted if the program
            is not going to be frozen to produce an executable.
    

    I downloaded oflow_w_correspond_model-13c0a3ed.pt to use it locally.

    opened by jimzou 3
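
    The traceback above is the standard Windows multiprocessing (spawn) error. A generic sketch of the usual remedy, not a patch from the authors, is to guard the script's entry point so that DataLoader worker processes can re-import the module safely (alternatively, set num_workers to 0 in the data loader):

        import torch

        def main():
            # Anything that starts worker processes (e.g. iterating a
            # DataLoader with num_workers > 0) must run under the guard
            # below, because Windows re-imports the script in each child.
            loader = torch.utils.data.DataLoader(range(8), num_workers=2)
            for batch in loader:
                print(batch)

        if __name__ == '__main__':
            main()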
  • eval.py for the demo

    Hello, Great work!

    I tried to run eval.py for the demo. Since test:eval_mesh_iou is true in default.yaml, the 'dataset' is not initialized properly (because the data provided for the demo does not contain points_seq). I solved this issue by adding test:eval_mesh_iou: false to demo.yaml (see the sketch after this comment).

    Is there anything I am missing here?

    Thank you, Duygu

    opened by duyguislakoglu 2
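
    For reference, the override described above would look like this in demo.yaml (based on the comment; not verified against the repository's config schema):

        test:
          eval_mesh_iou: false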
  • Debug/debug demo

    Objective: solve the problem reported in https://github.com/autonomousvision/occupancy_flow/issues/8. What I did: created Docker files and shared the results with the host machine so that they can be visualized using e.g. MeshLab.

    opened by MasahiroOgawa 1
  • Training details

    Hi, thanks for sharing your code!

    I'm training O-Flow as a comparison. What is the total number of training epochs/iterations for D-FAUST point cloud completion? Thanks!

    opened by ray8828 1
  • Unit of table 1 and reproduction

    [screenshot of evaluation results]

    Hi, thanks for sharing the code! I'm doing a quantitative comparison with OFlow. I used your code and ran the point cloud oflow_pretrained config to generate and evaluate, but got results like the ones in the screenshot above: the first column is the mean over the 17 time steps, and it seems to be slightly better than what you report in Table 1 of the paper (Seen). Could you tell me the unit used in Table 1, and is my output reasonable? Thanks!

    opened by ray8828 2
  • Some minor installation errors

    Thanks for sharing your code, the paper is really interesting. I hope to use it for future work and cite it!

    I'm mostly posting this for visibility: these are some issues that occurred during setup and how I fixed them in my case.

    On Ubuntu 20.04 with Anaconda 4.10.1,

    During python setup.py build_ext --inplace, we get the following output:

    Traceback (most recent call last):
      File "setup.py", line 7, in <module>
        from torch.utils.cpp_extension import BuildExtension
      File "/home/matt/anaconda3/envs/oflow/lib/python3.6/site-packages/torch/__init__.py", line 84, in <module>
        from torch._C import *
    ImportError: /home/matt/anaconda3/envs/oflow/lib/python3.6/site-packages/torch/lib/libmkldnn.so.0: undefined symbol: cblas_sgemm_alloc
    

    For some reason PyTorch doesn't install correctly; this can be solved with pip install torch==1.0.0.

    Also, in setup.py, include_dirs=[numpy_include_dir] needs to be added to each extension that requires NumPy, as it doesn't seem to be found otherwise (see the sketch after this comment).

    opened by mjkmoynihan 4
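
    For reference, the include_dirs fix described above would look roughly like this in setup.py (a sketch with hypothetical extension and source names; the repository's actual extension definitions differ):

        import numpy
        from setuptools import Extension, setup

        numpy_include_dir = numpy.get_include()

        ext = Extension(
            'pykdtree.kdtree',                 # hypothetical extension name
            sources=['pykdtree/kdtree.c'],     # hypothetical source file
            include_dirs=[numpy_include_dir],  # the fix: expose NumPy headers
        )

        setup(name='example', ext_modules=[ext])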