Implementation of the ICCV'21 paper Temporally-Coherent Surface Reconstruction via Metric-Consistent Atlases

Overview

Temporally-Coherent Surface Reconstruction via Metric-Consistent Atlases [Papers 1, 2] [Project page] [Video]

This repository contains the implementation of both papers.

Install

The framework was tested with Python 3.8, PyTorch 1.7.0, and CUDA 11.0. The easiest way to work with the code is to create a new virtual Python environment and install the required packages.

  1. Install virtualenvwrapper.
  2. Create a new environment and install the required packages:
mkvirtualenv --python=python3.8 tcsr
pip install -r requirements.txt
  3. Install PyTorch3D:
cd ~
curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
tar xzf 1.10.0.tar.gz
export CUB_HOME=$PWD/cub-1.10.0
pip install git+https://github.com/facebookresearch/pytorch3d.git@3c15a6c2469249c8b90a4f3e41e34350b8051b92
  4. Get the code and prepare the environment as follows:
git clone git@github.com:bednarikjan/temporally_coherent_surface_reconstruction.git
cd temporally_coherent_surface_reconstruction
git submodule update --init --recursive
export PYTHONPATH="${PYTHONPATH}:path/to/dir/temporally_coherent_surface_reconstruction"
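
To verify that the environment works, you can run a quick import check (a minimal sketch; the printed versions are those the framework was tested with, not strict requirements):
# Minimal environment sanity check (assumes the steps above succeeded).
import torch
import pytorch3d

print('PyTorch:', torch.__version__)           # tested with 1.7.0
print('CUDA available:', torch.cuda.is_available())
print('PyTorch3D:', pytorch3d.__version__)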

Get the Data

The project was tested on 6 base datasets (and their derivatives). Each dataset has to be processed to generate the input point clouds for training, the GT correspondences for evaluation, and other auxiliary data. To do so, please use the individual scripts in tcsr/process_datasets. For each dataset, follow these steps:

  1. Download the data (links below).
  2. Open the script <dataset_name>.py and set the input/output paths.
  3. Run the script: python <dataset_name>.py

1. ANIM

  • Download the sequences horse gallop, horse collapse, camel gallop, camel collapse, and elephant gallop.
  • Download the sequence walking cat.

2. AMA

  • Download all 10 sequences, meshes only.

3. DFAUST

4. CAPE

  • Request access to the raw scans and download them.
  • At the time of writing the paper (September 2021), four subjects (00032, 00096, 00159, 03223) were available and used in the paper.

5. INRIA

  • Request access to the dataset and download it.
  • At the time of writing the paper (September 2021), four subjects (s1, s2, s3, s6) were available and used in the paper.

6. CMU

Train

The provided code allows for training not only our proposed method (OUR) but also the other atlas-based approaches, Differential Surface Representation (DSR) and AtlasNet (AN). The training is configured using the *.yaml configuration scripts in tcsr/train/configs.

There are 9 sample configuration files our_<dataset_name>.yaml, which train OUR on each individual dataset, and 2 sample configuration files, an_anim.yaml and dsr_anim.yaml, which train AN and DSR, respectively, on the ANIM dataset.

By default, the training uses the exact settings from the paper: it trains for 200,000 iterations using SGD, a learning rate of 0.001, and a batch size of 4. This can be altered in the configuration files.
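
If you prefer to derive a modified configuration programmatically instead of editing the *.yaml files by hand, a minimal sketch follows; the keys n_iters, lr, and batch_size are hypothetical placeholders, so check the sample configuration files for the actual attribute names:
# Minimal sketch: derive a custom training config from a sample one.
# NOTE: 'n_iters', 'lr' and 'batch_size' are hypothetical placeholder
# keys; the sample *.yaml files define the actual names.
import yaml

with open('configs/our_anim.yaml') as f:
    conf = yaml.safe_load(f)

conf['n_iters'] = 100000  # e.g. halve the number of training iterations
conf['lr'] = 0.0005
conf['batch_size'] = 8

with open('configs/our_anim_custom.yaml', 'w') as f:
    yaml.safe_dump(conf, f)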

Before starting the training, follow these steps:

  • Open the source file tcsr/data/data_loader.py and set the paths to the datasets in each dataset class.
  • Open the desired training configuration *.yaml file in tcsr/train/configs/ and set the output path for the training run data in the attribute path_train_run.

Start the training using the script tcsr/train/train.py:

python train.py --conf configs/<file_name>.yaml

By default, the script saves the training progress every 2,000 iterations, so you can safely kill it at any point and resume the training later using:

python train.py --cont path/to/training_run/root_dir

Evaluate

To evaluate a trained model on the dense correspondence prediction task, use the script tcsr/evaluate/eval_dataset.py, which allows for evaluating multiple sequences (i.e., individual training runs within one dataset) at once. Please have a look at the command line arguments in the file.

For example, to run the evaluation for the training runs contained in the root directory train_runs_root, corresponding to 2 training runs for the sequences cat_walk and horse_gallop of the ANIM dataset:

python eval_dataset.py /path/to/train_runs_root --ds anim --include_seqs cat_walk horse_gallop  

The script produces a *.csv file in train_runs_root with the 4 measured metrics (see the paper).
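
To post-process the results programmatically, the produced CSV can be loaded, e.g., with pandas; a minimal sketch, assuming a hypothetical file name results.csv (inspect train_runs_root for the actual name and columns):
# Minimal sketch: aggregate the per-sequence evaluation metrics.
# NOTE: 'results.csv' is a placeholder file name; the columns hold the
# 4 metrics reported in the paper, one row per evaluated sequence.
import pandas as pd

df = pd.read_csv('/path/to/train_runs_root/results.csv')
print(df.head())                    # inspect the per-sequence rows
print(df.mean(numeric_only=True))   # per-metric average over sequences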

Visualize

There are currently two ways to visualize the predictions.

1. Tensorboard

By default, the training script saves the GT and the predicted point clouds (for a couple of random data samples) every 2,000 iterations. These can be viewed within Tensorboard. Each patch is visualized with a different color. This visualization is mostly useful as a sanity check during the training to see that the model is converging as expected.

  • Navigate to the root directory of the training runs and run:
tensorboard --logdir=. --port=8008 --bind_all
  • Open your browser and navigate to http://localhost:8008/

2. Per-sequence reconstruction GIF

You can view the reconstructed surfaces as a patch-wise textured mesh rendered into a GIF video. For this purpose, open the IPython Notebook tcsr/visualize/render_uv.ipynb in JupyterLab, which allows for viewing the GIF right after running the code.

The rendering parameters (such as the camera location, texturing mode, GIF speed, etc.) are set using the configuration file tcsr/visualize/conf_patches.yaml. There are sample configurations for the sequence cat_walk, which can be used to write configurations for other sequences/datasets.

Before running the cells, set the variables in the second cell (paths, models, data).
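
For reference, the resulting GIF can also be displayed inline in any notebook cell using the standard IPython display utilities (the file path below is a placeholder for the file produced by the notebook):
# Display a rendered GIF inline in a notebook cell.
# NOTE: the path is a placeholder.
from IPython.display import Image, display

display(Image(filename='path/to/rendered_sequence.gif'))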

Citation

@inproceedings{bednarik2021temporally_coherent,
   title = {Temporally-Coherent Surface Reconstruction via Metric-Consistent Atlases},
   author = {Bednarik, Jan and Kim, Vladimir G. and Chaudhuri, Siddhartha and Parashar, Shaifali and Salzmann, Mathieu and Fua, Pascal and Aigerman, Noam},
   booktitle = {Proceedings of IEEE International Conference on Computer Vision (ICCV)},
   year = {2021}
}

@inproceedings{bednarik2021temporally_consistent,
   title = {Temporally-Consistent Surface Reconstruction via Metrically-Consistent Atlases},
   author = {Bednarik, Jan and Aigerman, Noam and Kim, Vladimir G. and Chaudhuri, Siddhartha and Parashar, Shaifali and Salzmann, Mathieu and Fua, Pascal},
   booktitle = {arXiv},
   year = {2021}
}

Acknowledgements

This work was partially done while the main author was an intern at Adobe Research.

TODO

  • Add support for visualizing the correspondence error heatmap on the GT mesh.
  • Add support for visualizing the color-coded correspondences on the GT mesh.
  • Add support for generating the pre-aligned AMAa dataset using ICP.
  • Add the code for the non-rigid ICP experiments.