Overview

NeuralWarp: Improving neural implicit surfaces geometry with patch warping

Project page | Paper

Code release of the paper Improving neural implicit surfaces geometry with patch warping
François Darmon, Bénédicte Bascle, Jean-Clément Devaux, Pascal Monasse and Mathieu Aubry

Installation

See requirements.txt for the python packages.
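For example, assuming a standard Python 3 environment with pip (a virtual environment is optional but recommended), the dependencies can typically be installed with:

pip install -r requirements.txt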

Data

Download data with ./download_dtu.sh and ./download_epfl.sh

Extract mesh from a pretrained model

Download the pretrained models with ./download_pretrained_models.sh then run the extraction script

python extract_mesh.py --conf CONF --scene SCENE [--OPTIONS]

  • CONF is the configuration file (e.g. confs/NeuralWarp_dtu.conf)
  • SCENE is the scan id for DTU data and either fountain or herzjesu for EPFL.
  • See python extract_mesh.py --help for a detailed explanation of the options. The evaluations in the paper use the default options for DTU and --bbox_size 4 --no_one_cc --filter_visible_triangles --min_nb_visible 1 for EPFL (see the example commands below).
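For instance, an extraction on DTU could look like the following; the scan id 65 is only an illustrative example, and the EPFL configuration name confs/NeuralWarp_epfl.conf is an assumption based on the naming pattern of the DTU configuration:

python extract_mesh.py --conf confs/NeuralWarp_dtu.conf --scene 65
python extract_mesh.py --conf confs/NeuralWarp_epfl.conf --scene fountain --bbox_size 4 --no_one_cc --filter_visible_triangles --min_nb_visible 1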

The output mesh will be in evals/CONF_SCENE/output_mesh.ply

You can also run the evaluation: first download the DTU evaluation data with ./download_dtu_eval, then run the evaluation script python eval.py --scene SCENE. The evaluation metrics will be written to evals/CONF_SCENE/result.txt.

Train a model from scratch

First, train a baseline model (i.e. VolSDF) with python train.py --conf confs/baseline_DATASET --scene SCENE.

Then finetune using our method with python train.py --conf confs/NeuralWarp_DATASET --scene SCENE.
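As a concrete illustration on DTU (assuming the configuration files follow the naming pattern confs/baseline_dtu.conf and confs/NeuralWarp_dtu.conf, and with scan id 65 used only as an example):

python train.py --conf confs/baseline_dtu.conf --scene 65
python train.py --conf confs/NeuralWarp_dtu.conf --scene 65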

A visualization html file is generated for each training in exps/CONF_SCENE/TIMESTAMP/visu.html.

Acknowledgments

This repository is inspired by IDR

This work was supported in part by ANR project EnHerit ANR-17-CE23-0008 and was performed using HPC resources from GENCI–IDRIS 2021-AD011011756R1. We thank Tom Monnier for valuable feedback and Jingyang Zhang for sending MVSDF results.

Copyright

NeuralWarp. All rights reserved to Thales LAS and ENPC.

This code is freely available for academic use only and is provided "as is" without any warranty.

Modifications are allowed for academic research provided that the following conditions are met:
  * Redistributions of source code or any format must retain the above copyright notice and this list of conditions.
  * Neither the name of Thales LAS and ENPC nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
Comments
  • Reproducibility concerns

    Hello. Thank you for your amazing work and for sharing the code! I tried training your model from scratch on some of the benchmark scenes and had problems reproducing your results. It seems like the model is quite susceptible to the random seed, and even after several attempts, the quality I obtained was lower than reported.

    | Experiment | Fountain - Full | Fountain - Center | Herzjesu - Full | Herzjesu - Center |
    | :--- | :----: | :----: | :----: | :----: |
    | pre-trained | 7.77 | 1.91 | 8.88 | 2.03 |
    | seed 2022 | 8.03 | 2.65 | 7.66 | 2.55 |
    | seed 42 | 13.36 | 7.43 | 10.54 | 2.58 |

    Could you provide some insights regarding this issue?

    opened by AndreeaDogaru 7
  • The result on DTU scan24 does not align to the paper

    Following the instructions in this repo, I got a result of 0.72 on the DTU scan24 scene, which differs from the result in the paper, 0.49. Could you please tell me if I made a mistake anywhere?

    opened by SAOHPRWHG 3
  • result visualization

    Hello,

    thanks for your excellent work! I am wondering how you created the result visualization teaser.gif in the README, with a moving camera and a sliding bar to switch between two meshes. Thank you in advance!

    Best regards

    opened by decai-chen 2
  • Error visualization in appendix

    Dear @fdarmon

    Thank you very much for your work. In your error visualization in the appendix, the GT points look very clean compared to the provided GT. I wonder if you preprocess the DTU GT points to obtain such a visualization.

    Look forward to your reply!

    opened by giwel 2
  • Quantitative results on DTU dataset

    As described in your paper, "the results for each method are taken from their original paper". But why are the results for NeuS [34] different from the original paper?

    [34] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. NeuS: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. In Adv. Neural Inform. Process. Syst., 2021.

    opened by lity20 2
  • Camera Normalization on DTU dataset

    Dear author, thank you for your open source code. I notice that VolSDF normalizes the cameras (within a sphere with R=3) on the DTU dataset. However, NeuralWarp loads camera params from the original cameras.npz, which is not normalized. Since this project is built upon VolSDF, I want to know why the camera normalization step was removed. Moreover, I think the default value of --bbox_size in extract_mesh.py is not very reasonable: it should be set carefully for each scene because the cameras are not normalized. If the value is too large, a lot of empty space is wasted; if it is too small, the extracted mesh is defective. I would like to hear the author's viewpoint on this. Thanks!

    opened by o0Helloworld0o 1
  • How to reduce the GPU memory usage

    Hello, I encountered an OOM error after "generate_visu" on an RTX 3080. Can you provide some suggestions on how to reduce the batch size or use multiple GPUs?

    opened by BianFeiHu 1
  • Question about the two-stage training

    Dear author, According to the description of the paper, the training pipeline includes two stages. First, train for 100k iterations in the same setting as VolSDF. Then finetune for 50k iterations with the proposed method. Does this mean that I need to add the option "--is_continue --timestamp XXXXX" in stage 2? Moreover, according to the paper, the learning rate of stage 2 is 1e-5, which is different from the learning rate (5.0e-4) in NeuralWarp.conf. Do I need to change the learning rate in the configuration to 1e-5? Thanks!

    opened by o0Helloworld0o 1
  • Is Patch Warping enabled in current Codebase ?

    Hi, Thanks for the excellent paper.

    I was going through the codebase and the warping seems to happen only for random pixels sampled on the image plane (viz. clipped off at boundaries), and not for patches. See lines 155-166 in https://github.com/fdarmon/NeuralWarp/blob/main/datasets/scene_dataset.py

    Is that right, or am I missing something?

    opened by anuveshkumaratavataar 1
  • How to get pair.txt file? dtu_supp

    Thank you very much for your contribution. I want to try training on my own data, but I am confused about the pair.txt file in "dtu_supp"; how do I get this?

    opened by lfhgljj 1
  • Missing license file

    Hello!

    Thanks for sharing the code, your work is really impressive! I have noticed that you did not include any license file. Per copyright laws, we have to assume the most restrictive license (i.e. all derivatives are forbidden, etc.)

    Was that intentional, or do you actually not mind other research projects building on top of your code?

    opened by egorzakharov 0
  • Question about the paper

    Hi, your great work is impressive! But I'm a little confused about the warping loss and the validity masks: 1. "our warping-based loss such that every valid patch in the reference image is given the same weight": why is every patch given the same weight? 2. "second, when the reference and source views are on two different sides of the plane defined by xi and the normal ni; third, when a camera center is too close to the plane defined by xi and the normal ni": why will the binary indicator V be 0 in these cases? Looking forward to your reply.

    opened by DongyangHuLi 0