Open source repository for the code accompanying the paper 'PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations'.

Overview

PatchNets

This is the official repository for the project "PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations". For details, we refer to our project page, which also includes supplemental videos.

This code requires a functioning installation of DeepSDF, which can then be modified using the provided files.

(Optional) Making ShapeNet V1 Watertight

If you want to use ShapeNet, please follow these steps:

  1. Download Occupancy Networks
  2. On Linux, follow the installation steps given there:
conda env create -f environment.yaml
conda activate mesh_funcspace
python setup.py build_ext --inplace
  3. Install the four external dependencies from external/mesh-fusion:
    • for libfusioncpu and libfusiongpu, run cmake and then setup.py
    • for libmcubes and librender, run setup.py
  4. Replace the original OccNet files with the included, slightly modified versions. These mostly switch to using .obj instead of .off.
  5. Prepare the original ShapeNet meshes by copying all objs as follows: from 02858304/1b2e790b7c57fc5d2a08194fd3f4120d/model.obj to 02858304/1b2e790b7c57fc5d2a08194fd3f4120d.obj (see the sketch after these steps).
  6. Use generate_watertight_meshes_and_sample_points() from useful_scripts.py. It needs to be run twice, see the comment at generate_command.
  7. On a Linux machine with a display, activate mesh_funcspace.
  8. Run the generated command.sh. Note: this preprocessing crashes frequently because some meshes cause issues; such meshes need to be deleted.
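For step 5, a minimal Python sketch of the copy operation might look as follows; the shapenet_root path is a placeholder, and the layout assumes the standard ShapeNet V1 directory structure:

    import os
    import shutil

    shapenet_root = "/path/to/ShapeNetCore.v1"  # placeholder
    for synset in os.listdir(shapenet_root):  # e.g. 02858304
        synset_folder = os.path.join(shapenet_root, synset)
        if not os.path.isdir(synset_folder):
            continue
        for model_id in os.listdir(synset_folder):  # e.g. 1b2e790b7c57fc5d2a08194fd3f4120d
            src = os.path.join(synset_folder, model_id, "model.obj")
            dst = os.path.join(synset_folder, model_id + ".obj")
            if os.path.isfile(src):
                shutil.copyfile(src, dst)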

Preprocessing

During preprocessing, we generate SDF samples from obj files.

The C++ files in src/ are modified versions of the corresponding DeepSDF files. Please follow the instructions in the DeepSDF GitHub repo to compile them, then run preprocess_data.py. preprocess_data.py has a special flag for easier handling of ShapeNet, and the code comments include an example command for preprocessing ShapeNet. If you want to use depth completion, add the --randomdepth and --depth flags to the call to preprocess_data.py.

Training

The files in code/ largely follow DeepSDF and replace the corresponding files in your DeepSDF installation. Note that some legacy functions from these files might not be compatible with PatchNets.

  • Some settings files are available in code/specs/. The training/test splits can be found in code/examples/splits/. The DataSource and, if used, the patch_network_pretrained_path and pretrained_depth_encoder_weights need to be adapted.
  • Set a folder that collects all experiments in code/localization/SystemSpecific.py.
  • The code uses code/specs.json as the settings file. Replace this file with the desired settings file.
  • The code creates a results folder, which also includes a backup. This is necessary to later run the evaluation script.
  • Throughout the code, metadata refers to patch extrinsics.
  • mixture_latent_mode can be set to all_explicit for normal PatchNets mode or to all_implicit for use with object latents.
    • Some weights automatically change in deep_sdf_decoder.py depending on whether all_explicit or all_implicit is used.
  • For all_implicit/object latents, set sdf_filename under use_precomputed_bias_init in deep_sdf_decoder.py to an .npz file that was obtained via Preprocessing and for which initialize_mixture_latent_vector() from train_deep_sdf.py has already been run (e.g. by including it in the training set of a normal PatchNet training). MixtureCodeLength is the object latent size; PatchCodeLength is the size of each regressed patch code.
  • For all_explicit/normal PatchNets, MixtureCodeLength needs to be compatible with PatchCodeLength: set MixtureCodeLength = (PatchCodeLength + 7) x num_patches (see the sketch after this list). The 7 comes from position (3) + rotation (3) + scale (1); always use 7, regardless of whether scale and/or rotation are used. Consider keeping the patch extrinsics fixed at their initial values instead of optimizing them with the extrinsics loss, see the second stage of StagedTraining.
  • When using staged training, NumEpochs and the total Lengths of each Staged schedule should be equal. Also note that both Staged schedules should have the exact same Lengths list.
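As a quick sanity check, the relation for all_explicit mode can be evaluated directly; the variable names below are illustrative and the numbers are example values, not recommended settings:

    # Illustrative only: MixtureCodeLength must equal
    # (PatchCodeLength + 7) * num_patches in all_explicit mode.
    patch_code_length = 32   # example value
    num_patches = 30         # example value
    extrinsics_size = 7      # position (3) + rotation (3) + scale (1), always 7
    mixture_code_length = (patch_code_length + extrinsics_size) * num_patches
    print(mixture_code_length)  # 1170 for these example values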

Evaluation

  1. Fit PatchNets to test data: use train_deep_sdf.py to run the trained network on the test data. Obtaining the patch parameters for a test set follows almost the same workflow as training a network, except that the network weights are initialized and then kept fixed, and a few other settings are changed. Please see the included test specs.json for examples. In all cases, set test_time = True, train_patch_network = False, train_object_to_patch = False (a sketch for adapting the specs follows this list). Set patch_network_pretrained_path in the test specs.json to the results folder of the trained network. Make sure that ScenesPerBatch is a multiple of the test set size. Adjust the learning rate schedules according to the included test specs.json examples.
  2. Get the quantitative evaluation: use evaluate_patch_network_metrics() from useful_scripts.py with the test results folder. It needs to be run twice, see the comment at generate_meshes. Running this script requires an installation of Occupancy Networks, see the comments in evaluate_patch_network_metrics(). It also requires the obj files of the dataset that were used for Preprocessing.
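A minimal sketch of adapting a training specs.json for test-time fitting (step 1) could look like the following; it only touches the keys named above, assumes specs.json is valid JSON, and the pretrained path is a placeholder:

    import json

    with open("specs.json", "r") as f:
        specs = json.load(f)

    # Freeze the trained network and only optimize per-shape patch parameters.
    specs["test_time"] = True
    specs["train_patch_network"] = False
    specs["train_object_to_patch"] = False
    specs["patch_network_pretrained_path"] = "/path/to/trained/results/folder"  # placeholder

    with open("specs.json", "w") as f:
        json.dump(specs, f, indent=4)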

Applications, Experiments, and Mesh Extraction

useful_scripts.py contains code for the object latent applications from Sec. 4.3: latent interpolation, the generative model, and depth completion. The depth completion code contains a mode for quantitative evaluation. useful_scripts.py also contains code to extract meshes.

code/deep_sdf/data.py contains the code snippet used for the synthetic noise experiments in Sec. 7 of the supplementary material.
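The exact snippet lives in code/deep_sdf/data.py; purely as an illustration of the idea, synthetic noise on DeepSDF-style SDF samples could look like the following sketch (the function name, the (N, 4) xyz+sdf layout, and the noise model are assumptions here, not the published setup):

    import torch

    def add_synthetic_noise(samples, noise_std=0.01):
        # samples: (N, 4) tensor of xyz coordinates plus an SDF value per point.
        noisy = samples.clone()
        noisy[:, :3] += noise_std * torch.randn_like(noisy[:, :3])  # perturb positions
        return noisy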

Additional Functionality

The code contains additional functionality that is not part of the publication. It might work but has not been thoroughly tested and can be removed.

  • wrappers to allow for easy interaction with a trained network (do not remove, required to run evaluation)
    • _setup_torch_network() in useful_scripts.py
  • a patch encoder
    • Instead of autodecoding a patch latent code, it is regressed from SDF point samples that lie inside the patch.
    • Encoder in specs.json. Check that this works as intended, later changes to the code might have broken something.
  • a depth encoder
    • A depth encoder maps from one depth map to all patch parameters.
    • use_depth_encoder in specs.json. Check that this works as intended, later changes to the code might have broken something.
  • a tiny PatchNet version
    • The latent code is reshaped and used as network weights, i.e. there are no shared weights between different patches.
    • dims in specs.json should be set to something small like [ 8, 8, 8, 8, 8, 8, 8 ]
    • use_tiny_patchnet in specs.json
    • Requires setting PatchLatentCode correctly; the desired value is printed by _initialize_tiny_patchnet() in deep_sdf_decoder.py.
  • a hierarchical representation
    • Represents/encodes a shape using large patches for simple regions and smaller patches for complex regions of the geometry.
    • hierarchical_representation() in useful_scripts.py. Never tested. Later changes to the network code might also have broken something.
  • simplified curriculum weighting from Curriculum DeepSDF
    • use_curriculum_weighting in specs.json. Additional parameters are in train_deep_sdf.py. This is our own implementation, not based on their repo, so mistakes are ours.
  • positional encoding from NeRF
    • positional_encoding in specs.json. Additional parameters are in train_deep_sdf.py. This is our own implementation, not based on their repo, so mistakes are ours. A generic sketch of the encoding follows this list.
  • a Neural ODE deformation model for patches
    • Instead of a simple MLP regressing the SDF value, a velocity field first deforms the patch region, and the z-value of the final xyz position is then returned as the SDF value. The field thus flattens the surface to lie in the z=0 plane. Very slow due to the Neural ODE. Might be useful to get UV maps/a direct surface parametrization.
    • use_ode and time_dependent_ode in specs.json. Additional parameters are in train_deep_sdf.py.
  • a mixed representation that has explicit patch latent codes and only regresses patch extrinsics from an object latent code
    • Set mixture_latent_mode in specs.json to patch_explicit_meta_implicit. posrot_latent_size is the size of the object latent code in this case. mixture_to_patch_parameters is the network that regresses the patch extrinsics. Check that this works as intended, later changes to the code might have broken something.
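For reference, the positional encoding mentioned above has the following generic NeRF form; this is a sketch, not the repo's exact implementation, and num_frequencies stands in for the parameters in train_deep_sdf.py:

    import math
    import torch

    def positional_encoding(xyz, num_frequencies=6):
        # Generic NeRF-style encoding: for each input coordinate p, append
        # sin(2^k * pi * p) and cos(2^k * pi * p) for k = 0 .. num_frequencies-1.
        features = [xyz]
        for k in range(num_frequencies):
            features.append(torch.sin((2.0 ** k) * math.pi * xyz))
            features.append(torch.cos((2.0 ** k) * math.pi * xyz))
        return torch.cat(features, dim=-1)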

Citation

This code builds on DeepSDF. Please consider citing DeepSDF and PatchNets if you use this code.

@article{Tretschk2020PatchNets,
    author = {Tretschk, Edgar and Tewari, Ayush and Golyanik, Vladislav and Zollh\"{o}fer, Michael and Stoll, Carsten and Theobalt, Christian},
    title = {PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations},
    journal = {European Conference on Computer Vision (ECCV)},
    year = {2020}
}
@InProceedings{Park_2019_CVPR,
    author = {Park, Jeong Joon and Florence, Peter and Straub, Julian and Newcombe, Richard and Lovegrove, Steven},
    title = {DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2019}
}

License

Please note that this code is released under an MIT licence, see LICENCE. We have included and modified third-party components, which have their own licenses. We thank all of the respective authors for releasing their code, especially the team behind DeepSDF!

Comments
  • The preprocessed data doesn't have the attributes pos_normals or neg_normals

    I tried to set use_precomputed = False, but the error still happened because of:

    if sdf_samples_with_normals is None:
        if use_precomputed_init:
            return torch.from_numpy(np.load(initialization_file))  
        else:
            npz = np.load(sdf_filename)
            pos_tensor = deep_sdf.data.remove_nans(torch.from_numpy(np.concatenate([npz["pos"], npz["pos_normals"]], axis=1))).numpy()
            neg_tensor = deep_sdf.data.remove_nans(torch.from_numpy(np.concatenate([npz["neg"], npz["neg_normals"]], axis=1))).numpy()
    

    #----------

    import numpy as np
    cat = np.load('/user-data/patchnets-master/code/data/SdfSamples/ShapeNetV2/02691156/1a29042e20ab6f005e9e2656aff7dd5b.npz')
    cat.files
    # ['pos', 'neg']

    opened by QtEngineer 12
  • Some objects cannot be evaluated

    I noticed that some .obj files of ShapeNetV2 cannot be correctly evaluated. They can be generated when generate_meshes = True in useful_scripts.py, but an error occurs when evaluate_patch_network_metrics() runs again with generate_meshes = False:

    Traceback (most recent call last):
      File "/root/miniconda3/lib/python3.8/multiprocessing/pool.py", line 125, in worker
        result = (True, func(*args, **kwds))
      File "/root/miniconda3/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
        return list(map(*args))
      File "useful_scripts.py", line 1101, in _evaluate_object
        iou = intersection_over_union(groundtruth_mesh_file, regressed_trimesh)
      File "useful_scripts.py", line 1028, in intersection_over_union
        watertight_result = precompute_iou_on_watertight(groundtruth_mesh_file, num_points=num_points)  # num_points x 4 (x,y,z, 1.0f if inside else 0.0f)
      File "useful_scripts.py", line 1013, in precompute_iou_on_watertight
        max_bb, min_bb = _get_bounding_box(groundtruth_trimesh.vertices)
    AttributeError: 'Scene' object has no attribute 'vertices'

    Is it because these objects are not watertight? And how can I fix it?

    opened by QtEngineer 1
  • Errors when evaluating

    Hi, an error happened when I tried to use evaluate_patch_network_metrics() in useful_scripts.py:

    Traceback (most recent call last):
      File "useful_scripts.py", line 1702, in main
        evaluate_patch_network_metrics()
      File "useful_scripts.py", line 1141, in evaluate_patch_network_metrics
        visualize_mixture(results_folder=results_folder, grid_resolution=grid_resolution, mesh_files=evaluate_json, data_source=data_source, break_if_latent_does_not_exist=True, output_name=regressed_meshes_folder, checkpoints=3)
      File "useful_scripts.py", line 55, in visualize_mixture
        network, patch_latent_size, mixture_latent_size, load_weights_into_network, sdf_to_latent, patch_forward, latent_to_mesh, get_training_latent_vectors, specs = _setup_torch_network(results_folder, checkpoint=checkpoints)
      File "useful_scripts.py", line 135, in _setup_torch_network
        network = Network(patch_latent_size=patch_latent_size,
      File "/root/autodl-tmp/patchnets_neuralpull/code/networks/deep_sdf_decoder.py", line 357, in __init__
        self._init_patch_network_training(train_patch_network, patch_network_pretrained_path, results_folder)
      File "/root/autodl-tmp/patchnets_neuralpull/code/networks/deep_sdf_decoder.py", line 404, in _init_patch_network_training
        self.load_state_dict(current_weights)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1223, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for Decoder:
        size mismatch for patch_lin0.weight_v: copying a param with shape torch.Size([128, 35]) from checkpoint, the shape in current model is torch.Size([128, 131]).
        size mismatch for patch_lin3.bias: copying a param with shape torch.Size([224]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for patch_lin3.weight_g: copying a param with shape torch.Size([224, 1]) from checkpoint, the shape in current model is torch.Size([128, 1]).
        size mismatch for patch_lin3.weight_v: copying a param with shape torch.Size([224, 128]) from checkpoint, the shape in current model is torch.Size([128, 128]).

    I am confused. The networks recorded in the train specs.json and the test specs.json are identical. How can I fix it?

    opened by QtEngineer 1
  • How to get the gradient of the input xyz

    I wanted to get the gradient of the input xyz, but the function returned None, and requires_grad=True / certain_grad does not work. Is it because the network's input concatenates x and xyz?

    def patch_network_forward(self, input):
        xyz = input[:, -original_coordinate_size:]
        ...
        for layer in range(0, self.num_layers - 1):
            lin = getattr(self, "patch_lin" + str(layer))
            if layer in self.latent_in:
                x = torch.cat([x, input], 1)
            elif layer != 0 and self.xyz_in_all:
                x = torch.cat([x, xyz], 1)
            x = lin(x)
            if layer < self.num_layers - 2:
                if (
                    self.norm_layers is not None
                    and layer in self.norm_layers
                    and not self.weight_norm
                ):
                    bn = getattr(self, "patch_bn" + str(layer))
                    x = bn(x)
                # x = self.softplus(x)
                x = self.relu(x)
                # x = self.elu(x)
                if self.dropout is not None and layer in self.dropout:
                    x = F.dropout(x, p=self.dropout_prob, training=self.training)

    opened by QtEngineer 0
  • Pretrained model request

    Hello,

    Thank you for releasing this amazing work. By the way, could you kindly release your pretrained models so that others can evaluate your method without training from scratch?

    Best.

    opened by Huang-ZhangJin 0
  • How to configure the environment?

    Hi, thanks for your work.

    I want to reproduce this code on my machine, but I found no Python environment dependency files in this repo, like environment.yml or requirements.txt, so I wonder how to prepare a suitable environment. I cannot find these files in the DeepSDF repo either.

    Another question is how to configure the dataset: which datasets do I need to download, and where should I put them in order to train the network on my own?

    opened by Co1lin 1
  • How to get initialize_mixture_latent_vector?

    initialization_file = sdf_filename + "_init_" + str(patch_latent_size) + "_" + str(num_patches) + "_" + str(surface_sdf_threshold) + "_" + str(final_scaling_increase_factor) + ("_tiny" if use_tiny_patchnet else "") + ".npy"

    but there's no such file or directory: 'datasets/shapenet_v2/preprocessed/SdfSamples/ShapeNetV2/04554684/1a23fdbb1b6d4c53902c0a1a69e25bd9.npz_init_128_30_0.02_1.2.npy' (I only have 1a23fdbb1b6d4c53902c0a1a69e25bd9.npz)

    opened by QtEngineer 6