Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes (CVPR 2021 Oral)

Overview

Official code release for NGLOD. For technical details, please refer to:

Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes
Towaki Takikawa*, Joey Litalien*, Kangxue Yin, Karsten Kreis, Charles Loop, Derek Nowrouzezahrai, Alec Jacobson, Morgan McGuire, and Sanja Fidler
In Computer Vision and Pattern Recognition (CVPR), 2021 (Oral)
[Paper] [Bibtex] [Project Page]

If you find this code useful, please consider citing:

@inproceedings{takikawa2021nglod,
    title = {Neural Geometric Level of Detail: Real-time Rendering with Implicit {3D} Shapes}, 
    author = {Towaki Takikawa and
              Joey Litalien and 
              Kangxue Yin and 
              Karsten Kreis and 
              Charles Loop and 
              Derek Nowrouzezahrai and 
              Alec Jacobson and 
              Morgan McGuire and 
              Sanja Fidler},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2021},
}

New: Sparse training code with Kaolin is now available in app/spc!

Directory Structure

sol-renderer contains our real-time rendering code.

sdf-net contains our training code.

Within sdf-net:

sdf-net/lib contains the core library code.

sdf-net/app contains standalone applications that users can run.

Getting started

Python dependencies

The easiest way to get started is to create a virtual Python 3.8 environment:

conda create -n nglod python=3.8
conda activate nglod
pip install --upgrade pip
pip install -r ./infra/requirements.txt

The code also relies on OpenEXR, which requires a system library:

sudo apt install libopenexr-dev 
pip install pyexr

For the full list of dependencies, see ./infra/requirements.txt.

Building CUDA extensions

To build the corresponding CUDA kernels, run:

cd sdf-net/lib/extensions
chmod +x build_ext.sh && ./build_ext.sh

The above instructions were tested on Ubuntu 18.04/20.04 with CUDA 10.2/11.1.

Training & Rendering

Note. All following commands should be run from within the sdf-net directory.

Download sample data

To download a cool armadillo:

wget https://raw.githubusercontent.com/alecjacobson/common-3d-test-models/master/data/armadillo.obj -P data/

To download a cool matcap file:

wget https://raw.githubusercontent.com/nidorx/matcaps/master/1024/6E8C48_B8CDA7_344018_A8BC94.png -O data/matcap/green.png
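Note that wget -O fails if the target directory does not exist yet; if needed, create it first:

mkdir -p data/matcap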

Training from scratch

python app/main.py \
    --net OctreeSDF \
    --num-lods 5 \
    --dataset-path data/armadillo.obj \
    --epoch 250 \
    --exp-name armadillo

This will populate _results with TensorBoard logs.
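To monitor training, you can point TensorBoard at the log directory (the default, per the trainer configuration, is _results/logs/runs/):

tensorboard --logdir _results/logs/runs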

Rendering the trained model

If you set custom network parameters during training, you need to pass the same values to the renderer. For example, if you set --feature-dim 16 above, you need to set it here too.

python app/sdf_renderer.py \
    --net OctreeSDF \
    --num-lods 5 \
    --pretrained _results/models/armadillo.pth \
    --render-res 1280 720 \
    --shading-mode matcap \
    --lod 4

By default, this will populate _results with the rendered image.

If you want to export a .npz model which can be loaded into the C++ real-time renderer, add the argument --export path/file.npz. Note that the renderer only supports the base Neural LOD configuration (the default parameters with OctreeSDF).
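For example, to export the armadillo model trained above (the output path is arbitrary):

python app/sdf_renderer.py \
    --net OctreeSDF \
    --num-lods 5 \
    --pretrained _results/models/armadillo.pth \
    --export _results/armadillo.npz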

Core Library Development Guide

To add new functionality, you will likely want to make edits to the files in lib.

We try to keep the code modular, so that key components such as trainer.py and renderer.py rarely need to be modified when adding new functionality.

For example, to add a new network architecture, simply add a new Python file in lib/models containing a class that inherits from a base class of your choice. You will usually only need to implement the sdf method, which defines the forward pass, but you can override other methods if you need more custom behavior.

By default, the loss functions are specified via a CLI argument, which the code parses, iterating through each loss function. The network architecture class is selected the same way: pass the exact class name on the CLI, and don't forget to add a line in lib/models/__init__.py so the name resolves.
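As a purely illustrative sketch (the base class name BaseSDF and the args-based constructor are assumptions, not the repository's confirmed API), a new model file might look like:

# lib/models/MySDF.py -- hypothetical example
import torch.nn as nn

from .BaseSDF import BaseSDF  # assumed base class; check lib/models for the real ones

class MySDF(BaseSDF):
    def __init__(self, args):
        super().__init__(args)
        # A small MLP mapping 3D coordinates to a signed distance.
        self.net = nn.Sequential(
            nn.Linear(3, args.hidden_dim),
            nn.ReLU(),
            nn.Linear(args.hidden_dim, 1))

    def sdf(self, x):
        # x: (N, 3) query points; returns (N, 1) signed distances.
        return self.net(x)

After adding from .MySDF import MySDF to lib/models/__init__.py, the class could then be selected with --net MySDF.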

App Development Guide

To make apps that use the core library, add the sdf-net directory to the Python sys.path so the modules load correctly. Then, you will likely want to reuse the CLI parser defined in lib/options.py to save time. You can then add a new argument group app to the parser for custom CLI arguments used alongside the defaults. See app/sdf_renderer.py for an example.
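As an illustration, a hypothetical new app might start like this (the parse_options helper and its return_parser argument are assumptions about lib/options.py, not its confirmed API):

# app/my_app.py -- hypothetical sketch
import os
import sys

# Make the core library importable when running from the sdf-net directory.
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from lib.options import parse_options  # assumed helper; see lib/options.py

if __name__ == '__main__':
    parser = parse_options(return_parser=True)  # assumption: the parser is returned for extension
    app_group = parser.add_argument_group('app')
    app_group.add_argument('--my-flag', action='store_true',
                           help='Custom app-specific argument.')
    args = parser.parse_args()
    # ... use args together with the core library ...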

Examples of things that are considered apps include, but are not limited to:

  • visualizers
  • training code
  • downstream applications

Third-Party Libraries

This repository includes code derived from three third-party libraries, all distributed under the MIT License:

https://github.com/zekunhao1995/DualSDF

https://github.com/rogersce/cnpy

https://github.com/krrish94/nerf-pytorch

Acknowledgements

We would like to thank Jean-Francois Lafleche, Peter Shirley, Kevin Xie, Jonathan Granskog, Alex Evans, and Alex Bie at NVIDIA for interesting discussions throughout the project. We also thank Peter Shirley, Alexander Majercik, Jacob Munkberg, David Luebke, Jonah Philion and Jun Gao for their help with paper editing.

We also thank Clement Fuji Tsang for his help with the code release.

The structure of this repo was inspired by PIFu: https://github.com/shunsukesaito/PIFu

Comments
  • Question about modeling a 3D shape using marching cubes

    Question about modeling a 3D shape using marching cubes

    Hi, thanks for your great work. I trained an OctreeSDF model at LOD 5 and want to run marching cubes on it, similar to SIREN. Unfortunately, it doesn't work: the output .ply model has no recognizable shape, just noise scattered through space.

    Sorry for the broken English and my naive question. Looking forward to your answer.
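    A minimal sketch of how one might run marching cubes on a trained SDF, assuming the network exposes an sdf(points) method taking an (N, 3) tensor with coordinates in [-1, 1]^3 (neither the method name nor the normalization is confirmed here):

        import numpy as np
        import torch
        from skimage import measure

        def extract_mesh(net, res=256, device='cuda'):
            # Densely sample the SDF on a regular grid over [-1, 1]^3.
            lin = np.linspace(-1.0, 1.0, res)
            xx, yy, zz = np.meshgrid(lin, lin, lin, indexing='ij')
            pts = np.stack([xx, yy, zz], axis=-1).reshape(-1, 3)

            vals = []
            with torch.no_grad():
                for chunk in np.array_split(pts, 64):  # chunk to fit in GPU memory
                    x = torch.from_numpy(chunk).float().to(device)
                    vals.append(net.sdf(x).cpu().numpy())
            grid = np.concatenate(vals).reshape(res, res, res)

            # The surface is the zero level set of the SDF.
            verts, faces, normals, _ = measure.marching_cubes(grid, level=0.0)
            verts = verts / (res - 1) * 2.0 - 1.0  # voxel indices -> [-1, 1]
            return verts, faces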

    opened by vincenthesiyuan 5
  • Is there an ETA on the code?

    Is there an ETA on the code?

    Hello @tovacinni, thanks for this great work! The results are really impressive. I'm also very interested since it can be seamlessly integrated into a 3D reconstruction project I'm currently working on. You said on Twitter that the code will be released soon. Could you please give a more detailed ETA on the code release? For example in how many weeks can it be released? It would be really helpful for me to plan my current project accordingly. Thanks very much!

    opened by JiamingSuen 5
  • Args to Export .npz File

    Args to Export .npz File

    Does anyone know how to set the args to successfully run with --export PATH/TO/FILE.npz? It looks like several variables in lib/tracer/SphereTracer.py, such as self.num_steps, self.camera_clamp, and self.min_dis, are None during rendering.

    opened by zhaoyuanyuan2011 4
  • Error running TrimMesh

    Error running TrimMesh

    Hi, thank you for releasing the code! I've been looking into replicating your results; however, I'm running into an issue when preprocessing the input mesh, specifically while running the trimmesh operation here (it returns only False elements).

    I'm using the armadillo mesh as input and running PyTorch 1.8 and CUDA 11.1, and as far as I can tell I managed to build all the extensions successfully. Have you seen this before? And would it be possible to upload the "normalized" version of the mesh so I can check whether the rest of the pipeline works? Thanks a lot for the help and for your work!

    opened by RaresAmbrus 4
  • Question about the details of the decoder

    Question about the details of the decoder

    Hi, thanks for your excellent work! Your paper states that the decoder occupies 90KB, but I can't find the corresponding implementation in the code. Can you tell me the details of the decoder, for example, which parts are included? Thank you very much!

    opened by Zhanyuan23333 3
  • fixed requirements for google colab

    fixed requirements for google colab

    Little updates to requirements to be able to run nglod training (app.main) on Google Colab:

    1. pysdf missing from requirements --> added to requirements.txt
    2. Some flows use torch.clip, which is only available from torch 1.7.0 and onwards (tested with CUDA 11) --> requirements.txt updated
    3. MeshDataset has a stale import spc statement which fails the training code --> removed from MeshDataset

    Another change required to run training, and not included in this pull-request: The armadillo_normalized.obj model is n/a, so possibly the normalization code of MeshDataset should be uncommented (I'm not sure how other flows may be affected).

    opened by orperel 3
  • Question about the implementation of octree

    Question about the implementation of octree

    Hi @tovacinni, thanks for your excellent work! Recently, many sparse octree-based rendering papers have been published, e.g. PlenOctrees. However, in the PlenOctrees code for building the octree, they initialize the octree with a dense buffer, like:

    class N3Tree(nn.Module):
        """
        PyTorch :math:`N^3`-tree library with CUDA acceleration.
        By :math:`N^3`-tree we mean a 3D tree with branching factor N at each interior node,
        where :math:`N=2` is the familiar octree.

        .. warning::
            `nn.Parameters` can change size, which
            makes current optimizers invalid. If any refine() or
            shrink_to_fit() call returns True,
            please re-make any optimizers
        """
        def __init__(self, N=2, data_dim=4, depth_limit=10,
                init_reserve=1, init_refine=0, geom_resize_fact=1.5,
                radius=0.5, center=[0.5, 0.5, 0.5],
                data_format="RGBA",
                extra_data=None,
                map_location="cpu"):
            """
            Construct N^3 Tree
    
            :param N: int branching factor N
            :param data_dim: int size of data stored at each leaf
            :param depth_limit: int maximum depth of tree to stop branching/refining
            :param init_reserve: int amount of nodes to reserve initially
            :param init_refine: int number of times to refine entire tree initially
            :param geom_resize_fact: float geometric resizing factor
            :param radius: float or list, 1/2 side length of cube (possibly in each dim)
            :param center: list center of space
            :param data_format: a string to indicate the data format
            :param extra_data: extra data to include with tree
            :param map_location: str device to put data
    
            """
            super().__init__()
            assert N >= 2
            assert depth_limit >= 0
            self.N : int = N
            self.data_dim : int = data_dim
    
            if init_refine > 0:
                for i in range(1, init_refine + 1):
                    init_reserve += (N ** i) ** 3
            
            # Here N is the voxel size. 
            self.register_parameter("data", nn.Parameter(torch.zeros(init_reserve, N, N, N, data_dim, device=map_location)))
            self.register_buffer("child", torch.zeros(init_reserve, N, N, N, dtype=torch.int32, device=map_location))
    

    How is your octree implemented? Does it stay sparse during construction, and does it support large-scale scenes? Looking forward to your answer.

    opened by fishfishson 3
  • Storage problem of your paper

    Storage problem of your paper

    What is included in the storage size (KB) of the different LODs in Table 1 of your paper? Just the octree-related data structures saved in the weight file? Looking forward to your reply!

    opened by dhuwzj 2
  • Installation Help

    Installation Help

    Any advice on how to get nglod running on NVIDIA Ampere GPUs? I tried to install and run on an Ubuntu 20.04, CUDA 11.5, A4000 16 GB system, but it doesn't work. I don't think PyTorch 1.6 supports Ampere. Would it matter if I installed another version of PyTorch?

    opened by sixftninja 2
  • Crash using Kaolin SPC

    Crash using Kaolin SPC

    Hi there,

    First of all congrats on your amazing work and for sharing it!

    I am trying to run the Kaolin SPC implementation (app/spc folder) on the Armadillo, only on the last LOD level (--return-lst). I am using:

    • Ubuntu 18.04
    • pytorch 1.9.0
    • Kaolin 0.9.1

    I had to do a few changes first:

    • change the number of samples in mesh_to_octree() down to 1,000,000 vertices to run it on my little 1080 Ti
    • change the size of the split in the SPCDataset to 10e6 to be compatible with the new sample size (as far as I understood).

    Now I am getting a crash in the SPC.interpolate() method. Unfortunately, it happens in CUDA and the error log is not very meaningful (see below). I tracked it down to the following issue: in line 97 of app/spc/SPC.py: return self._interpolate(coeffs, self.features[lod][self.trinkets[pidx]])

    self.trinkets[pidx] (resulting from the query) contains indices that are higher than the size of self.features[lod] where lod is the last level.

    It should be easy to reproduce by downloading the Armadillo obj and running the following command: python app/spc/main_spc.py --num-lods 5 --epoch 250 --exp-name armadillo --mesh-path data/armadillo.obj --return-lst

    Thanks in advance,

    Pierre

    Here is my log up to the crash:

    [23/09 10:35:30] [INFO] Parameters:

               'l2_loss': 1.0,
               'mesh_path': 'data/armadillo.obj',
               'normalize_mesh': False},
      'dataset': { 'analytic': False,
                   'block_res': 7,
                   'build_dataset': False,
                   'dataset_path': None,
                   'exclude': None,
                   'get_normals': False,
                   'glsl_path': '../sdf-viewer/data-files/sdf',
                   'include': None,
                   'mesh_batch': False,
                   'mesh_dataset': 'MeshDataset',
                   'mesh_subset_size': -1,
                   'num_samples': 100000,
                   'raw_obj_path': None,
                   'sample_mode': ['rand', 'near', 'near', 'trace', 'trace'],
                   'sample_tex': False,
                   'samples_per_voxel': 256,
                   'train_valid_split': None,
                   'trim': False,
                   'viewer_path': '../sdf-viewer'},
      'global': { 'debug': False,
                  'exp_name': 'armadillo',
                  'ngc': False,
                  'perf': False,
                  'seed': None,
                  'valid_every': 1,
                  'valid_only': False,
                  'validator': None},
      'net': { 'base_lod': 2,
               'feat_sum': False,
               'feature_dim': 32,
               'feature_size': 4,
               'ff_dim': -1,
               'ff_width': 16.0,
               'freeze': -1,
               'hidden_dim': 128,
               'jit': False,
               'joint_decoder': False,
               'joint_feature': False,
               'net': 'OverfitSDF',
               'num_layers': 1,
               'num_lods': 5,
               'periodic': False,
               'pos_enc': False,
               'pos_invariant': False,
               'pretrained': None,
               'skip': None},
      'optimizer': { 'grad_method': 'finitediff',
                     'loss': ['l2_loss'],
                     'lr': 0.001,
                     'optimizer': 'adam'},
      'optional arguments': {'help': None},
      'positional arguments': {},
      'renderer': { 'ao': False,
                    'camera_clamp': [-5, 10],
                    'camera_fov': 30,
                    'camera_lookat': [0, 0, 0],
                    'camera_origin': [-2.8, 2.8, -2.8],
                    'camera_proj': 'persp',
                    'ground_height': None,
                    'interpolate': None,
                    'lod': None,
                    'matcap_path': 'data/matcap/green.png',
                    'min_dis': 0.0003,
                    'num_steps': 256,
                    'render_batch': 0,
                    'render_every': 1,
                    'render_res': [512, 512],
                    'shading_mode': 'matcap',
                    'shadow': False,
                    'sol': False,
                    'step_size': 1.0,
                    'tracer': 'SphereTracer'},
      'trainer': { 'batch_size': 512,
                   'epochs': 250,
                   'grow_every': -1,
                   'growth_strategy': 'increase',
                   'latent': False,
                   'latent_dim': 128,
                   'logs': '_results/logs/runs/',
                   'loss_sample': -1,
                   'model_path': '_results/models',
                   'only_last': False,
                   'resample_every': 10,
                   'return_lst': True,
                   'save_all': False,
                   'save_as_new': False,
                   'save_every': 1}}
    [23/09 10:35:30] [INFO] Training on None
    [23/09 10:35:31] [INFO] Using GeForce GTX 1080 Ti with CUDA v11.1
    [23/09 10:35:31] [INFO] Active LODs: [2, 2, 3, 4, 5]
    [23/09 10:35:31] [INFO] Built dual octree and trinkets
    [23/09 10:35:31] [INFO] # Feature Vectors: 9243
    [23/09 10:35:32] [INFO] Total number of parameters: 317541
    [23/09 10:35:32] [INFO] Block Indices: [0]
    [23/09 10:35:32] [INFO] Model configured and ready to go
    [23/09 10:35:32] [INFO] Initializing dataset...
    [23/09 10:36:15] [INFO] Active Block IDX: 0
    [23/09 10:36:15] [INFO] Resampling...
    [23/09 10:36:15] [INFO] Permuted Samples
    [23/09 10:36:15] [INFO] Reset DataLoader
    /opt/conda/conda-bld/pytorch_1631630839582/work/aten/src/ATen/native/cuda/IndexKernel.cu:97: operator(): block: [151,0,0], thread: [32,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    opened by theponpon 2
  • Questions about training for multiple shapes and inference

    Questions about training for multiple shapes and inference

    Hello, thanks for the wonderful works and code! While trying to reproduce this work for shapenet150, I've got several questions.

    1. It seems that the released code trains a network for a single mesh. How should training for multiple shapes be done? As far as I understand, each shape has its own sparse features (defined with FeatureVolume in the code). Am I right that I have to train each shape individually, with a shared MLP and a per-shape feature volume, or is the model shared across the entire dataset, meaning the feature volume is shared as well?

    1-1) If the feature volume is trained over all data, how is the minibatch composed? Should shapes be sampled from the dataset so that the dimension is (num_shape) x (num_samples) x (dimension)? Or should points be sampled from the dataset regardless of shape (in which case the minibatch dim will be (num_samples) x (dimension))?

    2. What is the inference process once we have a fully trained model and an arbitrary point cloud without ground-truth SDF values? In the case of DeepSDF, the trained net is fixed and the feature vector is optimized with a slightly different loss function at inference time (see the sketch below).

    Sorry for the broken English. If the questions are unclear, I'll explain in more detail. Thanks in advance!
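    A minimal sketch of the DeepSDF-style inference mentioned in question 2, assuming a conditional net.sdf(points, latent) signature (an assumption, not this repository's confirmed API):

        import torch
        import torch.nn.functional as F

        def infer_latent(net, points, sdf_gt, latent_dim=128, iters=500, lr=1e-3, device='cuda'):
            # Freeze the decoder; only the per-shape latent code is optimized.
            for p in net.parameters():
                p.requires_grad_(False)
            latent = torch.zeros(1, latent_dim, device=device, requires_grad=True)
            optim = torch.optim.Adam([latent], lr=lr)
            for _ in range(iters):
                optim.zero_grad()
                # Assumed signature: the SDF is conditioned on the latent code.
                pred = net.sdf(points, latent.expand(points.shape[0], -1))
                loss = F.l1_loss(pred.squeeze(-1), sdf_gt)
                loss.backward()
                optim.step()
            return latent.detach()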

    opened by shylee2021 2
  • Can you share chamfer distance evaluation code for SPC part?

    Can you share chamfer distance evaluation code for SPC part?

    Thanks for your amazing work! I was wondering whether you could share the code for computing metrics for the SPC part. I tried to train on my own data using SPC, but I get wrong chamfer distance results when using other codebases. Thanks again!

    opened by RhodaLiu17 0
  • Can't export .npz files

    Can't export .npz files

    Hi, when running the example command in the README for rendering along with the --export flag:

        python app/sdf_renderer.py \
            --net OctreeSDF \
            --num-lods 5 \
            --pretrained _results/models/armadillo.pth \
            --render-res 1280 720 \
            --shading-mode matcap \
            --lod 4 \
            --export file.npz

    I get the following error message:

        Total number of parameters: 10146213
        Traceback (most recent call last):
          File "app/sdf_renderer.py", line 106, in <module>
            net = SOL_NGLOD(net)
          File "/home/luis/nglod/sdf-net/lib/models/SOL_NGLOD.py", line 50, in __init__
            self.vs = voxel_sparsify(2000000, net, self.lod, sol=False)
          File "/home/luis/nglod/sdf-net/lib/renderutils.py", line 63, in voxel_sparsify
            surface = sample_surface(n, net, sol=sol, device=device)[:n]
          File "/home/luis/nglod/sdf-net/lib/renderutils.py", line 33, in sample_surface
            tracer = SphereTracer(device, sol=sol)
        TypeError: __init__() got an unexpected keyword argument 'sol'

    I cannot figure out how to fix it.

    Thank you in advance

    opened by lccatala 1
  • The accuracy of the predicted SDF function

    The accuracy of the predicted SDF function

    Hi, thank you for your wonderful work. I have tested the performance of your models and ran into some trouble with the accuracy. Here is my test process: I have a mesh model, a sphere centered at (0.5, 0.5, 0.5) with radius 0.5, so the mesh surface contains points such as [0.5, 1, 0.5], [1, 0.5, 0.5], [0.5, 0.5, 1]. I used your code to train a model with the following command: python app/main.py --net OctreeSDF --num-lods 5 --dataset-path my.obj --epoch 250 --exp-name test

    I then used the model to predict SDF values around the points [0.5, 1, 0.5], [1, 0.5, 0.5], [0.5, 0.5, 1]. I made up three point sets, [0.5, i, 0.5], [i, 0.5, 0.5], [0.5, 0.5, i] for i in numpy.linspace(0.997, 1.003, 1001), and tested the accuracy of the model on them. [results figure]

    I find that the SDF values change sign at [0.5, 0.99895, 0.5], [0.999027, 0.5, 0.5], [0.5, 0.5, 0.999027], which are about 0.001 from the ground truth. And at the points [0.5, 1, 0.5], [1, 0.5, 0.5], [0.5, 0.5, 1], the predicted SDF values are about 0.002, which is also too large compared with the 1e-6 training L2 loss.

    Could you please give some explanation of this phenomenon? And is there any suggestion to improve the accuracy of the predicted SDF values?

    I'm looking forward to your reply. It is quite important for me.

    opened by 1999kevin 0
  • Rendering on Mobile

    Rendering on Mobile

    Has anyone tried rendering on a mobile device? While I would assume that sphere casting is slower than a traditional rendering pipeline, it seems to me that there are benefits in how it scales. I deal a fair amount with user-generated 3D models. Unlike media such as images or video, you can't just down-res 3D models (duh).

    Could a representation like this be used to ensure that content uses a predictable amount of resources?

    opened by FreakTheMighty 0
  • Question about generating parents in create_trinkets

    Question about generating parents in create_trinkets

    Hi, thanks for the great work. I want to use your spc code, and found a function that can generate a point's corners and parents. But the generated parent index has NaN values.

    In my understanding, the map created in line 8 should map from the morton codes of level i's parents to the index of the node in spc.points. But here the keys are the morton codes of the nodes in level i, which mismatches the index used in pd.Series.reindex (i.e., the morton codes of the parents). Do you know how to fix this? Or is it supposed to behave like this?

            if i == 0:
                parents.append(torch.LongTensor([-1]).cuda())
            else:
                # Dividing by 2 will yield the morton code of the parent
                pc = torch.floor(points / 2.0).short()
                mt_pc = spc_ops.points_to_morton(pc.contiguous())
                mt_pc_dest = spc_ops.points_to_morton(points)
                plut = dict(zip(mt_pc_dest.cpu().numpy(), np.arange(mt_pc_dest.shape[0])))
                pc_idx = pd.Series(plut).reindex(mt_pc.cpu().numpy()).values
                parents.append(torch.LongTensor(pc_idx).cuda())
    

    By the way, I tried to modify the code as follows, and it seems to work:

            if i == 0:
                parents.append(torch.LongTensor([-1]).cuda())
            else:
                # Dividing by 2 will yield the morton code of the parent
                pc = torch.floor(points / 2.0).short()
                mt_pc = spc_ops.points_to_morton(pc.contiguous())
                plut = dict(zip(mt_pc_dest.cpu().numpy(), np.arange(mt_pc_dest.shape[0])))
                pc_idx = pd.Series(plut).reindex(mt_pc.cpu().numpy()).values + pyramid[1, i-1].item()
                parents.append(torch.LongTensor(pc_idx).cuda())
            mt_pc_dest = spc_ops.points_to_morton(points)
    
    opened by Burningdust21 0