Overview

Mixture of Volumetric Primitives -- Training and Evaluation

This repository contains code to train and render Mixture of Volumetric Primitives (MVP) models.

If you use Mixture of Volumetric Primitives in your research, please cite:
Mixture of Volumetric Primitives for Efficient Neural Rendering
Stephen Lombardi, Tomas Simon, Gabriel Schwartz, Michael Zollhoefer, Yaser Sheikh, Jason Saragih
ACM Transactions on Graphics (SIGGRAPH 2021) 40, 4. Article 59

@article{Lombardi21,
author = {Lombardi, Stephen and Simon, Tomas and Schwartz, Gabriel and Zollhoefer, Michael and Sheikh, Yaser and Saragih, Jason},
title = {Mixture of Volumetric Primitives for Efficient Neural Rendering},
year = {2021},
issue_date = {August 2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {40},
number = {4},
issn = {0730-0301},
url = {https://doi.org/10.1145/3450626.3459863},
doi = {10.1145/3450626.3459863},
journal = {ACM Trans. Graph.},
month = {jul},
articleno = {59},
numpages = {13},
keywords = {neural rendering}
}

Requirements

  • Python (3.8+)
    • PyTorch
    • NumPy
    • SciPy
    • Pillow
    • OpenCV
  • ffmpeg (in $PATH to render videos)
  • CUDA 10 or higher
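
Several of the build problems reported in the comments below come down to CUDA version or compute-capability mismatches, so it can be worth checking the environment before compiling the extensions. The snippet below is an optional, minimal check using standard PyTorch calls; it is not part of the original setup instructions.

# Optional environment check (not part of the original instructions).
# The compute capability reported here must be covered by the -arch flag
# used when the CUDA extensions are compiled (see the comments below).
import torch
print("PyTorch:", torch.__version__)
print("CUDA runtime:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Compute capability:", torch.cuda.get_device_capability())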

Building

The repository contains two CUDA PyTorch extensions. To build, cd to each directory and use make:

cd extensions/mvpraymarch
make
cd -
cd extensions/utils
make
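
After a successful build, you can sanity-check that the compiled modules import. This is an optional sketch: the module names mvpraymarchlib and utilslib are taken from the build logs and error reports quoted in the comments section, and the snippet assumes it is run from the repository root (the extensions are built in place in their own directories).

# Optional sanity check that the in-place builds produced importable modules.
# Run from the repository root; import torch first so the extensions'
# shared-library dependencies are loaded.
import sys
import torch
sys.path += ["extensions/mvpraymarch", "extensions/utils"]
import mvpraymarchlib
import utilslib
print("CUDA extensions import OK")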

How to Use

There are two main scripts in the root directory: train.py and render.py. Both scripts take an experiment configuration file that defines the dataset to use and the options for the model (e.g., the type of decoder).
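
As a rough orientation, the sketch below shows the kind of thing such a configuration file controls. It is a hypothetical illustration only; every name in it is a placeholder, and the real interface is defined by the example configs shipped in the experiments directory (e.g., experiments/dryice1/experiment1/config.py).

# Hypothetical, heavily simplified sketch of an experiment config.
# All names here are placeholders; the example configs in the experiments
# directory define the actual interface.
import os

class Experiment:
    # dataset selection
    datadir = "/path/to/capture"   # multi-view images, cameras, tracked meshes
    bgpath = os.path.join(datadir, "bg", "image", "cam{cam}", "image0000.png")
    # model options
    decodertype = "mvp"            # e.g. which decoder architecture to use
    nprims = 256                   # e.g. number of volumetric primitives
    # optimization options
    batchsize = 8
    maxiter = 500000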

Download the latest release on Github to get the experiments directory.

To train the model:

python train.py experiments/dryice1/experiment1/config.py

To render a video of a trained model:

python render.py experiments/dryice1/experiment1/config.py

See ARCHITECTURE.md for more details.

Training Data

See the latest Github release for data.

Using your own Data

Implement your own Dataset class to return images and camera parameters. An example is given in data.multiviewvideo. A dataset class will need to return camera pose parameters, image data, and tracked mesh data.
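
Below is a minimal, hypothetical sketch of such a dataset class, written as a standard PyTorch Dataset that returns one dictionary per (frame, camera) pair. The key names, shapes, and calibration format are illustrative assumptions, not the actual interface; data.multiviewvideo remains the authoritative example of what the training code expects.

# Minimal sketch of a custom dataset (hypothetical key names and shapes);
# see data.multiviewvideo for the real reference implementation.
import numpy as np
import torch.utils.data

class MyCaptureDataset(torch.utils.data.Dataset):
    def __init__(self, nframes, ncams, nverts, width=1024, height=768):
        self.nframes, self.ncams, self.nverts = nframes, ncams, nverts
        self.width, self.height = width, height

    def __len__(self):
        # one sample per (frame, camera) pair
        return self.nframes * self.ncams

    def __getitem__(self, idx):
        frame, cam = divmod(idx, self.ncams)
        return {
            # camera pose parameters (extrinsics and intrinsics); placeholders here
            "campos": np.zeros(3, dtype=np.float32),
            "camrot": np.eye(3, dtype=np.float32),
            "focal": np.array([1000.0, 1000.0], dtype=np.float32),
            "princpt": np.array([self.width / 2.0, self.height / 2.0], dtype=np.float32),
            # image data as a float32 C x H x W array
            "image": np.zeros((3, self.height, self.width), dtype=np.float32),
            # tracked mesh data, e.g. per-frame vertex positions
            "verts": np.zeros((self.nverts, 3), dtype=np.float32),
        }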

How to Extend

See ARCHITECTURE.md

License

See the LICENSE file for details.

Comments
  • ModuleNotFoundError: No module named 'utilslib'

    Hi, thanks for sharing this awesome work, have a nice day :-) When I try to render the demo with the experiment data, I get this error. By the way, I have already compiled the extensions. (Screenshot: 2022-07-06 23-31-21)

    opened by Myzhencai 9
  • Builds successfully, but cannot run

    Traceback (most recent call last):
      File "render.py", line 118, in <module>
        output, _ = ae(
      File "/home/an/anaconda3/envs/py38-t19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/an/an_project/mvp/models/volumetric.py", line 286, in forward
        rayrgba, rmlosses = self.raymarcher(raypos, raydir, tminmax,
      File "/home/an/anaconda3/envs/py38-t19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/an/an_project/mvp/models/raymarchers/mvpraymarcher.py", line 32, in forward
        rayrgba = mvpraymarch(raypos, raydir, dt, tminmax,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 273, in mvpraymarch
        out = MVPRaymarch.apply(raypos, raydir, stepsize, tminmax,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 119, in forward
        sortedobjid, nodechildren, nodeaabb = build_accel(primtransfin,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 44, in build_accel
        sortedobjid = (torch.arange(N*K, dtype=torch.int32, device=dev) % K).view(N, K)
    RuntimeError: CUDA error: no kernel image is available for execution on the device
    
    

    Environment: Python 3.8.13, PyTorch 1.13.0a0+git4503c45, CUDA 11.3.0, gcc 8.4.0


    Hi, are you having similar issues?
    pytorch is working normally.

    opened by AN-ZE 8
  • What's basetransf matrix used for?

    Hi, I have a small question about applying my own data with this code. What is the self.basetransf matrix in multiviewvideo.py used for? I see this 3x4 matrix is applied to all the camera poses and all the frametransf, but what is its purpose? :)

    https://github.com/facebookresearch/mvp/blob/d758f53662e79d7fec885f4dd1a3ee457f7c4b00/data/multiviewvideo.py#L410-L415

    https://github.com/facebookresearch/mvp/blob/d758f53662e79d7fec885f4dd1a3ee457f7c4b00/data/multiviewvideo.py#L385-L387

    Besides, I find it necessary to apply this basetransf, because when I change it to an identity matrix, training does not converge. So how do I obtain my own basetransf?

    Your answer will help me a lot! Thank you!

    opened by Qingcsai 5
  • Background image used in Lombardi's MVP not found in multiface dataset

    The multiface dataset has been used in "Mixture of Volumetric Primitives for Efficient Neural Rendering". The "mvp" config file needs the path to the background image, but I can't find the background image in the multiface dataset. The code in the config file is: bgpath = os.path.join(imagepathbase, 'bg', 'image', 'cam{cam}', 'image0000.png').

    opened by shuishiwojiade 2
  • Cannot build the CUDA PyTorch extensions

    May I know what pytorch version and cuda version are used to build the two CUDA PyTorch extensions? I was using pytorch 1.7.1, cuda 10.1 and gcc 5.5.0, but I got the error "torch/utils/cpp_extension.py", line 445, in unix_wrap_ninja_compile post_cflags = extra_postargs['cxx'] KeyError: 'cxx' when I tried to build the two CUDA PyTorch extensions. Any suggestions on how to solve the problem?

    opened by ZhaoyangLyu 2
  • Bug Report: mvp/extensions/mvpraymarch/bvh.cu error: too many initializer values

    Hi, I hit a bug after cd'ing to extensions/mvpraymarch and running the "make" command. Could someone kindly help me solve this problem?

    I use a remote server with: gcc (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0, g++ (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0, GNU Make 4.1, Ubuntu 16.04.7 LTS, Python 3.9.12, nvcc 10.2.

    The bug comes from a pointer assignment, as shown in the screenshot.

    Below is the output after running "make" in the extensions/mvpraymarch directory:

    python setup.py build_ext --inplace CUDA_HOME: /data/hzhangcc/cuda-10.2 CUDNN_HOME: None running build_ext building 'mvpraymarchlib' extension Emitting ninja build file /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) [1/2] /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 FAILED: /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu(214): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu(272): error: too many initializer values

    2 errors detected in the compilation of "/tmp/tmpxft_00003cf1_00000000-6_bvh.cpp1.ii". [2/2] /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 FAILED: /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(80): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(87): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(93): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(99): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(167): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(174): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(180): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(186): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_subset_kernel.h(30): warning: variable "validthread" was declared but never referenced

    8 errors detected in the compilation of "/tmp/tmpxft_00003cf2_00000000-6_mvpraymarch_kernel.cpp1.ii". ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1814, in _run_ninja_build subprocess.run( File "/data/hzhangcc/anaconda3/lib/python3.9/subprocess.py", line 528, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last): File "/data/hzhangcc/mvp/extensions/mvpraymarch/setup.py", line 13, in setup( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/init.py", line 87, in setup return distutils.core.setup(**attrs) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 148, in setup return run_commands(dist) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 163, in run_commands dist.run_commands() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands self.run_command(cmd) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/dist.py", line 1214, in run_command super().run_command(command) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 79, in run _build_ext.run(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run _build_ext.build_ext.run(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 339, in run self.build_extensions() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 771, in build_extensions build_ext.build_extensions(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions _build_ext.build_ext.build_extensions(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 448, in build_extensions self._build_extensions_serial() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 202, in build_extension _build_ext.build_extension(self, ext) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 528, in build_extension objects = self.compiler.compile(sources, File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 592, in unix_wrap_ninja_compile _write_ninja_file_and_compile_objects( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1493, in _write_ninja_file_and_compile_objects _run_ninja_build( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1830, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error compiling objects for extension makefile:2: recipe for target 'all' failed make: *** [all] Error 1

    opened by Wushanfangniuwa 1
  • Training data for MVP

    Hi, thank you for sharing your wonderful work! I was able to train on the Neural Volumes data. I was wondering if the training data for MVP will be released in the near future. Thank you!

    opened by weilunhuang-jhu 1
  • Missing example files?

    Hi, I noticed some discrepancies between the latest release and README.md / mvp/ARCHITECTURE.md. Are some example files missing, such as 'experiments/dryice1/experiment1/config.py' mentioned in README.md or 'experiments/example/config.py' mentioned in mvp/ARCHITECTURE.md? I followed the steps in README.md but cannot train or render the model.

    opened by faneggs 1
  • Training time

    Awesome Work! I have been a huge fan of your works since Neural Volumes. This work also seems very interesting!

    How long does it take for training?

    Thank you!

    opened by yeong5366 1
  • render.py not exporting the images for "render_rotate.mp4"

    Hi, thank you for sharing your great work! I'm trying to run the code from both the "neuralvolumes" and "mvp" repositories with the experiment data provided for each. "neuralvolumes" worked well, thank you!! I was able to run train.py and render.py from the "NeuralVolumes" code of your previous work and got the "prog_XXXXXX.jpg" image sequence and the "render_rotate.mp4" movie file. Then I tried "mvp" in the same environment; train.py ran successfully and produced the "prog_XXXXXX.jpg" image sequence, as well as the "log.txt", "optimparams.pt", and "aeparams.pt" files. But when I run render.py, the image sequence files that make up "render_rotate.mp4" are not exported to the /tmp/xxxxxxxxxx/ directory.

    The error message is [image2 @ 0x56400177b780] Could find no file with path '/tmp/5613023327/%06d.png' and index in the range 0-4 /tmp/5613023327/%06d.png: No such file or directory

    Do you have any information about this error?

    My environment is Ubuntu 20.04 LTS, Python 3.8.13, GCC 9.4.0, PyTorch 1.10.1+cu113, GPU A6000. The setup.py has been fixed regarding the CUDA arch in both the mvpraymarch and utils directories.

    opened by ppponpon 3
  • Understanding the opacity fade factor

    Thanks for sharing this awesome work. I have read the paper, and I have some questions about the opacity fade factor.

    1. Due to the fade factor, opacity falls off at the volume edges. Does this make the primitives' scales grow larger to cover the scene? (Perhaps the opposite of the volume minimization prior.)
    2. Are there any experiments to determine the parameters $\alpha$ and $\beta$?
    3. Does the fade factor pull the centers of the primitives closer to high-occupancy points?
    4. And what is the relationship between StyleGAN2 and the fade factor?

    Also, are there any suggestions for understanding the CUDA raymarching code? I haven't done any CUDA parallel programming before.

    opened by LSQsjtu 0
  • How do you reconstruct the mesh from images with different views?

    Hi, I noticed that in your dataset, the multi-view images of different frames of the same expression (same identity) have fixed camera parameters for mesh reconstruction. When I perform mesh reconstruction from multi-view images, the camera parameters estimated from different frames are all different. I was wondering how you fixed the camera parameters for mesh reconstruction.

    opened by LiTian0215 1
  • Cannot replicate experiments/neuralvolumes results: completely blurry output and vanishing kldiv

    Hi, could someone kindly help me? I cannot replicate the results in experiments/neuralvolumes. I have successfully built the extensions and downloaded experiments.zip from the latest release. However, the output images are completely blurry. I found that my kldiv term quickly vanishes to 0, while in the example log.txt file given in experiments.zip the kldiv term stays above 0.3.

    Below are my outputs after 79579, 92139, and 106682 iterations (prog_079579, prog_092139, prog_106682).

    Here I attach my log.txt file, which contains my configuration and training statistics.

    I'm not sure where the problem is. Incorrect camera poses? Or is there a bug in the code in this repository? I really hope someone can help. Thanks! :)

    opened by Wushanfangniuwa 2
  • How is the tracking mesh built?

    Hello, I have a new problem: I am trying to use video data of my own head to reconstruct a mesh and texture as the input and supervision for the network. I am using COLMAP to reconstruct a point cloud and retarget it to a public model with 7306 vertices, but the results are very poor. Is there a recommended method for building the tracked mesh?

    opened by Luh1124 6
Releases (v0.1)
  • v0.1 (Jan 6, 2022)

    This is an initial release of the Mixture of Volumetric Primitives code. The release includes code for training, rendering, and evaluation, along with a pretrained MVP model. Note that training data for MVP is not included with this release, but will be released in the future. To give an example of how to use the training code, training data and a training configuration file for Neural Volumes are included.

    Source code (tar.gz)
    Source code (zip)
    experiments.zip (1069.70 MB)
Owner
Meta Research