Kaolin: A PyTorch Library for Accelerating 3D Deep Learning Research

Overview

NVIDIA Kaolin library provides a PyTorch API for working with a variety of 3D representations and includes a growing collection of GPU-optimized operations such as modular differentiable rendering, fast conversions between representations, data loading, 3D checkpoints and more.

Kaolin library is part of a larger suite of tools for 3D deep learning research. For example, the Omniverse Kaolin App will allow interactive visualization of 3D checkpoints. To find out more about the Kaolin ecosystem, visit the NVIDIA Kaolin Dev Zone page.

Installation and Getting Started

Visit the Kaolin Library Documentation to get started!
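
As a quick, hedged taste of the API (a minimal sketch, not an official example: it assumes a CUDA build of Kaolin, a local model.obj file, and the kaolin.io.obj.import_mesh and kaolin.ops.mesh.sample_points functions referenced in the release notes below):

    # Minimal sketch: load a mesh and sample a surface point cloud.
    import torch
    import kaolin

    mesh = kaolin.io.obj.import_mesh('model.obj')      # assumed path; returns vertices, faces, ...
    vertices = mesh.vertices.cuda().unsqueeze(0)       # add a batch dimension: (1, V, 3)
    faces = mesh.faces.cuda()                          # (F, 3), long

    # Sample 2048 points on the mesh surface (batched op).
    points, face_idx = kaolin.ops.mesh.sample_points(vertices, faces, num_samples=2048)
    print(points.shape)                                # torch.Size([1, 2048, 3])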

About this Update

With the version 0.9 release we revamped the entire Kaolin library: we redesigned the API, rewrote and optimized operations, and removed unreliable or outdated code. Although this may appear to be a smaller library than our original release, test-driven development of Kaolin >= 0.9 ensures reliable functionality and timely updates from now on. See the change logs for details.

Contributing

Please review our contribution guidelines.

Citation

If you find this codebase useful in your research, please cite:

@article{Kaolin,
title = {Kaolin: A PyTorch Library for Accelerating 3D Deep Learning Research},
author = {Krishna Murthy Jatavallabhula and Edward Smith and Jean-Francois Lafleche and Clement Fuji Tsang and Artem Rozantsev and Wenzheng Chen and Tommy Xiang and Rev Lebaredian and Sanja Fidler},
journal = {arXiv:1911.05063},
year = {2019}
}

Contributors

Current Team:

  • Project Lead: Clement Fuji Tsang
  • Jean-Francois Lafleche
  • Charles Loop
  • Masha Shugrina
  • Towaki Takikawa
  • Jiehan Wang

Other Major Contributors:

  • Wenzheng Chen
  • Sanja Fidler
  • Jason Gorski
  • Rev Lebaredian
  • Jianing Li
  • Michael Li
  • Krishna Murthy
  • Artem Rozantsev
  • Frank Shen
  • Edward Smith
  • Gavriel State
  • Tommy Xiang
Comments
  • [Windows] cl fails with exit code 2

    Hi! I have been trying to compile on Windows with the following system information:

    nvcc version

    PS C:\users\{OMITTED}\Downloads\kaolin-0.1\kaolin-0.1> nvcc --version
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2020 NVIDIA Corporation
    Built on Tue_Sep_15_19:12:04_Pacific_Daylight_Time_2020
    Cuda compilation tools, release 11.1, V11.1.74
    Build cuda_11.1.relgpu_drvr455TC455_06.29069683_0
    PS C:\users\{OMITTED}\Downloads\kaolin-0.1\kaolin-0.1>
    

    python version:

    PS C:\users\{OMITTED}\Downloads\kaolin-0.1\kaolin-0.1> python --version
    Python 3.6.7
    

    and CUDA version:

    >>> import torch
    >>> torch.version.cuda
    '11.1'
    >>>
    

    When I run python setup.py develop, the build fails with exit code 2 on the cl binary. cl is set in my env variables as: C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\bin\Hostx64\x64\cl.exe

    The error logs are as follows:

    PS C:\users\keert\Downloads\kaolin-0.1\kaolin-0.1> python setup.py develop
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1 cuda home
    WARNING - Kaolin is tested with PyTorch >=1.2.0, <1.5.0 Found version 1.9.1+cu111 instead.
    WARNING - Kaolin is tested with torchvision >=0.4.0, <0.6.0 Found version 0.10.1+cu111 instead.
    Building nv-usd...
    '.' is not recognized as an internal or external command,
    operable program or batch file.
    running develop
    Checking .pth file support in C:\Users\keert\AppData\Local\Programs\Python\Python36\Lib\site-packages\
    C:\Users\keert\AppData\Local\Programs\Python\Python36\pythonw.exe -E -c pass
    TEST PASSED: C:\Users\keert\AppData\Local\Programs\Python\Python36\Lib\site-packages\ appears to support .pth files
    running egg_info
    writing kaolin.egg-info\PKG-INFO
    writing dependency_links to kaolin.egg-info\dependency_links.txt
    writing requirements to kaolin.egg-info\requires.txt
    writing top-level names to kaolin.egg-info\top_level.txt
    reading manifest file 'kaolin.egg-info\SOURCES.txt'
    writing manifest file 'kaolin.egg-info\SOURCES.txt'
    running build_ext
    C:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\utils\cpp_extension.py:306: UserWarning: Error checking compiler version for cl: Command 'cl' returned non-zero exit status 2.
      warnings.warn(f'Error checking compiler version for {compiler}: {error}')
    building 'kaolin.cuda.load_textures' extension
    C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MT -IC:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include -IC:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\TH -IC:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\include" -IC:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\numpy\core\include -IC:\Users\keert\AppData\Local\Programs\Python\Python36\include -IC:\Users\keert\AppData\Local\Programs\Python\Python36\include -IC:\Users\keert\AppData\Local\Programs\Python\Python36\include -IC:\Users\keert\AppData\Local\Programs\Python\Python36\include "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\cppwinrt" /EHsc /Tpkaolin/cuda/load_textures_cuda.cpp /Fobuild\temp.win-amd64-3.6\Release\kaolin/cuda/load_textures_cuda.obj /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0
    cl : Command line warning D9025 : overriding '/MT' with '/MD'
    cl : Command line warning D9024 : unrecognized source file type 'C:\Program', object file assumed
    cl : Command line warning D9027 : source file 'C:\Program' ignored
    cl : Command line warning D9024 : unrecognized source file type 'Files\Microsoft', object file assumed
    cl : Command line warning D9027 : source file 'Files\Microsoft' ignored
    cl : Command line warning D9024 : unrecognized source file type 'Visual', object file assumed
    cl : Command line warning D9027 : source file 'Visual' ignored
    cl : Command line warning D9024 : unrecognized source file type 'Studio\2022\Community\VC\Tools\MSVC\14.33.31629\bin\Hostx64\x64\cl.exe', object file assumed
    cl : Command line warning D9027 : source file 'Studio\2022\Community\VC\Tools\MSVC\14.33.31629\bin\Hostx64\x64\cl.exe' ignored
    load_textures_cuda.cpp
    C:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\c10/util/Optional.h(183): warning C4624: 'c10::constexpr_storage_t<T>': destructor was implicitly defined as deleted
            with
            [
                T=at::Tensor
            ]
    C:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\c10/util/Optional.h(367): note: see reference to class template instantiation 'c10::constexpr_storage_t<T>' being compiled
            with
            [
                T=at::Tensor
            ]
    C:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\c10/util/Optional.h(427): note: see reference to class template instantiation 'c10::trivially_copyable_optimization_optional_base<T>' being compiled
            with
            [
                T=at::Tensor
            ]
    C:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\c10/util/Optional.h(427): note: see reference to alias template instantiation 'c10::OptionalBase<T>' being compiled
            with
            [
                T=at::Tensor
            ]
    C:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\ATen/core/TensorBody.h(734): note: see reference to class template instantiation 'c10::optional<at::Tensor>' being compiled
    C:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\c10/util/Optional.h(395): warning C4624: 'c10::trivially_copyable_optimization_optional_base<T>': destructor was implicitly defined as deleted
            with
            [
                T=at::Tensor
            ]
    C:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\c10/util/Optional.h(183): warning C4624: 'c10::constexpr_storage_t<T>': destructor was implicitly defined as deleted
            with
            [
                T=at::Generator
            ]
    C:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\c10/util/Optional.h(367): note: see reference to class template instantiation 'c10::constexpr_storage_t<T>' being compiled
            with
            [
                T=at::Generator
            ]
    C:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\c10/util/Optional.h(427): note: see reference to class template instantiation 'c10::trivially_copyable_optimization_optional_base<T>' being compiled
            with
            [
                T=at::Generator
            ]
    C:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\c10/util/Optional.h(427): note: see reference to alias template instantiation 'c10::OptionalBase<T>' being compiled
            with
            [
                T=at::Generator
            ]
    ...
    ...
    ...
    ...
    ...        with
            [
                T=std::vector<at::Tensor,std::allocator<at::Tensor>>
            ]
    C:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
    C:\Users\keert\AppData\Local\Programs\Python\Python36\include\pyerrors.h(489): note: see previous definition of 'HAVE_SNPRINTF'
    kaolin/cuda/load_textures_cuda.cpp(46): warning C4996: 'at::Tensor::type': Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device().
    kaolin/cuda/load_textures_cuda.cpp(47): warning C4996: 'at::Tensor::type': Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device().
    kaolin/cuda/load_textures_cuda.cpp(48): warning C4996: 'at::Tensor::type': Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device().
    kaolin/cuda/load_textures_cuda.cpp(49): warning C4996: 'at::Tensor::type': Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device().
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\bin\nvcc.exe -c kaolin/cuda/load_textures_cuda_kernel.cu -o build\temp.win-amd64-3.6\Release\kaolin/cuda/load_textures_cuda_kernel.obj -IC:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include -IC:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\TH -IC:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\include" -IC:\Users\keert\AppData\Local\Programs\Python\Python36\lib\site-packages\numpy\core\include -IC:\Users\keert\AppData\Local\Programs\Python\Python36\include -IC:\Users\keert\AppData\Local\Programs\Python\Python36\include -IC:\Users\keert\AppData\Local\Programs\Python\Python36\include -IC:\Users\keert\AppData\Local\Programs\Python\Python36\include "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\cppwinrt" -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --use-local-env
    error: command 'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.1\\bin\\nvcc.exe' failed with exit status 2
    

    Would really appreciate some help on this! Thanks

    opened by theskcd 26
  • cannot import name '_C' from 'kaolin'

    I want to use it on a 2090 GPU, so I used kaolin v0.11.0, but it reported the error "ImportError: cannot import name '_C' from 'kaolin'". The relevant configuration is: Python 3.7.13; torch 1.8.1+cu111. I also tried to install 0.11.0 and 0.12.0 manually, but got the error "FileNotFoundError: [Errno 2] No such file or directory: '/home/user/anaconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/cuda/bin:/usr/local/cuda/bin/nvcc': '/home/user/anaconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/cuda/bin:/usr/local/cuda/bin/nvcc'". But when I run nvcc -V, I get the following result:

    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2020 NVIDIA Corporation
    Built on Tue_Sep_15_19:10:02_PDT_2020
    Cuda compilation tools, release 11.1, V11.1.74
    Build cuda_11.1.TC455_06.29069683_0

    opened by surheaven 15
  • RuntimeError: Error compiling objects for extension during installation of kaolin

    Hi, I followed the steps provided in the README. I created a virtual environment and installed the latest version of PyTorch. When I execute python setup.py build_ext --inplace or python setup.py install, I encounter the following errors. I have tried both Python 3.6 and Python 3.7; the results are the same. My CUDA version is 10.2, PyTorch is 1.5, NumPy is 1.18, ninja version is 1.10. The OS is Ubuntu 18.04. Thanks for your help!

    [2/2] /usr/local/cuda/bin/nvcc -I/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/include -I/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/include/TH -I/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/site-packages/numpy/core/include -I/home/yyyeh/anaconda3/envs/kaolin/include/python3.7m -c -c /home/yyyeh/library/kaolin/kaolin/cuda/load_textures_cuda_kernel.cu -o /home/yyyeh/library/kaolin/build/temp.linux-x86_64-3.7/kaolin/cuda/load_textures_cuda_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -Wno-deprecated-declarations -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14
    ninja: build stopped: subcommand failed.
    Traceback (most recent call last):
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1400, in _run_ninja_build
        check=True)
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/subprocess.py", line 512, in run
        output=stdout, stderr=stderr)
    subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "setup.py", line 300, in
        ext_modules=get_extensions(),
      File "setup.py", line 230, in
        cmdclass={'build_ext': KaolinBuildExtension}
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/site-packages/setuptools/__init__.py", line 144, in setup
        return distutils.core.setup(**attrs)
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/distutils/dist.py", line 966, in run_commands
        self.run_command(cmd)
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 87, in run
        _build_ext.run(self)
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/distutils/command/build_ext.py", line 340, in run
        self.build_extensions()
      File "setup.py", line 89, in build_extensions
        super().build_extensions()
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 580, in build_extensions
        build_ext.build_extensions(self)
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
        self._build_extensions_serial()
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
        self.build_extension(ext)
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 208, in build_extension
        _build_ext.build_extension(self, ext)
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
        depends=ext.depends)
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 423, in unix_wrap_ninja_compile
        with_cuda=with_cuda)
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1140, in _write_ninja_file_and_compile_objects
        error_prefix='Error compiling objects for extension')
      File "/home/yyyeh/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1413, in _run_ninja_build
        raise RuntimeError(message)
    RuntimeError: Error compiling objects for extension

    install-issues 
    opened by yuyingyeh 15
  • build error: pybind11/cast.h & pybind11.h

    Dear All, I got a build error in pybind11 when running python setup.py install. The errors are as follows:

    torch/include/pybind11/pytypes.h:1205:395: error: template argument 2 is invalid
    torch/include/pybind11/pytypes.h:1205:397: error: template argument 1 is invalid
    torch/include/pybind11/pytypes.h:1205:397: error: template argument 2 is invalid
    torch/include/pybind11/pytypes.h:1205:412: error: template argument 1 is invalid
    /root/miniconda3/envs/kaolin/lib/python3.6/site-packages/torch/include/pybind11/cast.h:776:149: error: expansion pattern ‘std::is_copy_constructible<_Tp>::value’ contains no argument packs
    .../torch/include/pybind11/pybind11.h:1471:131: error: no matching function for call to ‘pybind11::cpp_function::cpp_function(pybind11::detail::enum_base::init(bool, bool)::<lambda(pybind11::object, pybind11::object)>, pybind11::is_method)’
    ...
    pybind11/cast.h:2108:44: error: no matching function for call to ‘collect_arguments(pybind11::object&)’

    I have tried building pybind11 separately, and it's OK; I don't know what's wrong with pybind11 when combined with kaolin. Env: Ubuntu 16.04, CUDA Version 10.1, cudnn: 9.0.

    Anybody know? Thanks very much!

    bug 
    opened by zizhao 13
  • ModuleNotFoundError: No module named 'kaolin.nnsearch'

    I'm using pytorch:19.10-py3 docker image (docker pull nvcr.io/nvidia/pytorch:19.10-py3). I get the following error: .......

    Finished processing dependencies for kaolin==0.2.0+a76a004
    root@d404b78867d9:/workspace/kaolin# python
    Python 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) [GCC 7.3.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.

    import kaolin as kal
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/workspace/kaolin/kaolin/__init__.py", line 17, in <module>
        from kaolin import conversions
      File "/workspace/kaolin/kaolin/conversions/__init__.py", line 1, in <module>
        from kaolin.conversions.meshconversions import *
      File "/workspace/kaolin/kaolin/conversions/meshconversions.py", line 22, in <module>
        from kaolin.metrics.point import directed_distance as directed_distance
      File "/workspace/kaolin/kaolin/metrics/__init__.py", line 2, in <module>
        from .point import *
      File "/workspace/kaolin/kaolin/metrics/point.py", line 16, in <module>
        from kaolin.nnsearch import nnsearch
    ModuleNotFoundError: No module named 'kaolin.nnsearch'

    opened by Shabayek 13
  • ModuleNotFoundError: No module named 'pxr'

    Hello, when I run this command as described in the documentation (with a space between "kaolin/" and "tests"):

    pytest --cov=kaolin/ tests

    I get the attached output. Any hints on what I am missing? Other than: "Hint: make sure your test modules/packages have valid Python names."

    Thanks

    Pablo

    `========================================================================================================== test session starts ========================================================================================================== platform linux -- Python 3.6.8, pytest-5.3.0, py-1.8.0, pluggy-0.13.0 rootdir: /home/pabs/PycharmProjects/kaolin plugins: cov-2.8.1 collected 114 items / 2 errors / 112 selected
    Coverage.py warning: No data was collected. (no-data-collected) WARNING: Failed to generate report: No data to report.

    /home/pabs/PycharmProjects/kaolin/venv/lib/python3.6/site-packages/pytest_cov-2.8.1-py3.6.egg/pytest_cov/plugin.py:254: PytestWarning: Failed to generate report: No data to report.

    self.cov_controller.finish()

    ================================================================================================================ ERRORS ================================================================================================================= ____________________________________________________________________________________________ ERROR collecting tests/datasets/test_usdfile.py ____________________________________________________________________________________________ ImportError while importing test module '/home/pabs/PycharmProjects/kaolin/tests/datasets/test_usdfile.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: tests/datasets/test_usdfile.py:19: in from kaolin.datasets.usdfile import USDMeshes /home/pabs/PycharmProjects/kaolin/venv/lib/python3.6/site-packages/kaolin-0.2.0+f858c58-py3.6-linux-x86_64.egg/kaolin/datasets/usdfile.py:19: in ??? E ModuleNotFoundError: No module named 'pxr' ___________________________________________________________________________________________ ERROR collecting tests/visualize/test_vis_usd.py ____________________________________________________________________________________________ ImportError while importing test module '/home/pabs/PycharmProjects/kaolin/tests/visualize/test_vis_usd.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: tests/visualize/test_vis_usd.py:8: in from kaolin.visualize.vis_usd import VisUsd /home/pabs/PycharmProjects/kaolin/venv/lib/python3.6/site-packages/kaolin-0.2.0+f858c58-py3.6-linux-x86_64.egg/kaolin/visualize/vis_usd.py:24: in ??? E ModuleNotFoundError: No module named 'pxr' =========================================================================================================== warnings summary ============================================================================================================ venv/lib/python3.6/site-packages/kaolin-0.2.0+f858c58-py3.6-linux-x86_64.egg/kaolin/nnsearch.py:3 /home/pabs/PycharmProjects/kaolin/venv/lib/python3.6/site-packages/kaolin-0.2.0+f858c58-py3.6-linux-x86_64.egg/kaolin/nnsearch.py:3: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses

    venv/lib/python3.6/site-packages/kaolin-0.2.0+f858c58-py3.6-linux-x86_64.egg/kaolin/datasets/shapenet.py:861 venv/lib/python3.6/site-packages/kaolin-0.2.0+f858c58-py3.6-linux-x86_64.egg/kaolin/datasets/shapenet.py:861 /home/pabs/PycharmProjects/kaolin/venv/lib/python3.6/site-packages/kaolin-0.2.0+f858c58-py3.6-linux-x86_64.egg/kaolin/datasets/shapenet.py:861: DeprecationWarning: invalid escape sequence *

    -- Docs: https://docs.pytest.org/en/latest/warnings.html

    ----------- coverage: platform linux, python 3.6.8-final-0 -----------

    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! ===================================================================================================== 3 warnings, 2 errors in 4.99s`

    documentation 
    opened by pbermell 10
  • RuntimeError: cuda runtime error (100) : no CUDA-capable device is detected

    Hey,

    great work with the library!

    I am trying to install it, but I am getting a CUDA error. I have been using PyTorch with the GPUs without problems until now.

    The full line reads: RuntimeError: cuda runtime error (100) : no CUDA-capable device is detected at /opt/conda/conda-bld/pytorch_1570910687650/work/aten/src/THC/THCGeneral.cpp:50

    I am using Python 3.7.3 and PyTorch 1.3.

    The output of nvidia-smi is:

    Fri Nov 15 16:13:25 2019
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 418.56       Driver Version: 418.56       CUDA Version: 10.1     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX TIT...   On  | 00000000:04:00.0 Off |                  N/A |
    |  22%   41C    P8   18W / 250W |     11MiB / 12212MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  GeForce GTX TIT...   On  | 00000000:06:00.0 Off |                  N/A |
    |  22%   38C    P8   17W / 250W |     11MiB / 12212MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   2  GeForce GTX 108...   On  | 00000000:07:00.0 Off |                  N/A |
    |  31%   34C    P8    8W / 250W |   1283MiB / 11178MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   3  GeForce GTX 108...   On  | 00000000:08:00.0 Off |                  N/A |
    |  31%   33C    P8    8W / 250W |     10MiB / 11178MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   4  TITAN X (Pascal)     On  | 00000000:0C:00.0 Off |                  N/A |
    |  23%   35C    P8    8W / 250W |     10MiB / 12196MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   5  TITAN X (Pascal)     On  | 00000000:0E:00.0 Off |                  N/A |
    |  23%   30C    P8    8W / 250W |     10MiB / 12196MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    2     14534      C   ...zzi/anaconda3/envs/tmtm/bin/python3.7    1273MiB |
    +-----------------------------------------------------------------------------+

    Any suggestions?

    opened by VictorZuanazzi 10
  • Cannot import kaolin.metrics.mesh etc...

    Hi! Thanks for developing this tool!

    I have been trying to install kaolin in my Linux docker. Now the installation is complete, I can see the info through pip show kaolin, and I can successfully import kaolin. However, no other imports succeed. All of the following imports fail with "No module named 'xxx'":

    import kaolin.rep, import kaolin.metrics.mesh, kaolin.cuda...

    Also, when I install kaolin through python setup.py develop, it gives the error "No local packages or working download links found for usd-core==20.11". I tried to install usd-core from a manually downloaded package, but that also gives me the error "usd_core-20.11-cp36-none-manylinux2014_x86_64.whl is not a supported wheel on this platform". So I commented out the line requirements.append('usd-core==20.11') in setup.py and finished installing. I am not sure whether this is the problem.

    Thank you so much for your time and patience! Looking forward to hearing from you!

    opened by Tonight1121 9
  • ImportError: undefined symbol: _ZN3c1011CPUTensorIdEv

    Env: PyTorch 1.3.1, CUDA 10.1, Python 3.6.9, Ubuntu 18.04, GPU: GTX 960M, driver version 435.

    During import kaolin, the following import error is reported: builtins.ImportError: /home/xxx/.cache/Python-Eggs/kaolin-0.2.0+ed13273-py3.6-linux-x86_64.egg-tmp/kaolin/cuda/sided_distance.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN3c1011CPUTensorIdEv

    Has anyone run into this problem? How can it be solved?

    opened by kobeyuan 9
  • build error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1

    Repost (seems like the project is reset)

    I faced this error with PyTorch 1.2.0, CUDA 10.0 and gcc 5.4.0. If you face the same error, try modifying #include <torch/extension.h> to #include <torch/types.h> in the following files (*.h and *.cu files only):

    kaolin/cuda/util.h
    kaolin/cuda/cuda_util.h
    kaolin/cuda/mesh_intersection_cuda.cu
    kaolin/cuda/sided_distance_cuda.cu
    
    install-issues 
    opened by hubert0527 9
  • Error during setup (Windows)

    My environment: CUDA 11.3, Python 3.7.12, torch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113, GPU: GTX 1060,
    NVIDIA-SMI 526.98, Driver Version: 526.98, CUDA Version: 12.0, Windows 10. I still have errors and have been working on this for two days.

    Could you help me?

    opened by yyyqwq 8
  • ImportError Upon Trying To Use dmtet_tutorial.ipynb

    I have cloned the kaolin repo and am trying to get the dmtet_tutorial Jupyter notebook to work. I created a conda virtual environment and followed all the setup steps for kaolin before trying to run the notebook. The notebook requires kaolin to be imported as a package, so I installed that as well. When I run the first cell in the notebook (see first image), I get a weird import error:

    ImportError                               Traceback (most recent call last)
    Cell In[2], line 2
          1 import torch
    ----> 2 import kaolin
          3 import numpy as np
          4 from dmtet_network import Decoder

    File ~/.conda/envs/cent7/2020.11-py38/pointe/lib/python3.8/site-packages/kaolin/__init__.py:1
    ----> 1 from . import io
          2 from . import metrics
          3 from . import ops

    File ~/.conda/envs/cent7/2020.11-py38/pointe/lib/python3.8/site-packages/kaolin/io/__init__.py:5
          3 from . import obj
          4 from . import off
    ----> 5 from . import render
          6 from . import shapenet
          7 from . import usd

    File ~/.conda/envs/cent7/2020.11-py38/pointe/lib/python3.8/site-packages/kaolin/io/render.py:23
         21 import numpy as np
         22 from PIL import Image
    ---> 23 from ..render.camera import generate_perspective_projection
         26 def import_synthetic_view(root_dir, idx, rgb=True, depth_linear=False,
    ...
    ---> 17 from kaolin import _C
         19 class _TileToPackedCuda(torch.autograd.Function):
         20     """torch.autograd.function wrapper for :func:tile_to_packed CUDA implementations"""

    ImportError: /home/nkwade/.conda/envs/cent7/2020.11-py38/pointe/lib/python3.8/site-packages/kaolin/_C.so: undefined symbol: _ZNK3c1010TensorImpl36is_contiguous_nondefault_policy_implENS_12MemoryFormatE

    I think this is beyond my knowledge, and this is the first ever GitHub issue I've published, so please let me know if I did anything wrong! Thanks in advance!

    opened by nkwade 1
  • trilinear interp gradients by coords computation error (shape mismatch)

    Hi! I recently tried out the changes from a commit on kaolin that added the jacobian for trilinear interpolation w.r.t. coords (https://github.com/NVIDIAGameWorks/kaolin/commit/17491c8e74c4b4a23107e0dc2b67c19c6a683c85) for a kaolin-wisp nglod model and ran into a shape mismatch error when computing grad_coords = grad_output @ grad_fout_by_xyz. The error was "The size of tensor a (133232) must match the size of tensor b (15148) at non-singleton dimension 0". It seems to be due to grad_output having dim 0 as the number of coords, while grad_fout_by_xyz has dim 0 as the number of intersected cells. The test function from the same commit (test_interpolate_trilinear_by_coords_backward(self, points)) does not crash since all cells are intersected.
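
    For reference, a minimal sketch (with made-up shapes chosen to mirror the numbers in the error, not the actual kaolin-wisp tensors) that reproduces the same broadcasting failure:

    # Illustrative only: assumed shapes, not the real tensors from the commit above.
    import torch

    grad_output = torch.randn(133232, 1, 16)       # dim 0 = number of coords
    grad_fout_by_xyz = torch.randn(15148, 16, 3)   # dim 0 = number of intersected cells

    try:
        grad_coords = grad_output @ grad_fout_by_xyz
    except RuntimeError as err:
        print(err)  # "The size of tensor a (133232) must match the size of tensor b (15148) at non-singleton dimension 0"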

    opened by Salarios77 0
  • Bump minimist from 1.2.5 to 1.2.7

    Bumps minimist from 1.2.5 to 1.2.7.

    Changelog

    Sourced from minimist's changelog.

    v1.2.7 - 2022-10-10

    Commits

    • [meta] add auto-changelog 0ebf4eb
    • [actions] add reusable workflows e115b63
    • [eslint] add eslint; rules to enable later are warnings f58745b
    • [Dev Deps] switch from covert to nyc ab03356
    • [readme] rename and add badges 236f4a0
    • [meta] create FUNDING.yml; add funding in package.json 783a49b
    • [meta] use npmignore to autogenerate an npmignore file f81ece6
    • Only apps should have lockfiles 56cad44
    • [Dev Deps] update covert, tape; remove unnecessary tap 49c5f9f
    • [Tests] add aud in posttest 228ae93
    • [meta] add safe-publish-latest 01fc23f
    • [meta] update repo URLs 6b164c7

    v1.2.6 - 2022-03-21

    Commits

    • test from prototype pollution PR bc8ecee
    • isConstructorOrProto adapted from PR c2b9819
    • security notice for additional prototype pollution issue ef88b93
    Commits
    • c590d75 v1.2.7
    • 0ebf4eb [meta] add auto-changelog
    • e115b63 [actions] add reusable workflows
    • 01fc23f [meta] add safe-publish-latest
    • f58745b [eslint] add eslint; rules to enable later are warnings
    • 228ae93 [Tests] add aud in posttest
    • 236f4a0 [readme] rename and add badges
    • ab03356 [Dev Deps] switch from covert to nyc
    • 49c5f9f [Dev Deps] update covert, tape; remove unnecessary tap
    • 783a49b [meta] create FUNDING.yml; add funding in package.json
    • Additional commits viewable in compare view
    Maintainer changes

    This version was pushed to npm by ljharb, a new releaser for minimist since your current version.


    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • How to customize dataset and use in Kaolin?

    Hello, I am new to Kaolin and am now trying to figure out how to use my own image set to run the DIB-R differentiable renderer.

    I see that the tutorial uses samples in three formats (png, metadata & npy). I only have the png images; is there any way to create the remaining metadata & npy files by code?

    opened by auk003150 1
  • WinError 2 occurs when I run "python setup.py develop"

    While installing kaolin, I got this error message in my Windows cmd terminal. Does anyone know how to solve this problem?

    I ran "python setup.py develop" in cmd and got the message below.

    I also did cd kaolin to move to the working directory first. Please help me with my research!

    Traceback (most recent call last):
      File "C:\Users\user\kaolin\setup.py", line 300, in
        ext_modules=get_extensions(),
      File "C:\Users\user\kaolin\setup.py", line 218, in get_extensions
        include_dirs = get_include_dirs()
      File "C:\Users\user\kaolin\setup.py", line 270, in get_include_dirs
        _, bare_metal_major, _ = get_cuda_bare_metal_version(CUDA_HOME)
      File "C:\Users\user\kaolin\setup.py", line 105, in get_cuda_bare_metal_version
        raw_output = subprocess.check_output([cuda_dir + "/bin/nvcc", "-V"], universal_newlines=True)
      File "C:\Users\user\anaconda3\envs\nefr\lib\subprocess.py", line 424, in check_output
        return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
      File "C:\Users\user\anaconda3\envs\nefr\lib\subprocess.py", line 505, in run
        with Popen(*popenargs, **kwargs) as process:
      File "C:\Users\user\anaconda3\envs\nefr\lib\subprocess.py", line 951, in __init__
        self._execute_child(args, executable, preexec_fn, close_fds,
      File "C:\Users\user\anaconda3\envs\nefr\lib\subprocess.py", line 1420, in _execute_child
        hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
    FileNotFoundError: [WinError 2] system cannot find the file specified

    opened by dopeornope-Lee 9
  • TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'

    I have followed the stable-version instructions on the installation page https://kaolin.readthedocs.io/en/stable/notes/installation.html, and I have this issue:

    C:\Users\miche\kaolin>python setup.py develop
    setup.py:37: UserWarning: Kaolin is compatible with PyTorch >=1.5.0, <=1.11.0, but found version 1.12.1+cu113. Continuing with the installed version as IGNORE_TORCH_VER is set.
      f'Kaolin is compatible with PyTorch >={TORCH_MIN_VER}, <={TORCH_MAX_VER}, '
    Traceback (most recent call last):
      File "setup.py", line 300, in
        ext_modules=get_extensions(),
      File "setup.py", line 218, in get_extensions
        include_dirs = get_include_dirs()
      File "setup.py", line 270, in get_include_dirs
        _, bare_metal_major, _ = get_cuda_bare_metal_version(CUDA_HOME)
      File "setup.py", line 105, in get_cuda_bare_metal_version
        raw_output = subprocess.check_output([cuda_dir + "/bin/nvcc", "-V"], universal_newlines=True)
    TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
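
    The TypeError indicates that CUDA_HOME resolved to None inside setup.py, so cuda_dir + "/bin/nvcc" fails. A small hypothetical diagnostic (not part of Kaolin) to confirm this from the same environment before retrying the install:

    # Hypothetical check: prints the same CUDA lookup that PyTorch C++ extensions use.
    import os
    from torch.utils.cpp_extension import CUDA_HOME

    print("CUDA_HOME:", CUDA_HOME)                        # None means the CUDA toolkit was not found
    print("CUDA_PATH env:", os.environ.get("CUDA_PATH"))
    if CUDA_HOME is None:
        # On Windows, pointing CUDA_PATH (or CUDA_HOME) at the toolkit install, e.g.
        #   C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3
        # and reopening the terminal usually resolves this.
        raise SystemExit("CUDA toolkit not found: set CUDA_PATH/CUDA_HOME and retry")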

    Please review and help with this issue. Thanks a lot to anyone who can solve it.

    opened by auk003150 3
Releases (v0.12.0)
  • v0.12.0(Aug 9, 2022)

    Changelogs:

    Summary:

    With version 0.12.0 we have added a Camera API that can be used with all our renderers and supports multiple coordinate systems.
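
    A hedged sketch of what the new API looks like (the from_args constructor and keyword names below follow the Camera recipes and are assumptions; they may differ slightly in your version):

    # Minimal Camera API sketch; constructor and attribute names assumed from the recipes.
    import math
    import torch
    from kaolin.render.camera import Camera

    camera = Camera.from_args(
        eye=torch.tensor([0.0, 0.0, 3.0]),   # camera position
        at=torch.tensor([0.0, 0.0, 0.0]),    # look-at point
        up=torch.tensor([0.0, 1.0, 0.0]),    # up vector
        fov=math.radians(45.0),              # vertical field of view, in radians
        width=512, height=512,
        device='cuda',
    )
    print(camera.view_matrix().shape, camera.projection_matrix().shape)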

    Check out our new tutorials:

    Features:

    • Added Camera API

    Bugfix:

    • Fixed a bug in kaolin.ops.mesh.check_sign when the point is aligned with a vertex or an edge
    • Fixed a bug in some ops when using a 2nd GPU

    Tutorials:

    • Added a bunch of recipes for the Camera API
    • Added a tutorial showing how to use the Camera API with nvdiffrast

    Contributors (by last name alphabetical order):

    • Sanja Fidler
    • Clement Fuji Tsang
    • Or Perel
    • Masha Shugrina
    • Towaki Takikawa
    • Jiehan Wang
    • Alexander Zook

    Source code(tar.gz)
    Source code(zip)
  • v0.11.0(Jun 15, 2022)

    Changelogs:

    Summary:

    In Kaolin 0.11.0 we are focusing on improving performance for our main renderers, strongly improving SPC raytracing and trilinear interpolation, and integrating nvdiffrast as a backend for DIB-R rasterization. We are also adding a few features such as tetrahedral and triangle mesh subdivision and support for heterogeneous meshes in the obj loader, as well as improving Dash3D usability.

    Finally, several tutorials and recipes are implemented for new users to quickly get a grasp on Kaolin features.

    Features:

    • Improved Dash3D usability (#538)
    • Improved SPC raytracing memory usage / speed (~12x less memory used / +33% on a test model at level 11) (#548)
    • Added tetrahedral mesh subdivision used in DMTet (#551)
    • Added triangle mesh subdivision used in DMTet (#562)
    • Allowed SPC unbatched_query to output parent nodes (#559)
    • Split rasterization and the DIB-R soft mask into two functions (#560)
    • Integrated nvdiffrast for rasterization (measured up to 640x faster on a large model from ShapeNet at 1024x1024 resolution) (#560)
    • Added support for heterogeneous meshes in the obj loader (#563)
    • Implemented a fused trilinear interpolation kernel with computation of the dual of the octree (#567)

    Tutorials:

    • Added a recipe to convert a point cloud to SPC (#535)
    • Added a recipe with a basic explanation of SPC's octree (#535)
    • Added an example of fitting a 3D bounding box using the differentiable renderer (#543)
    • Added a recipe to compute occupancy using check_sign (#564)
    • Showed the backend keyword for rasterization in the DIB-R tutorial (#569)
    • Added a recipe for the dual octree and trilinear_interpolation

    Bug fix:

    • Fixed the DIB-R tutorial after a normalization change in the Omniverse App (#532)
    • Fixed a uint type issue on Windows (#537)
    • Fixed an f_score reduction bug (#546)
    • Fixed an indexing bug in the Understanding SPC tutorial (#553)

    Contributors (by last name alphabetical order):

    • Sanja Fidler
    • Clement Fuji Tsang
    • Charles Loop
    • Or Perel
    • Frank Shen
    • Masha Shugrina
    • Gavriel State
    • Towaki Takikawa
    • Jiehan Wang
    • Alexander Zook

    Source code(tar.gz)
    Source code(zip)
  • v0.10.0(Feb 7, 2022)

    Changelogs:

    Summary

    In Kaolin 0.10.0 we are focusing on volumetric rendering: we add new features for tetrahedral meshes, including the DefTet volumetric renderer and losses and Deep Marching Tetrahedra, add new primitive operations for efficient volumetric rendering of Structured Point Clouds, and add support for materials in USD import.

    Finally, we are adding two new tutorials to show how to use the latest features of Kaolin:

    • How to use DMtet to reconstruct a mesh from point clouds generated by the Omniverse Kaolin App

    • An Introduction to Structured Point Clouds, with conversion from mesh and interactive visualization with raytracing.

    Features:

    • Simplified the kaolin.ops.spc.unbatched_query API (#442)
    • Added a point-to-vertex distance type in kaolin.metrics.trianglemesh.point_to_mesh_distance, with a small speedup (#446, #458)
    • Added a new “thin” mode for kaolin.ops.voxelgrids.extract_surface (#448)
    • Added Marching Tetrahedra (#476)
    • Extended SPC raytracing to output depth (#478)
    • Refactored the SPC raytracing API (#478)
    • Added differentiable volumetric rendering features for SPC (#485)
    • Added a “squared” option for the Chamfer distance (#486) (see the sketch below)
    • Added the DefTet volumetric renderer (#488)
    • Added DefTet losses (#496)
    • Added interpolation of features for point sampling on a mesh (#499)
    • Added a DMTet tutorial (#492)
    • Added an SPC tutorial (#500)
    • Added an unbatched_pointcloud_to_spc wrapper (#498)
    • Added materials support for the USD importers (#502)
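
    A short sketch of the new “squared” Chamfer option (the keyword name is assumed from the changelog entry above; shapes are illustrative):

    # Sketch only: compares squared vs. non-squared Chamfer distance on random point clouds.
    import torch
    import kaolin

    p1 = torch.rand(2, 1024, 3, device='cuda')    # batch of 2 point clouds
    p2 = torch.rand(2, 2048, 3, device='cuda')

    d_squared = kaolin.metrics.pointcloud.chamfer_distance(p1, p2, squared=True)
    d = kaolin.metrics.pointcloud.chamfer_distance(p1, p2, squared=False)
    print(d_squared.shape, d.shape)               # torch.Size([2]) torch.Size([2])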

    Bug fix:

    • Fixed small bugs in the USD importer / exporter (#441, #445)
    • Fixed trianglemesh_to_voxelgrids when sparse (#455)
    • Fixed a bug where Kaolin was not building with the CUB submodule (#457)
    • Fixed a preprocessing bug where "name" attributes contain "/" (#469)

    Misc:

    • Defined a proper C++ style guide and fine-tuned the codebase accordingly (#470, #471, #472, #477)

    Kaolin is now featured in:

    DIB-R, DefTet, DMTet, GradSim, NeuralLOD, Text2Mesh

    Contributors (by last name alphabetical order):

    • Sanja Fidler
    • Clement Fuji Tsang
    • Jun Gao
    • Jean-François Lafleche
    • Michael Li
    • Charles Loop
    • Or Perel
    • Frank Shen
    • Masha Shugrina
    • Gavriel State
    • Towaki Takikawa
    • Jiehan Wang
    • Alexander Zook
    • (github) Talmaj
    • (github) le-Greg

    Source code(tar.gz)
    Source code(zip)
  • v0.9.1(Oct 3, 2021)

    Changelogs:

    Summary

    The latest Kaolin release includes a new representation, Structured Point Clouds: an octree-based acceleration data structure with highly efficient convolution and ray-tracing capabilities. This representation is useful for neural implicit representations, popular in 3D DL research today and beyond, and powers the latest version of NeuralLOD training.

    The release also comes with extended support for 3D datasets like ModelNet / ShapeNet / SHREC, new utility functions to improve usability, and speedups on import / export of the USD files used in checkpoints. In this version, we also added Dash3D, a lightweight visualizer for quick visualization from a low-end configuration such as a laptop.

    New Additions

    Features

    • added pointclouds_to_voxelgrids (#361)
    • added support for non-fully connected mesh for uniform_laplacian (#380)
    • added mask_IoU on rendered images (#372)
    • added support for camera transform matrix (instead of just rotation / translation) (#372)
    • support for SHREC (#375)
    • support for colors in exporting point clouds in USD (#400)
    • support for UsdGeomPoints (#400)
    • support for .off (#367)
    • support for ModelNet (#382, #410)
    • added utility function for loading synthetic data from OV app (#372)
    • added material support for ShapeNet (#388)
    • added version support for ShapeNet (#399)

    Optimizations

    • accelerated USD import: ~10-5X (#421)
    • accelerated USD export: ~8-4X for exporting and timelapse (#422)
    • accelerated backward of index_by_face_vertices (#419)

    Bug fix:

    • fixed a bug where texture_mapping was removed when UVs are out-of-bounds; fixed some issues with ShapeNet and added support for bad models (#391, #411)

    Misc:

    • Allow users to install a PyTorch version outside of official support (#390)

    Contributors:

    • Clement Fuji Tsang
    • Masha Shugrina
    • Charles Loop
    • Towaki Takikawa
    • Jiehan Wang
    • Michael Li
    • @AndresCasado
    • @mjd3
    Source code(tar.gz)
    Source code(zip)
  • doctest(Feb 25, 2021)

  • v0.9.0(Dec 9, 2020)

    Changelogs:

    Highlights

    The Kaolin 0.9 release includes a reformatted API and various improvements to the performance and ergonomics of Kaolin. The reformat was required to keep Kaolin maintainable, clean, and reliable in the long term.

    Low level API

    The Mesh class contained too many attributes and methods that were too specific, unused, or redundant. Also, given how quickly the field can shift to new methods, a fixed class representation can be a constraint. We chose to focus on low-level functions with torch tensors as inputs / outputs, to favor reusability. High-level representations will be added later, once the common use cases become easier to define.

    Model Zoo

    A maintainable and reliable Kaolin means a more compact library. We decided to move the model zoo out of Kaolin; it will have a dedicated repository, will rely on Kaolin releases, and so will be maintained separately.

    Batching

    Kaolin is now fully batched: by default with a fixed topology, but also (with limited support) for heterogeneous structures using packed and padded approaches; see the documentation for more details. We intend to provide more primitive ops for heterogeneous structures.
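
    As a small illustration of the two layouts (plain PyTorch only; the exact Kaolin conventions are in the batching documentation):

    # Three point clouds of different sizes, batched two ways.
    import torch

    clouds = [torch.rand(n, 3) for n in (100, 250, 60)]

    # Padded: a single (B, N_max, 3) tensor plus the true lengths.
    n_max = max(c.shape[0] for c in clouds)
    padded = torch.zeros(len(clouds), n_max, 3)
    lengths = torch.tensor([c.shape[0] for c in clouds])
    for i, c in enumerate(clouds):
        padded[i, :c.shape[0]] = c

    # Packed: a single (sum(N_i), 3) tensor plus first-index offsets into it.
    packed = torch.cat(clouds, dim=0)
    first_idx = torch.cumsum(torch.tensor([0] + [c.shape[0] for c in clouds]), dim=0)

    print(padded.shape, packed.shape, lengths.tolist(), first_idx.tolist())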

    Optimizations

    We have mostly been focusing on GPU efficiency. Among the optimizations, speedups are reported on the following operations (a short usage sketch follows below):

    • kaolin.render.mesh.rasterization.dibr_rasterization(height, width, face_vertices_z, face_vertices_image, face_features, face_normals_z) (~1.35x faster).
    • GraphConv: added support for pre-normalization of the adjacency matrix, kaolin.ops.gcn.GraphConv(node_feat, adj, normalized_adj=False) (~1.85x faster).
    • kaolin.ops.mesh.check_sign(vertices, faces, points, hash_resolution): (~2.75x faster).
    • kaolin.ops.mesh.sample_points(vertices, faces, num_samples, areas): added support for pre-computed face areas (~1.6x faster)
    • kaolin.ops.conversions.voxelgrids_to_cubic_meshes(voxelgrids, is_trimesh) (~17x faster on cpu, >10000x faster on gpu)
    • kaolin.ops.voxelgrid.downsample(voxelgrids, scale) (~6.2x faster on cpu, ~25x faster on gpu)
    • kaolin.ops.voxelgrid.fill(voxelgrids) (~1.3x faster on cpu)
    • kaolin.ops.voxelgrid.extract_surface(voxelgrids) (~6.9x faster on cpu, ~37x faster on gpu)
    • kaolin.ops.voxelgrid.extract_odms(voxelgrids) (~250x faster on cpu, ~1276x faster on gpu)
    • kaolin.ops.voxelgrid.project_odms(odms, voxelgrids, votes) (~125x faster on cpu, ~882x faster on gpu)

    We added a CUDA implementation of Lorensen's marching cubes (used in kaolin.ops.conversions.voxelgrids_to_trianglemeshes(voxelgrids, iso_value)). We also added backpropagation to the triangle distance (used in kaolin.metrics.trianglemesh.point_to_mesh_distance(pointcloud, vertices, faces)) and the sided distance (used in kaolin.metrics.pointcloud.sided_distance(p1, p2), kaolin.metrics.pointcloud.chamfer_distance(p1, p2, w1, w2), and kaolin.metrics.pointcloud.f_score(gt_points, pred_points, radius, eps)).
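
    A minimal sketch combining some of the ops above on random batched tensors (the shapes and the random "mesh" are purely illustrative; check_sign expects a watertight mesh in practice):

    # Sketch using the batched signatures listed above.
    import torch
    import kaolin

    vertices = torch.rand(1, 500, 3, device='cuda')             # (B, V, 3)
    faces = torch.randint(0, 500, (900, 3), device='cuda')      # (F, 3), long
    points = torch.rand(1, 4096, 3, device='cuda')              # (B, P, 3) query points

    # Occupancy of the query points with respect to the mesh.
    sign = kaolin.ops.mesh.check_sign(vertices, faces, points)  # (B, P) bool

    # Sample a surface point cloud and compare it to the query points.
    sampled, _ = kaolin.ops.mesh.sample_points(vertices, faces, num_samples=2048)
    chamfer = kaolin.metrics.pointcloud.chamfer_distance(sampled, points)
    print(sign.shape, sampled.shape, chamfer.shape)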

    USD Visualization

    We now provide an importer and exporter for Universal Scene Description (USD) files; see the documentation for more information. You can open those files using the Omniverse companion app; see the Kaolin Dev page.

    Contributors

    In alphabetical order:

    • Wenzheng Chen
    • Sanja Fidler
    • Clement Fuji Tsang
    • Jason Gorski
    • Jean-Francois Lafleche
    • Rev Lebaredian
    • Jianing Li
    • Frank Shen
    • Masha Shugrina
    • Gavriel State
    • Jiehan Wang
    • Tommy Xiang

    Source code(tar.gz)
    Source code(zip)
  • v0.1(Nov 27, 2020)

Owner
NVIDIA GameWorks
NVIDIA Technologies for game and application developers