Nvdiffrast - Modular Primitives for High-Performance Differentiable Rendering

Overview

Nvdiffrast – Modular Primitives for High-Performance Differentiable Rendering

Teaser image

Modular Primitives for High-Performance Differentiable Rendering
Samuli Laine, Janne Hellsten, Tero Karras, Yeongho Seol, Jaakko Lehtinen, Timo Aila
http://arxiv.org/abs/2011.03277

Nvdiffrast is a PyTorch/TensorFlow library that provides high-performance primitive operations for rasterization-based differentiable rendering. Please refer to the nvdiffrast documentation for more information.
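
As a minimal usage sketch of the PyTorch API (the calls and tensor shapes follow the documented primitives, but the specific context type, resolution, and toy geometry below are illustrative choices, not part of this README):

import torch
import nvdiffrast.torch as dr

# Rasterization context; dr.RasterizeGLContext() is the OpenGL-based alternative.
glctx = dr.RasterizeCudaContext()

# Clip-space vertex positions [minibatch, num_vertices, 4] and int32 triangles [num_triangles, 3].
pos = torch.tensor([[[-0.8, -0.8, 0.0, 1.0],
                     [ 0.8, -0.8, 0.0, 1.0],
                     [ 0.0,  0.8, 0.0, 1.0]]], dtype=torch.float32, device='cuda')
tri = torch.tensor([[0, 1, 2]], dtype=torch.int32, device='cuda')
col = torch.tensor([[[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]]], dtype=torch.float32, device='cuda')

# Rasterize, interpolate per-vertex attributes, and antialias; each step is differentiable.
rast, _ = dr.rasterize(glctx, pos, tri, resolution=[256, 256])
color, _ = dr.interpolate(col, rast, tri)
color = dr.antialias(color, rast, pos, tri)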

Licenses

Copyright © 2020, NVIDIA Corporation. All rights reserved.

This work is made available under the Nvidia Source Code License.

For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing

We do not currently accept outside code contributions in the form of pull requests.

Environment map stored as part of samples/data/envphong.npz is derived from a Wave Engine sample material originally shared under MIT License. Mesh and texture stored as part of samples/data/earth.npz are derived from 3D Earth Photorealistic 2K model originally made available under TurboSquid 3D Model License.

Citation

@article{Laine2020diffrast,
  title   = {Modular Primitives for High-Performance Differentiable Rendering},
  author  = {Samuli Laine and Janne Hellsten and Tero Karras and Yeongho Seol and Jaakko Lehtinen and Timo Aila},
  journal = {ACM Transactions on Graphics},
  year    = {2020},
  volume  = {39},
  number  = {6}
}
Comments
  • Runtime Error: glLinkProgram() failed

    Hi, I tried to use nvdiffrast on Windows 10, following the documentation. When I executed the following command, a runtime error occurred:

    D:\development\anaconda3\envs\dmodel\lib\site-packages\torch\utils\cpp_extension.py:304: UserWarning: Error checking compiler version for cl: 'utf-8' codec can't decode byte 0xd3 in position 0: invalid continuation byte
      warnings.warn(f'Error checking compiler version for {compiler}: {error}')
    Traceback (most recent call last):
      File ".\samples\torch\cube.py", line 200, in <module>
        main()
      File ".\samples\torch\cube.py", line 191, in main
        mp4save_fn='progress.mp4'
      File ".\samples\torch\cube.py", line 76, in fit_cube
        glctx = dr.RasterizeGLContext()
      File "D:\development\anaconda3\envs\dmodel\lib\site-packages\nvdiffrast\torch\ops.py", line 151, in __init__
        self.cpp_wrapper = _get_plugin().RasterizeGLStateWrapper(output_db, mode == 'automatic', cuda_device_idx)
    RuntimeError: glLinkProgram() failed:
    Fragment info
    -------------
    0(2) : error C7528: OpenGL reserves names starting with 'gl_'
    (0) : error C2003: incompatible options for link
    

    My PyOpenGL version is 3.1.5 and my glfw version is 2.3.0.

    opened by Mirocos 21
  • Add missing python dependency for examples

    Following this example, I encountered the following error:

    ./run_sample.sh samples/torch/cube.py --resolution 32 --display-interval 10            
    Using container image: gltorch:latest
    Running command: samples/torch/cube.py --resolution 32 --display-interval 10
    No output directory specified, not saving log or images
    Mesh has 12 triangles and 8 vertices.
    iter=0,err=0.473205
    Traceback (most recent call last):
      File "samples/torch/cube.py", line 200, in <module>
        main()
      File "samples/torch/cube.py", line 191, in main
        mp4save_fn='progress.mp4'
      File "samples/torch/cube.py", line 150, in fit_cube
        util.display_image(result_image, size=display_res, title='%d / %d' % (it, max_iter))
      File "/app/samples/torch/util.py", line 69, in display_image
        import OpenGL.GL as gl
    ModuleNotFoundError: No module named 'OpenGL'
    

    Later on, I found that glfw was also missing.

    This is easily solved in this PR by just updating the docker image.

    opened by nachovizzo 15
  • Segmentation fault in RasterizeGLContext()

    Reproduction:

    python3 samples/torch/triangle.py
    

    Digging into this, the crash occurs on this line. It first happens at glBindBuffer, but even if I comment those calls out, it crashes on later ones.

    • Ubuntu 20.04.4
    • NVIDIA RTX 3090
    • CUDA toolkit 11.4
    • Python 3.9
    • PyTorch 1.11 (from conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch)

    This setup seems to be supported according to the nvdiffrec installation steps?

    glxinfo | grep "OpenGL version"
    OpenGL version string: 4.6.0 NVIDIA 470.129.06
    

    Any ideas why this is crashing? Thanks!

    opened by djsamseng 12
  • [E glutil.cpp:248] eglMakeCurrent() failed when setting GL context

    Hi, I followed the documentation and used nvdiffrast on Ubuntu 18.04 LTS with CUDA 11.4. I executed the following command, which throws an exception.

    command:

    python3 cube.py --resolution 16 --display-interval 10
    

    Exception:

    No output directory specified, not saving log or images
    Mesh has 12 triangles and 8 vertices.
    iter=0,err=0.489876
    [E glutil.cpp:248] eglMakeCurrent() failed when setting GL context
    Traceback (most recent call last):
      File "cube.py", line 200, in <module>
        main()
      File "cube.py", line 191, in main
        mp4save_fn='progress.mp4'
      File "cube.py", line 122, in fit_cube
        color     = render(glctx, r_mvp, vtx_pos, pos_idx, vtx_col, col_idx, resolution)
      File "cube.py", line 30, in render
        rast_out, _ = dr.rasterize(glctx, pos_clip, pos_idx, resolution=[resolution, resolution])
      File "/home/zeming/.local/lib/python3.6/site-packages/nvdiffrast/torch/ops.py", line 241, in rasterize
        return _rasterize_func.apply(glctx, pos, tri, resolution, ranges, grad_db, -1)
      File "/home/zeming/.local/lib/python3.6/site-packages/nvdiffrast/torch/ops.py", line 175, in forward
        out, out_db = _get_plugin().rasterize_fwd(glctx.cpp_wrapper, pos, tri, resolution, ranges, peeling_idx)
    RuntimeError: Cuda error: 219[cudaGraphicsMapResources(2, &s.cudaPosBuffer, stream);]
    [E glutil.cpp:248] eglMakeCurrent() failed when setting GL context
    terminate called after throwing an instance of 'c10::Error'
      what():  Cuda error: 219[cudaGraphicsUnregisterResource(s.cudaPosBuffer);]
    Exception raised from rasterizeReleaseBuffers at /home/zeming/.local/lib/python3.6/site-packages/nvdiffrast/common/rasterize.cpp:573 (most recent call first):
    frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7efc192eda22 in /home/zeming/.local/lib/python3.6/site-packages/torch/lib/libc10.so)
    frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x5b (0x7efc192ea3db in /home/zeming/.local/lib/python3.6/site-packages/torch/lib/libc10.so)
    frame #2: rasterizeReleaseBuffers(int, RasterizeGLState&) + 0xdb (0x7efaa982e63f in /home/zeming/.cache/torch_extensions/nvdiffrast_plugin/nvdiffrast_plugin.so)
    frame #3: RasterizeGLStateWrapper::~RasterizeGLStateWrapper() + 0x33 (0x7efaa9885397 in /home/zeming/.cache/torch_extensions/nvdiffrast_plugin/nvdiffrast_plugin.so)
    frame #4: std::default_delete<RasterizeGLStateWrapper>::operator()(RasterizeGLStateWrapper*) const + 0x22 (0x7efaa986c9f2 in /home/zeming/.cache/torch_extensions/nvdiffrast_plugin/nvdiffrast_plugin.so)
    frame #5: std::unique_ptr<RasterizeGLStateWrapper, std::default_delete<RasterizeGLStateWrapper> >::~unique_ptr() + 0x49 (0x7efaa98618c9 in /home/zeming/.cache/torch_extensions/nvdiffrast_plugin/nvdiffrast_plugin.so)
    frame #6: <unknown function> + 0xab003 (0x7efaa985b003 in /home/zeming/.cache/torch_extensions/nvdiffrast_plugin/nvdiffrast_plugin.so)
    frame #7: <unknown function> + 0x4ff688 (0x7efc222e2688 in /home/zeming/.local/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
    frame #8: <unknown function> + 0x50098e (0x7efc222e398e in /home/zeming/.local/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
    frame #9: python3() [0x5732de]
    frame #10: python3() [0x54edd2]
    frame #11: python3() [0x588fd8]
    frame #12: python3() [0x5add78]
    frame #13: python3() [0x5add8e]
    frame #14: python3() [0x5add8e]
    frame #15: python3() [0x56b606]
    <omitting python frames>
    frame #21: __libc_start_main + 0xe7 (0x7efc28269bf7 in /lib/x86_64-linux-gnu/libc.so.6)
    
    

    Any advice? Thanks!

    opened by Mirocos 11
  • [F glutil.cpp:366] eglCreateContext() failed  ... RuntimeError: OpenGL 4.4 or later is required

    Hi,

    I tried to run it on local Linux machines following the installation steps in the Dockerfile, and I also tried installing in a Docker image running Ubuntu 20.04 + CUDA 11; both fail with the following message. I'm running on an AWS Linux server, so I'm not sure whether this is related to the display being headless.

    Creating GL context for Cuda device 0
    Failed, falling back to default display
    eglInitialize() failed
    eglChooseConfig() failed
    eglCreateContext() failed
    EGL 1471947312.32765 OpenGL context created (disp: 0x0000000082415470, ctx: 0x0000000000000000)
    setGLContext() called with null gltcx
    Traceback (most recent call last):
      File "nvdiffrast/samples/torch/envphong.py", line 226, in <module>
        main()
      File "nvdiffrast/samples/torch/envphong.py", line 211, in main
        fit_env_phong(
      File "nvdiffrast/samples/torch/envphong.py", line 77, in fit_env_phong
        glctx = dr.RasterizeGLContext()
      File "/usr/local/lib/python3.8/dist-packages/nvdiffrast/torch/ops.py", line 151, in __init__
        self.cpp_wrapper = _get_plugin().RasterizeGLStateWrapper(output_db, mode == 'automatic', cuda_device_idx)
    RuntimeError: OpenGL 4.4 or later is required
    

    Is there a setting I need to configure for it to work? Thanks so much!

    opened by nairb2020 10
  • I have a problem with nvdiffrast_plugin_gl.so

    Hello, thank you for your great research!

    I have a problem running code that imports nvdiffrast.

    Traceback (most recent call last):
      File "/home/jiyouseo/anaconda3/envs/dmodel/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1809, in _run_ninja_build
        subprocess.run(
      File "/home/jiyouseo/anaconda3/envs/dmodel/lib/python3.9/subprocess.py", line 528, in run
        raise CalledProcessError(retcode, process.args,
    subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "/home/jiyouseo/nvdiffrec/train.py", line 556, in <module>
        glctx = dr.RasterizeGLContext()
      File "/home/jiyouseo/anaconda3/envs/dmodel/lib/python3.9/site-packages/nvdiffrast/torch/ops.py", line 221, in __init__
        self.cpp_wrapper = _get_plugin(gl=True).RasterizeGLStateWrapper(output_db, mode == 'automatic', cuda_device_idx)
      File "/home/jiyouseo/anaconda3/envs/dmodel/lib/python3.9/site-packages/nvdiffrast/torch/ops.py", line 118, in _get_plugin
        torch.utils.cpp_extension.load(name=plugin_name, sources=source_paths, extra_cflags=opts, extra_cuda_cflags=opts+['-lineinfo'], extra_ldflags=ldflags, with_cuda=True, verbose=False)
      File "/home/jiyouseo/anaconda3/envs/dmodel/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1202, in load
        return _jit_compile(
      File "/home/jiyouseo/anaconda3/envs/dmodel/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1425, in _jit_compile
        _write_ninja_file_and_build_library(
      File "/home/jiyouseo/anaconda3/envs/dmodel/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1537, in _write_ninja_file_and_build_library
        _run_ninja_build(
      File "/home/jiyouseo/anaconda3/envs/dmodel/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1825, in _run_ninja_build
        raise RuntimeError(message) from e
    RuntimeError: Error building extension 'nvdiffrast_plugin_gl': [1/1] c++ common.o glutil.o rasterize_gl.o torch_bindings_gl.o torch_rasterize_gl.o -shared -lGL -lEGL -L/home/jiyouseo/anaconda3/envs/dmodel/lib/python3.9/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda_cu -ltorch_cuda_cpp -ltorch -ltorch_python -L/usr/local/cuda-11.3/lib64 -lcudart -o nvdiffrast_plugin_gl.so
    FAILED: nvdiffrast_plugin_gl.so 
    c++ common.o glutil.o rasterize_gl.o torch_bindings_gl.o torch_rasterize_gl.o -shared -lGL -lEGL -L/home/jiyouseo/anaconda3/envs/dmodel/lib/python3.9/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda_cu -ltorch_cuda_cpp -ltorch -ltorch_python -L/usr/local/cuda-11.3/lib64 -lcudart -o nvdiffrast_plugin_gl.so
    /usr/bin/ld: cannot find -lGL
    /usr/bin/ld: cannot find -lEGL
    collect2: error: ld returned 1 exit status
    ninja: build stopped: subcommand failed.
    

    So I modified ['ninja', '-v'] to ['ninja', '--version'] in cpp_extension.py. Then it returns:

    ImportError:/home/jiyouseo/.cache/torch_extensions/py36_cu113/nvdiffrast_plugin_gl/nvdiffrast_plugin_gl.so: cannot open shared object file: No such file or directory
    

    I guess it is because nvdiffrast_plugin_gl.so cannot be built. How exactly can I build nvdiffrast_plugin_gl.so? This is my environment:

    No LSB modules are available.
    Distributor ID:	Ubuntu
    Description:	Ubuntu 18.04.6 LTS
    Release:	18.04
    Codename:	bionic
    
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 465.19.01    Driver Version: 465.19.01    CUDA Version: 11.3     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  NVIDIA RTX A6000    On   | 00000000:3B:00.0 Off |                  Off |
    | 58%   82C    P2   232W / 300W |  34285MiB / 48685MiB |    100%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    |   1  NVIDIA RTX A6000    On   | 00000000:5E:00.0 Off |                  Off |
    | 67%   85C    P2   234W / 300W |  33903MiB / 48685MiB |    100%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    |   2  NVIDIA RTX A6000    On   | 00000000:AF:00.0 Off |                  Off |
    | 67%   85C    P2   235W / 300W |  34135MiB / 48685MiB |    100%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    |   3  NVIDIA RTX A6000    On   | 00000000:D8:00.0 Off |                  Off |
    | 61%   84C    P2   232W / 300W |  34101MiB / 48685MiB |    100%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
                                                                                   
    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |    0   N/A  N/A      1733      G   /usr/lib/xorg/Xorg                  4MiB |
    |    0   N/A  N/A     16354      C   ...onda/envs/uvtr/bin/python    34277MiB |
    |    1   N/A  N/A      1733      G   /usr/lib/xorg/Xorg                  4MiB |
    |    1   N/A  N/A     16355      C   ...onda/envs/uvtr/bin/python    33895MiB |
    |    2   N/A  N/A      1733      G   /usr/lib/xorg/Xorg                  4MiB |
    |    2   N/A  N/A     16356      C   ...onda/envs/uvtr/bin/python    34127MiB |
    |    3   N/A  N/A      1733      G   /usr/lib/xorg/Xorg                  4MiB |
    |    3   N/A  N/A     16357      C   ...onda/envs/uvtr/bin/python    34093MiB |
    +-----------------------------------------------------------------------------+
    

    Thank you.

    opened by JiyouSeo 9
  • Possible memory leak when using nn.DataParallel

    Hi, when I use your code to implement multi-GPU training with the provided rasterization, the GPU memory keeps increasing.

    I first define a list of RasterizeGLContext instances, one per GPU, in the __init__ function of a PyTorch nn.Module. During the forward pass, I choose the RasterizeGLContext instance according to the current device id (a sketch of this setup follows below). The GPU memory keeps increasing when I use two or more GPUs.

    I don't know whether I am using the code incorrectly or whether there is a bug in your implementation. If possible, could you provide some sample code for multi-GPU training? Thanks!
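
    As an illustration of the setup described above (purely a sketch, not a confirmed fix): one rasterization context per GPU, created once and then selected by the current device in the forward pass. This assumes a recent nvdiffrast version in which RasterizeGLContext accepts a device argument; the class and parameter names are made up for the example.

    import torch
    import torch.nn as nn
    import nvdiffrast.torch as dr

    class MultiGPURenderer(nn.Module):
        def __init__(self, num_gpus, resolution=256):
            super().__init__()
            self.resolution = resolution
            # One rasterization context per GPU, created once up front.
            self.contexts = [dr.RasterizeGLContext(device=i) for i in range(num_gpus)]

        def forward(self, pos, tri):
            # Pick the context that belongs to the GPU this replica is running on.
            glctx = self.contexts[torch.cuda.current_device()]
            rast, _ = dr.rasterize(glctx, pos, tri,
                                   resolution=[self.resolution, self.resolution])
            return rast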

    opened by Turlan 9
  • glewInit() failed

    I'm on Ubuntu 18.04 and I've installed all dependencies as in the Dockerfile, including:

    • recompiling GLEW 2.1.0 from source as in the Dockerfile
    • setting the environment variables LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH, PYOPENGL_PLATFORM=egl, and CC=gcc-8.

    I'm using torch 1.7.1 and CUDA 10.2.

    I keep getting this error on any sample in the torch folder:

    [F glutil.inl:188] glewInit() failed, return value = 4

    Any idea why?

    opened by mich-sanna 9
  • Question about building in C++

    Hi, I am trying to use this powerful framework from C++. When I build my project with nvdiffrast, a C++ link error says that it cannot open the file setgpu.lib provided in this project. Why does this happen, and is there a way to solve it?

    I am using Visual Studio 2019 professional with CMake to build my project.

    opened by Mirocos 8
  • About the first running time

    Thanks for your code. I have a question: why is the first run so slow? I run the code in the Docker image you provide. I have implemented a function similar to warpAffine, and the running times are as follows:

    image shape : 2448 3264 warp_img time is 0.318
    image shape : 1024 1024 warp_img time is 0.019
    image shape : 720 1407 warp_img time is 0.020
    image shape : 2448 3264 warp_img time is 0.085
    image shape : 1920 2560 warp_img time is 0.055
    image shape : 634 951 warp_img time is 0.010
    image shape : 1944 2592 warp_img time is 0.056
    image shape : 900 1600 warp_img time is 0.022
    image shape : 720 1424 warp_img time is 0.018
    

    Can I avoid the cold boot?

    Looking forward to your reply.
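
    If the slowdown comes from one-time costs such as building/loading the plugin and creating the rasterization context on the first call (an assumption on my part, not something confirmed in this thread), a warm-up pass at startup can move that cost out of the first real image. A minimal sketch:

    import torch
    import nvdiffrast.torch as dr

    # Pay the one-time context-creation / compilation cost up front with a dummy rasterization.
    glctx = dr.RasterizeCudaContext()
    pos = torch.zeros([1, 3, 4], device='cuda')
    pos[..., 3] = 1.0                                   # degenerate triangle at the clip-space origin
    tri = torch.tensor([[0, 1, 2]], dtype=torch.int32, device='cuda')
    dr.rasterize(glctx, pos, tri, resolution=[64, 64])  # warm-up call, output discarded
    torch.cuda.synchronize()                            # make sure the warm-up actually finished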

    opened by LeslieZhoa 8
  • RasterizeCudaContext behaves very differently from RasterizeGLContext

    I tested Deep3DFaceRecon_pytorch with both RasterizeGLContext (the default in that project) and RasterizeCudaContext, and they behaved very differently.

    The key part of the code is in Deep3DFaceRecon_pytorch/util/nvdiffrast.py. You can change the rasterizer on line 58. You may change line 78 from mask = (rast_out[..., 3] > 0).float().unsqueeze(1) to mask = (rast_out[..., 3] > float("-inf")).float().unsqueeze(1) to ignore the background and show the result more clearly.

    I got a complete rendered human face with RasterizeGLContext, but a broken image with RasterizeCudaContext. I tested both on Linux and Windows with various versions of torch; I believe the problem is independent of platform and environment.

    Result of RasterizeGLContext: (image omitted). Result of RasterizeCudaContext: (image omitted).

    opened by XCR16729438 7
  • Parallel Rendering through PyOpenGL

    I am attempting to use glMultiDrawElementsIndirect to implement parallel batched rendering using PyOpenGL and am running into a variety of issues, mostly due to my inexperience with OpenGL. I am following your implementation in nvdiffrast/common/rasterize_gl.cpp since it's the only readable usage of glMultiDrawElementsIndirect that I've found thus far. Any help with the following issues would be much appreciated 🙏:

    I am using a multilayered image buffer (height x width x batch_size):

    color_tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D_ARRAY, color_tex)
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, color_tex, 0)
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA32F, width, height, batch_size, 0,
                    GL_RGBA, GL_UNSIGNED_BYTE, None)
    

    and then doing rendering into it using:

    indirect = np.array([
        [indices.shape[0]*3, batch_size, 0, 0, 0, 1]
        for _ in range(batch_size)
        ], dtype=np.uint32)
    glMultiDrawElementsIndirect(GL_TRIANGLES,
     	GL_UNSIGNED_INT,
     	indirect,
     	batch_size,
     	indirect.dtype.itemsize * indirect.shape[1]
    )
    

    It seems to work, but the rendered images look like this (image omitted), when they are actually supposed to look like this (image omitted).

    I can't track down the cause of this mismatch.
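
    For reference, the per-draw record that the OpenGL spec defines for glMultiDrawElementsIndirect is five GLuints (count, instanceCount, firstIndex, baseVertex, baseInstance), and the stride should match that record size (or be 0 for tightly packed records). Whether the six-element rows above match that layout may be worth double-checking, though that is only a guess. A purely illustrative numpy dtype for the spec layout:

    import numpy as np

    # DrawElementsIndirectCommand as defined by the OpenGL 4.x spec:
    # five tightly packed GLuints per draw.
    draw_cmd = np.dtype([
        ("count",         np.uint32),  # number of indices in this draw
        ("instanceCount", np.uint32),  # number of instances
        ("firstIndex",    np.uint32),  # offset into the bound index buffer
        ("baseVertex",    np.uint32),  # value added to each index
        ("baseInstance",  np.uint32),  # first instance ID
    ])

    assert draw_cmd.itemsize == 20  # bytes per draw record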

    What's even more odd is that when I add this depth framebuffer:

    glBindTexture(GL_TEXTURE_2D_ARRAY, depth_tex)
    glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depth_tex, 0)
    glTexImage3D.wrappedOperation(
        GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH24_STENCIL8, width, height, batch_size, 0, 
        GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, None
    );
    

    the image now comes out correctly. But it only writes to the first layer of the multilayer framebuffer.

    Any ideas on where to look next would be much appreciated!

    Probably unrelated, but I saw this in the code:

        // Enable depth modification workaround on A100 and later.
        int capMajor = 0;
        NVDR_CHECK_CUDA_ERROR(cudaDeviceGetAttribute(&capMajor, cudaDevAttrComputeCapabilityMajor, cudaDeviceIdx));
        s.enableZModify = (capMajor >= 8);
    

    I am running on an A100. Might that be related to this?

    opened by nishadgothoskar 7
  • LINK: fatal error LNK1104: can't open the file "nvdiffrast_plugin.pyd"

    I was able to use version 0.28 before, but it seems I hit a compile error when I try to use version 0.30. (image omitted)

    Thank you very much for any suggestion.

    opened by xiaomihefeifei 2
  • ImportError: No module named 'nvdiffrast_plugin'

    When I run the code in ./samples/torch, there is always an error: No module named 'nvdiffrast_plugin'

    Traceback (most recent call last):
      File "triangle.py", line 21, in <module>
        glctx = dr.RasterizeGLContext()
      File "/opt/conda/envs/fomm/lib/python3.7/site-packages/nvdiffrast/torch/ops.py", line 142, in __init__
        self.cpp_wrapper = _get_plugin().RasterizeGLStateWrapper(output_db, mode == 'automatic')
      File "/opt/conda/envs/fomm/lib/python3.7/site-packages/nvdiffrast/torch/ops.py", line 83, in _get_plugin
        torch.utils.cpp_extension.load(name=plugin_name, sources=source_paths, extra_cflags=opts, extra_cuda_cflags=opts, extra_ldflags=ldflags, with_cuda=True, verbose=False)
      File "/opt/conda/envs/fomm/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1091, in load
        keep_intermediates=keep_intermediates)
      File "/opt/conda/envs/fomm/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1317, in _jit_compile
        return _import_module_from_library(name, build_directory, is_python_module)
      File "/opt/conda/envs/fomm/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1706, in _import_module_from_library
        file, path, description = imp.find_module(module_name, [path])
      File "/opt/conda/envs/fomm/lib/python3.7/imp.py", line 299, in find_module
        raise ImportError(_ERR_MSG.format(name), name=name)
    ImportError: No module named 'nvdiffrast_plugin'

    It seems that some files are missing. I installed nvdiffrast as instructed in the documentation: cd ./nvdiffrast and pip install . I have uninstalled and reinstalled it many times, but the error persists. I tried CUDA 10.0 + torch 1.6, CUDA 11.1 + torch 1.8.1, and CUDA 9.0 + torch 1.6, but all of these combinations give this error. I use an NVIDIA 3090 GPU. Can anyone help solve this problem? Thanks.

    opened by sunkymepro 20
Owner
NVIDIA Research Projects