Python interface to GPU-powered libraries

Overview

scikit-cuda

Package Description

scikit-cuda provides Python interfaces to many of the functions in the CUDA device/runtime, CUBLAS, CUFFT, and CUSOLVER libraries distributed as part of NVIDIA's CUDA Programming Toolkit, as well as interfaces to select functions in the CULA Dense Toolkit. Both low-level wrapper functions similar to their C counterparts and high-level functions comparable to those in NumPy and SciPy are provided.
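As a quick illustration of the intended workflow (a minimal sketch, not taken from the package documentation; the array shapes and dtype below are arbitrary), PyCUDA moves data to the GPU and a high-level skcuda routine then mirrors the corresponding NumPy call:

    import numpy as np
    import pycuda.autoinit            # creates a CUDA context on import
    import pycuda.gpuarray as gpuarray
    import skcuda.linalg as linalg

    linalg.init()

    # Multiply two matrices on the GPU and compare against NumPy.
    a = np.random.rand(4, 4).astype(np.float32)
    b = np.random.rand(4, 4).astype(np.float32)
    a_gpu = gpuarray.to_gpu(a)
    b_gpu = gpuarray.to_gpu(b)
    c_gpu = linalg.dot(a_gpu, b_gpu)  # high-level wrapper around CUBLAS GEMM
    print(np.allclose(c_gpu.get(), np.dot(a, b)))

The low-level wrappers (e.g. skcuda.cublas.cublasSgemm) expose the same operations with C-style signatures.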


Documentation

Package documentation is available at http://scikit-cuda.readthedocs.org/. Many of the high-level functions have examples in their docstrings. More illustrations of how to use both the wrappers and high-level functions can be found in the demos/ and tests/ subdirectories.

Development

The latest source code can be obtained from https://github.com/lebedov/scikit-cuda.

When submitting bug reports or questions via the issue tracker, please include the following information:

  • Python version.
  • OS platform.
  • CUDA and PyCUDA versions.
  • Version or git revision of scikit-cuda.

Citing

If you use scikit-cuda in a scholarly publication, please cite it as follows:

@misc{givon_scikit-cuda_2019,
          author = {Lev E. Givon and
                    Thomas Unterthiner and
                    N. Benjamin Erichson and
                    David Wei Chiang and
                    Eric Larson and
                    Luke Pfister and
                    Sander Dieleman and
                    Gregory R. Lee and
                    Stefan van der Walt and
                    Bryant Menn and
                    Teodor Mihai Moldovan and
                    Fr\'{e}d\'{e}ric Bastien and
                    Xing Shi and
                    Jan Schl\"{u}ter and
                    Brian Thomas and
                    Chris Capdevila and
                    Alex Rubinsteyn and
                    Michael M. Forbes and
                    Jacob Frelinger and
                    Tim Klein and
                    Bruce Merry and
                    Nate Merill and
                    Lars Pastewka and
                    Li Yong Liu and
                    S. Clarkson and
                    Michael Rader and
                    Steve Taylor and
                    Arnaud Bergeron and
                    Nikul H. Ukani and
                    Feng Wang and
                    Wing-Kit Lee and
                    Yiyin Zhou},
    title        = {scikit-cuda 0.5.3: a {Python} interface to {GPU}-powered libraries},
    month        = May,
    year         = 2019,
    doi          = {10.5281/zenodo.3229433},
    url          = {http://dx.doi.org/10.5281/zenodo.3229433},
    note         = {\url{http://dx.doi.org/10.5281/zenodo.3229433}}
}

Authors & Acknowledgments

See the included AUTHORS file for more information.

Note Regarding CULA Availability

As of 2017, the CULA toolkit is available to premium tier users of Celerity Tools (EM Photonics' new HPC site).

Related

Python wrappers for cuDNN by Hannes Bretschneider are available on GitHub.

ArrayFire is a free library containing many GPU-based routines with an officially supported Python interface.

License

This software is licensed under the BSD License. See the included LICENSE file for more information.

Comments
  • cublas library not found

    Problem

    Step 1: install scikit-cuda==0.5.2.
    Step 2: just do import skcuda.linalg as linalg.

    Error

    File "C:\Users\cegprakash.virtualenvs\similarity_articles-JuZA6FGA\lib\site-packages\skcuda\linalg.py", line 23, in from . import cublas File "C:\Users\cegprakash.virtualenvs\similarity_articles-JuZA6FGA\lib\site-packages\skcuda\cublas.py", line 55, in raise OSError('cublas library not found') OSError: cublas library not found

    Environment

    • OS platform : Windows 10
    • Python version : 3.6.3
    • CUDA version : V10.0.130 (Installed it from https://developer.nvidia.com/cuda-downloads)
    • PyCUDA version : 2018.1.1
    • scikit-cuda version : 0.5.2
    opened by cegprakash 27
  • SVD on GPU is MUCH slower than on CPU

    Hey,

    I have the following code:

        # measurement: time a single complex64 SVD on the GPU (CUSOLVER backend)
        from timeit import timeit
        import numpy as np
        import pycuda.autoinit
        import pycuda.gpuarray as gpuarray
        import skcuda.misc
        from skcuda.linalg import svd

        skcuda.misc.init()
        N = 6400
        Y = np.random.randn(N, N) + 1j*np.random.randn(N, N)
        X = np.asarray(Y, np.complex64)
        a_gpu = gpuarray.to_gpu(X)

        tm = timeit("svd(a_gpu, jobu='A', jobvt='A', lib='cusolver')",
                    globals={'a_gpu': a_gpu, 'svd': svd},
                    number=1)
    

    You should be able to run it if you have skcuda and pycuda installed.

    What I am trying to do is to compare the speed with numpy's SVD version:

        import numpy as np
        from timeit import timeit

        N = 10000
        X = np.random.randn(N, N).astype('float32')
        tm = timeit('svd(X, full_matrices=True)',
                    globals={'X': X, 'svd': np.linalg.svd},
                    number=1)

    but when I compare the results, the GPU result is terrible: with numpy on a single CPU, the SVD of a roughly 10000-row matrix takes about 15 minutes, while the skcuda SVD of a roughly 6400-row matrix takes about 90 minutes.

    Am I doing something wrong? I can see that skcuda is using the card (at least it is warming up and taking up memory on the card).

    opened by hnykda 22
  • Mac OSX Mavericks 10.9.5 can't find CULA

    I'm consistently getting this problem:

        raise OSError('%s not found' % _load_err)
        OSError: libcula_lapack.dylib, libcula_core.dylib not found

    I had to change the default library names in cula.py, since CULA 16 provides libcula_lapack.dylib and libcula_core.dylib on Mac OS X 10.9.5. CULA is installed and works, and all CULA libraries are included in DYLD_LIBRARY_PATH. CUBLAS is working fine, but scikits.cuda just can't find CULA, even though the relevant .dylib files are in the library path.
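    One thing worth checking (a sketch, not an official workaround; on OS X a process does not always inherit DYLD_LIBRARY_PATH, e.g. when launched outside a login shell) is whether the Python process itself sees the path and whether the loader can resolve the CULA libraries:

        import os
        from ctypes.util import find_library

        # Confirm that this Python process inherited DYLD_LIBRARY_PATH and
        # that the dynamic loader can resolve the CULA libraries from it.
        print(os.environ.get('DYLD_LIBRARY_PATH'))
        print(find_library('cula_lapack'))
        print(find_library('cula_core'))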

    opened by stevertaylor 21
  • Scikits Cuda on OS X

    Hi, I am a new user of scikits.cuda and I am trying to set it up on a machine running OS X 10.8.2 with NVIDIA CUDA 5.0 already installed and working. I installed scikits.cuda using easy_install, and when I try to import it I get an error message saying that the CUDA driver library was not found.

    Please find the log below,

        Python 2.7.5 (default, Aug 1 2013, 01:01:17)
        [GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import pycuda.autoinit
        >>> import scikits.cuda.fft as gpufft
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scikits.cuda-0.042-py2.7.egg/scikits/cuda/fft.py", line 19, in <module>
            import misc
          File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scikits.cuda-0.042-py2.7.egg/scikits/cuda/misc.py", line 16, in <module>
            import cuda
          File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scikits.cuda-0.042-py2.7.egg/scikits/cuda/cuda.py", line 8, in <module>
            from cudadrv import *
          File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scikits.cuda-0.042-py2.7.egg/scikits/cuda/cudadrv.py", line 29, in <module>
            raise OSError('CUDA driver library not found')
        OSError: CUDA driver library not found

    Please help me to resolve this issue.

    Thanks & Regards, Kartheek.

    opened by kartheekmedathati 20
  • Windows support

    Hi there,

    I've been wanting to use CUFFT on Windows using PyCUDA; I noted that scikits.cuda doesn't support Windows, but monkey-patching it so that at least CUFFT works on Win7 64-bit proved quite trivial (see below).

        import ctypes
        import platform

        if platform.system() == "Windows":
            _libcufft = ctypes.windll.LoadLibrary('cufft32_42_9.dll')

    I am anything but an expert on these matters, so I don't know how hard it is to generalize this; but if at least CUFFT can be given Windows support this easily, I'd love to see it merged into the main branch, so I don't have to ship my project with this monkey-patch of mine.

    Thanks for the great work on this!

    ENH 
    opened by EelcoHoogendoorn 20
  • Support for CUDA 8.0 - cusolver.py?

    I am failing to run my program that starts with some standard imports.

    test.py

    import pycuda.driver as cuda
    import pycuda.autoinit
    import pycuda.gpuarray as gpuarray
    import pycuda.cumath
    from pycuda.compiler import SourceModule
    import numpy as np
    from skcuda import linalg
    linalg.init()
    

    Error output:

    File "test.py", line 10, in <module>
        linalg.init()
      File "/usr/lib/python2.7/site-packages/skcuda/misc.py", line 177, in init
        from . import cusolver
      File "/usr/lib/python2.7/site-packages/skcuda/cusolver.py", line 55, in <module>
        raise OSError('cusolver library not found')
    OSError: cusolver library not found
    

    All PyCUDA and scikit-cuda dependencies are freshly installed and all CUDA paths are set up according to the CUDA documentation.

    Cheers

    opened by DomagojHack 19
  • OSX Support

    I am trying to run scikits.cuda on my OS X 10.8.3 Apple laptop. I modified utils.py to look for libdl.dylib rather than libdl.so, and used Homebrew to install gobjdump to (I believe) help find the CUBLAS version. I also tried changing the objdump invocation to gobjdump/llvm-objdump.

    I'm now stuck on line 227 in utils.py, above which it says:

    XXX This approach to obtaining the CUBLAS version number may break Windows/MacOSX compatibility XXX
    

    This appears to have happened. I'll probably keep plugging away at this throughout the day, but any chance of help?

    BUG 
    opened by c0g 17
  • CUDA errors when unit tests are run via setuptools/nose, but not when run individually

    Hi. I am facing an issue with the cublas library: I cannot import it.

    Problem

    After installing the dependencies (setuptools) and installing scikit-cuda with "pip install scikit-cuda", I run the following line:

                  import skcuda.linalg as linalg
    

    The execution gives the following:

        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
            import skcuda.linalg as linalg
          File "C:\Users\victo\Anaconda3\envs\spikesorting_tsne\lib\site-packages\skcuda\linalg.py", line 21, in <module>
            from . import cublas
          File "C:\Users\victo\Anaconda3\envs\spikesorting_tsne\lib\site-packages\skcuda\cublas.py", line 55, in <module>
            raise OSError('cublas library not found')
        OSError: cublas library not found

    Environment

    • Microsoft Windows [Versión 10.0.17713.1002]
    • Python 3.6.6
    • CUDA Toolkit 9.0, installed by downloading the package from the NVIDIA web page, including the cuDNN library (cudnn-9.0-windows10-x64-v7.2.1.38). I made sure the environment variables CUDA_PATH and CUDA_PATH_V9_0 were set.
    • scikit-cuda 0.5.1
    BUG 
    opened by vhcg77 15
  • CUFFT library not found

    I'm an OS X noob and have never encountered this one on Linux machines with similar software configurations. I'm running skcuda version 0.5.1 in an Anaconda env with CUDA Toolkit 7.5 and PyCUDA installed on OS X 10.11.2.

    from skcuda.cuda import fft

    returns OSError: cufft library not found, which also affects other Python programs that use CUDA, e.g. mne.cuda @Eric89GXL

    opened by ktavabi 14
  • failed installation of scikits.cuda: nvcc not in path

    Hi, the installation of scikits.cuda via pip failed: "nvcc not in path"

        marco@marco-All-Series:~$ sudo pip install scikits.cuda
        Downloading/unpacking scikits.cuda
          Downloading scikits.cuda-0.042.tar.gz (97kB): 97kB downloaded
          Running setup.py (path:/tmp/pip_build_root/scikits.cuda/setup.py) egg_info for package scikits.cuda
        Requirement already satisfied (use --upgrade to upgrade): numpy in /usr/local/lib/python2.7/dist-packages (from scikits.cuda)
        Downloading/unpacking pycuda>=0.94.2 (from scikits.cuda)
          Downloading pycuda-2014.1.tar.gz (1.6MB): 1.6MB downloaded
          Running setup.py (path:/tmp/pip_build_root/pycuda/setup.py) egg_info for package pycuda
            *** WARNING: nvcc not in path.
            *************************************************************
            *** I have detected that you have not run configure.py.
            *************************************************************
            *** Additionally, no global config files were found.
            *** I will go ahead with the default configuration.
            *** In all likelihood, this will not work out.
            ***
            *** See README_SETUP.txt for more information.
            ***
            *** If the build does fail, just re-run configure.py with the
            *** correct arguments, and then retry. Good luck!
            *************************************************************
            *** HIT Ctrl-C NOW IF THIS IS NOT WHAT YOU WANT
            *************************************************************
            Continuing in 1 seconds...
            Traceback (most recent call last):
              File "<string>", line 17, in <module>
              File "/tmp/pip_build_root/pycuda/setup.py", line 216, in <module>
                main()
              File "/tmp/pip_build_root/pycuda/setup.py", line 88, in main
                conf["CUDA_INC_DIR"] = [join(conf["CUDA_ROOT"], "include")]
              File "/usr/lib/python2.7/posixpath.py", line 77, in join
                elif path == '' or path.endswith('/'):
            AttributeError: 'NoneType' object has no attribute 'endswith'
        Cleaning up...
        Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/pycuda
        Storing debug log for failure in /home/marco/.pip/pip.log

    Yesterday I had the same problem with Theano. Before, when I had installed some packages with the Anaconda distribution, everything with Theano went fine; but right after removing Anaconda (I didn't want to rely on a proprietary distribution), exactly the same error message appeared. For Theano I solved the problem by putting the CUDA path into the .theanorc configuration file.

    Any hints to solve the problem for scikits.cuda?

    Looking forward to your kind help. Kind regards. Marco

    opened by marcoippolito 13
  • 'module' object has no attribute 'cublasCgemmBatched'

    Hi guys,

    I recently installed the latest version of scikits.cuda: I simply got the zip from GitHub and executed sudo python setup.py install.

    The code that I am running stops when it needs cublasCgemmBatched, which was added recently. I obtain:

        AttributeError: 'module' object has no attribute 'cublasCgemmBatched'
        Apply node that caused the error: BatchedComplexDotOp(GpuContiguous.0, GpuContiguous.0)

    (I am running Sander Dieleman's FFT convolution code for Theano: http://benanne.github.io/2014/05/12/fft-convolutions-in-theano.html.)

    Is there something obvious that could explain the fact that it seems not to find it?

    Thanks in advance, Sam

    opened by samhumeau 13
  • ValueError with linalg.dot when transa=True

    Problem

    I want to compute the skcuda equivalent to A.T @ b for numpy arrays (A, b) when A has 2 dimensions and b has one dimension. Here's some simple test data

    import numpy as np
    import pycuda.gpuarray as gpuarray
    import skcuda.linalg as skla
    
    A_gpu = gpuarray.to_gpu(np.ones((3, 2)))
    b_gpu = gpuarray.to_gpu(np.ones(3))
    

    I would expect that the function call below produces the desired output. But instead it raises a ValueError:

    skla.dot(A_gpu, b_gpu, transa='T')
    
    Traceback (most recent call last):
      File "/home/riley/anaconda3/envs/rla39a/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
      File "<ipython-input-27-3e66e0c2b9c5>", line 1, in <module>
        skla.dot(A_gpu, b_gpu, transa='T')
      File "/home/riley/anaconda3/envs/rla39a/lib/python3.9/site-packages/skcuda/linalg.py", line 1060, in dot
        return out.reshape(out_shape)
      File "/home/riley/anaconda3/envs/rla39a/lib/python3.9/site-packages/pycuda/gpuarray.py", line 912, in reshape
        raise ValueError("total size of new array must be unchanged")
    ValueError: total size of new array must be unchanged
    

    The function call below computes the expected result

    c_gpu = skla.dot(b_gpu, A_gpu)
    

    Environment

    • Ubuntu 18.04
    • Python 3.9
    • CUDA 9.1.0 (I forget how I installed it, sorry!)
    • PyCUDA 2021.1
    • scikit-cuda 0.5.3.
    opened by rileyjmurray 1
  • gpuarray fail to transpose and flatten after cublas.Sgemm

    Problem

    I am using Colab for GPU programming, and I ran into a problem with the matrix multiplication function Sgemm, whose three inputs are gpuarrays. Once I had the (flattened) result, I tried to rearrange it by first transposing (.T) and then flattening (ravel()). I expected this to give a matrix whose element order differs from the raw output, since I rearranged it, but I still got an identical matrix. So I would like to know how to rearrange the order of a gpuarray. Thanks in advance.
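    A sketch that may help (assuming the cause is that .T on a GPUArray only swaps shape and strides without moving any data, so a following ravel() hands back the bytes in their original order): skcuda.linalg.transpose writes a genuinely transposed copy on the GPU, after which ravel() returns the reordered elements.

        import numpy as np
        import pycuda.autoinit
        import pycuda.gpuarray as gpuarray
        import skcuda.linalg as linalg

        linalg.init()

        # Small 2x3 example: linalg.transpose produces a real transposed copy,
        # so flattening it yields the elements in transposed order.
        a = np.arange(6, dtype=np.float32).reshape(2, 3)
        a_gpu = gpuarray.to_gpu(a)
        at_gpu = linalg.transpose(a_gpu)
        print(at_gpu.ravel().get())   # [0. 3. 1. 4. 2. 5.]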

    Environment

    I think the environment is Linux 5.4.104+ and Python version is 3.7.12. CUDA is integrated in Colab with version Cuda 11.1. Pycuda version is 2021.1, and scikit-cuda version is 0.5.3.

    opened by SuperbTUM 0
  • cusolver.py can't import cusolver64_11.dll

    Probably caused by a misprint in cusolver.py: "11" is missing from _win32_version_list.

    in line 26 two "10":

    _win32_version_list = [110, 10, 10, 100, 92, 91, 90, 80, 75, 70]

    I think it should be

    _win32_version_list = [110, 11, 10, 100, 92, 91, 90, 80, 75, 70]

    opened by Axciton 0
  • CUSOLVER library only available in CUDA 7.0 and later

    Problem

    I found that pycuda.gpuarray.dot() behaves differently from numpy.dot(), so I want to use linalg.dot() by importing skcuda.linalg. (Is the calculation performed by linalg.dot() equivalent to numpy.dot()?)

    Then I got ImportError: CUSOLVER library only available in CUDA 7.0 and later

    Does someone know why this happens? Thanks for your help.
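    Regarding the parenthetical question about dot(): the two functions compute different things. A sketch, assuming 2-D single-precision inputs (this does not address the ImportError itself):

        import numpy as np
        import pycuda.autoinit
        import pycuda.gpuarray as gpuarray
        import skcuda.linalg as linalg

        linalg.init()

        a = np.random.rand(3, 3).astype(np.float32)
        b = np.random.rand(3, 3).astype(np.float32)
        a_gpu, b_gpu = gpuarray.to_gpu(a), gpuarray.to_gpu(b)

        # pycuda.gpuarray.dot reduces over all elements (a scalar, like (a*b).sum()),
        # while skcuda.linalg.dot performs the matrix product, like numpy.dot.
        print(gpuarray.dot(a_gpu, b_gpu).get())
        print(np.allclose(linalg.dot(a_gpu, b_gpu).get(), np.dot(a, b)))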

    Environment

    • OS platform: Windows 10
    • Python version: 3.6.8
    • CUDA version: V10.2.89
    • PyCUDA version: 2021.1
    • scikit-cuda version: 0.5.3

    opened by decoli 0
  • Error while importing skcuda.linalg

    Problem

    While importing skcuda.linalg, I get OSError: cublas library not found.

    I tried other PyCUDA releases, but it still doesn't work.

    Environment

    • OS platform (including distro if you are on Linux): Windows 10
    • Python version: 3.8.8
    • CUDA version: 11.2.142, downloaded and installed from the NVIDIA website
    • PyCUDA version: 2021.1+cuda114
    • scikit-cuda version (including GitHub revision if you have installed it from there): 0.5.3
    opened by hyysky 4
  • Schedule for 0.5.4 release?

    Are there any plans to get a 0.5.4 release out the door? It looks like there has been some good work on scikit-cuda over the last two years (since 0.5.3 was released), and it would be great to get it into a proper release rather than installing from git.

    opened by bmerry 1