📊 A simple command-line utility for querying and monitoring GPU status

Overview

gpustat


Just less than nvidia-smi?

Screenshot: gpustat -cp

NOTE: This works with NVIDIA Graphics Devices only, no AMD support as of now. Contributions are welcome!

Self-promotion: A web interface for gpustat is available (in alpha)! Check out gpustat-web.

Usage

$ gpustat

Options:

  • --color : Force colored output (even when stdout is not a tty)
  • --no-color : Suppress colored output
  • -u, --show-user : Display username of the process owner
  • -c, --show-cmd : Display the process name
  • -f, --show-full-cmd : Display the full command and CPU stats of running processes
  • -p, --show-pid : Display PID of the process
  • -F, --show-fan : Display GPU fan speed
  • -e, --show-codec : Display encoder and/or decoder utilization
  • -P, --show-power : Display GPU power usage and/or limit (draw or draw,limit)
  • -a, --show-all : Display all GPU properties above
  • --watch, -i, --interval : Run in watch mode (equivalent to watch gpustat); the value denotes the interval between updates. (#41)
  • --json : JSON output (experimental, #10); see the sketch after this list.
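
A minimal sketch (not part of gpustat's documented API) of how another Python program could consume the experimental --json output; because the JSON schema is experimental (#10) and may change between versions, the snippet inspects the top-level keys instead of assuming field names.

    import json
    import subprocess

    # Run gpustat in JSON mode and parse whatever it prints.
    raw = subprocess.check_output(["gpustat", "--json"], text=True)
    stats = json.loads(raw)

    # Inspect the structure rather than hard-coding experimental field names.
    print("top-level keys:", sorted(stats.keys()))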

Tips

  • To periodically watch, try gpustat --watch or gpustat -i (#41).
    • For older versions, one may use watch --color -n1.0 gpustat --color.
  • Running the nvidia-smi daemon (root privilege required) will make queries much faster and use less CPU (#54).
  • The GPU ID (index) shown by gpustat (and nvidia-smi) is the PCI BUS ID, while CUDA by default assigns the lowest ID to the fastest GPU. To make CUDA and gpustat use the same GPU index, set the CUDA_DEVICE_ORDER environment variable to PCI_BUS_ID (before setting CUDA_VISIBLE_DEVICES for your CUDA program): export CUDA_DEVICE_ORDER=PCI_BUS_ID. See the sketch after this list.
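
As a sketch of the tip above, the same two environment variables can be set from inside a Python program, as long as it happens before the CUDA runtime is initialized. The torch import is only an assumed example of a CUDA-using library; any CUDA program behaves the same way.

    import os

    # Order devices by PCI bus ID so indices match gpustat / nvidia-smi.
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
    # Then pick GPUs by the indices gpustat shows, e.g. GPU 0 only.
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"

    import torch  # assumption: PyTorch as an example CUDA-using library

    print(torch.cuda.get_device_name(0))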

Quick Installation

Install from PyPI:

pip install gpustat

If you don't have root privileges, install into the user site-packages instead: pip install --user gpustat.

To install the latest version (master branch) via pip:

pip install git+https://github.com/wookayin/gpustat.git@master

Note that starting from v1.0, gpustat will support only Python 3.4+. For older versions (python 2.7, <3.4), you can continue using gpustat v0.x.

Default display

[0] GeForce GTX Titan X | 77'C, 96 % | 11848 / 12287 MB | python/52046(11821M)

  • [0]: GPU index (starting from 0), ordered by PCI_BUS_ID
  • GeForce GTX Titan X: GPU name
  • 77'C: Temperature
  • 96 %: Utilization
  • 11848 / 12287 MB: GPU Memory Usage
  • python/...: Running processes on the GPU and their memory usage (see the sketch below for reading the same fields programmatically)
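
The same fields can also be read programmatically. The sketch below is an illustration rather than official documentation: GPUStatCollection.new_query() is the entry point that also appears in the tracebacks further down, but the per-GPU attribute names used here (index, name, temperature, utilization, memory_used, memory_total) are assumptions that may differ between versions.

    from gpustat import GPUStatCollection

    # Query once and iterate over the per-GPU entries.
    stats = GPUStatCollection.new_query()
    for gpu in stats:
        # Attribute names are assumptions; check the GPUStat class if they differ.
        print(f"[{gpu.index}] {gpu.name} | {gpu.temperature}'C, "
              f"{gpu.utilization} % | {gpu.memory_used} / {gpu.memory_total} MB")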

Changelog

See CHANGELOG.md

License

MIT License

Comments
  • Error on calling nvidia-smi: Command 'ps ...' returned non-zero exit status 1


    I got the above error message when I run gpustat, but nvidia-smi works on my machine. Some details: OS: Ubuntu 14.04.5 LTS, Python version: Anaconda Python 3.6.

    Error on calling nvidia-smi. Use --debug flag for details
    Traceback (most recent call last):
      File "/usr/local/bin/gpustat", line 417, in print_gpustat                                                      gpu_stats = GPUStatCollection.new_query()
      File "/usr/local/bin/gpustat", line 245, in new_query
        return GPUStatCollection(gpu_list)
      File "/usr/local/bin/gpustat", line 218, in __init__
        self.update_process_information()
      File "/usr/local/bin/gpustat", line 316, in update_process_information
        processes = self.running_processes()
      File "/usr/local/bin/gpustat", line 275, in running_processes
        ','.join(map(str, pid_map.keys()))
      File "/usr/local/bin/gpustat", line 46, in execute_process
        stdout = check_output(command_shell, shell=True).strip()
      File "/home/xiyun/apps/anaconda3/lib/python3.6/subprocess.py", line 336, in check_output
        **kwargs).stdout
      File "/home/xiyun/apps/anaconda3/lib/python3.6/subprocess.py", line 418, in run
        output=stdout, stderr=stderr)
    subprocess.CalledProcessError: Command 'ps -o pid,user:16,comm -p1 -p 14471' returned non-zero exit status 1.
    
    

    How can I fix this?
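
    A hedged diagnostic sketch (not part of gpustat): ps exits with a non-zero status if any requested PID cannot be shown, so one possible cause is that one of the listed GPU processes had already exited. Checking each PID separately can narrow down which one fails.

    import subprocess

    # PIDs taken from the failing command above; adjust for your own case.
    for pid in (1, 14471):
        rc = subprocess.call(["ps", "-o", "pid,user:16,comm", "-p", str(pid)])
        print(f"pid {pid}: ps exit status {rc}")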

    bug 
    opened by feiwofeifeixiaowo 20
  • Failed to run `gpustat --debug`: pynvml.NVMLError_LibraryNotFound: NVML Shared Library Not Found


    Hi,

    On Ubuntu 20.04 with Python 3.8.3, I failed to run gpustat --debug, as shown below:

    $ gpustat --debug
    Error on querying NVIDIA devices. Use --debug flag for details
    Traceback (most recent call last):
      File "/home/werner/.pyenv/versions/3.8.3/envs/socks5-haproxy/lib/python3.8/site-packages/pynvml.py", line 644, in _LoadNvmlLibrary
        nvmlLib = CDLL("libnvidia-ml.so.1")
      File "/home/werner/.pyenv/versions/3.8.3/lib/python3.8/ctypes/__init__.py", line 373, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: libnvidia-ml.so.1: cannot open shared object file: No such file or directory
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/werner/.pyenv/versions/3.8.3/envs/socks5-haproxy/lib/python3.8/site-packages/gpustat/__main__.py", line 19, in print_gpustat
        gpu_stats = GPUStatCollection.new_query()
      File "/home/werner/.pyenv/versions/3.8.3/envs/socks5-haproxy/lib/python3.8/site-packages/gpustat/core.py", line 281, in new_query
        N.nvmlInit()
      File "/home/werner/.pyenv/versions/3.8.3/envs/socks5-haproxy/lib/python3.8/site-packages/pynvml.py", line 608, in nvmlInit
        _LoadNvmlLibrary()
      File "/home/werner/.pyenv/versions/3.8.3/envs/socks5-haproxy/lib/python3.8/site-packages/pynvml.py", line 646, in _LoadNvmlLibrary
        _nvmlCheckReturn(NVML_ERROR_LIBRARY_NOT_FOUND)
      File "/home/werner/.pyenv/versions/3.8.3/envs/socks5-haproxy/lib/python3.8/site-packages/pynvml.py", line 310, in _nvmlCheckReturn
        raise NVMLError(ret)
    pynvml.NVMLError_LibraryNotFound: NVML Shared Library Not Found
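
    A hedged diagnostic sketch (independent of gpustat): pynvml fails here because it cannot dlopen libnvidia-ml.so.1, which is shipped with the NVIDIA driver, so checking whether that library loads at all points the problem at the driver installation rather than at gpustat.

    import ctypes

    try:
        ctypes.CDLL("libnvidia-ml.so.1")
        print("libnvidia-ml.so.1 loaded OK")
    except OSError as exc:
        # Typically means the NVIDIA driver is missing or not on the loader path.
        print("cannot load libnvidia-ml.so.1:", exc)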
    
    
    question documentation 
    opened by hongyi-zhao 18
  • Add full process info.


    Fixes #50

    I added -f, --show-full-cmd, which shows the full process info as discussed in #50.

    Right now it shows the percent of CPU usage and the percent of system memory in use, but that can be changed.

    Let me know what you think.

    Example:

    server1  Wed Jun 26 15:24:33 2019  418.67
    [0] Tesla V100-SXM2-16GB | 34'C,  23 % |  1097 / 16130 MB | user1(1087M)
     └─ 72041 ( 80%,  0.24%): python /mnt/home/user1/git/horovod/examples/keras_mnist.py
    [1] Tesla V100-SXM2-16GB | 36'C,  23 % |  1097 / 16130 MB | user1(1087M)
     └─ 72042 ( 80%,  0.24%): python /mnt/home/user1/git/horovod/examples/keras_mnist.py
    [2] Tesla V100-SXM2-16GB | 35'C, 100 % |  2130 / 16130 MB | user2(777M) user1(1343M)
     ├─ 95638 (100%,  0.16%): /mnt/home/user2/anaconda3/envs/env/bin/python test_c10d.py
     └─ 72043 (100%,  0.24%): python /mnt/home/user1/git/horovod/examples/keras_mnist.py
    [3] Tesla V100-SXM2-16GB | 34'C,  22 % |  1097 / 16130 MB | user1(1087M)
     └─ 72044 ( 40%,  0.24%): python /mnt/home/user1/git/horovod/examples/keras_mnist.py
    
    new feature 
    opened by bethune-bryant 17
  • Extra character in watch colour mode on Ubuntu 17.10


    When I use the command watch --color -n1.0 gpustat --color I get a lot of extra ^ characters: https://imgur.com/a/A9Fxc

    This problem doesn't occur without watch. I'm on Ubuntu 17.10 with Wayland.

    bug 
    opened by Rizhiy 15
  • No such file or directory: '/proc/30094/stat'


    I reinstalled gpustat via pip, but it still raises an error when I run gpustat:

    root$ gpustat --debug
    Error on querying NVIDIA devices. Use --debug flag for details
    Traceback (most recent call last):
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/gpustat/__main__.py", line 19, in print_gpustat
        gpu_stats = GPUStatCollection.new_query()
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/gpustat/core.py", line 396, in new_query
        gpu_info = get_gpu_info(handle)
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/gpustat/core.py", line 365, in get_gpu_info
        process = get_process_info(nv_process)
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/gpustat/core.py", line 294, in get_process_info
        ps_process = psutil.Process(pid=nv_process.pid)
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/__init__.py", line 339, in __init__
        self._init(pid)
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/__init__.py", line 366, in _init
        self.create_time()
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/__init__.py", line 697, in create_time
        self._create_time = self._proc.create_time()
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/_pslinux.py", line 1459, in wrapper
        return fun(self, *args, **kwargs)
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/_pslinux.py", line 1641, in create_time
        values = self._parse_stat_file()
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/_common.py", line 340, in wrapper
        return fun(self)
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/_pslinux.py", line 1498, in _parse_stat_file
        with open_binary("%s/%s/stat" % (self._procfs_path, self.pid)) as f:
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/_pslinux.py", line 205, in open_binary
        return open(fname, "rb", **kwargs)
    FileNotFoundError: [Errno 2] No such file or directory: '/proc/30094/stat'
    

    So, what's wrong with my GPU? Please help me.
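
    A hedged sketch of what likely happened: the GPU process can exit between NVML listing its PID and psutil reading /proc/<pid>/stat, in which case psutil raises exactly this kind of error. A guarded lookup like the one below avoids the crash (the v1.0 release notes below mention a related fix for zombie processes, #95).

    import psutil

    def safe_process_name(pid: int):
        """Return the process name, or None if the process is already gone."""
        try:
            return psutil.Process(pid).name()
        except (psutil.NoSuchProcess, FileNotFoundError):
            return None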

    bug 
    opened by zhudd-hub 14
  • Use NVIDIA's official pynvml binding


    Since 2021, NVIDIA has provided an official Python binding, pynvml (https://pypi.org/project/nvidia-ml-py/#history), which should replace the third-party community fork nvidia-ml-py3 that we have been using.

    The main motivations are (1) to use an official library and (2) to add MIG support. See #102 for more details.

    Need to test whether:

    • The new pynvml API works well on old & recent NVIDIA Drivers; maybe some monkey patching needed (see https://github.com/wookayin/gpustat/issues/102#issuecomment-892833816)
    • The new pynvml API works well on Windows (see #90)

    /cc @XuehaiPan @Stonesjtu

    Important Changes

    • The official Python bindings nvidia-ml-py need to be installed, not nvidia-ml-py3. If the legacy package is installed for some reason, an error will occur:

      ImportError: pynvml is missing or an outdated version is installed. 
      
    • To fix this error, please uninstall nvidia-ml-py3 and install nvidia-ml-py<=11.495.46 (follow the instructions in the error message), or you can bypass the validation if you really want.

    • For compatibility reasons, the NVIDIA driver version needs to be 450.66 or higher; see the sketch below for a quick way to check.
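
    A hedged sketch of checking both requirements from Python; nvmlInit and nvmlSystemGetDriverVersion are standard pynvml calls, but treat this as an illustration rather than part of gpustat.

    import pynvml  # should be provided by nvidia-ml-py, not nvidia-ml-py3

    pynvml.nvmlInit()
    driver = pynvml.nvmlSystemGetDriverVersion()
    if isinstance(driver, bytes):  # some pynvml versions return bytes
        driver = driver.decode()
    print("NVIDIA driver:", driver)  # should be 450.66 or higher
    pynvml.nvmlShutdown()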

    pynvml 
    opened by wookayin 13
  • pynvml not support lookup process info


    When I call nvmlDeviceGetGraphicsRunningProcesses, the exception below is raised.

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    ~/gitProject/venv/siren/lib64/python3.6/site-packages/pynvml/nvml.py in _nvmlGetFunctionPointer(name)
        759         try:
    --> 760             _nvmlGetFunctionPointer_cache[name] = getattr(nvmlLib, name)
        761             return _nvmlGetFunctionPointer_cache[name]
    
    /usr/lib64/python3.6/ctypes/__init__.py in __getattr__(self, name)
        355             raise AttributeError(name)
    --> 356         func = self.__getitem__(name)
        357         setattr(self, name, func)
    
    /usr/lib64/python3.6/ctypes/__init__.py in __getitem__(self, name_or_ordinal)
        360     def __getitem__(self, name_or_ordinal):
    --> 361         func = self._FuncPtr((name_or_ordinal, self))
        362         if not isinstance(name_or_ordinal, int):
    
    AttributeError: /lib64/libnvidia-ml.so.1: undefined symbol: nvmlDeviceGetGraphicsRunningProcesses_v2
    
    During handling of the above exception, another exception occurred:
    
    NVMLError_FunctionNotFound                Traceback (most recent call last)
    <ipython-input-5-6d9d0902fdc2> in <module>
    ----> 1 nvmlDeviceGetGraphicsRunningProcesses(handle)
    
    ~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in nvmlDeviceGetGraphicsRunningProcesses(handle)
       2179
       2180 def nvmlDeviceGetGraphicsRunningProcesses(handle):
    -> 2181     return nvmlDeviceGetGraphicsRunningProcesses_v2(handle)
       2182
       2183 def nvmlDeviceGetAutoBoostedClocksEnabled(handle):
    
    ~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in nvmlDeviceGetGraphicsRunningProcesses_v2(handle)
       2147     # first call to get the size
       2148     c_count = c_uint(0)
    AttributeError                            Traceback (most recent call last)
    ~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in _nvmlGetFunctionPointer(name)
        759         try:
    --> 760             _nvmlGetFunctionPointer_cache[name] = getattr(nvmlLib, name)
        761             return _nvmlGetFunctionPointer_cache[name]
    
    /usr/lib64/python3.6/ctypes/__init__.py in __getattr__(self, name)
        355             raise AttributeError(name)
    --> 356         func = self.__getitem__(name)
        357         setattr(self, name, func)
    
    /usr/lib64/python3.6/ctypes/__init__.py in __getitem__(self, name_or_ordinal)
        360     def __getitem__(self, name_or_ordinal):
    --> 361         func = self._FuncPtr((name_or_ordinal, self))
        362         if not isinstance(name_or_ordinal, int):
    
    AttributeError: /lib64/libnvidia-ml.so.1: undefined symbol: nvmlDeviceGetGraphicsRunningProcesses_v2
    
    During handling of the above exception, another exception occurred:
    
    NVMLError_FunctionNotFound                Traceback (most recent call last)
    <ipython-input-6-85e61951ad1d> in <module>
    ----> 1 nvmlDeviceGetGraphicsRunningProcesses_v2(handle)
    
    ~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in nvmlDeviceGetGraphicsRunningProcesses_v2(handle)
       2147     # first call to get the size
       2148     c_count = c_uint(0)
    -> 2149     fn = _nvmlGetFunctionPointer("nvmlDeviceGetGraphicsRunningProcesses_v2")
       2150     ret = fn(handle, byref(c_count), None)
       2151
    
    ~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in _nvmlGetFunctionPointer(name)
        761             return _nvmlGetFunctionPointer_cache[name]
        762         except AttributeError:
    --> 763             raise NVMLError(NVML_ERROR_FUNCTION_NOT_FOUND)
        764     finally:
        765         # lock is always freed
    
    NVMLError_FunctionNotFound: Function Not Found
    

    So I guess maybe pynvml changed something that leads to this problem; see #72.
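
    A hedged diagnostic sketch: newer pynvml releases call the _v2 variant of this NVML function, so checking whether the driver's libnvidia-ml.so.1 exports that symbol tells you whether the driver is simply too old for the installed pynvml (see #72).

    import ctypes

    lib = ctypes.CDLL("libnvidia-ml.so.1")
    # hasattr triggers the same symbol lookup that fails in the traceback above.
    print(hasattr(lib, "nvmlDeviceGetGraphicsRunningProcesses_v2"))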

    opened by hstk30 12
  • nvidia-smi is not recognized as an internal or external command: with 0.3.x versions on windows


    C:\>gpustat -cp
    'nvidia-smi' is not recognized as an internal or external command,
    operable program or batch file.
    Error on calling nvidia-smi

    C:\>nvidia-smi --query-gpu=index,uuid,name,temperature.gpu,utilization.gpu,memory.used,memory.total --format=csv,noheader,nounits
    0, GPU-9d01c9ef-1d73-7774-8b4f-5bee4b3bf644, GeForce GTX 1080 Ti, 28, 65, 9219, 11264
    1, GPU-9da3de3f-cdf2-8ca9-504d-fd9bc414a78e, GeForce GTX 1080 Ti, 22, 0, 140, 11264

    Any idea what might be the issue? Windows 10, Python 3.7.2, latest nvidia drivers, etc, as of the time of this post.

    duplicate 
    opened by gotonickpappas 12
  • Not supported?


    I tried it using

    gpustat -cpFP --watch

    I use Debian 11.

    Here is my result:

    (screenshot)

    When I run something on the GPU, it shows me only the amount of memory used:

    (screenshot)

    Any hints? Am I missing something?

    Thanks.

    invalid question 
    opened by git2013vb 11
  • Please make a new release


    Hi!

    I'm tracking gpustat as a soft dependency of ray-project, and because the currently released version (0.6.0) is not installable on Windows, I'm having a hard time enabling this dependency in conda-forge.

    Since the compatibility issue has been fixed in master for quite some time, it would be really good if you could make a new release.

    References:

    • PR to conda-forge feedstock of gpustat: https://github.com/conda-forge/gpustat-feedstock/pull/2
    • PR to add ray-project to conda-forge (where gpustat is needed): https://github.com/conda-forge/staged-recipes/pull/11160
    opened by vnlitvinov 11
  • Add support for enc/dec gpu utilization (#79)


    This PR only adds encoder and decoder utilization to --json from the cmdline, or to the GPUStat object if gpustat is used as a library.

    The information is also exposed in the standard command-line output via the -e or --show-codec flag.

    See issue #79

    new feature 
    opened by ChaoticMind 11
  • Some low-level errors (like `pynvml.nvml.NVMLError_LibRmVersionMismatch`) result in nothing printed (std or diagnostic)


    Describe the bug

    Something caused a version mismatch somewhere and I can no longer use gpustat. Nothing at all is printed on stdout or stderr. Running with --debug prints nothing as well. I launched it as python -m pdb -m gpustat and stepped through until noticing an error raised in:

    /opt/conda/lib/python3.8/site-packages/pynvml/nvml.py(718)
    

    of type pynvml.nvml.NVMLError_LibRmVersionMismatch.


    Environment information:

    • OS: Ubuntu 20.04
    • NVIDIA Driver version: 510.73.08
    • The name(s) of GPU card: Tesla V100-SXM2
    • gpustat version: 1.0.0
    • pynvml version: 11.495.46


    bug pynvml waiting for response 
    opened by munael 1
  • Add noexcept functions `gpu_count` and `is_available`


    As discussed in https://github.com/wookayin/gpustat/issues/142#issuecomment-1336066463, this adds new noexcept functions gpu_count and is_available; see the usage sketch below.

    Closes #142
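
    A hedged usage sketch, assuming the functions land under the gpustat package with the names given in this PR (gpu_count, is_available):

    import gpustat

    if gpustat.is_available():  # intended never to raise, even without a driver
        print("GPUs detected:", gpustat.gpu_count())
    else:
        print("No usable NVIDIA GPU / driver found")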

    opened by XuehaiPan 0
  • On windows, error is raised if nvidia query cannot find a card


    Describe the bug

    python -m gpustat errors if the user is not an admin, or if NVIDIA drivers are installed but no NVIDIA card is available.

    Screenshots or Program Output


    As a regular user:

    >python -m gpustat --debug
    Error on querying NVIDIA devices. Use --debug flag for details
    Traceback (most recent call last):
      File "d:\temp\ray_venv\lib\site-packages\gpustat\cli.py", line 20, in print_gpustat
        gpu_stats = GPUStatCollection.new_query(debug=debug)
      File "d:\temp\ray_venv\lib\site-packages\gpustat\core.py", line 362, in new_query
        N.nvmlInit()
      File "d:\temp\ray_venv\lib\site-packages\pynvml.py", line 1450, in nvmlInit
        nvmlInitWithFlags(0)
      File "d:\temp\ray_venv\lib\site-packages\pynvml.py", line 1440, in nvmlInitWithFlags
        _nvmlCheckReturn(ret)
      File "d:\temp\ray_venv\lib\site-packages\pynvml.py", line 765, in _nvmlCheckReturn
        raise NVMLError(ret)
    pynvml.NVMLError_NoPermission: Insufficient Permissions
    

    As an admin:

    Error on querying NVIDIA devices. Use --debug flag for details
    Traceback (most recent call last):
      File "d:\temp\ray_venv\lib\site-packages\gpustat\cli.py", line 18, in print_gpustat
        gpu_stats = GPUStatCollection.new_query(debug=debug)
      File "d:\temp\ray_venv\lib\site-packages\gpustat\core.py", line 370, in new_query
        N.nvmlInit()
      File "d:\temp\ray_venv\lib\site-packages\pynvml.py", line 1450, in nvmlInit
        nvmlInitWithFlags(0)
      File "d:\temp\ray_venv\lib\site-packages\pynvml.py", line 1440, in nvmlInitWithFlags
        _nvmlCheckReturn(ret)
      File "d:\temp\ray_venv\lib\site-packages\pynvml.py", line 765, in _nvmlCheckReturn
        raise NVMLError(ret)
    pynvml.NVMLError_DriverNotLoaded: Driver Not Loaded
    

    As a regular user

    >nvidia-smi
    NVIDIA-SMI has failed because you are not:
            a) running as an administrator or
            b) there is not at least one TCC device in the system
    

    As an admin

    >nvidia-smi
    NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running. This can also be happening if non-NVIDIA GPU is running as primary display, and NVIDIA GPU is in WDDM mode.
    

    Environment information:

    • OS: windows10
    • NVIDIA Driver version: 11.7 (Edit: changed from 11.3 to 11.7)
    • The name(s) of GPU card: None
    • gpustat version: 1.1.0
    • pynvml version: nvidia-ml-py 11.495.46


    enhancement 
    opened by mattip 8
  • Option to display a bar besides the number to indicate memory usage


    Hi,

    Thanks for the program, very useful. I was thinking a nice additional option would be to show the 'fullness' of the GPU RAM with a bar rather than just a number. The use case is quickly identifying (almost) empty GPUs when sharing a bunch of GPUs with labmates: some of them are full, some have a lot of space left, and a bar could show this more directly (see the attached screenshots for roughly what I mean).
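
    As an illustration only (not gpustat code), a bar like the one requested can be rendered from the memory_used / memory_total numbers gpustat already shows:

    def memory_bar(used_mb: int, total_mb: int, width: int = 20) -> str:
        """Render a textual usage bar, e.g. '[#######------] used/total MB'."""
        filled = int(round(width * used_mb / total_mb)) if total_mb else 0
        return "[" + "#" * filled + "-" * (width - filled) + f"] {used_mb}/{total_mb} MB"

    print(memory_bar(11848, 12287))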

    new feature 
    opened by Natithan 2
  • Display thermal throttles


    nvidia-smi lists Max Clocks and current Clocks. It would be nice to be able to see these, and maybe display them as a percentage so you know how much you're being throttled; see the sketch after the example.

    example:

        Clocks
            Graphics                          : 2025 MHz
            SM                                : 2025 MHz
            Memory                            : 10251 MHz
            Video                             : 1785 MHz
        Max Clocks
            Graphics                          : 2100 MHz
            SM                                : 2100 MHz
            Memory                            : 10501 MHz
            Video                             : 1950 MHz
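
    A hedged sketch (not gpustat code) of querying current vs. max clocks via pynvml and reporting them as a percentage, as proposed above; all calls used here are standard pynvml functions.

    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    clock_types = [("Graphics", pynvml.NVML_CLOCK_GRAPHICS),
                   ("SM", pynvml.NVML_CLOCK_SM),
                   ("Memory", pynvml.NVML_CLOCK_MEM),
                   ("Video", pynvml.NVML_CLOCK_VIDEO)]
    for label, clock_type in clock_types:
        current = pynvml.nvmlDeviceGetClockInfo(handle, clock_type)
        maximum = pynvml.nvmlDeviceGetMaxClockInfo(handle, clock_type)
        print(f"{label}: {current} / {maximum} MHz ({100 * current / maximum:.0f}%)")
    pynvml.nvmlShutdown()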
    
    new feature 
    opened by JohnCoates 0
Releases
  • v1.0(Sep 4, 2022)

    Adds Windows support, retires Python 2.x, switches to the official NVML bindings, etc. GitHub milestone: https://github.com/wookayin/gpustat/issues?q=milestone%3A1.0

    Breaking Changes

    • Retire Python 2 (#66). Add CI tests for python 3.8 and higher.
    • Use official nvidia python bindings (#107).
      • Due to API incompatibility issues, the nvidia driver version should be R450 or higher in order for process information to be correctly displayed.
      • NOTE: nvidia-ml-py<=11.495.46 is required (nvidia-ml-py3 shall not be used).
    • Use of '--gpuname-width' will truncate longer GPU names (#47).

    New Feature and Enhancements

    • Add Windows support again, by switching to blessed (#78, @skjerns)
    • Add '--show-codec (-e)' option: display encoder/decoder utilization (#79, @ChaoticMind)
    • Add full process information (-f) (#65, @bethune-bryant)
    • Add '--show-all (-a)' flag (#64, @Michaelvll)
    • '--debug' will show more detailed stacktrace/exception information
    • Use unicode symbols (#58, @arinbjornk)
    • Include nvidia driver version into JSON output (#10)

    Bug Fixes

    • Fix color/highlight issues on power usage
    • Make color/highlight work correctly when TERM is not set
    • Do not list the same GPU process more than once (#84)
    • Fix a bug where querying zombie process can throw errors (#95)
    • Fix a bug where psutil may fail to get process info on Windows (#121, #123, @mattip)

    Etc.

    • Internal improvements on code style and tests
    • CI: Use Github Actions
  • v1.0.0rc1(Jul 5, 2022)

    • [Breaking changes] Retire Python 2 (#66). Add CI tests for python 3.8.
    • [Breaking changes] Backward-incompatible changes on JSON fields (#10)
    • [Breaking changes] Use official nvidia python bindings (#107).
      • Due to API incompatibility issues, the nvidia driver version should be R450 or higher in order for process information to be correctly displayed.
    • [New Feature] Add '--show-codec (-e)' option: display encoder/decoder utilization (#79)
    • [Enhancement] Re-add windows support, by switching to blessed (#78, @skjerns)
    • [Enhancement] Use unicode symbols (#58, @arinbjornk)
    • [Enhancement] Add full process information (-f) (#65, @bethune-bryant)
    • [Enhancement] Add '--show-all (-a)' flag (#64)
    • [Enhancement] '--debug' will show more stacktrace/exception information
    • [Bugfix] Fix color/highlight issues on power usage
    • [Bugfix] Make color/highlight work correctly when TERM is not set
    • [Bugfix] Do not list the same GPU process more than once (#84)
    • [Bugfix] Fix a bug where querying zombie process can throw errors (#95)
    • [Bugfix] Fix a bug where psutil may fail to get process info on Windows (#121, #123, @mattip)
    • [Etc] Internal improvements on code style and tests
    • [Etc] CI: Use Github Actions
  • v0.6.0(Jul 22, 2019)


    • [Feature] Add a flag for fan speed (-F, --show-fan) (#62, #63), contributed by @bethune-bryant
    • [Enhancement] Align query datetime in the header with respect to --gpuname-width parameter.
    • [Enhancement] Alias gpustat --watch to -i/--interval option.
    • [Enhancement] Display NVIDIA driver version in the header (#53)
    • [Bugfix] Minor fixes on debug mode
    • [Etc] Travis: python 3.7

    Note: This will be the last version that supports python 2.7 and <3.4.

  • v0.5.0(Sep 10, 2018)

    Changelog

    • [Feature] Built-in watch mode (gpustat -i) (#7, #41).
      • Contributed by @drons and @Stonesjtu, Thanks!
    • [Bug] Fix the problem where an extra character was showing (#32)
    • [Bug] Fix a bug in json mode where process information is unavailable (#45)
    • [Etc.] Refactoring of internal code structure: gpustat is now a package (#33)
    • [Etc.] More unit tests and better use of code styles (flake8)

    See also: Milestone 0.5

  • v0.4.1(Dec 2, 2017)

  • v0.4.0(Nov 2, 2017)

    Changelog

    gpustat is no longer a zero-dependency script and now depends on some packages. Please install it using pip.

    • Use nvidia-ml-py bindings and psutil to replace command-line call of nvidia-smi and ps (#20, Thanks to @Stonesjtu).
    • The behavior when piping output is changed: it will not be colored by default; use --color explicitly (e.g. watch --color -n1.0 gpustat --color)
    • Fix a bug in handling stale-state or zombie process (#16)
    • Include non-CUDA graphics applications in the process list (#18, Thanks to @kapsh)
    • Support power usage (#13, #28, Thanks to @cjw85)
    • Support --debug option
  • v0.3.2(Sep 17, 2017)

  • v0.3.1(Apr 10, 2017)

    Minor update. CHANGELOG:

    • Experimental JSON output feature (#10)
    • Add some properties and dict-style access for GPUStat class
    • Fix Python3 compatibility
  • v0.2(Nov 19, 2016)

Owner
Jongwook Choi
Researcher & Developer & Productivity Geek. PhD Student at @umich.