Library for 8-bit optimizers and quantization routines.

Overview

bitsandbytes

Bitsandbytes is a lightweight wrapper around custom CUDA functions, in particular 8-bit optimizers and quantization functions.

Paper -- Video -- Docs

TL;DR

Installation:

  1. Note down version: conda list | grep cudatoolkit
  2. Replace 111 with the version that you see: pip install bitsandbytes-cuda111

Usage:

  1. Comment out optimizer: #torch.optim.Adam(....)
  2. Add 8-bit optimizer of your choice bnb.optim.Adam8bit(....) (arguments stay the same)
  3. Replace embedding layer if necessary: torch.nn.Embedding(..) -> bnb.nn.Embedding(..)
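A compact sketch of these three steps (MyModel stands in for your own model; the sections below show the same changes in more detail):

import torch
import bitsandbytes as bnb

model = MyModel().cuda()                        # placeholder for your model
# adam = torch.optim.Adam(model.parameters())   # 1. old optimizer, commented out
adam = bnb.optim.Adam8bit(model.parameters())   # 2. 8-bit optimizer, same arguments
# 3. inside the model, swap torch.nn.Embedding(...) for bnb.nn.Embedding(...) if needed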

Features

  • 8-bit Optimizers: Adam, AdamW, RMSProp, LARS, LAMB (saves 75% memory)
  • Stable Embedding Layer: Improved stability through better initialization and normalization
  • 8-bit quantization: Quantile, Linear, and Dynamic quantization
  • Fast quantile estimation: Up to 100x faster than other algorithms

Requirements & Installation

Requirements: anaconda, cudatoolkit, pytorch
Hardware requirements: NVIDIA Maxwell GPU or newer (>= GTX 9XX)
Supported CUDA versions: 9.2 - 11.3

The requirements can best be fulfilled by installing pytorch via anaconda. You can install PyTorch by following the "Get Started" instructions on the official website.

bitsandbytes is compatible with all major PyTorch releases and cudatoolkit versions, but for now you need to select the right version manually. To do this, run:

conda list | grep cudatoolkit

and take note of the CUDA version you have installed. Then you can install bitsandbytes via:

# choices: {cuda92, cuda100, cuda101, cuda102, cuda110, cuda111, cuda113}
# replace XXX with the respective number
pip install bitsandbytes-cudaXXX

To check if your installation was successful, you can execute the following command, which runs a single bnb Adam update.

wget https://gist.githubusercontent.com/TimDettmers/1f5188c6ee6ed69d211b7fe4e381e713/raw/4d17c3d09ccdb57e9ab7eca0171f2ace6e4d2858/check_bnb_install.py && python check_bnb_install.py
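For reference, a minimal stand-in for such a check (the actual gist may differ in its details) is a single 8-bit Adam step on a CUDA parameter:

import torch
import bitsandbytes as bnb

p = torch.nn.Parameter(torch.rand(10, 10, device='cuda'))
adam = bnb.optim.Adam8bit([p], lr=1e-3)

loss = (p ** 2).sum()  # dummy loss
loss.backward()
adam.step()            # succeeds only if the CUDA kernels were loaded correctly
print('bnb 8-bit Adam update ran successfully')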

Using bitsandbytes

Using the 8-bit Optimizers

With bitsandbytes, 8-bit optimizers can be used by changing a single line of code in your codebase. For NLP models we also recommend using the StableEmbedding layer (see below), which improves results and helps with stable 8-bit optimization. To get started with 8-bit optimizers, it is sufficient to replace your old optimizer with the 8-bit optimizer in the following way:

import bitsandbytes as bnb

# adam = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.995)) # comment out old optimizer
adam = bnb.optim.Adam8bit(model.parameters(), lr=0.001, betas=(0.9, 0.995)) # add bnb optimizer
adam = bnb.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.995), optim_bits=8) # equivalent


torch.nn.Embedding(...) ->  bnb.nn.StableEmbedding(...) # recommended for NLP models

Note that by default all parameter tensors with fewer than 4096 elements are kept at 32-bit even if you initialize those parameters with 8-bit optimizers. This is done because such small tensors do not save much memory and often contain highly variable parameters (biases) or parameters that require high precision (batch norm, layer norm). You can change this behavior like so:

# parameter tensors with fewer than 16384 values are optimized in 32-bit
# it is recommended to use multiples of 4096
adam = bnb.optim.Adam8bit(model.parameters(), min_8bit_size=16384) 

Change Bits and other Hyperparameters for Individual Parameters

If you want to optimize some unstable parameters with 32-bit Adam and others with 8-bit Adam, you can use the GlobalOptimManager. With it, we can also configure specific hyperparameters for particular layers, such as embedding layers. To do that, we need two things: (1) register the parameters while they are still on the CPU, and (2) override the config with the new desired hyperparameters (anytime, anywhere). See our guide for more details; a sketch of the workflow follows below.
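A sketch of that workflow, following the guide (MyModel and the layer names fc1 and emb are placeholders):

import torch
import bitsandbytes as bnb

mng = bnb.optim.GlobalOptimManager.get_instance()

model = MyModel()
mng.register_parameters(model.parameters())  # (1) register while the params are still on the CPU

model = model.cuda()
adam = bnb.optim.Adam(model.parameters(), lr=0.001, optim_bits=8)  # 8-bit states by default

# (2) override the config: keep one unstable layer in 32-bit
mng.override_config(model.fc1.weight, 'optim_bits', 32)
# or override several hyperparameters for a particular layer at once
mng.override_config([model.emb.weight], key_value_dict={'optim_bits': 32, 'lr': 1e-4})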

Fairseq Users

To use the Stable Embedding Layer, override the respective build_embedding(...) function of your model. Make sure to also use the --no-scale-embedding flag to disable scaling of the word embedding layer (this scaling is not replaced by the layer norm). You can use the optimizers by replacing the optimizer in the respective file (adam.py etc.); a sketch of the embedding override follows.
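A hedged sketch of that override, assuming fairseq's TransformerModel and that the subclass is registered with fairseq in the usual way (the class name is a placeholder):

from fairseq.models.transformer import TransformerModel
import bitsandbytes as bnb

class StableEmbeddingTransformer(TransformerModel):  # placeholder name
    @classmethod
    def build_embedding(cls, args, dictionary, embed_dim, path=None):
        # swap the default nn.Embedding for the bnb StableEmbedding
        return bnb.nn.StableEmbedding(len(dictionary), embed_dim, padding_idx=dictionary.pad())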

Release and Feature History

For upcoming features and changes, as well as the full history, see the Patch Notes.

Errors

  1. RuntimeError: CUDA error: no kernel image is available for execution on the device. Solution

License

The majority of bitsandbytes is licensed under MIT; however, portions of the project are available under separate license terms: PyTorch is licensed under the BSD license.

We thank Fabio Cannizzo for his work on FastBinarySearch which we use for CPU quantization.

Citation

If you found this library and its 8-bit optimizers or quantization routines useful, please consider citing our work.

@misc{dettmers2021optim8bit,
      title={8-bit Optimizers via Block-wise Quantization},
      author={Tim Dettmers and Mike Lewis and Sam Shleifer and Luke Zettlemoyer},
      year={2021},
      eprint={2110.02861},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
Comments
  • python setup.py install error

    python setup.py install error

    (bitsandbytes) chenxin@chenxin-Nitro-AN515-52:~/disk1/github/bitsandbytes$ python setup.py install
    Traceback (most recent call last):
      File "setup.py", line 15, in <module>
        name = f"bitsandbytes-cuda{os.environ['CUDA_VERSION']}",
      File "/home/chenxin/disk1/anaconda3/envs/bitsandbytes/lib/python3.8/os.py", line 675, in __getitem__
        raise KeyError(key) from None
    KeyError: 'CUDA_VERSION'

    (bitsandbytes) chenxin@chenxin-Nitro-AN515-52:~/disk1/github/bitsandbytes$ conda list | grep cudatoolkit
    cudatoolkit               11.1.1               h6406543_8    conda-forge

    documentation enhancement 
    opened by mathpopo 10
  • Did you ever try MNMT systems?

    Did you ever try MNMT systems?

    As reported in the paper, for training a bi-directional transformer model on WMT14 or WMT16 the performance of 8-bit Adam stays relatively consistent with the 32-bit counterparts. I was also able to verify this on other data sources for training bi-directional models with my own setup.

    However, I've also tried multiple variations of 8-bit optimizers on multilingual neural machine translation (MNMT) models in fairseq and there it seems that even with --no-scale-embedding as well as the StableEmbedding the performance is roughly 3 BLEU behind the counterparts. The --no-scale-embedding flag amounts to roughly 7 BLEU gain, while the xavier init amounts to roughly 0.4 BLEU gain. Didn't look into the effect of the layer norm of the stable embeddings yet.

    Did you do any testing on that and have practical tips on getting the performance up?

    bug question 
    opened by SirRob1997 9
  • undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37

    undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37

    (torch1.8-py3.8) jiaofangkai@dell-PowerEdge-T640:/home/share/jiaofangkai$ python check_bnb_install.py
    Traceback (most recent call last):
      File "check_bnb_install.py", line 1, in <module>
        import bitsandbytes as bnb
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/__init__.py", line 5, in <module>
        from .optim import adam
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/optim/__init__.py", line 5, in <module>
        from .adam import Adam, Adam8bit, Adam32bit
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/optim/adam.py", line 6, in <module>
        from bitsandbytes.optim.optimizer import Optimizer2State
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/optim/optimizer.py", line 6, in <module>
        import bitsandbytes.functional as F
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/functional.py", line 13, in <module>
        lib = ct.cdll.LoadLibrary(os.path.dirname(__file__) + '/libbitsandbytes.so')
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/ctypes/__init__.py", line 459, in LoadLibrary
        return self._dlltype(name)
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/ctypes/__init__.py", line 381, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: /home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes.so: undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37
    

    Hi, I have encountered a similar issue to #5. I have tested with a Tesla T4 and an RTX 2080 Ti, but both failed.

    The environments are as follows:

    # TeslaT4
    Ubuntu 18.04.6, Tesla T4, cuda-10.1, driver version: 418.197.02, python=3.8, torch=1.8.1+cu101
    
    # RTX 2080Ti
    Ubuntu 20.04.3, RTX 2080Ti, cuda-10.1, driver version: 435.21, python=3.8, torch=1.8.1+cu101
    
    bug 
    opened by SparkJiao 6
  • Support for Tesla Architecture

    Support for Tesla Architecture

    First of all, great work!

    Secondly, I can see that you specify that Maxwell Architecture is necessary, and I am wondering if

    1. it's possible to do 8-bit optimization on Tesla Architecture
    2. there are plans to implement it

    I ask because Kaggle and Colab notebooks use Tesla architectures (P100, K80), and I'm sure those communities, myself included, would be interested in using bitsandbytes.

    enhancement 
    opened by nbroad1881 5
  • import bitsandbytes as bnb error

    import bitsandbytes as bnb error

    import bitsandbytes as bnb raises the following error: OSError: /home/anaconda3/envs/ner/lib/python3.6/site-packages/bitsandbytes/libbitsandbytes.so: undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37

    Hello, how can this be solved?

    opened by zhishui3 4
  • no difference in memory usage

    no difference in memory usage

    Hi. I am training my network with bnb.optim.Adam8bit vs torch.optim.Adam but I don't see any difference in memory consumption.

    Running on a GTX 2080Ti (single GPU or DDP), with cudatoolkit 11.1.74 and bitsandbytes-cuda111.

    Looking at nvidia-smi, I see 9.6 GB in both cases. Am I missing something here?

    opened by ofrimasad 3
  • Errors when training to the third epoch, every time.

    Errors when training to the third epoch, every time.

    THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCTensorMath.cu line=29 error=1 : invalid argument
    Traceback (most recent call last):
      File "train_pointunet.py", line 211, in <module>
        loss_seg = lossfunc_seg(outputs_seg, labels)+lossfunc_dice(outputs_seg,labels)
      File "/home/why/miniconda3/envs/3.6.8/lib/python3.6/site-packages/torch/tensor.py", line 245, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/home/why/miniconda3/envs/3.6.8/lib/python3.6/site-packages/torch/autograd/__init__.py", line 147, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    RuntimeError: cuda runtime error (1) : invalid argument at /pytorch/aten/src/THC/generic/THCTensorMath.cu:29
    

    I'm very confused because it works fine for the first several epochs.

    opened by Dootmaan 2
  • [Question] Usage of bnb.nn.Embedding with existing classes from other libraries

    [Question] Usage of bnb.nn.Embedding with existing classes from other libraries

    Replace embedding layer if necessary: torch.nn.Embedding(..) -> bnb.nn.Embedding(..)

    Does this presume that the user creates custom classes to replace (for example) huggingface transformers' GPT2DoubleHeadsModel? Or is there something like bnb.optim.GlobalOptimManager that changes the provided model instance to use bitsandbytes embeddings instead of torch ones?
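    One possible way to do the swap manually, as a hedged sketch assuming a GPT-2 style model from transformers (bnb.nn.Embedding, added in 0.26.0, keeps 32-bit states and has no layer norm, so it suits pretrained weights):

    import bitsandbytes as bnb
    from transformers import GPT2DoubleHeadsModel

    model = GPT2DoubleHeadsModel.from_pretrained('gpt2')
    old = model.get_input_embeddings()                      # torch.nn.Embedding
    new = bnb.nn.Embedding(old.num_embeddings, old.embedding_dim)
    new.weight.data.copy_(old.weight.data)                  # keep the pretrained weights
    model.set_input_embeddings(new)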

    enhancement question 
    opened by LSinev 2
  • The code uses more GPU memory with Multi-scale Vision Transformers

    The code uses more GPU memory with Multi-scale Vision Transformers

    Hi,

    Thanks for the great work! I'm currently trying to apply your code to vision transformers, specifically this code base: https://github.com/facebookresearch/SlowFast/tree/main/projects/mvit. When using torch.optim.SGD(momentum=0.9), the code consumes 9221 MiB of GPU memory during training. After changing it to bnb.optim.SGD8bit() with the same arguments, it consumes even a bit more GPU memory, 9235 MiB. Do you have any idea why this would happen? Thank you! My CUDA version is 10.2 and my torch version is 1.9.1.

    Best, Junwei

    question 
    opened by JunweiLiang 2
  • bnb.optim.AdamW

    bnb.optim.AdamW

    Hey @TimDettmers,

    Awesome library! bnb.optim.Adam saved me from having to use model parallelism :heart_eyes:

    Do you think it would be easy to also add a bnb.optim.AdamW version for https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html#torch.optim.AdamW ?

    Happy to give it a try if you think it's easily feasible :-)

    enhancement 
    opened by patrickvonplaten 2
  • undefined symbol: __fatbinwrap_38

    undefined symbol: __fatbinwrap_38

    With some CUDA versions and on some architectures this error occurs:

    Traceback (most recent call last):
      File "check_bnb_install.py", line 1, in <module>
        import bitsandbytes as bnb
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/__init__.py", line 5, in <module>
        from .optim import adam
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/optim/__init__.py", line 5, in <module>
        from .adam import Adam, Adam8bit, Adam32bit
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/optim/adam.py", line 5, in <module>
        from bitsandbytes.optim.optimizer import Optimizer2State
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/optim/optimizer.py", line 6, in <module>
        import bitsandbytes.functional as F
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/functional.py", line 13, in <module>
        lib = ct.cdll.LoadLibrary(os.path.dirname(__file__) + '/libbitsandbytes.so')
      File "/miniconda/envs/pytorch_env/lib/python3.7/ctypes/__init__.py", line 442, in LoadLibrary
        return self._dlltype(name)
      File "/miniconda/envs/pytorch_env/lib/python3.7/ctypes/__init__.py", line 364, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: /miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/libbitsandbytes.so: undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37
    

    Confirmed for CUDA 10.1 for compute capability 7.5 (V100).

    bug 
    opened by TimDettmers 2
  • 'NoneType' object has no attribute 'cdequantize_blockwise_cpu_fp32'

    'NoneType' object has no attribute 'cdequantize_blockwise_cpu_fp32'

    I am trying to train GPT-J with 8-bit weights. It works well on GPU, but when I try to use it on CPU, it gives this error:

    'NoneType' object has no attribute 'cdequantize_blockwise_cpu_fp32'

    I have used dequantize_blockwise from bitsandbytes.functional. The following is the class in which it's used:

    # assumes: import torch; import torch.nn.functional as F
    # and: from bitsandbytes.functional import dequantize_blockwise
    class DequantizeAndLinear(torch.autograd.Function):

        @staticmethod
        def forward(ctx, input: torch.Tensor, weights_quantized: torch.ByteTensor,
                    absmax: torch.FloatTensor, code: torch.FloatTensor, bias: torch.FloatTensor):
            weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
            ctx.save_for_backward(input, weights_quantized, absmax, code)
            ctx._has_bias = bias is not None
            return F.linear(input, weights_deq, bias)

        @staticmethod
        def backward(ctx, grad_output: torch.Tensor):
            assert not ctx.needs_input_grad[1] and not ctx.needs_input_grad[2] and not ctx.needs_input_grad[3]
            input, weights_quantized, absmax, code = ctx.saved_tensors
            # grad_output: [*batch, out_features]
            weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
            grad_input = grad_output @ weights_deq
            grad_bias = grad_output.flatten(0, -2).sum(dim=0) if ctx._has_bias else None
            return grad_input, None, None, None, grad_bias
    
    

    Is it possible to run it on CPU, or do I have to run it only on GPU?

    opened by HumzaSami00 0
  • Adding Code of Conduct file

    Adding Code of Conduct file

    This pull request was created automatically because we noticed your project was missing a Code of Conduct file.

    Code of Conduct files facilitate respectful and constructive communities by establishing expected behaviors for project contributors.

    This PR was crafted with love by Facebook's Open Source Team.

    CLA Signed 
    opened by facebook-github-bot 0
  • Adding Contributing file

    Adding Contributing file

    This pull request was created automatically because we noticed your project was missing a Contributing file.

    CONTRIBUTING files explain how a developer can contribute to the project - which you should actively encourage.

    This PR was crafted with love by Facebook's Open Source Team.

    CLA Signed 
    opened by facebook-github-bot 0
  • 8-bit optimizer crashes when fine-tuning gpt2-large

    8-bit optimizer crashes when fine-tuning gpt2-large

    Using the bnb.optim.Adam8bit optimizer in place of torch.optim.Adam causes a crash after a handful of batches:

    12it [00:22, 1.82s/it]Error an illegal memory access was encountered at line 198 in file /home/alyssa/gpt_math/bitsandbytes/csrc/ops.cu

    I am fine-tuning Huggingface's version of the gpt2-large model on an Ampere 3090 GPU with CUDA version 11.6 and NVIDIA driver version 510.73.05. I have tried compiling bitsandbytes on my machine from source, and the set_optim_to_run_embedding_in_fp32 trick from https://github.com/huggingface/transformers/issues/14819; neither of them affected the behavior. Running with the standard pytorch Adam optimizer works fine. nvidia-smi shows 16 GB of memory used on a GPU with 24 GB, so it shouldn't be running out of RAM or anywhere close to that.

    opened by rationalism 0
  • bfloat16 grads are not supported

    bfloat16 grads are not supported

    Are there any plans to support models/grads with the bfloat16 type? Bfloat16 has gained quite a bit of popularity lately, since every Ampere GPU supports the type and it eliminates the need for loss scaling compared to float16. This is what I get when I try to initialize bnb.AdamW with a bfloat16-cast model: ValueError: Gradient+optimizer bit data type combination not supported: grad torch.bfloat16, optimizer torch.uint8

    opened by kurumuz 0
  • Check dtype of input tensors is correct

    Check dtype of input tensors is correct

    If a 16-bit float tensor on the CPU was passed as the input to quantize_blockwise or the output buffer for dequantize_blockwise, the code was previously passing its address to the c[de]quantize_blockwise_cpu_fp32 method, silently casting it to a 32-bit float* and resulting in segfaults.

    A similar issue occurs if the absmax/code arguments to dequantize_blockwise are (somehow) 16-bit, resulting in illegal memory accesses on the GPU.
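    A hypothetical illustration of the kind of dtype guard described here (not the actual patch; the helper name is made up):

    import torch

    def _assert_cpu_fp32(name: str, tensor: torch.Tensor):
        # the *_cpu_fp32 kernels read the buffer as float*, so anything but
        # float32 on the CPU leads to garbage reads or segfaults
        if tensor.device.type == 'cpu' and tensor.dtype != torch.float32:
            raise TypeError(f'{name} must be torch.float32 on CPU, got {tensor.dtype}')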

    It took me a little while to track down the causes because of the cryptic errors, so I figured it was worth suggesting these changes. I've only been using the blockwise methods, so it's possible there are similar issues in other parts of the code - might be worth checking :)

    This PR also includes a couple unrelated typo fixes.

    Thanks for your work on this library, it's nice to squeeze the most I can out of my paltry GPU memory :)

    CLA Signed 
    opened by acarapetis 3
Releases(0.26.0)
  • 0.26.0(Nov 29, 2021)

    This release has important bug fixes for the StableEmbedding layer and introduces the new optimizers AdaGrad and AdamW. The 0.26.0 release also features a new, lightweight embedding class, bnb.nn.Embedding, which uses 32-bit optimizers but no layer norm. This layer allows for the easy use of pretrained models that do not use an embedding layer norm. Now available on pip.

    Changelog

    Features:

    • Added Adagrad (without grad clipping) as 32-bit and 8-bit block-wise optimizer.
    • Added AdamW (copy of Adam with weight decay init 1e-2). #10
    • Introduced ModuleConfig overrides which can seamlessly be used at initialization time of a module.
    • Added bnb.nn.Embedding layer which runs at 32-bit but without the layernorm. This works well if you need to fine-tune pretrained models that do not have an embedding layer norm. #19

    Bug fixes:

    • Fixed a bug where weight decay was incorrectly applied to 32-bit Adam. #13
    • Fixed an unsafe use of eval. #8
    • Fixed a bug where the StableEmbedding layer 32-bit optimizer override would not work without registering the whole model first (bnb.optim.GlobalOptimManager.get_instance().register_parameters(model.parameters())). #13 #15

    Docs:

    • Added instructions on how to solve "__fatbinwrap_" errors.
    Source code(tar.gz)
    Source code(zip)
  • 0.25.0(Oct 22, 2021)

    This release offers full support for all GPUs from Kepler (GTX 600) or newer. It introduces skip_zeros, which ensures correct optimizer updates for sparse gradients. While some pieces are still missing, some features for 8-bit optimizer research are added: AnalysisAdam allows the tracking and analysis of 8-bit vs 32-bit Adam quantization errors.

    Features:

    • Added skip_zeros for block-wise and 32-bit optimizers. This ensures correct updates for sparse gradients and sparse models.
    • Added support for Kepler GPUs. (#4)
    • Added Analysis Adam to track 8-bit vs 32-bit quantization errors over time.
    • Made compilation more user-friendly.

    Bug fixes:

    • fixed "undefined symbol: __fatbinwrap_38" error for P100 GPUs on CUDA 10.1 (#5)

    Docs:

    • Added docs with instructions to compile from source.
    Source code(tar.gz)
    Source code(zip)
Owner

Facebook Research