ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training

Overview

ActNN: Activation Compressed Training

This is the official project repository for ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training by Jianfei Chen*, Lianmin Zheng*, Zhewei Yao, Dequan Wang, Ion Stoica, Michael W. Mahoney, and Joseph E. Gonzalez.

TL;DR: ActNN is a PyTorch library for memory-efficient training. It reduces the training memory footprint by compressing the saved activations. ActNN is implemented as a collection of memory-saving layers that have an identical interface to their PyTorch counterparts.

Abstract

The increasing size of neural network models has been critical for improvements in their accuracy, but device memory is not growing at the same rate. This creates fundamental challenges for training neural networks within limited memory environments. In this work, we propose ActNN, a memory-efficient training framework that stores randomly quantized activations for back propagation. We prove the convergence of ActNN for general network architectures, and we characterize the impact of quantization on the convergence via an exact expression for the gradient variance. Using our theory, we propose novel mixed-precision quantization strategies that exploit the activation's heterogeneity across feature dimensions, samples, and layers. These techniques can be readily applied to existing dynamic graph frameworks, such as PyTorch, simply by substituting the layers. We evaluate ActNN on mainstream computer vision models for classification, detection, and segmentation tasks. On all these tasks, ActNN compresses the activation to 2 bits on average, with negligible accuracy loss. ActNN reduces the memory footprint of the activation by 12×, and it enables training with a 6.6× to 14× larger batch size.

Figure: Batch size vs. training throughput on ResNet-50. A red cross marks out-of-memory. The shaded yellow region denotes the possible batch sizes with full-precision training. ActNN achieves a significantly larger maximum batch size than other state-of-the-art systems and displays a nontrivial trade-off curve.

Install

  • Requirements
torch>=1.7.1
torchvision>=0.8.2
  • Build
cd actnn
pip install -v -e .

Usage

mem_speed_benchmark/train.py is an example of using ActNN with models from torchvision.

Basic Usage

  • Step 1: Configure the optimization level
    ActNN provides several optimization levels to control the trade-off between memory saving and computational overhead. You can set the optimization level by:
import actnn
# available choices are ["L0", "L1", "L2", "L3", "L4", "L5"]
actnn.set_optimization_level("L3")

See set_optimization_level for more details.

  • Step 2: Convert the model to use ActNN's layers.
model = actnn.QModule(model)

Note:

  1. Convert the model before calling .cuda().
  2. Set the optimization level before invoking actnn.QModule or constructing any ActNN layers.
  3. Automatic model conversion only works with standard PyTorch layers. Please use the modules (nn.Conv2d, nn.ReLU, etc.), not the functions (F.conv2d, F.relu).
  • Step 3: Print the model to confirm that all the modules (Conv2d, ReLU, BatchNorm) are correctly converted to ActNN layers.
print(model)    # Should be actnn.QConv2d, actnn.QBatchNorm2d, etc.
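
Putting the three steps together, here is a minimal end-to-end sketch. It assumes a torchvision ResNet-50, a CUDA device, and dummy data; mem_speed_benchmark/train.py is the full working version.

import actnn
import torch
import torchvision

# Step 1: choose the optimization level before any ActNN layer is created.
actnn.set_optimization_level("L3")

# Step 2: convert the model, then move it to the GPU (convert before .cuda()).
model = actnn.QModule(torchvision.models.resnet50())
model = model.cuda()

# Step 3: sanity-check the conversion.
print(model)  # layers should appear as actnn.QConv2d, actnn.QBatchNorm2d, ...

# The training loop itself is unchanged.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.CrossEntropyLoss()
x = torch.randn(8, 3, 224, 224, device="cuda")
y = torch.randint(0, 1000, (8,), device="cuda")
loss = criterion(model(x), y)
loss.backward()
optimizer.step()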

Advanced Features

  • Convert the model manually.
    ActNN is implemented as a collection of memory-saving layers, including actnn.QConv1d, QConv2d, QConv3d, QConvTranspose1d, QConvTranspose2d, QConvTranspose3d, QBatchNorm1d, QBatchNorm2d, QBatchNorm3d, QLinear, QReLU, QSyncBatchNorm, QMaxPool2d. These layers have an identical interface to their PyTorch counterparts, so you can construct the model manually using them as building blocks (see the model-construction sketch at the end of this section). See ResNetBuilder and resnet_configs in image_classification/image_classification/resnet.py for an example.
  • (Optional) Change the data loader
    If you want to use per-sample gradient information for adaptive quantization, you have to update the dataloader to return sample indices (see train_loader in mem_speed_benchmark/train.py for an example, and the dataset sketch at the end of this section). In addition, you have to update the configuration:
from actnn import config, QScheme
config.use_gradient = True
QScheme.num_samples = 1300000   # the size of training set

You can find sample code in the above script.
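
As a rough illustration of the dataloader change, here is a hypothetical wrapper dataset that returns the sample index with each example; the exact batch format ActNN expects is shown by train_loader in mem_speed_benchmark/train.py.

from torch.utils.data import Dataset

class IndexedDataset(Dataset):
    """Hypothetical wrapper: yields the sample index alongside each example,
    which adaptive quantization can use for per-sample gradient information."""

    def __init__(self, base):
        self.base = base

    def __len__(self):
        return len(self.base)

    def __getitem__(self, i):
        data, target = self.base[i]
        return i, data, target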
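
For manual conversion (the first advanced feature above), a minimal sketch of a hand-built model could look like this. The architecture is hypothetical and assumes 3x32x32 inputs; each ActNN layer takes the same arguments as its nn.* counterpart.

import torch.nn as nn
import actnn

actnn.set_optimization_level("L3")  # set before constructing ActNN layers

model = nn.Sequential(
    actnn.QConv2d(3, 64, kernel_size=3, padding=1),
    actnn.QBatchNorm2d(64),
    actnn.QReLU(),
    actnn.QMaxPool2d(kernel_size=2),   # 32x32 -> 16x16
    nn.Flatten(),
    actnn.QLinear(64 * 16 * 16, 10),
)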

Examples

Benchmark Memory Usage and Training Speed

See mem_speed_benchmark. Please do NOT measure the memory usage with nvidia-smi: nvidia-smi reports the size of the memory pool allocated by PyTorch, which can be much larger than the amount of memory actually in use.
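If you just need a quick number, PyTorch's own allocator counters report the memory actually occupied by tensors. A minimal sketch (the benchmark scripts may measure differently):

import torch

torch.cuda.reset_peak_memory_stats()
# ... run one forward/backward iteration here ...
print(f"current: {torch.cuda.memory_allocated() / 1024**2:.1f} MiB")
print(f"peak:    {torch.cuda.max_memory_allocated() / 1024**2:.1f} MiB")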

Image Classification

See image_classification

Object Detection, Semantic Segmentation, Self-Supervised Learning, ...

Here is an example of memory-efficient training for ResNet-50, built upon the OpenMMLab toolkits. We use ActNN with the default optimization level (L3). Our training runs are available at Weights & Biases.

Installation

  1. Install mmcv
export MMCV_ROOT=/path/to/clone/actnn-mmcv
git clone https://github.com/DequanWang/actnn-mmcv $MMCV_ROOT
cd $MMCV_ROOT
MMCV_WITH_OPS=1 MMCV_WITH_ORT=0 pip install -e .
  2. Install mmdet, mmseg, mmssl, ...
export MMDET_ROOT=/path/to/clone/actnn-mmdet
git clone https://github.com/DequanWang/actnn-mmdet $MMDET_ROOT
cd $MMDET_ROOT
python setup.py develop
export MMSEG_ROOT=/path/to/clone/actnn-mmseg
git clone https://github.com/DequanWang/actnn-mmseg $MMSEG_ROOT
cd $MMSEG_ROOT
python setup.py develop
export MMSSL_ROOT=/path/to/clone/actnn-mmssl
git clone https://github.com/DequanWang/actnn-mmssl $MMSSL_ROOT
cd $MMSSL_ROOT
python setup.py develop

Single GPU training

cd $MMDET_ROOT
python tools/train.py configs/actnn/faster_rcnn_r50_fpn_1x_coco_1gpu.py
# https://wandb.ai/actnn/detection/runs/ye0aax5s
# ActNN mAP 37.4 vs Official mAP 37.4
python tools/train.py configs/actnn/retinanet_r50_fpn_1x_coco_1gpu.py
# https://wandb.ai/actnn/detection/runs/1x9cwokw
# ActNN mAP 36.3 vs Official mAP 36.5
cd $MMSEG_ROOT
python tools/train.py configs/actnn/fcn_r50-d8_512x1024_80k_cityscapes_1gpu.py
# https://wandb.ai/actnn/segmentation/runs/159if8da
# ActNN mIoU 72.9 vs Official mIoU 73.6
python tools/train.py configs/actnn/fpn_r50_512x1024_80k_cityscapes_1gpu.py
# https://wandb.ai/actnn/segmentation/runs/25j9iyv3
# ActNN mIoU 74.7 vs Official mIoU 74.5

Multiple GPUs training

cd $MMSSL_ROOT
bash tools/dist_train.sh configs/selfsup/actnn/moco_r50_v2_bs512_e200_imagenet_2gpu.py 2
# https://wandb.ai/actnn/mmssl/runs/lokf7ydo
# https://wandb.ai/actnn/mmssl/runs/2efmbuww
# ActNN top1 67.3 vs Official top1 67.7

For more detailed guidance, please refer to the docs of mmcv, mmdet, mmseg, mmssl.

FAQ

  1. Does ActNN support CPU training?
    Currently, ActNN only supports CUDA.

  2. Accuracy degradation / diverged training with ActNN.
    ActNN applies lossy compression to the activations. In some challenging cases, our default compression strategy might be too aggressive. In this case, you may try more conservative compression strategies (which consume more memory):

    • 4-bit per-group quantization
    actnn.set_optimization_level("L2")
    • 8-bit per-group quantization
    actnn.set_optimization_level("L2")
    actnn.config.activation_compression_bits = [8]

    If none of these work, please report the problem to us by creating an issue.
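
    Recall from Basic Usage that these settings must be applied before converting the model. A sketch of the 8-bit fallback in context:

    import actnn

    # A more conservative fallback: 8-bit per-group quantization.
    actnn.set_optimization_level("L2")
    actnn.config.activation_compression_bits = [8]

    model = actnn.QModule(model)  # convert only after the settings above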

Correspondence

Please email Jianfei Chen and Lianmin Zheng. Any questions or discussions are welcome!

Citation

If the actnn library is helpful in your research, please consider citing our paper:

@article{chen2021actnn,
  title={ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training},
  author={Chen, Jianfei and Zheng, Lianmin and Yao, Zhewei and Wang, Dequan and Stoica, Ion and Mahoney, Michael W and Gonzalez, Joseph E},
  journal={arXiv preprint arXiv:2104.14129},
  year={2021}
}
Comments
  • Cannot save memory during the FP

    Hi, part of my work follows this project, and I am trying to quantize the activations during the forward pass. However, I noticed that although I can modify ctx.saved_tensor to hold my new compressed activation, the overall CUDA memory occupation doesn't decrease, and even increases. What I found is that the original fp32 activation is not freed and is still counted as part of the CUDA memory usage. I wonder what the reason behind this is and am seeking a solution.

    For your reference, here is what I did:

    def forward(self, input):
        qconv2d.apply(input, weight, .....)
    

    Inside qconv2d, I did:

    input_int8, scale_inp = quantize_int8(input)
    ...
    ctx.save_for_backward(input_int8, scale_inp, weight, bias)
    

    However, according to torch.cuda.memory_allocated(0), both input and input_int8 seem to be kept alive during the forward pass.

    Hoping for your reply.

    opened by SpringWave1 3
  • Installing actnn with some errors

    When installing actnn on Ubuntu 18.04, I get the following errors:

    /usr/include/c++/7/bits/basic_string.h:6693:95: required from here
    /usr/include/c++/7/bits/basic_string.tcc:1067:16: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’ without object
    ninja: build stopped: subcommand failed.
    Traceback (most recent call last):
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1539, in _run_ninja_build
        env=env)
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/subprocess.py", line 512, in run
        output=stdout, stderr=stderr)
    subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/home/huangry/program/actnn/actnn/setup.py", line 24, in <module>
        packages=find_packages()
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
        return distutils.core.setup(**attrs)
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/dist.py", line 966, in run_commands
        self.run_command(cmd)
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/setuptools/command/develop.py", line 34, in run
        self.install_for_development()
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/setuptools/command/develop.py", line 136, in install_for_development
        self.run_command('build_ext')
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
        _build_ext.run(self)
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/command/build_ext.py", line 340, in run
        self.build_extensions()
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 670, in build_extensions
        build_ext.build_extensions(self)
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
        self._build_extensions_serial()
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
        self.build_extension(ext)
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
        _build_ext.build_extension(self, ext)
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
        depends=ext.depends)
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 500, in unix_wrap_ninja_compile
        with_cuda=with_cuda)
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1255, in _write_ninja_file_and_compile_objects
        error_prefix='Error compiling objects for extension')
      File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1555, in _run_ninja_build
        raise RuntimeError(message) from e
    RuntimeError: Error compiling objects for extension

    opened by qinxianyuzi 3
  • QConv1d: no valid convolution algorithms available in CuDNN

    In https://github.com/ucbrise/actnn/blob/main/tests/test_conv_layer.py line 52-56

    I got the following error when trying to run test_conv_layer.py:

    ~/code/actnn/tests$ CUDA_VISIBLE_DEVICES=1 python test_conv_layer.py
    Conv1d(100, 4, kernel_size=(3,), stride=(2,), groups=2)
    QConv1d(100, 4, kernel_size=(3,), stride=(2,), groups=2)
    torch.Size([4, 50, 3])
    torch.Size([10, 100, 2000]) tensor([2, 0, 3, 0, 0, 2, 3, 0, 2, 1], device='cuda:0')
    Traceback (most recent call last):
      File "test_conv_layer.py", line 60, in <module>
        test(layer, qlayer, x, y)
      File "test_conv_layer.py", line 33, in test
        grads.append(get_grad(qlayer))
      File "test_conv_layer.py", line 27, in get_grad
        loss.backward()
      File "/data/users/root/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/tensor.py", line 245, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/data/users/root/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/autograd/__init__.py", line 147, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
      File "/data/users/root/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/autograd/function.py", line 89, in apply
        return self._forward_cls.backward(self, *args)  # type: ignore
      File "/data/users/root/code/actnn/actnn/actnn/ops.py", line 244, in backward
        return convnd.run_backward(1, ctx, grad_output, [0, 2], _single)
      File "/data/users/root/code/actnn/actnn/actnn/ops.py", line 225, in run_backward
        [ctx.needs_input_grad[0], ctx.needs_input_grad[1]])
    RuntimeError: no valid convolution algorithms available in CuDNN
    
    
    opened by xesdiny 3
  • how to avoid memory fragmentation in ActNN?

    May I know how you implemented this defragmentation in ActNN?

    In my model training experience, a smaller MAX_SPLIT_SIZE gives worse performance, while a bigger MAX_SPLIT_SIZE eventually results in OOM.

    opened by Jack47 2
  • There are some errors when I install actnn

    Hello! Thanks for your excellent work; I think it will be useful for me. But there are some errors when I install actnn.

    D:/actnn/actnn/actnn/cpp_extension/minimax_cuda_kernel.cu(19): error: more than one instance of overloaded function "__shfl_sync" matches the argument list:
                    function "__shfl_sync(unsigned int, __half, int, int)"
                    function "__shfl_sync(unsigned int, c10::Half, unsigned int, int)"
                    argument types are: (const unsigned int, __half, const unsigned int, const int)
    
    D:/actnn/actnn/actnn/cpp_extension/minimax_cuda_kernel.cu(52): error: more than one instance of overloaded function "__shfl_sync" matches the argument list:
                function "__shfl_sync(unsigned int, int, int, int)"
                function "__shfl_sync(unsigned int, unsigned int, int, int)"
                function "__shfl_sync(unsigned int, float, int, int)"
                function "__shfl_sync(unsigned int, long long, int, int)"
                function "__shfl_sync(unsigned int, unsigned long long, int, int)"
                function "__shfl_sync(unsigned int, double, int, int)"
                function "__shfl_sync(unsigned int, long, int, int)"
                function "__shfl_sync(unsigned int, unsigned long, int, int)"
                function "__shfl_sync(unsigned int, __half, int, int)"
                function "__shfl_sync(unsigned int, c10::Half, unsigned int, int)"
                argument types are: (unsigned int, c10::Half, int, int)
              detected during instantiation of "void minimax_cuda_kernel(const scalar_t *, scalar_t *, scalar_t *, int64_t, int64_t) [with scalar_t=c10::Half]"
    (82): here
    
    D:/actnn/actnn/actnn/cpp_extension/minimax_cuda_kernel.cu(65): error: more than one instance of overloaded function "__shfl_sync" matches the argument list:
                function "__shfl_sync(unsigned int, int, int, int)"
                function "__shfl_sync(unsigned int, unsigned int, int, int)"
                function "__shfl_sync(unsigned int, float, int, int)"
                function "__shfl_sync(unsigned int, long long, int, int)"
                function "__shfl_sync(unsigned int, unsigned long long, int, int)"
                function "__shfl_sync(unsigned int, double, int, int)"
                function "__shfl_sync(unsigned int, long, int, int)"
                function "__shfl_sync(unsigned int, unsigned long, int, int)"
                function "__shfl_sync(unsigned int, __half, int, int)"
                function "__shfl_sync(unsigned int, c10::Half, unsigned int, int)"
                argument types are: (unsigned int, c10::Half, int, int)
              detected during instantiation of "void minimax_cuda_kernel(const scalar_t *, scalar_t *, scalar_t *, int64_t, int64_t) [with scalar_t=c10::Half]"
    (82): here
    
    3 errors detected in the compilation of "C:/Users/xJun/AppData/Local/Temp/tmpxft_00004d44_00000000-7_minimax_cuda_kernel.cpp1.ii".
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(126): error: no instance of overloaded function "std::min" matches the argument list
                argument types are: (long long, long)
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(190): error: no instance of overloaded function "std::min" matches the argument list
                argument types are: (long long, long)
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(302): error: no instance of overloaded function "std::min" matches the argument list
                argument types are: (long long, long)
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(379): error: no instance of overloaded function "std::min" matches the argument list
                argument types are: (long long, long)
    
    4 errors detected in the compilation of "C:/Users/xJun/AppData/Local/Temp/tmpxft_000024d8_00000000-7_quantization_cuda_kernel.cpp1.ii".
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(64): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_mixed_precision_kernel<double> ") is not allowed
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(64): error: identifier "fmax<double, float, (int)0> " is undefined in device code
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(64): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_mixed_precision_kernel<float> ") is not allowed
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(64): error: identifier "fmax<double, float, (int)0> " is undefined in device code
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(64): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_mixed_precision_kernel< ::c10::Half> ") is not allowed
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(64): error: identifier "fmax<double, float, (int)0> " is undefined in device code
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_single_precision_kernel<double, (bool)0> ") is not allowed
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: identifier "fmax<double, float, (int)0> " is undefined in device code
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_single_precision_kernel<float, (bool)0> ") is not allowed
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: identifier "fmax<double, float, (int)0> " is undefined in device code
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_single_precision_kernel< ::c10::Half, (bool)0> ") is not allowed
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: identifier "fmax<double, float, (int)0> " is undefined in device code
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_single_precision_kernel<double, (bool)1> ") is not allowed
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: identifier "fmax<double, float, (int)0> " is undefined in device code
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_single_precision_kernel<float, (bool)1> ") is not allowed
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: identifier "fmax<double, float, (int)0> " is undefined in device code
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_single_precision_kernel< ::c10::Half, (bool)1> ") is not allowed
    
    D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: identifier "fmax<double, float, (int)0> " is undefined in device code
    
    18 errors detected in the compilation of "C:/Users/xJun/AppData/Local/Temp/tmpxft_00004cdc_00000000-7_quantization_cuda_kernel.cpp1.ii".
    
    opened by XiongUp 2
  • what is pipeline_threshold used for?

    I want to figure out what pipeline and pipeline_threshold mean in actnn. I didn't find examples in the tests or the readme, so could you give some examples or explain it a bit? Thanks.

    PS: I'm currently reading the actnn source code and have learned a lot from it; my Chinese notes are here.

    opened by Jack47 2
  • A kind suggestion on the pytorch version.

    This is a kind suggestion to the authors: revise the required PyTorch version to "torch >= 1.7.1" and "torch <= 1.8.0" in the readme file.

    opened by guanchuwang 1
  • problem with torch._six on latest pytorch

    Hi, thanks for your great work, but there is a problem with the latest torch. When I try to import actnn on PyTorch 1.9, I get an error:

    >>> import actnn
    Traceback (most recent call last):
      File "", line 1, in
      File "/workspace/actnn-main/actnn/actnn/__init__.py", line 1, in
        from . import dataloader
      File "/workspace/actnn-main/actnn/actnn/dataloader.py", line 16, in
        from torch._six import queue, string_classes
    ImportError: cannot import name 'queue' from 'torch._six' (/miniconda3/lib/python3.8/site-packages/torch/_six.py)

    Then I compared torch 1.9 with torch 1.7: there is a commit in torch that removes some imports from torch._six. I reverted torch._six to the old version so that actnn works well.

    Maybe directly importing queue could work well; the other imports from torch._six should be fixed too.

    opened by liuyanyi 1
  • install problem

    Hi, when installing actnn, I get pip._internal.exceptions.InstallationError: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode. Any help is appreciated.

    opened by zimenglan-sysu-512 1
  • how to deploy actnn by libtorch?

    It's really interesting work! I just wonder how we can deploy actnn using libtorch. Support for libtorch or ONNX (from Python to C++) would make actnn more useful. Thank you~

    opened by lucify123 1
  • [Bugfix] Fix QDropout

    Hi, I found some bugs in my early implementation when using QDropout.

    1. In the backward pass, the gradient should also be divided by the 1-p factor.
    2. In the validation step (self.training = False), we can directly use the forward of nn.Dropout, since dropout behaves differently in training and validation.

    Please help check the modification.
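
    For context, here is a minimal sketch (not ActNN's actual code) of inverted dropout in which the 1-p factor appears consistently in forward and backward, and evaluation is an identity, matching the two fixes:

    import torch

    class _DropoutFn(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, p):
            mask = (torch.rand_like(x) > p).to(x.dtype)
            ctx.save_for_backward(mask)
            ctx.p = p
            return x * mask / (1 - p)        # fix 1: scale in forward ...

        @staticmethod
        def backward(ctx, grad_output):
            (mask,) = ctx.saved_tensors
            return grad_output * mask / (1 - ctx.p), None   # ... and backward

    class SketchQDropout(torch.nn.Dropout):
        def forward(self, x):
            if not self.training:            # fix 2: eval is a no-op, like nn.Dropout
                return x
            return _DropoutFn.apply(x, self.p)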

    opened by cenyk1230 0
  • Does this work for any model with activation function as relu?

    Hello, I'm trying to use actnn with MADDPG (a multi-agent RL algorithm). The model has just 3 layers with ReLU activations. Can you let us know whether this mechanism will give results with such smaller models?

    Thank you.

    Link to maddpg: https://github.com/marlbenchmark/off-policy/tree/release/offpolicy/algorithms/maddpg

    opened by kailashg26 4
  • How can ActNN be used on Windows 10?

    Hello, thanks for your work. I tried actnn on Ubuntu and it worked. But when I used it on Windows 10, I got an error. Could you please help me solve this problem?

    opened by ElegantLee 0
  • There is something wrong with loss.backward()

    I just modified the model with

    model = actnn.QModule(model)

    After that, the following error occurred:

    Traceback (most recent call last):
      File "train.py", line 336, in
        main()
      File "train.py", line 332, in main
        train(args, model)
      File "train.py", line 212, in train
        loss.backward()
      File "/home/hku/anaconda3/envs/torch17/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/home/hku/anaconda3/envs/torch17/lib/python3.7/site-packages/torch/autograd/init.py", line 132, in backward
        allow_unreachable=True)  # allow_unreachable flag
    RuntimeError: Function linearBackward returned an invalid gradient at index 0 - got [25216, 3072] but expected shape compatible with [128, 197, 3072]

    opened by Harr7y 2
  • How to use it for GANs

    Thank you for sharing your great work!

    Can ActNN be used for GAN models? I used ActNN with the following GAN architecture, but I got the error below. https://github.com/knazeri/edge-connect

    Traceback (most recent call last):
      File "train.py", line 3, in <module>
        main(mode=1)
      File "***\edge-connect\main.py", line 50, in main
        model = EdgeConnect(config)
      File "***\edge-connect\src\edge_connect.py", line 27, in __init__
        self.edge_model = EdgeModel(config).to(config.DEVICE)
      File "***\edge-connect\src\models.py", line 67, in __init__
        generator = actnn.QModule(generator)
      File "***\actnn\actnn\actnn\module.py", line 18, in __init__
        QModule.convert_layers(model)
      File "***\actnn\actnn\actnn\module.py", line 76, in convert_layers
        QModule.convert_layers(child)
      File "***\actnn\actnn\actnn\module.py", line 48, in convert_layers
        child.groups, child.bias, child.dilation, child.padding_mode))
      File "***\actnn\actnn\actnn\layers.py", line 137, in __init__
        padding, output_padding, groups, bias, dilation, padding_mode)
      File "***\Python37\lib\site-packages\torch\nn\modules\conv.py", line 904, in __init__
        True, output_padding, groups, bias, padding_mode, **factory_kwargs)
      File "***\Python37\lib\site-packages\torch\nn\modules\conv.py", line 602, in __init__
        groups, bias, padding_mode, **factory_kwargs)
      File "***\Python37\lib\site-packages\torch\nn\modules\conv.py", line 133, in __init__
        if bias:
    RuntimeError: Boolean value of Tensor with more than one value is ambiguous
    

    I have added the following code at model.py#L61:

            generator = EdgeGenerator(use_spectral_norm=True)
            discriminator = Discriminator(in_channels=2, use_sigmoid=config.GAN_LOSS != 'hinge')
            generator = actnn.QModule(generator)
            print(generator)
            exit()
    

    I would like to seek your advice on this problem. Thanks

    opened by naoki7090624 1
  • Errors when installing actnn on Ubuntu 18.04

    When installing actnn on Ubuntu 18.04, something goes wrong:

    /usr/local/cuda-10.1/bin/nvcc -I/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include -I/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/TH -I/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/zhaofy/anaconda3/envs/actnn/include/python3.8 -c actnn/cpp_extension/minimax_cuda_kernel.cu -o build/temp.linux-x86_64-3.8/actnn/cpp_extension/minimax_cuda_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=minimax -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14

    /home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign

    /home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign
    
    /home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/ATen/core/builtin_function.h(97): warning: statement is unreachable
    
    /home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(191): warning: statement is unreachable
    
    /home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign
    
    /home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign
    
    /home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/ATen/core/builtin_function.h(97): warning: statement is unreachable
    
    /home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(191): warning: statement is unreachable
    
    /usr/include/c++/7/bits/basic_string.tcc: In instantiation of ‘static std::basic_string<_CharT, _Traits, _Alloc>::_Rep* std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’:
    /usr/include/c++/7/bits/basic_string.tcc:578:28:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
    /usr/include/c++/7/bits/basic_string.h:5042:20:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::__false_type) [with _InIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
    /usr/include/c++/7/bits/basic_string.h:5063:24:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
    /usr/include/c++/7/bits/basic_string.tcc:656:134:   required from ‘std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’
    /usr/include/c++/7/bits/basic_string.h:6688:95:   required from here
    /usr/include/c++/7/bits/basic_string.tcc:1067:16: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’ without object
           __p->_M_set_sharable();
           ~~~~~~~~~^~
    /usr/include/c++/7/bits/basic_string.tcc: In instantiation of ‘static std::basic_string<_CharT, _Traits, _Alloc>::_Rep* std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’:
    /usr/include/c++/7/bits/basic_string.tcc:578:28:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
    /usr/include/c++/7/bits/basic_string.h:5042:20:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::__false_type) [with _InIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
    /usr/include/c++/7/bits/basic_string.h:5063:24:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
    /usr/include/c++/7/bits/basic_string.tcc:656:134:   required from ‘std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’
    /usr/include/c++/7/bits/basic_string.h:6693:95:   required from here
    /usr/include/c++/7/bits/basic_string.tcc:1067:16: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’ without object
    /home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/utils/cpp_extension.py:352: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
      warnings.warn(msg.format('we could not find ninja.'))
    error: command '/usr/local/cuda-10.1/bin/nvcc' failed with exit status 1
    

    ERROR: Command errored out with exit status 1: /home/zhaofy/anaconda3/envs/actnn/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/home/zhaofy/actnn-main/actnn/setup.py'"'"'; file='"'"'/home/zhaofy/actnn-main/actnn/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' develop --no-deps Check the logs for full command output.

    opened by zhaofangyuan98 3
Owner
UC Berkeley RISE (REAL-TIME INTELLIGENT SECURE EXPLAINABLE SYSTEMS)