RepVGG: Making VGG-style ConvNets Great Again

Overview

RepVGG: Making VGG-style ConvNets Great Again (PyTorch)

This is a super simple ConvNet architecture that achieves over 80% top-1 accuracy on ImageNet with a stack of 3x3 convolutions and ReLU! This repo contains the pretrained models, code for building the model, training scripts, and the conversion from the training-time model to the inference-time model.

The MegEngine version: https://github.com/megvii-model/RepVGG.

TensorRT implementation with C++ API by @upczww https://github.com/upczww/TensorRT-RepVGG. Great work!

Another nice PyTorch implementation by @zjykzj https://github.com/ZJCV/ZCls.

Update (Jan 13, 2021): you can get the equivalent kernel and bias in a differentiable way at any time (get_equivalent_kernel_bias in repvgg.py). This may help training-based pruning or quantization.
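
For example, here is a minimal sketch (my illustration, not code from this repo; it assumes RepVGGBlock is importable from repvgg.py) of penalizing the equivalent kernel during training:

from repvgg import create_RepVGG_A0, RepVGGBlock

model = create_RepVGG_A0(deploy=False)

def equivalent_kernel_l2(model):
    # Sum an L2 penalty over the fused (equivalent) 3x3 kernels of all blocks.
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, RepVGGBlock):
            kernel, _ = m.get_equivalent_kernel_bias()   # differentiable
            penalty = penalty + (kernel ** 2).sum()
    return penalty

# loss = task_loss + 1e-4 * equivalent_kernel_l2(model)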

Update (Jan 31, 2021): this training script (a super simple PyTorch-official-example-style script) has been tested with RepVGG-A0 and B1. The results are even slightly better than those reported in the paper.

Update (Feb 5, 2021): added a function (whole_model_convert in repvgg.py) for easily converting a customized model with RepVGG as one of its components (e.g., the backbone of a semantic segmentation model). It will convert the RepVGG blocks only and keep the other parts. If it does not work with your model, please raise an issue.

Citation:

@article{ding2101repvgg,
  title={RepVGG: Making VGG-style ConvNets Great Again},
  author={Ding, Xiaohan and Zhang, Xiangyu and Ma, Ningning and Han, Jungong and Ding, Guiguang and Sun, Jian},
  journal={arXiv preprint arXiv:2101.03697},
  year={2021}
}

Abstract

We present a simple but powerful convolutional neural network architecture, which has a VGG-like inference-time body composed of nothing but a stack of 3x3 convolutions and ReLU, while the training-time model has a multi-branch topology. Such decoupling of the training-time and inference-time architecture is realized by a structural re-parameterization technique, hence the name RepVGG. On ImageNet, RepVGG reaches over 80% top-1 accuracy, which, to the best of our knowledge, is the first time for a plain model. On an NVIDIA 1080Ti GPU, RepVGG models run 83% faster than ResNet-50 or 101% faster than ResNet-101 with higher accuracy, and show a favorable accuracy-speed trade-off compared to state-of-the-art models like EfficientNet and RegNet.


Use our pretrained models

You may download all of the ImageNet-pretrained models reported in the paper from Google Drive (https://drive.google.com/drive/folders/1Avome4KvNp0Lqh2QwhXO6L5URQjzCjUq?usp=sharing) or Baidu Cloud (https://pan.baidu.com/s/1nCsZlMynnJwbUBKn0ch7dQ, access code "rvgg"). For ease of transfer learning on other tasks, they are all training-time models (with identity and 1x1 branches). You may test the accuracy by running

python test.py [imagenet-folder with train and val folders] train [path to weights file] -a [model name]

Here "train" indicates the training-time architecture. For example,

python test.py [imagenet-folder with train and val folders] train RepVGG-B2-train.pth -a RepVGG-B2

Convert the training-time models into inference-time models

You may convert a trained model into the inference-time structure with

python convert.py [weights file of the training-time model to load] [path to save] -a [model name]

For example,

python convert.py RepVGG-B2-train.pth RepVGG-B2-deploy.pth -a RepVGG-B2

Then you may test the inference-time model by

python test.py [imagenet-folder with train and val folders] deploy RepVGG-B2-deploy.pth -a RepVGG-B2

Note that the argument "deploy" builds an inference-time model.

ImageNet training

We trained for 120 epochs with cosine learning rate decay from 0.1 to 0. We used 8 GPUs, a global batch size of 256, weight decay of 1e-4 (no weight decay on fc.bias, bn.bias, rbr_dense.bn.weight and rbr_1x1.bn.weight; weight decay on rbr_identity.weight makes little difference, and it is better to use it in most cases), and the same simple data preprocessing as the PyTorch official example:

trans = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
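
The no-weight-decay rule above maps naturally onto SGD parameter groups. Here is a minimal sketch (my illustration based on the description above, not the repo's exact code; the parameter-name matching is an assumption):

import torch
from repvgg import create_RepVGG_A0

model = create_RepVGG_A0(deploy=False)

no_decay, decay = [], []
for name, param in model.named_parameters():
    if not param.requires_grad:
        continue
    # fc.bias, bn.bias, rbr_dense.bn.weight and rbr_1x1.bn.weight get no weight decay
    if name.endswith('.bias') or 'rbr_dense.bn.weight' in name or 'rbr_1x1.bn.weight' in name:
        no_decay.append(param)
    else:
        decay.append(param)

optimizer = torch.optim.SGD([{'params': decay, 'weight_decay': 1e-4},
                             {'params': no_decay, 'weight_decay': 0.0}],
                            lr=0.1, momentum=0.9)
# cosine decay from 0.1 to 0 over 120 epochs (call scheduler.step() once per epoch)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=120)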

The multi-processing training script in this repo is based on the official PyTorch example, for simplicity and readability. The only modifications are the model-building part, the cosine learning rate scheduler, and an SGD optimizer that applies no weight decay to some parameters. You may find these code segments useful for your own training code. We tested this training script with RepVGG-A0 and RepVGG-B1. The accuracy was 72.44 and 78.38, respectively, almost the same as (and even slightly better than) the results reported in the paper (72.41 and 78.37). You may train and test like this:

python train.py -a RepVGG-A0 --dist-url 'tcp://127.0.0.1:23333' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0 --workers 32 [imagenet-folder with train and val folders]
python test.py [imagenet-folder with train and val folders] train model_best.pth.tar -a RepVGG-A0

I would really appreciate it if you shared your re-implementation results with other models.

Use like this in your own code

import torch
from repvgg import repvgg_model_convert, create_RepVGG_A0
train_model = create_RepVGG_A0(deploy=False)
train_model.load_state_dict(torch.load('RepVGG-A0-train.pth'))          # or train from scratch
# do whatever you want with train_model
deploy_model = repvgg_model_convert(train_model, create_RepVGG_A0, save_path='repvgg_deploy.pth')
# do whatever you want with deploy_model

or

deploy_model = create_RepVGG_A0(deploy=True)
deploy_model.load_state_dict(torch.load('RepVGG-A0-deploy.pth'))
# do whatever you want with deploy_model

If you use RepVGG as a component of another model, it will be more convenient to use whole_model_convert in repvgg.py for the conversion. Please refer to FAQs for more details.

FAQs

Q: Is the inference-time model's output the same as the training-time model?

A: Yes. You can verify that by

import torch
from repvgg import repvgg_model_convert, create_RepVGG_A0
train_model = create_RepVGG_A0(deploy=False)
train_model.eval()      # Don't forget to call this before inference.
deploy_model = repvgg_model_convert(train_model, create_RepVGG_A0)
x = torch.randn(1, 3, 224, 224)
train_y = train_model(x)
deploy_y = deploy_model(x)
print(((train_y - deploy_y) ** 2).sum())    # Will be around 1e-10

Q: How to use the pretrained RepVGG models for other tasks?

A: It is better to finetune the training-time RepVGG models on your datasets, then do the conversion after finetuning and before deployment. For example, say you want to use PSPNet for semantic segmentation: you should build a PSPNet with a training-time RepVGG model as the backbone, load pre-trained weights into the backbone, and finetune the PSPNet on your segmentation dataset. Then convert the backbone following the code provided in this repo while keeping the other task-specific structures (the PSPNet parts, in this case). We now provide a function (whole_model_convert in repvgg.py) to do this. The pseudo code looks like this:

train_backbone = create_RepVGG_B2(deploy=False)
train_backbone.load_state_dict(torch.load('RepVGG-B2-train.pth'))
train_pspnet = build_pspnet(backbone=train_backbone)
segmentation_train(train_pspnet)                      # finetune with the multi-branch backbone
deploy_backbone = create_RepVGG_B2(deploy=True)
deploy_pspnet = build_pspnet(backbone=deploy_backbone)
whole_model_convert(train_pspnet, deploy_pspnet)      # converts the RepVGG blocks, copies the rest
segmentation_test(deploy_pspnet)
torch.save(deploy_pspnet.state_dict(), 'deploy_pspnet.pth')

Finetuning with a converted RepVGG also makes sense if you insert a BN after each conv (the converted conv.bias params can be discarded), but the performance may be slightly lower.
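
If you want to try that, here is a minimal sketch (my illustration, not the repo's code) that wraps each converted conv in Conv2d + BatchNorm2d and discards the converted biases:

import torch.nn as nn

def insert_bn_after_convs(module):
    for name, child in module.named_children():
        if isinstance(child, nn.Conv2d):
            # Rebuild the conv without bias; the fresh BN re-centers the activations anyway
            conv = nn.Conv2d(child.in_channels, child.out_channels, child.kernel_size,
                             stride=child.stride, padding=child.padding,
                             dilation=child.dilation, groups=child.groups, bias=False)
            conv.weight.data = child.weight.data.clone()
            setattr(module, name, nn.Sequential(conv, nn.BatchNorm2d(child.out_channels)))
        else:
            insert_bn_after_convs(child)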

Q: How to quantize a RepVGG model?

A1: Post-training quantization. After training and conversion, you may quantize the converted model with any post-training quantization method. Then you may insert a BN after each conv and finetune to recover the accuracy, just as you would when quantizing and finetuning other models. This is the recommended solution.

A2: Quantization-aware training. During quantization-aware training, instead of constraining the params of each individual kernel (e.g., making every param lie in {-127, -126, ..., 126, 127} for int8) as for ordinary models, you should constrain the equivalent kernel (get_equivalent_kernel_bias() in repvgg.py).
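
To illustrate A2, here is a minimal sketch (my illustration, not the repo's QAT code; it assumes symmetric int8 fake quantization with a straight-through estimator, reads the stride from the block's dense branch, and assumes the usual 3x3/padding-1 block):

import torch
import torch.nn.functional as F

def fake_quant_int8(w):
    scale = w.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127) * scale
    return w + (q - w).detach()   # straight-through gradient

def qat_forward(block, x):
    # Constrain the EQUIVALENT kernel rather than each branch's raw kernel
    kernel, bias = block.get_equivalent_kernel_bias()   # differentiable fusion
    out = F.conv2d(x, fake_quant_int8(kernel), bias,
                   stride=block.rbr_dense.conv.stride,
                   padding=1, groups=block.groups)
    return out   # apply the block's nonlinearity (e.g., ReLU) outside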

Q: I tried to finetune your model with multiple GPUs but got an error. Why are the names of params like "stage1.0.rbr_dense.conv.weight" in the downloaded weight file but sometimes like "module.stage1.0.rbr_dense.conv.weight" (shown by nn.Module.named_parameters()) in my model?

A: DistributedDataParallel may prefix "module." to the param names and cause a mismatch when loading weights by name. The simplest solution is to load the weights (model.load_state_dict(...)) before DistributedDataParallel(model). Otherwise, you may prepend "module." to the names like this:

checkpoint = torch.load(...)    # This is just a name-value dict
ckpt = {('module.' + k) : v for k, v in checkpoint.items()}
model.load_state_dict(ckpt)

Likewise, if the param names in the checkpoint file start with "module." but those in your model do not, you may strip the prefix as in line 50 of test.py:

ckpt = {k.replace('module.', ''):v for k,v in checkpoint.items()}   # strip the names
model.load_state_dict(ckpt)

Q: So does a RepVGG model derive the equivalent 3x3 kernels before each forward pass to save computations?

A: No! More precisely, we do the conversion only once, right after training. The training-time model can then be discarded, and the resulting model has only 3x3 kernels. We save and use only the resulting model.

Contact

[email protected]

Google Scholar Profile: https://scholar.google.com/citations?user=CIjw0KoAAAAJ&hl=en

My open-sourced papers and repos:

Simple and powerful VGG-style ConvNet architecture (preprint, 2021): RepVGG: Making VGG-style ConvNets Great Again (https://github.com/DingXiaoH/RepVGG)

State-of-the-art channel pruning (preprint, 2020): Lossless CNN Channel Pruning via Decoupling Remembering and Forgetting (https://github.com/DingXiaoH/ResRep)

CNN component (ICCV 2019): ACNet: Strengthening the Kernel Skeletons for Powerful CNN via Asymmetric Convolution Blocks (https://github.com/DingXiaoH/ACNet)

Channel pruning (CVPR 2019): Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure (https://github.com/DingXiaoH/Centripetal-SGD)

Channel pruning (ICML 2019): Approximated Oracle Filter Pruning for Destructive CNN Width Optimization (https://github.com/DingXiaoH/AOFP)

Unstructured pruning (NeurIPS 2019): Global Sparse Momentum SGD for Pruning Very Deep Neural Networks (https://github.com/DingXiaoH/GSM-SGD)

Comments
  • > Running a freshly initialized model directly works fine, but a model that loads trained weights does not.

        x = torch.from_numpy(np.random.randn(1,*shape)).float()
        y = model(x)
        model_d = repvgg_model_convert(model,model_func,out_c=186*2,num_blocks=[4,6,16,1],in_c=1)
        y_d = model_d(x)
        print('diff abs: max {},\n**2:{}'.format(abs(y - y_d).max(),((y - y_d) ** 2).sum()))
    

    Output: diff abs: max 6.67572021484375e-06, **2: 1.419987460948846e-09. This looks normal here, but after real training, the final exported model shows the large discrepancy posted earlier. I haven't figured out the conversion details, so I won't jump to conclusions.

    I observed two phenomena while implementing RepVGG:

    1. Both the training-time and inference-time models must be put in eval() mode before comparing outputs; otherwise there will be a large discrepancy;
    2. When the weights are initialized as follows:
    def init_weights(modules):
        for m in modules:
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                if m.bias is not None:
                    nn.init.zeros_(m.bias)
            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
    

    it causes a large accuracy mismatch, while the following initialization guarantees consistency:

        def _init_weights(self, gamma=0.01):
            for m in self.modules():
                if isinstance(m, nn.Conv2d):
                    nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                    nn.init.constant_(m.weight, gamma)
                    nn.init.constant_(m.bias, gamma)
    

    Here is the test code:

    def test_regvgg():
        model = RepVGGRecognizer()
        model.eval()
        print(model)
    
        data = torch.randn(1, 3, 224, 224)
        insert_repvgg_block(model)
        model.eval()
        train_outputs = model(data)[KEY_OUTPUT]
        print(model)
    
        fuse_repvgg_block(model)
        model.eval()
        eval_outputs = model(data)[KEY_OUTPUT]
        print(model)
    
        print(torch.sqrt(torch.sum((train_outputs - eval_outputs) ** 2)))
        print(torch.allclose(train_outputs, eval_outputs, atol=1e-8))
        assert torch.allclose(train_outputs, eval_outputs, atol=1e-8)
    

    I hope this helps.

    Originally posted by @zjykzj in https://github.com/DingXiaoH/RepVGG/issues/23#issuecomment-772960069

    opened by mcmingchang 11
  • Large accuracy discrepancy at deployment

    Thanks for your great work. When I train small models (under 10 MFLOPs), the accuracy loss at deployment is negligible, but with a large model (~2 GFLOPs) the accuracy no longer matches. LOG:

        deploy param: stage0.rbr_reparam.weight torch.Size([64, 1, 3, 3]) -0.048573527
        deploy param: stage0.rbr_reparam.bias torch.Size([64]) 0.23182523
        deploy param: stage1.0.rbr_reparam.weight torch.Size([128, 64, 3, 3]) -0.0054542203
        deploy param: stage1.0.rbr_reparam.bias torch.Size([128]) 1.0140312
        deploy param: stage1.1.rbr_reparam.weight torch.Size([128, 64, 3, 3]) 0.0006282824
        deploy param: stage1.1.rbr_reparam.bias torch.Size([128]) 0.32761782
        deploy param: stage1.2.rbr_reparam.weight torch.Size([128, 128, 3, 3]) 0.0023862773
        deploy param: stage1.2.rbr_reparam.bias torch.Size([128]) 0.34976208
        deploy param: stage1.3.rbr_reparam.weight torch.Size([128, 64, 3, 3]) -9.027165e-05
        deploy param: stage1.3.rbr_reparam.bias torch.Size([128]) 0.0063683093
        deploy param: stage2.0.rbr_reparam.weight torch.Size([256, 128, 3, 3]) -8.460902e-05
        deploy param: stage2.0.rbr_reparam.bias torch.Size([256]) 0.11033552
        deploy param: stage2.1.rbr_reparam.weight torch.Size([256, 128, 3, 3]) -0.00010023986
        deploy param: stage2.1.rbr_reparam.bias torch.Size([256]) -0.15826604
        deploy param: stage2.2.rbr_reparam.weight torch.Size([256, 256, 3, 3]) -5.3966836e-05
        deploy param: stage2.2.rbr_reparam.bias torch.Size([256]) -0.15924689
        deploy param: stage2.3.rbr_reparam.weight torch.Size([256, 128, 3, 3]) -6.7551824e-05
        deploy param: stage2.3.rbr_reparam.bias torch.Size([256]) -0.37404576
        deploy param: stage2.4.rbr_reparam.weight torch.Size([256, 256, 3, 3]) -0.00012947948
        deploy param: stage2.4.rbr_reparam.bias torch.Size([256]) -0.6853457
        deploy param: stage2.5.rbr_reparam.weight torch.Size([256, 128, 3, 3]) 7.473848e-05
        deploy param: stage2.5.rbr_reparam.bias torch.Size([256]) -0.16874048
        deploy param: stage3.0.rbr_reparam.weight torch.Size([512, 256, 3, 3]) -0.000433887
        deploy param: stage3.0.rbr_reparam.bias torch.Size([512]) 0.18602118
        deploy param: stage3.1.rbr_reparam.weight torch.Size([512, 256, 3, 3]) 0.00048246872
        deploy param: stage3.1.rbr_reparam.bias torch.Size([512]) -0.7235512
        deploy param: stage3.2.rbr_reparam.weight torch.Size([512, 512, 3, 3]) 0.00021061227
        deploy param: stage3.2.rbr_reparam.bias torch.Size([512]) -0.5657553
        deploy param: stage3.3.rbr_reparam.weight torch.Size([512, 256, 3, 3]) -0.00081703335
        deploy param: stage3.3.rbr_reparam.bias torch.Size([512]) -0.37847003
        deploy param: stage3.4.rbr_reparam.weight torch.Size([512, 512, 3, 3]) -0.00033185782
        deploy param: stage3.4.rbr_reparam.bias torch.Size([512]) -0.57922906
        deploy param: stage3.5.rbr_reparam.weight torch.Size([512, 256, 3, 3]) -0.0007206367
        deploy param: stage3.5.rbr_reparam.bias torch.Size([512]) -0.56909364
        deploy param: stage3.6.rbr_reparam.weight torch.Size([512, 512, 3, 3]) -0.0003344199
        deploy param: stage3.6.rbr_reparam.bias torch.Size([512]) -0.5628111
        deploy param: stage3.7.rbr_reparam.weight torch.Size([512, 256, 3, 3]) -0.00021987755
        deploy param: stage3.7.rbr_reparam.bias torch.Size([512]) -0.34248477
        deploy param: stage3.8.rbr_reparam.weight torch.Size([512, 512, 3, 3]) -0.00010127398
        deploy param: stage3.8.rbr_reparam.bias torch.Size([512]) -0.5895205
        deploy param: stage3.9.rbr_reparam.weight torch.Size([512, 256, 3, 3]) -0.0005824505
        deploy param: stage3.9.rbr_reparam.bias torch.Size([512]) -0.37577158
        deploy param: stage3.10.rbr_reparam.weight torch.Size([512, 512, 3, 3]) -0.00012262027
        deploy param: stage3.10.rbr_reparam.bias torch.Size([512]) -0.6199002
        deploy param: stage3.11.rbr_reparam.weight torch.Size([512, 256, 3, 3]) 1.503076e-06
        deploy param: stage3.11.rbr_reparam.bias torch.Size([512]) -0.7054796
        deploy param: stage3.12.rbr_reparam.weight torch.Size([512, 512, 3, 3]) 0.0006349176
        deploy param: stage3.12.rbr_reparam.bias torch.Size([512]) -1.0350925
        deploy param: stage3.13.rbr_reparam.weight torch.Size([512, 256, 3, 3]) 0.00037807773
        deploy param: stage3.13.rbr_reparam.bias torch.Size([512]) -1.1399512
        deploy param: stage3.14.rbr_reparam.weight torch.Size([512, 512, 3, 3]) 0.00025178236
        deploy param: stage3.14.rbr_reparam.bias torch.Size([512]) -0.27695537
        deploy param: stage3.15.rbr_reparam.weight torch.Size([512, 256, 3, 3]) 0.00074805244
        deploy param: stage3.15.rbr_reparam.bias torch.Size([512]) -0.8776718
        deploy param: stage4.0.rbr_reparam.weight torch.Size([1024, 512, 3, 3]) -0.00013951868
        deploy param: stage4.0.rbr_reparam.bias torch.Size([1024]) 0.021552037
        deploy param: linear.weight torch.Size([372, 1024]) 0.0051029953
        deploy param: linear.bias torch.Size([372]) 0.17604762

    The printing code:

        deploy_model = build_func(deploy=True,**kwargs)
        for name, param in deploy_model.named_parameters():
            print('deploy param: ', name, param.size(), np.mean(converted_weights[name]))
            param.data = torch.from_numpy(converted_weights[name]).float()
    
    opened by MaeThird 8
  • 'int' object is not callable error in whole_model_convert

    When I use RepVGG as the backbone of my model, whole_model_convert raises "TypeError: 'int' object is not callable". Could you please give a concrete example? Thank you.

    opened by gs-ren 7
  • About training

    Thank you for your shared work. I want to train a RepVGG-A0 model, but I only have 4 GPUs. Apart from batch_size, do I need to change other parameters, such as lr?

    opened by NNNNAI 6
  • Accuracy when replacing ResNet-18 as the backbone of a TRN network

    Many thanks for the author's work and open-source code.

    I recently replaced ResNet-18 with RepVGG-B0 as the backbone for training and testing. With the same hyperparameters, ResNet-18 reaches 85% while RepVGG-B0 reaches only 70%, which puzzles me. The overall model is TRN, used for multi-frame action recognition; the network structure is mainly:

    1. A CNN extracts features from each of the multiple frames;
    2. The features are concatenated;
    3. An MLP classifies the concatenated features.

    Here are the training settings; both backbones used the same parameters: optimizer: Adam, learning rate: 1.0e-5, betas: [0.9, 0.99], eps: 1.0e-8, weight_decay: 1.0e-4

    LR schedule: ExponentialLR, gamma: 0.99

    epochs: 150, batch size: 64, input size: 96x96, frames per training sample: 5

    The test metric is F1-score. I trained five models with each backbone, took each model's best score on the test set, and averaged them. Note that the RepVGG-based models were tested without the deploy conversion.

    I personally think RepVGG is very friendly for industrial deployment and hope to use it, hence this issue.

    opened by kendyChina 6
  • Why does the converted model become slower?

    I used RepVGG-A0 in my task and converted the trained model with the whole_model_convert function, but the trained model is much faster than the converted model at test time: the trained model's test time is around 288 s, while the converted model's is over 400 s.

    opened by WCZ93762 5
  • The accuracy problem

    What accuracy should I expect when training RepVGG-A0 with your PyTorch script?

    I tried to reproduce RepVGG-A0 with 0.1 label smoothing, but only got 71.6% accuracy.

    opened by MARD1NO 5
  • Why not compare with MobileNetV2/V3?

    Dear author: thanks for this insightful idea. It's really useful to deploy a plain CNN given its simplicity and high efficiency. I wonder why you did not conduct experiments on MobileNetV2/V3. I am eager to see whether MobileNetV2/V3 would benefit from the plain structure. Thank you again.

    opened by dragen1860 5
  • Bug in convert.py

    When I try to convert A1-train.pth, I get "return kernel * t, beta - running_mean * gamma / std RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!" Can you help me? Thank you!

    opened by pikaqiuEric 5
  • Plug-in version implementation

    Hi @DingXiaoH, this is a simple and intuitive implementation! I implemented a plug-in version of RepVGGBlock. I hope it helps you and others.

    This plug-in version implements the following functions:

    1. The training model and the test model are separated;
    2. You can apply RepVGGBlock to other models;
    3. You can use RepVGGBlock and ACBlock together in training, in either order.

    Framework

    Implementation and test files are as follows:

    Key implementation

    The training and testing models are separated via the insert and fuse functions:

    ####### conv_helper
    def insert_repvgg_block(model: nn.Module):
        items = list(model.named_children())
        idx = 0
        while idx < len(items):
            name, module = items[idx]
            if isinstance(module, nn.Conv2d) and module.kernel_size[0] > 1:
                # Replace the standard convolution with a RepVGGBlock
                in_channels = module.in_channels
                out_channels = module.out_channels
                kernel_size = module.kernel_size
                stride = module.stride
                padding = module.padding
                dilation = module.dilation
                groups = module.groups
                padding_mode = module.padding_mode
    
                repvgg_block = RepVGGBlock(in_channels,
                                           out_channels,
                                           kernel_size[0],
                                           stride[0],
                                           padding=padding[0],
                                           padding_mode=padding_mode,
                                           dilation=dilation,
                                           groups=groups)
                model.add_module(name, repvgg_block)
                # If the conv layer is followed by a BN layer, remove that BN layer
                # (see [About BN layer #35](https://github.com/DingXiaoH/ACNet/issues/35))
                if (idx + 1) < len(items) and isinstance(items[idx + 1][1], nn.BatchNorm2d):
                    new_layer = nn.Identity()
                    model.add_module(items[idx + 1][0], new_layer)
            else:
                insert_repvgg_block(module)
            idx += 1
    
    
    def fuse_repvgg_block(model: nn.Module):
        for name, module in model.named_children():
            if isinstance(module, RepVGGBlock):
                # Replace the RepVGGBlock with a standard convolution
                kernel, bias = get_equivalent_kernel_bias(module.rbr_dense,
                                                          module.rbr_1x1,
                                                          module.rbr_identity,
                                                          module.in_channels,
                                                          module.groups,
                                                          module.padding)
                # Create a standard conv, assign the fused weight and bias, and re-insert it into the model
                fused_conv = nn.Conv2d(module.in_channels,
                                       module.out_channels,
                                       module.kernel_size,
                                       stride=module.stride,
                                       padding=module.padding,
                                       dilation=module.dilation,
                                       groups=module.groups,
                                       padding_mode=module.padding_mode,
                                       bias=True
                                       )
                fused_conv.weight = nn.Parameter(kernel.detach().cpu())
                fused_conv.bias = nn.Parameter(bias.detach().cpu())
                model.add_module(name, fused_conv)
            else:
                fuse_repvgg_block(module)
    

    I modified the fusion function so that ACBlock and RepVGGBlock can be used in a single training run, and so that the block can be inserted into other models with convs of different sizes.

    ################ repvgg_block.py
    # -*- coding: utf-8 -*-
    
    """
    @date: 2021/2/2 8:32 PM
    @file: repvgg_block.py
    @author: zj
    @description: 
    """
    
    import torch.nn as nn
    
    
    def conv_bn(in_channels, out_channels, kernel_size, stride, padding, groups=1):
        result = nn.Sequential()
        result.add_module('conv', nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                                            kernel_size=kernel_size, stride=stride, padding=padding, groups=groups,
                                            bias=False))
        result.add_module('bn', nn.BatchNorm2d(num_features=out_channels))
        return result
    
    
    class RepVGGBlock(nn.Module):
    
        def __init__(self, in_channels, out_channels, kernel_size,
                     stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros'):
            super(RepVGGBlock, self).__init__()
    
            self.in_channels = in_channels
            self.out_channels = out_channels
            self.kernel_size = kernel_size
            self.stride = stride
            self.padding = padding
            self.dilation = dilation
            self.groups = groups
            self.padding_mode = padding_mode
    
        # assert kernel_size == 3                      # commented out so the block can be inserted into models with other kernel sizes
        # assert padding == 1
    
            padding_11 = padding - kernel_size // 2
    
            self.rbr_identity = nn.BatchNorm2d(
                num_features=in_channels) if out_channels == in_channels and stride == 1 else None
            self.rbr_dense = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size,
                                     stride=stride, padding=padding, groups=groups)
            self.rbr_1x1 = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=stride,
                                   padding=padding_11, groups=groups)
    
            self._init_weights()
    
        def _init_weights(self, gamma=0.01):
            for m in self.modules():
                if isinstance(m, nn.Conv2d):
                    nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                    nn.init.constant_(m.weight, gamma)
                    nn.init.constant_(m.bias, gamma)
    
        def forward(self, inputs):
            if self.rbr_identity is None:
                id_out = 0
            else:
                id_out = self.rbr_identity(inputs)
    
            return self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out
    
        def repvgg_convert(self):
            kernel, bias = self.get_equivalent_kernel_bias()
            return kernel.detach().cpu().numpy(), bias.detach().cpu().numpy(),
    ############## repvgg_util.py
    # -*- coding: utf-8 -*-
    
    """
    @date: 2021/2/2 8:51 PM
    @file: repvgg_util.py
    @author: zj
    @description: 
    """
    
    import torch
    import torch.nn as nn
    import numpy as np
    
    
    #   This func derives the equivalent kernel and bias in a DIFFERENTIABLE way.
    #   You can get the equivalent kernel and bias at any time and do whatever you want,
    #   for example, apply some penalties or constraints during training, just like you do to the other models.
    #   May be useful for quantization or pruning.
    def get_equivalent_kernel_bias(rbr_dense, rbr_1x1, rbr_identity, in_channels, groups, padding_11):
        kernel3x3, bias3x3 = _fuse_bn_tensor(rbr_dense, in_channels, groups)
        kernel1x1, bias1x1 = _fuse_bn_tensor(rbr_1x1, in_channels, groups)
        kernelid, biasid = _fuse_bn_tensor(rbr_identity, in_channels, groups)
        return kernel3x3 + _pad_1x1_to_3x3_tensor(kernel1x1, padding_11) + kernelid, bias3x3 + bias1x1 + biasid
    
    
    def _pad_1x1_to_3x3_tensor(kernel1x1, padding_11=1):  # padding_11 added so the padded 1x1 kernel can match convs other than 3x3
        if kernel1x1 is None:
            return 0
        else:
            # return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])
            return torch.nn.functional.pad(kernel1x1, [padding_11] * 4)
    
    
    def _fuse_bn_tensor(branch, in_channels, groups):
        if branch is None:
            return 0, 0
        if isinstance(branch, nn.Sequential):
            layer_list = list(branch)
            if len(layer_list) == 2 and isinstance(layer_list[1], nn.Identity):
                # conv/bn have already been fused inside the ACBlock
                return branch.conv.weight, branch.conv.bias
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        else:
            assert isinstance(branch, nn.BatchNorm2d)
            input_dim = in_channels // groups
            kernel_value = np.zeros((in_channels, input_dim, 3, 3), dtype=np.float32)
            for i in range(in_channels):
                kernel_value[i, i % input_dim, 1, 1] = 1
    
            kernel = torch.from_numpy(kernel_value).to(branch.weight.device)
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps
        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std
    

    About Test

    I noticed that the precision-matching test in this repo could be further improved:

    ################## origin
    print(((train_y - deploy_y) ** 2).sum())    # Will be around 1e-10
    ################## mine
    print(torch.sqrt(torch.sum((train_outputs - eval_outputs) ** 2)))
    print(torch.allclose(train_outputs, eval_outputs, atol=1e-8))
    assert torch.allclose(train_outputs, eval_outputs, atol=1e-8)
    

    How to use

    You can create the model as usual, then insert ACBlock or RepVGGBlock (or both together, in either order):

    ...
    ...
        if cfg.MODEL.CONV.ADD_BLOCKS is not None:
            assert isinstance(cfg.MODEL.CONV.ADD_BLOCKS, tuple)
            for add_block in cfg.MODEL.CONV.ADD_BLOCKS:
                if add_block == 'RepVGGBlock':
                    insert_repvgg_block(model)
                if add_block == 'ACBlock':
                    insert_acblock(model)
    ...
    ...
    

    Then train and save the model parameters as usual. If you want to fuse an ACBlock, use fuse_acblock; if you want to fuse a RepVGGBlock, use fuse_repvgg_block. Note: fusion must be performed in the reverse order of insertion.

    insert_acblock -> insert_repvgg_block .... fuse_repvgg_block -> fuse_acblock
    or 
    insert_repvgg_block -> insert_acblock .... fuse_acblock -> fuse_repvgg_block
    

    For the complete implementation, see ZJCV/ZCls.

    opened by zjykzj 4
  • Different outputs from train-model and deploy-model

    After I converted the trained model into the inference-time structure, I tested the two models with the same input and got different outputs from the train model (RepVGG-X-train.pth) and the deploy model (RepVGG-X-deploy.pth).

    Have you done that kind of comparison? Many thanks!

    opened by nikkonew 4
  • Bug in the QAT code

    Thanks very much for your great work! When I train the quantized model following your instructions in Solution C, convert.py and insert_bn.py are not in the quantization folder but in the tools folder. What's more, the functions (get_ImageNet_train_dataset, get_default_train_trans) used in insert_bn.py don't exist. I hope you can fix this.

    opened by qhy991 0
  • Training script inquiry

    Hi Xiaohan, thanks for the great work!

    I wonder if you could provide the original training script to reproduce the results of Table 5 (200 epochs with AutoAugment, label smoothing and mixup) in the RepVGG paper. It seems the script currently in the README only covers Table 4 (with epochs set to 300):

    python -m torch.distributed.launch --nproc_per_node 8 --master_port 12349 main.py --arch [model name] --data-path [/path/to/imagenet] --batch-size 32 --tag train_from_scratch --output-dir /path/to/save/the/log/and/checkpoints --opts TRAIN.EPOCHS 300 TRAIN.BASE_LR 0.1 TRAIN.WEIGHT_DECAY 1e-4 TRAIN.WARMUP_EPOCHS 5 MODEL.LABEL_SMOOTHING 0.1 AUG.PRESET weak AUG.MIXUP 0.0 DATA.DATASET imagenet DATA.IMG_SIZE 224

    opened by zeyuwang615 0
  • Question about multi-branch and single-branch network

    I replaced the single-branch network of a low-level vision task model with the multi-branch structure from the paper, but the training did not converge. I did not add the SE module; could that be the reason?

    opened by MiaoJieF 1
  • Paper Question - Why less favored than MobileNets for low-powered devices?

    Hi Xiaohan Ding,

    This is such excellent work, and thank you for sharing.

    I was reading your paper, and in the conclusion I saw:

    RepVGG models are fast, simple, and practical ConvNets designed for the maximum speed on GPU and specialized hardware, less concerning the number of parameters. They are more parameter-efficient than ResNets but may be less favored than the mobile-regime models like MobileNets [16, 30, 15] and ShuffleNets [41, 24] for low-power devices.

    I would appreciate it if you could explain why using RepVGG would make less sense than MobileNets.

    Is it simply because they are already optimized for fast memory access? Or could some of the optimizations here create problems for those architectures?

    Regards & thanks Kapil

    opened by ksachdeva 1
  • eq_kernel in get_custom_L2 missing contribution of rbr_identity

    My interpretation of get_custom_L2 is that L2 decay is applied not to the individual weights being trained, but to the deploy-equivalent weights.

    If this is the motivation, shouldn't eq_kernel also incorporate the identity from the skip connection when self.rbr_identity is not None? Currently the contribution of rbr_identity to eq_kernel in get_custom_L2 is missing. Was this intentional? Is there a reference or ablation for why you would exclude it?

    opened by vchiley 0