# Awesome-Attention-Mechanism-in-cv

## Table of Contents

## Introduction
PyTorch implementations of various attention mechanisms used in computer-vision network design, together with a collection of plug-and-play modules. Due to limited time and ability, many modules may not be included yet; for any suggestions or improvements, please open an issue or submit a PR.
## Attention Mechanism

## Plug and Play Module
- ACBlock
- Swish & Mish Activation
- ASPP Block
- DepthWise Convolution
- Fused Conv & BN
- MixedDepthwise Convolution
- PSP Module
- RFBModule
- SematicEmbbedBlock
- SSH Context Module
- Some other useful tools, such as concatenating and flattening feature maps
- WeightedFeatureFusion: the feature-fusion method used by the FPN in EfficientDet
- StripPooling: core code of the CVPR 2020 StripPooling paper
- GhostModule: the core module of GhostNet (CVPR 2020)
- SlimConv: SlimConv3x3
- Context Gating: video classification
- EffNetBlock: EffNet
- ECCV 2020 BorderDet: Border Alignment Module
- CVPR2019 DANet: Dual Attention
- Object Contextual Representation for semantic segmentation: OCRModule
- FPT: includes Self-Transform, Grounding Transform, and Rendering Transform
- DOConv: Depthwise Over-parameterized Convolution, proposed by Alibaba
- PyConv: Pyramidal Convolution, proposed by the Inception Institute of Artificial Intelligence
- ULSAM: Ultra-Lightweight Subspace Attention Module for compact CNNs
- DGC: ECCV 2020, Dynamic Group Convolution for accelerating convolutional neural networks
- DCANet: ECCV 2020, learning connected attention for convolutional neural networks
- PSConv: ECCV 2020, squeezing feature pyramids into one compact poly-scale convolutional layer
- Dynamic Convolution: CVPR 2020, dynamic filter convolution (unofficial implementation)
- CondConv: Conditionally Parameterized Convolutions for Efficient Inference
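Many of the modules above build on depthwise separable convolution (DepthWise Convolution plus a 1x1 pointwise projection). As an illustration, here is a minimal PyTorch sketch; the class and argument names are my own, not the repository's:

```python
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (groups=in_ch)
    spatial convolution followed by a 1x1 pointwise convolution that
    mixes channels, as popularized by MobileNet-style architectures."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, stride: int = 1):
        super().__init__()
        # groups=in_ch makes each filter see only one input channel
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))
```

Compared with a standard 3x3 convolution, this factorization cuts the parameter count from `in_ch * out_ch * 9` to roughly `in_ch * 9 + in_ch * out_ch`.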
## Evaluation
Modules are given a preliminary evaluation on CIFAR-10 with a ResNet backbone plus the module under test. The evaluation code comes from another repository: https://github.com/kuangliu/pytorch-cifar/. No pretrained weights are used; all models are randomly initialized.
Model | Top-1 Acc | Training Time | Params |
---|---|---|---|
SENet18 | 95.28% | 1:27:50 | 11,260,354 |
ResNet18 | 95.16% | 1:13:03 | 11,173,962 |
ResNet50 | 95.50% | 4:24:38 | 23,520,842 |
ShuffleNetV2 | 91.90% | 1:02:50 | 1,263,854 |
GoogLeNet | 91.90% | 1:02:50 | 6,166,250 |
MobileNetV2 | 92.66% | 2:04:57 | 2,296,922 |
SA-ResNet50 | 89.83% | 2:10:07 | 23,528,758 |
SA-ResNet18 | 95.07% | 1:39:38 | 11,171,394 |
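The Params column reports trainable-parameter counts, which can be reproduced with a one-liner like the following sketch (the helper name is my own):

```python
import torch.nn as nn


def count_params(model: nn.Module) -> int:
    """Total number of trainable parameters, as reported in the table."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```

For example, applying it to any of the evaluated models after construction gives the values shown above.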
## Paper List

SENet paper: https://arxiv.org/abs/1709.01507 · Chinese walkthrough: https://zhuanlan.zhihu.com/p/102035721
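As an illustration of the squeeze-and-excitation idea from the SENet paper, here is a minimal PyTorch sketch of an SE block (class and parameter names are my own, not necessarily matching this repository's implementation):

```python
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global average pooling ("squeeze"), a
    two-layer bottleneck MLP ("excitation"), and a sigmoid gate that
    rescales each channel of the input feature map."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Squeeze: per-channel global average over the spatial dims
        w = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        # Excitation: channel-wise rescaling of the input
        return x * w
```

Inserting such a block after each residual block is what turns ResNet18 into the SENet18 entry in the table above.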
## Contribute

Contributions are welcome: please open an issue with additional papers and their corresponding code links.