Plug-and-play Module
Plug-and-play Transformer and attention modules: you can find the network structure and the official complete code by clicking the entries in the list.
The tables below let you quickly retrieve the core code of each plug-and-play module.
CV:
Survey:

Name | Paper | Time |
---|---|---|
Transformers in Vision: A Survey (v1, v2) | https://arxiv.org/abs/2101.01169 | 2021-01-05 |
Attention mechanisms and deep learning for machine vision: A survey of the state of the art | https://arxiv.org/abs/2106.07550 | 2021-06-05 |

Name | Paper Link | Main idea | Tutorial |
---|---|---|---|
1. Squeeze-and-Excitation | SE | Channel attention: squeeze (global pooling) then excitation (channel gating); see the sketch below the table | https://github.com/leader402/Plug-and-play/blob/main/cv/tutorial/SE.py |
2. Polarized Self-Attention | PSA | | https://github.com/leader402/Plug-and-play/blob/main/cv/tutorial/PSA.py |
3. Dual Attention Network | DaNet | Channel attention and spatial attention | https://github.com/leader402/Plug-and-play/blob/main/cv/tutorial/DaNet.py |
4. Self-attention | |||
5. Masked self-attention | |||
6. Multi-head attention | | Several scaled dot-product attention heads run in parallel; see the sketch below the table | |
7. Attention based deep learning architectures | |||
8. Single-channel model | |||
9. Multi-channel model | |||
10. Skip-layer model | |||
11. Bottom-up/top-down model | |||
12. CBAM: Convolutional Block Attention Module | CBAM | Channel attention followed by spatial attention; see the sketch below the table | https://github.com/leader402/Plug-and-play/blob/main/cv/tutorial/CBAM.py |
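
The official SE tutorial code is at the link in the table above; the snippet below is only a minimal PyTorch sketch of the Squeeze-and-Excitation idea. The class name `SEBlock` and the reduction ratio of 16 are illustrative assumptions, not taken from the linked code.

```python
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels with a global gating signal."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),  # squeeze to a bottleneck
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),  # excite back to C channels
            nn.Sigmoid(),                                            # per-channel gate in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))           # squeeze: global average pooling -> (B, C)
        w = self.fc(s).view(b, c, 1, 1)  # excitation: per-channel weights
        return x * w                     # rescale the input feature map


if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```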
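
Rows 4-6 have no linked tutorial yet; below is a minimal PyTorch sketch of multi-head self-attention with an optional mask argument, which also illustrates masked self-attention. The class name, the `num_heads=8` default, and the causal-mask usage example are illustrative assumptions.

```python
import math
from typing import Optional

import torch
import torch.nn as nn


class MultiHeadSelfAttention(nn.Module):
    """Multi-head self-attention; pass a mask to get masked self-attention."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0, "dim must be divisible by num_heads"
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)  # project tokens to queries, keys, values
        self.proj = nn.Linear(dim, dim)     # merge the heads back to the model dimension

    def forward(self, x: torch.Tensor, mask: Optional[torch.Tensor] = None) -> torch.Tensor:
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)                     # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) / math.sqrt(self.head_dim)
        if mask is not None:                                     # masked self-attention
            attn = attn.masked_fill(mask == 0, float("-inf"))
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)        # concatenate the heads
        return self.proj(out)


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)
    causal = torch.tril(torch.ones(16, 16))  # lower-triangular causal mask
    print(MultiHeadSelfAttention(64)(x, causal).shape)  # torch.Size([2, 16, 64])
```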
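
The official CBAM tutorial code is at the link in the table above; this is only a minimal PyTorch sketch of the CBAM idea (channel attention followed by spatial attention). The class names, the reduction ratio of 16, and the 7x7 spatial kernel are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Channel attention: gate channels using avg- and max-pooled descriptors."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))  # (B, C, 1, 1)
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))   # (B, C, 1, 1)
        return torch.sigmoid(avg + mx)


class SpatialAttention(nn.Module):
    """Spatial attention: gate locations using channel-wise avg and max maps."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)   # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)  # (B, 1, H, W)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAM(nn.Module):
    """CBAM: apply channel attention first, then spatial attention."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.ca(x)
        return x * self.sa(x)


if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    print(CBAM(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```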