SPT_LSA_ViT - Implementation of Vision Transformer for Small-Size Datasets

Overview

Vision Transformer for Small-Size Datasets

Seung Hoon Lee and Seunghyun Lee and Byung Cheol Song | Paper

Inha University

Abstract

Recently, the Vision Transformer (ViT), which applied the transformer structure to the image classification task, has outperformed convolutional neural networks. However, the high performance of the ViT results from pre-training using a large-size dataset such as JFT-300M, and its dependence on a large dataset is interpreted as due to low locality inductive bias. This paper proposes Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA), which effectively solve the lack of locality inductive bias and enable it to learn from scratch even on small-size datasets. Moreover, SPT and LSA are generic and effective add-on modules that are easily applicable to various ViTs. Experimental results show that when both SPT and LSA were applied to the ViTs, the performance improved by an average of 2.96% in Tiny-ImageNet, which is a representative small-size dataset. Especially, Swin Transformer achieved an overwhelming performance improvement of 4.08% thanks to the proposed SPT and LSA.

Method

Shifted Patch Tokenization

[Figure: Shifted Patch Tokenization]
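
Per the paper, SPT concatenates each input image with four copies shifted diagonally by half the patch size, splits the result into non-overlapping patches, and applies Layer Norm followed by a linear projection. Below is a minimal PyTorch sketch of that idea; the class and argument names are illustrative, not the repo's API, and the exact shift/padding details may differ from the official code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def diag_shift(x, dy, dx):
    """Shift a (B, C, H, W) tensor by (dy, dx) pixels, zero-padding the vacated border."""
    _, _, H, W = x.shape
    x = F.pad(x, (max(dx, 0), max(-dx, 0), max(dy, 0), max(-dy, 0)))  # (left, right, top, bottom)
    return x[:, :, max(-dy, 0):max(-dy, 0) + H, max(-dx, 0):max(-dx, 0) + W]

class ShiftedPatchTokenization(nn.Module):
    """Sketch of SPT: concatenate the image with four half-patch diagonal shifts,
    split into non-overlapping patches, then LayerNorm followed by a linear projection."""
    def __init__(self, in_ch=3, patch_size=4, dim=192):
        super().__init__()
        self.patch_size = patch_size
        patch_dim = in_ch * 5 * patch_size ** 2    # original image + 4 shifted copies
        self.norm = nn.LayerNorm(patch_dim)        # note: LN sits before the projection in SPT
        self.proj = nn.Linear(patch_dim, dim)

    def forward(self, x):                          # x: (B, C, H, W)
        s = self.patch_size // 2
        shifted = [diag_shift(x, dy, dx)
                   for dy, dx in ((-s, -s), (-s, s), (s, -s), (s, s))]
        x = torch.cat([x] + shifted, dim=1)        # (B, 5C, H, W)
        x = F.unfold(x, self.patch_size, stride=self.patch_size)   # (B, 5C*P*P, num_patches)
        return self.proj(self.norm(x.transpose(1, 2)))              # (B, num_patches, dim)
```

Because the shifted copies leak neighboring pixels into every patch, each token embeds a wider receptive field, which is the extra locality inductive bias the paper targets.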

Locality Self-Attention

[Figure: Locality Self-Attention]
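
LSA modifies standard self-attention in two ways: the softmax temperature becomes a learnable parameter instead of the fixed sqrt(d_k), and the diagonal of the similarity matrix (each token's relation to itself) is masked out so attention concentrates on the other tokens. A hedged PyTorch sketch of the idea (not the repo's exact implementation):

```python
import torch
import torch.nn as nn

class LocalitySelfAttention(nn.Module):
    """Multi-head self-attention with a learnable temperature and
    diagonal masking of each token's attention to itself."""
    def __init__(self, dim=192, heads=3):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.out = nn.Linear(dim, dim)
        # learnable temperature, initialized to the usual sqrt(d_head) scaling
        self.temperature = nn.Parameter(torch.tensor((dim // heads) ** 0.5))

    def forward(self, x):                                    # x: (B, N, dim)
        B, N, _ = x.shape
        q, k, v = (t.reshape(B, N, self.heads, -1).transpose(1, 2)
                   for t in self.qkv(x).chunk(3, dim=-1))
        attn = (q @ k.transpose(-2, -1)) / self.temperature  # (B, heads, N, N)
        # diagonal masking: suppress self-token relations before the softmax
        diag = torch.eye(N, dtype=torch.bool, device=x.device)
        attn = attn.masked_fill(diag, float('-inf')).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, -1)   # (B, N, dim)
        return self.out(out)
```

For example, LocalitySelfAttention(dim=192, heads=3)(torch.randn(8, 65, 192)) returns an (8, 65, 192) tensor; both tweaks sharpen the attention distribution toward tokens other than the query itself.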

Model Performance

Small-Size Dataset Classification

Top-1 accuracy (%) on small-size datasets; "SL-" denotes the baseline with both SPT and LSA applied.

| Model   | FLOPs | CIFAR10 | CIFAR100 | SVHN  | Tiny-ImageNet |
|---------|-------|---------|----------|-------|---------------|
| ViT     | 189.8 | 93.58   | 73.81    | 97.82 | 57.07         |
| SL-ViT  | 199.2 | 94.53   | 76.92    | 97.79 | 61.07         |
| T2T     | 643.0 | 95.30   | 77.00    | 97.90 | 60.57         |
| SL-T2T  | 671.4 | 95.57   | 77.36    | 97.91 | 61.83         |
| CaiT    | 613.8 | 94.91   | 76.89    | 98.13 | 64.37         |
| SL-CaiT | 623.3 | 95.81   | 80.32    | 98.28 | 67.18         |
| PiT     | 279.2 | 94.24   | 74.99    | 97.83 | 60.25         |
| SL-PiT  | 322.9 | 95.88   | 79.00    | 97.93 | 62.91         |
| Swin    | 242.3 | 94.46   | 76.87    | 97.72 | 60.87         |
| SL-Swin | 284.9 | 95.93   | 79.99    | 97.92 | 64.95         |

Accuracy-Throughput Graph

[Figure: Accuracy-throughput graph]

How to train models

Pure ViT

python main.py --model vit 

SL-Swin

python main.py --model swin --is_LSA --is_SPT 

Citation

@misc{lee2021vision,
      title={Vision Transformer for Small-Size Datasets}, 
      author={Seung Hoon Lee and Seunghyun Lee and Byung Cheol Song},
      year={2021},
      eprint={2112.13492},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Comments
  • Patch size difference from paper

    Hello. First of all, thanks for sharing this great repository! I have a question. In Section 4.1.2 of the paper, where you describe the models, you mention using patch size 8 for the small-scale experiments (image size 32, I guess) and patch size 16 for Swin/PiT. But in the source code you set it to 4 for ViTs and to 2 for Swin/PiT when the image size is 32, or to 8 and 4, respectively, when the image size is larger. So which one did you actually use? https://github.com/aanna0701/SPT_LSA_ViT/blob/main/models/create_model.py#L9

    opened by arkel23 3
  • ImageNet issue

    Nice work! You use small but high-resolution networks for Tiny-ImageNet; for example, the patch size of ViT is 8 and the window size of Swin is 4. On ImageNet, however, a Swin window size of 4 does not work. Which parameters did you use for ImageNet? Could you please give me some details about ViT and Swin?

    opened by SY-Xuan 2
  • The position of Layer Norm in Patch Embedding Layer

    Interesting work! I have some questions about implementation details. In the original ViT and Swin, the Layer Norm in the patch embedding layer is applied after the linear projection, but in your proposed SPT you apply it before the linear projection. Have you done any ablation on the position of the Layer Norm in the patch embedding layer? Why do you put it before the linear projection?

    opened by SY-Xuan 2
  • ImageNet-1k pretrained model

    Hello,

    Thank you for your effort in publishing this repository online. I wonder whether you are planning to expand this repository by providing results and pre-trained models on the ImageNet-1k dataset.

    opened by canerozer 1
  • Validation accuracy is very low

    Hello! Your work is very nice, but when I train the model using your code, the validation accuracy is very low. I wonder why? Is there something I need to do to prepare Tiny-ImageNet?

    opened by 2225686820 7
  • stty size

    Hi, thanks for sharing. Could you explain the stty size call in the code _, term_width = os.popen('stty size', 'r').read().split()? I get ValueError: not enough values to unpack (expected 2, got 0).

    opened by jestland 1
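
    For the stty size error above, one hedged workaround (a sketch, not code from this repo) is to query the terminal width with Python's shutil.get_terminal_size, which falls back to a default size when no terminal is attached:

```python
import shutil

# Falls back to 80x24 when stdout is not attached to a terminal,
# avoiding the "not enough values to unpack" error from `stty size`.
term_width = shutil.get_terminal_size(fallback=(80, 24)).columns
```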
Owner
Lee SeungHoon