[Preprint] Escaping the Big Data Paradigm with Compact Transformers, 2021

Overview

Compact Transformers

Preprint Link: Escaping the Big Data Paradigm with Compact Transformers

By Ali Hassani[1]*, Steven Walton[1]*, Nikhil Shah[1], Abulikemu Abuduweili[1], Jiachen Li[1,2], and Humphrey Shi[1,2,3]

*Ali Hassani and Steven Walton contributed equally

In association with SHI Lab @ University of Oregon[1] and UIUC[2], and Picsart AI Research (PAIR)[3]

[Figure: model overview]

Abstract

With the rise of Transformers as the standard for language processing, and their advancements in computer vision, along with their unprecedented size and amounts of training data, many have come to believe that they are not suitable for small sets of data. This trend leads to great concerns, including but not limited to: limited availability of data in certain scientific domains and the exclusion of those with limited resources from research in the field. In this paper, we dispel the myth that transformers are "data-hungry" and therefore can only be applied to large sets of data. We show for the first time that with the right size and tokenization, transformers can perform head-to-head with state-of-the-art CNNs on small datasets. Our model eliminates the requirement for the class token and positional embeddings through a novel sequence pooling strategy and the use of convolutions. We show that compared to CNNs, our compact transformers have fewer parameters and MACs, while obtaining similar accuracies. Our method is flexible in terms of model size, and can have as few as 0.28M parameters while achieving reasonable results. It can reach an accuracy of 94.72% when training from scratch on CIFAR-10, which is comparable with modern CNN-based approaches and a significant improvement over previous Transformer-based models. Our simple and compact design democratizes transformers by making them accessible to those equipped with basic computing resources and/or dealing with important small datasets.

ViT-Lite: Lightweight ViT

In contrast to ViT, we show that an image is not always worth 16x16 words: the image patch size matters. Transformers are not in fact "data-hungry," as the ViT authors suggested, and smaller patches can be used to train efficiently on smaller datasets.
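
As a quick illustration of why patch size matters on small images, the toy Python snippet below (illustrative only, not part of the repository) counts how many tokens a ViT-style model sees for a 32x32 CIFAR image at different patch sizes; 4x4 patches give self-attention 64 positions to relate instead of just 4:

def num_patches(image_size: int, patch_size: int) -> int:
    # Non-overlapping square patches tile the image.
    return (image_size // patch_size) ** 2

for p in (16, 8, 4):
    print(f"patch {p}x{p}: {num_patches(32, p)} tokens")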

CVT: Compact Vision Transformers

Compact Vision Transformers better utilize information by pooling over the encoder's output sequence (sequence pooling), eliminating the need for the class token while achieving better accuracy.
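
The idea behind sequence pooling can be pictured with the minimal PyTorch sketch below; the class and attribute names are illustrative and may differ from the repository's exact implementation. A learned linear layer scores each output token, the scores are softmaxed over the sequence, and the tokens are combined as a weighted average in place of reading a class token:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqPool(nn.Module):
    # A learned linear layer scores each token; softmax over the sequence
    # turns the scores into weights; the weighted average of the tokens
    # replaces the class token as the input to the classifier head.
    def __init__(self, embed_dim: int):
        super().__init__()
        self.attention_pool = nn.Linear(embed_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, embed_dim)
        weights = F.softmax(self.attention_pool(x), dim=1)          # (B, N, 1)
        return torch.matmul(weights.transpose(1, 2), x).squeeze(1)  # (B, D)

pooled = SeqPool(embed_dim=256)(torch.randn(8, 64, 256))  # -> shape (8, 256)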

CCT: Compact Convolutional Transformers

Compact Convolutional Transformers not only use sequence pooling but also replace the patch embedding with a convolutional embedding, providing a better inductive bias and making positional embeddings optional. CCT achieves better accuracy than ViT-Lite and CVT and is more flexible with respect to input sizes.
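
A minimal sketch of such a convolutional tokenizer is shown below; the names, kernel sizes, and single conv block are assumptions for illustration rather than the repository's exact code (CCT variants stack one or more such blocks, as the C in the model names indicates):

import torch
import torch.nn as nn

class ConvTokenizer(nn.Module):
    # A small conv + ReLU + max-pool stack produces a feature map; its spatial
    # positions are flattened into the token sequence fed to the transformer
    # encoder, so no fixed patch grid is required.
    def __init__(self, in_channels: int = 3, embed_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, embed_dim, kernel_size=3,
                      stride=1, padding=1, bias=False),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) -> (batch, num_tokens, embed_dim)
        return self.conv(x).flatten(2).transpose(1, 2)

tokens = ConvTokenizer()(torch.randn(8, 3, 32, 32))  # -> (8, 256, 256): 16x16 tokens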

Comparison

How to run

Please make sure you're using the following PyTorch and torchvision versions:

torch==1.8.1
torchvision==0.9.1

Refer to PyTorch's Getting Started page for detailed instructions.

We recommend starting with our faster variant, CCT-2/3x2, which can be run with the following command. This is also the model we recommend if you are running on a CPU.

python main.py \
       --model cct_2 \
       --conv-size 3 \
       --conv-layers 2 \
       path/to/cifar10

If you would like to run our best-performing model (CCT-7/3x1) on CIFAR-10 on your machine, please use the following command.

python main.py \
       --model cct_7 \
       --conv-size 3 \
       --conv-layers 1 \
       path/to/cifar10

Results

Type can be read in the format L/PxC where L is the number of transformer layers, P is the patch/convolution size, and C (CCT only) is the number of convolutional layers.

Model     Type    CIFAR-10   CIFAR-100   # Params   MACs
ViT-Lite  7/4     91.38%     69.75%      3.717M     0.239G
ViT-Lite  6/4     90.94%     69.20%      3.191M     0.205G
CVT       7/4     92.43%     73.01%      3.717M     0.236G
CVT       6/4     92.58%     72.25%      3.190M     0.202G
CCT       2/3x2   89.17%     66.90%      0.284M     0.033G
CCT       4/3x2   91.45%     70.46%      0.482M     0.046G
CCT       6/3x2   93.56%     74.47%      3.327M     0.241G
CCT       7/3x2   93.65%     74.77%      3.853M     0.275G
CCT       7/3x1   94.72%     76.67%      3.760M     0.947G

Model zoo will be available soon.
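
For quick experimentation in Python rather than through main.py, the variants can also be instantiated from the repository's model factory functions, as the cct_7 call quoted in the issues below illustrates. The snippet here is only a hedged sketch: the import path and keyword values are assumptions, so adjust them to the actual repository layout.

import torch
from src import cct_7  # assumed import path; adjust to the repository layout

# CCT-7/3x1 for CIFAR-10-sized inputs: 32x32 images, 10 classes,
# a 3x3 convolutional tokenizer with a single conv layer.
model = cct_7(img_size=32, num_classes=10, kernel_size=3, n_conv_layers=1)
logits = model(torch.randn(1, 3, 32, 32))  # expected shape: (1, 10)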

Citation

@article{hassani2021escaping,
	title        = {Escaping the Big Data Paradigm with Compact Transformers},
	author       = {Ali Hassani and Steven Walton and Nikhil Shah and Abulikemu Abuduweili and Jiachen Li and Humphrey Shi},
	year         = 2021,
	url          = {https://arxiv.org/abs/2104.05704},
	eprint       = {2104.05704},
	archiveprefix = {arXiv},
	primaryclass = {cs.CV}
}
Comments
  • Experiment Result CCT_7

    Hi,

    Thank you for the wonderful paper. I have trained CCT_7/3x1 with the default settings; however, the results differ from the paper: CIFAR-10: 93.67, CIFAR-100: 73.15. I've tried many times and it never reaches 94.72% on CIFAR-10 or 76.67% on CIFAR-100. Could you please help me understand what causes the difference?

    Thanks.

    help wanted 
    opened by StephenEkaputra 12
  • Thank you for your nice work | Question on Flowers dataset

    Hi @alihassanijr,

    Many thanks for your super interesting work, and sharing the elegant code with the community.

    I am able to replicate your CIFAR-10 and CIFAR-100 results perfectly. But there is a large gap when it comes to the Flowers dataset.

    After running the following command:

    python train.py -c configs/datasets/flowers102.yml --model cct_7_7x2_224_sine ./data/flowers102 --log-wandb
    

    I am able to get only 62% accuracy. Please find the wandb report here. I am attaching the logs too:
    output.log

    The only change that I made to the code was to use the PyTorch dataloaders:

    from torchvision.datasets import Flowers102
    dataset_train = Flowers102(root=args.data_dir, split="train", download=True)
    dataset_eval = Flowers102(root=args.data_dir, split="test", download=True)
    

    I suspect this is a minor configuration issue for the Flowers dataset, since I am able to replicate the results on CIFAR-10 and CIFAR-100.

    Thanks again, and it would be very kind of you if you could help me.

    Thanks, Joseph

    opened by JosephKJ 10
  • AttributeError: 'TransformerClassifier' object has no attribute 'num_tokens'

    Hello, I got the error "AttributeError: 'TransformerClassifier' object has no attribute 'num_tokens'" when I use cct_7_7x2_224_sine and set pretrained=True

    bug 
    opened by XiaominLi1997 7
  • Transformer Encoder Code Similarity

    Hi @stevenwalton

    What is the difference between your TransformerEncoderLayer and the Transformer class from the original ViT implementation?

    Original ViT

    class Transformer(nn.Module):
        def __init__(self, dim, depth, heads, dim_head, mlp_dim, dropout):
            super().__init__()
            self.layers = nn.ModuleList([])  # They are using Residual
            for _ in range(depth):
                self.layers.append(nn.ModuleList([
                    Residual(PreNorm(dim, Attention(dim, heads=heads, dim_head=dim_head, dropout=dropout))),
                    # Here they implemented Residual
                    Residual(PreNorm(dim, FeedForward(dim, mlp_dim, dropout=dropout)))
                ]))
    
        def forward(self, x, mask=None):
            for attn, ff in self.layers:
                x = attn(x, mask=mask)  # Change in this part
                # embed()
                x = ff(x)
            return x
    
    

    Your Transformer

    class TransformerEncoderLayer(nn.Module):
        """
        Inspired by torch.nn.TransformerEncoderLayer and
        rwightman's timm package.
        """
    
        def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,
                     attention_dropout=0.1, drop_path_rate=0.1):
            super(TransformerEncoderLayer, self).__init__()
            self.pre_norm = nn.LayerNorm(d_model)
            self.self_attn = Attention(dim=d_model, num_heads=nhead,
                                       attention_dropout=attention_dropout, projection_dropout=dropout)
    
            self.linear1 = nn.Linear(d_model, dim_feedforward)
            self.dropout1 = nn.Dropout(dropout)
            self.norm1 = nn.LayerNorm(d_model)
            self.linear2 = nn.Linear(dim_feedforward, d_model)
            self.dropout2 = nn.Dropout(dropout)
    
            self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0 else nn.Identity()
    
            self.activation = F.gelu
    
        def forward(self, src: torch.Tensor, mask=None, *args, **kwargs) -> torch.Tensor:
            src = src + self.drop_path(self.self_attn(self.pre_norm(src)))
            src = self.norm1(src)
            src2 = self.linear2(self.dropout1(self.activation(self.linear1(src))))
            src = src + self.drop_path(self.dropout2(src2))
            return src
    

    Actually, I made some modifications to the original ViT and I want to add that modified part to your transformer encoder layer, but your code is structured differently in terms of TransformerEncoderLayer.

    question 
    opened by khawar-islam 7
  • Config for training Flowers SOTA

    Hi,

    I'm trying to figure out how to train your model to achieve the SOTA accuracy you report on Flowers102. It seems like using finetuned/cct_14-7x2_flowers102.yml will download the model with the 99.76% test accuracy you report, but I can't find any config files which actually train this model from scratch (or from e.g. an ImageNet checkpoint if you use that). Do you mind pointing me to any config files for this that I might have missed, or else to a description of the training procedure for your SOTA Flowers102 model so that I can try to reproduce it?

    Thanks for your help, Will

    opened by wsgharvey 6
  • cifar100 HP with RandAug

    Hi, can you share the hyperparameters your team used to train on CIFAR-100 with RandAugment and timm? I have followed the same steps but am still stuck at 76.9% and can't get any further (training for 300 epochs should reach approximately 80%). Thanks for your help.

    question 
    opened by Justin900429 6
  • Recommendation

    Thank you for sharing this amazing work. I am currently attempting to apply your ideas to a specific problem with bigger images, sized 128x128. Do you have any recommendations on how to improve the performance of your network on bigger images?

    question 
    opened by Babars7 6
  • Question about reproducing CIFAR-10 results

    Hi, thank you for this very clean open-source implementation!

    I've been testing out some modifications and noticed, even without the modifications, I'm not quite achieving the same accuracies as you report on CIFAR-10. I wondered if you had any suggestions about things that I might have missed.

    Specifically, I tried to reproduce your results using one of your configs with the command python train.py -c configs/pretrained/cct_7-3x1_cifar10_300epochs.yml --model cct_7_3x1_32 datasets/CIFAR-10-images/ --log-wandb. I copied the dataset from this github repo. I couldn't find details on whether you use a train/validation split so then trained on all 50000 training images (i.e. with no validation set) and tested on all 10000 test images. I used your validate function for computing the test accuracy (by renaming the test image folder so that it is loaded as a validation set). After the full 300 epochs, I obtained 93.21% test accuracy, rather than the 96.53% that I think you report for this config in the README. Please let me know if there's anything I should do differently to obtain these results - perhaps using a different train/test split, computing the test loss in a different way, turning on EMA averaging, or if there's anything else that I might have missed.

    I also tried computing test statistics in the same way after loading your pretrained cct_7_3x1_32 checkpoint (instead of training it) and got a lower test accuracy of 91.67%. So this makes me think that the issue is likely related to testing rather than training.

    Thanks!

    opened by wsgharvey 5
  • HyperParameters of cifar

    I want to reproduce your paper; however, I find that the weight decay and learning rate for CIFAR-10 in the *.yaml files differ from what the paper says. Could you please tell me which parameters I should use?

    opened by Holidays1999 5
  • What is config for 224 image size?

    How many image-size reductions should I have? Is this right?

    model = cct_7(img_size=im_size,
                  num_classes=classes,
                  positional_embedding='learnable',
                  n_conv_layers=2,
                  kernel_size=7,
                  stride=2,
                  padding=3,
                  pooling_kernel_size=3,
                  pooling_stride=2,
                  pooling_padding=1)
    
    question 
    opened by hadaev8 5
  • interpolation of imagenet

    Hi, sorry to trouble you again. I have successfully reproduced your results on CIFAR-10, and I plan to reproduce the results on ImageNet. I followed your configs for ImageNet, and my result is about 2% lower than yours.

    I would like to know the interpolation type used in your paper for ImageNet. Most papers use only bicubic, while your config uses "random" for ImageNet.

    Thanks very much!

    opened by Holidays1999 4
  • Question about the batch size

    Hi, this work is awesome. I just have one little question. The paper says the total batch size is 128 for the CIFARs and that 4 GPUs were used in parallel. That doesn't mean the total batch size is 128 * 4 = 512, does it? DDP is for ImageNet, and non-distributed is for CIFAR, am I correct?

    Thanks a ton :)

    opened by imhgchoi 0
  • Output of the CCT classifier

    Hi,

    I am a little confused about the output of the CCT. If I have a classification task with n possible classes, are the outputs the logits for each class, so that I have to apply a softmax to get the respective probabilities, or are the outputs already the probabilities?

    Thanks in advance

    opened by enrico310786 0
  • Fixed text tokenizer mask shape

    Hi,

    There was a small problem with the mask returned by the TextTokenizer forward function. The next function using this mask needs a 2D tensor; therefore, in TextTokenizer, the mask should not be unsqueezed before being returned.

    The problem is fixed in this pull request.

    opened by HosseinZaredar 0
  • change TextTokenizer 2DConvolution to 1D

    Hello,

    It seems more intuitive to use a 1D convolution here over the embedding, with the channel size equal to the word embedding dimension, rather than the edge case of a 2D convolution as is currently implemented. I would personally make this change to match other networks with similar convolutions over nn.Embeddings. I believe this makes no difference to performance; it is proposed for clarity. Thank you

    opened by simonlevine 1
  • Order of `LayerNorm` & `Residual`

    First of all, thanks for your amazing work!

    And it seems that your TransformerEncoderLayer implementation is a bit different from the 'mainstream' implementations, because you create your residual link after the LayerNorm procedure:

    https://github.com/SHI-Labs/Compact-Transformers/blob/3f3d093746bc58213d9e9af4431242d305717855/src/utils/transformers.py#L96-L99

    However, in the original ViT paper and many other implementations, the residual link is created before the LayerNorm:

    src = src + self.drop_path(self.self_attn(self.pre_norm(src)))
    src2 = self.norm1(src)
    src2 = self.linear2(self.dropout1(self.activation(self.linear1(src2))))
    src = src + self.drop_path(self.dropout2(src2))
    

    I'm just wondering whether this is on purpose or some kind of 'typo'? Thanks in advance!

    opened by carefree0910 1
Owner

SHI Lab: Research in Synergetic & Holistic Intelligence, with current focus on Computer Vision, Machine Learning, and AI Systems & Applications