Overview

VFormer

A PyTorch library for Vision Transformers

Getting Started

Read the contributing guidelines in CONTRIBUTING.rst to learn how to start contributing.

Comments
  • Add attention visualization methods

    • This article details different ways of visualizing a transformer's attention. It also discusses how such visualizations can aid the explainability of these models.
    • The authors also provide their code here.
    • We would like to have such visualization methods in the viz module (see the attention-rollout sketch after this list).

    good first issue
    opened by NeelayS 7
  • Remove _Projection class

    We can replace the _Projection class with a one-line if-else statement (see the sketch after this list).

    Should we replace it with the if-else, or keep the current implementation?

    cc: @NeelayS @aditya-agrawal-30502 @alvanli

    opened by abhi-glitchhg 6
  • Enhanced docstring

    During the last PR (#45), I had to revert because of compatibility issues.

    In this PR, I have added some docstrings and made minor changes, such as renaming variables.

    This PR is the same as #48, with an edited title :)

    @NeelayS

    opened by abhi-glitchhg 3
  • Restructuring AbsolutePositionEmbedding class

    The AbsolutePositionEmbedding class was structured specifically for PVT, but if we restructure it properly we can use it in other models too. It should also support sinusoidal position embeddings; alternatively, a separate class for sinusoidal embeddings would work (see the sinusoidal sketch after this list).

    enhancement
    opened by abhi-glitchhg 2
  • Add sharpness-aware optimizer

    This paper describes how promoting smoothness with a recently proposed sharpness-aware optimizer substantially improves the performance of ViTs.

    It would be good to have an implementation of this optimizer in our library; it would fit in the functional module (see the SAM sketch after this list).

    A couple of PyTorch implementations are here and here.

    opened by NeelayS 2
  • Documentation related to visualization methods

    I have added some fixes for page breaks in #86.

    Still, we need to enhance the docs for the visualization methods.
    We can include the license/copyright disclaimer for the visualization methods in our license, or keep it in a separate file.

    Additionally, we can add sample outputs from these methods to the docs.

    CC: @NeelayS @aditya-agrawal-30502 @alvanli

    documentation enhancement good first issue
    opened by abhi-glitchhg 1
  • [Paper] Visual Attention Network

    Paper: https://arxiv.org/abs/2202.09741
    Code: https://github.com/Visual-Attention-Network/VAN-Classification and https://github.com/Visual-Attention-Network/VAN-Segmentation
    (See the LKA sketch after this list.)

    Paper implementation
    opened by abhi-glitchhg 0
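
For the attention-visualization issue above, a minimal sketch of attention rollout (Abnar & Zuidema, 2020), one widely used way to fuse per-layer attention maps into token-level relevance, could look like this. The function name and the surrounding viz-module API are assumptions for illustration, not VFormer's actual interface.

    import torch

    def attention_rollout(attentions):
        # `attentions`: list of per-layer attention tensors, each of shape
        # (batch, heads, tokens, tokens), collected from the softmax outputs.
        result = None
        for attn in attentions:
            attn = attn.mean(dim=1)                       # average over heads
            eye = torch.eye(attn.size(-1), device=attn.device)
            attn = (attn + eye) / 2                       # account for residual connections
            attn = attn / attn.sum(dim=-1, keepdim=True)  # re-normalize rows
            result = attn if result is None else attn @ result
        return result  # (batch, tokens, tokens)

For a ViT with a class token at index 0, attention_rollout(maps)[:, 0, 1:] then gives a per-patch relevance map that can be reshaped to the patch grid and overlaid on the input image.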
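
For the _Projection issue, the proposed one-liner could look like the sketch below; make_projection and its arguments are hypothetical, since the module that currently uses _Projection is not shown here.

    import torch.nn as nn

    def make_projection(dim_in: int, dim_out: int) -> nn.Module:
        # Proposed one-line if-else: project only when the dimensions
        # differ, otherwise pass the tokens through unchanged.
        return nn.Linear(dim_in, dim_out) if dim_in != dim_out else nn.Identity()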
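
For the AbsolutePositionEmbedding issue, a separate sinusoidal class could follow the standard "Attention Is All You Need" formulation, as in this sketch. The class name and constructor signature are assumptions, and an even embed_dim is assumed.

    import math
    import torch
    import torch.nn as nn

    class SinusoidalPositionEmbedding(nn.Module):
        def __init__(self, num_tokens: int, embed_dim: int):
            super().__init__()
            position = torch.arange(num_tokens).unsqueeze(1)  # (N, 1)
            div_term = torch.exp(
                torch.arange(0, embed_dim, 2) * (-math.log(10000.0) / embed_dim)
            )
            pe = torch.zeros(num_tokens, embed_dim)
            pe[:, 0::2] = torch.sin(position * div_term)  # even channels
            pe[:, 1::2] = torch.cos(position * div_term)  # odd channels
            self.register_buffer("pe", pe.unsqueeze(0))   # (1, N, D), not learned

        def forward(self, x):
            # x: (batch, num_tokens, embed_dim)
            return x + self.pe[:, : x.size(1)]

Because the table is fixed rather than learned, it adds no parameters and works for any sequence length up to num_tokens.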
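
For the sharpness-aware optimizer issue, the two-step structure of SAM (Foret et al., 2021) is sketched below, loosely following the publicly available PyTorch implementations linked in the issue. The class and method names are illustrative, not an existing VFormer API.

    import torch

    class SAM(torch.optim.Optimizer):
        def __init__(self, params, base_optimizer_cls, rho=0.05, **kwargs):
            super().__init__(params, dict(rho=rho, **kwargs))
            # The base optimizer shares this wrapper's parameter groups.
            self.base_optimizer = base_optimizer_cls(self.param_groups, **kwargs)

        @torch.no_grad()
        def first_step(self):
            # Climb to the worst-case weights w + e(w) inside an L2 ball of radius rho.
            grad_norm = torch.norm(torch.stack([
                p.grad.norm(p=2)
                for group in self.param_groups
                for p in group["params"] if p.grad is not None
            ]), p=2)
            for group in self.param_groups:
                scale = group["rho"] / (grad_norm + 1e-12)
                for p in group["params"]:
                    if p.grad is None:
                        continue
                    e_w = p.grad * scale
                    p.add_(e_w)
                    self.state[p]["e_w"] = e_w

        @torch.no_grad()
        def second_step(self):
            # Undo the perturbation, then update with the sharpness-aware gradient.
            for group in self.param_groups:
                for p in group["params"]:
                    if p.grad is None:
                        continue
                    p.sub_(self.state[p]["e_w"])
            self.base_optimizer.step()

Each batch then needs two forward/backward passes: backward, first_step(), zero_grad(), a second backward at the perturbed weights, then second_step().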
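
For the Visual Attention Network paper issue, the paper's core module, Large Kernel Attention (LKA), decomposes a 21x21 convolution into a 5x5 depth-wise convolution, a 7x7 depth-wise convolution with dilation 3, and a 1x1 convolution, and uses the result as a multiplicative attention map. A sketch, independent of the official code linked above:

    import torch.nn as nn

    class LargeKernelAttention(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            self.dw_conv = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
            self.dw_dilated = nn.Conv2d(dim, dim, 7, padding=9, groups=dim, dilation=3)
            self.pointwise = nn.Conv2d(dim, dim, 1)

        def forward(self, x):
            # x: (batch, channels, height, width)
            attn = self.pointwise(self.dw_dilated(self.dw_conv(x)))
            return x * attn  # element-wise gating rather than softmax attention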
Releases (v0.1.3)
Owner
Society for Artificial Intelligence and Deep Learning
Related Projects
  • PyTorch Implementation of CvT: Introducing Convolutions to Vision Transformers
    Rishikesh (ऋषिकेश) 193 Jan 3, 2023
  • PyTorch code for training Vision Transformers with the self-supervised learning method DINO
    Facebook Research 4.2k Jan 3, 2023
  • PyTorch code for Robust Vision Transformers
    null 117 Dec 7, 2022
  • Official PyTorch implementation of Less is More: Pay Less Attention in Vision Transformers
    null 73 Jan 1, 2023
  • PyTorch evaluation code for Delving Deep into the Generalization of Vision Transformers under Distribution Shifts
    Chongzhi Zhang 72 Dec 13, 2022
  • A PyTorch implementation of ViTGAN, based on the paper ViTGAN: Training GANs with Vision Transformers
    Hong-Jia Chen 127 Dec 23, 2022
  • Many Class Activation Map methods implemented in PyTorch for CNNs and Vision Transformers, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM, and XGrad-CAM (pip install grad-cam)
    Jacob Gildenblat 6.6k Jan 6, 2023
  • Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
    Phil Wang 12.6k Jan 9, 2023
  • Spacetimeformer: multivariate time series forecasting with efficient Transformers; code for the paper "Long-Range Transformers for Dynamic Spatiotemporal Forecasting"
    QData 440 Jan 2, 2023
  • Implementations of various Vision Transformers the author found interesting
    Kim Seonghyeon 78 Dec 6, 2022
  • Twins: Revisiting the Design of Spatial Attention in Vision Transformers
    null 482 Dec 18, 2022
  • Exploring whether attention is necessary for vision transformers (Do You Even Need Attention? A Stack of Feed-Forward Layers Does Surprisingly Well on ImageNet)
    Luke Melas-Kyriazi 461 Jan 7, 2023
  • Code for the paper "Vision Transformers are Robust Learners"
    Sayak Paul 103 Jan 5, 2023
  • Official implementation of CvT: Introducing Convolutions to Vision Transformers
    Microsoft 408 Dec 30, 2022
  • Official implementation of CvT: Introducing Convolutions to Vision Transformers
    Bin Xiao 175 Jan 8, 2023
  • Official repository for "Intriguing Properties of Vision Transformers" (2021)
    Muzammal Naseer 155 Dec 27, 2022
  • DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification
    Yongming Rao 414 Jan 1, 2023
  • Official repository for "On Improving Adversarial Transferability of Vision Transformers" (2021)
    Muzammal Naseer 47 Dec 2, 2022
  • [Preprint] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
    VITA 64 Dec 8, 2022