Awesome Transformer with Computer Vision (CV)

Overview

Awesome Visual-Transformer

A collection of Transformer with Computer Vision (CV) papers.

If you find any overlooked papers, please open an issue or a pull request (recommended).

Papers

Transformer original paper: Attention Is All You Need

Technical blog

  • [Chinese Blog] A 30,000-word long-form introduction to vision Transformers [Link]
  • [Chinese Blog] A detailed walkthrough of Vision Transformer (principle analysis + code explanation) [Link]

Survey

  • A Survey of Visual Transformers [paper] - 2021.11.30
  • Transformers in Vision: A Survey [paper] - 2021.02.22
  • A Survey on Visual Transformer [paper] - 2021.01.30
  • A Survey of Transformers [paper] - 2021.06.08

arXiv papers

  • [Discrete ViT] Discrete Representations Strengthen Vision Transformer Robustness [paper]
  • [StyleSwin] StyleSwin: Transformer-based GAN for High-resolution Image Generation [paper][code]
  • [SReT] Sliced Recursive Transformer [paper] [code]
  • Fast Point Transformer [paper]
  • Dynamic Token Normalization Improves Vision Transformer [paper]
  • TokenLearner: What Can 8 Learned Tokens Do for Images and Videos? [paper] [code]
  • Swin Transformer V2: Scaling Up Capacity and Resolution [paper]
  • [Restormer] Restormer: Efficient Transformer for High-Resolution Image Restoration [paper]
  • [MAE] Masked Autoencoders Are Scalable Vision Learners [paper]
  • Improved Robustness of Vision Transformer via PreLayerNorm in Patch Embedding [paper]
  • [ORViT] Object-Region Video Transformers [paper] [code]
  • Adaptively Multi-view and Temporal Fusing Transformer for 3D Human Pose Estimation [paper] [code]
  • [NViT] NViT: Vision Transformer Compression and Parameter Redistribution [paper]
  • 6D-ViT: Category-Level 6D Object Pose Estimation via Transformer-based Instance Representation Learning [paper]
  • Adversarial Token Attacks on Vision Transformers [paper]
  • Contextual Transformer Networks for Visual Recognition [paper] [code]
  • [TranSalNet] TranSalNet: Visual saliency prediction using transformers [paper]
  • [MobileViT] MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer [paper]
  • A free lunch from ViT: Adaptive Attention Multi-scale Fusion Transformer for Fine-grained Visual Recognition [paper]
  • [3D-Transformer] 3D-Transformer: Molecular Representation with Transformer in 3D Space [paper]
  • [CCTrans] CCTrans: Simplifying and Improving Crowd Counting with Transformer [paper]
  • [UFO-ViT] UFO-ViT: High Performance Linear Vision Transformer without Softmax [paper]
  • Sparse Spatial Transformers for Few-Shot Learning [paper]
  • Vision Transformer Hashing for Image Retrieval [paper]
  • [OH-Former] OH-Former: Omni-Relational High-Order Transformer for Person Re-Identification [paper]
  • [Pix2seq] Pix2seq: A Language Modeling Framework for Object Detection [paper]
  • [CoAtNet] CoAtNet: Marrying Convolution and Attention for All Data Sizes [paper]
  • [LOTR] LOTR: Face Landmark Localization Using Localization Transformer [paper]
  • Transformer-Unet: Raw Image Processing with Unet [paper]
  • [GraFormer] GraFormer: Graph Convolution Transformer for 3D Pose Estimation [paper]
  • [CDTrans] CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation [paper]
  • PQ-Transformer: Jointly Parsing 3D Objects and Layouts from Point Clouds [paper] [code]
  • Anchor DETR: Query Design for Transformer-Based Detector [paper] [code]
  • [ESRT] Efficient Transformer for Single Image Super-Resolution [paper]
  • [MaskFormer] MaskFormer: Per-Pixel Classification is Not All You Need for Semantic Segmentation [paper] [code]
  • [SwinIR] SwinIR: Image Restoration Using Swin Transformer [paper] [code]
  • [Trans4Trans] Trans4Trans: Efficient Transformer for Transparent Object and Semantic Scene Segmentation in Real-World Navigation Assistance [paper]
  • Do Vision Transformers See Like Convolutional Neural Networks? [paper]
  • Boosting Salient Object Detection with Transformer-based Asymmetric Bilateral U-Net [paper]
  • Light Field Image Super-Resolution with Transformers [paper] [code]
  • Focal Self-attention for Local-Global Interactions in Vision Transformers [paper] [code]
  • Polyp-PVT: Polyp Segmentation with Pyramid Vision Transformers [paper] [code]
  • Mobile-Former: Bridging MobileNet and Transformer [paper]
  • [TriTransNet] TriTransNet: RGB-D Salient Object Detection with a Triplet Transformer Embedding Network [paper]
  • [PSViT] PSViT: Better Vision Transformer via Token Pooling and Attention Sharing [paper]
  • Boosting Few-shot Semantic Segmentation with Transformers [paper] [code]
  • Congested Crowd Instance Localization with Dilated Convolutional Swin Transformer [paper]
  • Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer [paper]
  • [CrossFormer] CrossFormer: A Versatile Vision Transformer Based on Cross-scale Attention [paper] [code]
  • [Styleformer] Styleformer: Transformer based Generative Adversarial Networks with Style Vector [paper] [code]
  • [CMT] CMT: Convolutional Neural Networks Meet Vision Transformers [paper]
  • [TransAttUnet] TransAttUnet: Multi-level Attention-guided U-Net with Transformer for Medical Image Segmentation [paper]
  • TransClaw U-Net: Claw U-Net with Transformers for Medical Image Segmentation [paper]
  • [ViTGAN] ViTGAN: Training GANs with Vision Transformers [paper]
  • What Makes for Hierarchical Vision Transformer? [paper]
  • CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows [paper] [code]
  • [Trans4Trans] Trans4Trans: Efficient Transformer for Transparent Object Segmentation to Help Visually Impaired People Navigate in the Real World [paper]
  • [FFVT] Feature Fusion Vision Transformer for Fine-Grained Visual Categorization [paper]
  • [TransformerFusion] TransformerFusion: Monocular RGB Scene Reconstruction using Transformers [paper]
  • Escaping the Big Data Paradigm with Compact Transformers [paper]
  • Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks [paper]
  • [XCiT] XCiT: Cross-Covariance Image Transformers [paper] [code]
  • Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer [paper] [code]
  • Video Swin Transformer [paper] [code]
  • [VOLO] VOLO: Vision Outlooker for Visual Recognition [paper] [code]
  • Transformer Meets Convolution: A Bilateral Awareness Network for Semantic Segmentation of Very Fine Resolution Urban Scene Images [paper]
  • [P2T] P2T: Pyramid Pooling Transformer for Scene Understanding [paper]
  • End-to-end Temporal Action Detection with Transformer [paper] [code]
  • How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers [paper]
  • Efficient Self-supervised Vision Transformers for Representation Learning [paper]
  • Space-time Mixing Attention for Video Transformer [paper]
  • Transformed CNNs: recasting pre-trained convolutional layers with self-attention [paper]
  • [CAT] CAT: Cross Attention in Vision Transformer [paper]
  • Scaling Vision Transformers [paper]
  • [DETReg] DETReg: Unsupervised Pretraining with Region Priors for Object Detection [paper] [code]
  • Chasing Sparsity in Vision Transformers: An End-to-End Exploration [paper]
  • [MViT] MViT: Mask Vision Transformer for Facial Expression Recognition in the wild [paper]
  • Demystifying Local Vision Transformer: Sparse Connectivity, Weight Sharing, and Dynamic Weight [paper]
  • On Improving Adversarial Transferability of Vision Transformers [paper]
  • Fully Transformer Networks for Semantic Image Segmentation [paper]
  • Visual Transformer for Task-aware Active Learning [paper] [code]
  • Efficient Training of Visual Transformers with Small-Size Datasets [paper]
  • Reveal of Vision Transformers Robustness against Adversarial Attacks [paper]
  • Person Re-Identification with a Locally Aware Transformer [paper]
  • [Refiner] Refiner: Refining Self-attention for Vision Transformers [paper]
  • [ViTAE] ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias [paper]
  • Video Instance Segmentation using Inter-Frame Communication Transformers [paper]
  • Transformer in Convolutional Neural Networks [paper] [code]
  • [Uformer] Uformer: A General U-Shaped Transformer for Image Restoration [paper] [code]
  • Patch Slimming for Efficient Vision Transformers [paper]
  • [RegionViT] RegionViT: Regional-to-Local Attention for Vision Transformers [paper]
  • Associating Objects with Transformers for Video Object Segmentation [paper] [code]
  • Few-Shot Segmentation via Cycle-Consistent Transformer [paper]
  • Glance-and-Gaze Vision Transformer [paper] [code]
  • Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers [paper]
  • [DynamicViT] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification [paper] [code]
  • When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations [paper] [code]
  • Unsupervised Out-of-Domain Detection via Pre-trained Transformers [paper]
  • [TransMIL] TransMIL: Transformer based Correlated Multiple Instance Learning for Whole Slide Image Classification [paper]
  • [TransVOS] TransVOS: Video Object Segmentation with Transformers [paper]
  • [KVT] KVT: k-NN Attention for Boosting Vision Transformers [paper]
  • [MSG-Transformer] MSG-Transformer: Exchanging Local Spatial Information by Manipulating Messenger Tokens [paper] [code]
  • [SegFormer] SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers [paper] [code]
  • [SDNet] SDNet: multi-branch for single image deraining using Swin [paper] [code]
  • [DVT] Not All Images are Worth 16x16 Words: Dynamic Vision Transformers with Adaptive Sequence Length [paper]
  • [GazeTR] Gaze Estimation using Transformer [paper] [code]
  • Transformer-Based Deep Image Matching for Generalizable Person Re-identification [paper]
  • Less is More: Pay Less Attention in Vision Transformers [paper]
  • [FoveaTer] FoveaTer: Foveated Transformer for Image Classification [paper]
  • [TransDA] Transformer-Based Source-Free Domain Adaptation [paper] [code]
  • An Attention Free Transformer [paper]
  • [PTNet] PTNet: A High-Resolution Infant MRI Synthesizer Based on Transformer [paper]
  • [ResT] ResT: An Efficient Transformer for Visual Recognition [paper] [code]
  • [CogView] CogView: Mastering Text-to-Image Generation via Transformers [paper]
  • [NesT] Aggregating Nested Transformers [paper]
  • [TAPG] Temporal Action Proposal Generation with Transformers [paper]
  • Boosting Crowd Counting with Transformers [paper]
  • [COTR] COTR: Convolution in Transformer Network for End to End Polyp Detection [paper]
  • [TransVOD] End-to-End Video Object Detection with Spatial-Temporal Transformers [paper] [code]
  • Intriguing Properties of Vision Transformers [paper] [code]
  • Combining Transformer Generators with Convolutional Discriminators [paper]
  • Rethinking the Design Principles of Robust Vision Transformer [paper]
  • Vision Transformers are Robust Learners [paper] [code]
  • Manipulation Detection in Satellite Images Using Vision Transformer [paper]
  • [Swin-Unet] Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation [paper] [code]
  • Self-Supervised Learning with Swin Transformers [paper] [code]
  • [SCTN] SCTN: Sparse Convolution-Transformer Network for Scene Flow Estimation [paper]
  • [RelationTrack] RelationTrack: Relation-aware Multiple Object Tracking with Decoupled Representation [paper]
  • [VGTR] Visual Grounding with Transformers [paper]
  • [PST] Visual Composite Set Detection Using Part-and-Sum Transformers [paper]
  • [TrTr] TrTr: Visual Tracking with Transformer [paper] [code]
  • [MOTR] MOTR: End-to-End Multiple-Object Tracking with TRansformer [paper] [code]
  • Attention for Image Registration (AiR): an unsupervised Transformer approach [paper]
  • [TransHash] TransHash: Transformer-based Hamming Hashing for Efficient Image Retrieval [paper]
  • [ISTR] ISTR: End-to-End Instance Segmentation with Transformers [paper] [code]
  • [CAT] CAT: Cross-Attention Transformer for One-Shot Object Detection [paper]
  • [CoSformer] CoSformer: Detecting Co-Salient Object with Transformers [paper]
  • End-to-End Attention-based Image Captioning [paper]
  • [PMTrans] Pyramid Medical Transformer for Medical Image Segmentation [paper]
  • [HandsFormer] HandsFormer: Keypoint Transformer for Monocular 3D Pose Estimation of Hands and Object in Interaction [paper]
  • [GasHis-Transformer] GasHis-Transformer: A Multi-scale Visual Transformer Approach for Gastric Histopathology Image Classification [paper]
  • Emerging Properties in Self-Supervised Vision Transformers [paper]
  • [InTra] Inpainting Transformer for Anomaly Detection [paper]
  • [Twins] Twins: Revisiting Spatial Attention Design in Vision Transformers [paper] [code]
  • [MLMSPT] Point Cloud Learning with Transformer [paper]
  • Medical Transformer: Universal Brain Encoder for 3D MRI Analysis [paper]
  • [ConTNet] ConTNet: Why not use convolution and transformer at the same time? [paper] [code]
  • [DTNet] Dual Transformer for Point Cloud Analysis [paper]
  • Improve Vision Transformers Training by Suppressing Over-smoothing [paper] [code]
  • Transformer Meets DCFAM: A Novel Semantic Segmentation Scheme for Fine-Resolution Remote Sensing Images [paper]
  • [M3DeTR] M3DeTR: Multi-representation, Multi-scale, Mutual-relation 3D Object Detection with Transformers [paper] [code]
  • [Skeletor] Skeletor: Skeletal Transformers for Robust Body-Pose Estimation [paper]
  • [FaceT] Learning to Cluster Faces via Transformer [paper]
  • [MViT] Multiscale Vision Transformers [paper] [code]
  • [VATT] VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text [paper]
  • [So-ViT] So-ViT: Mind Visual Tokens for Vision Transformer [paper] [code]
  • Token Labeling: Training a 85.5% Top-1 Accuracy Vision Transformer with 56M Parameters on ImageNet [paper] [code]
  • [TransRPPG] TransRPPG: Remote Photoplethysmography Transformer for 3D Mask Face Presentation Attack Detection [paper]
  • [VideoGPT] VideoGPT: Video Generation using VQ-VAE and Transformers [paper]
  • [M2TR] M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [paper]
  • Transformer Transforms Salient Object Detection and Camouflaged Object Detection [paper]
  • [TransCrowd] TransCrowd: Weakly-Supervised Crowd Counting with Transformer [paper] [code]
  • Visual Transformer Pruning [paper]
  • Self-supervised Video Retrieval Transformer Network [paper]
  • Vision Transformer using Low-level Chest X-ray Feature Corpus for COVID-19 Diagnosis and Severity Quantification [paper]
  • [TransGAN] TransGAN: Two Transformers Can Make One Strong GAN [paper] [code]
  • Geometry-Free View Synthesis: Transformers and no 3D Priors [paper] [code]
  • [CoaT] Co-Scale Conv-Attentional Image Transformers [paper] [code]
  • [LocalViT] LocalViT: Bringing Locality to Vision Transformers [paper] [code]
  • [CIT] Cloth Interactive Transformer for Virtual Try-On [paper] [code]
  • Handwriting Transformers [paper]
  • [SiT] SiT: Self-supervised vIsion Transformer [paper] [code]
  • On the Robustness of Vision Transformers to Adversarial Examples [paper]
  • An Empirical Study of Training Self-Supervised Visual Transformers [paper]
  • A Video Is Worth Three Views: Trigeminal Transformers for Video-based Person Re-identification [paper]
  • [AOT-GAN] Aggregated Contextual Transformations for High-Resolution Image Inpainting [paper] [code]
  • Deepfake Detection Scheme Based on Vision Transformer and Distillation [paper]
  • [ATAG] Augmented Transformer with Adaptive Graph for Temporal Action Proposal Generation [paper]
  • [TubeR] TubeR: Tube-Transformer for Action Detection [paper]
  • [AAformer] AAformer: Auto-Aligned Transformer for Person Re-Identification [paper]
  • [TFill] TFill: Image Completion via a Transformer-Based Architecture [paper]
  • Group-Free 3D Object Detection via Transformers [paper] [code]
  • [STGT] Spatial-Temporal Graph Transformer for Multiple Object Tracking [paper]
  • Going deeper with Image Transformers [paper]
  • [Meta-DETR] Meta-DETR: Few-Shot Object Detection via Unified Image-Level Meta-Learning [paper] [code]
  • [DA-DETR] DA-DETR: Domain Adaptive Detection Transformer by Hybrid Attention [paper]
  • Robust Facial Expression Recognition with Convolutional Visual Transformers [paper]
  • Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers [paper]
  • Spatiotemporal Transformer for Video-based Person Re-identification [paper]
  • [TransUNet] TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation [paper] [code]
  • [CvT] CvT: Introducing Convolutions to Vision Transformers [paper] [code]
  • [TFPose] TFPose: Direct Human Pose Estimation with Transformers [paper]
  • [TransCenter] TransCenter: Transformers with Dense Queries for Multiple-Object Tracking [paper]
  • Face Transformer for Recognition [paper]
  • On the Adversarial Robustness of Visual Transformers [paper]
  • Understanding Robustness of Transformers for Image Classification [paper]
  • Lifting Transformer for 3D Human Pose Estimation in Video [paper]
  • [GSA-Net] Global Self-Attention Networks for Image Recognition [paper]
  • High-Fidelity Pluralistic Image Completion with Transformers [paper] [code]
  • [DPT] Vision Transformers for Dense Prediction [paper] [code]
  • [TransFG] TransFG: A Transformer Architecture for Fine-grained Recognition [paper]
  • [TimeSformer] Is Space-Time Attention All You Need for Video Understanding? [paper]
  • Multi-view 3D Reconstruction with Transformer [paper]
  • Can Vision Transformers Learn without Natural Images? [paper] [code]
  • End-to-End Trainable Multi-Instance Pose Estimation with Transformers [paper]
  • Instance-level Image Retrieval using Reranking Transformers [paper] [code]
  • [BossNAS] BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search [paper] [code]
  • [CeiT] Incorporating Convolution Designs into Visual Transformers [paper]
  • [DeepViT] DeepViT: Towards Deeper Vision Transformer [paper]
  • Enhancing Transformer for Video Understanding Using Gated Multi-Level Attention and Temporal Adversarial Training [paper]
  • 3D Human Pose Estimation with Spatial and Temporal Transformers [paper] [code]
  • [UNETR] UNETR: Transformers for 3D Medical Image Segmentation [paper]
  • Scalable Visual Transformers with Hierarchical Pooling [paper]
  • [ConViT] ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases [paper]
  • [TransMed] TransMed: Transformers Advance Multi-modal Medical Image Classification [paper]
  • [U-Transformer] U-Net Transformer: Self and Cross Attention for Medical Image Segmentation [paper]
  • [SpecTr] SpecTr: Spectral Transformer for Hyperspectral Pathology Image Segmentation [paper] [code]
  • [TransBTS] TransBTS: Multimodal Brain Tumor Segmentation Using Transformer [paper] [code]
  • [SSTN] SSTN: Self-Supervised Domain Adaptation Thermal Object Detection for Autonomous Driving [paper]
  • Transformer is All You Need: Multimodal Multitask Learning with a Unified Transformer [paper] [code]
  • [CPVT] Do We Really Need Explicit Position Encodings for Vision Transformers? [paper] [code]
  • Deepfake Video Detection Using Convolutional Vision Transformer [paper]
  • Training Vision Transformers for Image Retrieval [paper]
  • [VTN] Video Transformer Network [paper]
  • [T2T-ViT] Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet [paper] [code]
  • [BoTNet] Bottleneck Transformers for Visual Recognition [paper]
  • [CPTR] CPTR: Full Transformer Network for Image Captioning [paper]
  • Learn to Dance with AIST++: Music Conditioned 3D Dance Generation [paper] [code]
  • [Trans2Seg] Segmenting Transparent Object in the Wild with Transformer [paper] [code]
  • Investigating the Vision Transformer Model for Image Retrieval Tasks [paper]
  • [Trear] Trear: Transformer-based RGB-D Egocentric Action Recognition [paper]
  • [VisualSparta] VisualSparta: Sparse Transformer Fragment-level Matching for Large-scale Text-to-Image Search [paper]
  • [TrackFormer] TrackFormer: Multi-Object Tracking with Transformers [paper]
  • [LETR] Line Segment Detection Using Transformers without Edges [paper]
  • [TAPE] Transformer Guided Geometry Model for Flow-Based Unsupervised Visual Odometry [paper]
  • [TRIQ] Transformer for Image Quality Assessment [paper] [code]
  • [TransTrack] TransTrack: Multiple-Object Tracking with Transformer [paper] [code]
  • [DeiT] Training data-efficient image transformers & distillation through attention [paper] [code]
  • [Pointformer] 3D Object Detection with Pointformer [paper]
  • [ViT-FRCNN] Toward Transformer-Based Object Detection [paper]
  • [Taming-transformers] Taming Transformers for High-Resolution Image Synthesis [paper] [code]
  • [SceneFormer] SceneFormer: Indoor Scene Generation with Transformers [paper]
  • [PCT] PCT: Point Cloud Transformer [paper]
  • [METRO] End-to-End Human Pose and Mesh Reconstruction with Transformers [paper]
  • [PED] DETR for Pedestrian Detection [paper]
  • [C-Tran] General Multi-label Image Classification with Transformers [paper]
  • [TSP-FCOS] Rethinking Transformer-based Set Prediction for Object Detection [paper]
  • [ACT] End-to-End Object Detection with Adaptive Clustering Transformer [paper]
  • [STTR] Revisiting Stereo Depth Estimation From a Sequence-to-Sequence Perspective with Transformers [paper] [code]

2021

NeurIPS

  • Augmented Shortcuts for Vision Transformers [paper]
  • [YOLOS] You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection [paper] [code]
  • [CATs] Semantic Correspondence with Transformers [paper] [code]
  • [Moment-DETR] QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries [paper] [code]
  • Dual-stream Network for Visual Recognition [paper] [code]
  • [Container] Container: Context Aggregation Network [paper] [code]
  • [TNT] Transformer in Transformer [paper] [code]
  • T6D-Direct: Transformers for Multi-Object 6D Pose Direct Regression [paper]

ICCV

  • Swin Transformer: Hierarchical Vision Transformer using Shifted Windows (Marr Prize) [paper] [code]
  • [PoinTr] PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers (oral) [paper] [code]
  • Paint Transformer: Feed Forward Neural Painting with Stroke Prediction (oral) [paper] [code]
  • 3DVG-Transformer: Relation Modeling for Visual Grounding on Point Clouds [paper]
  • [THUNDR] THUNDR: Transformer-Based 3D Human Reconstruction With Markers [paper]
  • Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding [paper]
  • [PVT] Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions [paper] [code]
  • Spatial-Temporal Transformer for Dynamic Scene Graph Generation [paper]
  • [GLiT] GLiT: Neural Architecture Search for Global and Local Image Transformer [paper]
  • [TRAR] TRAR: Routing the Attention Spans in Transformer for Visual Question Answering [paper]
  • [UniT] UniT: Multimodal Multitask Learning With a Unified Transformer [paper] [code]
  • Stochastic Transformer Networks With Linear Competing Units: Application To End-to-End SL Translation [paper]
  • Transformer-Based Dual Relation Graph for Multi-Label Image Recognition [paper]
  • [LocalTrans] LocalTrans: A Multiscale Local Transformer Network for Cross-Resolution Homography Estimation [paper]
  • A Latent Transformer for Disentangled Face Editing in Images and Videos [paper] [code]
  • [GroupFormer] GroupFormer: Group Activity Recognition With Clustered Spatial-Temporal Transformer [paper]
  • Unified Questioner Transformer for Descriptive Question Generation in Goal-Oriented Visual Dialogue [paper]
  • [WB-DETR] WB-DETR: Transformer-Based Detector Without Backbone [paper]
  • The Animation Transformer: Visual Correspondence via Segment Matching [paper]
  • Relaxed Transformer Decoders for Direct Action Proposal Generation [paper]
  • [PPT-Net] Pyramid Point Cloud Transformer for Large-Scale Place Recognition [paper] [code]
  • Multimodal Co-Attention Transformer for Survival Prediction in Gigapixel Whole Slide Images [paper]
  • Uncertainty-Guided Transformer Reasoning for Camouflaged Object Detection [paper]
  • Image Harmonization With Transformer [paper] [code]
  • [COTR] COTR: Correspondence Transformer for Matching Across Images [paper]
  • [MUSIQ] MUSIQ: Multi-Scale Image Quality Transformer [paper]
  • Episodic Transformer for Vision-and-Language Navigation [paper]
  • [CrackFormer] CrackFormer: Transformer Network for Fine-Grained Crack Detection [paper]
  • [HiT] HiT: Hierarchical Transformer With Momentum Contrast for Video-Text Retrieval [paper]
  • Event-Based Video Reconstruction Using Transformer [paper]
  • [STVGBert] STVGBert: A Visual-Linguistic Transformer Based Framework for Spatio-Temporal Video Grounding [paper]
  • [HiFT] HiFT: Hierarchical Feature Transformer for Aerial Tracking [paper] [code]
  • [DocFormer] DocFormer: End-to-End Transformer for Document Understanding [paper]
  • [LeViT] LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference [paper] [code]
  • [SignBERT] SignBERT: Pre-Training of Hand-Model-Aware Representation for Sign Language Recognition [paper]
  • [VidTr] VidTr: Video Transformer Without Convolutions [paper]
  • [ACTOR] Action-Conditioned 3D Human Motion Synthesis with Transformer VAE [paper]
  • [Segmenter] Segmenter: Transformer for Semantic Segmentation [paper] [code]
  • [Visformer] Visformer: The Vision-friendly Transformer [paper] [code]
  • [PnP-DETR] PnP-DETR: Towards Efficient Visual Analysis with Transformers [paper] [code]
  • [VoTr] Voxel Transformer for 3D Object Detection [paper]
  • [TransVG] TransVG: End-to-End Visual Grounding with Transformers [paper]
  • [3DETR] An End-to-End Transformer Model for 3D Object Detection [paper] [code]
  • [Eformer] Eformer: Edge Enhancement based Transformer for Medical Image Denoising [paper]
  • [TransFER] TransFER: Learning Relation-aware Facial Expression Representations with Transformers [paper]
  • [Oriented RCNN] Oriented Object Detection with Transformer [paper]
  • [ViViT] ViViT: A Video Vision Transformer [paper]
  • [Stark] Learning Spatio-Temporal Transformer for Visual Tracking [paper] [code]
  • [CT3D] Improving 3D Object Detection with Channel-wise Transformer [paper]
  • [VST] Visual Saliency Transformer [paper]
  • [PiT] Rethinking Spatial Dimensions of Vision Transformers [paper] [code]
  • [CrossViT] CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification [paper] [code]
  • [PointTransformer] Point Transformer [paper]
  • [TS-CAM] TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization [paper] [code]
  • [VTs] Visual Transformers: Token-based Image Representation and Processing for Computer Vision [paper]
  • [TransDepth] Transformer-Based Attention Networks for Continuous Pixel-Wise Prediction [paper] [code]
  • [Conditional DETR] Conditional DETR for Fast Training Convergence [paper] [code]
  • [PIT] PIT: Position-Invariant Transform for Cross-FoV Domain Adaptation [paper] [code]
  • [SOTR] SOTR: Segmenting Objects with Transformers [paper] [code]
  • [SnowflakeNet] SnowflakeNet: Point Cloud Completion by Snowflake Point Deconvolution with Skip-Transformer [paper] [code]
  • [TransPose] TransPose: Keypoint Localization via Transformer [paper] [code]
  • [TransReID] TransReID: Transformer-based Object Re-Identification [paper] [code]
  • [CWT] Simpler is Better: Few-shot Semantic Segmentation with Classifier Weight Transformer [paper] [code]
  • Anticipative Video Transformer [paper] [code]
  • Rethinking and Improving Relative Position Encoding for Vision Transformer [paper] [code]
  • Vision Transformer with Progressive Sampling [paper] [code]
  • [SMCA] Fast Convergence of DETR with Spatially Modulated Co-Attention [paper] [code]
  • [AutoFormer] AutoFormer: Searching Transformers for Visual Recognition [paper] [code]

CVPR

  • Diverse Part Discovery: Occluded Person Re-identification with Part-Aware Transformer [paper]
  • [HOTR] HOTR: End-to-End Human-Object Interaction Detection with Transformers (oral) [paper]
  • [TransFuser] Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [paper] [code]
  • Pose Recognition with Cascade Transformers [paper]
  • Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning [paper]
  • [LoFTR] LoFTR: Detector-Free Local Feature Matching with Transformers [paper] [code]
  • Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers [paper]
  • [SETR] Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers [paper] [code]
  • [TransT] Transformer Tracking [paper] [code]
  • Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking (oral) [paper]
  • [VisTR] End-to-End Video Instance Segmentation with Transformers [paper]
  • Transformer Interpretability Beyond Attention Visualization [paper] [code]
  • [IPT] Pre-Trained Image Processing Transformer [paper]
  • [UP-DETR] UP-DETR: Unsupervised Pre-training for Object Detection with Transformers [paper]
  • [IQT] Perceptual Image Quality Assessment with Transformers (workshop) [paper]
  • High-Resolution Complex Scene Synthesis with Transformers (workshop) [paper]

ICML

  • Generative Video Transformer: Can Objects be the Words? [paper]
  • [GANsformer] Generative Adversarial Transformers [paper] [code]

ICRA

  • [NDT-Transformer] NDT-Transformer: Large-Scale 3D Point Cloud Localisation using the Normal Distribution Transform Representation [paper]

ICLR

  • [VTNet] VTNet: Visual Transformer Network for Object Goal Navigation [paper]
  • [Vision Transformer] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale [paper] [code]
  • [Deformable DETR] Deformable DETR: Deformable Transformers for End-to-End Object Detection [paper] [code]
  • [LambdaNetworks] LambdaNetworks: Modeling Long-Range Interactions Without Attention [paper] [code]

ACM MM

  • Video Transformer for Deepfake Detection with Incremental Learning [paper]
  • [HAT] HAT: Hierarchical Aggregation Transformers for Person Re-identification [paper]
  • Token Shift Transformer for Video Classification [paper] [code]
  • [DPT] DPT: Deformable Patch-based Transformer for Visual Recognition [paper] [code]

MICCAI

  • [UTNet] UTNet: A Hybrid Transformer Architecture for Medical Image Segmentation [paper]
  • [MedT] Medical Transformer: Gated Axial-Attention for Medical Image Segmentation [paper] [code]
  • [MCTrans] Multi-Compound Transformer for Accurate Biomedical Image Segmentation [paper]
  • [PNS-Net] Progressively Normalized Self-Attention Network for Video Polyp Segmentation [paper] [code]
  • [MBT-Net] A Multi-Branch Hybrid Transformer Network for Corneal Endothelial Cell Segmentation [paper]

ISIE

  • VT-ADL: A Vision Transformer Network for Image Anomaly Detection and Localization [paper]

CoRL

  • [DETR3D] DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries [paper]

IJCAI

  • Medical Image Segmentation using Squeeze-and-Expansion Transformers [paper]

IROS

  • [YOGO] You Only Group Once: Efficient Point-Cloud Processing with Token Representation and Relation Inference Module [paper] [code]
  • [PTT] PTT: Point-Track-Transformer Module for 3D Single Object Tracking in Point Clouds [paper] [code]

WACV

  • [LSTR] End-to-end Lane Shape Prediction with Transformers [paper] [code]

ICDAR

  • Vision Transformer for Fast and Efficient Scene Text Recognition [paper]

2020

  • [DETR] End-to-End Object Detection with Transformers (ECCV) [paper] [code]
  • [FPT] Feature Pyramid Transformer (ECCV) [paper] [code]
  • [TTSR] Learning Texture Transformer Network for Image Super-Resolution (CVPR) [paper] [code]
  • [STTN] Learning Joint Spatial-Temporal Transformations for Video Inpainting (ECCV) [paper] [code]

Acknowledgement

Thanks to Awesome-Crowd-Counting for the template.

Comments
  • Add CoFormer


    Hi, @dk-liang. Thanks for this great repository. Please add CoFormer.

    Collaborative Transformers for Grounded Situation Recognition

    Paper: https://arxiv.org/abs/2203.16518 Code: https://github.com/jhcho99/CoFormer

    This paper is accepted to CVPR 2022.

    opened by jhcho99 3
  • Add VideoMAE


    VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training paper: https://arxiv.org/abs/2203.12602 code: https://github.com/MCG-NJU/VideoMAE

    opened by yztongzhan 3
  • Add Augvit on NeurIPS 2021


    Thanks for your awesome paper list! Our paper 'Augmented Shortcuts for Vision Transformers' has been accepted by NeurIPS 2021. Could you add it to the paper list? Thanks.

    paper link: https://arxiv.org/abs/2106.15941

    opened by yehuitang 2
  • A kind reminder of the status of CrossFormer


    opened by cheerss 1
  • Please add RelViT


    Hi,

    Thanks for making this learning list; I have learned a lot from it. I just want to share one of our recent works on ViT, and I hope it can help the community through your platform:

    RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning (ICLR 2022) arxiv | code In this work, we explore vision transformers for many visual relational reasoning tasks, including HICO and zero-shot HICO. We further introduce concept-guided contrastive learning that helps these models master visual reasoning without massive pretraining or extra training data.

    opened by jeasinema 1
  • Add HGOnet [WACV 2022]


    Hi, @dk-liang, thanks for this great repository. Could you please consider adding HGOnet, which has been accepted at WACV 2022? Thanks in advance!

    Image-Adaptive Hint Generation via Vision Transformer for Outpainting paper: https://openaccess.thecvf.com/content/WACV2022/papers/Kong_Image-Adaptive_Hint_Generation_via_Vision_Transformer_for_Outpainting_WACV_2022_paper.pdf code: https://github.com/kdh4672/hgonet

    opened by kdh4672 1
  • Add TransFusion (BMVC 2021)


    Hi, @dk-liang, thanks for this great repository. Could you please consider adding TransFusion, which has been accepted at BMVC 2021? Thanks in advance!

    TransFusion: Cross-view Fusion with Transformer for 3D Human Pose Estimation paper: https://arxiv.org/abs/2110.09554 code: https://github.com/HowieMa/TransFusion-Pose

    opened by HowieMa 1
  • Add BatchFormer


    Hi @dk-liang, thanks for your awesome repository. Could you add BatchFormer, which has been accepted at CVPR 2022?

    arxiv: https://arxiv.org/abs/2203.01522 code: https://github.com/zhihou7/BatchFormer

    In addition, a more general version, BatchFormerV2, is also available at https://arxiv.org/abs/2204.01254, in which we design a new module and demonstrate consistent effectiveness on object detection, panoptic segmentation, and image classification.

    Regards,

    opened by zhihou7 1
  • Add ICT


    Hi, @dk-liang, please help add the below papers:

    [ICT] High-Fidelity Pluralistic Image Completion with Transformers [paper], [code], ICCV 2021

    [BEVT] BEVT: BERT Pretraining of Video Transformers [paper], [code], CVPR 2022

    [PeCo] PeCo: Perceptual Codebook for BERT Pre-training of Vision Transformers [paper]

    [MobileFormer] Mobile-Former: Bridging MobileNet and Transformer [paper], CVPR 2022

    opened by cddlyf 1
  • add UniFormer


    UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning

    Accepted by ICLR 2022 arxiv: https://arxiv.org/abs/2201.04676 code: https://github.com/Sense-X/UniFormer

    opened by Andy1621 1
  • add some papers


    Hi, here are some recent papers I read that are missing from the list:

    TokenLearner: What Can 8 Learned Tokens Do for Images and Videos? https://arxiv.org/pdf/2106.11297v2.pdf

    Sliced Recursive Transformer https://arxiv.org/pdf/2111.05297.pdf

    opened by fawazsammani 1
  • Paper Status of P2T: Pyramid Pooling Transformer


    Dear Dingkang,

    Thanks a lot for your project. Our paper P2T has recently been accepted by IEEE TPAMI 2022. Could you please update the status of P2T? BTW, the full code of P2T has also been released here: https://github.com/yuhuan-wu/P2T The IEEE online address is here: https://ieeexplore.ieee.org/document/9870559

    Best, Yu-Huan

    opened by yuhuan-wu 0
Owner: dkliang, a Master's student in the VLR group, HUST.