[CVPR 2021] VirTex: Learning Visual Representations from Textual Annotations

Overview

VirTex: Learning Visual Representations from Textual Annotations

Karan Desai and Justin Johnson
University of Michigan


CVPR 2021 arxiv.org/abs/2006.06666

Model Zoo, Usage Instructions and API docs: kdexd.github.io/virtex

VirTex is a pretraining approach which uses semantically dense captions to learn visual representations. We train a CNN + Transformer from scratch on COCO Captions, and transfer the CNN to downstream vision tasks including image classification, object detection, and instance segmentation. VirTex matches or outperforms models which use ImageNet for pretraining -- both supervised and unsupervised -- despite using up to 10x fewer images.

(Figure: VirTex model overview.)

Get the pretrained ResNet-50 visual backbone from our best performing VirTex model in one line without any installation!

import torch

# That's it, this one line only requires PyTorch.
model = torch.hub.load("kdexd/virtex", "resnet50", pretrained=True)
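
Once loaded, the backbone can be used like any torchvision-style ResNet-50, e.g. to pull out image features. A minimal sketch continuing from the snippet above (it assumes the hub entry returns a standard torchvision ResNet-50 module; the 224x224 input size and ImageNet normalization statistics are conventional defaults, not values taken from this repository):

import torch

model = torch.hub.load("kdexd/virtex", "resnet50", pretrained=True)
model.eval()

# A dummy batch of one RGB image, normalized with the usual ImageNet statistics.
image = torch.rand(1, 3, 224, 224)
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
image = (image - mean) / std

# Drop the final fully-connected layer (a common torchvision idiom) to get the
# 2048-d global average pooled features used for downstream transfer.
backbone = torch.nn.Sequential(*list(model.children())[:-1])
with torch.no_grad():
    features = backbone(image).flatten(1)  # shape: (1, 2048)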

Note (For returning users before January 2021):

The pretrained models in our model zoo have changed from v1.0 onwards. They are slightly better tuned than older models, and reproduce the results in our CVPR 2021 accepted paper (arXiv v2). Some training and evaluation hyperparameters have changed since v0.9; please refer to CHANGELOG.md.

Usage Instructions

  1. How to setup this codebase?
  2. VirTex Model Zoo
  3. How to train your VirTex model?
  4. How to evaluate on downstream tasks?

Full documentation is available at kdexd.github.io/virtex.

Citation

If you find this code useful, please consider citing:

@inproceedings{desai2021virtex,
    title={{VirTex: Learning Visual Representations from Textual Annotations}},
    author={Karan Desai and Justin Johnson},
    booktitle={CVPR},
    year={2021}
}

Acknowledgments

We thank Harsh Agrawal, Mohamed El Banani, Richard Higgins, Nilesh Kulkarni and Chris Rockwell for helpful discussions and feedback on the paper. We thank Ishan Misra for discussions regarding PIRL evaluation protocol; Saining Xie for discussions about replicating iNaturalist evaluation as MoCo; Ross Girshick and Yuxin Wu for help with Detectron2 model zoo; Georgia Gkioxari for suggesting the Instance Segmentation pretraining task ablation; and Stefan Lee for suggestions on figure aesthetics. We thank Jia Deng for access to extra GPUs during project development; and UMich ARC-TS team for support with GPU cluster management. Finally, we thank all the Starbucks outlets in Ann Arbor for many hours of free WiFi. This work was partially supported by the Toyota Research Institute (TRI). However, note that this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity.

Comments
  • run on single input image

    Hi,

    I would like to evaluate your work on a single image for image captioning. Can you tell me the steps I should follow for a single input? For instance, given a folder of images, how would I use your model for inference only on the folder of images?

    Looking at captioning-task from your description, I am not sure how to go about using my own dataset for evaluation of the model.

    Thanks

    opened by nikky4D 15
  • Training loss acts strangely after resuming

    Hi,

    I want to reproduce your pre-training result. There was an accident that interrupted my training. I restored it with the flag "--resume-from" and it acts weirdly: the training and validation loss jumped dramatically at the beginning and then decreased, which suggests there is a problem with the restoring. Could you help me with this?

    opened by BaohaoLiao 11
  • BatchNormalization's Running Stats are Accumulated in ImageNet Linear Evaluation

    Hi,

    Thanks for the nice paper and clear code!

    I found that the models are set with .train() in clf_linear.py. Thus the running averages (i.e., the states) of the BatchNormalization layers will be accumulated when training on the ImageNet dataset (via the forward calls), and the backbone model seems not to be fully frozen. Is this a special design for this fine-tuning task? (A generic sketch of freezing BatchNorm statistics follows this comment.)

    Best, Hao

    opened by airsplay 6
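
    For context, fully freezing a backbone during linear evaluation usually means both disabling gradient updates and keeping BatchNorm in eval mode so its running statistics stay fixed; a generic PyTorch sketch of that convention (not the actual code in clf_linear.py):

    import torch

    backbone = torch.hub.load("kdexd/virtex", "resnet50", pretrained=True)

    # Stop gradient updates for every backbone parameter.
    for param in backbone.parameters():
        param.requires_grad = False

    # Keep BatchNorm running statistics fixed. Calling .train() on the full model
    # later would switch BN back to updating its stats, so eval mode has to be
    # re-applied (or the BN modules frozen individually) before each training epoch.
    backbone.eval()

    # Only this linear classifier on the pooled 2048-d features is trained.
    linear_probe = torch.nn.Linear(2048, 1000)
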
  • Removed link for pretrained model

    Hi,

    I am trying to download the pretrained model for image captioning, but the download link has been removed. Could you please update the download link?

    opened by zhuang93 4
  • unable to find a valid cuDNN algorithm to run convolution

    Sorry to bother you, but I ran into this problem and cannot find a way to fix it. It happens when I train the base VirTex model. I updated cuDNN from version 7.6.5 to 8.0.3; both versions give this error.

    opened by Charlie-zhang1406 4
  • Question about SentencePiece [SOS] and [EOS] ID.

    Hi, I saw that in SentencePieceTrainer you disable the default BOS and EOS IDs ("--bos_id=-1 --eos_id=-1") and instead add [SOS], [EOS], and [MASK] as control symbols ("--control_symbols=[SOS],[EOS],[MASK]"). However, during captioning you define sos_index: int = 1 and eos_index: int = 2. I am wondering whether these setups have any effect?

    opened by nooralahzadeh 4
  • No loss when pretraining on token classification

    I am trying to pretrain using the token classification method. I copied this repo and was just trying to reproduce the results from the study. I am experiencing problems when pretraining using token classification. It seems as though the loss values are not in the output_dict variable.

    When I use pretrain_virtex.py and log every 20 iterations, I get the following output. 2021-11-16T12:20:04.960052+0000: Iter 20 | Time: 0.764 sec | ETA: 54h 39m [Loss nan] [GPU 8774 MB]

    Do you have any idea what could be wrong in the code?

    opened by alexkern1997 3
  • Possible inconsistency in data preprocessing

    Hi, thank you so much for sharing this code. It is very helpful.

    However, I am confused about the data preprocessing configuration. In the config files, a Caffe-style image mean and std is specified, but it seems they are not used in the code. Instead, the code seems to hard-code torchvision-style mean and std (here). Can you confirm that both pretraining and fine-tuning use the latter? (A short illustration of the two conventions follows this comment.)

    Furthermore, I am not sure whether the images are in 0-255 range or 0-1. For Caffe-style mean and std, it should be 0-255, but it seems with your hard-coded mean and std, it should be 0-1. However, I noticed you are using opencv to load images, which loads in 0-255, and I did not find anywhere in the code that they are transformed into 0-1, except in supervised pretraining (here).

    Could you please comment on the aforementioned issues? Especially it is important to make sure the config is identical for all pretraining and downstream settings. Since you fine-tune all layers and don't freeze the stem, it is hard to notice if such inconsistencies exist, because the fine-tuning process would fix them to some extent.

    Thank you so much.

    opened by alirezazareian 3
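
    For reference, the two preprocessing conventions discussed above differ in both value range and statistics; a small illustration (the torchvision numbers are the standard ImageNet statistics and the Caffe-style numbers are commonly used BGR pixel means, neither is taken from this repository's configs):

    import numpy as np

    # An image as loaded by OpenCV: BGR channel order, values in 0-255.
    image_bgr = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)

    # Torchvision-style: convert to RGB, scale to 0-1, then normalize per channel.
    tv_mean = np.array([0.485, 0.456, 0.406])
    tv_std = np.array([0.229, 0.224, 0.225])
    image_tv = (image_bgr[..., ::-1] / 255.0 - tv_mean) / tv_std

    # Caffe-style: keep the 0-255 range and BGR order, subtract per-channel means
    # (often with a std of 1, i.e. no division).
    caffe_mean = np.array([103.53, 116.28, 123.675])
    image_caffe = image_bgr - caffe_mean
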
  • Pre-training on another dataset

    Hi,

    Thank you for making this code public!

    I want to pre-train a captioning model on another dataset (ARCH dataset). I went through your codebase and realized that first I need to create a Dataset class for my dataset similar to your Dataset class in virtex/data/datasets/coco_captions.py. Next, I will need to make a modified version of virtex/data/datasets/captioning.py.

    Somehow the files in virtex/data/datasets/ are all ignored by git and I can't make any of them become visible. Can you please help me with it? I would also appreciate any suggestions on how to modify the code at this stage in order to cause the least amount of disruption to the functions and classes which rely on the Dataset classes.

    Many thanks, George Batchkala

    opened by GeorgeBatch 2
  • The weight file on http://kdexd.xyz/virtex/virtex/usage/model_zoo.html was canceled

    Hello, your work is very attractive to me, but when I tried to reproduce your excellent results, I found that the weight file on http://kdexd.xyz/virtex/virtex/usage/model_zoo.html had been removed. I hope you can provide working links to the weight files so the work can be reproduced.

    opened by hubin111 2
  • torch.hub.load("kdexd/virtex", "resnet50", pretrained=True) not working

    I tried running this in Colab environment.

    Got the below error:

    KeyError                                  Traceback (most recent call last)
    
    <ipython-input-5-e8ec27705300> in <module>()
          1 import torch
          2 # model = torch.hub.load('pytorch/vision:v0.9.0', 'alexnet', pretrained=True)
    ----> 3 model = torch.hub.load("kdexd/virtex", "resnet50", pretrained=True)
          4 model.eval()
    
    2 frames
    
    /root/.cache/torch/hub/kdexd_virtex_master/hubconf.py in resnet50(pretrained, **kwargs)
         31                 "https://umich.box.com/shared/static/gsjqm4i4fm1wpzi947h27wweljd8gcpy.pth",
         32                 progress=False,
    ---> 33             )["model"]
         34         )
         35     return model
    
    KeyError: 'model'
    

    Can you let me know the fix ?

    opened by Sumegh-git 2
  • Cog version

    "😵 Uh oh! This model can't be run on Replicate because it was built with a version of Cog that is no longer supported." https://replicate.com/kdexd/virtex-image-captioning

    opened by Jakeukalane 0
  • Training with new Random Seed does not shuffle data

    I've been adapting the example scripts to my own training task, and I've noticed that the scripts do not handle different random seeds as expected. I've found this problem in two places, but there might be more:

    https://github.com/kdexd/virtex/blob/2baba8a4f3a4d80d617b3bc59e4be25b1052db57/scripts/clf_linear.py#L104-L109 https://github.com/kdexd/virtex/blob/2baba8a4f3a4d80d617b3bc59e4be25b1052db57/scripts/pretrain_virtex.py#L68

    The problem is that the DistributedSampler (from PyTorch 1.9.0) requires the kwarg "seed" to shuffle differently when shuffle=True. I believe that the correct use of DistributedSampler for training with different random seeds would be to add the kwarg seed=_DOWNC.RANDOM_SEED when DistributedSampler is initialized in these two places. As for reshuffling on additional epochs, DistributedSampler adds the seed to the epoch number, so nothing needs to be changed during epoch-setting for the sampler. (A minimal sketch of this pattern follows this comment.)

    https://github.com/pytorch/pytorch/blob/d69c22dd61a2f006dcfe1e3ea8468a3ecaf931aa/torch/utils/data/distributed.py#L100

    Please let me know your thoughts, or if I may have missed something.

    opened by keeganq 0
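
    For clarity, the proposed fix amounts to forwarding the experiment's seed into the sampler; a minimal sketch of that pattern (the dataset and seed below are placeholders, not the repository's actual objects):

    import torch
    from torch.utils.data import TensorDataset
    from torch.utils.data.distributed import DistributedSampler

    dataset = TensorDataset(torch.arange(100))  # placeholder dataset
    RANDOM_SEED = 42  # placeholder for _DOWNC.RANDOM_SEED

    # Passing `seed=` makes the shuffle order depend on the chosen seed; the sampler
    # adds the epoch number to this seed internally, so set_epoch() still reshuffles
    # every epoch as before.
    sampler = DistributedSampler(
        dataset, num_replicas=1, rank=0, shuffle=True, seed=RANDOM_SEED
    )
    sampler.set_epoch(0)
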
  • Decoder Attention Weight Visualization

    Hi, thanks for the awesome code base!

    I'm looking to produce visualizations of decoder attention weights similar to those shown in the paper, but I don't think that you have implemented this feature in the published code (although I may have overlooked it!)

    As best I can tell, the way this would be done is by using a new TransformerDecoderLayer which returns the multihead attention's attn_output_weights in its forward method. The visualized attention weights when predicting a single token would then be the average of these weights across all heads. The problem that I am finding is that the visualized weights seem to mostly appear in the center of the image during captioning on the coco dataset, but the results in the paper show reasonable variation in these weights as tokens are predicted.

    Is this the method that you used to create the visualization? Any insight into how this was previously done would be appreciated!

    opened by keeganq 0
  • Add Docker environment & web demo

    Hey @kdexd! 👋

    This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.

    This also means we can make a web page where other people can try out your model! We've implemented image captioning, but it should be pretty easy to add other tasks too if you'd like. View it here: https://replicate.ai/kdexd/virtex-image-captioning

    That page also has instructions on how to use the Docker image, which is on our registry at r8.im/kdexd/virtex-image-captioning.

    In case you're wondering who the heck I am, I'm from Replicate, where we're trying to make machine learning reproducible. So many cool models are being made, but I got frustrated that I couldn't run them, hence we're trying to fix that. :)

    opened by bfirsh 0
  • Fine tuning Virtex for image captioning

    Hi there, I am aware that Virtex used image captioning as a pretraining task and not as the "final goal", but I was wondering whether one could go on fine-tuning the pretrained model (e.g. bicaptioning_R_50_L1_H2048) with additional COCOcaptions-like data in order to get an improved captioning model. Has anyone tried that or does anyone have any suggestion how to do it? Can any of the scripts in the repository be used/adapted for fine-tuning existing models? Thanks a lot! :)

    opened by freeIsa 1
Releases (latest: v1.4)
  • v1.4(Jan 9, 2022)

    Major changes

    • Python 3.6 support is dropped; the minimum requirement is now Python 3.8. All major library versions are bumped to their latest releases (PyTorch, OpenCV, Albumentations, etc.).
    • Model zoo URLs are changed to Dropbox. All pre-trained checkpoint weights are unchanged.
    • There was a spike in training loss when resuming training with pretrain_virtex.py; it is now fixed.
    • Documentation theme is changed from alabaster to Read the Docs; it looks fancier!
  • v1.2(Jul 15, 2021)

    Bug Fix: Beam Search

    The beam search implementation adapted from AllenNLP was better suited to LSTM/GRU (recurrent) models than to transformers (autoregressive models). This version removes the "backpointer" trick from the AllenNLP implementation and improves captioning results for all VirTex models. See below: "Old" metrics are v1.1 (arXiv v2) and "New" metrics are v1.2 (arXiv v3).

    (Table: old vs. new captioning metrics for all VirTex models.)

    This bug does not affect pre-training or other downstream task results. Thanks to Nicolas Carion (@alcinos) and Aishwarya Kamath (@ashkamath) for spotting this issue and helping me to fix it!

    Feature: Nucleus Sampling

    This codebase now supports decoding through Nucleus Sampling, as introduced in The Curious Case of Neural Text Degeneration. Try running the captioning evaluation script with --config-override MODEL.DECODER.NAME nucleus_sampling MODEL.DECODER.NUCLEUS_SIZE 0.9! For consistent behavior with prior versions, the default decoding method remains Beam Search with 5 beams.

    Note: Nucleus sampling generally gives worse metrics specifically on COCO Captions, but produces more interesting-sounding language with larger transformers trained on much more data than COCO Captions.

    New config arguments to support this:

    MODEL:
      DECODER:
        # What algorithm to use for decoding. Supported values: {"beam_search",
        # "nucleus_sampling"}.
        NAME: "beam_search"
    
        # Number of beams to decode (1 = greedy decoding). Ignored when decoding
        # through nucleus sampling.
        BEAM_SIZE: 5
    
        # Size of nucleus for sampling predictions. Ignored when decoding through
        # beam search.
        NUCLEUS_SIZE: 0.9
    
        # Maximum length of decoded caption. Decoding may end earlier when [EOS]
        # token is sampled.
        MAX_DECODING_STEPS: 50  # Same as DATA.MAX_CAPTION_LENGTH
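
    For intuition, nucleus (top-p) sampling keeps only the smallest set of highest-probability tokens whose cumulative probability exceeds NUCLEUS_SIZE and samples the next token from that set; a minimal standalone sketch of the idea (not the decoder implementation in this codebase):

    import torch

    def nucleus_sample(logits: torch.Tensor, nucleus_size: float = 0.9) -> torch.Tensor:
        """Sample one token id from a (vocab_size,) logits vector with top-p filtering."""
        probs = torch.softmax(logits, dim=-1)
        sorted_probs, sorted_ids = torch.sort(probs, descending=True)

        # Keep tokens while the cumulative probability *before* them is below the
        # nucleus size (the most likely token is always kept).
        cumulative = torch.cumsum(sorted_probs, dim=-1)
        keep = (cumulative - sorted_probs) < nucleus_size

        filtered = sorted_probs * keep
        filtered = filtered / filtered.sum()
        return sorted_ids[torch.multinomial(filtered, num_samples=1)]

    next_token = nucleus_sample(torch.randn(10000), nucleus_size=0.9)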
    
  • v1.1(Apr 4, 2021)

    This version is a small increment over v1.0 with only cosmetic changes and obsolete code removals. The final results of models trained with this codebase remain unchanged.

    Removed feature extraction support:

    • Removed virtex.downstream.FeatureExtractor and its usage in scripts/clf_voc07.py. By default, the script will only evaluate on global average pooled features (2048-d), as with the CVPR 2021 paper version.

    • Removed virtex.modules.visual_backbones.BlindVisualBackbone. I introduced it a long time ago for debugging; it is not very useful anymore.

    Two config-related changes:

    1. Renamed config parameters: (OPTIM.USE_LOOKAHEAD —> OPTIM.LOOKAHEAD.USE), (OPTIM.LOOKAHEAD_ALPHA —> OPTIM.LOOKAHEAD.ALPHA) and (OPTIM.LOOKAHEAD_STEPS —> OPTIM.LOOKAHEAD.STEPS).

    2. Renamed TransformerTextualHead to TransformerDecoderTextualHead for clarity. Model names in config also change accordingly: "transformer_postnorm" —> "transdec_postnorm" (same for prenorm).

    These changes may be breaking if you wrote your own config and explicitly added these arguments.

  • v1.0(Mar 7, 2021)

    CVPR 2021 release of VirTex. Code and pre-trained models reproduce the results reported in the paper: https://arxiv.org/abs/2006.06666v2
