Code release for SLIP: Self-supervision meets Language-Image Pre-training

Overview

SLIP: Self-supervision meets Language-Image Pre-training

SLIP framework

What you can find in this repo: pre-training code for SLIP, CLIP, and SimCLR; evaluation code for zero-shot transfer, linear classification, and end-to-end finetuning; and pre-trained model weights.

Results and Pre-trained Models

The following models are pre-trained on YFCC15M and evaluated on ImageNet-1K (ILSVRC2012).

ViT-Small (MoCo v3 version w/ 12 vs. 6 heads)

| Method | Epochs | 0-shot | Linear | Finetuned | Weights |
|--------|--------|--------|--------|-----------|---------|
| CLIP   | 25     | 32.7   | 59.3   | 78.2      | url     |
| SimCLR | 25     | -      | 58.1   | 79.9      | url     |
| SLIP   | 25     | 38.3   | 66.4   | 80.3      | url     |
| SLIP   | 50     | 39.3   | 67.6   | 80.7      | url     |
| SLIP   | 100    | 39.5   | 68.3   | 80.7      | url     |

ViT-Base

| Method | Epochs | 0-shot | Linear | Finetuned | Weights |
|--------|--------|--------|--------|-----------|---------|
| CLIP   | 25     | 37.6   | 66.5   | 80.5      | url     |
| SimCLR | 25     | -      | 64.0   | 82.5      | url     |
| SLIP   | 25     | 42.8   | 72.1   | 82.6      | url     |
| SLIP   | 50     | 44.1   | 73.0   | 82.9      | url     |
| SLIP   | 100    | 45.0   | 73.6   | 83.4      | url     |

ViT-Large

| Method | Epochs | 0-shot | Linear | Finetuned | Weights |
|--------|--------|--------|--------|-----------|---------|
| CLIP   | 25     | 40.4   | 70.5   | 81.0      | url     |
| SimCLR | 25     | -      | 66.7   | 84.0      | url     |
| SLIP   | 25     | 46.2   | 76.0   | 84.2      | url     |
| SLIP   | 50     | 47.4   | 75.8   | 84.7      | url     |
| SLIP   | 100    | 47.9   | 75.1   | 84.8      | url     |

1. Setup

Install PyTorch and timm. The code has been tested with CUDA 11.3/CuDNN 8.2.0, PyTorch 1.10.0 and timm 0.5.0.

1.1. YFCC15M Setup

Download the YFCC100M dataset. Our dataloader expects the following dataset directory structure with 100 folders containing 1000 zip archives of 1000 images each. The concatenation of the folder, archive, and file names is the index of the image (i.e. image 12345678 is stored as 678.jpg within 12/345.zip):

/path/to/yfcc100m/
├── images/
│   ├── 00/
│   │   ├── 000.zip
│   │   │   ├── 000.jpg
│   │   │   ├── ...
│   │   │   └── 999.jpg
│   │   ├── ...
│   │   └── 999.zip
│   ├── ...
│   └── 99/
...
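The id-to-path mapping described above (image 12345678 stored as 678.jpg within 12/345.zip) can be sketched as follows. This is a minimal illustrative helper, not a function from the repo:

```python
def yfcc_image_path(image_id):
    """Map a YFCC100M integer image id to (zip archive path, member name).

    The 8-digit zero-padded id splits into folder (2 digits),
    archive (3 digits), and file name (3 digits).
    """
    s = f"{image_id:08d}"
    folder, archive, name = s[:2], s[2:5], s[5:]
    return f"{folder}/{archive}.zip", f"{name}.jpg"

# e.g. yfcc_image_path(12345678) -> ('12/345.zip', '678.jpg')
```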

Prepare the YFCC15M subset metadata pickle:

  1. Download and compile a list of downloaded images to flickr_unique_ids.npy (ours)
  2. Download OpenAI's list of captioned YFCC100M images according to instructions here
  3. Run python make_dataset.py to create the yfcc15m.pkl metadata pickle

When pre-training with YFCC15M, set --dataset yfcc15m --root /path/to/yfcc100m --metadata /path/to/yfcc15m.pkl.

1.2. COCO Captions Setup

Download and unzip the 2017 Train images and annotations. When pre-training on COCO, set --dataset coco --root /path/to/coco --metadata /path/to/captions_train2017.json.

1.3. Conceptual Captions Setup

CC3M and CC12M are published as tsv files listing original image urls and processed captions. Download images and collect the captions of all available images (many will be missing due to broken links) into cc3m.npy and cc12m.npy.

For CC3M our dataloader expects cc3m.npy to contain a NumPy array of dicts in the following format:

{
  'image_id': 1510438788,  # local file path relative to root
  'captions': ['large field with pink tulips on a clear sunny summer day with a blue sky']
}

For CC12M our dataloader expects cc12m.npy to contain a NumPy array of dicts in the following format:

{
  'image_name': '0.jpg',  # local file path relative to root
  'image_id': 0,
  'captions': ['Metal Design Within Reach Ivory Slipper Chairs - a Pair For Sale - Image 7 of 10']
}
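An object-array .npy in the expected layout can be written and read back with NumPy's pickle support. The helper names below are hypothetical, for illustration only:

```python
import numpy as np

def save_cc_metadata(records, path):
    """Save a list of caption dicts as an object-dtype .npy array,
    matching the cc3m.npy / cc12m.npy layout described above."""
    np.save(path, np.array(records, dtype=object))

def load_cc_metadata(path):
    """Load the metadata back; allow_pickle is required for object arrays."""
    return np.load(path, allow_pickle=True)
```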

When pre-training on CC3M, set --dataset cc3m --root /path/to/cc3m --metadata /path/to/cc3m.npy; when pre-training on CC12M, set --dataset cc12m --root /path/to/cc12m --metadata /path/to/cc12m.npy.

1.4. Downstream Dataset Setup

Zero-shot (in main.py and eval_zeroshot.py) and linear (in main_linear.py) evaluations read dataset paths from dataset_catalog.json. Zero-shot evaluations read CLIP's class labels and caption templates from labels.json and templates.json. If just pre-training models on YFCC15M, only the ImageNet path is required for model validation between training epochs. See Section 3 below on zero-shot transfer evaluation for dataset preparation details.

2. Pre-training

We use the following pre-training recipes for SLIP, CLIP, and SimCLR. See main.py for the full list of default arguments. We use the same lr and wd settings for all model sizes within the same training framework, and different model sizes can be selected by passing in different strings to the --model argument such as SLIP_VITS16 or SLIP_VITL16.

In our workflow we use submitit, which interfaces nicely with Slurm. For local training with the torchrun utility (supersedes torch.distributed.launch), replace python run_with_submitit.py with torchrun --nproc_per_node=8 main.py. Local multi-node training with torchrun should also be possible.

We train most of our models on 8x 8-gpu nodes, but training with fewer gpus is possible by reducing the batch size and setting the --update-freq argument above 1 to enable gradient accumulation. Note that gradient accumulation will increase the variance of minibatch statistics and alter the training dynamics of batchnorm, which is used in SLIP and SimCLR.
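The relationship between global batch size, per-GPU batch size, and --update-freq can be sketched as below (an illustrative helper, not part of the repo):

```python
def update_freq_for(global_batch, per_gpu_batch, n_gpus):
    """Gradient-accumulation steps needed to reach a target global batch size."""
    per_step = per_gpu_batch * n_gpus
    if global_batch % per_step != 0:
        raise ValueError("global batch must be divisible by per-step batch")
    return global_batch // per_step

# e.g. reproducing the 4096 global batch on a single 8-GPU node
# with 64 samples per GPU requires --update-freq 8
```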

SLIP ViT-Base with 8-nodes (batch size 4096)

python run_with_submitit.py \
  --root /path/to/yfcc100m \
  --model SLIP_VITB16 \
  --lr 3e-3 --wd 0.1

CLIP ViT-Base with 8-nodes (batch size 4096)

python run_with_submitit.py \
  --root /path/to/yfcc100m \
  --model CLIP_VITB16 \
  --lr 5e-4 --wd 0.5

SimCLR ViT-Base with 8-nodes (batch size 4096)

python run_with_submitit.py \
  --root /path/to/yfcc100m \
  --model SIMCLR_VITB16 \
  --ssl-mlp-dim 4096 --ssl-emb-dim 256 --ssl-temp 0.1 \
  --lr 3.2e-3 --wd 0.1 

Some important arguments:

--dataset: pre-training dataset name. choices include yfcc15m, cc12m, cc3m, coco.

--root: path to dataset root

--metadata: path to metadata file (see section 1 for details)

--ssl-mlp-dim: hidden dim of SimCLR mlp projection head

--ssl-emb-dim: output embed dim of SimCLR mlp projection head

--ssl-scale: loss scale for SimCLR objective

--ssl-temp: softmax temperature for SimCLR objective

--batch-size: number of samples per-device/per-gpu

--lr-start: initial warmup lr

--lr-end: minimum final lr

--update-freq: optimizer update frequency, i.e. gradient accumulation steps

--disable-amp: disable mixed-precision training (requires more memory and compute)

3. Evaluation: Zero-shot Transfer

First, prepare additional downstream classification datasets:

  • MNIST, CIFAR-10/100, STL-10: Automatic download via torchvision datasets
  • HatefulMemes: Manual download from official website and sort images according to train.jsonl/dev.jsonl into train/dev folder
  • Rendered SST2, Country211: Manual download from CLIP repo
  • Other datasets: Use scripts from VISSL

Then set all dataset paths in dataset_catalog.json.

Evaluate zero-shot transfer to various classification benchmarks with eval_zeroshot.py, which reads labels and templates from labels.json/templates.json and dataset paths from dataset_catalog.json. Inference is performed with a single gpu. By default, the script iterates through all datasets in dataset_catalog.json and evaluates zero-shot in order. Evaluation can be limited to a subset of datasets by replacing for d in datasets: with for d in ['imagenet']: on line 78.

python eval_zeroshot.py --resume /path/to/checkpoint.pt
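The core of CLIP-style zero-shot classification is averaging normalized template embeddings per class and scoring images by cosine similarity. A minimal NumPy sketch of this logic (illustrative only; the repo's eval_zeroshot.py uses PyTorch and the actual text encoder):

```python
import numpy as np

def build_zeroshot_weights(template_embs):
    """template_embs: one (n_templates, d) array per class.
    L2-normalize each template embedding, average per class, renormalize."""
    weights = []
    for embs in template_embs:
        e = embs / np.linalg.norm(embs, axis=-1, keepdims=True)
        mean = e.mean(axis=0)
        weights.append(mean / np.linalg.norm(mean))
    return np.stack(weights)  # (n_classes, d)

def zeroshot_predict(image_emb, class_weights):
    """Return the index of the class with highest cosine similarity."""
    img = image_emb / np.linalg.norm(image_emb)
    return int(np.argmax(class_weights @ img))
```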

4. Evaluation: Linear Classification

We use a modified version of the MoCo v3 ImageNet linear classification script, main_linear.py. We use the same single node 8-gpu recipe for all model sizes. See main_linear.py for the full list of default arguments. As with pre-training, our workflow uses submitit. For local training with torchrun, replace python run_with_submitit_linear.py with torchrun --nproc_per_node=8 main_linear.py. This script reads the ImageNet dataset path from the dataset catalog (dataset_catalog.json), which must be set properly before training.

python run_with_submitit_linear.py  \
  --arch vit_base_patch16_224 --dataset imagenet \
  --pretrained /path/to/checkpoint.pt

To evaluate linear classification on other datasets, set --dataset to the corresponding dataset name listed in dataset_catalog.json.

5. Evaluation: End-to-End Finetuning

We use a modified version of the ImageNet finetuning script from BeiT. Our code has been tested with commit f8f3df8. We have removed the explicit torch, torchvision, and timm dependencies from beit_finetuning/requirements.txt, as they conflict with the versions used in our SLIP code (CUDA 11.3/CuDNN 8.2.0, PyTorch 1.10.0, and timm 0.5.0). The finetuning code has been modified and tested to work with these versions.

5.1. Setup

To evaluate end-to-end finetuning on ImageNet, first clone the BeiT repo and checkout the correct commit:

git clone git@github.com:microsoft/unilm.git
cd unilm/beit
git checkout f8f3df8

Now copy over modified files from our beit_finetuning directory:

cp beit_finetuning/* unilm/beit
cd unilm/beit

Install pip dependencies and Nvidia Apex:

pip install -r requirements.txt
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./

5.2. Commands

As with pre-training, our workflow uses submitit. For local training with torchrun, replace python run_with_submitit_finetune.py with torchrun --nproc_per_node=8 run_class_finetuning.py. We established finetuning recipes based on the BeiT recipes with some light additional hyperparameter tuning. We increase regularization with model size: ViT-S uses drop_path=0 and layer_decay=0.65, ViT-B uses drop_path=0.1 and layer_decay=0.65, and ViT-L uses drop_path=0.1 and layer_decay=0.75. Note the use of the --finetune argument instead of --resume.

ViT-Small (MoCo v3 version w/ 12 vs. 6 heads)

python run_with_submitit_finetune.py \
    --batch_size 128 --enable_deepspeed \
    --epochs 100 --warmup_epochs 20 \
    --model beit_small_patch16_224 --nb_classes 1000 \
    --imagenet_default_mean_and_std \
    --model_key state_dict --model_prefix module.visual. \
    --disable_rel_pos_bias --abs_pos_emb --use_cls \
    --mixup 0.8 --cutmix 1 \
    --layer_scale_init_value 0 \
    --lr 4e-3 --drop_path 0 --layer_decay 0.65 \
    --output_dir /path/to/output_dir --finetune /path/to/checkpoint.pt

ViT-Base

python run_with_submitit_finetune.py \
    --batch_size 128 --enable_deepspeed \
    --epochs 100 --warmup_epochs 20 \
    --model beit_base_patch16_224 --nb_classes 1000 \
    --imagenet_default_mean_and_std \
    --model_key state_dict --model_prefix module.visual. \
    --disable_rel_pos_bias --abs_pos_emb --use_cls \
    --mixup 0.8 --cutmix 1 \
    --layer_scale_init_value 0 \
    --lr 4e-3 --drop_path 0.1 --layer_decay 0.65 \
    --output_dir /path/to/output_dir --finetune /path/to/checkpoint.pt

ViT-Large

python run_with_submitit_finetune.py \
    --batch_size 128 --enable_deepspeed \
    --epochs 50 --warmup_epochs 5 \
    --model beit_large_patch16_224 --nb_classes 1000 \
    --imagenet_default_mean_and_std \
    --model_key state_dict --model_prefix module.visual. \
    --disable_rel_pos_bias --abs_pos_emb --use_cls \
    --mixup 0.8 --cutmix 1 \
    --layer_scale_init_value 0 \
    --lr 4e-3 --drop_path 0.1 --layer_decay 0.75 \
    --output_dir /path/to/output_dir --finetune /path/to/checkpoint.pt

License

This project is under the CC-BY-NC 4.0 license. See LICENSE for details.

Citation

@Article{mu2021slip,
  author  = {Norman Mu and Alexander Kirillov and David Wagner and Saining Xie},
  title   = {SLIP: Self-supervision meets Language-Image Pre-training},
  journal = {arXiv preprint arXiv:2112.12750},
  year    = {2021},
}
Owner

Meta Research