PyTorch code for EMNLP 2019 paper "LXMERT: Learning Cross-Modality Encoder Representations from Transformers".

Overview

LXMERT: Learning Cross-Modality Encoder Representations from Transformers

Our servers broke again :(. I have updated the links so that they should work fine now. Sorry for the inconvenience. Please let me know of any further issues. Thanks! --Hao, Dec 03

Introduction

PyTorch code for the EMNLP 2019 paper "LXMERT: Learning Cross-Modality Encoder Representations from Transformers". Slides of our EMNLP 2019 talk are available here.

  • To analyze the output of the pre-trained model (instead of fine-tuning on downstream tasks), please load the weights from https://nlp.cs.unc.edu/data/github_pretrain/lxmert20/Epoch20_LXRT.pth, which are trained as described in the Pre-training section. The default weights were trained with a slightly different protocol than this code. A quick way to inspect such a checkpoint is sketched below.
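
A hedged sketch for inspecting such a checkpoint before wiring it into the model (it assumes the file is a state dict saved with torch.save, possibly carrying DataParallel's module. prefix):

    import torch

    # Load the downloaded checkpoint on CPU and list a few parameter names/shapes.
    # The path assumes the file was saved under snap/pretrained/.
    state_dict = torch.load("snap/pretrained/Epoch20_LXRT.pth", map_location="cpu")
    # Strip a possible "module." prefix left by nn.DataParallel before loading into a model.
    state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}
    for name, tensor in list(state_dict.items())[:5]:
        print(name, tuple(tensor.shape))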

Results (with this Github version)

Split              VQA      GQA      NLVR2
Local Validation   69.90%   59.80%   74.95%
Test-Dev           72.42%   60.00%   74.45% (Test-P)
Test-Standard      72.54%   60.33%   76.18% (Test-U)

All the results in the table are produced exactly with this code base. Since the VQA and GQA test servers only allow a limited number of 'Test-Standard' submissions, we used our remaining submission entries from the VQA/GQA 2019 challenges to get these results. For NLVR2, we only test once on the unpublished test set (Test-U).

We used this code (with model ensembling) to participate in the VQA 2019 and GQA 2019 challenges in May 2019, and we were the only team ranking in the top 3 in both challenges.

Pre-trained models

The pre-trained model (870 MB) is available at http://nlp.cs.unc.edu/data/model_LXRT.pth, and can be downloaded with:

mkdir -p snap/pretrained 
wget https://nlp.cs.unc.edu/data/model_LXRT.pth -P snap/pretrained

If the download speed is slower than expected, the pre-trained model can also be downloaded from other sources. Please place the downloaded file at snap/pretrained/model_LXRT.pth.
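
As a quick sanity check after downloading, you can verify the file size (a minimal sketch; the expected size of roughly 870 MB comes from the note above):

    import os

    path = "snap/pretrained/model_LXRT.pth"
    size_mb = os.path.getsize(path) / 2 ** 20
    print(f"{path}: {size_mb:.0f} MB")  # a partial download will be much smaller than ~870 MB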

We also provide data and commands to pre-train the model in Pre-training. The default setup needs 4 GPUs and takes around a week to finish. The pre-trained weights obtained with this code base can be downloaded from https://nlp.cs.unc.edu/data/github_pretrain/lxmert/EpochXX_LXRT.pth, with XX from 01 to 12. This model is pre-trained for 12 epochs (instead of 20 as in the EMNLP paper), so the fine-tuned results are about 0.3% lower on each dataset.

Fine-tune on Vision-and-Language Tasks

We fine-tune our LXMERT pre-trained model on each task with the following hyper-parameters:

Dataset   Batch Size   Learning Rate   Epochs   Load Answers
VQA       32           5e-5            4        Yes
GQA       32           1e-5            4        Yes
NLVR2     32           5e-5            4        No

Although the fine-tuning processes are almost the same apart from these hyper-parameters, we provide a description for each dataset to cover all the details.

General

The code requires Python 3. Please install the Python dependencies with the command:

pip install -r requirements.txt

Optionally, a Python 3 virtual environment can be set up and activated with:

virtualenv name_of_environment -p python3
source name_of_environment/bin/activate
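
After installing the dependencies, a quick sanity check that PyTorch sees your GPU (a minimal sketch, not part of the repo's scripts):

    import torch

    print("torch version:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    print("GPU count:", torch.cuda.device_count())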

VQA

Fine-tuning

  1. Please make sure the LXMERT pre-trained model is either downloaded or pre-trained.

  2. Download our re-distributed json files for the VQA 2.0 dataset. The raw VQA 2.0 dataset can be downloaded from the official website.

    mkdir -p data/vqa
    wget https://nlp.cs.unc.edu/data/lxmert_data/vqa/train.json -P data/vqa/
    wget https://nlp.cs.unc.edu/data/lxmert_data/vqa/nominival.json -P  data/vqa/
    wget https://nlp.cs.unc.edu/data/lxmert_data/vqa/minival.json -P data/vqa/
  3. Download the Faster R-CNN features for the MS COCO train2014 (17 GB) and val2014 (8 GB) images (VQA 2.0 is collected on the MS COCO dataset). The image features are also available on Google Drive and Baidu Drive (see Alternative Download for details).

    mkdir -p data/mscoco_imgfeat
    wget https://nlp.cs.unc.edu/data/lxmert_data/mscoco_imgfeat/train2014_obj36.zip -P data/mscoco_imgfeat
    unzip data/mscoco_imgfeat/train2014_obj36.zip -d data/mscoco_imgfeat && rm data/mscoco_imgfeat/train2014_obj36.zip
    wget https://nlp.cs.unc.edu/data/lxmert_data/mscoco_imgfeat/val2014_obj36.zip -P data/mscoco_imgfeat
    unzip data/mscoco_imgfeat/val2014_obj36.zip -d data && rm data/mscoco_imgfeat/val2014_obj36.zip
  4. Before fine-tuning on the whole VQA 2.0 training set, we recommend verifying the script and model on a small training set (512 images). The first argument 0 is the GPU id; the second argument vqa_lxr955_tiny is the name of this experiment.

    bash run/vqa_finetune.bash 0 vqa_lxr955_tiny --tiny
  5. If no bugs come up, the model is ready to be trained on the whole VQA corpus:

    bash run/vqa_finetune.bash 0 vqa_lxr955

It takes around 8 hours (2 hours per epoch * 4 epochs) to converge. The logs and model snapshots will be saved under the folder snap/vqa/vqa_lxr955. The validation result after training will be around 69.7% to 70.2%.

Local Validation

The results on the validation set (our minival set) are printed while training. The validation result is also saved to snap/vqa/[experiment-name]/log.log. If the log file was accidentally deleted, the validation result can also be reproduced from the saved model snapshot:

bash run/vqa_test.bash 0 vqa_lxr955_results --test minival --load snap/vqa/vqa_lxr955/BEST

Submitted to VQA test server

  1. Download our re-distributed json file containing VQA 2.0 test data.
    wget https://nlp.cs.unc.edu/data/lxmert_data/vqa/test.json -P data/vqa/
  2. Download the Faster R-CNN features for the MS COCO test2015 split (16 GB).
    wget https://nlp.cs.unc.edu/data/lxmert_data/mscoco_imgfeat/test2015_obj36.zip -P data/mscoco_imgfeat
    unzip data/mscoco_imgfeat/test2015_obj36.zip -d data && rm data/mscoco_imgfeat/test2015_obj36.zip
  3. Since the VQA submission system requires submitting the whole test data, we need to run inference over all test splits (i.e., test-dev, test-standard, test-challenge, and test held-out). It takes around 10~15 minutes to run test inference (448K instances).
    bash run/vqa_test.bash 0 vqa_lxr955_results --test test --load snap/vqa/vqa_lxr955/BEST

The test results will be saved in snap/vqa_lxr955_results/test_predict.json. This year's VQA 2.0 challenge is hosted on EvalAI at https://evalai.cloudcv.org/web/challenges/challenge-page/163/overview, and it still allows submissions after the challenge has ended. Please check the official VQA Challenge website for detailed information and follow the instructions on EvalAI to submit. In general, after registration, the only remaining step is to upload the test_predict.json file and wait for the result.
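
Before uploading, a small sanity check on the prediction file can save a wasted submission slot. This is a hedged sketch: the path follows the description above (adjust it if your snapshot folder differs), and each entry is expected to be a question-id/answer pair per the VQA challenge convention:

    import json

    with open("snap/vqa_lxr955_results/test_predict.json") as f:
        preds = json.load(f)
    print(len(preds), "predictions")  # should cover all ~448K test questions
    print(preds[0])                   # expected to look like {"question_id": ..., "answer": ...}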

The test accuracy with exactly this code is 72.42% for test-dev and 72.54% for test-standard. The results with this code base are also publicly shown on the VQA 2.0 leaderboard under the entry LXMERT github version.

GQA

Fine-tuning

  1. Please make sure the LXMERT pre-trained model is either downloaded or pre-trained.

  2. Download our re-distributed json files for the GQA balanced-version dataset. The original GQA dataset is available in the Download section of its website, and the script to preprocess these datasets is under data/gqa/process_raw_data_scripts.

    mkdir -p data/gqa
    wget https://nlp.cs.unc.edu/data/lxmert_data/gqa/train.json -P data/gqa/
    wget https://nlp.cs.unc.edu/data/lxmert_data/gqa/valid.json -P data/gqa/
    wget https://nlp.cs.unc.edu/data/lxmert_data/gqa/testdev.json -P data/gqa/
  3. Download the Faster R-CNN features for the Visual Genome and GQA testing images (30 GB). GQA's training and validation data are collected from Visual Genome; its testing images come from the MS COCO test set (I have verified this with Drew A. Hudson, one of the GQA authors). The image features are also available on Google Drive and Baidu Drive (see Alternative Download for details).

    mkdir -p data/vg_gqa_imgfeat
    wget https://nlp.cs.unc.edu/data/lxmert_data/vg_gqa_imgfeat/vg_gqa_obj36.zip -P data/vg_gqa_imgfeat
    unzip data/vg_gqa_imgfeat/vg_gqa_obj36.zip -d data && rm data/vg_gqa_imgfeat/vg_gqa_obj36.zip
    wget https://nlp.cs.unc.edu/data/lxmert_data/vg_gqa_imgfeat/gqa_testdev_obj36.zip -P data/vg_gqa_imgfeat
    unzip data/vg_gqa_imgfeat/gqa_testdev_obj36.zip -d data && rm data/vg_gqa_imgfeat/gqa_testdev_obj36.zip
  4. Before fine-tuning on the whole GQA training+validation set, we recommend verifying the script and model on a small training set (512 images). The first argument 0 is the GPU id; the second argument gqa_lxr955_tiny is the name of this experiment.

    bash run/gqa_finetune.bash 0 gqa_lxr955_tiny --tiny
  5. If no bugs come up, the model is ready to be trained on the whole GQA corpus (train + validation) and validated on the testdev set:

    bash run/gqa_finetune.bash 0 gqa_lxr955

It takes around 16 hours (4 hours per epoch * 4 epochs) to converge. The logs and model snapshots will be saved under the folder snap/gqa/gqa_lxr955. The validation result after training will be around 59.8% to 60.1%.

Local Validation

The results on testdev are printed while training and saved in snap/gqa/gqa_lxr955/log.log. They can also be re-calculated with the command:

bash run/gqa_test.bash 0 gqa_lxr955_results --load snap/gqa/gqa_lxr955/BEST --test testdev --batchSize 1024

Note: Our local testdev result is usually 0.1% to 0.5% lower than the submitted testdev result. The reason is that the test server uses a more advanced evaluation system, while our local evaluator only computes exact-match accuracy. Please use the official evaluator (784 MB) if you want the exact number without submitting.
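
For reference, the 'exact matching' computed by our local evaluator is essentially the following (a simplified sketch, not the repo's actual evaluation code):

    def exact_match_accuracy(predictions, answers):
        """predictions / answers: dicts mapping question id -> answer string."""
        correct = sum(predictions[q] == answers.get(q) for q in predictions)
        return correct / max(len(predictions), 1)

    # Example: one of the two predictions matches exactly, so the accuracy is 0.5.
    print(exact_match_accuracy({"q1": "yes", "q2": "dog"}, {"q1": "yes", "q2": "cat"}))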

Submitted to GQA test server

  1. Download our re-distributed json file containing GQA test data.

    wget https://nlp.cs.unc.edu/data/lxmert_data/gqa/submit.json -P data/gqa/
  2. Since the GQA submission system requires submitting the whole test data, we need to run inference over all test splits. It takes around 30~60 minutes to run test inference (4.2M instances).

    bash run/gqa_test.bash 0 gqa_lxr955_results --load snap/gqa/gqa_lxr955/BEST --test submit --batchSize 1024
  3. After running the test script, a json file submit_predict.json under snap/gqa/gqa_lxr955_results will contain all the prediction results and is ready to be submitted. The GQA challenge 2019 is hosted on EvalAI at https://evalai.cloudcv.org/web/challenges/challenge-page/225/overview. After registering an account, the only remaining steps are uploading submit_predict.json and waiting for the results. Please also check the GQA official website in case the test server has changed.

The test accuracy with exactly this code is 60.00% for test-dev and 60.33% for test-standard. The results with this code base are also publicly shown on the GQA leaderboard under the entry LXMERT github version.

NLVR2

Fine-tuning

  1. Download the NLVR2 data from the official GitHub repo.

    git submodule update --init
  2. Process the NLVR2 data to json files.

    bash -c "cd data/nlvr2/process_raw_data_scripts && python process_dataset.py"
  3. Download the NLVR2 image features for the train (21 GB) and valid (1.6 GB) splits. The image features are also available on Google Drive and Baidu Drive (see Alternative Download for details). To access the original images, please follow the instructions on the NLVR2 official GitHub; the images can either be downloaded from the provided URLs or obtained by signing a data-usage agreement form. The features can then be extracted as described in Feature Extraction.

    mkdir -p data/nlvr2_imgfeat
    wget https://nlp.cs.unc.edu/data/lxmert_data/nlvr2_imgfeat/train_obj36.zip -P data/nlvr2_imgfeat
    unzip data/nlvr2_imgfeat/train_obj36.zip -d data && rm data/nlvr2_imgfeat/train_obj36.zip
    wget https://nlp.cs.unc.edu/data/lxmert_data/nlvr2_imgfeat/valid_obj36.zip -P data/nlvr2_imgfeat
    unzip data/nlvr2_imgfeat/valid_obj36.zip -d data && rm data/nlvr2_imgfeat/valid_obj36.zip
  4. Before fine-tuning on the whole NLVR2 training set, we recommend verifying the script and model on a small training set (512 images). The first argument 0 is the GPU id; the second argument nlvr2_lxr955_tiny is the name of this experiment. Do not worry if the result is low (50~55) on this tiny split; the whole training data will bring the performance back.

    bash run/nlvr2_finetune.bash 0 nlvr2_lxr955_tiny --tiny
  5. If no bugs come up in the previous step, the code, data, and image features are ready. Please use this command to train on the full training set. The result on the NLVR2 validation (dev) set should be around 74.0 to 74.5.

    bash run/nlvr2_finetune.bash 0 nlvr2_lxr955

Inference on Public Test Split

  1. Download NLVR2 image features for the public test split (1.6 GB).

    wget https://nlp.cs.unc.edu/data/lxmert_data/nlvr2_imgfeat/test_obj36.zip -P data/nlvr2_imgfeat
    unzip data/nlvr2_imgfeat/test_obj36.zip -d data/nlvr2_imgfeat && rm data/nlvr2_imgfeat/test_obj36.zip
  2. Test on the public test set (corresponding to 'test-P' on NLVR2 leaderboard) with:

    bash run/nlvr2_test.bash 0 nlvr2_lxr955_results --load snap/nlvr2/nlvr2_lxr955/BEST --test test --batchSize 1024
  3. The test accuracy will be shown on the screen after around 5~10 minutes. The script also saves the predictions to the file test_predict.csv under snap/nlvr2/nlvr2_lxr955_results, which is compatible with the NLVR2 official evaluation script. The official eval script also calculates consistency ('Cons') besides accuracy. We can use this official script to verify the results by running the command below (a quick check of the CSV itself is sketched right after this list):

    python data/nlvr2/nlvr/nlvr2/eval/metrics.py snap/nlvr2/nlvr2_lxr955_results/test_predict.csv data/nlvr2/nlvr/nlvr2/data/test1.json
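
A hedged peek at the prediction CSV produced above (it assumes one row per example, with the example identifier first and the predicted label last):

    import csv
    from collections import Counter

    with open("snap/nlvr2/nlvr2_lxr955_results/test_predict.csv") as f:
        rows = list(csv.reader(f))
    print(len(rows), "predictions")
    print(Counter(row[-1] for row in rows))  # label distribution; NLVR2 labels are roughly balanced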

The accuracy on the public test ('test-P') set should be almost the same as on the validation ('dev') set, i.e., around 74.0% to 74.5%.

Unreleased Test Sets

To be evaluated on the unreleased held-out test set (test-U on the leaderboard), the code needs to be sent to the organizers. Please check the NLVR2 official GitHub and the NLVR project website for details.

General Debugging Options

Since it takes a few minutes to load the features, the code has options to prototype with a small amount of training data.

# Training with 512 images:
bash run/vqa_finetune.bash 0 --tiny 
# Training with 4096 images:
bash run/vqa_finetune.bash 0 --fast

Pre-training

  1. Download our aggregated LXMERT dataset from MS COCO, Visual Genome, VQA, and GQA (around 700MB in total). The joint answer labels are saved in data/lxmert/all_ans.json.

    mkdir -p data/lxmert
    wget https://nlp.cs.unc.edu/data/lxmert_data/lxmert/mscoco_train.json -P data/lxmert/
    wget https://nlp.cs.unc.edu/data/lxmert_data/lxmert/mscoco_nominival.json -P data/lxmert/
    wget https://nlp.cs.unc.edu/data/lxmert_data/lxmert/vgnococo.json -P data/lxmert/
    wget https://nlp.cs.unc.edu/data/lxmert_data/lxmert/mscoco_minival.json -P data/lxmert/
  2. [Skip this if you have run VQA fine-tuning.] Download the detection features for MS COCO images.

    mkdir -p data/mscoco_imgfeat
    wget https://nlp.cs.unc.edu/data/lxmert_data/mscoco_imgfeat/train2014_obj36.zip -P data/mscoco_imgfeat
    unzip data/mscoco_imgfeat/train2014_obj36.zip -d data/mscoco_imgfeat && rm data/mscoco_imgfeat/train2014_obj36.zip
    wget https://nlp.cs.unc.edu/data/lxmert_data/mscoco_imgfeat/val2014_obj36.zip -P data/mscoco_imgfeat
    unzip data/mscoco_imgfeat/val2014_obj36.zip -d data && rm data/mscoco_imgfeat/val2014_obj36.zip
  3. [Skip this if you have run GQA fine-tuning.] Download the detection features for Visual Genome images.

    mkdir -p data/vg_gqa_imgfeat
    wget https://nlp.cs.unc.edu/data/lxmert_data/vg_gqa_imgfeat/vg_gqa_obj36.zip -P data/vg_gqa_imgfeat
    unzip data/vg_gqa_imgfeat/vg_gqa_obj36.zip -d data && rm data/vg_gqa_imgfeat/vg_gqa_obj36.zip
  4. Test on a small split of the MS COCO + Visual Genome datasets:

    bash run/lxmert_pretrain.bash 0,1,2,3 --multiGPU --tiny
  5. Run on the whole MS COCO and Visual Genome related datasets (i.e., VQA, GQA, COCO caption, VG caption, VG QA). Here, we take a simple single-stage pre-training strategy (20 epochs with all pre-training tasks) rather than the two-stage strategy in our paper (10 epochs without image QA and 10 epochs with image QA). The pre-training finishes in 8.5 days on 4 GPUs. I hope that my experience in this project can help anyone with limited computational resources.

    bash run/lxmert_pretrain.bash 0,1,2,3 --multiGPU

    Multiple GPUs: The argument 0,1,2,3 indicates using 4 GPUs to pre-train LXMERT. If the server does not have 4 GPUs (I am sorry to hear that), please consider halving the batch size or using the NVIDIA/apex library for half-precision computation. The code uses the default data parallelism in PyTorch and thus extends to fewer/more GPUs; the Python main thread takes charge of the data loading. On 4 GPUs, we do not find that data loading becomes a bottleneck (around 5% overhead). A minimal illustration of this data-parallel setup is sketched right after this list.

    GPU Types: We find that Titan XP, GTX 2080, and Titan V can all support this pre-training. However, the GTX 1080, with its 11G memory, is a little bit small, so please change the batch_size to 224 (instead of 256).

  6. I have verified these pre-training commands with 12 epochs. The pre-trained weights from this process can be downloaded from https://nlp.cs.unc.edu/data/github_pretrain/lxmert/EpochXX_LXRT.pth, with XX from 01 to 12. The results are roughly the same (around 0.3% lower on downstream tasks because of the fewer epochs).

  7. Explanation of arguments in the pre-training script run/lxmert_pretrain.bash:

    python src/pretrain/lxmert_pretrain_new.py \
        # The pre-training tasks
        --taskMaskLM --taskObjPredict --taskMatched --taskQA \  
        
        # Vision subtasks
        # obj / attr: detected object/attribute label prediction.
        # feat: RoI feature regression.
        --visualLosses obj,attr,feat \
        
        # Mask rate for words and objects
        --wordMaskRate 0.15 --objMaskRate 0.15 \
        
        # Training and validation sets
        # mscoco_nominival + mscoco_minival = mscoco_val2014
        # visual genome - mscoco = vgnococo
        --train mscoco_train,mscoco_nominival,vgnococo --valid mscoco_minival \
        
        # Number of layers in each encoder
        --llayers 9 --xlayers 5 --rlayers 5 \
        
        # Train from scratch (using freshly initialized weights) instead of loading BERT weights.
        --fromScratch \
    
        # Hyper parameters
        --batchSize 256 --optim bert --lr 1e-4 --epochs 20 \
        --tqdm --output $output ${@:2}
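
To make the multi-GPU note above concrete, here is a minimal, self-contained illustration of PyTorch's default data parallelism (a toy module, not the repo's actual training loop): the module is replicated on every visible GPU and each batch is split along dimension 0.

    import torch
    from torch import nn

    model = nn.Linear(768, 768)
    if torch.cuda.is_available() and torch.cuda.device_count() > 1:
        # Replicates the module on all GPUs listed in CUDA_VISIBLE_DEVICES;
        # each forward pass scatters the batch and gathers the outputs.
        model = nn.DataParallel(model)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)

    x = torch.randn(256, 768, device=device)  # a batch of 256, split across the GPUs
    print(model(x).shape)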

Alternative Dataset and Features Download Links

All default download links are served by our servers in the UNC CS department (under our NLP group website), but the network bandwidth might be limited. We thus provide a few other options via Google Drive and Baidu Drive.

The files on the online drives are structured almost the same way as in our repo but have a few differences due to drive-specific policies. After downloading the data and features from the drives, please re-organize them under the data/ folder according to the following example:

REPO ROOT
 |
 |-- data                  
 |    |-- vqa
 |    |    |-- train.json
 |    |    |-- minival.json
 |    |    |-- nominival.json
 |    |    |-- test.json
 |    |
 |    |-- mscoco_imgfeat
 |    |    |-- train2014_obj36.tsv
 |    |    |-- val2014_obj36.tsv
 |    |    |-- test2015_obj36.tsv
 |    |
 |    |-- vg_gqa_imgfeat -- *.tsv
 |    |-- gqa -- *.json
 |    |-- nlvr2_imgfeat -- *.tsv
 |    |-- nlvr2 -- *.json
 |    |-- lxmert -- *.json          # Pre-training data
 | 
 |-- snap
 |-- src

Please also kindly contact us if anything is missing!
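
After re-organizing the files, a quick way to verify that a feature TSV loads correctly is to use the repo's own loader in src/utils.py (a sketch assuming it is run from the repo root; the second argument limits loading to the first few images):

    import sys
    sys.path.append("src")  # make the repo's src/utils.py importable from the repo root
    from utils import load_obj_tsv

    data = load_obj_tsv("data/mscoco_imgfeat/val2014_obj36.tsv", 5)  # load only 5 images to check the file
    print(len(data), "images loaded")
    print(sorted(data[0].keys()))  # inspect the available fields (boxes, features, etc.)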

Google Drive

As an alternative to downloading the features from our UNC server, you can also download them from Google Drive at https://drive.google.com/drive/folders/1Gq1uLUk6NdD0CcJOptXjxE6ssY5XAuat?usp=sharing. The structure of the folders on the drive is:

Google Drive Root
 |-- data                  # The raw data and image features without compression
 |    |-- vqa
 |    |-- gqa
 |    |-- mscoco_imgfeat
 |    |-- ......
 |
 |-- image_feature_zips    # The image-feature zip files (Around 45% compressed)
 |    |-- mscoco_imgfeat.zip
 |    |-- nlvr2_imgfeat.zip
 |    |-- vg_gqa_imgfeat.zip
 |
 |-- snap -- pretrained -- model_LXRT.pth # The pytorch pre-trained model weights.

Note: the image features in the zip files (e.g., mscoco_imgfeat.zip) are the same as those in data/ (i.e., data/mscoco_imgfeat). If you want to save network bandwidth, please download the feature zips and skip downloading the *_imgfeat folders under data/.

Baidu Drive

Since Google Drive is not officially available everywhere in the world, we also provide a mirror on Baidu Drive (i.e., Baidu Pan). The dataset and features can be downloaded via the shared link https://pan.baidu.com/s/1m0mUVsq30rO6F1slxPZNHA with access code wwma.

Baidu Drive Root
 |
 |-- vqa
 |    |-- train.json
 |    |-- minival.json
 |    |-- nominival.json
 |    |-- test.json
 |
 |-- mscoco_imgfeat
 |    |-- train2014_obj36.zip
 |    |-- val2014_obj36.zip
 |    |-- test2015_obj36.zip
 |
 |-- vg_gqa_imgfeat -- *.zip.*  # Please read README.txt under this folder
 |-- gqa -- *.json
 |-- nlvr2_imgfeat -- *.zip.*   # Please read README.txt under this folder
 |-- nlvr2 -- *.json
 |-- lxmert -- *.json
 | 
 |-- pretrained -- model_LXRT.pth

Since Baidu Drive does not support extremely large files, we split a few feature zips into multiple smaller files. Please follow the README.txt under baidu_drive/vg_gqa_imgfeat and baidu_drive/nlvr2_imgfeat to concatenate them back into the feature zips with the cat command.

Code and Project Explanation

  • All code is in the folder src. The basics are in src/lxrt. The Python files related to pre-training and fine-tuning are in src/pretrain and src/tasks, respectively.
  • I kept the folders containing image features (e.g., mscoco_imgfeat) separate from the vision-and-language dataset folders (e.g., vqa, lxmert) because multiple vision-and-language datasets share common images.
  • We use the name lxmert for our framework and the name lxrt (Language, Cross-Modality, and object-Relationship Transformers) to refer to our models.
  • To be consistent with the name lxrt (Language, Cross-Modality, and object-Relationship Transformers), we use lxrXXX to denote the number of layers. E.g., lxr955 (used in the current pre-trained model) indicates a model with 9 Language layers, 5 cross-modality layers, and 5 object-Relationship layers. If we count a single-modality layer as half of a cross-modality layer, the total number of layers is (9 + 5) / 2 + 5 = 12, the same as BERT_BASE.
  • We share the weights between the two cross-modality attention sub-layers. Please check the visual_attention variable, which is used to compute both the lang->visn attention and the visn->lang attention. (I am sorry that the name visual_attention is misleading; it remains because I deleted the lang_attention there.) Weight sharing is mostly used to save computational resources, and it also (intuitively) helps force the visn/lang features into a joint subspace.
  • The box coordinates are not normalized from [0, 1] to [-1, 1], which looks like a typo but actually is not ;). Normalizing the coordinates would not affect the output of the box encoder (mathematically, and almost exactly numerically). (Hint: consider the LayerNorm in the positional encoding.)

Faster R-CNN Feature Extraction

We use the Faster R-CNN feature extractor demonstrated in "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering", CVPR 2018, and its released code in the Bottom-Up-Attention GitHub repo. It was trained on the Visual Genome dataset and implemented on top of a specific Caffe version.

To extract features with this Caffe Faster R-CNN, we publicly release a Docker image airsplay/bottom-up-attention on Docker Hub that takes care of all the dependencies and library installation. Instructions and examples are given below. You can also follow the installation instructions in the bottom-up-attention GitHub repo to set up the tool: https://github.com/peteanderson80/bottom-up-attention.

The BUTD feature extractor is widely used in many other projects. If you want to reproduce the results from their paper, feel free to use our docker as a tool.

Feature Extraction with Docker

Docker is an easy-to-use virtualization tool which allows you to plug and play without installing libraries.

The built Docker image for bottom-up-attention is released on Docker Hub and can be downloaded with the command:

sudo docker pull airsplay/bottom-up-attention

The Dockerfile can be downloaded here, which allows building with other CUDA versions.

After pulling the image, you can test running the Docker container with the command:

docker run --gpus all --rm -it airsplay/bottom-up-attention bash

If errors about --gpus all pop up, please read the next section.

Docker GPU Access

Note that the purpose of the argument --gpus all is to expose GPU devices to the docker container, and it requires Docker >= 19.03 along with nvidia-container-toolkit:

  1. Docker CE 19.03
  2. nvidia-container-toolkit

If you are running an older Docker version, either update it to 19.03 or use the flag --runtime=nvidia instead of --gpus all.

An Example: Feature Extraction for NLVR2

We demonstrate how to extract Faster R-CNN features of NLVR2 images.

  1. Please first follow the instructions on the NLVR2 official repo to get the images.

  2. Download the pre-trained Faster R-CNN model. Instead of using the default pre-trained model (trained with 10 to 100 boxes), we use the 'alternative pretrained model' which was trained with 36 boxes.

    wget 'https://www.dropbox.com/s/2h4hmgcvpaewizu/resnet101_faster_rcnn_final_iter_320000.caffemodel?dl=1' -O data/nlvr2_imgfeat/resnet101_faster_rcnn_final_iter_320000.caffemodel
  3. Run the Docker container with the command:

    docker run --gpus all -v /path/to/nlvr2/images:/workspace/images:ro -v /path/to/lxrt_public/data/nlvr2_imgfeat:/workspace/features --rm -it airsplay/bottom-up-attention bash

    -v mounts folders on the host OS into the Docker container.

    Note0: If it says something about 'privilege', add sudo before the command.

    Note1: If it says something about '--gpus all', it means that the GPU options are not correctly set. Please read Docker GPU Access for the instructions to allow GPU access.

    Note2: /path/to/nlvr2/images would contain subfolders train, dev, test1 and test2.

    Note3: Both '/path/to/nlvr2/images/' and '/path/to/lxrt_public' must be absolute paths.

  4. Extract the features inside the docker container. The extraction script is copied from butd/tools/generate_tsv.py and modified by Jie Lei and me.

    cd /workspace/features
    CUDA_VISIBLE_DEVICES=0 python extract_nlvr2_image.py --split train 
    CUDA_VISIBLE_DEVICES=0 python extract_nlvr2_image.py --split valid
    CUDA_VISIBLE_DEVICES=0 python extract_nlvr2_image.py --split test
  5. It takes around 5 to 6 hours for the training split and 1 to 2 hours for the valid and test splits. Since it is slow, I recommend running them in parallel if there are multiple GPUs, which can be achieved by changing the gpu_id in CUDA_VISIBLE_DEVICES=$gpu_id.

The features will be saved in train.tsv, valid.tsv, and test.tsv under the directory data/nlvr2_imgfeat, outside the Docker container. I have verified that the extracted image features are the same as the ones I provided for NLVR2 fine-tuning.

Yet Another Example: Feature Extraction for MS COCO Images

  1. Download the MS COCO train2014, val2014, and test2015 images from MS COCO official website.

  2. Download the pre-trained Faster R-CNN model.

    mkdir -p data/mscoco_imgfeat
    wget 'https://www.dropbox.com/s/2h4hmgcvpaewizu/resnet101_faster_rcnn_final_iter_320000.caffemodel?dl=1' -O data/mscoco_imgfeat/resnet101_faster_rcnn_final_iter_320000.caffemodel
  3. Run the docker container with the command:

    docker run --gpus all -v /path/to/mscoco/images:/workspace/images:ro -v $(pwd)/data/mscoco_imgfeat:/workspace/features --rm -it airsplay/bottom-up-attention bash

    Note: Option -v mounts folders outside the container to paths inside the container.

    Note1: Please use the absolute path to the MS COCO images folder. The images folder should contain the train2014, val2014, and test2015 sub-folders (the standard way of storing MS COCO images).

  4. Extract the features inside the docker container.

    cd /workspace/features
    CUDA_VISIBLE_DEVICES=0 python extract_coco_image.py --split train 
    CUDA_VISIBLE_DEVICES=0 python extract_coco_image.py --split valid
    CUDA_VISIBLE_DEVICES=0 python extract_coco_image.py --split test
  5. Exit the Docker container (by executing the exit command in bash). The extracted features will be saved under the folder data/mscoco_imgfeat.

Reference

If you find this project helpful, please cite our paper :)

@inproceedings{tan2019lxmert,
  title={LXMERT: Learning Cross-Modality Encoder Representations from Transformers},
  author={Tan, Hao and Bansal, Mohit},
  booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing},
  year={2019}
}

Acknowledgement

We thank the funding support from ARO-YIP Award #W911NF-18-1-0336, & awards from Google, Facebook, Salesforce, and Adobe.

We thank Peter Anderson for providing the Faster R-CNN code and pre-trained models in the Bottom-Up-Attention GitHub repo. We thank Hengyuan Hu for his PyTorch VQA implementation; our VQA implementation borrows its pre-processed answers. We thank Hugging Face for releasing the excellent PyTorch Transformers code.

We thank Drew A. Hudson for answering all our questions about the GQA specification. We thank Alane Suhr for helping test LXMERT on the NLVR2 unreleased test split and for providing a detailed analysis.

We thank all the authors and annotators of vision-and-language datasets (i.e., MS COCO, Visual Genome, VQA, GQA, NLVR2), which allowed us to develop a pre-trained model for vision-and-language tasks.

We thank Jie Lei and Licheng Yu for their helpful discussions. I also want to thank Shaoqing Ren for teaching me about vision when I was at MSRA. We also thank you for looking into our code. Please kindly contact us if you find any issues. Comments are always welcome.

LXRThanks.

Comments
  • CPU memory usage is too high and other queries

    Thanks for sharing this code. When I'm performing finetuning with VQA, my RAM usage blows up. With num_workers set to 4, it requires 207 GB. I've tried with different batch sizes also. The script with --tiny flag runs successfully. But when I'm loading both train and nominival, the memory usage blows up. I get memory can't be allocated. Do you know a workaround for this ? I think this is because we are storing all the features from faster_rcnn in RAM ?

    opened by prajjwal1 19
  • Bad performance of NLVR2.

    Hi, I also met the problem in https://github.com/airsplay/lxmert/issues/1 and I also only have the performance to be about 50: Epoch 0: Train 50.31 Epoch 0: Valid 50.86 Epoch 0: Best 50.86

    Epoch 1: Train 50.39 Epoch 1: Valid 49.14 Epoch 1: Best 50.86

    Epoch 2: Train 50.44 Epoch 2: Valid 49.14 Epoch 2: Best 50.86

    Epoch 3: Train 50.57 Epoch 3: Valid 50.86 Epoch 3: Best 50.86 I also tried torch == 1.0.1, but it still did not work. I also wanted to download the data in that link, while the link seems did not exist. Can you reload these features again? Thank you very much!

    opened by yangxuntu 18
  • The URLs are not accessable

    Hello, airsplay!

    Thanks for your generous sharing of this outstanding work!

    However, the URLs provided in this repository seem not accessible anymore. Could you please update them?

    Thank you in advance.

    The below is an example of this connection error.

    --2021-10-10 21:43:33--  http://nlp.cs.unc.edu/data/lxrt_noqa/Epoch19_LXRT.pth%0D
    Resolving nlp.cs.unc.edu (nlp.cs.unc.edu)... 152.2.128.53
    Connecting to nlp.cs.unc.edu (nlp.cs.unc.edu)|152.2.128.53|:80... connected.
    HTTP request sent, awaiting response... 503 Service Unavailable
    2021-10-10 21:44:33 ERROR 503: Service Unavailable.
    
    opened by BierOne 12
  • Not able to reproduce GQA(Train + BERT) 56.2

    Hi Thanks for creating this great repo. I have trouble reproducing GQA(Train + BERT), which hits 56.2 for GQA test-dev set. My result stops around 54.51 at around 13 epoch. My script is following:

    CUDA_VISIBLE_DEVICES=0,6 PYTHONPATH=$PYTHONPATH:./src
    python src/tasks/gqa.py
    --train train,valid --valid testdev
    --llayers 9 --xlayers 5 --rlayers 5
    --batchSize 128 --optim bert --lr 1e-4 --epochs 400
    --tqdm --output $output ${@:3} --multiGPU

    Is it related to batch size or something else?

    opened by yix081 6
  • VQA finetuning training time

    In this section of the readme, it says that fine-tuning should only take 2 hours per epoch:

    ''' If no bug came out, then the model is ready to be trained on the whole VQA corpus:

    bash run/vqa_finetune.bash 0 vqa_lxr955 It takes around 8 hours (2 hours per epoch * 4 epochs) to converge. The logs and model snapshots will be saved under folder snap/vqa/vqa_lxr955. The validation result after training will be around 69.7% to 70.2%. '''

    Is this with 4 GPUs? Because on my system with a single Titan XP, it is reporting 300+ hours/epoch

    image

    Is this an issue with my system or with the code? Because it seems even with 4 GPUs, we would still need 75 hours/epoch

    Thanks

    opened by johntiger1 6
  • Pre-training doesn't work

    Hello,

    I am trying to run the pre-training of the model again. When I run the command: bash run/lxmert_pretrain.bash 1,2 --multiGPU --tiny

    I get the following output:

    Load 174866 data from mscoco_train,mscoco_nominival,vgnococo
    Load an answer table of size 9500.
    Start to load Faster-RCNN detected objects from data/mscoco_imgfeat/train2014_obj36.tsv
    Loaded 500 images in file data/mscoco_imgfeat/train2014_obj36.tsv in 2 seconds.
    Start to load Faster-RCNN detected objects from data/mscoco_imgfeat/val2014_obj36.tsv
    Loaded 500 images in file data/mscoco_imgfeat/val2014_obj36.tsv in 2 seconds.
    Start to load Faster-RCNN detected objects from data/vg_gqa_imgfeat/vg_gqa_obj36.tsv
    Loaded 500 images in file data/vg_gqa_imgfeat/vg_gqa_obj36.tsv in 2 seconds.
    Use 33226 data in torch dataset
    
    Load 5000 data from mscoco_minival
    Load an answer table of size 9500.
    Start to load Faster-RCNN detected objects from data/mscoco_imgfeat/val2014_obj36.tsv
    Loaded 500 images in file data/mscoco_imgfeat/val2014_obj36.tsv in 2 seconds.
    Use 20707 data in torch dataset
    
    LXRT encoder with 9 l_layers, 5 x_layers, and 5 r_layers.
    Train from Scratch: re-initialize all BERT weights.
    Batch per epoch: 129
    Total Iters: 2580
    Warm up Iters: 129
      0%|                                                                                                                                | 0/129 [00:00<?, ?it/s]/mnt/8tera/claudio.greco/bert_foil/lxmert/venv_lxmert/lib/python3.6/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
      warnings.warn('Was asked to gather along dimension 0, but all '
    

    and nothing else happens.

    I guess I should see a progress bar or some intermediate information, right? Do you know how I could try to fix this issue?

    Thanks, Claudio

    opened by claudiogreco 5
  • Difference between NLVR2 Dev performance

    When you ran the NLVR2 fine-tuning and reported your result in this issue https://github.com/airsplay/lxmert/issues/1#issuecomment-523983982, the result (dev performance) was 74.39. Similarly, when I ran fine-tuning, I got about 74.33 for dev accuracy. How come the dev accuracy reported in the paper, in the table in the README, and in the NLVR2 leaderboard (http://lil.nlp.cornell.edu/nlvr/) is 74.9? Thanks for your help.

    opened by sanjayss34 5
  • Why is object classification loss multiplied with the Faster R-CNN confidence score?

    During training, mask_conf is multiplied to feature regression and object classification loss, which are defined here. It is reasonable to mask feature regression loss on masked regions, but I don't understand the reason of multiplying the Faster R-CNN confidence score (top object probability) to object classification loss (which is cross-entropy loss). Is this sort of knowledge distillation? This is not mentioned in the EMNLP paper.

    opened by j-min 3
  • Bad performance on NLVR2

    Hi, thanks for releasing your code! I'm not able to reproduce your fine-tuning result on NLVR2. I followed your instructions by downloading the pre-trained model, downloading the image features, pre-processing the nlvr2 JSON files, and running the nlvr2_finetune.bash script as is. However, I get the following results, which are much lower than the result you reported. Do you know why this might be happening?

    Epoch 0: Train 52.32 Epoch 0: Valid 50.86 Epoch 0: Best 50.86

    Epoch 1: Train 50.50 Epoch 1: Valid 49.14 Epoch 1: Best 50.86

    Epoch 2: Train 50.56 Epoch 2: Valid 49.31 Epoch 2: Best 50.86

    Epoch 3: Train 54.83 Epoch 3: Valid 51.65 Epoch 3: Best 51.65

    opened by sanjayss34 3
  • validation score of VQA

    I downloaded the pre-trained model, and fine-tuned the VQA tasks. But I found while fine-tuning, the validation scores of VQA decreases in the first 4 epochs. Is this normal? I think it is hard to believe that the validation scores decrease but test scores increase...

    opened by Yucheng-Han 2
  • Get embeddings of language encoder and Object-Relationship Encoder?

    Any suggestion of getting embeddings of language encoder and Object-Relationship Encoder? I refers to output_hidden_state mentioned in this blog as well as the official tutorial.

    opened by yezhengli-Mr9 2
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have began a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15 year old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsantized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks to see if all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.

    If you have further questions you may contact us through this projects lead researcher Kasimir Schulz.

    opened by TrellixVulnTeam 0
  • how to export onnx or tensorrt model when I using lxrt.modeling.LXRTModel

    When I using torch.export.onnx, it reported an error

    /home/pacaep/env/py38/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2157.)
      return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
    LXRT encoder with 12 l_layers, 5 x_layers, and 0 r_layers.
    /home/pacaep/env/py38/lib/python3.8/site-packages/torch/onnx/utils.py:90: UserWarning: 'enable_onnx_checker' is deprecated and ignored. It will be removed in the next PyTorch release. To proceed despite ONNX checker failures, catch torch.onnx.ONNXCheckerError.
      warnings.warn("'enable_onnx_checker' is deprecated and ignored. It will be removed in "
    /home/pacaep/env/py38/lib/python3.8/site-packages/torch/onnx/utils.py:103: UserWarning: `use_external_data_format' is deprecated and ignored. Will be removed in next PyTorch release. The code will work as it is False if models are not larger than 2GB, Otherwise set to False because of size limits imposed by Protocol Buffers.
      warnings.warn("`use_external_data_format' is deprecated and ignored. Will be removed in next "
    /home/pacaep/wbdc2022-baseline/utils/swin.py:449: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      assert H == self.img_size[0] and W == self.img_size[1], \
    /home/pacaep/wbdc2022-baseline/utils/swin.py:242: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      assert L == H * W, "input feature has wrong size"
    /home/pacaep/wbdc2022-baseline/utils/swin.py:46: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
      x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    /home/pacaep/wbdc2022-baseline/utils/swin.py:124: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
      qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
    /home/pacaep/wbdc2022-baseline/utils/swin.py:62: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      B = int(windows.shape[0] / (H * W / window_size / window_size))
    /home/pacaep/wbdc2022-baseline/utils/swin.py:139: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
      attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
    /home/pacaep/wbdc2022-baseline/utils/swin.py:319: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      assert L == H * W, "input feature has wrong size"
    /home/pacaep/env/py38/lib/python3.8/site-packages/torch/onnx/symbolic_helper.py:325: UserWarning: Type cannot be inferred, which might cause exported graph to produce incorrect results.
      warnings.warn("Type cannot be inferred, which might cause exported graph to produce incorrect results.")
    [W shape_type_inference.cpp:434] Warning: Constant folding in symbolic shape inference fails: index_select(): Index is supposed to be a vector
    Exception raised from index_select_out_cpu_ at ../aten/src/ATen/native/TensorAdvancedIndexing.cpp:887 (most recent call first):
    frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7fcb6f42cd62 in /home/pacaep/env/py38/lib/python3.8/site-packages/torch/lib/libc10.so)
    frame #1: at::native::index_select_out_cpu_(at::Tensor const&, long, at::Tensor const&, at::Tensor&) + 0x3a9 (0x7fcbb435a189 in /home/pacaep/env/py38/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
    frame #2: at::native::index_select_cpu_(at::Tensor const&, long, at::Tensor const&) + 0xe6 (0x7fcbb435c146 in /home/pacaep/env/py38/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
    frame #3: <unknown function> + 0x1d37f12 (0x7fcbb4a53f12 in /home/pacaep/env/py38/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
    frame #4: at::_ops::index_select::redispatch(c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&) + 0xb9 (0x7fcbb45ef099 in /home/pacaep/env/py38/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
    frame #5: <unknown function> + 0x3250ac3 (0x7fcbb5f6cac3 in /home/pacaep/env/py38/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
    frame #6: <unknown function> + 0x32510f5 (0x7fcbb5f6d0f5 in /home/pacaep/env/py38/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
    frame #7: at::_ops::index_select::call(at::Tensor const&, long, at::Tensor const&) + 0x166 (0x7fcbb466ece6 in /home/pacaep/env/py38/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
    frame #8: torch::jit::onnx_constant_fold::runTorchBackendForOnnx(torch::jit::Node const*, std::vector<at::Tensor, std::allocator<at::Tensor> >&, int) + 0x1b5f (0x7fcc366d275f in /home/pacaep/env/py38/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
    frame #9: <unknown function> + 0xbbdc72 (0x7fcc36719c72 in /home/pacaep/env/py38/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
    frame #10: torch::jit::ONNXShapeTypeInference(torch::jit::Node*, std::map<std::string, c10::IValue, std::less<std::string>, std::allocator<std::pair<std::string const, c10::IValue> > > const&, int) + 0xa8e (0x7fcc3671f3ce in /home/pacaep/env/py38/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
    frame #11: <unknown function> + 0xbc4ed4 (0x7fcc36720ed4 in /home/pacaep/env/py38/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
    frame #12: <unknown function> + 0xb356dc (0x7fcc366916dc in /home/pacaep/env/py38/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
    frame #13: <unknown function> + 0x2a588e (0x7fcc35e0188e in /home/pacaep/env/py38/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
    <omitting python frames>
    frame #41: __libc_start_main + 0xf5 (0x7fcc6e9fb555 in /usr/lib64/libc.so.6)
     (function ComputeConstantFolding)
    Traceback (most recent call last):
      File "torch2onnx.py", line 33, in <module>
        torch.onnx.export(model, (frame_input,frame_mask,title_input,title_mask), './save/model.onnx', verbose=False, opset_version=12,
      File "/home/pacaep/env/py38/lib/python3.8/site-packages/torch/onnx/__init__.py", line 316, in export
        return utils.export(model, args, f, export_params, verbose, training,
      File "/home/pacaep/env/py38/lib/python3.8/site-packages/torch/onnx/utils.py", line 107, in export
        _export(model, args, f, export_params, verbose, training, input_names, output_names,
      File "/home/pacaep/env/py38/lib/python3.8/site-packages/torch/onnx/utils.py", line 724, in _export
        _model_to_graph(model, args, verbose, input_names,
      File "/home/pacaep/env/py38/lib/python3.8/site-packages/torch/onnx/utils.py", line 544, in _model_to_graph
        params_dict = torch._C._jit_pass_onnx_constant_fold(graph, params_dict,
    IndexError: index_select(): Index is supposed to be a vector
    

    please teach me how to deal with it, thanks

    opened by aeeeeeep 0
  • Pre-training mission error

    I encountered the following errors while running the pre-training task of large data sets and hope the author can provide answers.

    Start to load Faster-RCNN detected objects from data/mscoco_imgfeat/train2014_obj36.tsv Traceback (most recent call last): File "src/pretrain/lxmert_pretrain.py", line 46, in train_tuple = get_tuple(args.train, args.batch_size, shuffle=True, drop_last=True) File "src/pretrain/lxmert_pretrain.py", line 33, in get_tuple tset = LXMERTTorchDataset(dset, topk) File "/home/af/Downloads/zyd/lxmert-master/src/pretrain/lxmert_data.py", line 101, in init img_data.extend(load_obj_tsv(Split2ImgFeatPath[source], topk)) File "/home/af/Downloads/zyd/lxmert-master/src/utils.py", line 45, in load_obj_tsv item[key] = np.frombuffer(base64.b64decode(item[key]), dtype=dtype) ValueError: buffer size must be a multiple of element size

    opened by MaxwellZYD 0
  • Fine-tuning VQA on multiple gpus

    I'm trying to reproduce some results with a downloaded pre-train model.But When I set GPU_ID to 0,1,2,3,the program seems not to run on multiple gpus as I expected.I wonder how to properly fine-tune the pretrained model on multiple gpus.

    opened by simplelifetime 0