Scalable training for dense retrieval models.

Overview

A scalable implementation of dense retrieval, covering training, embedding generation, retrieval, and evaluation.

Training on cluster

By default, training runs locally on a single GPU:

PYTHONPATH=.:$PYTHONPATH python dpr_scale/main.py trainer.gpus=1
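
Additional Hydra overrides can be appended to the same command for quick local experiments. A minimal sketch (the override keys below are the same ones used in the examples later in this README; the values are only illustrative):

PYTHONPATH=.:$PYTHONPATH python dpr_scale/main.py trainer.gpus=1 trainer.max_epochs=2 datamodule.batch_size=8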

SLURM Training

To train the model on SLURM, run:

PYTHONPATH=.:$PYTHONPATH python dpr_scale/main.py -m trainer=slurm trainer.num_nodes=2 trainer.gpus=2
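
The -m flag launches a Hydra multirun job through the SLURM launcher, so launcher settings can be overridden on the command line as well. For example, to raise the job time limit (hydra.launcher.timeout_min is the same key used in the pretraining commands below; the value here is illustrative and depends on your launcher config):

PYTHONPATH=.:$PYTHONPATH python dpr_scale/main.py -m trainer=slurm trainer.num_nodes=2 trainer.gpus=2 hydra.launcher.timeout_min=4320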

Reproduce DPR on 8 GPUs

PYTHONPATH=.:$PYTHONPATH python dpr_scale/main.py -m --config-name nq.yaml  +hydra.launcher.name=dpr_stl_nq_reproduce

Generate embeddings on Wikipedia

PYTHONPATH=.:$PYTHONPATH python dpr_scale/generate_embeddings.py -m --config-name nq.yaml datamodule=generate datamodule.test_path=psgs_w100.tsv +task.ctx_embeddings_dir=<CTX_EMBEDDINGS_DIR> +task.checkpoint_path=<CHECKPOINT_PATH>
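
As a concrete illustration with the placeholders filled in (the paths below are hypothetical; substitute your own checkpoint and output locations):

PYTHONPATH=.:$PYTHONPATH python dpr_scale/generate_embeddings.py -m --config-name nq.yaml datamodule=generate datamodule.test_path=psgs_w100.tsv +task.ctx_embeddings_dir=/checkpoints/dpr/ctx_embeddings +task.checkpoint_path=/checkpoints/dpr/nq_model.ckpt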

Get retrieval results

Currently this runs on a single GPU. Use the <CTX_EMBEDDINGS_DIR> from the previous step.

PYTHONPATH=.:$PYTHONPATH python dpr_scale/run_retrieval.py --config-name nq.yaml trainer=gpu_1_host trainer.gpus=1 +task.output_path=<PATH_TO_OUTPUT_JSON> +task.ctx_embeddings_dir=<CTX_EMBEDDINGS_DIR> +task.checkpoint_path=<CHECKPOINT_PATH> +task.passages=psgs_w100.tsv datamodule.test_path=<PATH_TO_QUERIES_JSONL>
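
Continuing the hypothetical example above, with the same embeddings directory and checkpoint (all paths are illustrative):

PYTHONPATH=.:$PYTHONPATH python dpr_scale/run_retrieval.py --config-name nq.yaml trainer=gpu_1_host trainer.gpus=1 +task.output_path=/checkpoints/dpr/nq_retrieval.json +task.ctx_embeddings_dir=/checkpoints/dpr/ctx_embeddings +task.checkpoint_path=/checkpoints/dpr/nq_model.ckpt +task.passages=psgs_w100.tsv datamodule.test_path=nq-test.jsonl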

Generate query embeddings

Alternatively, query embedding generation and retrieval can be run as separate steps. After the query embeddings are generated with the following command, use the run_retrieval_fb.py or run_retrieval_multiset.py script to perform retrieval.

PYTHONPATH=.:$PYTHONPATH python dpr_scale/generate_query_embeddings.py -m --config-name nq.yaml trainer.gpus=1 datamodule.test_path=<PATH_TO_QUERIES_JSONL> +task.ctx_embeddings_dir=<CTX_EMBEDDINGS_DIR> +task.checkpoint_path=<CHECKPOINT_PATH> +task.query_emb_output_path=<OUTPUT_TO_QUERY_EMB>

Get evaluation metrics for a given JSON output file

python dpr_scale/eval_dpr.py --retrieval <PATH_TO_OUTPUT_JSON> --topk 1 5 10 20 50 100 
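
Continuing the hypothetical example, the retrieval output written above can be scored directly (the path is illustrative):

python dpr_scale/eval_dpr.py --retrieval /checkpoints/dpr/nq_retrieval.json --topk 1 5 10 20 50 100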

Get evaluation metrics for MSMARCO

python dpr_scale/msmarco_eval.py ~data/msmarco/qrels.dev.small.tsv <PATH_TO_OUTPUT_JSON>

Domain-matched Pre-training Tasks for Dense Retrieval

Paper: https://arxiv.org/abs/2107.13602

The sections below provide links to datasets and pretrained models, as well as instructions to prepare the datasets, pretrain, and fine-tune the models.

Q&A Datasets

PAQ

Download the dataset from here

Conversational Datasets

You can download the datasets from the respective tables below.

Reddit

| File | Download Link |
|-------|---------------|
| train | download |
| dev | download |

ConvAI2

| File | Download Link |
|-------|---------------|
| train | download |
| dev | download |

DSTC7

| File | Download Link |
|-------|---------------|
| train | download |
| dev | download |
| test | download |

Prepare the data by downloading the tarball linked here and running the command below.

DSTC7_DATA_ROOT=<path_of_dir_where_the_data_is_extracted>
python dpr_scale/data_prep/prep_conv_datasets.py \
    --dataset dstc7 \
    --in_file_path $DSTC7_DATA_ROOT/ubuntu_train_subtask_1_augmented.json \
    --out_file_path $DSTC7_DATA_ROOT/ubuntu_train.jsonl
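
The same script can be run for the other splits. A sketch for the dev split, assuming the extracted archive contains a file named ubuntu_dev_subtask_1.json (check the actual filename in your extraction):

python dpr_scale/data_prep/prep_conv_datasets.py \
    --dataset dstc7 \
    --in_file_path $DSTC7_DATA_ROOT/ubuntu_dev_subtask_1.json \
    --out_file_path $DSTC7_DATA_ROOT/ubuntu_dev.jsonl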

Ubuntu V2

| File | Download Link |
|-------|---------------|
| train | download |
| dev | download |
| test | download |

Prepare the data by downloading the tarball linked here and running the command below.

UBUNTUV2_DATA_ROOT=<path_of_dir_where_the_data_is_extracted>
python dpr_scale/data_prep/prep_conv_datasets.py \
    --dataset ubuntu2 \
    --in_file_path $UBUNTUV2_DATA_ROOT/train.csv \
    --out_file_path $UBUNTUV2_DATA_ROOT/train.jsonl
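
The dev and test splits can be converted the same way. A sketch for the dev split, assuming the extracted archive contains valid.csv (verify the filename in your extraction):

python dpr_scale/data_prep/prep_conv_datasets.py \
    --dataset ubuntu2 \
    --in_file_path $UBUNTUV2_DATA_ROOT/valid.csv \
    --out_file_path $UBUNTUV2_DATA_ROOT/valid.jsonl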

Pretraining DPR

Pretrained Checkpoints

| Pretrained Model | Dataset | Download Link |
|------------------|---------|---------------|
| BERT-base | PAQ | download |
| BERT-large | PAQ | download |
| BERT-base | Reddit | download |
| BERT-large | Reddit | download |
| RoBERTa-base | Reddit | download |
| RoBERTa-large | Reddit | download |

Pretraining on PAQ dataset

DPR_ROOT=<path_of_your_repo's_root>
MODEL="bert-large-uncased"
NODES=8
BSZ=16
MAX_EPOCHS=20
LR=1e-5
TIMEOUT_MINS=4320
EXP_DIR=<path_of_the_experiment_dir>
TRAIN_PATH=<path_of_the_training_data_file>
mkdir -p ${EXP_DIR}/logs
PYTHONPATH=$DPR_ROOT python ${DPR_ROOT}/dpr_scale/main.py -m \
    --config-dir ${DPR_ROOT}/dpr_scale/conf \
    --config-name nq.yaml \
    hydra.launcher.timeout_min=$TIMEOUT_MINS \
    hydra.sweep.dir=${EXP_DIR} \
    trainer.num_nodes=${NODES} \
    task.optim.lr=${LR} \
    task.model.model_path=${MODEL} \
    trainer.max_epochs=${MAX_EPOCHS} \
    datamodule.train_path=$TRAIN_PATH \
    datamodule.batch_size=${BSZ} \
    datamodule.num_negative=1 \
    datamodule.num_val_negative=10 \
    datamodule.num_test_negative=50 > ${EXP_DIR}/logs/log.out 2> ${EXP_DIR}/logs/log.err &
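
Since the job runs in the background and redirects its output, progress can be followed from the log files, e.g.:

tail -f ${EXP_DIR}/logs/log.out ${EXP_DIR}/logs/log.err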

Pretraining on Reddit dataset

# Use a batch size of 16 for BERT and RoBERTa base models.
BSZ=4
NODES=8
MAX_EPOCHS=5
WARMUP_STEPS=10000
LR=1e-5
MODEL="roberta-large"
EXP_DIR=<path_of_the_experiment_dir>
mkdir -p ${EXP_DIR}/logs
PYTHONPATH=. python dpr_scale/main.py -m \
    --config-dir ${DPR_ROOT}/dpr_scale/conf \
    --config-name reddit.yaml \
    hydra.launcher.nodes=${NODES} \
    hydra.sweep.dir=${EXP_DIR} \
    trainer.num_nodes=${NODES} \
    task.optim.lr=${LR} \
    task.model.model_path=${MODEL} \
    trainer.max_epochs=${MAX_EPOCHS} \
    task.warmup_steps=${WARMUP_STEPS} \
    datamodule.batch_size=${BSZ} > ${EXP_DIR}/logs/log.out 2> ${EXP_DIR}/logs/log.err &
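
For the base models, only the model name and batch size change, per the comment above; the rest of the command is identical. A sketch of the substitutions (the BERT name shown is the standard Hugging Face identifier):

MODEL="roberta-base"  # or "bert-base-uncased"
BSZ=16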

Fine-tuning DPR on downstream tasks/datasets

Fine-tune the pretrained PAQ checkpoint

# You can also try 2e-5 or 5e-5. Usually these 3 learning rates work best.
LR=1e-5
# Use a batch size of 32 for BERT and RoBERTa base models.
BSZ=12
MODEL="bert-large-uncased"
MAX_EPOCHS=40
WARMUP_STEPS=1000
NODES=1
PRETRAINED_CKPT_PATH=<path_of_checkpoint_pretrained_on_PAQ>
NAME=<experiment_name>
EXP_DIR=<path_of_the_experiment_dir>
mkdir -p ${EXP_DIR}/logs
PYTHONPATH=. python dpr_scale/main.py -m \
    --config-dir ${DPR_ROOT}/dpr_scale/conf \
    --config-name nq.yaml \
    hydra.launcher.name=${NAME} \
    hydra.sweep.dir=${EXP_DIR} \
    trainer.num_nodes=${NODES} \
    trainer.max_epochs=${MAX_EPOCHS} \
    datamodule.num_negative=1 \
    datamodule.num_val_negative=25 \
    datamodule.num_test_negative=50 \
    +trainer.val_check_interval=150 \
    task.warmup_steps=${WARMUP_STEPS} \
    task.optim.lr=${LR} \
    task.pretrained_checkpoint_path=$PRETRAINED_CKPT_PATH \
    task.model.model_path=${MODEL} \
    datamodule.batch_size=${BSZ} > ${EXP_DIR}/logs/log.out 2> ${EXP_DIR}/logs/log.err &
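
To fine-tune a base model instead, only the model name and batch size need to change, per the comments above; the rest of the command stays the same. A sketch of the substitutions (the BERT name shown is the standard Hugging Face identifier):

MODEL="bert-base-uncased"
BSZ=32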

Fine-tune the pretrained Reddit checkpoint

Batch sizes that worked on Volta 32GB GPUs for the respective models and datasets:

| Model | Dataset | Batch Size |
|-------|---------|------------|
| BERT/RoBERTa base | ConvAI2 | 64 |
| BERT/RoBERTa large | ConvAI2 | 16 |
| BERT/RoBERTa base | DSTC7 | 24 |
| BERT/RoBERTa large | DSTC7 | 8 |
| BERT/RoBERTa base | Ubuntu V2 | 64 |
| BERT/RoBERTa large | Ubuntu V2 | 16 |

# Change the config file name to convai2.yaml or dstc7.yaml for the respective datasets.
CONFIG_FILE_NAME=ubuntuv2.yaml
# You can also try 2e-5 or 5e-5. Usually these 3 learning rates work best.
LR=1e-5
BSZ=16
NODES=1
MAX_EPOCHS=5
WARMUP_STEPS=10000
MODEL="roberta-large"
PRETRAINED_CKPT_PATH=<path_of_checkpoint_pretrained_on_reddit>
EXP_DIR=<path_of_the_experiment_dir>
mkdir -p ${EXP_DIR}/logs
PYTHONPATH=${DPR_ROOT} python ${DPR_ROOT}/dpr_scale/main.py -m \
    --config-dir=${DPR_ROOT}/dpr_scale/conf \
    --config-name=$CONFIG_FILE_NAME \
    hydra.launcher.nodes=${NODES} \
    hydra.sweep.dir=${EXP_DIR} \
    trainer.num_nodes=${NODES} \
    trainer.max_epochs=${MAX_EPOCHS} \
    +trainer.val_check_interval=150 \
    task.pretrained_checkpoint_path=$PRETRAINED_CKPT_PATH \
    task.warmup_steps=${WARMUP_STEPS} \
    task.optim.lr=${LR} \
    task.model.model_path=$MODEL \
    datamodule.batch_size=${BSZ} > ${EXP_DIR}/logs/log.out 2> ${EXP_DIR}/logs/log.err &
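
To fine-tune on ConvAI2 or DSTC7 instead, swap the config name and pick the batch size from the table above. For example, for a base model on ConvAI2 (other settings unchanged; the values follow the table):

CONFIG_FILE_NAME=convai2.yaml
MODEL="roberta-base"
BSZ=64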

License

dpr-scale is currently licensed under CC-BY-NC 4.0.

Comments
  • BEIR reproduction

    Thanks for providing this!

    Do you have the scripts for reproducing the results of SPAR on BEIR benchmark?

    Particularly, did you tune the concatenation weight for BEIR evaluation?

    opened by ziqing-huang 4
  • Question about deepspeed option

    Dear colleagues,

    During the training process on several GPUs, I get an exception like this:

      File "/root/dpr-scale/dpr_scale/task/dpr_task.py", line 272, in validation_epoch_end
        self._eval_epoch_end(valid_outputs)
      File "/root/dpr-scale/dpr_scale/task/dpr_task.py", line 266, in _eval_epoch_end
        self.log_dict(metrics, on_epoch=True, sync_dist=True)
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 343, in log_dict
        self.log(
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 286, in log
        self._results.log(
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/core/step_result.py", line 149, in log
        value = sync_fn(value, group=sync_dist_group, reduce_op=sync_dist_op)
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 290,
    in reduce
        output = sync_ddp_if_available(output, group, reduce_op)
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/utilities/distributed.py", line 129, in s
    ync_ddp_if_available
        return sync_ddp(result, group=group, reduce_op=reduce_op)
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/utilities/distributed.py", line 162, in s
    ync_ddp
        torch.distributed.all_reduce(result, op=op, group=group, async_op=False)
      File "/root/.local/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 1287, in all_r
    educe
        work = group.allreduce([tensor], opts)
    RuntimeError: Tensors must be CUDA and dense
    

    How can I fix this error at the validation step?

    My current hyperparameters for multi-GPU training are:

    accumulate_grad_batches: 1
    plugins: deepspeed
    accelerator: ddp
    precision: 16
    

    Thanks in advance.

    opened by roddar92 4
  • Embeddings generation without Trainer

    Dear colleagues,

    Do you know how to generate embeddings for all contexts without using the Trainer? At initialization, I only want to load a model from my checkpoint and then compute query embeddings and retrieve documents as an inference step.

    Thanks, Daria

    opened by roddar92 2
  • Some questions about 'generating training queries'

    To generate our own training data for Lambda, we want to create training queries. At this time, I have a question about the code in 'dpr_scale/utils/prep_wiki_exp.py'.

    Looking at the code above, it can be seen that the query is obtained from passage_sents.

    However, if you look at the source of passage_sents, you can see that it was taken from the document. Am I understanding this correctly?

    In the end, it seems that positive_ctxs stores the passage taken from passage_sents, and I wonder whether the query and context are used correctly.

    If you have time, please reply. Thank you.

    opened by jjonhwa 2
  • what kind of bug might happen when num_workers > 0?

    Hi @ccsasuke

    I noticed you mentioned that num_workers > 0 bugs out right now: https://github.com/facebookresearch/dpr-scale/blob/2a6d3906ee163c4f0025841a3e30ebf82ebf49bb/dpr_scale/datamodule/dpr.py#L167 However, when I set num_workers = 8, the code seems to work. Could you point out what kind of bug might happen? Or did you forget to remove the comment after fixing the bug, since some configs do set num_workers > 10? https://github.com/facebookresearch/dpr-scale/blob/da2f594d22b499dd8d45bd8d8e9d11455e2c5efc/dpr_scale/conf/wiki_ict.yaml#L24

    opened by Liangtaiwan 1
  • Error during embeddings generation

    Dear colleagues, when I try to generate embeddings, I have an error:

    Testing: 100%|████████████████████████████████████████████████████████████████████████████| 1851/1851 [02:41<00:00, 13.31it/s]
    Writing tensor of size torch.Size([29606, 768]) to /root/dpr/ctx_embeddings/reps_0000.pkl
    Error executing job with overrides: ['trainer.gpus=1', 'datamodule=generate', 'datamodule.test_path=/root/dpr/python_docs_w100.tsv', 'datamodule.test_batch_size=16', '+task.ctx_embeddings_dir=/root/dpr/ctx_embeddings', '+task.checkpoint_path=/root/dpr/trained_only_by_answers.ckpt', '+task.pretrained_checkpoint_path=/root/dpr/trained_only_by_answers.ckpt']
    Traceback (most recent call last):
      File "/root/dpr-scale/dpr_scale/generate_embeddings.py", line 30, in <module>
        main()
      File "/root/.local/lib/python3.9/site-packages/hydra/main.py", line 48, in decorated_main
        _run_hydra(
      File "/root/.local/lib/python3.9/site-packages/hydra/_internal/utils.py", line 385, in _run_hydra
        run_and_report(
      File "/root/.local/lib/python3.9/site-packages/hydra/_internal/utils.py", line 214, in run_and_report
        raise ex
      File "/root/.local/lib/python3.9/site-packages/hydra/_internal/utils.py", line 211, in run_and_report
        return func()
      File "/root/.local/lib/python3.9/site-packages/hydra/_internal/utils.py", line 386, in <lambda>
        lambda: hydra.multirun(
      File "/root/.local/lib/python3.9/site-packages/hydra/_internal/hydra.py", line 140, in multirun
        ret = sweeper.sweep(arguments=task_overrides)
      File "/root/.local/lib/python3.9/site-packages/hydra/_internal/core_plugins/basic_sweeper.py", line 161, in sweep
        _ = r.return_value
      File "/root/.local/lib/python3.9/site-packages/hydra/core/utils.py", line 233, in return_value
        raise self._return_value
      File "/root/.local/lib/python3.9/site-packages/hydra/core/utils.py", line 160, in run_job
        ret.return_value = task_function(task_cfg)
      File "/root/dpr-scale/dpr_scale/generate_embeddings.py", line 26, in main
        trainer.test(task, datamodule=datamodule)
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 914, in test
        results = self.__test_given_model(model, test_dataloaders)
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 972, in __test_given_model
        results = self.fit(model)
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 498, in fit
        self.dispatch()
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 539, in dispatch
        self.accelerator.start_testing(self)
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 76, in start_testing
        self.training_type_plugin.start_testing(trainer)
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 118, in start_testing
        self._results = trainer.run_test()
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 785, in run_test
        eval_loop_results, _ = self.run_evaluation()
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in run_evaluation
        deprecated_eval_results = self.evaluation_loop.evaluation_epoch_end()
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 187, in evaluation_epoch_end
        deprecated_results = self.__run_eval_epoch_end(self.num_dataloaders)
      File "/root/.local/lib/python3.9/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 219, in __run_eval_epoch_end
        eval_results = model.test_epoch_end(eval_results)
      File "/root/dpr-scale/dpr_scale/task/dpr_eval_task.py", line 49, in test_epoch_end
        torch.distributed.barrier()  # make sure rank 0 waits for all to complete
      File "/root/.local/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2708, in barrier
        default_pg = _get_default_group()
      File "/root/.local/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 410, in _get_default_group
        raise RuntimeError(
    RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
    Testing: 100%|██████████| 1851/1851 [02:42<00:00, 11.40it/s]
    

    Do you know how to fix it?

    opened by roddar92 1
  • KeyError: 'positive_ctxs' when running run_retrieval.py for nq-test.jsonl

    When I run run_retrieval.py with nq-test.jsonl as the test file, I get KeyError: 'positive_ctxs', since nq-test.jsonl does not have positive_ctxs. Why do we need positive_ctxs at test time?

    Traceback (most recent call last):
      File "/home/default/persistent_drive/dpr_scale/dpr_scale/run_retrieval.py", line 83, in main
        trainer.test(task, datamodule=datamodule)
      File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 914, in test
        results = self.__test_given_model(model, test_dataloaders)
      File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 972, in __test_given_model
        results = self.fit(model)
      File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 498, in fit
        self.dispatch()
      File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 539, in dispatch
        self.accelerator.start_testing(self)
      File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 76, in start_testing
        self.training_type_plugin.start_testing(trainer)
      File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 118, in start_testing
        self._results = trainer.run_test()
      File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 785, in run_test
        eval_loop_results, _ = self.run_evaluation()
      File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 711, in run_evaluation
        for batch_idx, batch in enumerate(dataloader):
      File "/usr/local/lib64/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
        data = self._next_data()
      File "/usr/local/lib64/python3.9/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
        data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
      File "/usr/local/lib64/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
        return self.collate_fn(data)
      File "/home/default/persistent_drive/dpr_scale/dpr_scale/datamodule/dpr.py", line 138, in collate_test
        return self.collate(batch, "test")
      File "/home/default/persistent_drive/dpr_scale/dpr_scale/datamodule/dpr.py", line 203, in collate
        return self.dpr_transform(batch, stage)
      File "/usr/local/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/default/persistent_drive/dpr_scale/dpr_scale/transforms/dpr_transform.py", line 85, in forward
        contexts_pos = row["positive_ctxs"]
    KeyError: 'positive_ctxs'

    opened by mei16 6