Code for our ACL 2022 paper "Graph Pre-training for AMR Parsing and Generation"

Overview

AMRBART

An implementation of the ACL 2022 paper "Graph Pre-training for AMR Parsing and Generation". You may find our paper here (arXiv).

Requirements

  • python 3.8
  • pytorch 1.8
  • transformers 4.8.2
  • pytorch-lightning 1.5.0
  • Tesla V100 or A100

We recommend using conda to manage virtual environments:

conda env update --name <env> --file requirements.yml

We also provide a docker image here.

Data Processing

You may download the AMR corpora at LDC.

We follow SPRING to preprocess AMR graphs:

# 1. install SPRING
cd spring && pip install -e .
# 2. preprocess the data
bash run-preprocess.sh
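
To inspect the preprocessed graphs, the minimal sketch below uses the third-party penman library (an extra dependency, not something our scripts require) to round-trip an AMR in standard PENMAN notation:

import penman

# "The boy wants to go." in standard PENMAN notation.
amr_string = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))
"""

graph = penman.decode(amr_string)   # parse the string into a Graph object
print(graph.top)                    # root variable: 'w'
print(graph.triples)                # [('w', ':instance', 'want-01'), ...]
print(penman.encode(graph))         # serialize back to PENMAN notation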

Pre-training

bash run-posttrain-bart-textinf-joint-denoising-6task-large-unified-V100.sh /path/to/BART/

Fine-tuning

For AMR Parsing, run

bash finetune_AMRbart_amrparsing.sh /path/to/pre-trained/AMRBART/ gpu_id

For AMR-to-text Generation, run

bash finetune_AMRbart_amr2text.sh /path/to/pre-trained/AMRBART/ gpu_id

Evaluation

For AMR Parsing, run

bash eval_AMRbart_amrparsing.sh /path/to/fine-tuned/AMRBART/ gpu_id

For AMR-to-text Generation, run

bash eval_AMRbart_amr2text.sh /path/to/fine-tuned/AMRBART/ gpu_id

Inference on your own data

If you want to run our code on your own data, first convert your data into the format here (an illustrative sketch of the expected file layout follows the commands below), then run the corresponding script.

For AMR Parsing, run

bash inference_amr.sh /path/to/fine-tuned/AMRBART/ gpu_id

For AMR-to-text Generation, run

bash inference_text.sh /path/to/fine-tuned/AMRBART/ gpu_id
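
The expected input layout is defined by the example files linked above. As a hedged sketch only (the src/tgt field names are inferred from the dataset logs in the comments section, and the current scripts read train/val/test splits even at inference time), preparing your own sentences for AMR parsing could look like:

import json

# Hypothetical input sentences for AMR parsing; for AMR-to-text generation,
# "src" would hold a linearized AMR graph instead.
sentences = ["The boy wants to go.", "It is raining."]
records = [{"src": s, "tgt": ""} for s in sentences]

# The inference scripts currently expect train/val/test files, so write the
# same records to all three splits (an assumption based on the issues below).
for split in ("train", "val", "test"):
    with open(f"{split}.jsonl", "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")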

Pre-trained Models

Pre-trained AMRBART

Setting          Params   Checkpoint
AMRBART-base     142M     model
AMRBART-large    409M     model
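
Assuming the checkpoints are hosted on the Hugging Face hub under the identifiers referenced in the comments below (e.g. xfbai/AMRBART-large), a minimal loading sketch looks like this; note that a plain BartTokenizer may not cover the AMR-specific tokens (see the tokenizer discussion in the comments):

from transformers import BartForConditionalGeneration, BartTokenizer

# Hub identifier taken from the issues below; availability may vary.
model_id = "xfbai/AMRBART-large"

tokenizer = BartTokenizer.from_pretrained(model_id)   # may lack AMR-specific tokens
model = BartForConditionalGeneration.from_pretrained(model_id)
print(model.config.vocab_size, len(tokenizer))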

Fine-tuned models on AMR-to-Text Generation

Setting                   BLEU (tok)   BLEU (detok)   Checkpoint   Output
AMRBART-large (AMR2.0)    49.8         45.7           model        output
AMRBART-large (AMR3.0)    49.2         45.0           model        output

To get the tokenized BLEU score, you need to use the scorer we provide here. We use this script to ensure comparability with previous approaches.
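
The detokenized score is, presumably, plain corpus-level BLEU over detokenized text; the sketch below uses sacrebleu purely as an illustration and is not a substitute for the scorer above when reporting the tokenized number:

import sacrebleu

# Hypothetical system outputs and references (detokenized text).
hypotheses = ["The boy wants to go."]
references = [["The boy wants to go."]]  # one inner list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)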

Fine-tuned models on AMR Parsing

Setting                   Smatch   Checkpoint   Output
AMRBART-large (AMR2.0)    85.4     model        output
AMRBART-large (AMR3.0)    84.2     model        output
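
According to the Smatch discussion in the comments below, parsing scores were computed with the amrlib.evaluate.smatch_enhanced package; a hedged sketch, assuming amrlib's get_entries and compute_smatch helpers behave as documented, looks like this:

from amrlib.evaluate.smatch_enhanced import compute_smatch, get_entries

# Hypothetical file paths; each file holds AMR graphs separated by blank lines.
gold_entries = get_entries("test-gold.amr")
pred_entries = get_entries("test-pred.amr")

precision, recall, f_score = compute_smatch(pred_entries, gold_entries)
print("Smatch: P=%.3f R=%.3f F=%.3f" % (precision, recall, f_score))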

Todo

  • clean code

References

@inproceedings{bai-etal-2022-graph,
    title = "Graph Pre-training for {AMR} Parsing and Generation",
    author = "Bai, Xuefeng  and
      Chen, Yulong and
      Zhang, Yue",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "todo",
    doi = "todo",
    pages = "todo"
}

Comments
  • Question about fine-tuned models for AMR parsing

    Hi, first of all, thank you for your great work! I recently got interested in AMR and found your work among the accepted papers at ACL.

    I have a question about the fine-tuned models that you shared for AMR parsing. Can those models be used to generate AMR graphs from plain text? Also, do you provide code for generating AMR graphs from plain text, like the SPRING model?

    Thanks! :) Hope you have a good one!

    opened by cws7777 24
  • Tokenizer for AMRBART-large-finetuned-AMR3.0-AMRParsing

    I noticed that for the fine-tuned AMRBART models there are no tokenizers offered on the Hugging Face hub, whereas the v2 models have tokenizers with a different vocab size (v1 53844 vs. v2 53228). My questions are:

    1. Where can I get the tokenizers for those finetuned models?
    2. Is there documentation of the tokens used in the v2 models (I found that the newly added tokens in the v2 models differ from the tokens illustrated in the paper)?
    3. Is it OK for me to use BartTokenizer to load the pretrained AMR tokenizers?

    Thank you!

    opened by HenryCai11 5
  • Inference needs train, test, and val datasets to run

    To run inference (say, AMR-to-text), train, test, and validation sets are required. Please provide a way to run the model on a preprocessed text file without needing all this data.

    opened by SreehariSankar 5
  • 'PENMANBartTokenizer' object has no attribute 'amr_bos_token_id'

    Hello, when using the script inference_amr.sh I receive the following error:

    Please answer yes or no.
    Global seed set to 42
    Tokenizer: 53587 PreTrainedTokenizer(name_or_path='facebook/bart-large', vocab_size=53587, model_max_len=1024, is_fast=False, padding_side='right', special_tokens={'bos_token': 'Ġ<s>', 'eos_token': 'Ġ</s>', 'unk_token': 'Ġ<unk>', 'sep_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': 'Ġ<pad>', 'cls_token': AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'mask_token': AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=True)})
    Traceback (most recent call last):
      File "/home/students/meier/MA/AMRBART/fine-tune/inference_amr.py", line 105, in <module>
        main(args)
      File "/home/students/meier/MA/AMRBART/fine-tune/inference_amr.py", line 65, in main
        data_module = AMRParsingDataModule(amr_tokenizer, **vars(args))
      File "/home/students/meier/MA/AMRBART/fine-tune/data_interface/dataset_pl.py", line 228, in __init__
        decoder_start_token_id=self.tokenizer.amr_bos_token_id,
    AttributeError: 'PENMANBartTokenizer' object has no attribute 'amr_bos_token_id'
    

    The facebook/bart-large tokenizer is used. This error is new; when I used the scripts six to eight weeks ago, everything worked fine.

    A similar error can be seen when using inference_text.sh:

    Please answer yes or no.
    Global seed set to 42
    Tokenizer: 53587 PreTrainedTokenizer(name_or_path='facebook/bart-large', vocab_size=53587, model_max_len=1024, is_fast=False, padding_side='right', special_tokens={'bos_token': 'Ġ<s>', 'eos_token': 'Ġ</s>', 'unk_token': 'Ġ<unk>', 'sep_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': 'Ġ<pad>', 'cls_token': AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'mask_token': AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=True)})
    Dataset cache dir: /home/students/meier/MA/AMRBART/fine-tune/../examples/.cache/
    Using custom data configuration default-288dad464b8291c3
    Downloading and preparing dataset amr_data/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/students/meier/MA/AMRBART/fine-tune/../examples/.cache/amr_data/default-288dad464b8291c3/1.0.0/f0dfbe4d826478b18bc1ef4db7270a419c69c4ea4c94fbf73515b13180f43059...
    Dataset amr_data downloaded and prepared to /home/students/meier/MA/AMRBART/fine-tune/../examples/.cache/amr_data/default-288dad464b8291c3/1.0.0/f0dfbe4d826478b18bc1ef4db7270a419c69c4ea4c94fbf73515b13180f43059. Subsequent calls will reuse this data.
    datasets: DatasetDict({
        train: Dataset({
            features: ['src', 'tgt'],
            num_rows: 10
        })
        validation: Dataset({
            features: ['src', 'tgt'],
            num_rows: 10
        })
        test: Dataset({
            features: ['src', 'tgt'],
            num_rows: 10
        })
    })
    colums: ['src', 'tgt']
    Setting TOKENIZERS_PARALLELISM=false for forked processes.
    Parameter 'function'=<function AMR2TextDataModule.setup.<locals>.tokenize_function at 0x154ba6915280> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
     #0:   0%|          | 0/1 [00:00<?, ?ba/s]
    multiprocess.pool.RemoteTraceback:
    """
    Traceback (most recent call last):
      File "/home/students/meier/amrbart_venv_new/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker
        result = (True, func(*args, **kwds))
      File "/home/students/meier/amrbart_venv_new/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
        out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
      File "/home/students/meier/amrbart_venv_new/lib/python3.8/site-packages/datasets/fingerprint.py", line 397, in wrapper
        out = func(self, *args, **kwargs)
      File "/home/students/meier/amrbart_venv_new/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2016, in _map_single
        batch = apply_function_on_filtered_inputs(
      File "/home/students/meier/amrbart_venv_new/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1906, in apply_function_on_filtered_inputs
        function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
      File "/home/students/meier/MA/AMRBART/fine-tune/data_interface/dataset_pl.py", line 72, in tokenize_function
        amr_tokens = [
      File "/home/students/meier/MA/AMRBART/fine-tune/data_interface/dataset_pl.py", line 74, in <listcomp>
        + [self.tokenizer.amr_bos_token]
    AttributeError: 'PENMANBartTokenizer' object has no attribute 'amr_bos_token'
    """
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "/home/students/meier/MA/AMRBART/fine-tune/run_amr2text.py", line 154, in <module>
        main(args)
      File "/home/students/meier/MA/AMRBART/fine-tune/run_amr2text.py", line 91, in main
        data_module.setup()
      File "/home/students/meier/amrbart_venv_new/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py", line 474, in wrapped_fn
        fn(*args, **kwargs)
      File "/home/students/meier/MA/AMRBART/fine-tune/data_interface/dataset_pl.py", line 117, in setup
        self.train_dataset = datasets["train"].map(
      File "/home/students/meier/amrbart_venv_new/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1744, in map
        transformed_shards = [r.get() for r in results]
      File "/home/students/meier/amrbart_venv_new/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1744, in <listcomp>
        transformed_shards = [r.get() for r in results]
      File "/home/students/meier/amrbart_venv_new/lib/python3.8/site-packages/multiprocess/pool.py", line 771, in get
        raise self._value
    AttributeError: 'PENMANBartTokenizer' object has no attribute 'amr_bos_token'
    
    opened by PhMeier 3
  • AMR-to-text inference on own data

    Hi, I'm trying to use the inference script on my own AMR graphs, but it keeps producing the same predictions as for the data in the example folder. After deleting the cache file inside the example folder, it seems to work... Could you check whether the cache file affects inference? Also, I'm quite confused about why the inference script needs train and val datasets; could you explain more about this? Thank you for your kind help!

    opened by LIU-FAYANG 2
  • 0.3 lower Smatch score using amr-evaluation-enhanced

    Dear authors,

    Thank you for sharing your work, it's amazing. I just want to share a finding regarding the parsing evaluation. As far as I know, many existing works (like Cai & Lam, ACL 2020) use amr-evaluation-enhanced to compute the Smatch score. Running this script on your parsing output returned 84.0, which is slightly lower than the 84.3 you reported. I ran it multiple times and the result remained the same:

    $ bash evaluation.sh data/model/amr3/bartamr/AMR3.0-test-pred-wiki.amr data/amr/amr_3.0/test.txt
    
    Smatch -> P: 0.844, R: 0.836, F: 0.840
    Unlabeled -> P: 0.867, R: 0.858, F: 0.862
    No WSD -> P: 0.849, R: 0.841, F: 0.845
    Non_sense_frames -> P: 0.918, R: 0.916, F: 0.917
    Wikification -> P: 0.836, R: 0.817, F: 0.826
    Named Ent. -> P: 0.893, R: 0.874, F: 0.884
    Negations -> P: 0.716, R: 0.722, F: 0.719
    IgnoreVars -> P: 0.746, R: 0.742, F: 0.744
    Concepts -> P: 0.907, R: 0.900, F: 0.903
    Frames -> P: 0.888, R: 0.885, F: 0.887
    Reentrancies -> P: 0.721, R: 0.729, F: 0.725
    SRL -> P: 0.801, R: 0.807, F: 0.804
    

    I understand that Smatch uses a stochastic matching algorithm and that 0.3 is not significant at all; I just want to share this little finding with the community. Maybe we should migrate to the amrlib.evaluate.smatch_enhanced package you used for better comparison.

    opened by hankcs 2
  • AMRBART models are not available on the Hugging Face hub

    Thank you for sharing this excellent work. I would like to ask if you could re-upload the AMRBART-large and base models, because I keep getting this error:

    OSError: Can't load config for 'xfbai/AMRBART-base'. Make sure that:

    • 'xfbai/AMRBART-base' is a correct model identifier listed on 'https://huggingface.co/models'

    • or 'xfbai/AMRBART-base' is the correct path to a directory containing a config.json file

    Thank you for your help!

    opened by elifssamplespace 2
  • PenmanBART Tokenizer

    Thank you for sharing your great work. I would like to ask if you could upload the weights of your trained PENMANBartTokenizer. Thank you for your help!

    opened by tdt98 2
  • KeyError 'source' when finetuning

    Hello, while testing fine-tuning on the example data in a conda environment, I encountered the following exception:

    Traceback (most recent call last):
      File "/home/students/meier/AMRBART/fine-tune/run_amrparsing.py", line 154, in <module>
        main(args)
      File "/home/students/meier/AMRBART/fine-tune/run_amrparsing.py", line 129, in main
        trainer.fit(model, datamodule=data_module)
      File "/home/students/meier/anaconda3/envs/my_AMRBART_env/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 735, in fit
        self._call_and_handle_interrupt(
      File "/home/students/meier/anaconda3/envs/my_AMRBART_env/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 682, in _call_and_handle_interrupt
        return trainer_fn(*args, **kwargs)
      File "/home/students/meier/anaconda3/envs/my_AMRBART_env/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in _fit_impl
        self._run(model, ckpt_path=ckpt_path)
      File "/home/students/meier/anaconda3/envs/my_AMRBART_env/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1193, in _run
        self._dispatch()
      File "/home/students/meier/anaconda3/envs/my_AMRBART_env/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1272, in _dispatch
        self.training_type_plugin.start_training(self)
      File "/home/students/meier/anaconda3/envs/my_AMRBART_env/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
        self._results = trainer.run_stage()
      File "/home/students/meier/anaconda3/envs/my_AMRBART_env/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1282, in run_stage
        return self._run_train()
      File "/home/students/meier/anaconda3/envs/my_AMRBART_env/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1304, in _run_train
        self._run_sanity_check(self.lightning_module)
      File "/home/students/meier/anaconda3/envs/my_AMRBART_env/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1368, in _run_sanity_check
        self._evaluation_loop.run()
      File "/home/students/meier/anaconda3/envs/my_AMRBART_env/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 151, in run
        output = self.on_run_end()
      File "/home/students/meier/anaconda3/envs/my_AMRBART_env/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 130, in on_run_end
        self._evaluation_epoch_end(outputs)
      File "/home/students/meier/anaconda3/envs/my_AMRBART_env/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 235, in _evaluation_epoch_end
        model.validation_epoch_end(outputs)
      File "/home/students/meier/AMRBART/fine-tune/model_interface/model_amrparsing.py", line 320, in validation_epoch_end
        source = flatten_list(x["source"] for x in ori_outputs)
      File "/home/students/meier/AMRBART/fine-tune/common/utils.py", line 109, in flatten_list
        return [x for x in itertools.chain.from_iterable(summary_ids)]
      File "/home/students/meier/AMRBART/fine-tune/common/utils.py", line 109, in <listcomp>
        return [x for x in itertools.chain.from_iterable(summary_ids)]
      File "/home/students/meier/AMRBART/fine-tune/model_interface/model_amrparsing.py", line 320, in <genexpr>
        source = flatten_list(x["source"] for x in ori_outputs)
    KeyError: 'source'
    

    Printing out "ori_outputs" shows this: ori outputs [{'loss': tensor(0.8626, device='cuda:0'), 'gen_time': 8.689491331577301, 'gen_len': 1024.0, 'preds': [[53842, 36, 53069, 51012, 52944, 36, 53070, 171, 4839, 52945, 36, 53071, 14195, 4839, 4839, 53843, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, The key 'source' is missing.

    My sh script looks like this:

    #!/bin/bash
    
    ROOT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
    
    GPUID=$2
    MODEL=$1
    eval_beam=5
    modelcate=base
    modelcate=large
    
    
    lr=8e-6
    
    datacate=/home/students/meier/AMRBART/examples/ #/home/students/meier/MA/data/ #AMR2.0
    # datacate=AMR3.0
    
    
    Tokenizer=facebook/bart-$modelcate  #../../../data/pretrained-model/bart-$modelcate
    export OUTPUT_DIR_NAME=outputs/fine_tune_amrparse #${datacate}-AMRBart-${modelcate}-amrparsing-6taskPLM-5e-5-finetune-lr${lr}
    
    export CURRENT_DIR=${ROOT_DIR}
    export OUTPUT_DIR=${CURRENT_DIR}/${OUTPUT_DIR_NAME}
    cache=~/.cache  #../../../data/.cache/
    
    if [ ! -d $OUTPUT_DIR ];then
      mkdir -p $OUTPUT_DIR
    else
      echo "${OUTPUT_DIR} already exists, change a new one or delete origin one"
      exit 0
    fi
    
    export OMP_NUM_THREADS=10
    export CUDA_VISIBLE_DEVICES=${GPUID}
    python -u ${ROOT_DIR}/run_amrparsing.py \
        --data_dir=$datacate \
        --train_data_file=$datacate/train.jsonl \
        --eval_data_file=$datacate/val.jsonl \
        --test_data_file=$datacate/test.jsonl \
        --model_type ${MODEL} \
        --model_name_or_path=${MODEL} \
        --tokenizer_name_or_path=${Tokenizer} \
        --val_metric "smatch" \
        --learning_rate=${lr} \
        --max_epochs 20 \
        --max_steps -1 \
        --per_gpu_train_batch_size=4 \
        --per_gpu_eval_batch_size=4 \
        --unified_input \
        --accumulate_grad_batches 2 \
        --early_stopping_patience 10 \
        --gpus 1 \
        --output_dir=${OUTPUT_DIR} \
        --cache_dir ${cache} \
        --num_sanity_val_steps 4 \
        --src_block_size=512 \
        --tgt_block_size=1024 \
        --eval_max_length=1024 \
        --train_num_workers 8 \
        --eval_num_workers 4 \
        --process_num_workers 8 \
        --do_train --do_predict \
        --seed 42 \
        --fp16 \
        --eval_beam ${eval_beam} 2>&1 | tee $OUTPUT_DIR/run.log
    

    I call the script in the following way: srun ~/AMRBART/fine-tune/finetune_AMRbart_amrparsing_large.sh /workspace/students/meier/AMR_Bart_models/AMR-BART-LARGE 0

    What can I do to solve the problem? Thanks for reading!

    opened by PhMeier 2
  • Curious about AMR to text (node to text token alignment)

    Hi @muyeby @cylnlp, thanks for your patient answers previously. This is great work and I am new to this task. I am curious: is there any heuristic way to align your AMR graph nodes back to text tokens?

    opened by xu1998hz 2
  • Difference in hyper-parameters

    Hello,

    thank you very much for your work and for providing the code!

    While comparing the fine-tuning scripts to the hyper-parameters reported in your paper, I noticed some differences:

    • The sequence length for generation is 512 in the paper, but in both amr2text scripts the parameter "src_block_size" is 1024.
    • Early stopping is 5 in the paper, while in the scripts it ranges from 10 to 15.
    • The learning rate in finetune_AMRbart_amr2text.sh differs from the reported 1e-5 for the base model.

    I guess the parameters from the scripts are more recent?

    opened by PhMeier 2