Big Bird: Transformers for Longer Sequences

Overview

Not an official Google product.

What is BigBird?

BigBird is a sparse-attention-based transformer that extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes with a theoretical characterization of which capabilities of a full transformer the sparse model can retain.

As a consequence of its capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization.

More details and comparisons can be found in our presentation.

Citation

If you find this useful, please cite our NeurIPS 2020 paper:

@article{zaheer2020bigbird,
  title={Big bird: Transformers for longer sequences},
  author={Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon, Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

Code

The most important directory is core. There are three main files in core.

  • attention.py: Contains BigBird linear attention mechanism
  • encoder.py: Contains the main long sequence encoder stack
  • modeling.py: Contains packaged BERT and seq2seq transformer models with BigBird attention

Colab/IPython Notebook

A quick fine-tuning demonstration for text classification is provided in imdb.ipynb.

Create GCP Instance

Please create a GCP project first, then create an instance in a zone that has the required quota, as follows:

gcloud compute instances create \
  bigbird \
  --zone=europe-west4-a \
  --machine-type=n1-standard-16 \
  --boot-disk-size=50GB \
  --image-project=ml-images \
  --image-family=tf-2-3-1 \
  --maintenance-policy TERMINATE \
  --restart-on-failure \
  --scopes=cloud-platform

gcloud compute tpus create \
  bigbird \
  --zone=europe-west4-a \
  --accelerator-type=v3-32 \
  --version=2.3.1

gcloud compute ssh --zone "europe-west4-a" "bigbird"

For illustration we used the instance name bigbird and the zone europe-west4-a, but feel free to change them. More details about creating Google Cloud TPUs can be found in the online documentation.

Installation and checkpoints

git clone https://github.com/google-research/bigbird.git
cd bigbird
pip3 install -e .

You can find pretrained and fine-tuned checkpoints in our Google Cloud Storage Bucket.

Optionally, you can download them using gsutil as follows:

mkdir -p bigbird/ckpt
gsutil cp -r gs://bigbird-transformer/ bigbird/ckpt/

The storage bucket contains:

  • pretrained BERT models in base (bigbr_base) and large (bigbr_large) sizes. They correspond to BERT/RoBERTa-like encoder-only models. Following the original BERT and RoBERTa implementations, they are transformers with post-normalization, i.e. layer normalization happens after the attention layer. However, following Rothe et al., we can use them partially in an encoder-decoder fashion by coupling the encoder and decoder parameters, as illustrated in the bigbird/summarization/roberta_base.sh launch script.
  • a pretrained Pegasus encoder-decoder transformer in large size (bigbp_large). Again following the original Pegasus implementation, these are transformers with pre-normalization and have a full set of separate encoder and decoder weights. For the long document summarization datasets, we have also converted Pegasus checkpoints (model.ckpt-0) for each dataset and provided fine-tuned checkpoints (model.ckpt-300000) which work on longer documents.
  • fine-tuned tf.SavedModels for long document summarization, which can be used directly for prediction and evaluation as illustrated in the colab notebook and in the sketch below.
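
For reference, here is a minimal sketch of loading one of the fine-tuned summarization SavedModels for prediction with plain TensorFlow. The path below is a placeholder rather than the actual bucket layout, so point it at wherever the gsutil command above placed the files.

import tensorflow as tf

# Placeholder path: one of the summarization SavedModel directories copied
# into bigbird/ckpt/ by the gsutil command above.
saved_model_dir = "bigbird/ckpt/path-to-summarization-saved_model"

model = tf.saved_model.load(saved_model_dir)

# A SavedModel exposes its callable graphs through named signatures; inspect
# them to find the prediction entry point and the inputs it expects.
for name, fn in model.signatures.items():
  print(name, fn.structured_input_signature)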

Running Classification

To quickly get started with BigBird, one can begin by running the classification experiment code in the classifier directory. To run the code, simply execute:

export GCP_PROJECT_NAME=bigbird-project  # Replace by your project name
export GCP_EXP_BUCKET=gs://bigbird-transformer-training/  # Replace
sh -x bigbird/classifier/base_size.sh

Using the BigBird Encoder instead of BERT/RoBERTa

To directly use the BigBird encoder instead of, say, a BERT model, we can use the following code.

from bigbird.core import modeling

bigb_encoder = modeling.BertModel(...)

It can easily replace BERT's encoder.
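
For context, here is a hedged end-to-end sketch of the swap. The configuration keys and the call convention below are assumptions drawn from core/flags.py and imdb.ipynb rather than a verified API reference, so consult those files before relying on them.

import tensorflow as tf
from bigbird.core import modeling

# Assumption: BertModel is configured with a plain dict of the hyperparameters
# registered in core/flags.py (imdb.ipynb builds such a dict from the flags).
bert_config = {
    "attention_type": "block_sparse",  # BigBird sparse attention
    "block_size": 64,                  # example value
    "num_rand_blocks": 3,              # example value
    "max_encoder_length": 4096,
    # ... plus the remaining BERT hyperparameters (hidden_size, num_attention_heads, ...)
}

bigb_encoder = modeling.BertModel(bert_config)

# Assumption: the encoder is called on a [batch, seq_len] tensor of token ids
# and returns per-token embeddings plus a pooled representation, as in imdb.ipynb.
token_ids = tf.zeros([2, 4096], dtype=tf.int32)
sequence_output, pooled_output = bigb_encoder(token_ids, training=False)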

Alternatively, one can also try playing with the layers of the BigBird encoder:

from bigbird.core import encoder

only_layers = encoder.EncoderStack(...)

Understanding Flags & Config

All the flags and config are explained in core/flags.py. Here we explain some of the important config parameters.

attention_type is used to select the attention implementation. Setting it to block_sparse runs the BigBird attention module.

flags.DEFINE_enum(
    "attention_type", "block_sparse",
    ["original_full", "simulated_sparse", "block_sparse"],
    "Selecting attention implementation. "
    "'original_full': full attention from original bert. "
    "'simulated_sparse': simulated sparse attention. "
    "'block_sparse': blocked implementation of sparse attention.")

block_size is used to define the size of each block, whereas num_rand_blocks is used to set the number of random blocks. The code currently uses a window size of 3 blocks and 2 global blocks. The current code only supports statically shaped tensors.
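
To make the pattern concrete, the following toy sketch (an illustration only, not the repository's implementation) builds a block-level attention mask from the three components described above: a sliding window of 3 blocks, 2 global blocks, and num_rand_blocks random blocks per row.

import numpy as np

def block_sparse_mask(seq_length, block_size, num_rand_blocks=3, seed=0):
  """Toy block-level mask: mask[i, j] == 1 iff query block i attends to key block j."""
  num_blocks = seq_length // block_size
  rng = np.random.default_rng(seed)
  mask = np.zeros((num_blocks, num_blocks), dtype=np.int32)
  for i in range(num_blocks):
    # Sliding window of 3 blocks: each block attends to itself and its neighbours.
    mask[i, max(0, i - 1):min(num_blocks, i + 2)] = 1
    # num_rand_blocks randomly chosen key blocks per query block.
    mask[i, rng.choice(num_blocks, size=num_rand_blocks, replace=False)] = 1
  # 2 global blocks: they attend everywhere and are attended to by every block.
  mask[:2, :] = 1
  mask[:, :2] = 1
  return mask

print(block_sparse_mask(seq_length=4096, block_size=64))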

Important points to note:

  • The hidden dimension should be divisible by the number of heads.
  • Currently the code only handles tensors of static shape, as it is primarily designed for TPUs, which only work with statically shaped tensors.
  • For sequence lengths of less than 1024, using original_full is advised, as there is no benefit in using sparse BigBird attention (see the back-of-the-envelope comparison below).
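
As a rough, back-of-the-envelope illustration of that last point (the block size of 64 and the 3 random blocks are example values, not defaults), compare how many key blocks each query block touches under full versus block-sparse attention:

def attended_key_blocks(seq_length, block_size=64, num_rand_blocks=3, sparse=True):
  """Approximate number of key blocks each query block attends to."""
  num_blocks = seq_length // block_size
  if not sparse:
    return num_blocks  # full attention: every block attends to every block
  return min(num_blocks, 3 + 2 + num_rand_blocks)  # window + global + random

for n in (512, 1024, 4096):
  print(n, attended_key_blocks(n, sparse=False), attended_key_blocks(n))
# With these numbers, a 512-token input touches the same 8 blocks either way,
# while at 4096 tokens block-sparse attention touches 8 blocks instead of 64.
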
Comments
  • Question about pre-trained weights

    Thanks so much for releasing BigBird!

    Quick question about the pre-trained weights. Do the bigbr_large and bigbr_base correspond to BERT-like encoder-only checkpoints and bigbp_large to the encoder-decoder version?

    opened by patrickvonplaten 3
  • TFDS Custom Dataset Issue - normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.

    I am using BigBird with a custom dataset (essay, label) for classification. I successfully imported the dataset as a custom tfds dataset and the BigBird classifier runs but does not return any results as shown in the log below. In my_datset.py configuration file for tfds, I am using this code to define the text feature - 'text': tfds.features.Text(). However, I believe that I need to add an encoder but TensorFlow has deprecated this in tfds.features.Text and recommends using the new tensorflow_text but doesn't explain how to do this in tfds.features.Text. Can anyone provide a recommendation for how to encode the text so BigBird can perform the classification?

    My GPUS are 0
    normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.
    {'label': <tf.Tensor 'ParseSingleExample/ParseExample/ParseExampleV2:0' shape=() dtype=int64>, 'text': <tf.Tensor 'ParseSingleExample/ParseExample/ParseExampleV2:1' shape=() dtype=string>}
    Tensor("args_1:0", shape=(), dtype=string) Tensor("args_0:0", shape=(), dtype=int64)

    0%| | 0/199 [00:00<?, ?it/s]
    42%|████▏ | 84/199 [00:00<00:00, 838.07it/s]
    100%|██████████| 199/199 [00:00<00:00, 1124.10it/s]

    0%| | 0/2000 [00:00<?, ?it/s]
    0%| | 0/2000 [00:00<?, ?it/s]
    {'label': <tf.Tensor 'ParseSingleExample/ParseExample/ParseExampleV2:0' shape=() dtype=int64>, 'text': <tf.Tensor 'ParseSingleExample/ParseExample/ParseExampleV2:1' shape=() dtype=string>}
    Tensor("args_1:0", shape=(), dtype=string) Tensor("args_0:0", shape=(), dtype=int64)

    0it [00:00, ?it/s]
    0it [00:00, ?it/s]
    Loss = 0.0 Accuracy = 0.0

    opened by jtfields 1
  • How can we finetune the pretrained model using tfrecord files?

    I've tried to finetune the model on my own text summarization dataset. Before doing that, I tested using tfrecord as the input file. So I put /tmp/bigb/tfds/aeslc/1.0.0 as data_dir:

    flags.DEFINE_string(
        "data_dir", "/tmp/bigb/tfds/aeslc/1.0.0",
        "The input data dir. Should contain the TFRecord files. "
        "Can be TF Dataset with prefix tfds://")
    

    Then I ran run_summarization.py, but I got the following error:

    tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
      (0) Invalid argument: Feature: document (data type: string) is required but could not be found.
             [[{{node ParseSingleExample/ParseExample/ParseExampleV2}}]]
             [[MultiDeviceIteratorGetNextFromShard]]
             [[RemoteCall]]
             [[IteratorGetNext]]
             [[Mean/_19475]]
      (1) Invalid argument: Feature: document (data type: string) is required but could not be found.
             [[{{node ParseSingleExample/ParseExample/ParseExampleV2}}]]
             [[MultiDeviceIteratorGetNextFromShard]]
             [[RemoteCall]]
             [[IteratorGetNext]]
    

    Could anyone advise me how to finetune the model using tfrecord as the input file?

    opened by gymbeijing 1
  • Preprocessing code for TriviaQA dataset

    Dear authors,

    Do you use the same preprocessing code as Longformer for the TriviaQA dataset, such as truncating each document to less than 4096 tokens, the answer string matching algorithm, and normalized aliases as training labels?

    opened by sjy1203 1
  • I've added bigbird's attention to my model, but not seeing a decrease in memory

    I've replaced the attention layers in Enformer with those in bigbird, but the memory usage calculated by tf.get_memory_info shows the usage is still basically the same (within 1%). I'm wondering if I need to include code from the encoder or decoder to see a decrease in memory usage?

    Thanks!

    opened by Currie32 5
  • Why is BigBird Pegasus/Pegasus Repeating the Same Sentence for Summarization?

    Hello,

    BigBird Pegasus, when creating summaries of text, is repeating the same sentence over and over. I have tried using text on the Hugging Face model hub and there is an issue posted on Stack Overflow (https://stackoverflow.com/questions/68911203/big-bird-pegasus-summarization-output-is-repeating-itself). Additionally, below are some images from the Hugging Face hub.

    [screenshots from the Hugging Face model hub]

    I am doing text summarization for my thesis and I am not sure why this is happening, but apparently it has been an issue for 6 months. Is there a way to prevent this from happening?

    Thank you.

    opened by Kevin-Patyk 1
  • Export predictions for each example

    I have successfully run Google's BigBird NLP on the IMDB dataset and also a custom dataset imported using tfds. BigBird's imdb.ipynb only prints the overall accuracy and loss. I'm trying to export the predictions for each record in the dataset and have been unable to find any information on how to do this. Any help is appreciated!

    Here is the current code that I used for the summary metrics:

    eval_loss = tf.keras.metrics.Mean(name='eval_loss')
    eval_accuracy = tf.keras.metrics.CategoricalAccuracy(name='eval_accuracy')

    opt = tf.keras.optimizers.Adam(FLAGS.learning_rate)
    train_loss = tf.keras.metrics.Mean(name='train_loss')
    train_accuracy = tf.keras.metrics.CategoricalAccuracy(name='train_accuracy')

    for i, ex in enumerate(tqdm(dataset.take(FLAGS.num_train_steps), position=0)):
      loss, log_probs, grads = fwd_bwd(ex[0], ex[1])
      opt.apply_gradients(zip(grads, model.trainable_weights + headl.trainable_weights))
      train_loss(loss)
      train_accuracy(tf.one_hot(ex[1], 2), log_probs)
      if i % 200 == 0:
        print('Loss = {} Accuracy = {}'.format(train_loss.result().numpy(), train_accuracy.result().numpy()))

    opened by jtfields 3
  • Differences between ETC and BigBird-ETC version

    @manzilz Thank you for sharing the excellent research. :)

    I have two quick questions. If I missed some info in your paper, could you please let me know what I missed?

    Q1. Is the global-local attention method used in the BigBird-ETC version exactly the same as in the ETC paper, or is it closer to Longformer?
    As I understand the ETC paper, some special tokens (global tokens) attend fully only to restricted spans. For example, in the HotpotQA task, a paragraph token attends to all tokens within its paragraph, and a sentence token attends to all tokens within its sentence. (I couldn't find how the [CLS] and question tokens attend.)

    In Longformer, the special tokens between sentences attend fully to the whole context.

    In the BigBird paper (just above Section 3), the authors say:

    "we add g global tokens that attend to all existing tokens."

    This suggests that the BigBird-ETC version is similar to Longformer. However, when the authors mention the differences between Longformer and BigBird-ETC, they cite ETC as the reference (in Appendix E.3), which confuses me.

    Q2. Is there source code or a pre-trained model for the BigBird-ETC version? If you could share what was used in the paper, I would really appreciate it!

    I look forward to your response.

    opened by lhl2017 0