🛸 Use pretrained transformers like BERT, XLNet and GPT-2 in spaCy

Overview


This package provides spaCy components and architectures to use transformer models via Hugging Face's transformers in spaCy. The result is convenient access to state-of-the-art transformer architectures, such as BERT, GPT-2, XLNet, etc.

This release requires spaCy v3. For the previous version of this library, see the v0.6.x branch.


Features

  • Use pretrained transformer models like BERT, RoBERTa and XLNet to power your spaCy pipeline.
  • Easy multi-task learning: backprop to one transformer model from several pipeline components.
  • Train using spaCy v3's powerful and extensible config system.
  • Automatic alignment of transformer output to spaCy's tokenization.
  • Easily customize what transformer data is saved in the Doc object.
  • Easily customize how long documents are processed.
  • Out-of-the-box serialization and model packaging.

🚀 Installation

Installing the package from pip will automatically install all dependencies, including PyTorch and spaCy. Make sure you install this package before you install the models. Also note that this package requires Python 3.6+, PyTorch v1.5+ and spaCy v3.0+.

pip install spacy[transformers]

For GPU installation, find your CUDA version using nvcc --version and add the version in brackets, e.g. spacy[transformers,cuda92] for CUDA9.2 or spacy[transformers,cuda100] for CUDA10.0.
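For example, for CUDA 10.0 the full install command would be:

pip install spacy[transformers,cuda100]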

If you are having trouble installing PyTorch, follow the instructions on the official website for your specific operating system and requirements, or try the following:

pip install spacy-transformers -f https://download.pytorch.org/whl/torch_stable.html
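After installation, a quick sanity check is to download one of spaCy's transformer-based pipelines and run it (a minimal sketch; en_core_web_trf is the English transformer pipeline and is downloaded separately):

python -m spacy download en_core_web_trf

And then, in Python:

import spacy

nlp = spacy.load("en_core_web_trf")
doc = nlp("Apple is looking at buying U.K. startup for $1 billion.")
print([(ent.text, ent.label_) for ent in doc.ents])
# The transformer output is also stored on the Doc (see Features above);
# the exact shape depends on the span getter.
print(doc._.trf_data.tensors[0].shape)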

📖 Documentation

⚠️ Important note: This package has been extensively refactored to take advantage of spaCy v3.0. Previous versions that were built for spaCy v2.x worked considerably differently. Please see previous tagged versions of this README for documentation on prior versions.

Comments
  • Use ModelOutput instead of tuples

    Use ModelOutput instead of tuples

    • Save model output as ModelOutput instead of a list of tensors in TransformerData.model_output and FullTransformerBatch.model_output (see the usage sketch after this list).

      • For backwards compatibility with transformers v3 set return_dict = True in the transformer config.
    • TransformerData.tensors and FullTransformerBatch.tensors return ModelOutput.to_tuple().

    • Store any additional model output as ModelOutput in TransformerData.model_output.

      • Save all torch.Tensor and tuple(torch.Tensor) values in TransformerData.model_output for cases where tensor.shape[0] is the batch size so that it's possible to slice the output for individual docs.
        • Includes: pooler_output, hidden_states, attentions, and cross_attentions
    • Re-enable tests for gpt2 and xlnet in the CI.

    • Following #285, include some minor modifications and bug fixes for HFShim and HFObjects

      • Rename the temporary init-only configs in HFObjects and don't serialize them in HFShim once the model is initialized
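    A rough usage sketch of the ModelOutput / tensors split described above (attribute names depend on the underlying Hugging Face model):

    # After running a pipeline with a transformer component:
    doc = nlp("This is a sentence.")
    trf_data = doc._.trf_data
    print(type(trf_data.model_output))                    # a transformers ModelOutput
    print(trf_data.model_output.last_hidden_state.shape)  # per-doc hidden states
    print(len(trf_data.tensors))                          # tuple view, i.e. ModelOutput.to_tuple()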
    enhancement v1.1 
    opened by adrianeboyd 14
  • Add support for mixed-precision training

    Add support for mixed-precision training

    This change makes it possible to use and configure the support for mixed-precision training that was added to thinc.

    Example configuration:

    [components.transformer.model]
    @architectures = "spacy-transformers.TransformerModel.v3"
    name = "roberta-base"
    mixed_precision = true
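    If you prefer not to edit the config file, the same setting should also be reachable as a command-line override (assuming spaCy's standard dot-notation overrides):

    python -m spacy train config.cfg --components.transformer.model.mixed_precision true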
    
    enhancement perf / memory perf / speed 
    opened by danieldk 7
  • HFShim: support MPS device

    HFShim: support MPS device

    Before this change, two devices and map locations were supported:

    • CUDA: cuda:N
    • CPU: cpu

    This change adds support for other devices like Metal Performance Shader (MPS) devices by mapping to the active Torch device.
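    Conceptually, the mapping is similar to the following standalone Torch sketch (not the actual HFShim code; the file name is just an example and the MPS check requires a recent PyTorch):

    import torch

    # Pick the active device rather than hard-coding cpu or cuda:N.
    if torch.backends.mps.is_available():
        device = torch.device("mps")
    elif torch.cuda.is_available():
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")

    state_dict = torch.load("pytorch_model.bin", map_location=device)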

    opened by danieldk 6
  • added transformers_config for passing arguments to the transformer

    added transformers_config for passing arguments to the transformer

    Added transformers_config to allow the user to pass arguments to the transformers forward pass, most notably output_attentions.

    For convenience, I used this example to test the code:

    import spacy
    
    nlp = spacy.blank("en")
    
    # Construction via add_pipe with custom config
    config = {
        "model": {
            "@architectures": "spacy-transformers.TransformerModel.v1",
            "name": "bert-base-uncased",
            "transformers_config": {"output_attentions": True},
        }
    }
    transformer = nlp.add_pipe("transformer", config=config)
    transformer.model.initialize()

    doc = nlp("This is a sentence.")

    which gives you:

    len(doc._.trf_data.attention) # 12
    doc._.trf_data.attention[-1].shape  # (1, 12, 7, 7) last layer of attention 
    len(doc._.trf_data.tensors) # 2 
    doc._.trf_data.tensors[0].shape # (1, 7, 768) <-- wordpiece embedding
    doc._.trf_data.tensors[1].shape  # (1, 768) <-- assuming this is the pooled embedding?
    

    Sidenote: it took me quite a while to find the default config str. It might be ideal to make this into a standalone file and load it in?

    enhancement 
    opened by KennethEnevoldsen 6
  • Add kwargs features of `from_pretrained()` in HFShim.from_bytes()

    Add kwargs features of `from_pretrained()` in HFShim.from_bytes()

    We're trying to give some parameters, such as use_fast and trust_remote_code, to AutoTokenizer.from_pretrained() via kwargs in order to utilize custom tokenizers in spacy-transformers. In the current implementation of spacy-transformers, HFShim.from_bytes() does not apply kwargs to either AutoConfig.from_pretrained() or AutoTokenizer.from_pretrained(), while huggingface_from_pretrained() applies kwargs to both of them to create a transformer model.

    • https://github.com/huggingface/transformers/blob/dcb08b99f44919425f8ba9be9ddcc041af8ec25e/src/transformers/models/auto/tokenization_auto.py
    • https://github.com/huggingface/transformers/blob/dcb08b99f44919425f8ba9be9ddcc041af8ec25e/src/transformers/models/auto/configuration_auto.py#L679
    • https://github.com/explosion/spacy-transformers/blob/cab060701fe2bad0c9ae23a822249c9bebb56da7/spacy_transformers/layers/hf_shim.py#L98
    • https://github.com/explosion/spacy-transformers/blob/cab060701fe2bad0c9ae23a822249c9bebb56da7/spacy_transformers/layers/transformer_model.py#L256

    In this pull request, we used _init_tokenizer_config and _init_transformer_config in msg passed from the deserializer as kwargs parameter of from_pretrained().

    Another possible approach is to write these kwargs in the transformer/cfg.
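    For context, on the config side these settings would typically live under the tokenizer config block, roughly like this (a sketch; whether they are honored again on deserialization is exactly what this PR addresses):

    [components.transformer.model.tokenizer_config]
    use_fast = true
    trust_remote_code = true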

    opened by hiroshi-matsuda-rit 4
  • Convert token identifiers to Long for torch < 1.8.0

    Convert token identifiers to Long for torch < 1.8.0

    Since PyTorch 1.8.0, token identifiers can be either Int or Long for embedding lookups. Prior versions only support Long. Since we still support older versions, convert token identifiers to Long for compatibility.

    This fixes the incompatibility with older versions introduced in #289.
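    The compatibility cast itself is essentially a one-liner along these lines (an illustrative sketch, not the exact patch):

    import torch

    token_ids = torch.as_tensor([[101, 2023, 102]], dtype=torch.int32)
    # torch < 1.8.0 only accepts int64 (Long) indices for embedding lookups,
    # so cast before the lookup:
    token_ids = token_ids.to(torch.long)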

    opened by danieldk 4
  • Fixing Transformer IO

    Fixing Transformer IO

    Note: some tests are adjusted in this PR. This was done first, before implementing the code changes, as we were aware that the initialize statements shouldn't be there, cf https://github.com/explosion/spaCy/issues/8319 and https://github.com/explosion/spaCy/issues/8566.

    Description

    Before this PR, the HF Transformer model was loaded through set_pytorch_transformer (stored in model.attrs["set_transformer"]), but this happened in the initialize call of TransformerModel. Unfortunately, this meant that saving/loading a transformer-based pipeline was kind of broken, as you needed to call initialize on a previously trained pipeline, which isn't the normal spaCy API. This also broke the typical from_bytes / to_bytes API.

    Furthermore, the from_disk / to_disk functionality worked with a "listener" transformer, because the transformer pipeline component saved out the PyTorch files directly. However, this solution did not work for the "inline" Transformer, because the TransformerModel would be used directly and not via the pipeline component.

    I've been looking at various different solutions, but the proposal in this PR is the only one that I got working for all use-cases: basically we need to load/define the transformer model in the constructor as we do for any other spaCy component.
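    For reference, the kind of round trip this PR is meant to make work out of the box looks roughly like this (a sketch; the path is a placeholder and the default transformer model is downloaded on first use):

    import spacy

    # Build and save a pipeline that contains a transformer ...
    nlp = spacy.blank("en")
    nlp.add_pipe("transformer")
    nlp.initialize()
    nlp.to_disk("/tmp/trf_pipeline")

    # ... and load it back without any extra initialize() call.
    nlp2 = spacy.load("/tmp/trf_pipeline")
    doc = nlp2("Serialization round trip.")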

    Unfortunately, with this proposal, if your old code still contains the initialize calls as before, it will crash, complaining that the transformer model can't be set twice. I think this is actually the correct behaviour in the end, but it might break some people's code. The fix is obvious/easy though.

    But we'll have to discuss which version bump we want to do when releasing this.

    Fixes https://github.com/explosion/spaCy/issues/8319 Fixes https://github.com/explosion/spaCy/issues/8566

    bug feat / serialize 
    opened by svlandeg 4
  • IO for transformer component

    IO for transformer component

    IO

    Been going in circles a bit with this, trying to puzzle it into the IO mechanisms we decided on for the config refactor for spaCy 3 ...

    This PR: Transformer(Pipe) knows how to do to_disk and from_disk and stores the internal tokenizer & transformer objects using the standard Hugging Face transformers IO mechanisms. In the nlp/transformer output directory, this results in a folder model with the files:

    • config.json
    • pytorch_model.bin
    • special_tokens_map.json
    • tokenizer_config.json
    • vocab.txt

    This folder can be read using the spacy.TransformerFromFile.v1 architecture for the model and then calling from_disk on the pipeline component (which happens automatically when reading the nlp object from a config).

    If users want to download a model by using architecture spacy.TransformerByName.v2, then when calling nlp.to_disk, we need to do a little hack rewriting that architecture to the one from file. This is done by directly modifying nlp.config when the component is created with from_nlp. This feels hacky, but not sure how else to prevent multiple downloads.

    Other fixes

    • fixed the config files to be up-to-date with the latest version of the v3 branch
    • moved install_extensions to the init of the transformer pipe, where I think it makes more sense. Added force=True to prevent warnings/errors when calling it multiple times (I don't think that matters?)
    enhancement feat / serialize 
    opened by svlandeg 4
  • WIP: Fix IO after init_model

    WIP: Fix IO after init_model

    To create a distilbert-base-german-cased model with the init_model.py script, I had to add two additional fields to the serialization code.

    Fixes #117

    bug 
    opened by svlandeg 4
  •  Transformer: add update_listeners_in_predict option

    Transformer: add update_listeners_in_predict option

    Draft: still needs docs, but I first wanted to discuss this proposal.

    The output of a transformer is passed through in two different ways:

    • Prediction: the data is passed through the Doc._.trf_data attribute.
    • Training: the data is broadcast directly to the transformer's listeners.

    However, the Transformer.predict method breaks the strict separation between training and prediction by also broadcasting transformer outputs to its listeners. This was added (I think) to support training with a frozen transformer.

    However, this breaks down when we are training a model with an unfrozen transformer that is also in annotating_components. The transformer will first (as part of its update step) broadcast the tensors and backprop function to its listeners. However, when it then acts as an annotating component, it immediately overrides its own output and clears the backprop function. As a result, gradients will not flow into the transformer.

    This change fixes this issue by adding the update_listeners_in_predict option, which is enabled by default. When this option is disabled, the tensors will not be broadcast to listeners in predict.
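    A rough config sketch of the scenario this targets (the exact placement of the new setting on the component factory is an assumption here):

    [components.transformer]
    factory = "transformer"
    update_listeners_in_predict = false

    [training]
    annotating_components = ["transformer"]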


    Alternatives considered:

    • Yanking the listener code from predict: breaks our current semantics, would make it harder to train with a frozen transformer.
    • Checking in the listener whether the tensors we receive have the same batch ID as the ones we already have, and not updating if we already have the same batch with a backprop function. I thought this is a bit fragile, because it breaks when batching differs between training and prediction (?).
    bug feat / pipeline 
    opened by danieldk 3
  • Support offset mapping alignment for fast tokenizers

    Support offset mapping alignment for fast tokenizers

    Switch to offset mapping-based alignment for fast tokenizers. With this change, slow vs. fast tokenizers will not give identical results with spacy-transformers.
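    For background, fast tokenizers expose per-wordpiece character offsets directly, which is what the new alignment relies on (a standalone transformers sketch, not spacy-transformers internals):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
    enc = tokenizer("Offset mapping example.", return_offsets_mapping=True)
    # Each wordpiece comes with a (start, end) character span that can be
    # aligned against spaCy's token character offsets.
    print(list(zip(enc.tokens(), enc["offset_mapping"])))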

    Additional modifications:

    • Update package setup for cython
    • Update CI for compiled package
    feat / alignment 
    opened by adrianeboyd 3
  • Add test for textcat CNN issue

    Add test for textcat CNN issue

    This is a test demonstrating the issue in https://github.com/explosion/spaCy/issues/11968. A potential fix is being worked on in https://github.com/explosion/thinc/pull/820.

    In its current condition, the test just creates a pipeline with textcat and transformer, creates a minimal doc and calls nlp.initialize. As described in the spaCy issue, that fails with this error:

    ValueError: Cannot get dimension 'nI' for model 'linear': value unset
    

    This will be left in draft until the fix is clarified.

    tests 
    opened by polm 7
  • Transformer.predict: do not broadcast to listeners

    Transformer.predict: do not broadcast to listeners

    The output of a transformer is passed through in two different ways:

    • Prediction: the data is passed through the Doc._.trf_data attribute.
    • Training: the data is broadcast directly to the transformer's listeners.

    However, the Transformer.predict method breaks the strict separation between training and prediction by also broadcasting transformer outputs to its listeners.

    This breaks down when we are training a model with an unfrozen transformer that is also in annotating_components. The transformer will first (as part of its update step) broadcast the tensors and backprop function to its listeners. However, when it then acts as an annotating component, it immediately overrides its own output and clears the backprop function. As a result, gradients will not flow into the transformer.

    This change removes the broadcast from the predict method. If a listener does not receive a batch, attempt to get the transformer output from the Doc instances. This makes it possible to train a pipeline with a frozen transformer.

    This ports https://github.com/explosion/spaCy/pull/11385 to spacy-transformers. Alternative to #342.

    bug feat / pipeline 
    opened by danieldk 0
Releases(v1.1.9)
  • v1.1.9(Dec 19, 2022)

    • Extend support for transformers up to v4.25.x.
    • Add support for Python 3.11 (currently limited to Linux due to the supported platforms for PyTorch v1.13.x).
    Source code(tar.gz)
    Source code(zip)
  • v1.1.8(Aug 12, 2022)

    • Extend support for transformers up to v4.21.x.
    • Support MPS device in HFShim (#328).
    • Track seen docs during alignment to improve speed (#337).
    • Don't require examples in Transformer.initialize (#341).
    Source code(tar.gz)
    Source code(zip)
  • v1.1.7(Aug 25, 2022)

    • Extend support for transformers up to v4.20.x.
    • Convert all transformer outputs to XP arrays at once (#330).
    • Support alternate model loaders in HFShim and HFWrapper (#332).
    Source code(tar.gz)
    Source code(zip)
  • v1.1.6(Jun 2, 2022)

    • Extend support for transformers up to v4.19.x.
    • Fix issue #324: Skip backprop for transformer if not available, for example if the transformer is frozen.
    Source code(tar.gz)
    Source code(zip)
  • v1.1.5(Mar 15, 2022)

  • v1.1.4(Jan 14, 2022)

  • v1.1.3(Dec 7, 2021)

  • v1.1.2(Oct 28, 2021)

  • v1.1.1(Oct 19, 2021)

    🔴 Bug fixes

    • Fix #309: Fix parameter ordering and defaults for new parameters in TransformerModel architectures.
    • Fix #310: Fix config and model issues when replacing listeners.

    👥 Contributors

    @adrianeboyd, @svlandeg

    Source code(tar.gz)
    Source code(zip)
  • v1.1.0(Oct 18, 2021)

    ✨ New features and improvements

    • Refactor and improve transformer serialization for better support of inline transformer components and replacing listeners.
    • Provide the transformer model output as ModelOutput instead of tuples in TransformerData.model_output and FullTransformerBatch.model_output. For backwards compatibility, the tuple format remains available under TransformerData.tensors and FullTransformerBatch.tensors. See more details in the transformer API docs.
    • Add support for transformer_config settings such as output_attentions. Additional output is stored under TransformerData.model_output. More details in the TransformerModel docs.
    • Add support for mixed-precision training.
    • Improve training speed by streamlining allocations for tokenizer output.
    • Extend support for transformers up to v4.11.x.

    🔴 Bug fixes

    • Fix support for GPT2 models.

    ⚠️ Backwards incompatibilities

    • The serialization format for transformer components has changed in v1.1 and is not compatible with spacy-transformers v1.0.x. Pipelines trained with v1.0.x can be loaded with v1.1.x, but pipelines saved with v1.1.x cannot be loaded with v1.0.x.
    • TransformerData.tensors and FullTransformerBatch.tensors return a tuple instead of a list.

    👥 Contributors

    @adrianeboyd, @bryant1410, @danieldk, @honnibal, @ines, @KennethEnevoldsen, @svlandeg

    Source code(tar.gz)
    Source code(zip)
  • v1.0.6(Sep 2, 2021)

  • v1.0.5(Aug 26, 2021)

  • v1.0.4(Aug 12, 2021)

    • Extend transformers support to <4.10.0
    • Enable pickling of span getters and annotation setters, which is required for multiprocessing with spawn
    Source code(tar.gz)
    Source code(zip)
  • v1.0.3(Jul 20, 2021)

  • v1.0.2(Apr 21, 2021)

    ✨ New features and improvements

    • Add support for transformers v4.3-v4.5
    • Add extra for CUDA 11.2

    🔴 Bug fixes

    • Fix #264, #265: Improve handling of empty docs
    • Fix #269: Add trf_data extension in Transformer.__call__ and Transformer.pipe to support distributed processing

    👥 Contributors

    Thanks to @bryant1410 for the pull requests and contributions!

    Source code(tar.gz)
    Source code(zip)
  • v1.0.1(Feb 2, 2021)

  • v1.0.0(Feb 1, 2021)

    This release requires spaCy v3.

    ✨ New features and improvements

    • Rewrite library from scratch for spaCy v3.0.
    • Transformer component for easy pipeline integration.
    • TransformerListener to share transformer weights between components.
    • Built-in registered functions that are available in spaCy if spacy-transformers is installed in the same environment.

    📖 Documentation

    Source code(tar.gz)
    Source code(zip)
  • v1.0.0rc2(Jan 19, 2021)

    🌙 This release is a pre-release and requires spaCy v3 (nightly).

    ✨ New features and improvements

    • Add support for Python 3.9
    • Add support for transformers v4

    🔴 Bug fixes

    • Fix #230: Add upstream argument to TransformerListener.v1
    • Fix #238: Skip special tokens during alignment
    • Fix #246: Raise error if model max length exceeded
    Source code(tar.gz)
    Source code(zip)
  • v1.0.0rc0(Oct 15, 2020)

    🌙 This release is a pre-release and requires spaCy v3 (nightly).

    ✨ New features and improvements

    • Rewrite library from scratch for spaCy v3.0.
    • Transformer component for easy pipeline integration.
    • TransformerListener to share transformer weights between components.
    • Built-in registered functions that are available in spaCy if spacy-transformers is installed in the same environment.

    📖 Documentation

    Source code(tar.gz)
    Source code(zip)
  • v0.6.2(Jun 29, 2020)

  • v0.6.1(Jun 18, 2020)

    ⚠️ This release requires downloading new models.

    • Update spacy-transformers for spaCy v2.3
    • Update and extend supported transformers versions to >=2.4.0,<2.9.0
    • Use transformers.AutoConfig to support loading pretrained models from https://huggingface.co/models
    • #123: Fix alignment algorithm using pytokenizations

    Thanks to @tamuhey for the pull request!

    Source code(tar.gz)
    Source code(zip)
  • v0.5.3(Jun 18, 2020)

    Bug fixes related to alignment and truncation:

    • #191: Reset max_len in case of alignment error
    • #196: Fix wordpiecer truncation to be per sentence

    Enhancement:

    • #162: Let nlp.update handle Doc type inputs

    Thanks to @ZhuoruLin for the pull requests and helping us debug issues related to batching and truncation!

    Source code(tar.gz)
    Source code(zip)
  • v0.6.0(May 24, 2020)

    Update to newer version of transformers.

    This library is being rewritten for spaCy v3, in order to improve its flexibility and performance and to make it easier to stay up to date with new transformer models. See here for details: https://github.com/explosion/spacy-transformers/pull/173

    Source code(tar.gz)
    Source code(zip)
  • v0.5.2(May 24, 2020)

  • v0.5.1(Oct 28, 2019)

    • Downgrade version pin of importlib_metadata to prevent conflict.
    • Fix issue #92: Fix index error when calculating doc.tensor.

    Thanks to @ssavvi for the pull request!

    Source code(tar.gz)
    Source code(zip)
  • v0.5.0(Oct 8, 2019)

    ⚠️ This release requires downloading new models. Also note the new model names that specify trf (transformers) instead of pytt (PyTorch transformers).

    • Rename package from spacy-pytorch-transformers to spacy-transformers.
    • Update to spacy>=2.2.0.
    • Upgrade to latest transformers.
    • Improve code and repo organization.
    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(Sep 4, 2019)

  • v0.3.0(Aug 27, 2019)

  • v0.2.0(Aug 12, 2019)

    • Add support for GLUE benchmark tasks.
    • Support text-pair classification. The specifics of this are likely to change, but you can see run_glue.py for current usage.
    • Improve reliability of tokenization and alignment.
    • Add support for segment IDs to the PyTT_Wrapper class. These can now be passed in as a second column of the RaggedArray input. See the model_registry.get_word_pieces function for example usage.
    • Set default maximum sequence length to 128.
    • Fix bug that caused settings not to be passed into PyTT_TextCategorizer on model initialization.
    • Fix serialization of XLNet model.
    Source code(tar.gz)
    Source code(zip)
  • v0.1.1(Aug 10, 2019)
