NLP library designed for reproducible experimentation management

Overview

Welcome to the Transfer NLP library, a framework built on top of PyTorch to promote reproducible experimentation and Transfer Learning in NLP.

You can get an overview of the high-level API in this Colab Notebook, which shows how to use the framework on several examples. All DL-based examples in these notebooks embed in-cell Tensorboard training monitoring!

For an example of pre-trained model fine-tuning, we provide a short executable tutorial on BertClassifier fine-tuning in this Colab Notebook.

Set up your environment

mkvirtualenv transfernlp
workon transfernlp

git clone https://github.com/feedly/transfer-nlp.git
cd transfer-nlp
pip install -r requirements.txt

To use Transfer NLP as a library:

# to install the experiment builder only
pip install transfernlp
# to install Transfer NLP with PyTorch and Transfer Learning in NLP support
pip install transfernlp[torch]

or

pip install git+https://github.com/feedly/transfer-nlp.git

to get the latest state before new releases.

To use Transfer NLP with associated examples:

git clone https://github.com/feedly/transfer-nlp.git
cd transfer-nlp
pip install -r requirements.txt

Documentation

API documentation and an overview of the library can be found here.

Reproducible Experiment Manager

The core of the library is an experiment builder: you define the different objects that your experiment needs, and the configuration loader builds them for you. For reproducible research and easy ablation studies, the library enforces the use of configuration files for experiments. Since people have different tastes for what constitutes a good experiment file, the library allows experiments to be defined in several formats:

  • Python Dictionary
  • JSON
  • YAML
  • TOML

In Transfer-NLP, an experiment config file contains all the information necessary to define the experiment entirely. This is where you insert the names of the different components your experiment will use, along with the hyperparameters you want. Transfer-NLP makes use of the Inversion of Control pattern, which lets you define any class / method / function you could need; the ExperimentConfig class will create a dictionary and instantiate your objects accordingly.

To use your own classes inside Transfer-NLP, you need to register them with the @register_plugin decorator. Instead of a separate registry for each kind of component (models, data loaders, vectorizers, optimizers, ...), a single registry is used, to allow complete customization.

If you use Transfer NLP as a dev dependency only, you might want to use it purely declaratively and call register_plugin() on the objects you want to use at experiment run time.
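
For instance, registering one of the classes used in the YAML example below could look like this (a minimal sketch; MyModel and its parameters are hypothetical names, not part of the library):

from transfer_nlp.plugins.config import register_plugin
import torch

@register_plugin
class MyModel(torch.nn.Module):
    # A toy model whose parameter names match the YAML example below
    def __init__(self, model_hyper_param: int, data):
        super().__init__()
        self.data = data
        self.fc = torch.nn.Linear(model_hyper_param, 2)

    def forward(self, x):
        return self.fc(x)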

Here is an example of how you can define an experiment in a YAML file:

data_loader:
  _name: MyDataLoader
  data_parameter: foo
  data_vectorizer:
    _name: MyVectorizer
    vectorizer_parameter: bar

model:
  _name: MyModel
  model_hyper_param: 100
  data: $data_loader

trainer:
  _name: MyTrainer
  model: $model
  data: $data_loader
  loss:
    _name: PyTorchLoss
  tensorboard_logs: $HOME/path/to/tensorboard/logs
  metrics:
    accuracy:
      _name: Accuracy

Any object can be defined through a class, method or function, given a _name parameter followed by its own parameters. Experiments are then loaded and instantiated with ExperimentConfig(experiment=experiment_path_or_dict).

Some considerations:

  • Default parameters can be skipped in the experiment file.

  • If an object is used in several places, you can refer to it with the $ symbol; for example, here the trainer object uses the data_loader instantiated elsewhere. No ordering of objects is required.

  • For paths, you might want to use environment variables so that other machines can also run your experiments. In the previous example, you would run e.g. ExperimentConfig(experiment=yaml_path, HOME=Path.home()) to instantiate the experiment and replace $HOME with your machine's home path (see the sketch after this list).

  • The config instantiation allows for arbitrarily complex settings with nested dicts / lists.
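
Putting this together, loading the YAML experiment above could look like the following sketch (it assumes the file is saved as experiment.yaml and that built objects are retrieved from the config by their keys, dict-style):

from pathlib import Path
from transfer_nlp.plugins.config import ExperimentConfig

# $HOME in the config file is substituted with the value passed here
experiment = ExperimentConfig(experiment='experiment.yaml', HOME=Path.home())
trainer = experiment['trainer']  # the fully built MyTrainer, sharing the same data_loader as the model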

You can have a look at the tests for examples of experiment settings the config loader can build. Additionally, we provide runnable experiments in experiments/.

Transfer Learning in NLP: flexible PyTorch Trainers

For deep learning experiments, we provide a BaseIgniteTrainer in transfer_nlp.plugins.trainers.py. This basic trainer takes a model and some data as input and runs a whole training pipeline. We make use of the PyTorch-Ignite library to monitor events during training (logging metrics, manipulating learning rates, checkpointing models, etc.). Tensorboard logs are also included as an option: specify a tensorboard_logs path parameter in the config file, then run tensorboard --logdir=path/to/logs in a terminal and you can monitor your experiment while it's training! Tensorboard comes with very nice utilities to keep track of the norms of your model weights, histograms, distributions, embedding visualizations, etc., so we really recommend using it.

We provide a SingleTaskTrainer class which you can use for any supervised setting dealing with a single task. We are working on a MultiTaskTrainer class for multi-task settings, and on a SingleTaskFineTuner for fine-tuning large models.

Use cases

Here are a few use cases for Transfer NLP:

  • You have all your classes / methods / functions ready: Transfer NLP offers a clean way to centralize loading and running your experiments.
  • You have all your classes but would like to benchmark multiple configuration settings: the ExperimentRunner class runs your sets of experiments sequentially and generates personalized reporting (you only need to implement the report method in a custom ReporterABC class; see the sketch after this list).
  • You want to experiment with training deep learning models but feel overwhelmed by all the boilerplate code in SOTA model GitHub projects. Transfer NLP encourages a separation of concerns so that you can focus on the PyTorch Module implementation and let the trainers deal with the training loop (while still controlling most of the training parameters through the experiment file).
  • You want to experiment with more advanced training strategies, but you are more interested in the ideas than the implementation details. We are working on improving the advanced trainers so that it becomes easier to try new ideas for multi-task settings, fine-tuning strategies or model adaptation schemes.
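
As an illustration of the benchmarking use case above, a custom reporter could look like this sketch (hedged: the ReporterABC import path, the report signature and the metrics attribute are assumptions, not the library's confirmed API):

from pathlib import Path
from transfer_nlp.plugins.config import register_plugin
from transfer_nlp.plugins.reporters import ReporterABC  # assumed import path

@register_plugin
class MyReporter(ReporterABC):
    # Hypothetical signature: write the metrics of one experiment run to report_dir
    def report(self, name: str, experiment, report_dir: Path):
        metrics = experiment['trainer'].metrics  # assumed attribute on the trainer
        (report_dir / f'{name}.txt').write_text(str(metrics))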

Slack integration

While experimenting with your own models / data, training might take some time. To get notified when your training finishes or crashes, you can use the simple knockknock library by the folks at HuggingFace, which adds a simple decorator to your running function to notify you via Slack, e-mail, etc.
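
For example, with knockknock's Slack notifier (a minimal sketch; the webhook URL, channel and the trainer's train() entry point are placeholders / assumptions):

from knockknock import slack_sender
from transfer_nlp.plugins.config import ExperimentConfig

@slack_sender(webhook_url='<your-slack-webhook-url>', channel='<your-channel>')
def run_experiment():
    experiment = ExperimentConfig(experiment='experiment.yaml')
    experiment['trainer'].train()  # assumed trainer entry point

run_experiment()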

Some objectives to reach:

  • Include examples using state of the art pre-trained models
  • Incorporate linguistic properties into models
  • Experiment with RL for sequential tasks
  • Include probing tasks to try to understand the properties that are learned by the models

Acknowledgment

The library was inspired by the reading of "Natural Language Processing with PyTorch" by Delip Rao and Brian McMahan. The experiments in experiments/, the Vocabulary building block and the embeddings nearest neighbors utilities are taken or adapted from the code provided in the book.

Comments
  • Pytorch Lightning as a back-end

    Hi! Check out Pytorch Lightning as an option for your backend! We're looking for awesome projects implemented in Lightning.

    https://github.com/williamFalcon/pytorch-lightning

    opened by williamFalcon 3
  • have the possibility to build object with a function instead of a class

    When you want to experiment with someone else's code, you don't want to copy-paste their code.

    If you want to use a class AwesomeClass from an awesome github repo, you can do:

    from transfer_nlp.plugins.config import register_plugin
    from awesome_repo.module import AwesomeClass
    
    register_plugin(AwesomeClass)
    

    and then use it in your experiments.

    However, when reusing complex objects, it might be complicated to configure them. An example is the pre-trained model from the pytorch-pretrained-bert repo, where you can build complex models with nice one-liners such as model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=4)

    It's possible to encapsulate these into other classes and have Transfer NLP build them, but that can feel awkward and adds unnecessary complexity / lines of code compared to the initial one-liner. An alternative is to build these objects with a function; in the previous example we would only write:

    @register_function
    def bert_classifier(bert_version: str='bert-base-uncased', num_labels: int=4):
        return BertForSequenceClassification.from_pretrained(pretrained_model_name_or_path=bert_version, num_labels=num_labels)
    

    and we could use functions just like methods in the config loading.

    opened by petermartigny 2
  • caching objects in experiment runner

    Some read-only objects can take a while to load in experiments (embeddings, datasets, etc.). The current ExperimentRunner always recreates the entire experiment. It would be nice if we could keep some objects in memory...

    Proposal

    Add an experiment_cache parameter to run_all:

        def run_all(experiment: Union[str, Path, Dict],
                    experiment_cache: Union[str, Path, Dict],
                    experiment_config: Union[str, Path],
                    report_dir: Union[str, Path],
                    trainer_config_name: str = 'trainer',
                    reporter_config_name: str = 'reporter',
                    **env_vars) -> None:
    

    The cache is just another experiment json. It would be loaded only once, at the very beginning, using only the env_vars. Any resulting objects would then be added to env_vars when running each experiment. Objects can optionally implement a Resettable class with a reset method that would be called once before each experiment.

    Incorrect usage of this feature could lead to non-reproducibility issues, but through the docs we could make it clear this should only be used for read-only objects. I think it would be worth doing...

    opened by kireet 1
  • cleanup config tests, also fixes #28

    I wanted to make the config tests a bit more sane: minimize the number of temporary classes we needed to create and improve naming. Also found issue #28 and fixed it.

    opened by kireet 1
  • unsubstituted parameter doesn't cause an error

    Something like this won't raise an error:

    { 
       "item": {
           "_name": "foo",
           "param":"$bar"
        }
    }
    

    even if we don't set a value for bar. This can easily lead to misconfigured objects.

    opened by kireet 1
  • IoC refactor

    • Refactor the basic trainer into an IoC pattern, with a single registry for all registrable classes, allowing for maximum customization
    • Separate the example experiments from the library
    • Adapt the examples to the new logic
    • Set CUDA as optional in the config file
    opened by petermartigny 1
  • TPU + 16 bit

    hey!

    Not sure if you've seen: https://github.com/williamFalcon/pytorch-lightning.

    The fastest growing PyTorch front-end project.

    We're also now venture funded, so we have a full-time team working on this and will be around for a very long time :)

    https://medium.com/pytorch/pytorch-lightning-0-7-1-release-and-venture-funding-dd12b2e75fb3?postPublishedType=repub

    opened by williamFalcon 0
  • Optional torch imports for trainers

    We import torch modules in the __init__.py of trainers. This PR makes these imports optional, for the case where torch isn't installed but we still want to use the base TrainerABC class.

    opened by petermartigny 0
  • move trainerABC to separate file

    This PR moves the TrainerABC class to a separate file, so that someone who wants to use the experiment runner class can do so without having to install torch.

    opened by petermartigny 0
  • Refactor/experiment config

    This PR does the refactoring defined in #76 to have a more easily maintainable configuration logic.

    Also, we remove the pytorch modules that were included in the registry by default. This allows non-DL projects to use the config part of the library.

    opened by petermartigny 0
  • simplify configs reporting

    This PR does a few things:

    • Get rid of saving ini .cfg files
    • Before doing the sequential experiments, we copy the configs, experiment and cache files to a global-reporting directory.
    • This global-reporting directory will also host the outputs from the reporter's report_globally() call
    opened by petermartigny 0
  • [ExperimentRunner] Default value of experiment_cache causes run_all to fail

    ExperimentRunner.run_all fails if experiment_cache is None.

    The issue comes from line 109, where the default value for the experiment cache (None) is not handled correctly: https://github.com/feedly/transfer-nlp/blob/master/transfer_nlp/runner/experiment_runner.py#L109

    opened by Mathieu4141 0
  • Check that all registrables are registered

    Currently, objects are built one by one, and when one fails an error is thrown.

    It would be great to have a quick pass before instantiating objects to check that all registrable names / aliases are actually registered, and to throw an error at that point.

    opened by petermartigny 0
  • Downloader Plugin

    From the talk today, one good point was that reproducibility problems often stem from data inconsistencies. To that end, I think we should have a DataDownloader component that can download data from URLs and save it locally to disk.

    • If the files exist, the downloader can skip the download
    • the downloader should calculate checksums for downloaded files. it should produce a checksums.cfg file to simplify reusing these in configuration later
    • the downloader should allow checksums to be configured in the experiment file. when set, the downloader would verify the downloaded file is the same as the one specified in the experiment.

    so an example json config could be:

    {
      "_name": "Downloader",
      "local_dir": "$my_path",
      "checksums": "$WORK_DIR/checksums_2019_05_23.cfg", <-- produced by a previous download 
      "sentences.txt.gz": {
        "url": "$BASE_URL/sentences.txt.gz",
        "decompress": true
      },
      "word_embeddings.npy": {
        "url": "$BASE_URL/word_embeddings.npy"
      }
    }
    
    opened by kireet 1
Releases (v0.1.6)
  • v0.1.5(Jun 25, 2019)

  • v0.1.3(May 29, 2019)

  • v0.1.2(May 28, 2019)

  • v0.1.1(May 28, 2019)

  • v0.1(May 28, 2019)

    This is a first stable version for Transfer NLP, allowing users to:

    • Keep track of experiments and enforce reproducible research
    • Combine custom and open-source code into controlled experiments

    Here are a few features available in the release:

    • Configuring all objects of an experiment using a json file
    • Running sequential jobs for the same experiment using different sets of parameters (parameter tuning, ablation studies...)
    • Keeping track of your experiments and making them reproducible / incrementally improvable
    • Allowing dynamic re-creation of any instantiated object during training through object factories
    • Several basic building blocks: the Vocabulary class, PyTorch optimizers, Predictors...
    • Transfer Learning: using the BasicTrainer to fine-tune pre-trained models on your custom downstream tasks
