Scikit-learn style model finetuning for NLP

Overview

Finetune is a library that allows users to leverage state-of-the-art pretrained NLP models for a wide variety of downstream tasks.

Finetune currently supports TensorFlow implementations of the following models:

  1. BERT, from "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
  2. RoBERTa, from "RoBERTa: A Robustly Optimized BERT Pretraining Approach"
  3. GPT, from "Improving Language Understanding by Generative Pre-Training"
  4. GPT2, from "Language Models are Unsupervised Multitask Learners"
  5. TextCNN, from "Convolutional Neural Networks for Sentence Classification"
  6. Temporal Convolution Network, from "An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling"
  7. DistilBERT, from "Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT"
Section | Description
------- | -----------
API Tour | Base models, configurables, and more
Installation | How to install using pip or directly from source
Finetune with Docker | Finetune and inference within a Docker container
Documentation | Full API documentation

Finetune API Tour

Finetuning the base language model is as easy as calling Classifier.fit:

model = Classifier()               # Load base model
model.fit(trainX, trainY)          # Finetune base model on custom data
model.save(path)                   # Serialize the model to disk
...
model = Classifier.load(path)      # Reload models from disk at any time
predictions = model.predict(testX) # [{'class_1': 0.23, 'class_2': 0.54, ..}, ..]

Choose your desired base model from finetune.base_models:

from finetune.base_models import BERT, RoBERTa, GPT, GPT2, TextCNN, TCN
model = Classifier(base_model=BERT)

Optimize your model with a variety of configurables. A detailed list of all config items can be found in the finetune docs.

model = Classifier(low_memory_mode=True, lr_schedule="warmup_linear", max_length=512, l2_reg=0.01, oversample=True, ...)

The library supports finetuning for a number of tasks. A detailed description of all target models can be found in the finetune API reference.

from finetune import *
models = (Classifier, MultiLabelClassifier, MultiFieldClassifier, MultipleChoice, # Classify one or more inputs into one or more classes
          Regressor, OrdinalRegressor, MultifieldRegressor,                       # Regress on one or more inputs
          SequenceLabeler, Association,                                           # Extract tokens from a given class, or infer relationships between them
          Comparison, ComparisonRegressor, ComparisonOrdinalRegressor,            # Compare two documents for a given task
          LanguageModel, MultiTask,                                               # Further pretrain your base models
          DeploymentModel                                                         # Wrapper to optimize your serialized models for a production environment
          )

For example usage of each of these target types, see the finetune/datasets directory. For simplicity and runtime, these examples use smaller versions of the published datasets. A brief sketch of one target type follows.
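Here is a minimal sketch of SequenceLabeler usage; the character-offset label format shown is an assumption based on the finetune documentation, so consult the API reference for the authoritative format.

from finetune import SequenceLabeler

trainX = ["Alice flew to Paris."]                                            # Raw documents
trainY = [[{"start": 0, "end": 5, "label": "PERSON", "text": "Alice"},      # Assumed span format:
           {"start": 14, "end": 19, "label": "LOCATION", "text": "Paris"}]] # character offsets plus a class name

model = SequenceLabeler()                               # Base model with a sequence-labeling target
model.fit(trainX, trainY)                               # Finetune on the annotated spans
predictions = model.predict(["Bob drove to Berlin."])  # Predicted spans, one list per document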

If you have large amounts of unlabeled training data and only a small amount of labeled training data, you can finetune in two steps for best performance.

model = Classifier()               # Load base model
model.fit(unlabeledX)              # Finetune base model on unlabeled training data
model.fit(trainX, trainY)          # Continue finetuning with a smaller amount of labeled data
predictions = model.predict(testX) # [{'class_1': 0.23, 'class_2': 0.54, ..}, ..]
model.save(path)                   # Serialize the model to disk

Installation

Finetune can be installed directly from PyPI using pip:

pip3 install finetune

or installed directly from source:

git clone -b master https://github.com/IndicoDataSolutions/finetune && cd finetune
python3 setup.py develop              # symlinks the git directory to your python path
pip3 install tensorflow-gpu --upgrade # or tensorflow-cpu
python3 -m spacy download en          # download spacy tokenizer

In order to run finetune on your host, you'll need a working copy of tensorflow-gpu >= 1.14.0 and an up-to-date NVIDIA driver.

You can optionally run the provided test suite to ensure installation completed successfully.

pip3 install pytest
pytest

Docker

If you'd prefer, you can also run finetune in a Docker container. The provided bash scripts assume you have a working install of docker and nvidia-docker.

git clone https://github.com/IndicoDataSolutions/finetune && cd finetune

# For usage with NVIDIA GPUs
./docker/build_gpu_docker.sh      # builds a docker image
./docker/start_gpu_docker.sh      # starts a docker container in the background, forwards $PWD to /finetune

docker exec -it finetune bash # starts a bash session in the docker container

For CPU-only usage:

./docker/build_cpu_docker.sh
./docker/start_cpu_docker.sh

Documentation

Full documentation and an API Reference for finetune is available at finetune.indico.io.

Comments
  • Very slow inference in 0.5.11

After training a default classifier, then saving and loading it, model.predict("lorem ipsum") and model.predict_proba take on average 14 seconds, even on a hefty server such as an AWS p3.16xlarge.

    opened by dimidd 17
  • Out of Memory on Small Dataset

Describe the bug: When attempting to train a classifier on a small dataset of 8,000 documents, I get an out-of-memory error and the script stops running.

Minimum Reproducible Example:
Version of finetune = 0.4.1
Version of tensorflow-gpu = 1.8.0
Version of cuda = release 9.0, V9.0.176
Windows 10 Pro

Load a dataset of documents (X_train) and labels (Y_train), where each document and label is simply a string.

model = finetune.Classifier(max_length=256, batch_size=1)  # tried reducing the memory footprint
model.fit(X_train, Y_train)

Expected behavior: I expected the model to train, but it never starts training.

Additional context: I get the following warnings in the Jupyter notebook:

C:\Users...\Python35\site-packages\finetune\encoding.py:294: UserWarning: Some examples are longer than the max_length. Please trim documents or increase max_length. Fallback behaviour is to use the first 254 byte-pair encoded tokens
  "Fallback behaviour is to use the first {} byte-pair encoded tokens".format(max_length - 2)
C:\Users...\Python35\site-packages\finetune\encoding.py:233: UserWarning: Document is longer than max length allowed, trimming document to 256 tokens.
  max_length
C:\Users...\tensorflow\python\ops\gradients_impl.py:100: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
WARNING:tensorflow:From C:\Users...\tensorflow\python\util\tf_should_use.py:118: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use tf.variables_initializer instead.

    And then I get the following diagnostic info showing up in the command prompt:

2018-10-04 17:26:36.920118: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-10-04 17:26:37.716883: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1356] Found device 0 with properties: name: Quadro M1200 major: 5 minor: 0 memoryClockRate(GHz): 1.148 pciBusID: 0000:01:00.0 totalMemory: 4.00GiB freeMemory: 3.35GiB
2018-10-04 17:26:37.725637: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
2018-10-04 17:26:38.412484: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-10-04 17:26:38.417413: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929] 0
2018-10-04 17:26:38.419392: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0: N
2018-10-04 17:26:38.421353: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/device:GPU:0 with 3083 MB memory) -> physical GPU (device: 0, name: Quadro M1200, pci bus id: 0000:01:00.0, compute capability: 5.0)
[I 17:28:26.081 NotebookApp] Saving file at /projects/language-models/Finetune Package.ipynb
2018-10-04 17:29:14.118663: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
2018-10-04 17:29:14.123595: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-10-04 17:29:14.127649: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929] 0
2018-10-04 17:29:14.135411: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0: N
2018-10-04 17:29:14.138698: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3083 MB memory) -> physical GPU (device: 0, name: Quadro M1200, pci bus id: 0000:01:00.0, compute capability: 5.0)
2018-10-04 17:30:06.881174: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:275] Allocator (GPU_0_bfc) ran out of memory trying to allocate 9.00MiB. Current allocation summary follows.
2018-10-04 17:30:06.900550: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (256): Total Chunks: 60, Chunks in use: 60. 15.0KiB allocated for chunks. 15.0KiB in use in bin. 312B client-requested in use in bin.
2018-10-04 17:30:06.929551: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (512): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-10-04 17:30:06.964647: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (1024): Total Chunks: 2, Chunks in use: 2. 2.5KiB allocated for chunks. 2.5KiB in use in bin. 2.0KiB client-requested in use in bin.
2018-10-04 17:30:06.995394: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (2048): Total Chunks: 532, Chunks in use: 532. 1.56MiB allocated for chunks. 1.56MiB in use in bin. 1.56MiB client-requested in use in bin.
2018-10-04 17:30:07.031613: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (4096): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-10-04 17:30:07.061013: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (8192): Total Chunks: 137, Chunks in use: 137. 1.39MiB allocated for chunks. 1.39MiB in use in bin. 1.39MiB client-requested in use in bin.
2018-10-04 17:30:07.093603: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (16384): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-10-04 17:30:07.130530: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (32768): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-10-04 17:30:07.170321: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (65536): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-10-04 17:30:07.212730: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (131072): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-10-04 17:30:07.246329: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (262144): Total Chunks: 2, Chunks in use: 2. 512.0KiB allocated for chunks. 512.0KiB in use in bin. 512.0KiB client-requested in use in bin.
2018-10-04 17:30:07.288640: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (524288): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-10-04 17:30:07.303248: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (1048576): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-10-04 17:30:07.332990: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (2097152): Total Chunks: 71, Chunks in use: 71. 159.75MiB allocated for chunks. 159.75MiB in use in bin. 159.75MiB client-requested in use in bin.
2018-10-04 17:30:07.364897: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (4194304): Total Chunks: 69, Chunks in use: 68. 466.99MiB allocated for chunks. 459.00MiB in use in bin. 459.00MiB client-requested in use in bin.
2018-10-04 17:30:07.396862: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (8388608): Total Chunks: 140, Chunks in use: 140. 1.23GiB allocated for chunks. 1.23GiB in use in bin. 1.23GiB client-requested in use in bin.
2018-10-04 17:30:07.428029: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (16777216): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-10-04 17:30:07.464813: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (33554432): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-10-04 17:30:07.494067: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (67108864): Total Chunks: 10, Chunks in use: 10. 1.17GiB allocated for chunks. 1.17GiB in use in bin. 1.17GiB client-requested in use in bin.
2018-10-04 17:30:07.524156: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (134217728): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-10-04 17:30:07.550345: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (268435456): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-10-04 17:30:07.578392: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:646] Bin for 9.00MiB was 8.00MiB, Chunk State:
2018-10-04 17:30:07.600123: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 0000000801980000 of size 1280
2018-10-04 17:30:07.629493: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 0000000801980500 of size 1280
2018-10-04 17:30:07.649189: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 0000000801980A00 of size 125144064
2018-10-04 17:30:07.676965: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 00000008090D9600 of size 7077888
2018-10-04 17:30:07.699245: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 0000000809799600 of size 3072
2018-10-04 17:30:07.718738: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 000000080979A200 of size 3072

...and so on. This is, in my opinion, a pretty small dataset, and I've kept the max length small, so I don't think this is a hardware limitation but a bug.

    opened by stevesmit 12
  • interpolate_pos_embed on MultiTask

Hi, it looks like this parameter of the MultiTask model is not being properly inherited by the input_pipeline.

from finetune import Classifier, MultiTask

MAX_LENGTH = 300
finetune_config = {'batch_size': 4,
                   'interpolate_pos_embed': True,
                   'n_epochs': 1,  # default 3
                   'train_embeddings': False,
                   'num_layers_trained': 3,
                   'max_length': MAX_LENGTH}
multi_model = MultiTask({"sentiment": Classifier,
                         "tags": Classifier},
                        **finetune_config)
    

The code above builds the MultiTask object.

The following code finetunes it without issue:

multi_model.finetune(X={"sentiment": X_train.regex_text.values,
                        "tags": X_train.regex_text.values},
                     Y={"sentiment": y_train.sentiment,
                        "tags": y_train.full_topic},
                     batch_size=4)
    

I have also verified that multi_model.input_pipeline.config['interpolate_pos_embed'] is True.

    But when prediction time comes:

    y_pred = multi_model.predict({"sentiment": X_train.regex_text.values,
                                       "tags": X_train.regex_text.values})
    

It fails with:

    ValueError: Max Length cannot be greater than 300 if interpolate_pos_embed is turned off
    

I do not know whether I am missing something in the setup or whether there is a conflict between the parameters of the distinct objects.

Thanks very much, Madison, for the great job! The MultiTask model is a fantastic tool for unevenly labeled multi-objective data.

    opened by Guillermogsjc 11
  • Loading a model from 0.4.1 in 0.5.11

Describe the bug: After saving a model on 5.10 using Classifier.save("my_model.bin") and upgrading to 5.11, loading with Classifier.load("my_model.bin") results in KeyError: 'base_model_path'.

    opened by dimidd 11
  • A different way of doing the similarity/comparison task?

Hey! Thanks for the awesome work. I was wondering if I could use and extend finetune to do the following:

    Instead of using (Start, Text1, Delim, Text2, Extract) and (Start, Text2, Delim, Text1, Extract) as in the paper, can we use (Start, Text1, Extract) and (Start, Text2, Extract) separately through the transformer?

This could be thought of as obtaining sentence/document embeddings for Text1 and Text2 separately. Having done that, I would like to compare their similarity using a distance metric such as cosine distance (i.e., train the transformer as a siamese network).
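For reference, a rough sketch of the embedding-and-distance part of this idea using finetune's featurize method (which the docs describe as returning one dense vector per example), without any siamese training; the input texts here are illustrative only:

from finetune import Classifier
import numpy as np

model = Classifier()                                  # Base model; no task head is trained here
emb1, emb2 = model.featurize(["first document text",  # One embedding vector per input document
                              "second document text"])

# Cosine similarity between the two document embeddings
cosine_sim = np.dot(emb1, emb2) / (np.linalg.norm(emb1) * np.linalg.norm(emb2))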

    Would you suggest I build such a model on top of a fork of finetune?

    opened by chaitjo 11
  • Support for pre-training the language model

Is your feature request related to a problem? Please describe: In order to use the classifier on different languages or specific domains, it would be useful to be able to pretrain the language model.

Describe the solution you'd like: Calling .fit on a corpus (i.e., no labels) should train the language model.

    model.fit(corpus)
    

Describe alternatives you've considered: Using the original repo, which doesn't have a simple-to-use interface.

    enhancement 
    opened by elyase 11
  • ValueError: Couldn't find trained model at /tmp/Finetune14yvac9b.

Describe the bug:
INFO:finetune:Saving tensorboard output to /tmp/Finetune14yvac9b


ValueError                                Traceback (most recent call last)
in
      6     inputs = {"x": tf.placeholder(shape=xshapes, dtype=xtypes)}
      7     return tf.estimator.export.ServingInputReceiver(inputs, inputs)
----> 8 estimator.export_saved_model(export_dir_base='saved_model', serving_input_receiver_fn=serving_input_receiver_fn)

~/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py in export_saved_model(self, export_dir_base, serving_input_receiver_fn, assets_extra, as_text, checkpoint_path, experimental_mode)
    730     as_text=as_text,
    731     checkpoint_path=checkpoint_path,
--> 732     strip_default_attrs=True)
    733
    734 def experimental_export_all_saved_models(

~/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py in _export_all_saved_models(self, export_dir_base, input_receiver_fn_map, assets_extra, as_text, checkpoint_path, strip_default_attrs)
    825     else:
    826       raise ValueError("Couldn't find trained model at {}.".format(
--> 827           self._model_dir))
    828
    829     export_dir = export_lib.get_timestamped_export_dir(export_dir_base)

    ValueError: Couldn't find trained model at /tmp/Finetune14yvac9b.

Minimum Reproducible Example:

import tensorflow as tf  # import added; needed for tf.placeholder below
from finetune import MultiLabelClassifier

model = MultiLabelClassifier.load('comp_gpt2.model')
estimator, hooks = model.get_estimator()
(xtypes, ytypes), (xshapes, yshapes) = model.input_pipeline.feed_shape_type_def()

def serving_input_receiver_fn():
    inputs = {"x": tf.placeholder(shape=xshapes, dtype=xtypes)}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

estimator.export_saved_model(export_dir_base='saved_model', serving_input_receiver_fn=serving_input_receiver_fn)

    opened by emtropyml 10
  • getting much lower accuracy with new release of finetune library

Describe the bug: I updated my finetune library to the latest version two days ago. As a sanity check, I loaded my fine-tuned and saved models from the previous version. I get totally different training and test accuracies. In the previous version, my train and test accuracy was 90% and 82%; now, with this new release, the same fine-tuned model, and the same datasets, I am getting 34% on the training set and 16% on the test set. This is a huge difference. I assume there is a bug, or something else going on?

My code for fine-tuning:

import time

start = time.time()
model = Classifier(n_epochs=2, base_model=GPT2Model, tensorboard_folder='/workspace/checkpoints',
                   max_length=1024, val_size=1000, chunk_long_sequences=False, keep_best_model=True)
model.fit(trainX, trainY)
print("total training time:", time.time() - start)
    

For testing:

import numpy as np  # import added for np.mean below

# Load the saved model
model = Classifier.load('./checkpoints/2epochs_GPT2')
# Test accuracy for the test set
pred_test = model.predict(testX)
accuracy = np.mean(pred_test == testY)
print('Test Accuracy: {:0.3f}'.format(accuracy))
    
    opened by rnyak 9
  • Can I use to generate text?

Hi, this seems like great work by the team. According to the documentation, I understand that every model uses a pretrained language model. Can I use it for the following scenarios, and if so, how?

    1. Fine-tune the pre-trained language model on my own text corpus and then generate (sample) text.
    2. Fine-tune the pre-trained language model on my own text corpus and then score any given text/sentence. Thanks.
    opened by abubakar-ucr 9
  • Slow unsupervised training

Thank you for your library; the supervised finetuning works very well. However, when I try to train on unlabelled data (model.fit(unlabeledX)), training is much slower (9s/it) than supervised training (1.7s/it). This is on one K80 GPU. I am not sure why unsupervised training is slower; doesn't supervised training tune the language model as well?

    opened by chiayewken 9
  •  eval_acc parameter

Describe the bug: I set eval_acc=True and val_size=1000. I am fine-tuning the Classifier model for 3 epochs. I get 90% training and 82% test set accuracies, but when I check the TensorBoard accuracy plot, I see the validation accuracy is 49%. That does not seem correct to me.

    I am not sure if the eval_acc is calculated correctly.

Expected behavior: How can we print out the validation accuracy during fine-tuning, at least once per epoch?

    opened by rnyak 8
  • [Snyk] Security upgrade tensorflow/tensorflow from 2.7.1 to latest

    Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.

    Changes included in this PR

    • docker/Dockerfile.cpu

    We recommend upgrading to tensorflow/tensorflow:latest, as this image has only 29 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.

    Some of the most important vulnerabilities in your base image include:

| Severity | Priority Score / 1000 | Issue | Exploit Maturity |
| :------: | :-------------------- | :---- | :--------------- |
| low severity | 536 | Improper Check for Dropped Privileges SNYK-UBUNTU2004-BASH-581100 | Mature |
| high severity | 614 | Loop with Unreachable Exit Condition ('Infinite Loop') SNYK-UBUNTU2004-OPENSSL-2426343 | No Known Exploit |
| medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387723 | No Known Exploit |
| medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387728 | No Known Exploit |
| medium severity | 514 | Improper Input Validation SNYK-UBUNTU2004-XZUTILS-2442551 | No Known Exploit |


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    opened by snyk-bot 0
  • [Snyk] Security upgrade tensorflow/tensorflow from 2.7.1-gpu to latest-gpu

    This PR was automatically created by Snyk using the credentials of a real user.


    Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.

    Changes included in this PR

    • docker/Dockerfile.gpu

    We recommend upgrading to tensorflow/tensorflow:latest-gpu, as this image has only 49 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.

    Some of the most important vulnerabilities in your base image include:

| Severity | Priority Score / 1000 | Issue | Exploit Maturity |
| :------: | :-------------------- | :---- | :--------------- |
| low severity | 536 | Improper Check for Dropped Privileges SNYK-UBUNTU2004-BASH-581100 | Mature |
| medium severity | 514 | CVE-2022-32221 SNYK-UBUNTU2004-CURL-3070971 | No Known Exploit |
| medium severity | 514 | Arbitrary Code Injection SNYK-UBUNTU2004-GNUPG2-2940666 | No Known Exploit |
| medium severity | 514 | Improper Input Validation SNYK-UBUNTU2004-GZIP-2442549 | No Known Exploit |
| high severity | 614 | Loop with Unreachable Exit Condition ('Infinite Loop') SNYK-UBUNTU2004-OPENSSL-2426343 | No Known Exploit |


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    opened by ashmuck 0
  • [Snyk] Security upgrade tensorflow/tensorflow from 2.7.1 to 2.11.0

    Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.

    Changes included in this PR

    • docker/Dockerfile.cpu

    We recommend upgrading to tensorflow/tensorflow:2.11.0, as this image has only 29 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.

    Some of the most important vulnerabilities in your base image include:

| Severity | Priority Score / 1000 | Issue | Exploit Maturity |
| :------: | :-------------------- | :---- | :--------------- |
| low severity | 536 | Improper Check for Dropped Privileges SNYK-UBUNTU2004-BASH-581100 | Mature |
| high severity | 614 | Loop with Unreachable Exit Condition ('Infinite Loop') SNYK-UBUNTU2004-OPENSSL-2426343 | No Known Exploit |
| medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387723 | No Known Exploit |
| medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387728 | No Known Exploit |
| medium severity | 514 | Improper Input Validation SNYK-UBUNTU2004-XZUTILS-2442551 | No Known Exploit |


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    opened by snyk-bot 0
  • [Snyk] Security upgrade tensorflow/tensorflow from 2.7.1-gpu to 2.11.0-gpu

    Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.

    Changes included in this PR

    • docker/Dockerfile.gpu

    We recommend upgrading to tensorflow/tensorflow:2.11.0-gpu, as this image has only 49 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.

    Some of the most important vulnerabilities in your base image include:

| Severity | Priority Score / 1000 | Issue | Exploit Maturity |
| :------: | :-------------------- | :---- | :--------------- |
| low severity | 536 | Improper Check for Dropped Privileges SNYK-UBUNTU2004-BASH-581100 | Mature |
| high severity | 614 | Loop with Unreachable Exit Condition ('Infinite Loop') SNYK-UBUNTU2004-OPENSSL-2426343 | No Known Exploit |
| medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387723 | No Known Exploit |
| medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387728 | No Known Exploit |
| medium severity | 514 | Improper Input Validation SNYK-UBUNTU2004-XZUTILS-2442551 | No Known Exploit |


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    opened by snyk-bot 0