DL Translate

A deep learning-based translation library built on Huggingface transformers and Facebook's mBART-Large

💻 GitHub Repository
📚 Documentation / Readthedocs
🐍 PyPi project
🧪 Colab Demo / Kaggle Demo

Quickstart

Install the library with pip:

pip install dl-translate

To translate some text:

import dl_translate as dlt

mt = dlt.TranslationModel()  # Slow when you load it for the first time

text_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
mt.translate(text_hi, source=dlt.lang.HINDI, target=dlt.lang.ENGLISH)

Above, you can see that dlt.lang contains variables representing each of the 50 available languages with auto-complete support. Alternatively, you can specify the language (e.g. "Arabic") or the language code (e.g. "fr_XX" for French):

text_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."
mt.translate(text_ar, source="Arabic", target="fr_XX")

If you want to verify whether a language is available, you can check it:

print(mt.available_languages())  # All languages that you can use
print(mt.available_codes())  # Code corresponding to each language accepted
print(mt.get_lang_code_map())  # Dictionary of lang -> code

Usage

Selecting a device

When you load the model, you can specify the device:

mt = dlt.TranslationModel(device="auto")

By default, the value will be device="auto", which means it will use a GPU if possible. You can also explicitly set device="cpu" or device="gpu", or some other string accepted by torch.device(). In general, it is recommended to use a GPU if you want a reasonable processing time.
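
For example, here is a minimal sketch forcing a specific device explicitly (the GPU variants assume a CUDA-enabled torch install):

mt = dlt.TranslationModel(device="cpu")       # always available
# mt = dlt.TranslationModel(device="gpu")     # requires a CUDA-enabled torch build
# mt = dlt.TranslationModel(device="cuda:0")  # any string accepted by torch.device() also works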

Loading from a path

By default, dlt.TranslationModel will download the model from the huggingface repo and cache it. However, you are free to load from a path:

mt = dlt.TranslationModel("/path/to/your/model/directory/")

Make sure that your tokenizer is also stored in the same directory if you use this approach.
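
If you do not have such a directory yet, one possible way to create it is with huggingface's save_pretrained. This is a hedged sketch: the checkpoint name and target path are illustrative, and model_family may need to be given explicitly when loading from a path (see the changelog notes below):

from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

# Download once, then save the model and tokenizer into the same directory
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
model.save_pretrained("/path/to/your/model/directory/")
tokenizer.save_pretrained("/path/to/your/model/directory/")

# Load from the local directory; the model family is given explicitly since it
# cannot be inferred from a path
mt = dlt.TranslationModel("/path/to/your/model/directory/", model_family="mbart50")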

Using a different model

You can also choose another model that has a similar format, e.g.

mt = dlt.TranslationModel("facebook/mbart-large-50-one-to-many-mmt")

Note that the available languages will change if you do this, so you will not be able to leverage dlt.lang or dlt.utils.
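
For example, a minimal sketch using the one-to-many checkpoint with explicit mBART language codes (the codes accepted depend on the model you load, so treat these as illustrative):

mt = dlt.TranslationModel("facebook/mbart-large-50-one-to-many-mmt")

# dlt.lang no longer applies here, so pass the model's own codes directly
mt.translate("Hello world!", source="en_XX", target="fr_XX")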

Breaking down into sentences

It is not recommended to use extremely long texts as they take more time to process. Instead, you can try to break them down into sentences with the help of nltk. First install the library with pip install nltk, then run:

import nltk

nltk.download("punkt")

text = "Mr. Smith went to his favorite cafe. There, he met his friend Dr. Doe."
sents = nltk.tokenize.sent_tokenize(text, "english")  # don't use dlt.lang.ENGLISH
" ".join(mt.translate(sents, source=dlt.lang.ENGLISH, target=dlt.lang.FRENCH))

Batch size and verbosity when using translate

It's possible to set a batch size (i.e. the number of elements processed at once) for mt.translate, as well as whether to display a progress bar:

...
mt = dlt.TranslationModel()
mt.translate(text, source, target, batch_size=32, verbose=True)

If you set batch_size=None, it will compute the entire text at once rather than splitting it into "chunks". We recommend lowering batch_size if you do not have a lot of RAM or VRAM and run into a CUDA memory error. Set a higher value if you are using a high-end GPU and the VRAM is not fully utilized.
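
As a small illustrative sketch (the values below are examples, not tuned recommendations):

mt.translate(text, source, target, batch_size=None)              # one single pass; fastest if it fits in memory
mt.translate(text, source, target, batch_size=8, verbose=True)   # small chunks; safer on modest GPUs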

dlt.utils module

An alternative to mt.available_languages() is the dlt.utils module. You can use it to find out which languages and codes are available:

print(dlt.utils.available_languages('mbart50'))  # All languages that you can use
print(dlt.utils.available_codes('mbart50'))  # Code corresponding to each language accepted
print(dlt.utils.get_lang_code_map('mbart50'))  # Dictionary of lang -> code
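
A minimal sketch of how you might use these to validate a language before translating (the exact spelling of the language name is illustrative):

lang_map = dlt.utils.get_lang_code_map("mbart50")

if "Hindi" in lang_map:
    print("Hindi is supported, its code is:", lang_map["Hindi"])
else:
    print("Hindi is not available for this model family")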

Advanced

The following section assumes you have knowledge of PyTorch and Huggingface Transformers.

Saving and loading

If you wish to accelerate the loading time of the translation model, you can use save_obj:

mt = dlt.TranslationModel()
mt.save_obj('saved_model')
# ...

Then later you can reload it with load_obj:

mt = dlt.TranslationModel.load_obj('saved_model')
# ...

Warning: Only use this if you are certain the torch module saved in saved_model/weights.pt can be correctly loaded. Indeed, it is possible that huggingface, torch, or some other dependency changes between when you called save_obj and load_obj, which might break your code. Thus, it is recommended to only run load_obj in the same environment/session as save_obj. Note that this method might be deprecated in the future once there is no speed benefit in loading this way.
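
Putting the two together, here is a minimal sketch of a "load if saved, otherwise download and save" pattern (the directory name is illustrative):

import os

if os.path.isdir("saved_model"):
    # Reuse the previously saved weights; only safe in the same environment/session
    mt = dlt.TranslationModel.load_obj("saved_model")
else:
    mt = dlt.TranslationModel()
    mt.save_obj("saved_model")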

Interacting with underlying model and tokenizer

When initializing the model, you can pass in arguments for the underlying BART model and tokenizer (which will respectively be passed to MBartForConditionalGeneration.from_pretrained and MBart50TokenizerFast.from_pretrained):

mt = dlt.TranslationModel(
    model_options=dict(
        state_dict=...,
        cache_dir=...,
        ...
    ),
    tokenizer_options=dict(
        tokenizer_file=...,
        eos_token=...,
        ...
    )
)
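
As a concrete, hedged example, you could point the download to a custom cache directory and cap the tokenizer length (both values are illustrative; any keyword accepted by the respective from_pretrained methods should work):

mt = dlt.TranslationModel(
    model_options=dict(cache_dir="/data/hf_cache/"),
    tokenizer_options=dict(model_max_length=512),
)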

You can also access the underlying transformers model and tokenizer:

bart = mt.get_transformers_model()
tokenizer = mt.get_tokenizer()

See the huggingface docs for more information.

bart_model.generate() keyword arguments

When running mt.translate, you can also give a generation_options dictionary that is passed as keyword arguments to the underlying bart_model.generate() method:

mt.translate(
    text,
    source=dlt.lang.GERMAN,
    target=dlt.lang.SPANISH,
    generation_options=dict(num_beams=5, max_length=...)
)

Learn more in the huggingface docs.

Acknowledgement

dl-translate is built on top of Huggingface's implementation of multilingual BART finetuned on many-to-many translation of over 50 languages, which is documented here. The original paper was written by Tang et al. from Facebook AI Research; you can find it here and cite it using the following:

@article{tang2020multilingual,
  title={Multilingual translation with extensible multilingual pretraining and finetuning},
  author={Tang, Yuqing and Tran, Chau and Li, Xian and Chen, Peng-Jen and Goyal, Naman and Chaudhary, Vishrav and Gu, Jiatao and Fan, Angela},
  journal={arXiv preprint arXiv:2008.00401},
  year={2020}
}

dlt is a wrapper with useful utils to save you time. For comparison, huggingface's transformers documentation shows the following snippet as an example:

from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

# translate Hindi to French
tokenizer.src_lang = "hi_IN"
encoded_hi = tokenizer(article_hi, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire en Syria."

# translate Arabic to English
tokenizer.src_lang = "ar_AR"
encoded_ar = tokenizer(article_ar, return_tensors="pt")
generated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "The Secretary-General of the United Nations says there is no military solution in Syria."

With dlt, you can run:

import dl_translate as dlt

article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."

mt = dlt.TranslationModel()
translated_fr = mt.translate(article_hi, source=dlt.lang.HINDI, target=dlt.lang.FRENCH)
translated_en = mt.translate(article_ar, source=dlt.lang.ARABIC, target=dlt.lang.ENGLISH)

Notice you don't have to think about tokenizers, conditional generation, pretrained models, or regional codes; you can just tell the model what to translate!

If you are experienced with huggingface's ecosystem, then you should be familiar enough with the example above that you wouldn't need this library. However, if you've never heard of huggingface or mBART, then I hope using this library will give you enough motivation to learn more about them :)

Comments
  • module 'torch' has no attribute 'device'

    Hello, @xhlulu. Please find attached the part of the tutorial that I tried to execute and where I get the error. NB: I followed the PyTorch installation guide and used the command appropriate for my system: pip3 install torch torchvision torchaudio. My torch version is 1.10.1 and my Python version is 3.8.5.

    Thank you for your help.

    opened by gitassia 9
  • Offline mode tutorial

    Hi, sorry for my bad English; I am quite a newbie. I am confused by the offline tutorial: "Now, move everything in the dlt directory to your offline environment. Create a virtual environment:" Where is the "offline environment", and how do I create a "virtual environment"? I am using Windows 11 and Python 3.9.

    opened by kucingkembar 6
  • error on pyw extension

    Hi, it's me again, sorry again for my bad English. I tried this code in a .py file, opened it with Python IDLE, Run -> Run Module (F5): no problem. Then I renamed the extension to .pyw, opened it like an exe (double click), and this is the result:

    Traceback (most recent call last):
      File "D:\Script\translate.pyw", line 67, in FB_Loading
        import dl_translate as dlt
      File "C:\Python\Python39\lib\site-packages\dl_translate\__init__.py", line 3, in <module>
        from ._translation_model import TranslationModel
      File "C:\Python\Python39\lib\site-packages\dl_translate\_translation_model.py", line 5, in <module>
        import transformers
      File "C:\Python\Python39\lib\site-packages\transformers\__init__.py", line 43, in <module>
        from . import dependency_versions_check
      File "C:\Python\Python39\lib\site-packages\transformers\dependency_versions_check.py", line 36, in <module>
        from .file_utils import is_tokenizers_available
      File "C:\Python\Python39\lib\site-packages\transformers\file_utils.py", line 58, in <module>
        logger = logging.get_logger(__name__)  # pylint: disable=invalid-name
      File "C:\Python\Python39\lib\site-packages\transformers\utils\logging.py", line 119, in get_logger
        _configure_library_root_logger()
      File "C:\Python\Python39\lib\site-packages\transformers\utils\logging.py", line 82, in _configure_library_root_logger
        _default_handler.flush = sys.stderr.flush
    AttributeError: 'NoneType' object has no attribute 'flush'
    

    any guide to fix this?

    opened by kucingkembar 4
  • Add MarianNMT

    See Marian: https://huggingface.co/transformers/model_doc/marian.html See helsinki-nlp's models: https://huggingface.co/Helsinki-NLP

    We'd need

    • [ ] Add option to load the marian architecture at initialization (e.g. dlt.TranslationModel("marian"))
    • [ ] Add an option to find all of the languages (and code) available for a certain variant trained using marian, e.g. dlt.utils.available_languages("opus-en-romance")
    • [ ] An option to leverage autocomplete such as dlt.lang.opus.en_romance.ENGLISH, but the options would be limited to only what's available with the variant (i.e. romance)
    • [ ] TBD
    enhancement 
    opened by xhluca 3
  • no load to ram mode

    Hi, it's me again, and sorry about my bad English. I have a project using this software on Windows tablets with 4 GB of RAM; the problem is that the RAM consumption of this software is quite high, about 2.3 GB. Is there any way for this software to read from storage (SSD or HDD) instead of keeping everything in RAM?

    thank you for reading, and have a nice day

    opened by kucingkembar 2
  • error: when using  torch(1.8.0+cu111)

    Traceback (most recent call last):
      File "translate_test.py", line 66, in <module>
        translate_test()
      File "translate_test.py", line 30, in translate_test
        rest = mt.predict(texts, _from = 'en',batch_size = size)
      File "/mnt/eclipse-glority/receipt/deploy/branches/dev/ms_deploy/util/translate_util.py", line 29, in predict
        rest = self.mt.translate(texts, source=_from, target=_to, batch_size = batch_size)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/dl_translate/_translation_model.py", line 197, in translate
        **encoded, **generation_options
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/transformers/generation_utils.py", line 927, in generate
        model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/transformers/generation_utils.py", line 412, in _prepare_encoder_decoder_kwargs_for_generation
        model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 780, in forward
        output_attentions=output_attentions,
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 388, in forward
        hidden_states = self.activation_fn(self.fc1(hidden_states))
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 94, in forward
        return F.linear(input, self.weight, self.bias)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/torch/nn/functional.py", line 1753, in linear
        return torch._C._nn.linear(input, weight, bias)
    RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`

    torch          1.8.0+cu111
    torchvision    0.9.0+cu111

    It works fine with:

    torch 1.7.1+cu101

    How can this be fixed?

    opened by hongyinjie 2
  • Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)

    When I use dl_translate, the following warning appears. How do I set TOKENIZERS_PARALLELISM?

    huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either: - Avoid using tokenizers before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)

    opened by Kouuh 2
  • Incorporating ISO-639

    opened by xhluca 2
  • Cannot run with device = 'gpu' on Macbook M1 Pro

    I tried using the GPU on a 16-inch MacBook Pro with M1 Pro, and I got this error: "AssertionError: Torch not compiled with CUDA enabled"

    Please help!

    opened by htnha 1
  • how to make (slow) translation faster

    Hi, I am testing this code on a list of 5 short sentences; the average translation time is 2 seconds per sentence, which is too slow for my requirements. Any hints on how to speed up the translation? Thanks

    import dl_translate as dlt
    import time
    from langdetect import detect  # provides detect(), used below
    
    french_sentence = 'oh mon dieu c mechant c pas possible jamais je reviendrai, a deconseiller. je vous recommende de visiter un autre produit apres vous pouvez voire la difference'
    arabic_sentence = '  لقد جربت عدة نسخ من هذا المنتج لكن لم استطع ان اجد فبه ما ينتج ما هذا الهراء'
    ar2 = 'المنتج الاصلى سريع الذوبان فى الماء ويذوب بشكل مثالى على عكس المكمل المغشوش ...منتج كويس انا حبيتو و بنصح فيه'
    ar3= 'امشي سيدا لفه الثانيه يسار تعدد المطالبات المتعلقة بالأراضي وما ينتج عن ذلك من تناحر يولد باستمرار نزاعات متجددة. ... ويمكن دمج ما ينتج عن ذلك من معارف في إطار برنامج عمل نيروبي' 
    nepali ='यो मृत्युदर विकासशील देशहरुमा धेरै छ'
    sent_list =[french_sentence, arabic_sentence, ar2, ar3, nepali]
    print(sent_list)
    mt = dlt.TranslationModel()  # Slow when you load it for the first time
    map_langdetect_to_translate = {'ar':'Arabic', 'en':'English', 'es':'Spanish', 'fr':'French', 'ne':'Nepali'}
    start = time.time() 
    for sent in sent_list:
    	print('-------------------------------------')
    	print('original sentence is : ',sent)
    	print('detected lang ',detect(sent))
    	mapped = map_langdetect_to_translate[detect(sent)]
    	translated = mt.translate(sent, source=mapped, target="en")
    	print('Translation is : ',translated)
    
    end = time.time()	
    tt = time.strftime("%H:%M:%S", time.gmtime(end-start))
    time_message = 'Query execution time : {}'.format( tt )
    print(time_message)
    
    opened by banyous 1
  • Generate docs with sphinx or something else

    Right now I have some docstrings but it would require some refactoring. Using readthedocs.io would be nice; we could start by looking at what numpy or pydata uses.

    documentation 
    opened by xhluca 1
  • Detect source language with langdetect package

    The langdetect package has worked well for me in the past for language detection problems. How would you feel about allowing users to pass 'auto' as an option for source? I could see some pros and cons:

    Pros

    • Users don't need to be able to recognize a language to translate
    • Eliminates pre-classification of languages if your dataset contains multiple languages

    Cons

    I'm a little new to open source but I would love to contribute 🙂 Of course, if you feel this doesn't fit this package's mission that's totally understandable.

    enhancement help wanted good first issue 
    opened by awalker88 5
  • Support for sentence splitting

    Right now TranslationModel.translate will translate each input string as is, which can be extremely slow for longer sequences due to the quadratic runtime of the architecture. The current recommended way is to use nltk:

    import nltk
    
    nltk.download("punkt")
    
    text = "Mr. Smith went to his favorite cafe. There, he met his friend Dr. Doe."
    sents = nltk.tokenize.sent_tokenize(text, "english")  # don't use dlt.lang.ENGLISH
    " ".join(model.translate(sents, source=dlt.lang.ENGLISH, target=dlt.lang.FRENCH))
    

    Which works well but doesn't include all possible languages. It would be interesting to train the punkt model on each of the languages made available (though we'd need a very large dataset for that). Once that's done, the snippet above could become a simple argument, e.g. model.translate(..., max_length="sentence"). With some more effort, the max_length parameter could also be an integer n between 0 and 512, representing the maximum token length. Moreover, rather than truncating at that length, we could break the input text down into sequences of length n or less, which would include the aggregated sentences.

    enhancement help wanted 
    opened by xhluca 3
Releases(v0.2.6)
  • v0.2.6(Jul 13, 2022)

  • v0.2.2.post1(Aug 21, 2021)

  • v0.2.2(Apr 9, 2021)

    Change languages available in dlt.lang

    Changed

    • Docs: Available languages now include "Khmer" (which maps to central khmer)

    Fixed

    • dlt.lang will now have all the languages corresponding to m2m100 instead of mbart50
  • v0.2.1(Apr 8, 2021)

    Fix dlt.TranslationModel.load_obj

    Added

    • New tests for saving and loading.

    Fixed

    • dlt.TranslationModel.load_obj: Will now work without having to explicitly give the model family.
  • v0.2.0(Apr 8, 2021)

    Add m2m100 as the new default model to support 100 languages

    Added

    • dlt.lang.m2m100 module: Now has variables for over 100 languages, also auto-complete ready. Example: dlt.lang.m2m100.ENGLISH.
    • dlt.utils.available_languages, dlt.utils.available_codes: Now supports argument "m2m100"
    • Available languages for each model family
    • Script and template to generate available languages

    Changed

    • [BREAKING] dlt.TranslationModel: A new model parameter called model_family in the initialization function. Either "mbart50" or "m2m100". By default, it will be inferred based on model_or_path. Needs to be explicitly set if model_or_path is a path.
    • [BREAKING] Default model changed to m2m100
    • Docs and readme about mbart50 were reframed to take into account the new model
    • dlt.TranslationModel.translate: Improved docstring to be more general.
    • Tests pertaining to m2m100
    • scripts/generate_langs.py: Renamed, mechanism now changed to loading from json files
    • docs/index.md: Expand the "Usage" and "Advanced" sections
    • README.md: Add acknowledgement about m2m100, significantly trim "Advanced" section, make "Usage" more concise

    Fixed

    • dlt.TranslationModel.available_codes() was returning the languages instead of the codes. It will now correctly return the code.

    Removed

    • Output type hints for TranslationModel.get_transformers_model and TranslationModel.get_tokenizer
    • [BREAKING] dlt.TranslationModel.bart_model and dlt.TranslationModel.tokenizer are no longer available to be used directly. Please use dlt.TranslationModel.get_transformers_model and dlt.TranslationModel.get_tokenizer instead.
  • v0.2.0rc1(Mar 21, 2021)

    Add m2m100 as an alternative to mbart50

    m2m100 has more languages available (~110) and has also reported their absolute BLEU scores.

    Added

    • dlt.lang.m2m100 module: Now has variables for over 100 languages, also auto-complete ready. Example: dlt.lang.m2m100.ENGLISH.
    • dlt.utils.available_languages, dlt.utils.available_codes: Now supports argument "m2m100"

    Changed

    • [BREAKING] dlt.TranslationModel: A new model parameter called model_family in the initialization function. Either "mbart50" or "m2m100". By default, it will be inferred based on model_or_path. Needs to be explicitly set if model_or_path is a path.
    • dlt.TranslationModel.translate: Improved docstring to be more general.
    • Tests pertaining to m2m100
    • scripts/generate_langs.py: Renamed, mechanism now changed to loading from json files

    Fixed

    • dlt.TranslationModel.available_codes() was returning the languages instead of the codes. It will now correctly return the code.

    Removed

    • Output type hints for TranslationModel.get_transformers_model and TranslationModel.get_tokenizer
    • [BREAKING] dlt.TranslationModel.bart_model and dlt.TranslationModel.tokenizer are no longer available to be used directly. Please use dlt.TranslationModel.get_transformers_model and dlt.TranslationModel.get_tokenizer instead.
  • v0.1.0(Mar 17, 2021)
