Overview

Jury

Simple tool/toolkit for evaluating NLG (Natural Language Generation), offering various automated metrics. Jury offers a smooth and easy-to-use interface. It uses datasets for the underlying metric computation, and hence adding a custom metric is as easy as adopting datasets.Metric.

The main advantages that Jury offers are:

  • Easy to use for any NLG system.
  • Calculate many metrics at once.
  • Metric calculations are handled concurrently to save processing time.
  • It supports evaluating multiple predictions.

To see more, check the official Jury blog post.

Installation

Through pip,

pip install jury

or build from source,

git clone https://github.com/obss/jury.git
cd jury
python setup.py install

Usage

API Usage

It takes only a couple of lines of code to evaluate generated outputs.

from jury import Jury

jury = Jury()

# Microsoft translator translation for "Yurtta sulh, cihanda sulh." (16.07.2021)
predictions = ["Peace in the dormitory, peace in the world."]
references = ["Peace at home, peace in the world."]
scores = jury.evaluate(predictions, references)

Specify the metrics you want to use at instantiation.

jury = Jury(metrics=["bleu", "meteor"])
scores = jury.evaluate(predictions, references)
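
Jury also supports evaluating multiple predictions and/or multiple references per item by passing nested lists. Below is a minimal sketch (the sentences are illustrative; predictions and references are passed as keyword arguments, and recent versions also allow calling the Jury instance directly):

from jury import Jury

jury = Jury(metrics=["bleu", "meteor"])

# Each item may have several candidate predictions and several references.
predictions = [
    ["the cat is on the mat", "There is cat playing on the mat"],
    ["Look! a wonderful day."],
]
references = [
    ["the cat is playing on the mat.", "The cat plays on the mat."],
    ["Today is a wonderful day", "The weather outside is wonderful."],
]
scores = jury.evaluate(predictions=predictions, references=references)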

CLI Usage

You can specify the paths of the predictions file and the references file and get the resulting scores. The files should be line-aligned: line i of the predictions file is evaluated against line i of the references file.

jury eval --predictions /path/to/predictions.txt --references /path/to/references.txt --reduce_fn max
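
Here --reduce_fn controls how scores are aggregated when an item has multiple predictions or references (with max, roughly, the best score per item is kept). For illustration, the two line-aligned files could look like this (contents reuse the earlier examples):

# predictions.txt
Peace in the dormitory, peace in the world.
the cat is on the mat

# references.txt
Peace at home, peace in the world.
the cat is playing on the mat.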

If you want to specify metrics rather than use the defaults, list them under the metrics key in a JSON config file.

{
  "predictions": "/path/to/predictions.txt",
  "references": "/path/to/references.txt",
  "reduce_fn": "max",
  "metrics": [
    "bleu",
    "meteor"
  ]
}

Then, you can call jury eval with the config argument.

jury eval --config path/to/config.json

Custom Metrics

You can use custom metrics by inheriting jury.metrics.Metric; you can see the current metrics on datasets/metrics. The code snippet below gives a brief skeleton, followed by a short example.

from jury.metrics import Metric

class CustomMetric(Metric):
    def compute(self, predictions, references):
        # Compute and return the metric score(s) here.
        pass
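
For instance, a toy exact-match metric could look like the following. This is only a minimal sketch based on the compute-only interface shown above (newer versions may require implementing additional abstract methods; see the release notes below), and the class and score names are illustrative:

from jury.metrics import Metric


class ExactMatch(Metric):
    """Toy metric: fraction of predictions that exactly match their reference."""

    def compute(self, predictions, references):
        matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
        return {"exact_match": matches / max(len(predictions), 1)}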

Contributing

PRs are always welcome :)

Installation

git clone https://github.com/obss/jury.git
cd jury
pip install -e .[develop]

Tests

To run the tests,

python tests/run_tests.py

Code Style

To check code style,

python tests/run_code_style.py check

To format the codebase,

python tests/run_code_style.py format

License

Licensed under the MIT License.

Comments
  • Facing datasets error

    Hello, after downloading the contents from git and instantiating the object, I get this error:

    /content/image-captioning-bottom-up-top-down
    Traceback (most recent call last):
      File "eval.py", line 11, in <module>
       from jury import Jury 
      File "/usr/local/lib/python3.7/dist-packages/jury/__init__.py", line 1, in <module>
        from jury.core import Jury
      File "/usr/local/lib/python3.7/dist-packages/jury/core.py", line 6, in <module>
        from jury.metrics import EvaluationInstance, Metric, load_metric
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/__init__.py", line 1, in <module>
        from jury.metrics._core import (
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/_core/__init__.py", line 1, in <module>
        from jury.metrics._core.auto import AutoMetric, load_metric
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/_core/auto.py", line 23, in <module>
        from jury.metrics._core.base import Metric
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/_core/base.py", line 28, in <module>
        from datasets.utils.logging import get_logger
    ModuleNotFoundError: No module named 'datasets.utils'; 'datasets' is not a package
    

    Can you please check what could be the issue?

    opened by amit0623 8
  • CLI Implementation

    CLI implementation for the package that reads from txt files.

    Draft Usage: jury evaluate --predictions predictions.txt --references references.txt

    NLGEval uses a single prediction and multiple references in a way that you specify multiple references.txt files for multiple references, and it works like this on the API as well.

    My idea is to have a single predictions and a single references file, each able to include multiple predictions or multiple references. In a single txt file, maybe we can use some sort of special separator like "<sep>" instead of a special char like [",", ";", ":", "\t"]; maybe tab separated would be OK. Wdyt? @fcakyon @cemilcengiz

    help wanted discussion 
    opened by devrimcavusoglu 5
  • BLEU: ndarray reshape error

    Hey, when computing the BLEU score (snippet below), I'm facing a reshape error in _compute_single_pred_single_ref.

    Could you assist with the same?

    from jury import Jury
    
    scorer = Jury()
    
    # [2, 5/5]
    p = [
            ['dummy text', 'dummy text', 'dummy text', 'dummy text', 'dummy text'],
            ['dummy text', 'dummy text', 'dummy text', 'dummy text', 'dummy text']
        ]
    
    # [2, 4/2]
    r = [['be looking for a certain office in the building ',
          ' ask the elevator operator for directions ',
          ' be a trained detective ',
          ' be at the scene of a crime'],
         ['leave the room ',
          ' transport the notebook']]
    
    scores = scorer(predictions=p, references=r)
    

    Output:

    Traceback (most recent call last):
      File "/home/axe/Projects/VisComSense/del.py", line 22, in <module>
        scores = scorer(predictions=p, references=r)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/core.py", line 78, in __call__
        score = self._compute_single_score(inputs)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/core.py", line 137, in _compute_single_score
        score = metric.compute(predictions=predictions, references=references, reduce_fn=reduce_fn)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/datasets/metric.py", line 404, in compute
        output = self._compute(predictions=predictions, references=references, **kwargs)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/_core/base.py", line 325, in _compute
        result = self.evaluate(predictions=predictions, references=references, reduce_fn=reduce_fn, **eval_params)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 241, in evaluate
        return eval_fn(predictions=predictions, references=references, reduce_fn=reduce_fn, **kwargs)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 195, in _compute_multi_pred_multi_ref
        score = self._compute_single_pred_multi_ref(
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 176, in _compute_single_pred_multi_ref
        return self._compute_single_pred_single_ref(
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 165, in _compute_single_pred_single_ref
        predictions = predictions.reshape(
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/collator.py", line 35, in reshape
        return Collator(_seq.reshape(args).tolist(), keep=True)
    ValueError: cannot reshape array of size 20 into shape (10,)
    
    Process finished with exit code 1
    
    bug 
    opened by Axe-- 4
  • Understanding BLEU Score ('bleu_n')

    Hey, how are different bleu scores calculated?

    For the given snippet, why are all bleu(n) scores identical? And how does this relate to nltk's sentence_bleu (weights)?

    from jury import Jury
    
    scorer = Jury()
    predictions = [
        ["the cat is on the mat", "There is cat playing on the mat"], 
        ["Look!    a wonderful day."]
    ]
    references = [
        ["the cat is playing on the mat.", "The cat plays on the mat."], 
        ["Today is a wonderful day", "The weather outside is wonderful."]
    ]
    scores = scorer(predictions=predictions, references=references)
    
    

    Output:

    {'empty_predictions': 0,
     'total_items': 2,
     'bleu_1': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'bleu_2': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'bleu_3': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'bleu_4': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'meteor': {'score': 0.5420511682934044},
     'rouge': {'rouge1': 0.7783882783882783,
      'rouge2': 0.5925324675324675,
      'rougeL': 0.7426739926739926,
      'rougeLsum': 0.7426739926739926}}
    
    
    bug 
    opened by Axe-- 4
  • Computing BLEU more than once

    Hey, why does computing the BLEU score more than once affect the keys of the score dict, e.g. 'bleu_1', 'bleu_1_1', 'bleu_1_1_1'?

    Overall I find the library quite user-friendly, but I'm unsure about this behavior.

    opened by Axe-- 4
  • New metrics structure completed.

    The new metrics structure allows the user to create and define params for metrics as desired. The current metric classes in metrics/ can be extended, or a completely new custom metric can be defined by inheriting jury.metrics.Metric.

    patch 
    opened by devrimcavusoglu 3
  • Fixed warning message in BLEURT default initialization

    The Jury constructor accepts metrics as a string, an object of the Metric class, or a list of metric configurations inside a dict. In addition, the BLEURT metric checks for the config_name key instead of the checkpoint key. Thus, this warning message is misleading if the default model is not used.

    Here is an example of incorrect initialization and warning message:

    (Screenshots: example of the incorrect initialization and the resulting warning message; checkpoint is ignored.)

    opened by zafercavdar 1
  • Fix Reference Structure for Basic BLEU calculation

    The wrapped function expects a slightly different reference structure than the one we give in the Single Ref-Pred method. A small structure change fixes the issue.

    Fixes #72

    opened by Sophylax 1
  • Bug: Metric object and string cannot be used together in input.

    Currently, jury allows the metrics passed to Jury(metrics=metrics) to be either a list of jury.metrics.Metric objects or a list of str, but it does not allow using both str and Metric objects together, as

    from jury import Jury
    from jury.metrics import load_metric
    
    metrics = ["bleu", load_metric("meteor")]
    jury = Jury(metrics=metrics)
    

    raises an error, as the metrics parameter expects a NestedSingleType object which is either list<str> or list<jury.metrics.Metric>.

    opened by devrimcavusoglu 1
  • BLEURT is failing to produce results

    I was trying the same example mentioned in the readme file for BLEURT. It fails by throwing an error. Please let me know the issue.

    Error :

    ImportError                               Traceback (most recent call last)
    <ipython-input-16-ed14e2ab4c7e> in <module>
    ----> 1 bleurt = Bleurt.construct()
          2 score = bleurt.compute(predictions=predictions, references=references)
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\auxiliary.py in construct(cls, task, resulting_name, compute_kwargs, **kwargs)
         99         subclass = cls._get_subclass()
        100         resulting_name = resulting_name or cls._get_path()
    --> 101         return subclass._construct(resulting_name=resulting_name, compute_kwargs=compute_kwargs, **kwargs)
        102 
        103     @classmethod
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\base.py in _construct(cls, resulting_name, compute_kwargs, **kwargs)
        235         cls, resulting_name: Optional[str] = None, compute_kwargs: Optional[Dict[str, Any]] = None, **kwargs
        236     ):
    --> 237         return cls(resulting_name=resulting_name, compute_kwargs=compute_kwargs, **kwargs)
        238 
        239     @staticmethod
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\base.py in __init__(self, resulting_name, compute_kwargs, **kwargs)
        220     def __init__(self, resulting_name: Optional[str] = None, compute_kwargs: Optional[Dict[str, Any]] = None, **kwargs):
        221         compute_kwargs = self._validate_compute_kwargs(compute_kwargs)
    --> 222         super().__init__(task=self._task, resulting_name=resulting_name, compute_kwargs=compute_kwargs, **kwargs)
        223 
        224     def _validate_compute_kwargs(self, compute_kwargs: Dict[str, Any]) -> Dict[str, Any]:
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\base.py in __init__(self, task, resulting_name, compute_kwargs, config_name, keep_in_memory, cache_dir, num_process, process_id, seed, experiment_id, max_concurrent_cache_files, timeout, **kwargs)
        100         self.resulting_name = resulting_name if resulting_name is not None else self.name
        101         self.compute_kwargs = compute_kwargs or {}
    --> 102         self.download_and_prepare()
        103 
        104     @abstractmethod
    
    ~\anaconda3\lib\site-packages\evaluate\module.py in download_and_prepare(self, download_config, dl_manager)
        649             )
        650 
    --> 651         self._download_and_prepare(dl_manager)
        652 
        653     def _download_and_prepare(self, dl_manager):
    
    ~\anaconda3\lib\site-packages\jury\metrics\bleurt\bleurt_for_language_generation.py in _download_and_prepare(self, dl_manager)
        120         global bleurt
        121         try:
    --> 122             from bleurt import score
        123         except ModuleNotFoundError:
        124             raise ModuleNotFoundError(
    
    ImportError: cannot import name 'score' from 'bleurt' (unknown location)
    
    opened by Santhanreddy71 4
  • Prism support for use_cuda option

    Referring to this issue https://github.com/thompsonb/prism/issues/13: since it seems like no active maintenance is going on, we can add this support on a public fork.

    enhancement 
    opened by devrimcavusoglu 0
  • Add support for custom tokenizer for BLEU

    Due to the nature of the Jury API, all input strings must be whole (not tokenized); the current implementation of the BLEU score tokenizes by whitespace. However, one might want results for smaller tokens, morphemes, or even the character level rather than a word-level BLEU score. Thus, it would be great to support this by adding support for a custom tokenizer in the BLEU score computation.

    enhancement help wanted 
    opened by devrimcavusoglu 0
Releases(2.2.3)
  • 2.2.3(Dec 26, 2022)

    What's Changed

    • flake8 error on python3.7 by @devrimcavusoglu in https://github.com/obss/jury/pull/118
    • Seqeval typo fix by @devrimcavusoglu in https://github.com/obss/jury/pull/117
    • Refactored requirements (sklearn). by @devrimcavusoglu in https://github.com/obss/jury/pull/121

    Full Changelog: https://github.com/obss/jury/compare/2.2.2...2.2.3

  • 2.2.2(Sep 30, 2022)

    What's Changed

    • Migrating to evaluate package (from datasets). by @devrimcavusoglu in https://github.com/obss/jury/pull/116

    Full Changelog: https://github.com/obss/jury/compare/2.2.1...2.2.2

  • 2.2.1(Sep 21, 2022)

    What's Changed

    • Fixed warning message in BLEURT default initialization by @zafercavdar in https://github.com/obss/jury/pull/110
    • ZeroDivisionError on precision and recall values. by @devrimcavusoglu in https://github.com/obss/jury/pull/112
    • validators added to the requirements. by @devrimcavusoglu in https://github.com/obss/jury/pull/113
    • Intermediate patch, fixes, updates. by @devrimcavusoglu in https://github.com/obss/jury/pull/114

    New Contributors

    • @zafercavdar made their first contribution in https://github.com/obss/jury/pull/110

    Full Changelog: https://github.com/obss/jury/compare/2.2...2.2.1

  • 2.2(Mar 29, 2022)

    What's Changed

    • Fix Reference Structure for Basic BLEU calculation by @Sophylax in https://github.com/obss/jury/pull/74
    • Added BLEURT. by @devrimcavusoglu in https://github.com/obss/jury/pull/78
    • README.md updated with doi badge and citation inforamtion. by @devrimcavusoglu in https://github.com/obss/jury/pull/81
    • Add VSCode Folder to Gitignore by @Sophylax in https://github.com/obss/jury/pull/82
    • Change one BERTScore test Device to CPU by @Sophylax in https://github.com/obss/jury/pull/84
    • Add Prism metric by @devrimcavusoglu in https://github.com/obss/jury/pull/79
    • Update issue templates by @devrimcavusoglu in https://github.com/obss/jury/pull/85
    • Dl manager rework by @devrimcavusoglu in https://github.com/obss/jury/pull/86
    • Nltk upgrade by @devrimcavusoglu in https://github.com/obss/jury/pull/88
    • CER metric implementation. by @devrimcavusoglu in https://github.com/obss/jury/pull/90
    • Prism checkpoint URL updated. by @devrimcavusoglu in https://github.com/obss/jury/pull/92
    • Test cases refactored. by @devrimcavusoglu in https://github.com/obss/jury/pull/96
    • Added BARTScore by @Sophylax in https://github.com/obss/jury/pull/89
    • License information added for prism and bleurt. by @devrimcavusoglu in https://github.com/obss/jury/pull/97
    • Remove Unused Imports by @Sophylax in https://github.com/obss/jury/pull/98
    • Added WER metric. by @devrimcavusoglu in https://github.com/obss/jury/pull/103
    • Add TER metric by @devrimcavusoglu in https://github.com/obss/jury/pull/104
    • CHRF metric added. by @devrimcavusoglu in https://github.com/obss/jury/pull/105
    • Add comet by @devrimcavusoglu in https://github.com/obss/jury/pull/107
    • Doc refactor by @devrimcavusoglu in https://github.com/obss/jury/pull/108
    • Pypi fix by @devrimcavusoglu in https://github.com/obss/jury/pull/109

    New Contributors

    • @Sophylax made their first contribution in https://github.com/obss/jury/pull/74

    Full Changelog: https://github.com/obss/jury/compare/2.1.5...2.2

  • 2.1.5(Dec 23, 2021)

    What's Changed

    • Bug fix: Typo corrected in _remove_empty() in core.py. by @devrimcavusoglu in https://github.com/obss/jury/pull/67
    • Metric name path bug fix. by @devrimcavusoglu in https://github.com/obss/jury/pull/69

    Full Changelog: https://github.com/obss/jury/compare/2.1.4...2.1.5

  • 2.1.4(Dec 6, 2021)

    What's Changed

    • Handle for empty predictions & references on Jury (skipping empty). by @devrimcavusoglu in https://github.com/obss/jury/pull/65

    Full Changelog: https://github.com/obss/jury/compare/2.1.3...2.1.4

  • 2.1.3(Dec 1, 2021)

    What's Changed

    • Bug fix: Bleu reshape error fixed. by @devrimcavusoglu in https://github.com/obss/jury/pull/63

    Full Changelog: https://github.com/obss/jury/compare/2.1.2...2.1.3

  • 2.1.2(Nov 14, 2021)

    What's Changed

    • Bug fix: bleu returning same score with different max_order is fixed. by @devrimcavusoglu in https://github.com/obss/jury/pull/59
    • nltk version upgraded as >=3.6.4 (from >=3.6.2). by @devrimcavusoglu in https://github.com/obss/jury/pull/61

    Full Changelog: https://github.com/obss/jury/compare/2.1.1...2.1.2

  • 2.1.1(Nov 10, 2021)

    What's Changed

    • Seqeval: json normalization added. by @devrimcavusoglu in https://github.com/obss/jury/pull/55
    • Read support from folders by @devrimcavusoglu in https://github.com/obss/jury/pull/57

    Full Changelog: https://github.com/obss/jury/compare/2.1.0...2.1.1

  • 2.1.0(Oct 25, 2021)

    What's New 🚀

    Tasks 📝

    We added a new task-based metric system that allows evaluating different types of inputs, unlike the old system which could only evaluate strings (generated text) for language generation tasks. Hence, jury is now able to support a broader set of metrics that work with different types of input.

    With this, the jury.Jury API keeps the given set of tasks consistent: Jury will raise an error if any pair of metrics is not consistent in terms of task (evaluation input).

    AutoMetric ✨

    • AutoMetric is introduced as the main factory class for automatically loading metrics; as a side note, load_metric is still available for backward compatibility and is preferred (it uses AutoMetric under the hood).
    • Tasks are now distinguished within metrics. For example, precision can be used for the language-generation or the sequence-classification task, where one evaluates strings (generated text) while the other evaluates integers (class labels).
    • In the configuration file, metrics can now be stated with HuggingFace datasets' metric initialization parameters. The keyword arguments used for metric computation are now separated under the "compute_kwargs" key (see the sketch below).
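
    For instance, a config with per-metric computation arguments might look roughly like the sketch below (written as a Python dict mirroring the JSON config; the "metric_name" key is an assumption, while "resulting_name", "compute_kwargs" and BLEU's "max_order" appear elsewhere on this page):

    # Sketch only: key names other than "compute_kwargs"/"resulting_name" are assumed.
    config = {
        "predictions": "/path/to/predictions.txt",
        "references": "/path/to/references.txt",
        "reduce_fn": "max",
        "metrics": [
            "meteor",  # plain metric names still work
            {
                "metric_name": "bleu",               # assumed key for which metric to load
                "resulting_name": "bleu_2",          # name the score is reported under
                "compute_kwargs": {"max_order": 2},  # kwargs forwarded to metric computation
            },
        ],
    }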

    Full Changelog: https://github.com/obss/jury/compare/2.0.0...2.1.0

  • 2.0.0(Oct 11, 2021)

    Jury 2.0.0 is out 🎉🥳

    New Metric System

    • The datasets package's Metric implementation is adopted (and extended) to provide high performance 💯 and a more unified interface 🤗.
    • Custom metric implementation changed accordingly (it now requires 3 abstract methods to be implemented).
    • The Jury class is now callable (it implements the __call__() method), though the evaluate() method is still available for backward compatibility.
    • When using Jury's evaluate, the predictions and references parameters must be passed as keyword arguments to prevent confusion/wrong computations (as with datasets' metrics).
    • MetricCollator is removed; the methods for metrics are attached directly to the Jury class. Metric addition and removal can now be performed from a Jury instance directly.
    • Jury now supports reading metrics from strings, lists and dictionaries, and is more generic with respect to the input type of the metrics given along with their parameters.

    New metrics

    • Accuracy, F1, Precision and Recall are added to Jury metrics.
    • All metrics in the datasets package are still available in jury through jury.load_metric().

    Development

    • Test cases are improved with fixtures, and the test structure is enhanced.
    • Expected outputs are now required for tests as a JSON file with a proper name.

  • 1.1.2(Sep 15, 2021)

  • 1.1.1(Aug 15, 2021)

    • The malfunctioning multiple-prediction calculation caused by multiple-reference input for BLEU and SacreBLEU is fixed.
    • CLI Implementation is completed. 🎉
  • 1.0.1(Aug 13, 2021)

  • 1.0.0(Aug 9, 2021)

    Release Notes

    • The new metric structure is completed.
      • Custom metric support is improved; it is no longer required to extend datasets.Metric, but rather jury.metrics.Metric.
      • Metric usage is unified around compute, preprocess and postprocess functions, where the only required implementation for a custom metric is compute.
      • Both string and Metric objects can now be passed to Jury(metrics=metrics) in a mixed fashion.
      • The load_metric function was rearranged to capture end score results, and several metrics were added accordingly (e.g. load_metric("squad_f1") loads the squad metric and returns the F1 score); see the brief sketch after this list.
    • An example notebook has been added.
      • MT and QA tasks are illustrated.
      • Custom metric creation is added as an example.
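
    As a brief sketch of the mixed string/Metric input and the load_metric usage described above:

    from jury import Jury
    from jury.metrics import load_metric

    # Strings and Metric objects can be mixed; "squad_f1" loads the squad metric
    # configured to report the F1 score.
    jury = Jury(metrics=["bleu", load_metric("squad_f1")])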

    Acknowledgments

    @fcakyon @cemilcengiz @devrimcavusoglu

  • 0.0.3(Jul 26, 2021)

  • 0.0.2(Jul 14, 2021)

Owner
Open Business Software Solutions
Open Source for Open Business