👄 The most accurate natural language detection library for Python, suitable for long and short text alike

Overview

Lingua Logo


build codecov supported languages docs pypi license

1. What does this library do?

Its task is simple: It tells you which language some provided textual data is written in. This is very useful as a preprocessing step for linguistic data in natural language processing applications such as text classification and spell checking. Other use cases, for instance, might include routing e-mails to the right geographically located customer service department, based on the e-mails' languages.

2. Why does this library exist?

Language detection is often done as part of large machine learning frameworks or natural language processing applications. In cases where you don't need the full-fledged functionality of those systems or don't want to learn the ropes of those, a small flexible library comes in handy.

Python is widely used in natural language processing, so there are a couple of comprehensive open source libraries for this task, such as Google's CLD 2 and CLD 3, langid and langdetect. Unfortunately, except for the last one, they have two major drawbacks:

  1. Detection only works with quite lengthy text fragments. For very short text snippets such as Twitter messages, they do not provide adequate results.
  2. The more languages take part in the decision process, the less accurate the detection results become.

Lingua aims at eliminating these problems. She needs hardly any configuration and yields pretty accurate results on both long and short text, even on single words and phrases. She draws on both rule-based and statistical methods but does not use any dictionaries of words. She does not need a connection to any external API or service either. Once the library has been downloaded, it can be used completely offline.

3. Which languages are supported?

Compared to other language detection libraries, Lingua's focus is on quality over quantity, that is, getting detection right for a small set of languages first before adding new ones. Currently, the following 75 languages are supported:

  • A
    • Afrikaans
    • Albanian
    • Arabic
    • Armenian
    • Azerbaijani
  • B
    • Basque
    • Belarusian
    • Bengali
    • Norwegian Bokmal
    • Bosnian
    • Bulgarian
  • C
    • Catalan
    • Chinese
    • Croatian
    • Czech
  • D
    • Danish
    • Dutch
  • E
    • English
    • Esperanto
    • Estonian
  • F
    • Finnish
    • French
  • G
    • Ganda
    • Georgian
    • German
    • Greek
    • Gujarati
  • H
    • Hebrew
    • Hindi
    • Hungarian
  • I
    • Icelandic
    • Indonesian
    • Irish
    • Italian
  • J
    • Japanese
  • K
    • Kazakh
    • Korean
  • L
    • Latin
    • Latvian
    • Lithuanian
  • M
    • Macedonian
    • Malay
    • Maori
    • Marathi
    • Mongolian
  • N
    • Norwegian Nynorsk
  • P
    • Persian
    • Polish
    • Portuguese
    • Punjabi
  • R
    • Romanian
    • Russian
  • S
    • Serbian
    • Shona
    • Slovak
    • Slovene
    • Somali
    • Sotho
    • Spanish
    • Swahili
    • Swedish
  • T
    • Tagalog
    • Tamil
    • Telugu
    • Thai
    • Tsonga
    • Tswana
    • Turkish
  • U
    • Ukrainian
    • Urdu
  • V
    • Vietnamese
  • W
    • Welsh
  • X
    • Xhosa
  • Y
    • Yoruba
  • Z
    • Zulu

4. How good is it?

Lingua is able to report accuracy statistics for some bundled test data available for each supported language. The test data for each language is split into three parts:

  1. a list of single words with a minimum length of 5 characters
  2. a list of word pairs with a minimum length of 10 characters
  3. a list of complete grammatical sentences of various lengths

Both the language models and the test data have been created from separate documents of the Wortschatz corpora offered by Leipzig University, Germany. Data crawled from various news websites have been used for training, each corpus comprising one million sentences. For testing, corpora made of arbitrarily chosen websites have been used, each comprising ten thousand sentences. From each test corpus, a random unsorted subset of 1000 single words, 1000 word pairs and 1000 sentences has been extracted, respectively.

Given the generated test data, I have compared the detection results of Lingua, langdetect, langid, CLD 2 and CLD 3 running over the data of Lingua's 75 supported languages. Languages that are not supported by the other detectors are simply ignored for those detectors during the detection process.

The box plots below illustrate the distributions of the accuracy values for each classifier. The boxes themselves represent the range within which the middle 50 % of the data lie. Within the colored boxes, the horizontal lines mark the medians of the distributions. All these plots demonstrate that Lingua clearly outperforms its contenders. Bar plots for each language can be found in the file ACCURACY_PLOTS.md. Detailed statistics including mean, median and standard deviation values for each language and classifier are available in the file ACCURACY_TABLE.md.

4.1 Single word detection


Single Word Detection Performance



4.2 Word pair detection


Word Pair Detection Performance



4.3 Sentence detection


Sentence Detection Performance



4.4 Average detection


Average Detection Performance



5. Why is it better than other libraries?

Every language detector uses a probabilistic n-gram model trained on the character distribution in some training corpus. Most libraries only use n-grams of size 3 (trigrams), which is satisfactory for detecting the language of longer text fragments consisting of multiple sentences. For short phrases or single words, however, trigrams are not enough. The shorter the input text is, the fewer n-grams are available. The probabilities estimated from such few n-grams are not reliable. This is why Lingua makes use of n-grams of sizes 1 up to 5, which results in a much more accurate prediction of the correct language.
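
To see how quickly the supply of n-grams dries up, here is a minimal illustrative sketch (not Lingua's internal code) that counts the character n-grams of sizes 1 to 5 available in a given input:

def char_ngrams(text: str, n: int) -> list[str]:
    # All contiguous character slices of length n.
    return [text[i:i + n] for i in range(len(text) - n + 1)]

for sample in ("hi", "languages are awesome"):
    counts = {n: len(char_ngrams(sample, n)) for n in range(1, 6)}
    print(sample, counts)

# "hi" offers two unigrams, one bigram and no trigrams at all, so a purely
# trigram-based model has nothing to work with, while the longer phrase
# provides plenty of n-grams of every size.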

A second important difference is that Lingua does not only use such a statistical model, but also a rule-based engine. This engine first determines the alphabet of the input text and searches for characters which are unique in one or more languages. If exactly one language can be reliably chosen this way, the statistical model is not necessary anymore. In any case, the rule-based engine filters out languages that do not satisfy the conditions of the input text. Only then, in a second step, the probabilistic n-gram model is taken into consideration. This makes sense because loading less language models means less memory consumption and better runtime performance.
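
A toy sketch of this idea follows. The character table below is hypothetical and far smaller than the real mappings in lingua/_constant.py, but it shows how characters that are unique to certain languages can settle the decision before any statistics are computed:

# Hypothetical mapping from characters to the only languages they occur in.
UNIQUE_CHARS = {"ß": {"GERMAN"}, "ñ": {"SPANISH"}, "ő": {"HUNGARIAN"}}

def rule_based_candidates(text: str, all_languages: set[str]) -> set[str]:
    candidates = set(all_languages)
    for char in text:
        if char in UNIQUE_CHARS:
            # Keep only the languages in which this character may occur.
            candidates &= UNIQUE_CHARS[char]
    return candidates

# The character "ß" narrows the candidates down to German alone, so the
# statistical model would not need to be consulted at all.
print(rule_based_candidates("straße", {"ENGLISH", "GERMAN", "SPANISH"}))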

In general, it is always a good idea to restrict the set of languages to be considered in the classification process using the respective API methods. If you know beforehand that certain languages are never going to occur in an input text, do not let those take part in the classification process. The filtering mechanism of the rule-based engine is quite good; however, filtering based on your own knowledge of the input text is always preferable.
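
For example, if you know that your input can only ever be English, French, German or Spanish, restrict the detector to exactly those languages; the same builder API is described in section 9:

from lingua import Language, LanguageDetectorBuilder

# Only these four languages take part in the classification process;
# the models for all other languages are never loaded.
detector = LanguageDetectorBuilder.from_languages(
    Language.ENGLISH, Language.FRENCH, Language.GERMAN, Language.SPANISH
).build()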

6. Test report generation

If you want to reproduce the accuracy results above, you can generate the test reports yourself for all classifiers and languages by executing:

poetry install --extras "langdetect langid gcld3 pycld2"
poetry run python3 scripts/accuracy_reporter.py

For each detector and language, a test report file is then written into /accuracy-reports. As an example, here is the current output of the Lingua German report:

##### German #####

>>> Accuracy on average: 89.27%

>> Detection of 1000 single words (average length: 9 chars)
Accuracy: 74.20%
Erroneously classified as Dutch: 2.30%, Danish: 2.20%, English: 2.20%, Latin: 1.80%, Bokmal: 1.60%, Italian: 1.30%, Basque: 1.20%, Esperanto: 1.20%, French: 1.20%, Swedish: 0.90%, Afrikaans: 0.70%, Finnish: 0.60%, Nynorsk: 0.60%, Portuguese: 0.60%, Yoruba: 0.60%, Sotho: 0.50%, Tsonga: 0.50%, Welsh: 0.50%, Estonian: 0.40%, Irish: 0.40%, Polish: 0.40%, Spanish: 0.40%, Tswana: 0.40%, Albanian: 0.30%, Icelandic: 0.30%, Tagalog: 0.30%, Bosnian: 0.20%, Catalan: 0.20%, Croatian: 0.20%, Indonesian: 0.20%, Lithuanian: 0.20%, Romanian: 0.20%, Swahili: 0.20%, Zulu: 0.20%, Latvian: 0.10%, Malay: 0.10%, Maori: 0.10%, Slovak: 0.10%, Slovene: 0.10%, Somali: 0.10%, Turkish: 0.10%, Xhosa: 0.10%

>> Detection of 1000 word pairs (average length: 18 chars)
Accuracy: 93.90%
Erroneously classified as Dutch: 0.90%, Latin: 0.90%, English: 0.70%, Swedish: 0.60%, Danish: 0.50%, French: 0.40%, Bokmal: 0.30%, Irish: 0.20%, Tagalog: 0.20%, Tsonga: 0.20%, Afrikaans: 0.10%, Esperanto: 0.10%, Estonian: 0.10%, Finnish: 0.10%, Italian: 0.10%, Maori: 0.10%, Nynorsk: 0.10%, Somali: 0.10%, Swahili: 0.10%, Turkish: 0.10%, Welsh: 0.10%, Zulu: 0.10%

>> Detection of 1000 sentences (average length: 111 chars)
Accuracy: 99.70%
Erroneously classified as Dutch: 0.20%, Latin: 0.10%

7. How to add it to your project?

Lingua is available in the Python Package Index and can be installed with:

pip install lingua-language-detector

8. How to build?

Lingua requires Python >= 3.9 and uses Poetry for packaging and dependency management. You need to install Poetry first if you have not done so yet. Afterwards, clone the repository and install the project dependencies:

git clone https://github.com/pemistahl/lingua-py.git
cd lingua-py
poetry install

The library makes use of type annotations which allow for static type checking with Mypy. Run the following command to check the types:

poetry run mypy

The source code is accompanied by an extensive unit test suite. To run the tests, simply say:

poetry run pytest

9. How to use?

9.1 Basic usage

>>> from lingua import Language, LanguageDetectorBuilder
>>> languages = [Language.ENGLISH, Language.FRENCH, Language.GERMAN, Language.SPANISH]
>>> detector = LanguageDetectorBuilder.from_languages(*languages).build()
>>> detector.detect_language_of("languages are awesome")
Language.ENGLISH

9.2 Minimum relative distance

By default, Lingua returns the most likely language for a given input text. However, there are certain words that are spelled the same in more than one language. The word prologue, for instance, is both a valid English and French word. Lingua would output either English or French which might be wrong in the given context. For cases like that, it is possible to specify a minimum relative distance that the logarithmized and summed up probabilities for each possible language have to satisfy. It can be stated in the following way:

>>> from lingua import Language, LanguageDetectorBuilder
>>> languages = [Language.ENGLISH, Language.FRENCH, Language.GERMAN, Language.SPANISH]
>>> detector = LanguageDetectorBuilder.from_languages(*languages)\
.with_minimum_relative_distance(0.25)\
.build()
>>> print(detector.detect_language_of("languages are awesome"))
None

Be aware that the distance between the language probabilities is dependent on the length of the input text. The longer the input text, the larger the distance between the languages. So if you want to classify very short text phrases, do not set the minimum relative distance too high. Otherwise, None will be returned most of the time as in the example above. This is the return value for cases where language detection is not reliably possible.

9.3 Confidence values

Knowing about the most likely language is nice, but how reliable is the computed likelihood? And how much less likely are the other examined languages in comparison to the most likely one? These questions can be answered as well:

>>> from lingua import Language, LanguageDetectorBuilder
>>> languages = [Language.ENGLISH, Language.FRENCH, Language.GERMAN, Language.SPANISH]
>>> detector = LanguageDetectorBuilder.from_languages(*languages).build()
>>> confidence_values = detector.compute_language_confidence_values("languages are awesome")
>>> for language, value in confidence_values:
...     print(f"{language.name}: {value:.2f}")
ENGLISH: 1.00
FRENCH: 0.79
GERMAN: 0.75
SPANISH: 0.70

In the example above, a list of all possible languages is returned, sorted by their confidence value in descending order. The values that the detector computes are part of a relative confidence metric, not of an absolute one. Each value is a number between 0.0 and 1.0. The most likely language is always returned with value 1.0. All other languages get values assigned which are lower than 1.0, denoting how much less likely those languages are in comparison to the most likely language.

The list returned by this method does not necessarily contain all languages which this LanguageDetector instance was built from. If the rule-based engine decides that a specific language is truly impossible, then it will not be part of the returned list. Likewise, if no n-gram probabilities can be found within the detector's languages for the given input text, the returned list will be empty. The confidence value for each language not being part of the returned list is assumed to be 0.0.
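
As a small usage sketch on top of this API, reusing the detector built above and assuming the tuple-based return format shown in the example, the returned list can be turned into a dictionary and filtered with a custom threshold; with the example values above, only English survives a cut at 0.8:

>>> confidence_values = detector.compute_language_confidence_values("languages are awesome")
>>> {language.name: value for language, value in confidence_values if value >= 0.8}
{'ENGLISH': 1.0}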

9.4 Eager loading versus lazy loading

By default, Lingua uses lazy loading and only loads, on demand, those language models which the rule-based filter engine considers relevant. For web services, for instance, it is rather beneficial to preload all language models into memory to avoid unexpected latency while waiting for the service response. If you want to enable the eager-loading mode, you can do it like this:

LanguageDetectorBuilder.from_all_languages().with_preloaded_language_models().build()

Multiple instances of LanguageDetector share the same language models in memory which are accessed asynchronously by the instances.

9.5 Methods to build the LanguageDetector

There might be classification tasks where you know beforehand that your language data is definitely not written in Latin, for instance. The detection accuracy can improve in such cases if you exclude certain languages from the decision process or explicitly include only the relevant languages:

from lingua import LanguageDetectorBuilder, Language, IsoCode639_1, IsoCode639_3

# Including all languages available in the library
# consumes approximately 3GB of memory and might
# lead to slow runtime performance.
LanguageDetectorBuilder.from_all_languages()

# Include only languages that are not yet extinct (= currently excludes Latin).
LanguageDetectorBuilder.from_all_spoken_languages()

# Include only languages written with Cyrillic script.
LanguageDetectorBuilder.from_all_languages_with_cyrillic_script()

# Exclude only the Spanish language from the decision algorithm.
LanguageDetectorBuilder.from_all_languages_without(Language.SPANISH)

# Only decide between English and German.
LanguageDetectorBuilder.from_languages(Language.ENGLISH, Language.GERMAN)

# Select languages by ISO 639-1 code.
LanguageDetectorBuilder.from_iso_codes_639_1(IsoCode639_1.EN, IsoCode639_1.DE)

# Select languages by ISO 639-3 code.
LanguageDetectorBuilder.from_iso_codes_639_3(IsoCode639_3.ENG, IsoCode639_3.DEU)

10. What's next for version 1.1.0?

Take a look at the planned issues.

11. Contributions

Any contributions to Lingua are very much appreciated. Please read the instructions in CONTRIBUTING.md for how to add new languages to the library.

Comments
  • Make the library compatible with Python versions < 3.9

    Hello, I try to use the module on google colab and I get this error during the installation:

    ERROR: Could not find a version that satisfies the requirement lingua-language-detector (from versions: none)
    ERROR: No matching distribution found for lingua-language-detector
    

    What are the requirements of this module?

    opened by Jourdelune 10
  • Error: ZeroDivisionError: float division by zero

    Hello.

    When running this code with lingua_language_detector version 1.3.0:

    with open('text.txt') as fh:
        text = fh.read()
        detector = LanguageDetectorBuilder.from_all_languages().build()
        print(text)
        result = detector.detect_language_of(text)
        print(result)
    

    I get this error:

    Traceback (most recent call last):
      File "/home/jordi/sc/crux-top-lists-catalan/bug.py", line 9, in <module>
        result = detector.detect_language_of(text)
      File "/home/jordi/.local/lib/python3.10/site-packages/lingua/detector.py", line 272, in detect_language_of
        confidence_values = self.compute_language_confidence_values(text)
      File "/home/jordi/.local/lib/python3.10/site-packages/lingua/detector.py", line 499, in compute_language_confidence_values
        normalized_probability = probability / denominator
    ZeroDivisionError: float division by zero
    

    I attached the text file that triggers the problem. It works fine with other texts. This happens often in a crawling application that I'm testing.

    bug 
    opened by jordimas 3
  • Import of LanguageDetectorBuilder failed

    When loading the LanguageDetectorBuilder as recommended in the readme, I received the following error:

    from lingua import LanguageDetectorBuilder ... ImportError: cannot import name 'LanguageDetectorBuilder' from 'lingua'

    The following worked for me:

    from lingua.builder import LanguageDetectorBuilder

    opened by geritwagner 3
  • Detect multiple languages in mixed-language text

    Currently, for a given input string, only the most likely language is returned. However, if the input contains contiguous sections of multiple languages, it would be desirable to detect all of them and return an ordered sequence of items, where each item consists of a start index, an end index and the detected language.

    Input: He turned around and asked: "Entschuldigen Sie, sprechen Sie Deutsch?"

    Output:

    [
      {"start": 0, "end": 27, "language": ENGLISH}, 
      {"start": 28, "end": 69, "language": GERMAN}
    ]
    
    new feature 
    opened by pemistahl 3
  • ZeroDivisionError: float division by zero

    On occasion, on longer texts, I am getting this error. Steps to reproduce:

    detector.detect_language_of(text)
    

    Where text is

    Flagged as potential abuser? No Retailer | Concept-store() Brand order:  placed on  Payout scheduled date: Not Scheduled Submission type: Lead How did you initially connected?: Sales rep When did you last reach out?:  (UTC) Did you add this person through ?: I don't know Additional information: Bonjour, Je travaille avec cette boutique depuis plusieurs années. C'est moi qui lui ai conseillé de passer par pour son réassort avec le lien direct que je lui avais transmis. Pourriez vous retirer la commission de 23% ? Je vous remercie. En lien pour preuve la dernière facture que je lui ai éditée et qui date du mois dernier. De plus, j'ai redirigé vers plusieurs autres boutiques avec qui j'ai l'habitude de travailler. Elles devraient passer commande prochainement: Ça m'ennuierai de me retrouver avec le même problème pour ces clients aussi. Merci d'avance pour votre aide ! Cordialement Click here to check out customer uploaded file Click here to approve / reject / flag as potential abuser
    

    It's not an isolated example

    Any help would be massively appreciated

    opened by duboff 2
  • Weird issues with short texts in Russian

    Hi team, great library! Wanted to share an example I stumbled upon when detecting the language of a very short, basic Russian text. It comes out as Macedonian, even though as far as I can tell it's not actually correct Macedonian but is correct Russian. It is identified correctly by AWS Comprehend and other APIs:

    detector = LanguageDetectorBuilder.from_all_languages().build()
    detector.detect_language_of("как дела")
    Language.MACEDONIAN
    opened by duboff 2
  • Use softmax function instead of min-max normalization

    What do you think about passing the results to a softmax function instead of min-max normalization? I think it's a clearer way because, for example, you can have a threshold to filter out unidentified languages.

    Are there some pitfalls that aren't clear to me? I've implemented this by slightly changing your code. I've also rounded the results.

    It passed black and mypy, but not the tests. It's throwing an error like: INTERNALERROR> UnicodeEncodeError: 'charmap' codec can't encode characters in position 712-720: character maps to <undefined>

    opened by Alex-Kopylov 2
  • Failed to predict correct language for popular English single words

    Hello

    • "ITALIAN": 0.9900000000000001,
    • "SPANISH": 0.8457074930316446,
    • "ENGLISH": 0.6405700388041755,
    • "FRENCH": 0.260556921899765,
    • "GERMAN": 0.01,
    • "CHINESE": 0,
    • "RUSSIAN": 0

    Bye

    • "FRENCH": 0.9899999999999999,
    • "ENGLISH": 0.9062076381164255,
    • "GERMAN": 0.6259792361883574,
    • "SPANISH": 0.46755135335558035,
    • "ITALIAN": 0.01,
    • "CHINESE": 0,
    • "RUSSIAN": 0

    Loss (not Löss)

    • "GERMAN": 0.99,
    • "ENGLISH": 0.9177028091362562,
    • "ITALIAN": 0.9082690119891484,
    • "FRENCH": 0.7091301303929289,
    • "SPANISH": 0.01,
    • "CHINESE": 0,
    • "RUSSIAN": 0
    opened by Alex-Kopylov 2
  • Is it possible to detect only English using lingua?

    Hi, I'm currently working on a project which requires me to filter out all non-English text. It consists mostly of short texts, most of them in English. I thought of building the language detector with only Language.ENGLISH but got an error that at least two languages are required. I do not care about knowing what language each non-English text is actually in, only English / non-English. What would be the correct way to go about it with lingua? I think it might be problematic if I set it to recognize all languages because it might just add unnecessary noise to the prediction, which should have a bias towards English in my case. Thanks!

    opened by OmriPi 2
  • Caught an IndexError while using detect_multiple_languages_of

    On the test_case:

    , Ресторан «ТИНАТИН»
    

    Code fell down with an error:

    Traceback (most recent call last):
      File "/home/essential/PycharmProjects/pythonProject/test_unnest.py", line 363, in <module>
        for lang, sentence in detector.detect_multiple_languages_of(text)
      File "/home/essential/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/lingua/detector.py", line 389, in detect_multiple_languages_of
        _merge_adjacent_results(results, mergeable_result_indices)
      File "/home/essential/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/lingua/detector.py", line 114, in _merge_adjacent_results
        end_index=results[i + 1].end_index,
    IndexError: list index out of range
    

    Code example:

    languages = [Language.ENGLISH, Language.RUSSIAN, Language.UKRAINIAN]
    detector = LanguageDetectorBuilder.from_languages(*languages).build()
    text = ', Ресторан «ТИНАТИН»'
    sentences = [(lang, sentence) for lang, sentence in detector.detect_multiple_languages_of(text)]
    
    bug 
    opened by Saninsusanin 1
  • Bad detection in common word

    Hello, I need to detect the language of user-generated content; it's for a chat. I have tested this library, but the library gives strange results on short text, for example the word hello:

    from lingua import Language, LanguageDetectorBuilder
    
    languages = [Language.ENGLISH, Language.FRENCH, Language.GERMAN, Language.SPANISH]
    detector = LanguageDetectorBuilder.from_languages(*languages).build()
    
    text = """
    Hello
    """
    confidence_values = detector.compute_language_confidence_values(text.strip())
    for language, value in confidence_values:
        print(f"{language.name}: {value:.2f}")
    

    It returns Spanish (but the correct language is English):

    SPANISH: 1.00
    ENGLISH: 0.95
    FRENCH: 0.87
    GERMAN: 0.82
    

    Do you know some tips to get better results when detecting the language of user-generated content?

    opened by Jourdelune 1
  • detect_multiple_languages_of  predicts incorrect languages

    Using version 1.3.1

    Using a text that is in the Catalan language only, does not contain any fragments from other languages, and is a very standard kind of text, the detect_multiple_languages_of method detects: CATALAN, SOMALI, LATIN, FRENCH, SPANISH and PORTUGUESE. The expectation is that it should report that the full text is CATALAN.

    Code to reproduce the problem:

    from lingua import Language, LanguageDetectorBuilder, IsoCode639_1
    
    with open('text-catalan.txt') as fh:
        text = fh.read()
    
        detector = LanguageDetectorBuilder.from_all_languages().build()
        
        for result in detector.detect_multiple_languages_of(text):
            print(f"{result.language.name}")
    

    Also related to this problem is that detect_language_of and detect_multiple_languages_of predict different languages for the same text. Below is an example where, on the same input, detect_language_of predicts Catalan and detect_multiple_languages_of predicts Tsonga.

    My expectation is that both methods will predict the same given the same input.

    Code sample:

    from lingua import Language, LanguageDetectorBuilder, IsoCode639_1
    
    with open('china.txt') as fh:
        text = fh.read()
    
        detector = LanguageDetectorBuilder.from_all_languages().build()
          
        result = detector.detect_language_of(text)
        print(f"detect_language_of prediction: {result}")
        
        for result in detector.detect_multiple_languages_of(text):
            print(f"detect_language_of prediction: {result.language.name}")
    
    
    opened by jordimas 2
  • detect_multiple_languages_of is very slow

    Using version 1.3.1

    For a text that is 3.5K (31 lines), on my machine detect_multiple_languages_of takes 26.56 seconds while detect_language_of takes only 1.68 seconds.

    26 seconds to analyse 3.5K of text (a throughput of ~7 seconds per 1K) makes the detect_multiple_languages_of method really unsuitable for processing large corpora.

    Code used for the benchmark:

    
    from lingua import Language, LanguageDetectorBuilder, IsoCode639_1
    import datetime
    
    
    with open('text.txt') as fh:
        text = fh.read()
    
        detector = LanguageDetectorBuilder.from_all_languages().build()
        
        start_time = datetime.datetime.now()
        result = detector.detect_language_of(text)
        print('Time used for detect_language_of: {0}'.format(datetime.datetime.now() - start_time))
        print(result.iso_code_639_1)
    
        start_time = datetime.datetime.now()    
        results = detector.detect_multiple_languages_of(text)    
        print('Time used for detect_multiple_languages_of: {0}  '.format(datetime.datetime.now() - start_time))    
        for result in results:
            print(result)
            print(f"** {result.language.name}")
    
    opened by jordimas 1
  • Chars to language mapping

    Hello! My understanding is that this mapping:

    https://github.com/pemistahl/lingua-py/blob/502bb9abef2a31b841c49e063f1a0bd7e47af86d/lingua/_constant.py#L34

    It's used by the rule system to identify languages based on chars. Is my assumption correct?

    Looking at this: https://github.com/pemistahl/lingua-py/blob/502bb9abef2a31b841c49e063f1a0bd7e47af86d/lingua/_constant.py#L191

    Catalan language for example does NOT have "Áá" as valid chars (see reference https://en.wikipedia.org/wiki/Catalan_orthography#Alphabet).

    Looking at the data I see other mappings that do not seem right.

    Could it be that these mappings can be improved?

    opened by jordimas 0
  • Proposition: Using prior language probability to increase likelihood

    @pemistahl Peter, I think it would be beneficial for this library to have a separate method that adds a prior probability (in a Bayesian way) to the mix.

    Let's look into statistics: https://en.wikipedia.org/wiki/Languages_used_on_the_Internet

    So if 57% of the texts that you see on the internet are in English, then predicting "English" for any input would be wrong only 43% of the time. It's like a stopped clock, except it is right on every second probe.

    For example: https://github.com/pemistahl/lingua-py/issues/100

    Based on that premise, if we are using just plain character statistics, "как дела" is more Macedonian than Russian. But overall, if we add language statistics to the mix, lingua-py would be "wrong" less often.

    There are more Russian-speaking users of this library than Macedonian ones, just because there are more Russian-speaking people overall. And so when a random user writes "как дела", it's "more accurate" to predict "russian" than "macedonian", just because in general that is what is expected by these users.

    So my proposition is to add a detector.detect_language_with_prior function and factor in the prior: likelihood = probability × prior_probability

    For example: https://github.com/pemistahl/lingua-py/issues/97

    detector.detect_language_of("Hello")
    
    "ITALIAN": 0.9900000000000001,
    "SPANISH": 0.8457074930316446,
    "ENGLISH": 0.6405700388041755,
    "FRENCH": 0.260556921899765,
    "GERMAN": 0.01,
    "CHINESE": 0,
    "RUSSIAN": 0
    
    detector.detect_language_with_prior("Hello")
    
    # Of course constants are for illustrative purposes only.
    # Results should be normalized afterwards
    "ENGLISH": 0.6405700388041755 * 0.577,
    "SPANISH": 0.8457074930316446 * 0.045,
    "ITALIAN": 0.9900000000000001 * 0.017,
    "FRENCH": 0.260556921899765 * 0.039,
    

    Linked issues:

    • https://github.com/pemistahl/lingua-py/issues/94
    • https://github.com/pemistahl/lingua-py/issues/100
    • https://github.com/pemistahl/lingua-py/issues/97
    opened by slavaGanzin 1
  • Increase speed by compiling to native code

    It should be investigated if and how detection speed can be increased by compiling crucial parts of the library to native code, probably with the help of Cython or mypyc.

    enhancement 
    opened by pemistahl 0
Releases (v1.3.1)
Owner
Peter M. Stahl
Computational linguist, Rust enthusiast, green IT advocate