Blackstone is a spaCy model and library for processing long-form, unstructured legal text

Overview


Blackstone is a spaCy model and library for processing long-form, unstructured legal text. Blackstone is an experimental research project from the Incorporated Council of Law Reporting for England and Wales' research lab, ICLR&D. Blackstone was written by Daniel Hoadley.

Contents

Why are we building Blackstone?

What's special about Blackstone?

Observations and other things worth noting

Installation

    Install the library

    Install the Blackstone model

About the model

    The pipeline

    Named-Entity Recogniser

    Text categoriser

Usage

    Applying the NER model

        Visualising entities

    Applying the text categoriser model

Custom pipeline extensions

    Abbreviation and long-form definition resolution

    Compound case reference detection

    Legislation linker

    Sentence segmenter

Why are we building Blackstone?

The past several years have seen a surge in activity at the intersection of law and technology. However, in the United Kingdom, the overwhelming bulk of that activity has taken place in law firms and other commercial contexts. The consequence of this is that despite the never-ending flurry of development in the legal-informatics space, almost none of the research is made available on an open-source basis.

Moreover, the majority of research in the UK legal-informatics domain (whether open or closed) has focussed on the development of NLP applications for automating contracts and other legal documents that are transactional in nature. This is understandable, because the principal beneficiaries of legal NLP research in the UK are law firms and law firms tend not to find it difficult to get their hands on transactional documentation that can be harnessed as training data.

The problem, as we see it, is that legal NLP research in the UK has become over-concentrated on commercial applications, and that it is worthwhile investing in legal NLP research on other legal texts, such as judgments, scholarly articles, skeleton arguments and pleadings.

What's special about Blackstone?

  • So far as we are aware, Blackstone is the first open source model specifically trained for use on long-form texts containing common law entities and concepts.
  • Blackstone is built on spaCy, which makes it easy to pick up and apply to your own data.
  • Blackstone has been trained on data spanning a considerable period of time, including texts drafted as early as the 1860s. This is useful because an interesting quirk of the common law is that older writings (particularly judgments) remain relevant for many, many years.
  • It is free and open source.
  • It is imperfect and makes no attempt to hide that fact from you.

Observations and other things worth noting

  • Perfection is the enemy of the good. This is a prototype release of a highly experimental project. As such, the accuracy of Blackstone's models leaves something to be desired (F1 on the NER is approx 70%). The accuracy of these models will improve over time.
  • The models have been trained on English case law and the library has been built with the peculiarities of the legal system of England and Wales in mind. That said, the model has generalised well and should do a reasonably good job on Australasian, Canadian and American content, too.
  • The data used to train Blackstone's models was derived from the Incorporated Council of Law Reporting for England and Wales' archive of case reports and unreported judgments. That archive is proprietary and this prevents us from releasing any of the data used to train Blackstone.
  • Blackstone is not a judge or litigation analytics tool.

Installation

Note! It is strongly recommended that you install Blackstone into a virtual environment! See here for more on virtual environments. Blackstone should be compatible with Python 3.6 and higher.
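For example, a virtual environment can be created and activated with the standard library's venv module (the directory name `.venv` below is arbitrary):

```shell
# Create a virtual environment in .venv and activate it
python3 -m venv .venv
source .venv/bin/activate
```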

To install Blackstone follow these steps:

1. Install the library

The first step is to install the library, which at present contains a handful of custom spaCy components. Install the library like so:

pip install blackstone

2. Install the Blackstone model

The second step is to install the spaCy model. Install the model like so:

pip install https://blackstone-model.s3-eu-west-1.amazonaws.com/en_blackstone_proto-0.0.1.tar.gz

Installing from source

If you are developing Blackstone, you can install from source like so:

pip install --editable .
pip install -r dev-requirements.txt

About the model

This is the very first release of Blackstone and the model is best viewed as a prototype; it is rough around the edges and represents the first step in a larger ongoing programme of open source research into NLP on legal texts being carried out by ICLR&D.

With that out of the way, here's a brief rundown of what's happening in the proto model.

The pipeline

The proto model included in this release has the following elements in its pipeline: a tokenizer, a tagger, a parser, a named-entity recogniser (ner) and a text categoriser (textcat).

Owing to a scarcity of labelled part-of-speech and dependency training data for legal text, the tokenizer, tagger and parser pipeline components have been taken from spaCy's en_core_web_sm model. By and large, these components appear to do a decent job, but it would be good to revisit them with custom training data at some point in the future.

The ner and textcat components are custom components trained especially for Blackstone.

Named-Entity Recogniser

The NER component of the Blackstone model has been trained to detect the following entity types:

CASENAME: Case names, e.g. Smith v Jones, In re Jones, In Jones' case
CITATION: Citations (unique identifiers for reported and unreported cases), e.g. (2002) 2 Cr App R 123
INSTRUMENT: Written legal instruments, e.g. Theft Act 1968, European Convention on Human Rights, CPR
PROVISION: A unit within a written legal instrument, e.g. section 1, art 2(3)
COURT: A court or tribunal, e.g. Court of Appeal, Upper Tribunal
JUDGE: References to judges, e.g. Eady J, Lord Bingham of Cornhill

Text Categoriser

This release of Blackstone also comes with a text categoriser. In contrast with the NER component (which has been trained to identify tokens and series of tokens of interest), the text categoriser classifies longer spans of text, such as sentences.

The Text Categoriser has been trained to classify text according to one of five mutually exclusive categories, which are as follows:

AXIOM: The text appears to postulate a well-established principle.
CONCLUSION: The text appears to make a finding, holding, determination or conclusion.
ISSUE: The text appears to discuss an issue or question.
LEGAL_TEST: The text appears to discuss a legal test.
UNCAT: The text does not fall into one of the four categories above.

Usage

Applying the NER model

Here's an example of how the model is applied to some text taken from para 31 of the Divisional Court's judgment in R (Miller) v Secretary of State for Exiting the European Union (Birnie intervening) [2017] UKSC 5; [2018] AC 61:

import spacy

# Load the model
nlp = spacy.load("en_blackstone_proto")

text = """ 31 As we shall explain in more detail in examining the submission of the Secretary of State (see paras 77 and following), it is the Secretary of State’s case that nothing has been done by Parliament in the European Communities Act 1972 or any other statute to remove the prerogative power of the Crown, in the conduct of the international relations of the UK, to take steps to remove the UK from the EU by giving notice under article 50EU for the UK to withdraw from the EU Treaty and other relevant EU Treaties. The Secretary of State relies in particular on Attorney General v De Keyser’s Royal Hotel Ltd [1920] AC 508 and R v Secretary of State for Foreign and Commonwealth Affairs, Ex p Rees-Mogg [1994] QB 552; he contends that the Crown’s prerogative power to cause the UK to withdraw from the EU by giving notice under article 50EU could only have been removed by primary legislation using express words to that effect, alternatively by legislation which has that effect by necessary implication. The Secretary of State contends that neither the ECA 1972 nor any of the other Acts of Parliament referred to have abrogated this aspect of the Crown’s prerogative, either by express words or by necessary implication.
"""

# Apply the model to the text
doc = nlp(text)

# Iterate through the entities identified by the model
for ent in doc.ents:
    print(ent.text, ent.label_)

>>> European Communities Act 1972 INSTRUMENT
>>> article 50EU PROVISION
>>> EU Treaty INSTRUMENT
>>> Attorney General v De Keysers Royal Hotel Ltd CASENAME
>>> [1920] AC 508 CITATION
>>> R v Secretary of State for Foreign and Commonwealth Affairs, Ex p Rees-Mogg CASENAME
>>> [1994] QB 552 CITATION
>>> article 50EU PROVISION

Visualising entities

spaCy ships with an excellent set of visualisers, including a visualiser for NER predictions. Blackstone comes with a custom colour palette that can be used to make it easier to distinguish entities in the source text when using displacy.

"""
Visualise entities using spaCy's displacy visualiser. 

Blackstone has a custom colour palette: `from blackstone.displacy_palette import ner_displacy options`
"""

import spacy
from spacy import displacy
from blackstone.displacy_palette import ner_displacy_options

nlp = spacy.load("en_blackstone_proto")

text = """
The applicant must satisfy a high standard. This is a case where the action is to be tried by a judge with a jury. The standard is set out in Jameel v Wall Street Journal Europe Sprl [2004] EMLR 89, para 14:
“But every time a meaning is shut out (including any holding that the words complained of either are, or are not, capable of bearing a defamatory meaning) it must be remembered that the judge is taking it upon himself to rule in effect that any jury would be perverse to take a different view on the question. It is a high threshold of exclusion. Ever since Fox’s Act 1792 (32 Geo 3, c 60) the meaning of words in civil as well as criminal libel proceedings has been constitutionally a matter for the jury. The judge’s function is no more and no less than to pre-empt perversity. That being clearly the position with regard to whether or not words are capable of being understood as defamatory or, as the case may be, non-defamatory, I see no basis on which it could sensibly be otherwise with regard to differing levels of defamatory meaning. Often the question whether words are defamatory at all and, if so, what level of defamatory meaning they bear will overlap.”
18 In Berezovsky v Forbes Inc [2001] EMLR 1030, para 16 Sedley LJ had stated the test this way:
“The real question in the present case is how the courts ought to go about ascertaining the range of legitimate meanings. Eady J regarded it as a matter of impression. That is all right, it seems to us, provided that the impression is not of what the words mean but of what a jury could sensibly think they meant. Such an exercise is an exercise in generosity, not in parsimony.”
"""

doc = nlp(text)

# Call displacy and pass `ner_displacy_options` into the options parameter
displacy.serve(doc, style="ent", options=ner_displacy_options)

Which produces something that looks like this:

Applying the text categoriser model

Blackstone's text categoriser generates a predicted categorisation for a doc. The textcat pipeline component has been designed to be applied to individual sentences rather than a single document consisting of many sentences.

import spacy

# Load the model
nlp = spacy.load("en_blackstone_proto")

def get_top_cat(doc):
    """
    Function to identify the highest scoring category
    prediction generated by the text categoriser. 
    """
    cats = doc.cats
    max_score = max(cats.values()) 
    max_cats = [k for k, v in cats.items() if v == max_score]
    max_cat = max_cats[0]
    return (max_cat, max_score)

text = """
It is a well-established principle of law that the transactions of independent states between each other are governed by other laws than those which municipal courts administer. \
It is, however, in my judgment, insufficient to react to the danger of over-formalisation and “judicialisation” simply by emphasising flexibility and context-sensitivity. \
The question is whether on the facts found by the judge, the (or a) proximate cause of the loss of the rig was “inherent vice or nature of the subject matter insured” within the meaning of clause 4.4 of the Institute Cargo Clauses (A).
"""

# Apply the model to the text
doc = nlp(text)

# Get the sentences in the passage of text
sentences = [sent.text for sent in doc.sents]

# Print the sentence and the corresponding predicted category.
for sentence in sentences:
    doc = nlp(sentence)
    top_category = get_top_cat(doc)
    print(f"\"{sentence}\" {top_category}\n")
    
>>> "In my judgment, it is patently obvious that cats are a type of dog." ('CONCLUSION', 0.9990500807762146)
>>> "It is a well settled principle that theft is wrong." ('AXIOM', 0.556410014629364)
>>> "The question is whether on the facts found by the judge, the (or a) proximate cause of the loss of the rig was “inherent vice or nature of the subject matter insured” within the meaning of clause 4.4 of the Institute Cargo Clauses (A)." ('ISSUE', 0.5040785074234009)

Custom pipeline extensions

In addition to the core model, this proto release of Blackstone comes with a handful of custom components:

  • Abbreviation detection - this is heavily based on the AbbreviationDetector() component in scispaCy and resolves an abbreviated form to its long-form definition, e.g. ECtHR -> European Court of Human Rights.
  • Legislation linker - this is an alpha component that attempts to resolve references to provisions to their parent instrument (more on this further down the README).
  • Compound case reference detection - again, this is an alpha component that attempts to identify CASENAME and CITATION pairs, enabling the merging of a CITATION to its parent CASENAME.
  • Sentence segmenter - a custom rule-based sentence segmenter tuned to the quirks of legal texts.

Abbreviation detection and long-form definition resolution

It is not uncommon for authors of legal documents to abbreviate long-winded terms, using the abbreviated form in place of the long form throughout the rest of the document. For example:

The European Court of Human Rights ("ECtHR") is the court ultimately responsible for applying the European Convention on Human Rights ("ECHR").

The abbreviation detection component in Blackstone seeks to address this by implementing an ever so slightly modified version of scispaCy's AbbreviationDetector() (which is itself an implementation of the approach set out in this paper: https://psb.stanford.edu/psb-online/proceedings/psb03/schwartz.pdf). Our implementation still has some problems, but an example of its usage is as follows:
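To give a flavour of the underlying approach, here is an illustrative, pure-Python re-implementation of the character-matching rule described in the Schwartz and Hearst paper (this is a sketch for exposition, not Blackstone's or scispaCy's actual code):

```python
def find_long_form(short_form: str, candidate: str):
    """Scan both strings right to left, requiring each alphanumeric
    character of the short form to appear, in order, in the candidate;
    the short form's first character must begin a word."""
    s_idx = len(short_form) - 1
    c_idx = len(candidate) - 1
    while s_idx >= 0:
        ch = short_form[s_idx].lower()
        if not ch.isalnum():
            s_idx -= 1
            continue
        # Walk left through the candidate until ch is found; for the
        # short form's first character, insist on a word boundary.
        while c_idx >= 0 and (
            candidate[c_idx].lower() != ch
            or (s_idx == 0 and c_idx > 0 and candidate[c_idx - 1].isalnum())
        ):
            c_idx -= 1
        if c_idx < 0:
            return None  # no plausible long form found
        s_idx -= 1
        c_idx -= 1
    return candidate[c_idx + 1:].strip()

print(find_long_form("ECtHR", "The European Court of Human Rights"))
# European Court of Human Rights
```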

import spacy
from blackstone.pipeline.abbreviations import AbbreviationDetector

nlp = spacy.load("en_blackstone_proto")

# Add the abbreviation pipe to the spacy pipeline.
abbreviation_pipe = AbbreviationDetector(nlp)
nlp.add_pipe(abbreviation_pipe)

doc = nlp('The European Court of Human Rights ("ECtHR") is the court ultimately responsible for applying the European Convention on Human Rights ("ECHR").')

print("Abbreviation", "\t", "Definition")
for abrv in doc._.abbreviations:
    print(f"{abrv} \t ({abrv.start}, {abrv.end}) {abrv._.long_form}")
    
>>> "ECtHR"          (7, 10) European Court of Human Rights
>>> "ECHR"   (25, 28) European Convention on Human Rights   

Compound case reference detection

The compound case reference detection component in Blackstone is designed to marry up CITATION entities with their parent CASENAME entities.

Common law jurisdictions typically refer to cases through a coupling of a name (usually derived from the names of the parties in the case) and a unique citation identifying where the case has been reported, like so:

Regina v Horncastle [2010] 2 AC 373
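A simple regular expression gives a feel for the citation half of the pair (this pattern is purely illustrative and far looser than what Blackstone's NER model learns):

```python
import re

# Illustrative pattern for report citations of the form
# "[year] (volume) SERIES page", e.g. "[2010] 2 AC 373".
CITATION_RE = re.compile(r"\[(\d{4})\]\s+(?:\d+\s+)?[A-Z][A-Za-z. ]*?\s+\d+")

match = CITATION_RE.search("Regina v Horncastle [2010] 2 AC 373")
print(match.group(0))  # [2010] 2 AC 373
```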

Blackstone's NER model separately attempts to identify the CASENAME and CITATION entities. However, it is potentially useful (particularly in the context of information extraction) to pull these entities out as pairs.

CompoundCases() applies a custom pipe after the NER and identifies CASENAME/CITATION pairs in two scenarios:

  • The standard scenario: Gelmini v Moriggia [1913] 2 KB 549
  • The possessive scenario (which is a little antiquated): Jones' case [1915] 1 KB 45

import spacy
from blackstone.pipeline.compound_cases import CompoundCases

nlp = spacy.load("en_blackstone_proto")

compound_pipe = CompoundCases(nlp)
nlp.add_pipe(compound_pipe)

text = "As I have indicated, this was the central issue before the judge. On this issue the defendants relied (successfully below) on the decision of the High Court in Gelmini v Moriggia [1913] 2 KB 549. In Jones' case [1915] 1 KB 45, the defendant wore a hat."
doc = nlp(text)

for compound_ref in doc._.compound_cases:
    print(compound_ref)
    
>>> Gelmini v Moriggia [1913] 2 KB 549
>>> Jones' case [1915] 1 KB 45

Legislation linker

Blackstone's Legislation Linker attempts to couple a reference to a PROVISION to its parent INSTRUMENT by using the NER model to identify the presence of an INSTRUMENT and then navigating the dependency tree to identify the child provision.

Once Blackstone has identified a PROVISION:INSTRUMENT pair, it will attempt to generate target URLs to both the provision and the instrument on legislation.gov.uk.
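The target URLs for UK public general acts follow a predictable scheme, which can be sketched as below (the helper functions here are hypothetical illustrations of that URL scheme, not Blackstone's actual API):

```python
BASE = "http://www.legislation.gov.uk"

def instrument_url(doc_type: str, year: int, chapter: int) -> str:
    # doc_type "ukpga" denotes UK Public General Acts
    return f"{BASE}/{doc_type}/{year}/{chapter}/contents"

def provision_url(doc_type: str, year: int, chapter: int, section: int) -> str:
    # A section-level deep link within the instrument
    return f"{BASE}/{doc_type}/{year}/{chapter}/section/{section}"

print(provision_url("ukpga", 1968, 60, 1))
# http://www.legislation.gov.uk/ukpga/1968/60/section/1
```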

import spacy
from blackstone.utils.legislation_linker import extract_legislation_relations
nlp = spacy.load("en_blackstone_proto")

text = "The Secretary of State was at pains to emphasise that, if a withdrawal agreement is made, it is very likely to be a treaty requiring ratification and as such would have to be submitted for review by Parliament, acting separately, under the negative resolution procedure set out in section 20 of the Constitutional Reform and Governance Act 2010. Theft is defined in section 1 of the Theft Act 1968"

doc = nlp(text) 
relations = extract_legislation_relations(doc)
for provision, provision_url, instrument, instrument_url in relations:
    print(f"\n{provision}\t{provision_url}\t{instrument}\t{instrument_url}")
    
>>> section 20      http://www.legislation.gov.uk/ukpga/2010/25/section/20  Constitutional Reform and Governance Act 2010   http://www.legislation.gov.uk/ukpga/2010/25/contents

>>> section 1       http://www.legislation.gov.uk/ukpga/1968/60/section/1   Theft Act 1968  http://www.legislation.gov.uk/ukpga/1968/60/contents

Sentence segmenter

Blackstone ships with a custom rule-based sentence segmenter that addresses a range of characteristics inherent in legal texts that have a tendency to baffle out-of-the-box sentence segmentation rules.

This behaviour can be extended by optionally passing a list of spaCy-style Matcher patterns that will explicitly prevent sentence boundary detection inside matches.
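The idea of suppressing boundaries inside matches can be illustrated with a toy, pure-Python sketch (the pattern below is a stand-in for exposition, not Blackstone's actual CITATION_PATTERNS):

```python
import re

# Matches of a citation-like pattern are treated as unbreakable: no
# sentence boundary may fall inside them.
citation = re.compile(r"\[\d{4}\] [A-Za-z. ]+? \d+")

text = ("The court applied ClientEarth (No.2) [2017] P.T.S.R. 203. "
        "The central question arises against that background.")

protected = [m.span() for m in citation.finditer(text)]

def in_protected(i):
    return any(start <= i < end for start, end in protected)

# Candidate boundaries: a full stop followed by a space, outside any match.
boundaries = [m.start() for m in re.finditer(r"\. ", text)
              if not in_protected(m.start())]

sentences, prev = [], 0
for b in boundaries:
    sentences.append(text[prev:b + 1])
    prev = b + 2
sentences.append(text[prev:])
print(sentences)  # two sentences, with the citation kept intact
```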

import spacy
from blackstone.pipeline.sentence_segmenter import SentenceSegmenter
from blackstone.rules import CITATION_PATTERNS

nlp = spacy.load("en_blackstone_proto")

# add the Blackstone sentence_segmenter to the pipeline before the parser
sentence_segmenter = SentenceSegmenter(nlp.vocab, CITATION_PATTERNS)
nlp.add_pipe(sentence_segmenter, before="parser")

doc = nlp(
    """
    The courts in this jurisdiction will enforce those commitments when it is legally possible and necessary to do so (see, most recently, R. (on the application of ClientEarth) v Secretary of State for the Environment, Food and Rural Affairs (No.2) [2017] P.T.S.R. 203 and R. (on the application of ClientEarth) v Secretary of State for Environment, Food and Rural Affairs (No.3) [2018] Env. L.R. 21). The central question in this case arises against that background.
    """
)

for sent in doc.sents:
    print(sent.text)

Thanks

We would like to thank the following people/organisations who have helped us (directly or indirectly) to build this prototype.

Comments
  • Unknown Morphological Feature

    I am trying to get up to speed with the model but when I execute the example code given I hit the following error when nlp = spacy.load("en_blackstone_proto") is called.

    [E167] Unknown morphological feature: 'ConjType' (9141427322507498425). This can happen if the tagger was trained with a different set of morphological features. If you're using a pretrained model, make sure that your models are up to date: python -m spacy validate

    opened by StuartHewitson 6
  • ValueError - Unknown morphological feature: 'Person'

    Hi. After fresh installation I wanted to test sentence segmenter example but I get this error :

    ValueError: [E167] Unknown morphological feature: 'Person' (2313063860588076218). This can happen if the tagger was trained with a different set of morphological features. If you're using a pretrained model, make sure that your models are up to date: python -m spacy validate

    python -m spacy validate shows me:

        TYPE      NAME             MODEL            VERSION
        package   en-core-web-sm   en_core_web_sm   2.2.0    ✔

    I have tested with Spacy 2.2.1 and 2.2.0 (with Python 3.6.7 / Data Science Virtual Machine form Azure with GPU)

    Thank you in advance for your help

    opened by nicolasesprit 4
  • General Python tidy

    My point of view is that I have some Python experience, but very little domain experience.

    Depending on who the contributors are, it might help things if we have a more robust Python environment:

    • virtual environment
    • runnable code
    • tests

    If the imminent contributors aren't going to be Python-focussed, then that might not help, but if it does, then the benefit could be that everyone gets a common environment in which to play with this stuff early on.

    opened by dave-donaghy 2
  • tidy up, all tests passing

    This PR is the first one which has CI 🎉

    The CI uses github actions, you can see the definition of what is running here: https://github.com/ICLRandD/Blackstone/blob/master/.github/workflows/main.yml

    Every time a PR is opened, it will run, making it easy to keep the tests working etc.

    In this PR i've cleaned up a few function names which don't use PEP8 snake case style, as well as adding a mock test for the legislation linker, because it makes get requests to a website, which we shouldn't do if we are running this on every commit.

    opened by DeNeutoy 1
  • add dockerfile

    This adds a dockerfile - there will be a follow up PR which adds some continuous integration which uses this.

    You can build it like:

    docker build -t blackstone-test .
    

    and then get a shell inside a running container like this:

    docker run -it blackstone-test
    

    Docker is useful because it provides a consistent environment for code, and is generally helpful for other people trying to run blackstone.

    opened by DeNeutoy 1
  • Custom modules are not getting loaded and giving error

    from blackstone.pipeline.compound_cases import CompoundCases ModuleNotFoundError: No module named 'blackstone.pipeline'; 'blackstone' is not a package

    Is there anything to be done to load custom modules. Sorry , i could not find anything in the README regarding this.

    Thanks, Srijith

    opened by srijiths 1
  • Add Sentence Segmenter

    en_blackstone_proto, the model that ships with the prototype release of Blackstone, did not come with a sentence segmentation module. By and large, the current model does an okayish job of splitting sentences, but it will get baffled on older material where punctuation had a tendency to be crop dusted over the text.

    We did sketch a sentence pipeline out during the development of the proto model (primarily to help with the data extraction for training the model itself), but it was put together in a bit of a flap.

    I rather like scispaCy's implementation, which makes use of a list for abbreviations and section contractions.

    enhancement 
    opened by ICLRandD 1
  • Abbreviation detection not working where short form contains a space followed by digits

    The current implementation of the AbbreviationDetector() does not handle abbreviations that contain a short form followed by a space followed by a number

    For example, in this scenario:

    The Proceeds of Crime Act 2002 ("PoCA 2000")

    The abbreviation is not matched.

    The original implementation in scispaCy does not appear to have been built to handle instances in which the short form is bounded by quote marks.

    help wanted 
    opened by ICLRandD 1
  • Add a `tests` directory

    I recommend using https://docs.pytest.org/en/latest/

    The tests directory should look like this:

    tests/
    - __init__.py (important)
    - linker_test.py
    - ...
    

    Then add this file to your top level directory: https://github.com/allenai/scispacy/blob/master/pytest.ini

    then you should be able to run all tests just by running pytest in the root of the project. The directory of your tests should look like the directory of the blackstone package. Tests will only be run if the class and method names begin or end with test.

    opened by DeNeutoy 1
  • requirements.txt is too big

    requirements.txt typically only specifies the exact packages that are required to run a project, rather than an exhaustive list of your personal python environment.

    https://github.com/allenai/scispacy/blob/master/requirements.in

    Also, it's typical to only pin packages to a particular version if you know there is a problem in a newer version that will break stuff. The reason for this is that it makes it easier for people to install if you don't require precise version numbers.

    opened by DeNeutoy 1
  • Update Spacy Version

    Fixes memory leak found in Spacy 2.1.8

    "This is a small maintenance update that backports a bug fix for a memory leak that'd occur in long-running parsing processes. It's intended for users who can't or don't yet want to upgrade to spaCy v2.2 (e.g. because it requires retraining all the models). If you're able to upgrade, you shouldn't use this version and instead install the latest v2.2." - https://github.com/explosion/spaCy/releases/tag/v2.1.9

    opened by ryanmcdonough 0
  • Poorly maintained project: Upgrade package to support Python 3.10+

    Hi, This only supports Python 3.6 it seems. We are in Py 3.10+ world. Can you please update your codebase so it can be compatible with Py 3.10+?

    Thanks in advance.

    opened by skuma307 0
  • Pipenv installation failed

    Using a clean pipfile and environment, a blackstone dependency failed. Looks like it's something about preshed, blis and unicode?

    (base) peter@Peters-MBP14 pych-km-4 % pipenv shell
    Creating a virtualenv for this project...
    Pipfile: /Users/peter/pych-km-4/Pipfile
    Using /Users/peter/opt/anaconda3/bin/python3 (3.9.12) to create virtualenv...
    ⠋ Creating virtual environment...created virtual environment CPython3.9.12.final.0-64 in 400ms
      creator CPython3Posix(dest=/Users/peter/.local/share/virtualenvs/pych-km-4-slCQK9Uf, clear=False, no_vcs_ignore=False, global=False)
      seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/Users/peter/Library/Application Support/virtualenv)
        added seed packages: pip==22.1.2, setuptools==62.2.0, wheel==0.37.1
      activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
    
    ✔ Successfully created virtual environment!
    Virtualenv location: /Users/peter/.local/share/virtualenvs/pych-km-4-slCQK9Uf
    Creating a Pipfile for this project...
    Launching subshell in virtual environment...
    Loading .zshrc
     . /Users/peter/.local/share/virtualenvs/pych-km-4-slCQK9Uf/bin/activate
    (base) peter@Peters-MBP14 pych-km-4 %  . /Users/peter/.local/share/virtualenvs/pych-km-4-slCQK9Uf/bin/activate
    

    Then:

    (pych-km-4) (base) peter@Peters-MBP14 pych-km-4 % pipenv install blackstone
                Compiler gcc
                building 'blis.cy' extension
                creating build/temp.macosx-10.9-x86_64-cpython-39
                creating build/temp.macosx-10.9-x86_64-cpython-39/blis
                clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /Users/peter/opt/anaconda3/include -arch x86_64 -I/Users/peter/opt/anaconda3/include -fPIC -O2 -isystem /Users/peter/opt/anaconda3/include -arch x86_64 -I/private/var/folders/dh/1ggfwgy164d06z3k4yf78ksc0000gn/T/pip-install-masm7svx/blis_8ee02e7df50f4e0c8da54b2d54349d4d/include -I/private/var/folders/dh/1ggfwgy164d06z3k4yf78ksc0000gn/T/pip-install-masm7svx/blis_8ee02e7df50f4e0c8da54b2d54349d4d/blis/_src/include/darwin-x86_64 -I/Users/peter/.local/share/virtualenvs/pych-km-3-CwEGPvOL/include -I/Users/peter/opt/anaconda3/include/python3.9 -c blis/cy.c -o build/temp.macosx-10.9-x86_64-cpython-39/blis/cy.o -std=c99
                blis/cy.c:2470:7: warning: code will never be executed [-Wunreachable-code]
                      PyErr_SetNone(PyExc_AssertionError);
                      ^~~~~~~~~~~~~
                [... the same -Wunreachable-code warning repeats at further locations in blis/cy.c ...]
                blis/cy.c:21738:18: error: no member named 'tp_print' in 'struct _typeobject'
                  __Pyx_EnumMeta.tp_print = 0;
                  ~~~~~~~~~~~~~~ ^
                blis/cy.c:21747:26: error: no member named 'tp_print' in 'struct _typeobject'
                  __pyx_type___pyx_array.tp_print = 0;
                  ~~~~~~~~~~~~~~~~~~~~~~ ^
                blis/cy.c:21752:32: error: no member named 'tp_print' in 'struct _typeobject'
                  __pyx_type___pyx_MemviewEnum.tp_print = 0;
                  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
                blis/cy.c:21767:31: error: no member named 'tp_print' in 'struct _typeobject'
                  __pyx_type___pyx_memoryview.tp_print = 0;
                  ~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
                blis/cy.c:21780:36: error: no member named 'tp_print' in 'struct _typeobject'
                  __pyx_type___pyx_memoryviewslice.tp_print = 0;
                  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
                blis/cy.c:24202:22: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
                                    (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
                                     ^
                /Users/peter/opt/anaconda3/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
                      PyUnicode_WSTR_LENGTH(op) :                    \
                      ^
                /Users/peter/opt/anaconda3/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
                #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
                                                  ^
                /Users/peter/opt/anaconda3/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
                Py_DEPRECATED(3.3)
                ^
                /Users/peter/opt/anaconda3/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
                #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
                                                                     ^
                [analogous -Wdeprecated-declarations warnings for '_PyUnicode_get_wstr_length' and 'PyUnicode_AsUnicode', each expanded from the deprecated PyUnicode_GET_SIZE macro, repeat at blis/cy.c:24202 and blis/cy.c:24218]
                blis/cy.c:25025:16: warning: 'PyUnicode_FromUnicode' is deprecated [-Wdeprecated-declarations]
                        return PyUnicode_FromUnicode(NULL, 0);
                               ^
                /Users/peter/opt/anaconda3/include/python3.9/cpython/unicodeobject.h:551:1: note: 'PyUnicode_FromUnicode' has been explicitly marked deprecated here
                Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode(
                ^
                /Users/peter/opt/anaconda3/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
                #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
                                                                     ^
                28 warnings and 5 errors generated.
                error: command '/usr/bin/clang' failed with exit code 1
                [end of output]
    
            note: This error originates from a subprocess, and is likely not a problem with pip.
            ERROR: Failed building wheel for blis
            Running setup.py clean for blis
          Failed to build preshed blis
          Installing collected packages: wasabi, srsly, plac, murmurhash, cymem, wheel, tqdm, setuptools, preshed, numpy, Cython, blis, thinc
            Running setup.py install for preshed: started
            Running setup.py install for preshed: finished with status 'error'
            error: subprocess-exited-with-error
    
            × Running setup.py install for preshed did not run successfully.
            │ exit code: 1
            ╰─> [15 lines of output]
                /Users/peter/.local/share/virtualenvs/pych-km-3-CwEGPvOL/lib/python3.9/site-packages/setuptools/installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
                  warnings.warn(
                running install
                /Users/peter/.local/share/virtualenvs/pych-km-3-CwEGPvOL/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
                  warnings.warn(
                running build
                running build_py
                warning: build_py: byte-compiling is disabled, skipping.
    
                running build_ext
                building 'preshed.maps' extension
                clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /Users/peter/opt/anaconda3/include -arch x86_64 -I/Users/peter/opt/anaconda3/include -fPIC -O2 -isystem /Users/peter/opt/anaconda3/include -arch x86_64 -I/Users/peter/opt/anaconda3/include/python3.9 -I/Users/peter/.local/share/virtualenvs/pych-km-3-CwEGPvOL/include -I/Users/peter/opt/anaconda3/include/python3.9 -c preshed/maps.cpp -o build/temp.macosx-10.9-x86_64-cpython-39/preshed/maps.o -O3 -Wno-strict-prototypes -Wno-unused-function
                clang: error: no such file or directory: 'preshed/maps.cpp'
                clang: error: no input files
                error: command '/usr/bin/clang' failed with exit code 1
                [end of output]
    
            note: This error originates from a subprocess, and is likely not a problem with pip.
          error: legacy-install-failure
    
          × Encountered error while trying to install package.
          ╰─> preshed
    
          note: This is an issue with the package mentioned above, not pip.
          hint: See above for output from the failure.
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: subprocess-exited-with-error
    
    × pip subprocess to install build dependencies did not run successfully.
    │ exit code: 1
    ╰─> See above for output.
    
    note: This error originates from a subprocess, and is likely not a problem with pip.
    
    opened by roablep 1
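
    A plausible reading of the build log above (an inference, not confirmed in the issue): the `tp_print` errors arise because Blackstone's pinned blis/preshed versions ship C sources generated by an old Cython, and CPython removed the `tp_print` slot from `PyTypeObject` in 3.9, while the log shows the install running under Python 3.9. A minimal sketch, using a hypothetical helper, of the interpreter-version condition at play:

    ```python
    import sys

    # tp_print was deprecated in CPython 3.8 and removed in 3.9, so C extension
    # sources generated by old Cython versions fail to compile on 3.9+.
    def old_cython_sources_buildable(version_info=sys.version_info):
        """Return True if the interpreter predates the tp_print removal."""
        return tuple(version_info[:2]) < (3, 9)

    print(old_cython_sources_buildable((3, 7)))  # True: old sources still compile
    print(old_cython_sources_buildable((3, 9)))  # False: expect tp_print errors
    ```

    On that reading, installing into an older Python environment where prebuilt wheels exist for the pinned dependencies is one way to sidestep the failing source build.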
  • config.cfg is missing from model

    Error while loading the model: Could not read config.cfg

    Stacktrace:

    OSError                                   Traceback (most recent call last)
    <ipython-input> in <module>
          1 # Load the model
    ----> 2 nlp = en_blackstone_proto.load()

    env\lib\site-packages\en_blackstone_proto\__init__.py in load(**overrides)
         10
         11 def load(**overrides):
    ---> 12     return load_model_from_init_py(__file__, **overrides)

    env\lib\site-packages\spacy\util.py in load_model_from_init_py(init_file, vocab, disable, exclude, config)
        512     if not model_path.exists():
        513         raise IOError(Errors.E052.format(path=data_path))
    --> 514     return load_model_from_path(
        515         data_path,
        516         vocab=vocab,

    env\lib\site-packages\spacy\util.py in load_model_from_path(model_path, meta, vocab, disable, exclude, config)
        386     meta = get_model_meta(model_path)
        387     config_path = model_path / "config.cfg"
    --> 388     config = load_config(config_path, overrides=dict_to_dot(config))
        389     nlp = load_model_from_config(config, vocab=vocab, disable=disable, exclude=exclude)
        390     return nlp.from_disk(model_path, exclude=exclude)

    env\lib\site-packages\spacy\util.py in load_config(path, overrides, interpolate)
        543     else:
        544         if not config_path or not config_path.exists() or not config_path.is_file():
    --> 545             raise IOError(Errors.E053.format(path=config_path, name="config.cfg"))
        546     return config.from_disk(
        547         config_path, overrides=overrides, interpolate=interpolate

    OSError: [E053] Could not read config.cfg from env\lib\site-packages\en_blackstone_proto\en_blackstone_proto-0.0.1\config.cfg

    opened by SanjanSRao 6
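
    The E053 traceback above is characteristic of loading a spaCy v2-era model package under spaCy v3, which expects a config.cfg inside the model data that v2 packages do not ship. A minimal sketch of that version condition (the helper is hypothetical, for illustration only):

    ```python
    # en_blackstone_proto was built against spaCy 2.1.x; spaCy v3+ looks for a
    # config.cfg inside the model data and raises E053 when it is absent.
    def can_load_v2_model(spacy_version):
        """Return True if this spaCy version can load a v2-era model package."""
        major = int(spacy_version.split(".")[0])
        return major < 3

    print(can_load_v2_model("2.1.8"))  # True
    print(can_load_v2_model("3.2.0"))  # False: expect OSError [E053]
    ```

    So one workaround, where it fits the project, is to pin `spacy<3` in the environment that loads the Blackstone model.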
  • Compatibility with spaCy 2.1.9 & 2.2+

    Hi Blackstone team. First of all, I want to thank you for your pre-trained models and your work on automatic legal text analysis. In particular, your custom SentenceSegmenter and NER detections work very well on our dataset of legal texts.

    Unfortunately, this package still depends on spaCy 2.1, or more specifically on spaCy 2.1.8. That version has a major memory-leak bug (https://github.com/explosion/spaCy/issues/3618), which was fixed in 2.1.9. I have already modified Blackstone's dependency files so that spaCy 2.1.9 is installed instead of the required 2.1.8, and this works flawlessly on my machine. You might consider changing your dependencies accordingly.

    However, it would be even better if you could update to a newer version of spaCy (e.g. 2.2+) to benefit from the performance optimisations made by Explosion. There is already a pending pull request (#22) to address this issue, but without the corresponding training data you used to train the model there is no way for us to retrain it ourselves. It would be greatly appreciated if you could update your model and package to spaCy 2.2. As that might take some time, perhaps you could update your package's dependencies to spaCy 2.1.9 in the meantime to circumvent the memory leak present in spaCy 2.1.8.

    opened by phHartl 1
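
    The pinning change requested in this issue amounts to a version-range check: relaxing a pin like `spacy==2.1.8` to `spacy>=2.1.9,<2.2.0` admits the memory-leak fix while still excluding 2.2. A small illustrative sketch of that range logic (the specifier strings mirror the issue, not Blackstone's actual setup files):

    ```python
    # Illustrative check that a version string falls inside a >=lower,<upper range.
    def satisfies(version, lower, upper):
        parse = lambda v: tuple(int(part) for part in v.split("."))
        return parse(lower) <= parse(version) < parse(upper)

    print(satisfies("2.1.9", "2.1.9", "2.2.0"))  # True: fixed release admitted
    print(satisfies("2.1.8", "2.1.9", "2.2.0"))  # False: leaky release excluded
    print(satisfies("2.2.0", "2.1.9", "2.2.0"))  # False: upper bound excluded
    ```

    In a real dependency file this would be expressed as a pip requirement specifier rather than hand-rolled tuple comparison.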
Owner
ICLR&D
Research & Development Lab at the Incorporated Council of Law Reporting for England & Wales