Auto-generate PEP-484 annotations

Overview

PyAnnotate: Auto-generate PEP-484 annotations

Insert annotations into your source code based on call arguments and return types observed at runtime.

For license and copyright see the end of this file.

Blog post: http://mypy-lang.blogspot.com/2017/11/dropbox-releases-pyannotate-auto.html

How to use

See also the example directory.

Phase 1: Collecting types at runtime

  • Install PyAnnotate the usual way (see the "Red tape" section below)
  • Add from pyannotate_runtime import collect_types to your test
  • Early in your test setup, call collect_types.init_types_collection()
  • Bracket your test execution between calls to collect_types.start() and collect_types.stop() (or use the context manager below)
  • When done, call collect_types.dump_stats(filename)

All calls between the start() and stop() calls will be analyzed and the observed types will be written (in JSON form) to the filename you pass to dump_stats(). You can have multiple start/stop pairs per dump call.
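For example, a minimal driver might look like the following sketch (run_unit_tests and run_integration_tests are hypothetical stand-ins for your own test entry points):

from pyannotate_runtime import collect_types

collect_types.init_types_collection()

collect_types.start()
run_unit_tests()          # hypothetical entry point; first start/stop pair
collect_types.stop()

collect_types.start()
run_integration_tests()   # hypothetical entry point; second start/stop pair
collect_types.stop()

# Both batches of observations end up in the same JSON file.
collect_types.dump_stats('type_info.json')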

If you'd like to automatically collect types when you run pytest, see example/example_conftest.py and example/README.md.
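A conftest.py along the lines of the bundled example might look roughly like this (a sketch, not the exact file; the output filename is just an example):

# conftest.py -- sketch: collect types automatically during a pytest run
import pytest
from pyannotate_runtime import collect_types

def pytest_collection_finish(session):
    # Initialize type collection once test collection is done.
    collect_types.init_types_collection()

@pytest.fixture(autouse=True)
def collect_types_fixture():
    # Record observed call types for the duration of each test.
    collect_types.start()
    yield
    collect_types.stop()

def pytest_sessionfinish(session, exitstatus):
    # Write everything observed during the run to a JSON file.
    collect_types.dump_stats('type_info.json')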

Instead of using start() and stop() you can also use a context manager:

collect_types.init_types_collection()
with collect_types.collect():
    <your code here>
collect_types.dump_stats(<filename>)

Phase 2: Inserting types into your source code

The command-line tool pyannotate can add annotations into your source code based on the annotations collected in phase 1. The key arguments are:

  • Use --type-info FILE to tell it the file you passed to dump_stats()
  • Positional arguments are source files you want to annotate
  • With no other flags, the tool prints a diff showing what it proposes to change but doesn't modify any files. Review the output.
  • Add -w to make the tool actually update your files. (Use git or some other way to keep a backup.)

At this point you should run mypy and iterate. You will probably have to tweak the generated annotations to make mypy completely happy.
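A typical session might look like this (mymodule.py is a placeholder for one of your own source files):

# mymodule.py below is a placeholder for your own file
pyannotate --type-info type_info.json mymodule.py       # dry run: review the proposed diff
pyannotate --type-info type_info.json -w mymodule.py    # apply the changes in place
mypy mymodule.py                                         # type-check and iterate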

Notes and tips

  • It's best to do one file at a time, at least until you're comfortable with the tool.
  • The tool doesn't touch functions that already have an annotation.
  • The tool can generate either of the following:
    • type comments, i.e. Python 2 style annotations
    • inline type annotations, i.e. Python 3 style annotations, using --py3 in v1.0.7+
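For a hypothetical add function, the two output styles look roughly like this:

# Python 2 style: a type comment inserted under the def line
def add(a, b):
    # type: (int, int) -> int
    return a + b

# Python 3 style (--py3): inline annotations
def add(a: int, b: int) -> int:
    return a + b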

Red tape

Installation

This should work for Python 2.7 as well as for Python 3.4 and higher.

pip install pyannotate

This installs several items:

  • A runtime module, pyannotate_runtime/collect_types.py, which collects and dumps types observed at runtime using a profiling hook.

  • A library package, pyannotate_tools, containing code that can read the data dumped by the runtime module and insert annotations into your source code.

  • An entry point, pyannotate, which runs the library package on your files.

For dependencies, see setup.py and requirements.txt.

Testing etc.

To run the unit tests, use pytest:

pytest

TO DO

We'd love your help with some of these issues:

  • Better documentation.
  • Python 3 code generation.
  • Refactor the tool modules (currently their legacy architecture shines through).

Acknowledgments

The following people contributed significantly to this tool:

  • Tony Grue
  • Sergei Vorobev
  • Jukka Lehtosalo
  • Guido van Rossum

Licence etc.

  1. License: Apache 2.0.
  2. Copyright attribution: Copyright (c) 2017 Dropbox, Inc.
  3. External contributions to the project should be subject to Dropbox's Contributor License Agreement (CLA): https://opensource.dropbox.com/cla/
Comments
  • Fail to apply typeinfo to a python module in a subdirectory

    The following will fail:

    > python toto\toto.py
    (this generates a type_info.json in the current directory)
    
    >dir
    [...]
    25/07/2018  06:22    <DIR>          toto
    25/07/2018  06:21               784 type_info.json
    
    >pyannotate -3 toto\toto.py --type-info type_info.json
    No files need to be modified.
    NOTE: this was a dry run; use -w to write files
    

    Strange, since there are type annotations in type_info.json with the correct path:

    >type type_info.json
    [
        {
            "path": "toto\\toto.py",
            "line": 2,
            "func_name": "add",
            "type_comments": [
                "(*int) -> int",
                "(*List[int]) -> List[int]",
                "(*Tuple[int, int]) -> Tuple[int, int]"
            ],
            "samples": 3
        },
        {
            "path": "toto\\toto.py",
            "line": 8,
            "func_name": "add2",
            "type_comments": [
                "(Tuple[int, int], Tuple[int, int]) -> Tuple[int, int, int, int]",
                "(List[int], List[int]) -> List[int]",
                "(int, int) -> int"
            ],
            "samples": 3
        },
        {
            "path": "toto\\toto.py",
            "line": 11,
            "func_name": "main",
            "type_comments": [
                "() -> None"
            ],
            "samples": 1
        }
    ]
    

    Edit type_info.json to remove the "toto\" prefix:

    >type type_info.json
    [
        {
            "path": "toto.py",
            "line": 2,
            "func_name": "add",
            "type_comments": [
                "(*int) -> int",
                "(*List[int]) -> List[int]",
                "(*Tuple[int, int]) -> Tuple[int, int]"
            ],
            "samples": 3
        },
        {
            "path": "toto.py",
            "line": 8,
            "func_name": "add2",
            "type_comments": [
                "(Tuple[int, int], Tuple[int, int]) -> Tuple[int, int, int, int]",
                "(List[int], List[int]) -> List[int]",
                "(int, int) -> int"
            ],
            "samples": 3
        },
        {
            "path": "toto.py",
            "line": 11,
            "func_name": "main",
            "type_comments": [
                "() -> None"
            ],
            "samples": 1
        }
    ]
    
    

    Try again:

    >pyannotate -3 toto\toto.py --type-info type_info.json
    Refactored toto\toto.py
    --- toto\toto.py        (original)
    +++ toto\toto.py        (refactored)
    @@ -1,14 +1,18 @@
    +from typing import Any
    +from typing import List
    +from typing import Tuple
    +from typing import Union
    
    -def add(*args):
    +def add(*args: Any) -> Union[List[int], Tuple[int, int], int]:
         ret = args[0]
         for v in args:
             ret += v
         return v
    
    -def add2(v1, v2):
    +def add2(v1: Union[List[int], Tuple[int, int], int], v2: Union[List[int], Tuple[int, int], int]) -> Union[List[int], Tuple[int, int, int, int], int]:
         return v1+v2
    
    -def main():
    +def main() -> None:
         print( add(1,2,3) )
         print( add([1,2], [3,4]) )
         print( add((1,2), (3,4)) )
    Files that need to be modified:
    toto\toto.py
    NOTE: this was a dry run; use -w to write files
    
    

    It worked...

    It looks like pyannotate is trimming directories from type_info.json too aggressively.

    opened by bluebird75 12
  • Python3 annotations

    Implementation for #4 is done.

    All tests are ported and pass successfully.

    A few remarks on the implementation:

    • command-line options are the same as mypy's: -2, --py2, -3, --py3, --python-version
    • the code will not annotate functions with an existing py2 or py3 annotation
    • I ignored the concept of the long form for Python 3 annotations; functions with many arguments are still annotated inline.
    • test_annotate.py and test_annotate_json.py have been renamed with a _py2 and _py3 suffix depending on what they are testing.

    Looking forward to your feedback.

    opened by bluebird75 10
  • Crash on head when annotating

    Works fine on v1.0.2. On head this crashes:

    pyannotate mpf/core/switch_controller.py 
    Traceback (most recent call last):
      File "/usr/local/bin/pyannotate", line 11, in <module>
        load_entry_point('pyannotate', 'console_scripts', 'pyannotate')()
      File "/data/home/jan/cloud/flipper/src/pyannotate/pyannotate_tools/annotations/__main__.py", line 56, in main
        show_diffs=not args.quiet)
      File "/usr/lib/python3.5/lib2to3/main.py", line 63, in __init__
        super(StdoutRefactoringTool, self).__init__(fixers, options, explicit)
      File "/usr/lib/python3.5/lib2to3/refactor.py", line 698, in __init__
        super(MultiprocessRefactoringTool, self).__init__(*args, **kwargs)
      File "/usr/lib/python3.5/lib2to3/refactor.py", line 210, in __init__
        self.pre_order, self.post_order = self.get_fixers()
      File "/usr/lib/python3.5/lib2to3/refactor.py", line 255, in get_fixers
        fixer = fix_class(self.options, self.fixer_log)
      File "/usr/lib/python3.5/lib2to3/fixer_base.py", line 58, in __init__
        self.compile_pattern()
      File "/usr/lib/python3.5/lib2to3/fixer_base.py", line 67, in compile_pattern
        PC = PatternCompiler()
      File "/usr/lib/python3.5/lib2to3/patcomp.py", line 50, in __init__
        self.grammar = driver.load_grammar(grammar_file)
      File "/usr/lib/python3.5/lib2to3/pgen2/driver.py", line 120, in load_grammar
        logger.info("Generating grammar tables from %s", gt)
      File "/usr/lib/python3.5/logging/__init__.py", line 1279, in info
        self._log(INFO, msg, args, **kwargs)
      File "/usr/lib/python3.5/logging/__init__.py", line 1414, in _log
        exc_info, func, extra, sinfo)
      File "/usr/lib/python3.5/logging/__init__.py", line 1384, in makeRecord
        sinfo)
      File "/usr/lib/python3.5/logging/__init__.py", line 269, in __init__
        if (args and len(args) == 1 and isinstance(args[0], collections.Mapping)
      File "/usr/lib/python3.5/abc.py", line 191, in __instancecheck__
        return cls.__subclasscheck__(subclass)
      File "/usr/lib/python3.5/abc.py", line 226, in __subclasscheck__
        if issubclass(subclass, scls):
      File "/usr/lib/python3.5/abc.py", line 226, in __subclasscheck__
        if issubclass(subclass, scls):
      File "/usr/lib/python3.5/abc.py", line 226, in __subclasscheck__
        if issubclass(subclass, scls):
      File "/usr/lib/python3.5/typing.py", line 1081, in __subclasscheck__
        return issubclass(cls, self.__extra__)
      File "/usr/lib/python3.5/abc.py", line 226, in __subclasscheck__
        if issubclass(subclass, scls):
    [...many more repetitions...]  
      File "/usr/lib/python3.5/typing.py", line 1081, in __subclasscheck__
        return issubclass(cls, self.__extra__)
      File "/usr/lib/python3.5/abc.py", line 226, in __subclasscheck__
        if issubclass(subclass, scls):
      File "/usr/lib/python3.5/typing.py", line 1077, in __subclasscheck__
        if super().__subclasscheck__(cls):
      File "/usr/lib/python3.5/abc.py", line 197, in __subclasscheck__
        if subclass in cls._abc_cache:
    RecursionError: maximum recursion depth exceeded
    
    opened by jabdoa2 9
  • Type annotation does not work in Python 3.5

    For some reason pyannotate with the workaround from #12 does not annotate my files:

    PYTHONPATH=/usr/local/lib/python3.5/dist-packages/ pyannotate -w -v "mpf/core/data_manager.py"
    Generating grammar tables from /usr/lib/python3.5/lib2to3/PatternGrammar.txt
    Adding transformation: annotate_json
    Refactoring mpf/core/data_manager.py
    No changes in mpf/core/data_manager.py
    No files need to be modified.
    

    However, it definitely found some missing annotations in type_info.json:

        {   
            "func_name": "DataManager.__init__",
            "path": "mpf/core/data_manager.py",
            "type_comments": [
                "(mpf.tests.MpfTestCase.TestMachineController, str, int) -> None"
            ],
            "line": 18,
            "samples": 5
        },
    

    And the code looks like this:

    class DataManager(MpfController):
    
        """Handles key value data loading and saving for the machine."""
    
        def __init__(self, machine, name, min_wait_secs=1):
            [...]
    

    Any idea what is going wrong here?

    opened by jabdoa2 9
  • Treat any instance of a `Mapping` as a `Dict`.

    Without this we're winding up with a bunch of types like:

    Union[Dict[str, str], OrderedDict]
    

    On functions that can accept any Mapping.

    I was worried about covariance/contravariance with this, but mypy seems to be OK with it:

    from collections import OrderedDict
    from typing import Dict
    
    def foo(a: Dict[str, str]) -> Dict[str, str]:
        return OrderedDict([('a', 'a'), ('b', 'b')])
    
    foo(OrderedDict([('a', 'a'), ('b', 'b')]))
    
    opened by rowillia 8
  • Fix CI on travis and include Python 3.8-dev

    This fixes and adds the following points:

    • python 3.7-dev is no longer working on the Ubuntu Trusty image; use Ubuntu Xenial instead (see https://github.com/travis-ci/travis-ci/issues/9069)
    • use the official 3.7 release (with Ubuntu Xenial)
    • provide a build for 3.8-dev, allowing failures (with Ubuntu Xenial)
    • use the more flexible matrix syntax
    • run mypy for the 3.6, 3.7 and 3.8-dev versions (using an environment variable to trigger mypy)
    • avoid the latest mypy, v0.620, which signals an error (to be fixed)
    opened by bluebird75 6
  • Permission denied for Temp file on Windows

    When I run "pyannotate --type-info ./annotate.json ." with on Windows (both 2.7.14 and 3.6.3, probably others) , I get the following error: Traceback (most recent call last): File "c:\python27\lib\runpy.py", line 174, in run_module_as_main "main", fname, loader, pkg_name) File "c:\python27\lib\runpy.py", line 72, in run_code exec code in run_globals File "C:\Python27\Scripts\pyannotate.exe_main.py", line 9, in File "c:\python27\lib\site-packages\pyannotate_tools\annotations_main.py", line 45, in main generate_annotations_json(infile, tf.name) File "c:\python27\lib\site-packages\pyannotate_tools\annotations\main.py", line 58, in generate_annotations_json with open(target_path, 'w') as f: IOError: [Errno 13] Permission denied: 'c:\temp\tmp2ui1ku'

    A little bit of googling suggests this might be the problem: "Permission Denied To Write To My Temporary File".

    opened by NeonGraal 6
  • Update the current module when a new file is parsed

    FixAnnotateJson.current_module is currently a constant value set in __main__, but this causes bugs when pyannotate is run on a package directory, where the current module should be updated for each file that is parsed.

    To fix this, I implemented set_filename, which is inherited from BaseFix, to update the current module each time the file changes.
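    Roughly, the idea looks like this (a simplified sketch, not the actual patch; the module-name derivation below is hypothetical):

    import os
    from lib2to3.fixer_base import BaseFix

    class FixAnnotateJson(BaseFix):
        # Simplified stand-in for the real fixer: keep current_module in sync
        # with whatever file lib2to3 is currently refactoring, instead of
        # setting it once in __main__.
        current_module = None

        def set_filename(self, filename):
            super(FixAnnotateJson, self).set_filename(filename)
            # Hypothetical derivation of a dotted module name from the path.
            FixAnnotateJson.current_module = (
                os.path.splitext(filename)[0].replace(os.sep, '.'))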

    To test, I removed the annotation from parse_type_comment and created a type_info.json file to provide the type data:

    [
      {
        "func_name": "parse_type_comment", "line": 213, 
        "path": "/Users/chad/dev/pyannotate/pyannotate_tools/annotations/parse.py", 
        "samples": 0, 
        "signature": {
          "arg_types": ["str"], 
          "return_type": "Tuple[pyannotate_tools.annotations.parse:List[pyannotate_tools.annotations.parse.Argument], pyannotate_tools.annotations.parse.AbstractType]"
        }
      }
    ]
    

    I saw two types of bugs with the current implementation.

    1. Import statements were added that imported types from the current module:
    pyannotate --uses-signature pyannotate_tools pyannotate_runtime
    Refactored pyannotate_tools/annotations/parse.py
    --- pyannotate_tools/annotations/parse.py	(original)
    +++ pyannotate_tools/annotations/parse.py	(refactored)
    @@ -10,6 +10,9 @@
     import sys
     
     from typing import Any, List, Mapping, Set, Tuple
    +from pyannotate_tools.annotations.parse import AbstractType
    +from pyannotate_tools.annotations.parse import Argument
    +from pyannotate_tools.annotations.parse import List
     try:
         from typing import Text
     except ImportError:
    @@ -211,6 +214,7 @@
     
     
     def parse_type_comment(comment):
    +    # type: (str) -> Tuple[List[Argument], AbstractType]
         """Parse a type comment of form '(arg1, ..., argN) -> ret'."""
         return Parser(comment).parse()
    
    2. If I passed two packages on the command line, the second would be completely ignored.
    pyannotate --uses-signature pyannotate_runtime pyannotate_tools
    No files need to be modified.
    
    opened by chadrik 5
  • Pyannotate crashes when type_info.json has info about junitxml.py

    Running pyannotate with the following json file:

    [
        {
            "type_comments": [
                "(py._xmlgen.system-err) -> None", 
                "(py._xmlgen.system-out) -> None"
            ], 
            "path": "<path_python_lib>/junitxml.py", 
            "line": 78, 
            "samples": 5, 
            "func_name": "_NodeReporter.append"
        }
    ]
    

    fails with

    Traceback (most recent call last):
      File "/usr/local/bin/pyannotate", line 11, in <module>
        sys.exit(main())
      File "/Library/Python/2.7/site-packages/pyannotate_tools/annotations/__main__.py", line 45, in main
        generate_annotations_json(infile, tf.name)
      File "/Library/Python/2.7/site-packages/pyannotate_tools/annotations/main.py", line 37, in generate_annotations_json
        arg_types, return_type = infer_annotation(item.type_comments)
      File "/Library/Python/2.7/site-packages/pyannotate_tools/annotations/infer.py", line 38, in infer_annotation
        arg_types, return_type = parse_type_comment(comment)
      File "/Library/Python/2.7/site-packages/pyannotate_tools/annotations/parse.py", line 196, in parse_type_comment
        return Parser(comment).parse()
      File "/Library/Python/2.7/site-packages/pyannotate_tools/annotations/parse.py", line 205, in __init__
        self.tokens = tokenize(comment)
      File "/Library/Python/2.7/site-packages/pyannotate_tools/annotations/parse.py", line 188, in tokenize
        raise ParseError(original)
    pyannotate_tools.annotations.parse.ParseError: Invalid type comment: (py._xmlgen.system-err) -> None
    

    Please note I'm not trying to annotate junitxml.py; pyannotate just always fails if the JSON file contains an entry for it.

    opened by elvslv 5
  • NamedTemporaryFile works different on Windows

    Add optional stream parameters to generate_annotation_json and parse_json.

    Hopefully fixes #25, but I was unable to run all tests due to lib2to3 quirks on Windows and Ubuntu.

    CLA signed 
    opened by NeonGraal 5
  • Add caller func name into type_info.json to make it clear which function is doing wrong

    What this PR does

    • add caller func name into type_info.json
    • this doesn't affect any original function

    Why I did this

    I think this is really necessary in a large project.

    Suppose you have code like this:

    def main():
        print(gcd(15, 10))
        print(gcd(24, 10))
        print(gcd("a", "b"))
    
    def gcd(a, b):
        return a, b
    

    This generates type_info like the one below:

    [
        {
            "path": "gcd.py",
            "line": 1,
            "func_name": "main",
            "type_comments": [
                "() -> None"
            ],
            "samples": 1
        },
        {
            "path": "gcd.py",
            "line": 6,
            "func_name": "gcd",
            "type_comments": [
                "(int, int) -> Tuple[int, int]",
                "(str, str) -> Tuple[str, str]"
            ],
            "samples": 3
        }
    ]
    

    Then you would come up with the idea: "this gcd function must accept only int; which function is calling it the wrong way?"

    However, you can't see any information about that, so you have to dig through the code yourself to find the offending call. I spent most of my time on this while using pyannotate.

    Now, this PR shows which code is responsible. It generates type_info like the example below:

    [
        {
            "path": "gcd.py",
            "line": 1,
            "func_name": "main",
            "type_comments": [
                "() -> None"
            ],
            "caller_name": [
                ["__main__ : 7"]
            ],
            "samples": 1
        },
        {
            "path": "gcd.py",
            "line": 6,
            "func_name": "gcd",
            "type_comments": [
                "(int, int) -> Tuple[int, int]",
                "(str, str) -> Tuple[str, str]"
            ],
            "caller_name": [
                [
                    "gcd.main : 2",
                    "gcd.main : 3"
                ],
                ["gcd.main : 4"]
            ],
            "samples": 3
        }
    ]
    

    As you can see, it is now easy to tell that the (str, str) call comes from "gcd.main : 4".

    Remarks

    I submitted the Contributor License Agreement.

    opened by ulwlu 4
  • docs: Fix a few typos

    There are small typos in:

    • pyannotate_tools/annotations/infer.py
    • pyannotate_tools/fixes/fix_annotate.py

    Fixes:

    • Should read redundant rather than reundant.
    • Should read annotated rather than annoted.

    Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

    opened by timgates42 1
  • [propose] Should we have event caller (f_back) in type_comments?

    If you have multiple functions calling one function, it will look like this.

      {
        "path": "~~~",
        "line": 1,
        "func_name": "called_func",
        "type_comments": [
            "(int) -> int"
            "(int) -> str"
        ],
        "samples": 2
      },
    

    I would like to know which function calls the called_func in this case.

      {
        "path": "~~~",
        "line": 1,
        "func_name": "called_func",
        "type_comments": [
            {
                "calling1_func": "(int) -> int"
            },
            {
                "calling2_func": "(int) -> str"
            }
        ],
        "samples": 2
      },
    

    ... because there can easily be 10 or more samples (sadly).

    Remarks

    This would require changing the code below (and more, such as FunctionKey), but it might make collection slower.

    https://github.com/dropbox/pyannotate/blob/1e7ddf09fdbc6fa37b2cf261f36a5563a0fe772a/pyannotate_runtime/collect_types.py#L446-L448

    So I want to know whether this proposal is a good idea. If so, I think I can contribute when I have time (maybe late this month).

    opened by ulwlu 0
  • Problem: TOP_DIR_x are global

    In the context of @gvanrossum's comment on #111, I took a closer look at the TOP_DIR_x globals.

    IMHO, there is no need to keep them global.

    This PR would replace #111. (Nevertheless, I fixed #111 in order to let you decide which one you prefer.)

    The code in this PR enables customizing the top_dir used by default_filter_filename with the following code:

    top_filter = collect_types.configure_default_filter_top_dir(top_dir)
    collect_types.init_types_collection(filter_filename=top_filter)
    
    opened by gotcha 1
  • WIP Enable configuration of TOP_DIR

    This enables configuration of the TOP_DIR used by default_filter_filename.

    The test suite of my tool runs forked processes in a temporary directory (via tempfile.mkdtemp) which does not share the same root directory as the test suite. As a result, all calls are filtered out by default_filter_filename.

    Adding set_top_dir was enough to avoid the filtering.

    This is a simple enough way to work around the slightly naive assumption made by:

    TOP_DIR = os.path.join(os.getcwd(), '')     # current dir with trailing slash
    

    Do you see a better solution?

    opened by gotcha 2
  • Crash when the target json file is too large

    My code is here:

    # driver.py
    import numpy as np
    from pyannotate_runtime import collect_types
    
    
    if __name__ == '__main__':
        collect_types.init_types_collection()
        collect_types.start()
        np.core.test()
        collect_types.stop()
        print("test finished")
        collect_types.dump_stats('type_info.json')
    

    It crashed after some time.

    Is that because the target JSON file is too large? What tricks can be used to make it run correctly?

    opened by Butter934 0