A static analysis tool for Python

Overview

pyanalyze

Pyanalyze is a tool for programmatically detecting common mistakes in Python code, such as references to undefined variables and some categories of type mismatches. It can be extended to add additional rules and perform checks specific to particular functions.

Some use cases for this tool include:

  • Catching bugs before they reach production. The script will catch accidental mistakes like writing "collections.defalutdict" instead of "collections.defaultdict", so that they won't cause errors in production. Other categories of bugs it can find include variables that may be undefined at runtime, duplicate keys in dict literals, and missing await keywords.
  • Making refactoring easier. When you make a change like removing an object attribute or moving a class from one file to another, pyanalyze will often be able to flag code that you forgot to change.
  • Finding dead code. It has an option for finding Python objects (functions and classes) that are not used anywhere in the codebase.
  • Checking type annotations. Type annotations are useful as documentation for readers of code, but only when they are actually correct. Although pyanalyze does not support the full Python type system (see below for details), it can often detect incorrect type annotations.
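As an illustration of the duplicate-key check, a duplicate key in a dict literal is legal Python but almost always a bug, because the later value silently wins:

```python
# Legal Python, but almost certainly a mistake: the second "id" wins.
record = {"id": 1, "name": "alice", "id": 2}
print(record)  # {'id': 2, 'name': 'alice'}
```

A type checker that only looks at types cannot flag this; pyanalyze can, because it inspects the AST of the literal itself.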

Usage

You can install pyanalyze with:

$ pip install pyanalyze

Once it is installed, you can run pyanalyze on a Python file or package as follows:

$ python -m pyanalyze file.py
$ python -m pyanalyze package/

Note that pyanalyze imports every Python file it is passed. If you have scripts that perform side effects at import time, without an if __name__ == "__main__": guard, pyanalyze may end up executing them.

In order to run successfully, pyanalyze needs to be able to import the code it checks. To make this work you may have to manually adjust Python's import path using the $PYTHONPATH environment variable.
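For example, with a hypothetical src/ layout (the paths here are illustrative, not part of pyanalyze itself):

```shell
# make the package importable, then run pyanalyze on it
$ PYTHONPATH=src python -m pyanalyze src/mypackage
```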

Pyanalyze has a number of command-line options, which you can see by running python -m pyanalyze --help. Important ones include -f, which runs an interactive prompt that lets you examine and fix each error found by pyanalyze, and --enable/--disable, which enable and disable specific error codes.

Advanced usage

At Quora, when we want pyanalyze to check a library in CI, we write a unit test that invokes pyanalyze for us. This allows us to run pyanalyze with other tests without further special setup, and it provides a convenient place to put configuration options. An example is pyanalyze's own test_self.py test:

import os.path
import pyanalyze
from pyanalyze.error_code import ErrorCode
from pyanalyze.test_node_visitor import skip_before


class PyanalyzeConfig(pyanalyze.config.Config):
    DEFAULT_DIRS = (str(os.path.dirname(__file__)),)
    DEFAULT_BASE_MODULE = pyanalyze
    ENABLED_ERRORS = {
        ErrorCode.condition_always_true,
        ErrorCode.possibly_undefined_name,
    }


class PyanalyzeVisitor(pyanalyze.name_check_visitor.NameCheckVisitor):
    config = PyanalyzeConfig()
    should_check_environ_for_files = False


@skip_before((3, 6))
def test_all():
    PyanalyzeVisitor.check_all_files()


if __name__ == "__main__":
    PyanalyzeVisitor.main()

Extending pyanalyze

The main way to extend pyanalyze is by providing a specification for a particular function. This allows you to run arbitrary code that inspects the arguments to the function and raises errors if something is wrong.

As an example, suppose your codebase contains a function database.run_query() that takes as an argument a SQL string, like this:

database.run_query("SELECT answer, question FROM content")

You want to detect when a call to run_query() contains syntactically invalid SQL or refers to a non-existent table or column. You could set that up with code like this:

from ast import AST
from typing import Dict

import pyanalyze
from pyanalyze.arg_spec import ExtendedArgSpec, Parameter
from pyanalyze.error_code import ErrorCode
from pyanalyze.name_check_visitor import NameCheckVisitor
from pyanalyze.value import UNRESOLVED_VALUE, KnownValue, TypedValue, Value

from database import run_query, parse_sql

def run_query_impl(
    variables: Dict[str, Value],  # parameters passed to the function
    visitor: NameCheckVisitor,  # can be used to show errors or look up names
    node: AST,  # for showing errors
) -> Value:
    sql = variables["sql"]
    if not isinstance(sql, KnownValue) or not isinstance(sql.val, str):
        visitor.show_error(
            node,
            "Argument to run_query() must be a string literal",
            error_code=ErrorCode.incompatible_call,
        )
        return UNRESOLVED_VALUE

    try:
        parsed = parse_sql(sql.val)
    except ValueError as e:
        visitor.show_error(
            node,
            f"Invalid SQL passed to run_query(): {e}",
            error_code=ErrorCode.incompatible_call,
        )
        return UNRESOLVED_VALUE

    # check that the parsed SQL is valid...

    # pyanalyze will use this as the inferred return type for the function
    return TypedValue(list)


class Config(pyanalyze.config.Config):
    def get_known_argspecs(self, arg_spec_cache):
        return {
            # This infers the parameter types and names from the function signature
            run_query: arg_spec_cache.get_argspec(
                run_query, implementation=run_query_impl
            ),
            # Alternatively, you can write the signature manually:
            # run_query: ExtendedArgSpec(
            #     [Parameter("sql", typ=TypedValue(str))],
            #     name="run_query",
            #     implementation=run_query_impl,
            # ),
        }

Displaying and checking the type of an expression

You can use pyanalyze.dump_value(expr) to display the type pyanalyze infers for an expression. This can be useful to understand errors or to debug why pyanalyze does not catch a particular issue. For example:

from pyanalyze import dump_value

dump_value(1)  # value: KnownValue(val=1) (code: inference_failure)

Similarly, you can use pyanalyze.assert_is_value to assert that pyanalyze infers a particular type for an expression. This requires importing the appropriate Value subclass from pyanalyze.value. For example:

from pyanalyze import assert_is_value
from pyanalyze.value import KnownValue

assert_is_value(1, KnownValue(1))  # succeeds
assert_is_value(int("2"), KnownValue(1))  # Bad value inference: expected KnownValue(val=1), got TypedValue(typ=<class 'int'>) (code: inference_failure)

This function is mostly useful when writing unit tests for pyanalyze or an extension.

Ignoring errors

Sometimes pyanalyze gets things wrong and you need to ignore an error it emits. This can be done as follows:

  • Add # static analysis: ignore on a line by itself before the line that generates the error.
  • Add # static analysis: ignore at the end of the line that generates the error.
  • Add # static analysis: ignore at the top of the file; this will ignore errors in the entire file.

You can add an error code, like # static analysis: ignore[undefined_name], to ignore only a specific error code. This does not work for whole-file ignores. If the bare_ignore error code is turned on, pyanalyze will emit an error if you don't specify an error code on an ignore comment.
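For example, to silence a possibly_undefined_name error on a single line (the function name here is illustrative; the comment on the line before the return is what suppresses the error):

```python
def load(flag):
    if flag:
        result = 42
    # static analysis: ignore[possibly_undefined_name]
    return result

print(load(True))  # 42
```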

Python version support

Pyanalyze supports Python 2.7 and 3.5 through 3.8. Because it imports the code it checks, you have to run it using the same version of Python you use to run your code.

In Python 2 mode, some checks (notably, many of those related to type checking) are not supported. In the future we will likely drop Python 2 support completely.

Background

Pyanalyze is built on top of two lower-level abstractions: Python's built-in ast module and our own node_visitor abstraction, which is an extension of the ast.NodeVisitor class.

Python AST module

The ast module (https://docs.python.org/3/library/ast.html) provides access to the abstract syntax tree (AST) of Python code. The AST is a tree-based representation of the structure of a Python program. For example, the string "import a" resolves into this AST:

# ast.parse considers everything to be a module
Module(body=[
    # the module contains one statement of type Import
    Import(
        # names is a list; it would contain multiple elements for "import a, b"
        names=[
            alias(
                name='a',
                # if we did "import a as b", this would be "b" instead of None
                asname=None
            )
        ]
    )
])
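You can reproduce this structure yourself with ast.parse (the exact output of ast.dump varies slightly between Python versions):

```python
import ast

tree = ast.parse("import a")
stmt = tree.body[0]
print(type(stmt).__name__)   # Import
print(stmt.names[0].name)    # a
print(stmt.names[0].asname)  # None
```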

The ast.NodeVisitor class provides a convenient way to run code that inspects an AST. For each AST node type, a NodeVisitor subclass can implement a method called visit_<node type>. When the visitor is run on an AST, this method will be called for each node of that type. For example, the following class could be used to find import statements:

class ImportFinder(ast.NodeVisitor):
    def visit_Import(self, node):
        print("Found import statement: %s" % ast.dump(node))
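To run a visitor like this, parse source text into a tree and call visit(). Here is a variant that collects the imported names instead of printing them:

```python
import ast

class ImportCollector(ast.NodeVisitor):
    """Collects the module names imported by plain `import` statements."""

    def __init__(self):
        self.imported = []

    def visit_Import(self, node):
        for alias in node.names:
            self.imported.append(alias.name)

collector = ImportCollector()
collector.visit(ast.parse("import a\nimport b, c"))
print(collector.imported)  # ['a', 'b', 'c']
```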

node_visitor.py

Pyanalyze uses an extension to ast.NodeVisitor, implemented in pyanalyze/node_visitor.py, that adds two main features: management of files to run the visitor on and management of errors that are found by the visitor.

The following is a simple example of a visitor using this abstraction---a visitor that will show an error for every assert and import statement found:

import enum
from pyanalyze import node_visitor

class ErrorCode(enum.Enum):
    found_assert = 1
    found_import = 2

class BadStatementFinder(node_visitor.BaseNodeVisitor):
    error_code_enum = ErrorCode

    def visit_Assert(self, node):
        self.show_error(node, error_code=ErrorCode.found_assert)

    def visit_Import(self, node):
        self.show_error(node, error_code=ErrorCode.found_import)

if __name__ == '__main__':
    BadStatementFinder.main()

As an example, we'll run the visitor on a file containing this code:

import a
assert True

Running the visitor without arguments gives the following output:

$ python example_visitor.py example.py
Error: found_import (code: found_import)
In example.py at line 1:
   1: import a
      ^
   2: assert True
   3:

Error: found_assert (code: found_assert)
In example.py at line 2:
   1: import a
   2: assert True
      ^
   3:

Using information stored in the node that caused the error, the show_error method finds the line and column in the Python source file where the error appears.

Passing an error_code argument to show_error makes it possible to conditionally suppress errors by passing a --disable command-line argument:

$ python example_visitor.py example.py --disable found_import
Error: found_assert (code: found_assert)
In example.py at line 2:
   1: import a
   2: assert True
      ^
   3:

Subclasses of BaseNodeVisitor can specify which errors are enabled by default by overriding is_enabled_by_default and the description shown for an error by overriding get_description_for_error_code.

Design

Fundamentally, the way pyanalyze works is that it tries to infer, with as much precision as possible, what Python value or what kind of Python value each node in a file's AST corresponds to, and then uses that information to flag code that does something undesirable. Mostly, that involves identifying code that will cause the Python interpreter to throw an error at runtime, for example because it accesses an attribute that doesn't exist or because it passes incorrect arguments to a function. As much as possible, the script tries to evaluate whether an operation is allowed by asking Python whether it is: for example, whether the arguments to a function call are correct is decided by creating a function with the same arguments as the called function, calling it with the same arguments as in the call, and checking whether the call throws an error.

This is done by recursively visiting the AST of the file and building up a context of information gathered from previously visited nodes. For example, the visit_ClassDef method visits the body of the class within a context that indicates that AST nodes are part of the class, which enables method definitions within the class to infer the type of their self arguments as being the class. In some cases, the visitor will traverse the AST twice: once to collect places where names are set, and once again to check that every place a name is accessed is valid. This is necessary because functions may use names that are only defined later in the file.

Name resolution

The name resolution component of pyanalyze makes it possible to connect usage of a Python variable with the place where it is defined.

Pyanalyze uses the StackedScopes class to simulate Python scoping rules. This class contains a stack of nested scopes, implemented as dictionaries, that contain names defined in a particular Python scope (e.g., a function). When the script needs to determine what a particular name refers to, it iterates through the scopes, starting at the top of the scope stack, until it finds a scope dictionary that contains the name. This is similar to how name lookup is implemented in Python itself. When a name that is accessed in Python code is not found in any scope object, test_scope.py will throw an error with code undefined_name.

When the script is run on a file, the scopes object is initialized with two scope levels containing builtin objects such as len and Exception and the file's module-level globals (found by importing the file and inspecting its __dict__). When it inspects the AST, it adds names that it finds in assignment context into the appropriate nested scope. For example, when the scripts sees a FunctionDef AST node, it adds a new function-level scope, and if the function contains a statement like x = 1, it will add the variable x to the function's scope. Then when the function accesses the variable x, the script can retrieve it from the function-level scope in the StackedScopes object.
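A minimal sketch of this lookup strategy (a simplified model, not pyanalyze's actual StackedScopes implementation):

```python
class StackedScopes:
    """Simplified model: a stack of dicts searched from innermost to outermost."""

    def __init__(self, builtin_scope, module_scope):
        self.scopes = [dict(builtin_scope), dict(module_scope)]

    def push_scope(self):
        self.scopes.append({})

    def pop_scope(self):
        self.scopes.pop()

    def set(self, name, value):
        self.scopes[-1][name] = value

    def get(self, name):
        # search from the innermost scope outward, like Python's own name lookup
        for scope in reversed(self.scopes):
            if name in scope:
                return scope[name]
        raise NameError(f"undefined_name: {name}")

scopes = StackedScopes({"len": len}, {"GLOBAL": 1})
scopes.push_scope()  # entering a function
scopes.set("x", 2)
print(scopes.get("x"), scopes.get("GLOBAL"), scopes.get("len") is len)  # 2 1 True
```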

The following scope types exist:

  • builtin_scope is at the bottom of every scope stack and contains standard Python builtin objects.
  • module_scope is always right above builtin_scope and contains module-global names, such as classes and functions defined at the global level in the file.
  • class_scope is entered whenever the AST visitor encounters a class definition. It can contain nested class or function scopes.
  • function_scope is entered for each function definition.

The function scope has a more complicated representation than the others so that it can reflect changes in values during the execution of a function. Broadly speaking, pyanalyze collects the places where every local variable is either written (definition nodes) or read (usage nodes), and it maps every usage node to the set of possible definition nodes that the value may come from. For example, if a variable is written to and then read on the next line, the usage node on the second line is mapped to the definition node on the first line only, but if a variable is set within both the if and the else branch of an if block, a usage after the if block will be mapped to definition nodes from both the if and the else block. If the variable is never set in some branches, a special marker object is used instead, and pyanalyze will emit a possibly_undefined_name error.
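The case described above looks like this at runtime; pyanalyze flags the read of x statically because one branch never assigns it:

```python
def f(cond):
    if cond:
        x = 1
    # x is assigned on only one branch, so this read may fail at runtime
    return x

print(f(True))  # 1
try:
    f(False)
except UnboundLocalError:
    print("UnboundLocalError")  # this is the bug possibly_undefined_name catches
```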

Function scopes also support constraints. Constraints are restrictions on the values a local variable may take. For example, take the following code:

from typing import Union

from pyanalyze import dump_value

def f(x: Union[int, None]) -> None:
    dump_value(x)  # Union[int, None]
    if x is not None:
        dump_value(x)  # int

In this code, the x is not None check is translated into a constraint that is stored in the local scope, similar to how assignments are stored. When a variable is used within the block, we look at active constraints to restrict the type. In this example, this makes pyanalyze able to understand that within the if block the type of x is int, not Union[int, None].

The following constructs are understood as constraints:

  • if x is (not) None
  • if (not) x
  • if isinstance(x, <some type>)

Constraints are used to restrict the types of:

  • Local variables
  • Instance variables (e.g., after if self.x is None, the type of self.x is restricted)
  • Nonlocal variables (variables defined in enclosing scopes)
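The same runtime checks narrow types in ordinary Python code. A sketch of a function whose branches pyanalyze can narrow with these constraints (the function itself is a made-up example):

```python
from typing import Union

def describe(x: Union[int, str, None]) -> str:
    if x is None:            # `x is None` constraint: x narrowed to None
        return "nothing"
    if isinstance(x, str):   # isinstance constraint: x narrowed to str
        return x.upper()
    return str(x + 1)        # only int remains, so x + 1 is safe

print(describe(None), describe("hi"), describe(41))  # nothing HI 42
```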

Type and value inference

Just knowing that a name has been defined doesn't tell you what you can do with the value stored for that name. To get this information, each node visit method in test_scope.py can return an instance of the Value class representing the Python value that corresponds to the AST node. We also use type annotations in the code under consideration to get types for more values. Scope dictionaries also store Value instances to represent the values associated with names.

The following subclasses of Value exist:

  • UnresolvedValue (with a single instance, UNRESOLVED_VALUE), representing that the script knows nothing about the value a node can contain. For example, if a file contains only the function def f(x): return x, the name x will have UNRESOLVED_VALUE as its value within the function, because there is no information to determine what value it can contain.
  • KnownValue represents a value for which the script knows the concrete Python value. If a file contains the line x = 1 and no other assignments to x, x will contain KnownValue(1).
  • TypedValue represents that the script knows the type but not the exact value. If the only assignment to x is a line x = int(some_function()), the script infers that x contains TypedValue(int). More generally, the script infers any call to a class as resulting in an instance of that class. The type is also inferred for the self argument of methods, for comprehensions, for function arguments with type annotations, and in a few other cases. This class has several subtypes:
    • NewTypeValue corresponds to typing.NewType; it indicates a distinct type that is identical to some other type at runtime. At Quora we use newtypes for helper types like qtype.Uid.
    • GenericValue corresponds to generics, like List[int].
  • MultiValuedValue indicates that multiple values are possible, for example because a variable is assigned to in multiple places. After the line x = 1 if condition() else 'foo', x will contain MultiValuedValue([KnownValue(1), KnownValue('foo')]). This corresponds to typing.Union.
  • UnboundMethodValue indicates that the value is a method, but that we don't have the instance the method is bound to. This often comes up when a method in a class SomeClass contains code like self.some_other_method: we know that self is a TypedValue(SomeClass) and that SomeClass has a method some_other_method, but we don't have the instance that self.some_other_method will be bound to, so we can't resolve a KnownValue for it. Returning an UnboundMethodValue in this case makes it still possible to check whether the arguments to the method are valid.
  • ReferencingValue represents a value that is a reference to a name in some other scopes. This is used to implement the global statement: global x creates a ReferencingValue referencing the x variable in the module scope. Assignments to it will affect the referenced value.
  • SubclassValue represents a class object of a class or its subclass. For example, in a classmethod, the type of the cls argument is a SubclassValue of the class the classmethod is defined in. At runtime, it is either this class or a subclass.
  • NoReturnValue indicates that a function will never return (e.g., because it always throws an error), corresponding to typing.NoReturn.

Each Value object has a method is_value_compatible that checks whether types are correct. The call X.is_value_compatible(Y) essentially answers the question: if we expect a value X, is it legal to pass a value Y instead? For example, TypedValue(int).is_value_compatible(KnownValue(1)) will return True, because 1 is a valid int, but TypedValue(int).is_value_compatible(KnownValue("1")) will return False, because "1" is not.
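A simplified model of this check (these are stand-in classes for illustration, not pyanalyze's real implementation) might look like:

```python
class KnownValue:
    """A value whose concrete Python object is known."""

    def __init__(self, val):
        self.val = val

class TypedValue:
    """A value whose type, but not exact value, is known."""

    def __init__(self, typ):
        self.typ = typ

    def is_value_compatible(self, other):
        # a concrete value is compatible if it is an instance of the expected type
        if isinstance(other, KnownValue):
            return isinstance(other.val, self.typ)
        # another TypedValue is compatible if its type is a subclass
        if isinstance(other, TypedValue):
            return issubclass(other.typ, self.typ)
        return False

print(TypedValue(int).is_value_compatible(KnownValue(1)))    # True
print(TypedValue(int).is_value_compatible(KnownValue("1")))  # False
```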

Call compatibility

When the visitor encounters a Call node (representing a function call) and it can resolve the object being called, it will check that the object can in fact be called and that it accepts the arguments given to it. This checks only the number of arguments and the names of keyword arguments, not their types.

The first step in implementing this check is to retrieve the argument specification (argspec) for the callee. Although Python provides the inspect.getargspec function to do this, this function doesn't work on classes and its result needs post-processing to remove the self argument from calls to bound methods. To figure out what arguments classes take, the argspec of their __init__ method is retrieved. It is not always possible to programmatically determine what arguments built-in or Cythonized functions accept, but pyanalyze can often figure this out with the new Python 3 inspect.signature API or by using typeshed, a repository of types for standard library modules.

Once we have the argspec, we can figure out whether the arguments passed to the callee in the AST node under consideration are compatible with the argspec. The semantics of Python calls are sufficiently complicated that it seemed simplest to generate code that contains a function with the argspec and a call to that function with the node's arguments, which can be exec'ed to determine whether the call is valid. All default values and all arguments to the call are set to None. In verbose mode, this generated code is printed out:

$ cat call_example.py
def function(foo, bar=3, baz='baz'):
    return str(foo * bar) + baz

if False:  # to make the module importable
    function(2, bar=2, bax='2')
$ python -m pyanalyze -vv call_example.py
Checking file: ('call_example.py', 3469)
Code to execute:
def str(self, *args, **kwargs):
    return __builtin__.locals()

Variables from function call: {'self': TypedValue(typ=<class 'str'>), 'args': (UnresolvedValue(),), 'kwargs': {}}
Code to execute:
def function(foo, bar=__default_bar, baz=__default_baz):
    return __builtin__.locals()


TypeError("function() got an unexpected keyword argument 'bax'") (code: incompatible_call)
In call_example.py at line 5:
   2:     return str(foo * bar) + baz
   3:
   4: if False:  # to make the module importable
   5:     function(2, bar=2, bax='2')
          ^
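On modern Pythons, inspect.signature().bind() can express essentially the same binding check without generating and exec'ing code. This is a simplified alternative sketch, not what pyanalyze actually does internally:

```python
import inspect

def check_call(func, *args, **kwargs):
    """Return None if the arguments bind to func's signature, else the TypeError message."""
    try:
        inspect.signature(func).bind(*args, **kwargs)
        return None
    except TypeError as e:
        return str(e)

def function(foo, bar=3, baz='baz'):
    return str(foo * bar) + baz

print(check_call(function, 2, bar=2))           # None (valid call)
print(check_call(function, 2, bar=2, bax='2'))  # error message mentioning 'bax'
```

Like pyanalyze's generated-code approach, this checks only argument counts and keyword names, not types.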

Non-existent object attributes

Python throws a runtime AttributeError when you try to access an object attribute that doesn't exist. test_scope.py can statically find some kinds of code that will access non-existent attributes. The simpler case is when code accesses an attribute of a KnownValue, like in a file that has import os and then accesses os.ptah. In this case, we know the value that os contains, so we can try to access the attribute ptah on it, and show an error if the attribute lookup fails. Similarly, os.path will return a KnownValue of the os.path module, so that we can also check attribute lookups on os.path.

Another class of bugs involves objects accessing attributes on self that don't exist. For example, an object may set self.promote in its __init__ method, but then access self.promotion in its tree method. To detect such cases, pyanalyze uses the ClassAttributeChecker class. This class keeps a record of every node where an attribute is written or read on a TypedValue. After checking all code that uses the class, it then takes the difference between the sets of read and written values and shows an error for every attribute that is read but never written. This approach is complicated by inheritance---subclasses may read values only written on the superclass, and vice versa. Therefore, the check doesn't trigger for any attribute that is set on any superclass or subclass of the class under consideration. It also doesn't trigger for any attributes of a class that has a base class that wasn't itself examined by the ClassAttributeChecker. This was needed to deal with Thrift classes that used attributes defined in superclasses outside of code checked by pyanalyze. Two superclasses are excluded from this, so that undefined attributes are flagged on their subclasses even though test_scope.py hasn't examined their definitions: object (the superclass of every class) and qutils.webnode2.Component (which doesn't define any attributes that are read by its subclasses).

Finding unused code

Because pyanalyze tries to resolve all names and attribute lookups in code in a package, it was easy to extend it to determine which of the classes and functions defined in the package aren't accessed in any other code. This is done by recording every name and attribute lookup that results in a KnownValue containing a function or class defined in the package. After the AST visitor runs, it compares the set of accessed objects with another set of all the functions and classes that are defined in submodules of the package. All objects that appear in the second set but not the first are probably unused. (There are false positives, such as functions that are registered in some registry by decorators, or those that are called from outside of the package itself.) This check can be run by passing the --find-unused argument to pyanalyze.

Type system

Pyanalyze partially supports the Python type system, as specified in PEP 484 and in the Python documentation. It uses type annotations to infer types and checks for type compatibility in calls and return types. Supported type system features include generics like List[int], NewType, and TypedDict.

However, support for some features is still missing, including:

  • Callable types
  • Overloaded functions
  • Type variables
  • Protocols

Limitations

Python is sufficiently dynamic that almost any check like the ones run by pyanalyze will inevitably have false positives: cases where the script sees an error, but the code in fact runs fine. Attributes may be added at runtime in hard-to-detect ways, variables may be created by direct manipulation of the globals() dictionary, and the mock module can change anything into anything. Although pyanalyze has a number of whitelists to deal with these false positives, it is usually better to write code in a way that doesn't require use of the whitelist: code that's easier for the script to understand is probably also easier for humans to understand.

Just as the script inevitably has false positives, it equally inevitably cannot find all code that will throw a runtime error. It is generally impossible to statically determine what a program does or whether it runs successfully without actually running the program. Pyanalyze doesn't check program logic and it cannot always determine exactly what value a variable will have. It is no substitute for unit tests.

Developing pyanalyze

Pyanalyze has hundreds of unit tests that check its behavior. To run them, you can just run pytest in the project directory.

The code is formatted using Black.

Comments
  • Annotated AST

    Is there a way to access the parsed AST, with the inferred types? I want to write a code translation tool, and knowing the types would be very useful. I mean, having a regular AST (similar to ast.AST) but with added attributes, such as: declared_type, and inferred_types.

    I tried to look through the code, but couldn't figure it out.

    opened by erezsh 46
  • Incorrect Behavior in `MultiValuedValue.get_type()`

    >>> iv   # our variable
    MultiValuedValue(vals=(SequenceIncompleteValue(typ=<class 'list'>, args=(UnresolvedValue(),), members=(UnresolvedValue(),)), KnownValue(val=[])))
    >>> iv.get_type()
    None
    >>> iv.get_type_value()
    MultiValuedValue(vals=(KnownValue(val=<class 'list'>), KnownValue(val=<class 'list'>)))
    >>> iv.get_type_value().get_type()
    None
    

    I would expect get_type() to return list, since that is clearly the type of this node.

    opened by erezsh 8
  • AssertionError: invalid argspec BoundMethodArgSpecWrapper

    Traceback (most recent call last):
      File "/v/site-packages/pyanalyze/name_check_visitor.py", line 705, in visit
        ret = method(node)
      File "/v/site-packages/pyanalyze/name_check_visitor.py", line 2843, in visit_Subscript
        return_value, _ = self._get_argspec_and_check_call(
      File "/v/site-packages/pyanalyze/name_check_visitor.py", line 3372, in _get_argspec_and_check_call
        extended_argspec = self._get_argspec_from_value(callee_wrapped, node)
      File "/v/site-packages/pyanalyze/name_check_visitor.py", line 3465, in _get_argspec_from_value
        return self._get_argspec(callee_wrapped.val, node, name=name)
      File "/v/site-packages/pyanalyze/name_check_visitor.py", line 3495, in _get_argspec
        return self.arg_spec_cache.get_argspec(obj, name=name, logger=self.log)
      File "/v/site-packages/pyanalyze/arg_spec.py", line 1124, in get_argspec
        argspec = self._cached_get_argspec(obj, kwargs)
      File "/v/site-packages/pyanalyze/arg_spec.py", line 1136, in _cached_get_argspec
        extended = self._uncached_get_argspec(obj, kwargs)
      File "v/site-packages/pyanalyze/arg_spec.py", line 1161, in _uncached_get_argspec
        return BoundMethodArgSpecWrapper(argspec, KnownValue(obj.__self__))
      File "/v/site-packages/pyanalyze/arg_spec.py", line 142, in __init__
        assert isinstance(argspec, ExtendedArgSpec), "invalid argspec %r" % (argspec,)
    AssertionError: invalid argspec BoundMethodArgSpecWrapper(
      argspec=ExtendedArgSpec(
        _has_return_value=False,
        arguments=[Parameter(default_value=<object object at 0x7fa6a833f990>, name='self', typ=None)],
        implementation=None,
        kwargs='kwargs',
        kwonly_args=None,
        name='GenericAlias',
        params_of_names={
          'self': Parameter(default_value=<object object at 0x7fa6a833f990>, name='self', typ=None),
          'args': Parameter(default_value=<object object at 0x7fa6a833f990>, name='args', typ=TypedValue(typ=<class 'tuple'>)),
          'kwargs': Parameter(default_value=<object object at 0x7fa6a833f990>, name='kwargs', typ=TypedValue(typ=<class 'dict'>))},
        return_value=UnresolvedValue(), starargs='args'),
        self_value=TypedValue(typ=<class 'types.GenericAlias'>))
    
    Internal error: AssertionError(-"-) (code: internal_error)
    

    I'm sorry the code is closed-source, I could try to track it down somehow... I get this error using Python 3.9.0b1 but not 3.8

    opened by dimaqq 8
  • Pyanalyze outputs apparently harmless warning message when run on any file

    I get the following message printed to my terminal whenever running pyanalyze on any non-trivial files in my project, once, for the first file Pyanalyze sees. As no traceback or other information is printed, I'm not sure where it's coming from. However, it is not considered an error, as pyanalyze still exits with code 0, and only ever prints once per path spec, but it's a minor annoyance and I figured it would be worth informing you of. Thanks!

    Cannot overload a class or an imported name: NameInfo(name='CodeType', is_exported=False, ast=ImportedName(module_name=('types',), name='CodeType'), child_nodes=None)
    
    opened by CAM-Gerlach 6
  • New release?

    Seems like there have been a lot of improvements lately, but the last release is from Aug 12, over 3 months ago.

    Are there any plans to make a new release soon?

    (no pressure, I'll work off master if necessary, but I prefer to wait for a release if it's coming soon)

    opened by erezsh 5
  • Pyanalyze raises ModuleNotFoundError and prints traceback on absolute or relative imports of local Python modules that aren't on sys.path

    I'm running Pyanalyze against my test suite as well as my production code, with the latter in a separate directory outside my installed package (per the recommendations of the src layout), and not installed itself, but with __init__.py in every directory all the way up to and including the root tests dir. As such, Pyanalyze should be able to determine (with pkgutil/etc) that the given file lies within a Python package, and perform the import accordingly; however, this doesn't work even if I specify the top-level package explicitly as the path argument to the pyanalyze CLI, nor even changing the working directory to the top-level package dir.

    Instead, Pyanalyze fails to handle both absolute imports, as tested on the latest master, with the error you'd expect:

    Traceback (most recent call last):
      File "c:\users\c. a. m. gerlach\documents\dev\spacex\pyanalyze\pyanalyze\name_check_visitor.py", line 812, in _load_module
        return self.load_module(self.filename)
      File "c:\users\c. a. m. gerlach\documents\dev\spacex\pyanalyze\pyanalyze\name_check_visitor.py", line 832, in load_module
        return importer.load_module_from_file(filename)
      File "c:\users\c. a. m. gerlach\documents\dev\spacex\pyanalyze\pyanalyze\importer.py", line 98, in load_module_from_file
        return import_module(str(abspath), abspath), False
      File "c:\users\c. a. m. gerlach\documents\dev\spacex\pyanalyze\pyanalyze\importer.py", line 106, in import_module
        spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 850, in exec_module
      File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
      File "C:\Users\C. A. M. Gerlach\Documents\dev\SpaceX\submanager\tests\functional\test_validate.py", line 17, in <module>
        from tests.functional.conftest import (
    ModuleNotFoundError: No module named 'tests.functional'
    

    and relative imports, with an error message that further indicates that something isn't correctly handling spaces in the path (from a username I created a decade ago when I didn't know better, unfortunately):

    Traceback (most recent call last):
      File "c:\users\c. a. m. gerlach\documents\dev\spacex\pyanalyze\pyanalyze\name_check_visitor.py", line 812, in _load_module
        return self.load_module(self.filename)
      File "c:\users\c. a. m. gerlach\documents\dev\spacex\pyanalyze\pyanalyze\name_check_visitor.py", line 832, in load_module
        return importer.load_module_from_file(filename)
      File "c:\users\c. a. m. gerlach\documents\dev\spacex\pyanalyze\pyanalyze\importer.py", line 98, in load_module_from_file
        return import_module(str(abspath), abspath), False
      File "c:\users\c. a. m. gerlach\documents\dev\spacex\pyanalyze\pyanalyze\importer.py", line 106, in import_module
        spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 850, in exec_module
      File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
      File "C:\Users\C. A. M. Gerlach\Documents\dev\SpaceX\submanager\tests\functional\test_validate.py", line 17, in <module>
        from .conftest import (
    ModuleNotFoundError: No module named 'C:\\Users\\C'
    

    What's more, this would be tolerable, since the analysis still works quite well and I use local test imports very sparingly in my tests (essentially only to import common types), and I can silence the Pyanalyze message (by putting # static analysis: ignore[import_failed] as an inline comment on the first non-blank line of the file, which is awkward but possible). However, there appears to be no way to silence the long traceback that prints for every file and module that fails to import, making the output essentially useless.

    Therefore, the only option currently is to use a wrapper script that temporarily adds the tests dir to sys.path, runs pyanalyze, and then restores sys.path to its original value, or else to not run pyanalyze on those files at all.
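    For reference, here's a minimal sketch of such a wrapper (the `tests` package name and the use of the current directory as the project root are assumptions, and `pythonpath_env`/`run_pyanalyze` are names I made up). It prepends the project root to `PYTHONPATH` for a pyanalyze subprocess, so the parent interpreter's `sys.path` never needs to be mutated or restored:

    ```python
    import os
    import subprocess
    import sys

    def pythonpath_env(project_root, base_env=None):
        """Return a copy of the environment with project_root prepended to PYTHONPATH."""
        env = dict(base_env if base_env is not None else os.environ)
        env["PYTHONPATH"] = os.pathsep.join(
            p for p in (project_root, env.get("PYTHONPATH")) if p
        )
        return env

    def run_pyanalyze(paths):
        # Running pyanalyze in a subprocess means the parent interpreter's
        # sys.path is untouched; the child picks up PYTHONPATH at startup.
        return subprocess.run(
            [sys.executable, "-m", "pyanalyze", *paths],
            env=pythonpath_env(os.getcwd()),
        )
    ```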

    At the very least, can the error be properly suppressed, and ideally pyanalyze correctly handle local imports from modules in packages? Or any other ideas to resolve this? Thanks!

    opened by CAM-Gerlach 4
  • Bug Report: inferred_value doesn't propagate to the subscript of a list

    Bug Report: inferred_value doesn't propagate to the subscript of a list

    The following code demonstrates the bug:

    a = [1, 2, 3]

    if a[:0]:
        print("// bad")

    import pyanalyze

    def main():
        tree = pyanalyze.ast_annotator.annotate_file(__file__)
        print(repr(tree.body[1].test.inferred_value))        # UnresolvedValue()
        print(repr(tree.body[1].test.value.inferred_value))  # KnownValue(val=[1, 2, 3])
    
    main()
    

    The issue, from my standpoint, is that when I ask for the type of the if's condition, I get None instead of list, even though there should be no doubt that it is a list.
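    To illustrate the kind of behavior I'd want, a hypothetical fallback helper (`fallback_value` is my name, not pyanalyze API) could report the child node's value whenever the Subscript node itself is unresolved:

    ```python
    import ast

    def fallback_value(node):
        # Hypothetical helper (not part of pyanalyze): if the node's own
        # inferred_value is missing or unresolved, fall back to the inferred
        # value of the object being subscripted. Checking the class name
        # avoids a hard import of pyanalyze's UnresolvedValue here.
        val = getattr(node, "inferred_value", None)
        unresolved = val is None or type(val).__name__ == "UnresolvedValue"
        if unresolved and isinstance(node, ast.Subscript):
            return getattr(node.value, "inferred_value", None)
        return val
    ```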

    I'd be willing to attempt to fix it myself, if you could give me a few pointers.

    Btw,

    Your library has been very useful to me so far, to establish call graphs and discern types, so thank you!

    opened by erezsh 4
  • Protocol support

    Protocol support

    • [x] Fix 3.7
    • [x] Extract protocols from stubs
    • [x] Make protocol bases from typeshed work
    • [ ] Test against internal codebase
    • [ ] Profile to see if some additional caching is valuable
    opened by JelleZijlstra 4
  • While Loops: handle exiting a loop from within a conditional block

    While Loops: handle exiting a loop from within a conditional block

    The existing logic doesn't recognize that you can break out of loops inside of an if statement, since it only checks for LEAVES_LOOP in the main scope.
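    A sketch of the pattern that trips the current logic (`find_first_even` is an illustrative example, not code from the repo): both exits from the loop are break statements nested inside if blocks, so the code after the loop is reachable even though no break appears at the top level of the loop body.

    ```python
    def find_first_even(nums):
        # Both breaks below live inside `if` blocks, yet the `return result`
        # after the loop is still reachable.
        result = None
        i = 0
        while True:
            if i >= len(nums):
                break
            if nums[i] % 2 == 0:
                result = nums[i]
                break
            i += 1
        return result
    ```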

    opened by nbdaaron 3
  • Default to the CWD and don't dump a traceback if no path specified

    Default to the CWD and don't dump a traceback if no path specified

    Sorry I didn't get this in for v0.3.0! If no path is specified on the command line, pyanalyze will default to checking the current working directory, like flake8, rather than erroring out and dumping a traceback. Tested with and without paths passed and by using the entrypoint and python -m.

    Fixes #219

    opened by CAM-Gerlach 3
  • Add --disable-all CLI flag and allow enabling/disabling specific checks when --*-all passed

    Add --disable-all CLI flag and allow enabling/disabling specific checks when --*-all passed

    Fixes #211

    Allows checks to be disabled with -d when --enable-all is passed, to support a strict default-enabled/deny-list approach to checks. Furthermore, adds a new --disable-all flag to the CLI, which disables all checks except those enabled with -e, to support a default-disabled/allow-list approach.

    There didn't appear to be an obvious one-letter equivalent (-A and -a were already taken, as was -d, and -D would be confusingly non-parallel with -a). However, if you want me to add one, I certainly can.

    I've manually tested this in a variety of cases locally, but I'm not sure what, if anything, should be done about tests. I couldn't find where CLI args were tested, so I'm not sure what the current approach is. And while I'm reasonably experienced with pytest and know just enough unittest/xUnit to be dangerous, I'm not really familiar with nose, considering it has been unmaintained since before I learned Python, haha. So I'd appreciate some guidance on this, thanks!

    opened by CAM-Gerlach 3
  • Bump sphinx from 5.3.0 to 6.1.2

    Bump sphinx from 5.3.0 to 6.1.2

    Bumps sphinx from 5.3.0 to 6.1.2.

    Release notes

    Sourced from sphinx's releases.

    v6.1.2

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v6.1.1

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v6.1.0

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v6.0.1

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v6.0.0

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v6.0.0b2

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v6.0.0b1

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    Changelog

    Sourced from sphinx's changelog.

    Release 6.1.2 (released Jan 07, 2023)

    Bugs fixed

    • #11101: LaTeX: div.topic_padding key of sphinxsetup documented at 5.1.0 was implemented with name topic_padding

    • #11099: LaTeX: shadowrule key of sphinxsetup causes PDF build to crash since Sphinx 5.1.0

    • #11096: LaTeX: shadowsize key of sphinxsetup causes PDF build to crash since Sphinx 5.1.0

    • #11095: LaTeX: shadow of :dudir:topic and contents_ boxes not in page margin since Sphinx 5.1.0

      .. _contents: https://docutils.sourceforge.io/docs/ref/rst/directives.html#table-of-contents

    • #11100: Fix copying images when running under parallel mode.

    Release 6.1.1 (released Jan 05, 2023)

    Bugs fixed

    • #11091: Fix util.nodes.apply_source_workaround for literal_block nodes with no source information in the node or the node's parents.

    Release 6.1.0 (released Jan 05, 2023)

    Dependencies

    Incompatible changes

    • #10979: gettext: Removed support for pluralisation in get_translation. This was unused and complicated other changes to sphinx.locale.

    Deprecated

    • sphinx.util functions:

      • Renamed sphinx.util.typing.stringify() to sphinx.util.typing.stringify_annotation()

    ... (truncated)

    dependencies python 
    opened by dependabot[bot] 0
  • Incompatible bounds on type variable when using unbound method with map()

    Incompatible bounds on type variable when using unbound method with map()

    Given:

    list(map(str.strip, fields))
    

    We get this error:

    E         Cannot call overloaded function (code: incompatible_call)
    E             In overload (func: (~_T1, /) -> ~_S, iter1: collections.abc.Iterable[~_T1], /) -> map[~_S]
    E               Cannot resolve type variables
    E                     Incompatible bounds on type variable
    E                       str is not a literal
    E                         LiteralString >= ~_T1
    E                         str <= ~_T1
    E
    E
    E         In /home/[...] at line 77
    E           77:             list(map(str.strip, fields))
    E                                ^
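    Until this is fixed, two possible workarounds (sketches only, nothing pyanalyze-specific) are to avoid passing the unbound str.strip to map() directly:

    ```python
    from operator import methodcaller

    fields = ["  alpha ", " beta"]

    # methodcaller("strip") builds a plain callable, so no TypeVar is bound
    # against the unbound str.strip; the comprehension avoids map() entirely.
    via_methodcaller = list(map(methodcaller("strip"), fields))
    via_comprehension = [f.strip() for f in fields]
    print(via_methodcaller)  # ['alpha', 'beta']
    ```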
    
    opened by besfahbod 0
  • Maybe replace dependency on codemod with another library?

    Maybe replace dependency on codemod with another library?

    https://github.com/quora/pyanalyze/blob/b20132810f8519b3b8302fc190a372f1f7a52340/pyanalyze/node_visitor.py#L10

    https://pypi.org/project/codemod/ hasn't been updated since 2017 and its cli doesn't even work under a PY3.6 environment anymore.

    Maybe there's an alternative, up-to-date datatype we can use here, so we can retire the codemod package usage?

    opened by besfahbod 1
  • Type narrowing for subscripting is too eager

    Type narrowing for subscripting is too eager

    import pandas as pd
    
    
    def x():
    
        res = pd.DataFrame([1, 2, 3])
    
        res["mean"] = [None] * 3
        reveal_type(res["mean"])
    

    This will currently produce something like list[None], but that's incorrect because DataFrames return Series objects from __getitem__. Mypy and pyright get this right, so it may be worth looking into what heuristics they are using.
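    A pandas-free sketch of why this narrowing is unsound in general (the Frame class is a made-up stand-in for DataFrame): a __setitem__/__getitem__ pair need not round-trip the assigned value.

    ```python
    class Frame:
        # Stores a list per column but hands back a tuple, just as a
        # DataFrame stores a column and hands back a Series.
        def __init__(self):
            self._cols = {}

        def __setitem__(self, key, values):
            self._cols[key] = list(values)   # stores a plain list

        def __getitem__(self, key):
            return tuple(self._cols[key])    # but returns a different type

    f = Frame()
    f["mean"] = [None] * 3
    print(type(f["mean"]).__name__)  # tuple, not list
    ```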

    Solution I can think of: currently NameCheckVisitor.composite_from_subscript sets the narrowed type regardless of the type of the subscripted object. If we instead did this only in the implementations of list.__setitem__ and dict.__setitem__ (where we know it's safe), that would solve the problem.

    opened by JelleZijlstra 0
Releases(v0.8.0)
  • v0.8.0(Nov 6, 2022)

    Release highlights:

    • Support for Python 3.11
    • Drop support for Python 3.6
    • Support for PEP 692 (Unpack on **kwargs)
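    A minimal sketch of the **kwargs typing this enables (the Style TypedDict and render function are illustrative only; typing_extensions is used as a fallback on Pythons before 3.11):

    ```python
    from __future__ import annotations  # keep annotations unevaluated at runtime

    try:
        from typing import TypedDict, Unpack  # Unpack is in typing from 3.11
    except ImportError:
        from typing_extensions import TypedDict, Unpack

    class Style(TypedDict, total=False):
        color: str
        bold: bool

    # **opts now has per-key types instead of one homogeneous value type.
    def render(text: str, **opts: Unpack[Style]) -> str:
        return ("*" if opts.get("bold") else "") + text

    print(render("hi", bold=True))  # *hi
    ```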

    Full changelog:

    • Infer async def functions as returning Coroutine, not Awaitable (#557, #559)
    • Drop support for Python 3.6 (#554)
    • Require typeshed_client>=2.1.0. Older versions will throw false-positive errors around context managers when typeshed_client 2.1.0 is installed. (#554)
    • Fix false positive error on certain method calls on literals (#548)
    • Preserve Annotated annotations on access to methods of literals (#541)
    • allow_call callables are now also called if the arguments are literals wrapped in Annotated (#540)
    • Support Python 3.11 (#537)
    • Fix type checking of binary operators involving unions (#531)
    • Improve TypeVar solution heuristic for constrained typevars with multiple solutions (#532)
    • Fix resolution of stringified annotations in __init__ methods (#530)
    • Type check yield, yield from, and return nodes in generators (#529)
    • Type check calls to comparison operators (#527)
    • Retrieve attributes from stubs even when a runtime equivalent exists (#526)
    • Fix attribute access to stub-only names (#525)
    • Remove a number of unnecessary special-cased signatures (#499)
    • Add support for use of the Unpack operator to annotate heterogeneous *args and **kwargs parameters (#523)
    • Detect incompatible types for some calls to list.append, list.extend, list.__add__, and set.add (#522)
    • Optimize local variables with very complex inferred types (#521)
    Source code(tar.gz)
    Source code(zip)
  • v0.7.0(Apr 13, 2022)

    Release highlights:

    • Support for PEP 673 (Self)
    • Support for PEP 675 (LiteralString)
    • Support for assert_type and other additions to typing in Python 3.11
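    A small sketch of what Self support means in practice (the Shape/Circle classes are illustrative; typing_extensions provides Self before Python 3.11):

    ```python
    from __future__ import annotations  # keep annotations unevaluated at runtime

    try:
        from typing import Self  # typing.Self is new in Python 3.11
    except ImportError:
        from typing_extensions import Self

    class Shape:
        def scale(self, factor: float) -> Self:
            self.factor = factor
            return self

    class Circle(Shape):
        pass

    # Self makes the base-class method return Circle here, not Shape.
    c = Circle().scale(2.0)
    print(type(c).__name__)  # Circle
    ```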

    Full changelog:

    • Remove SequenceIncompleteValue (#519)
    • Add implementation function for dict.pop (#517)
    • Remove WeakExtension (#517)
    • Fix propagation of no-return-unless constraints from calls to unions (#518)
    • Initial support for variable-length heterogeneous sequences (required for PEP 646). More precise types are now inferred for heterogeneous sequences containing variable-length objects. (#515, #516)
    • Support LiteralString (PEP 675) (#514)
    • Add unused_assignment error code, separated out from unused_variable. Enable these error codes and possibly_undefined_name by default (#511)
    • Fix handling of overloaded methods called on literals (#513)
    • Partial support for running on Python 3.11 (#512)
    • Basic support for checking Final and for checking re-assignments to variables declared with a specific type (#505)
    • Correctly check the self argument to @property getters (#506)
    • Correctly track assignments of variables inside try blocks and inside with blocks that may suppress exceptions (#504)
    • Support mappings that do not inherit from collections.abc.Mapping (#501)
    • Improve type inference for calls to set(), list(), and tuple() with union arguments (#500)
    • Remove special-cased signature for sorted() (#498)
    • Support type narrowing on bool() calls (#497)
    • Support context managers that may suppress exceptions (#496)
    • Fix type inference for with assignment targets on Python 3.7 and higher (#495)
    • Fix bug where code after a while loop is considered unreachable if all break statements are inside of if statements (#494)
    • Remove support for detecting properties that represent synchronous equivalents of asynq methods (#493)
    • Enable exhaustive checking of enums and booleans (#492)
    • Fix type narrowing in else branch if constraint is stored in a variable (#491)
    • Fix incorrectly inferred Never return type for some function implementations (#490)
    • Infer precise call signatures for TypedDict types (#487)
    • Add mechanism to prevent crashes on objects with unusual __getattr__ methods (#486)
    • Infer callable signatures for objects with a __getattr__ method (#485, #488)
    • Do not treat attributes that raise an exception on access as nonexistent (#481)
    • Improve detection of unhashable dict keys and set members (#469)
    • The in and not in operators always return booleans (#480)
    • Allow NotImplemented to be returned from special methods that support it (#479)
    • Fix bug affecting type compatibility between generics and literals (#474)
    • Add support for typing.Never and typing_extensions.Never (#472)
    • Add inferred_any, an extremely noisy error code that triggers whenever the type checker infers something as Any (#471)
    • Optimize type compatibility checks on large unions (#469)
    • Detect incorrect key types passed to dict.__getitem__ (#468)
    • Pick up the signature of open() from typeshed correctly (#463)
    • Do not strip away generic parameters explicitly set to Any (#467)
    • Fix bug that led to some overloaded calls incorrectly resolving to Any (#462)
    • Support __init__ and __new__ signatures from typeshed (#430)
    • Fix incorrect type inferred for indexing operations on subclasses of list and tuple (#461)
    • Add plugin providing a precise type for dict.get calls (#460)
    • Fix internal error when an __eq__ method throws (#461)
    • Fix handling of async def methods in stubs (#459)
    • Treat Thrift enums as compatible with protocols that int is compatible with (#457)
    • Assume that dataclasses have no dynamic attributes (#456)
    • Treat Thrift enums as compatible with int (#455)
    • Fix treatment of TypeVar with bounds or constraints as callables (#454)
    • Improve TypeVar solution algorithm (#453)
    • Cache decisions about whether classes implement protocols (#450)
    • Fix application of multiple suggested changes per file when an earlier change has added or removed lines (#449)
    • Treat NoReturn like Any in **kwargs calls (#446)
    • Improve error messages for overloaded calls (#445)
    • Infer NoReturn instead of Any for unreachable code (#443)
    • Make NoReturn compatible with all other types (#442)
    • Fix treatment of walrus operator in and, or, and if/else expressions (#441)
    • Refactor isinstance() support (#440)
    • Exclude Any[unreachable] from unified values (#439)
    • Add support for reveal_locals() (#436)
    • Add support for assert_error() (#435)
    • Add support for assert_type() (#434)
    • reveal_type() and dump_value() now return their argument, the anticipated behavior for typing.reveal_type() in Python 3.11 (#433)
    • Fix return type of async generator functions (#431)
    • Type check function decorators (#428)
    • Handle NoReturn in async def functions (#427)
    • Support PEP 673 (typing_extensions.Self) (#423)
    • Updates for compatibility with recent changes in typeshed (#421):
      • Fix override compatibility check for unknown callables
      • Fix usage of removed type _typeshed.SupportsLessThan
    • Remove old configuration abstraction (#414)
    Source code(tar.gz)
    Source code(zip)
  • v0.6.0(Jan 13, 2022)

    Release highlights:

    • Support for configuration through pyproject.toml. The old configuration mechanism will be removed in the next release.
    • Support for experimental new type evaluation mechanism, providing a more powerful replacement for overloads.
    • Support for suggesting annotations for unannotated code.

    Full changelog:

    • Support generic type evaluators (#409)
    • Implement return annotation behavior for type evaluation functions (#408)
    • Support extend_config option in pyproject.toml (#407)
    • Remove the old method return type check. Use the new incompatible_override check instead (#404)
    • Migrate remaining config options to new abstraction (#403)
    • Fix stub classes with references to themselves in their base classes, such as os._ScandirIterator in typeshed (#402)
    • Fix type narrowing on the else case of issubclass() (#401)
    • Fix indexing a list with an index typed as a TypeVar (#400)
    • Fix "This function should have an @asynq() decorator" false positive on lambdas (#399)
    • Fix compatibility between Union and Annotated (#397)
    • Fix potential incorrect inferred return value for unannotated functions (#396)
    • Fix compatibility between Thrift enums and TypeVars (#394)
    • Fix accessing attributes on Unions nested within Annotated (#393)
    • Fix interaction of register_error_code() with new configuration mechanism (#391)
    • Check against invalid Signature objects and prepare for refactoring Signature compatibility logic (#390)
    • Treat int and float as compatible with complex, as specified in PEP 484 (#389)
    • Do not error on boolean operations on values typed as object (#388)
    • Support type narrowing on enum types and bool in match statements (#387)
    • Support some imports from stub-only modules (#386)
    • Support type evaluation functions in stubs (#386)
    • Support TypedDict in stubs (#386)
    • Support TypeAlias (PEP 612) (#386)
    • Small improvements to ParamSpec support (#385)
    • Allow CustomCheck to customize what values a value can be assigned to (#383)
    • Fix incorrect inference of self argument on some nested methods (#382)
    • Fix compatibility between Callable and Annotated (#381)
    • Fix inference for nested async def functions (#380)
    • Fix usage of type variables in function parameters with defaults (#378)
    • Support the Python 3.10 match statement (#376)
    • Support the walrus (:=) operator (#375)
    • Initial support for proposed new "type evaluation" mechanism (#374, #379, #384, #410)
    • Create command-line options for each config option (#373)
    • Overhaul treatment of function definitions (#372)
      • Support positional-only arguments
      • Infer more precise types for lambda functions
      • Infer more precise types for nested functions
      • Refactor related code
    • Add check for incompatible overrides in child classes (#371)
    • Add pyanalyze.extensions.NoReturnGuard (#370)
    • Infer call signatures for Type[X] (#369)
    • Support configuration in a pyproject.toml file (#368)
    • Require typeshed_client 2.0 (#361)
    • Add JSON output for integrating pyanalyze's output with other tools (#360)
    • Add check that suggests parameter and return types for untyped functions, using the new suggested_parameter_type and suggested_return_type codes (#358, #359, #364)
    • Extract constraints from multi-comparisons (a < b < c) (#354)
    • Support positional-only arguments with the __ prefix outside of stubs (#353)
    • Add basic support for ParamSpec (#352)
    • Fix error on use of AbstractAsyncContextManager (#350)
    • Check with and async with statements (#344)
    • Improve type compatibility between generics and literals (#346)
    • Infer signatures for method wrapper objects (bound methods of builtin types) (#345)
    • Allow storing type narrowing constraints in variables (#343)
    • The first argument to __new__ and __init_subclass__ does not need to be self (#342)
    • Drop dependencies on attrs and mypy_extensions (#341)
    • Correct location of error for incompatible parameter (#339)
    Source code(tar.gz)
    Source code(zip)
  • v0.5.0(Dec 13, 2021)

    Version 0.5.0 (December 12, 2021)

    Highlights:

    • Partial support for @overload
    • Support for Protocol
    • Type check calls with *args or **kwargs
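    A minimal sketch of structural Protocol matching (SupportsClose and TempResource are illustrative names): a class satisfies the protocol by shape alone, with no inheritance required.

    ```python
    from typing import Protocol, runtime_checkable

    @runtime_checkable
    class SupportsClose(Protocol):
        def close(self) -> None: ...

    class TempResource:
        def close(self) -> None:
            self.closed = True

    def shutdown(resource: SupportsClose) -> None:
        resource.close()

    r = TempResource()
    shutdown(r)  # structural match: TempResource never inherits the Protocol
    print(isinstance(r, SupportsClose))  # True, via @runtime_checkable
    ```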

    Full changelog:

    • Recognize code following an infinite while loop as unreachable (#337)
    • Recognize overloaded functions in stubs (#325)
    • Fix handling of classes in stubs that have an incorrect __qualname__ at runtime (#336)
    • Fix type compatibility with generic functions (#335)
    • Support function calls in annotations (#334)
    • Better support for TypeVar bounds and constraints in stubs (#333)
    • Improve type checking of dict.update and dict.copy (#328)
    • Improve support for complex type aliases in stubs (#331)
    • Limit special case for Literal callables to functions, not any callable (#329)
    • Support for constants in stubs that do not exist at runtime (#330)
    • Fix detection of PEP 604 union types in stubs (#327)
    • Support literals over negative numbers in stubs and stringified annotations (#326)
    • Improved overload matching algorithm (#321) (#324)
    • Support runtime overloaded functions with pyanalyze.extensions.overload (#318)
    • Internal support for overloaded functions (#316)
    • Support TypeVar bounds and constraints (#315)
    • Improve error messages involving concrete dictionary and sequence values (#312)
    • More precise type inference for dict literals (#312)
    • Support AsynqCallable with no arguments as an annotation (#314)
    • Support iteration over old-style iterables providing only __getitem__ (#313)
    • Add support for runtime Protocols (#311)
    • Stop inferring Any for non-runtime checkable Protocols on Python 3.6 and 3.7 (#310)
    • Fix false positive where multiprocessing.Pool.map_async was identified as an asynq method (#306)
    • Fix handling of nested classes (#305)
    • Support Protocols for runtime types that are also defined in stubs (#297) (#307)
    • Better detect signatures of methods in stub files (#304)
    • Improve handling of positional-only arguments in stub files (#303)
    • Fix bug where pyanalyze incorrectly inferred that an attribute always exists (#302)
    • Fix compatibility of signatures with extra parameters (#301)
    • Enhance reveal_type() output for UnboundMethodValue (#300)
    • Fix handling of async for (#298)
    • Add support for stub-only Protocols (#295)
    • Basic support for stub-only types (#290)
    • Require typing_inspect>=0.7.0 (#290)
    • Improve type checking of raise statements (#289)
    • Support Final with arguments and ClassVar without arguments (#284)
    • Add pyanalyze.extensions.NoAny (#283)
    • Overhaul documentation (#282)
    • Type check calls with *args or **kwargs (#275)
    • Infer more precise types for comprehensions over known iterables (#279)
    • Add impl function for list.__iadd__ (+=) (#280)
    • Simplify some overly complex types to improve performance (#280)
    • Detect usage of implicitly reexported names (#271)
    • Improve type inference for iterables (#277)
    • Fix bug in type narrowing for in/not in (#277)
    • Changes affecting consumers of Value objects:
      • All Value objects are now expected to be hashable.
      • DictIncompleteValue and AnnotatedValue use tuples instead of lists internally.
      • DictIncompleteValue now stores a sequence of KVPair object instead of just key-value pairs, enabling more granular information.
      • The type of a TypedValue may now be a string
    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(Nov 18, 2021)

    • Support and test Python 3.10. Note that new features are not necessarily supported.
    • Support PEP 655 (typing_extensions.Required and NotRequired)
    • Improve detection of missing return statements
    • Improve detection of suspicious boolean conditions
    • The return type of calls with *args or **kwargs is now inferred correctly. The arguments are still not typechecked.
    • Fix bug affecting type compatibility between literals and generics
    • Improve type narrowing on the in/not in operator
    • Improve type checking for format strings
    • Add the pyanalyze.value.AnyValue class, replacing pyanalyze.value.UNRESOLVED_VALUE
    • Improve formatting for Union types in errors
    • Fix bug affecting type compatibility between types and literals
    • Support total=False in TypedDict
    • Deal with typeshed changes in typeshed_client 1.1.2
    • Better type checking for list and tuple.__getitem__
    • Improve type narrowing on the ==/!= operator
    • Reduce usage of VariableNameValue
    • Improve TypeVar inference procedure
    • Add support for constraints on the type of self, including if it has a union type
    • Detect undefined enum.Enum members
    • Improve handling of Annotated
    • Add pyanalyze.extensions.CustomCheck
    • Add pyanalyze.extensions.ExternalType
    • If you have code dealing with Value objects, note that there are several changes:
      • value is UNRESOLVED_VALUE will no longer be reliable. Use isinstance(value, AnyValue) instead.
      • TypedDictValue now stores whether each key is required or not in its items dictionary.
      • UnboundMethodValue now stores a Composite object instead of a Value object, and has a new typevars field.
      • There is a new KnownValueWithTypeVars class, but it should not be relevant to most use cases.
    Source code(tar.gz)
    Source code(zip)
  • v0.3.1(Aug 12, 2021)

    • Exit with a non-zero exit code when errors occur (contributed by C.A.M. Gerlach)
    • Type check the working directory if no command-line arguments are given (contributed by C.A.M. Gerlach)
    Source code(tar.gz)
    Source code(zip)
  • v0.3.0(Aug 1, 2021)

    • Type check calls on Unions properly
    • Add pyanalyze executable
    • Add --enable-all and --disable-all flags (contributed by C.A.M. Gerlach)
    • Bug fixes
    Source code(tar.gz)
    Source code(zip)
Owner
Quora
Quora’s mission is to share and grow the world’s knowledge.

wemake.services 2.1k Jan 5, 2023
The uncompromising Python code formatter

The Uncompromising Code Formatter “Any color you like.” Black is the uncompromising Python code formatter. By using it, you agree to cede control over

Python Software Foundation 30.7k Dec 28, 2022
A Python utility / library to sort imports.

Read Latest Documentation - Browse GitHub Code Repository isort your imports, so you don't have to. isort is a Python utility / library to sort import

Python Code Quality Authority 5.5k Jan 6, 2023