Green is a clean, colorful, fast Python test runner.

Overview


Green -- A clean, colorful, fast Python test runner.

Features

  • Clean - Low redundancy in output. Result statistics for each test are vertically aligned.
  • Colorful - Terminal output makes good use of color when the terminal supports it.
  • Fast - Tests run in independent processes (one per processor by default; does not play nicely with gevent).
  • Powerful - Multi-target + auto-discovery.
  • Traditional - Use the normal unittest classes and methods for your unit tests.
  • Descriptive - Multiple verbosity levels, from just dots to full docstring output.
  • Convenient - Bash-completion and ZSH-completion of options and test targets.
  • Thorough - Built-in integration with coverage.
  • Embedded - Can be run with a setup command without in-site installation.
  • Modern - Supports Python 3.5+. Additionally, PyPy is supported on a best-effort basis.
  • Portable - macOS, Linux, and BSDs are fully supported. Windows is supported on a best-effort basis.
  • Living - This project grows and changes. See the changelog.

Community

  • For questions, comments, or feature requests, please open a discussion.
  • For bug reports, please submit an issue to the GitHub issue tracker for Green.
  • Submit a pull request with a bug fix or new feature.
  • πŸ’– Sponsor the maintainer to support this project.

Training Course

There is a training course available if you would like professional training: Python Testing with Green.


Screenshots

Top: With Green! Bottom: Without Green :-(


Quick Start

pip3 install green    # To upgrade: "pip3 install --upgrade green"

Now run green...

# From inside your code directory
green

# From outside your code directory
green code_directory

# A specific file
green test_stuff.py

# A specific test inside a large package.
#
# Assuming you want to run TestClass.test_function inside
# package/test/test_module.py ...
green package.test.test_module.TestClass.test_function

# To see examples of all the failures, errors, etc. that could occur:
green green.examples


# To run Green's own internal unit tests:
green green

For more help, see the complete command-line options or run green --help.

Config Files

Configuration settings are resolved in this order, with settings found later in the resolution chain overwriting earlier settings (last setting wins).

  1. $HOME/.green
  2. A config file specified by the environment variable $GREEN_CONFIG
  3. setup.cfg in the current working directory of the test run
  4. .green in the current working directory of the test run
  5. A config file specified by the command-line argument --config FILE
  6. Command-line arguments

Any arguments specified in more than one place will be overwritten by the value of the LAST place the setting is seen. So, for example, if a setting is turned on in ~/.green and turned off by a command-line argument, then the setting will be turned off.

Config file format syntax is option = value on separate lines. option is the same as the long options, just without the double-dash (--verbose becomes verbose).

Most values should be True or False. Accumulated values (verbose, debug) should be specified as integers (-vv would be verbose = 2).

Example:

verbose       = 2
logging       = True
omit-patterns = myproj*,*prototype*
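
To load the config from a custom location (step 2 in the resolution order above), point the $GREEN_CONFIG environment variable at the file before running green. A minimal sketch, assuming the file above was saved to the hypothetical path /path/to/green.conf:

export GREEN_CONFIG=/path/to/green.conf
green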

Troubleshooting

One easy way to avoid common importing problems is to navigate to the parent directory of the directory your Python code is in. Then pass green the directory your code is in and let it autodiscover the tests (see the Tutorial below for tips on making your tests discoverable).

cd /parent/directory
green code_directory

Another way to address importing problems is to carefully set up your PYTHONPATH environment variable to include the parent path of your code directory. Then you should be able to just run green from inside your code directory.

export PYTHONPATH=/parent/directory
cd /parent/directory/code_directory
green

Integration

Bash and Zsh

To enable Bash-completion and Zsh-completion of options and test targets when you press Tab in your terminal, add the following line to the Bash or Zsh config file of your choice (usually ~/.bashrc or ~/.zshrc):

which green >& /dev/null && source "$( green --completion-file )"

Coverage

Green has built-in integration support for the coverage module. Add -r or --run-coverage when you run green.
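
For example, to run the tests in a package with coverage reporting (myproj is a hypothetical package name):

green -r myproj               # short option
green --run-coverage myproj   # long option, same effect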

setup.py command

Green is available as a setup.py runner, invoked as any other setup command:

python setup.py green

This requires green to be present in the setup_requires section of your setup.py file. To run green on a specific target, use the test_suite argument (or leave blank to let green discover tests itself):

# setup.py
from setuptools import setup

setup(
    ...
    setup_requires = ['green'],
    # test_suite = "my_project.tests"
)

You can also add an alias to the setup.cfg file, so that python setup.py test actually runs green:

# setup.cfg

[aliases]
test = green

Django

Django can use green as the test runner for running tests.

  • To just try it out, use the --testrunner option of manage.py:
./manage.py test --testrunner=green.djangorunner.DjangoRunner
  • Make it persistent by adding the following line to your settings.py:
TEST_RUNNER="green.djangorunner.DjangoRunner"
  • For verbosity, green adds an extra command-line option to manage.py, to which you can pass the number of v's you would have used with green:
./manage.py test --green-verbosity 3

nose-parameterized

Green will run generated tests created by nose-parameterized. They have lots of examples of how to generate tests, so follow the link above if you're interested.
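
As a minimal sketch, assuming the nose_parameterized package is installed (the TestSquares class and its data are hypothetical, not taken from the nose-parameterized docs), a module of generated tests might look like this:

import unittest

from nose_parameterized import parameterized

class TestSquares(unittest.TestCase):

    # expand() generates one test method per tuple (test_square_0, ...)
    @parameterized.expand([
        (2, 4),
        (3, 9),
        (4, 16),
    ])
    def test_square(self, n, expected):
        self.assertEqual(n * n, expected)

Green discovers and runs each generated method just like a hand-written test.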

Unit Test Structure Tutorial

This tutorial covers:

  • External structure of your project (directory and file layout)
  • Skeleton of a real test module
  • How to import stuff from your project into your test module
  • Gotchas about naming...everything.
  • Where to run green from and what the output could look like.
  • DocTests

For more in-depth online training please check out Python Testing with Green:

  • Layout your test packages and modules correctly
  • Organize your tests effectively
  • Learn the tools in the unittest and mock modules
  • Write meaningful tests that enable quick refactoring
  • Learn the difference between unit and integration tests
  • Use advanced tips and tricks to get the most out of your tests
  • Improve code quality
  • Refactor code without fear
  • Have a better coding experience
  • Be able to better help others

External Structure

This is what your project layout should look like with just one module in your package:

proj                  # 'proj' is the package
β”œβ”€β”€ __init__.py
β”œβ”€β”€ foo.py            # 'foo' (or proj.foo) is the only "real" module
└── test              # 'test' is a sub-package
    β”œβ”€β”€ __init__.py
    └── test_foo.py   # 'test_foo' is the only "test" module

Notes:

  1. There is an __init__.py in every directory. Don't forget it. It can be an empty file, but it needs to exist.

  2. proj itself is a directory that you will be storing somewhere. We'll pretend it's in /home/user.

  3. The test directory needs to start with test.

  4. The test modules need to start with test.

When your project starts adding code in sub-packages, you will need to make a choice on where you put their tests. I prefer to create a test subdirectory in each sub-package.

proj
β”œβ”€β”€ __init__.py
β”œβ”€β”€ foo.py
β”œβ”€β”€ subpkg
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ bar.py
β”‚   └── test              # test subdirectory in every sub-package
β”‚       β”œβ”€β”€ __init__.py
β”‚       └── test_bar.py
└── test
    β”œβ”€β”€ __init__.py
    └── test_foo.py

The other option is to start mirroring your subpackage layout from within a single test directory.

proj
β”œβ”€β”€ __init__.py
β”œβ”€β”€ foo.py
β”œβ”€β”€ subpkg
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── bar.py
└── test
    β”œβ”€β”€ __init__.py
    β”œβ”€β”€ subpkg            # mirror sub-package layout inside test dir
    β”‚   β”œβ”€β”€ __init__.py
    β”‚   └── test_bar.py
    └── test_foo.py

Skeleton of Test Module

Assume foo.py contains the following contents:

def answer():
    return 42

class School():

    def food(self):
        return 'awful'

    def age(self):
        return 300

Here's a possible version of test_foo.py you could have.

# Import stuff you need for the unit tests themselves to work
import unittest

# Import stuff that you want to test.  Don't import extra stuff if you don't
# have to.
from proj.foo import answer, School

# If you need the whole module, you can do this:
#     from proj import foo
#
# Here's another reasonable way to import the whole module:
#     import proj.foo as foo
#
# In either case, you would obviously need to access objects like this:
#     foo.answer()
#     foo.School()

# Then write your tests

class TestAnswer(unittest.TestCase):

    def test_type(self):
        "answer() returns an integer"
        self.assertEqual(type(answer()), int)

    def test_expected(self):
        "answer() returns 42"
        self.assertEqual(answer(), 42)

class TestSchool(unittest.TestCase):

    def test_food(self):
        school = School()
        self.assertEqual(school.food(), 'awful')

    def test_age(self):
        school = School()
        self.assertEqual(school.age(), 300)

Notes:

  1. Your test class must subclass unittest.TestCase. Technically, neither unittest nor Green care what the test class is named, but to be consistent with the naming requirements for directories, modules, and methods we suggest you start your test class with Test.

  2. Start all your test method names with test.

  3. What a test class and/or its methods actually test is entirely up to you. In some sense it is an art form. Just use the test classes to group a bunch of methods that seem logical to go together. We suggest you try to test one thing with each method.

  4. The methods of TestAnswer have docstrings, while the methods on TestSchool do not. For more verbose output modes, green will use the method docstring to describe the test if it is present, and the name of the method if it is not. Notice the difference in the output below.

DocTests

Green can also run tests embedded in documentation via Python's built-in doctest module. Returning to our previous example, we could add docstrings with example code to our foo.py module:

def answer():
    """
    >>> answer()
    42
    """
    return 42

class School():

    def food(self):
        """
        >>> s = School()
        >>> s.food()
        'awful'
        """
        return 'awful'

    def age(self):
        return 300

Then in some test module you need to add a doctest_modules = [ ... ] list to the top level of the test module. So let's revisit test_foo.py and add that:

# we could add this to the top or bottom of the existing file...

doctest_modules = ['proj.foo']

Then running green -vv might include this output:

  DocTests via `doctest_modules = [...]`
.   proj.foo.School.food
.   proj.foo.answer

...or with one more level of verbosity (green -vvv)

  DocTests via `doctest_modules = [...]`
.   proj.foo.School.food -> /Users/cleancut/proj/green/example/proj/foo.py:10
.   proj.foo.answer -> /Users/cleancut/proj/green/example/proj/foo.py:1

Notes:

  1. There needs to be at least one unittest.TestCase subclass with a test method present in the test module for doctest_modules to be examined.
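
Putting the pieces together, a minimal sketch of a test module that satisfies this requirement (reusing proj.foo from the examples above) could look like this:

import unittest

from proj.foo import answer

doctest_modules = ['proj.foo']  # tells green to also collect the DocTests in proj.foo

class TestAnswer(unittest.TestCase):

    def test_expected(self):
        "answer() returns 42"
        self.assertEqual(answer(), 42)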

Running Green

To run the unit tests, we would change to the parent directory of the project (/home/user in this example) and then run green proj.

In a real terminal, this output is syntax highlighted

$ green proj
....

Ran 4 tests in 0.125s using 8 processes

OK (passes=4)

Okay, so that's the classic short-form output for unit tests. Green really shines when you start getting more verbose:

In a real terminal, this output is syntax highlighted

$ green -vvv proj
Green 3.0.0, Coverage 4.5.2, Python 3.7.4

test_foo
  TestAnswer
.   answer() returns 42
.   answer() returns an integer
  TestSchool
.   test_age
.   test_food

Ran 4 tests in 0.123s using 8 processes

OK (passes=4)

Notes:

  1. Green outputs clean, hierarchical output.

  2. Test status is aligned on the left (the four periods correspond to four passing tests)

  3. Method names are replaced with docstrings when present. The first two tests have docstrings you can see.

  4. Green always outputs a summary of statuses that will add up to the total number of tests that were run. For some reason, many test runners forget about statuses other than Error and Fail, and even the built-in unittest runner forgets about passing ones.

  5. Possible values for test status (these match the unittest short status characters exactly)

  • . Pass
  • F Failure
  • E Error
  • s Skipped
  • x Expected Failure
  • u Unexpected pass

Origin Story

Green grew out of a desire to see pretty colors. Really! A big part of the whole Red/Green/Refactor process in test-driven development is actually getting to see red and green output. Most python unit testing actually goes Gray/Gray/Refactor (at least on my terminal, which is gray text on black background). That's a shame. Even TV is in color these days. Why not terminal output? Even worse, the default output for most test runners is cluttered, hard-to-read, redundant, and the dang status indicators are not lined up in a vertical column! Green fixes all that.

But how did Green come to be? Why not just use one of the existing test runners out there? It's an interesting story, actually. And it starts with trial.

trial

I really like Twisted's trial test runner, though I don't really have any need for the rest of the Twisted event-driven networking engine library. I started professionally developing in Python when version 2.3 was the latest, greatest version and none of us in my small shop had ever even heard of unit testing (gasp!). As we grew, we matured, started testing, and chose trial to do the test running. If most of my projects at my day job hadn't moved to Python 3, I probably would have just stuck with trial, but at the time I wrote green, trial didn't run on Python 3 (but since 15.4.0 it does). Trial was and is the foundation for my inspiration for having better-than-unittest output in the first place. It is a great example of reducing redundancy (report module/class once, not on every line), lining up status vertically, and using color. I feel like Green trumped trial in two important ways: 1) It wasn't a part of an immense event-driven networking engine, and 2) it was not stuck in Python 2 as trial was at the time. Green will obviously never replace trial, as trial has features necessary to run asynchronous unit tests on Twisted code. After discovering that I couldn't run trial under Python 3, I next tried...

nose

I had really high hopes for nose. It seemed to be widely accepted. It seemed to be powerful. The output was just horrible (exactly the same as unittest's output). But it had a plugin system! I tried all the plugins I could find that mentioned improving upon the output. When I couldn't find one I liked, I started developing Green (yes, this Green) as a plugin for nose. I chose the name Green for three reasons: 1) It was available on PyPI! 2) I like to focus on the positive aspect of testing (everything passes!), and 3) It made a nice counterpoint to several nose plugins that had "Red" in the name. I made steady progress on my plugin until I hit a serious problem in the nose plugin API. That's when I discovered that nose is in maintenance mode -- abandoned by the original developers, handed off to someone who won't fix anything if it changes the existing behavior. What a downer. Despite the huge user base, I already consider nose dead and gone. A project which will not change (even to fix bugs!) will die. Even the maintainer keeps pointing everyone to...

nose2

So I pivoted to nose2! I started over developing Green (same repo -- it's in the history). I can understand the allure of a fresh rewrite as much as the other guy. Nose had made less-than-ideal design decisions, and this time they would be done right! Hopefully. I had started reading nose code while writing the plugin for it, and so I dived deep into nose2. And ran into a mess. Nose2 is alpha. That by itself is not necessarily a problem, if the devs will release early and often and work to fix things you run into. I submitted a 3-line pull request to fix some problems where the behavior did not conform to the already-written documentation, which broke my plugin. The pull request wasn't initially accepted because I (ironically) didn't write unit tests for it. This got me thinking "I can write a better test runner than this". I got tired of the friction of dealing with nose/nose2 and decided to see what it would take to write my own test runner. That brought me to...

unittest

I finally went and started reading unittest (Python 2.7 and 3.4) source code. unittest is its own special kind of mess, but it's universally built-in, and most importantly, subclassing or replacing unittest objects to customize the output looked a lot easier than writing a plugin for nose and nose2. And it was, for the output portion! Writing the rest of the test runner turned out to be quite a project, though. I started over on Green again, starting down the road to what we have now. A custom runner that subclasses or replaces bits of unittest to provide exactly the output (and other feature creep) that I wanted.

I had three initial goals for Green:

  1. Colorful, clean output (at least as good as trial's)
  2. Run on Python 3
  3. Try to avoid making it a huge bundle of tightly-coupled, hard-to-read code.

I contend that I nailed 1. and 2., and ended up implementing a bunch of other useful features as well (like very high performance via running tests in parallel in multiple processes). Whether I succeeded with 3. is debatable. I continue to try to refactor and simplify, but adding features on top of a complicated bunch of built-in code doesn't lend itself to the flexibility needed for clear refactors.

Wait! What about the other test runners?

  • pytest -- Somehow I never realized pytest existed until a few weeks before I released Green 1.0. Nowadays it seems to be pretty popular. If I had discovered it earlier, maybe I wouldn't have made Green! Hey, don't give me that look! I'm not omniscient!

  • tox -- I think I first ran across tox only a few weeks before I heard of pytest. Its homepage didn't mention anything about color, so I didn't try using it.

  • the ones I missed -- Er, haven't heard of them yet either.

I'd love to hear your feedback regarding Green. Like it? Hate it? Have some awesome suggestions? Whatever the case, go open a discussion.

Comments
  • Implement GreenTestLoader and load_test protocol


    With regard to #87 and #88, I implemented the load_test protocol. To do so, I actually moved functions from green.loader to a new class green.loader.GreenTestLoader, inheriting from unittest.TestLoader.

    This way, you can actually use the built-in unittest.TestLoader.loadTestsFromName, which can run load_tests if needed.

    Changes are backwards-incompatible (although some compatibility patches can be made), as I registered all functions of green.loader as methods of green.loader.GreenTestLoader.

    I also removed green.loader.loadFromModule as it was duplicating the unittest.TestLoader.loadTestsFromModule code.

    I added a test containing a TestCase that is expected to fail, but the load_tests function in it will monkey patch the TestCase and make the test succeed.

    opened by althonos 28
  • Python 3.9.6 threading compatibility


    We are encountering a high rate of failure with some of our tests after upgrading to Python 3.9.6.

    > python3 -m green --run-coverage --cov-config-file my_lib.coveragerc --junit-report my_lib-pytests.xml --include-patterns 'my_lib/*' my_lib
    .............................Exception in thread Thread-1:
    Traceback (most recent call last):
      File ".pyenv/versions/3.9.6/lib/python3.9/threading.py", line 973, in _bootstrap_inner
        self.run()
      File ".pyenv/versions/3.9.6/lib/python3.9/threading.py", line 910, in run
        self._target(*self._args, **self._kwargs)
      File ".pyenv/versions/3.9.6/lib/python3.9/multiprocessing/pool.py", line 513, in _handle_workers
        cls._maintain_pool(ctx, Process, processes, pool, inqueue,
      File ".pyenv/versions/3.9.6/lib/python3.9/multiprocessing/pool.py", line 337, in _maintain_pool
        Pool._repopulate_pool_static(ctx, Process, processes, pool,
      File ".pyenv/versions/3.9.6/lib/python3.9/multiprocessing/pool.py", line 319, in _repopulate_pool_static
        w = Process(ctx, target=worker,
      File ".pyenv/versions/3.9.6/lib/python3.9/multiprocessing/process.py", line 82, in __init__
        assert group is None, 'group argument must be None for now'
    AssertionError: group argument must be None for now
    ................................................................................................................................................................................................................................................................................................................................................................................................................................................Sending interrupt signal to process
    Killing processes
    kill finished with exit code 0
    

    This is failing most of the time under Docker, not so much under macOS with the same versions of green (3.2.6) and Python (3.9.6).

    I'm not sure if the issue is with green itself but the stack trace seems to come from green.

    bug 
    opened by sodul 14
  • Implement setup.py runner


    PR for #158.

    Allows running green with python setup.py green by declaring a distutils command in green.command.

    The list of arguments is the same as the arguments of the plain green CLI, as it is dynamically generated using the StoreOpt results.

    I had to change the behaviour of green.main a bit, to make it possible to pass it a custom argv. The command only validates the input arguments, and then passes everything to green.main.

    Since distutils commands cannot have positional arguments, I made it possible to specify a test target to use using the test_suite argument in the setup.py.

    opened by althonos 14
  • Seems like Green runs the same tests multiple times


    I have a project that I've been fiddling with. I was trying to decide between the following two project structures:

    Structure A (the original structure):

    + ProjectFolder
      + dist
      + doc
      + main_package
        - __init__.py
        - module1.py
        - module2.py
        + tests
          - __init__.py
          - test_module1.py
          - test_module2.py
      - .travis.yml
      - other stuff like dev_requirements.txt, appveyor.yml, setup.py, etc.
    

    Structure B (what I tried moving to):

    + ProjectFolder
      + dist
      + doc
      + main_package
        - __init__.py
        - module1.py
        - module2.py
      + tests             <-- note that this is *not* under the package directory
        - __init__.py
        - test_module1.py
        - test_module2.py
      - .travis.yml
      - other stuff like dev_requirements.txt, appveyor.yml, setup.py, etc.
    

    Before making the change from A to B, nosetests and green worked just fine. My project currently has 23 tests.

    After switching to structure B, nosetests worked just fine, while green ran 46 tests - exactly 2x the number of tests there are. Running green in verbose mode showed that it was still running tests from ProjectFolder\main_package\tests in addition to the tests found in ProjectFolder\tests

    So I decided to switch back to structure A. Again, nosetests ran 23 tests correctly, but this time green ran 69 tests - 3x! Running in verbose shows that green runs:

    1. tests from ProjectFolder\main_package\tests
    2. tests from ProjectFolder\tests
    3. tests from ProjectFolder\main_package\tests again

    Does green save anything? Is there a file I should delete? I've already tried deleting __pycache__ in all folders.

    I'm trying to recreate the issue with a simpler project, but have so far been unsuccessful. I'll continue to try and get exact steps to reproduce the issue.

    Edit 1

    Oh, I forgot relevant information:

    • Green version 1.9.4 (1.10 and above give me different errors and don't run)
    • Python 3.4.3, 64bit
    • Windows 7 Professional
    • Running python via WinPython 1.1
    opened by dougthor42 13
  • Attribute error in Django 2.0


    Hi, it seems that green doesn't work with Django 2.0.

    # python manage.py test
    
    Traceback (most recent call last):
      File "manage.py", line 22, in <module>
        execute_from_command_line(sys.argv)
      File "/home/vagrant/venv/lib/python3.6/site-packages/django/core/management/__init__.py", line 371, in execute_from_command_line
        utility.execute()
      File "/home/vagrant/venv/lib/python3.6/site-packages/django/core/management/__init__.py", line 365, in execute
        self.fetch_command(subcommand).run_from_argv(self.argv)
      File "/home/vagrant/venv/lib/python3.6/site-packages/django/core/management/commands/test.py", line 26, in run_from_argv
        super().run_from_argv(argv)
      File "/home/vagrant/venv/lib/python3.6/site-packages/django/core/management/base.py", line 288, in run_from_argv
        self.execute(*args, **cmd_options)
      File "/home/vagrant/venv/lib/python3.6/site-packages/django/core/management/base.py", line 335, in execute
        output = self.handle(*args, **options)
      File "/home/vagrant/venv/lib/python3.6/site-packages/django/core/management/commands/test.py", line 59, in handle
        failures = test_runner.run_tests(test_labels)
      File "/home/vagrant/venv/lib/python3.6/site-packages/green/djangorunner.py", line 124, in run_tests
        result = run(suite, stream, args)
      File "/home/vagrant/venv/lib/python3.6/site-packages/green/runner.py", line 115, in run
        result.addProtoTestResult(proto_test_result)
      File "/home/vagrant/venv/lib/python3.6/site-packages/green/result.py", line 346, in addProtoTestResult
        for test, err in proto_test_result.errors:
    AttributeError: 'NoneType' object has no attribute 'errors'
    
    bug 
    opened by dizballanze 12
  • Support unittest.subTest context manager (Python 3.4+)


    Summary

    Python 3.4 added the unittest.subTest() context manager. It would be wonderful if Green elevated the information from subTest() up to the top level.

    I think it's best displayed with an example.

    Example

    Code (taken directly from the subTest() docs):

    class NumbersTest(unittest.TestCase):
    
        def test_even(self):
            """
            Test that numbers between 0 and 5 are all even.
            """
            for i in range(0, 6):
                with self.subTest(i=i):
                    self.assertEqual(i % 2, 0)
    

    Green's output:

    [screenshot]

    Unittest's output:

    [screenshot]

    See how the standard unittest module will show you (i=1), (i=3), and (i=5) on the fail line?

    Mockup:

    Here's a mockup of how Green might look when utilizing the subTest feature:

    green project -vvv

    [screenshot]

    I don't think that the subTest items need to be displayed for lower verbosity levels. I also don't think that each failed iteration needs the Traceback (like is done with unittest) unless verbosity is set to the highest level.

    opened by dougthor42 12
  • Class name detection fails for metaclass


    green reports an error when running a test case which uses a metaclass, and also reports the wrong name for the test class (see __doc__ and green.result appearing below).

    $ green -vvv tests.tools_tests
    tests: max_retries reduced from 25 to 1
    Green 2.0.7, Coverage 3.7, Python 2.7.5
    
    tests.tools_tests
      ContextManagerWrapperTestCase
    .   Check that the wrapper permits exceptions.
    .   Create a test instance and verify the wrapper redirects.
    .....
      __doc__
    .   Test getargspec on args.
    .   Test getargspec on kwargs.
    .   Test getargspec on varargs.
    .   Test getargspec on varkwargs.
    .   Test getargspec on vars.
    .....
    green.result
      ProtoTestResult
    .   Test getargspec on kwargs.
    .   Test getargspec on varkwargs.
        errors[],Traceback (most recent call last):
      File "/usr/bin/green", line 9, in <module>
        load_entry_point('green==2.0.7', 'console_scripts', 'green')()
      File "/usr/lib/python2.7/site-packages/green/cmdline.py", line 75, in main
        result = run(test_suite, stream, args, testing)
      File "/usr/lib/python2.7/site-packages/green/runner.py", line 118, in run
        result.addProtoTestResult(proto_test_result)
      File "/usr/lib/python2.7/site-packages/green/result.py", line 337, in addProtoTestResult
        for test, err in proto_test_result.errors:
    AttributeError: 'NoneType' object has no attribute 'errors'
    <type 'exceptions.AttributeError'>
    CRITICAL: Closing network session.
    

    The top level entry __doc__ in the above is a metaclassed test class, and it is failing to get the class name/docstring correct, but otherwise works OK: https://github.com/wikimedia/pywikibot-core/blob/master/tests/tools_tests.py#L521

    Then the failure occurs on the 'next' test class, which inherits from the metaclass

    https://github.com/wikimedia/pywikibot-core/blob/master/tests/tools_tests.py#L555

    Returning to the __doc__ entry, I can fix that by sorting the names of the methods being created in __new__, e.g.

    diff --git a/tests/tools_tests.py b/tests/tools_tests.py
    index c58e33f..bdccb2f 100644
    --- a/tests/tools_tests.py
    +++ b/tests/tools_tests.py
    @@ -506,7 +506,8 @@ class MetaTestArgSpec(MetaTestCaseClass):
                     self.assertNoDeprecation()
                 return test_method
    
    -        for name, tested_method in list(dct.items()):
    +        for name in sorted(list(dct.keys())):
    +            tested_method = dct[name]
                 if name.startswith('_method_test_'):
                     suffix = name[len('_method_test_'):]
                     cls.add_method(dct, 'test_method_' + suffix,
    

    causes the green output to be

    ...
      getargspec
    .   Test getargspec on args.
    .   Test getargspec on kwargs.
    .   Test getargspec on varargs.
    .   Test getargspec on varkwargs.
    .   Test getargspec on vars.
    ...
    
    invalid 
    opened by jayvdb 12
  • coverage.py `pragma: no cover` not working


    coverage.py has a means of excluding lines from coverage: http://coverage.readthedocs.io/en/coverage-4.0.3/excluding.html

    I'm uncertain how coverage is being called within green, but these comments are not causing the line to be ignored.
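
    For reference, the exclusion mechanism in question is a comment on the line (or block) to be skipped. A minimal sketch with hypothetical code, not taken from green:

    DEBUG = False

    def log_debug(message):
        # the comment below should exclude this whole branch from coverage
        if DEBUG:  # pragma: no cover
            print(message)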

    opened by lsh-0 11
  • uncaught exception from testcase - exit without traceback


    Python 2.7.10 & 3.4.3

    I have a failing test (in fact it is expected at the moment, lacking some implementation), yet I get no traceback, nor do any remaining tests run - clean exit after printing a red E, which is quite embarrassing.

    $ green lib
    ........................................E$
    

    In fact I get the same clean exit from plain unittest

    $ python -m unittest discover
    ........................................E$
    

    Note: even a final newline was not printed!

    unittest2 shows a relevant traceback and stops afterwards (no new tests are run).

    Actually I am using testtools which depends on unittest2, so it works with that as well:

    $ python -m testtools.run discover
    Tests running...
    ======================================================================
    ERROR: lib.tool.test_...
    ----------------------------------------------------------------------
    Traceback (most recent call last):
    ...
      File "/usr/lib64/python3.4/argparse.py", line 1728, in parse_args
        args, argv = self.parse_known_args(args, namespace)
      File "/usr/lib64/python3.4/argparse.py", line 1767, in parse_known_args
        self.error(str(err))
      File "/usr/lib64/python3.4/argparse.py", line 2386, in error
        self.exit(2, _('%(prog)s: error: %(message)s\n') % args)
      File "/usr/lib64/python3.4/argparse.py", line 2373, in exit
        _sys.exit(status)
    SystemExit: 2
    

    nose runs properly, even capturing the uncaught second exception in unittest2 and continuing.

    Unfortunately I could not create a minimal reproduction yet (my attempts so far mysteriously worked), but I could pinpoint the place where it could be fixed.

    Changing this line makes things work (though, this might not be the proper fix):

    https://github.com/CleanCut/green/blob/2a3b302b8669be298bbf7e7c880445a7bdbcec1a/green/suite.py#L111

                try:
                    test(result)
                except:
                    pass
    
    opened by e3krisztian 11
  • Unconditionally cleanup temp folder in poolRunner


    Hi Nathan, because of a conditional branch the temporary directories that a poolRunner call creates are not properly removed in Python 3. When running tests several times, the /tmp (or equivalent) directory can become cluttered very quickly. I removed the if branching so that the temporary folder is always removed.

    opened by althonos 10
  • Publicity - NEEDS YOUR HELP!


    Green should be listed next to py.test in public lists of Python test frameworks!

    If people have a hard time finding out about green, they won't use green! Life will be so much slower and less colorful for them. Don't let that happen!

    LET ME KNOW about other places that ought to link to green:

    opened by CleanCut 10
  • Confusing error when I have errors in source code


    I had a file with:

    import functools
    from typings import List
    
    ...
    

    Note that typings is an invalid import. It should be typing.

    Trying to run green I got:

    ❯ poetry run green extract/lib/test/test_rules.py
    E
    
    Error in .unittest.loader
    TypeError: Test loader returned an un-runnable object.  Is "unittest.loader" importable from your current location?  Maybe you forgot an __init__.py in your directory?  Unrunnable object looks like: None of type <class 'NoneType'> with dir ['__bool__', '__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']
    
    Ran 1 test in 0.024s using 8 processes
    
    FAILED (errors=1)
    

    The error doesn't give a hint of what the issue is.

    Then I tried pytest:

    ❯ poetry run pytest extract/lib/test/test_rules.py
    ...
    
    ========================================================================================================================================== ERRORS ==========================================================================================================================================
    _____________________________________________________________________________________________________________________ ERROR collecting extract/lib/test/test_rules.py ______________________________________________________________________________________________________________________
    ImportError while importing test module '/home/sebastian/Source/staxio/cost-spike-1/extract_py/extract/lib/test/test_rules.py'.
    Hint: make sure your test modules/packages have valid Python names.
    Traceback:
    /home/linuxbrew/.linuxbrew/opt/[email protected]/lib/python3.9/importlib/__init__.py:127: in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
    extract/lib/test/test_rules.py:1: in <module>
        import extract.lib.rules as rules
    extract/lib/rules.py:2: in <module>
        from typings import List
    E   ModuleNotFoundError: No module named 'typings'
    ================================================================================================================================= short test summary info ==================================================================================================================================
    ERROR extract/lib/test/test_rules.py
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    ===================================================================================================================================== 1 error in 0.07s =====================================================================================================================================
    

    Note ModuleNotFoundError: No module named 'typings'

    Would be great if green could show the error.

    Thanks

    bug help wanted 
    opened by sporto 4
  • SyntaxError not caught when test is named explicitly, but unittest catches it


    My current environment: Green 3.4.0, Coverage 6.2, Python 3.9.4 on Windows 10

    I run a test by naming it explicitly:

    C:\Users\buhtzch\tab-cloud\_transfer\greenbug>py -3 -m green tests.test_my
    
    Ran 0 tests in 0.316s using 8 processes
    
    No Tests Found
    

    There are two problems with that output:

    1. The test is not found. (Ran 0 tests)
    2. A SyntaxError is not reported.

    Unittest itself shows this output:

    C:\Users\buhtzch\tab-cloud\_transfer\greenbug>py -3 -m unittest tests.test_my
    Traceback (most recent call last):
      File "C:\IUK\Python\lib\runpy.py", line 197, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "C:\IUK\Python\lib\runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "C:\IUK\Python\lib\unittest\__main__.py", line 18, in <module>
        main(module=None)
      File "C:\IUK\Python\lib\unittest\main.py", line 100, in __init__
        self.parseArgs(argv)
      File "C:\IUK\Python\lib\unittest\main.py", line 147, in parseArgs
        self.createTests()
      File "C:\IUK\Python\lib\unittest\main.py", line 158, in createTests
        self.test = self.testLoader.loadTestsFromNames(self.testNames,
      File "C:\IUK\Python\lib\unittest\loader.py", line 220, in loadTestsFromNames
        suites = [self.loadTestsFromName(name, module) for name in names]
      File "C:\IUK\Python\lib\unittest\loader.py", line 220, in <listcomp>
        suites = [self.loadTestsFromName(name, module) for name in names]
      File "C:\IUK\Python\lib\unittest\loader.py", line 154, in loadTestsFromName
        module = __import__(module_name)
      File "C:\Users\buhtzch\tab-cloud\_transfer\greenbug\tests\test_my.py", line 9
        X X
          ^
    SyntaxError: invalid syntax
    

    This is the MWE. The file is in a folder named tests and there is also an empty __init__.py in the same folder.

    import unittest
    
    class TestMY(unittest.TestCase):
        """
        """
        def test_valid_scores(self):
            """All items with valid values."""
            #self.assertTrue(True)
            X X
    
    

    The last line should cause a syntax error.

    When you fix the syntax in the MWE, the test is found by green. So I think the uncaught SyntaxError is also causing the missing test.

    opened by buhtz 3
  • Feature Request: unittest timeout


    We strive to make our unit tests very fast through proper targeting and mocking. Unfortunately once in a while a new "unittest" will take over one minute. We would like to capture that as part of our CI pipelines. We can probably parse the test results XML, but I thought this could be a nice feature to have in green.

    The timeout would not be for test suites but for individual tests.

    enhancement 
    opened by sodul 1
  • Coverage error with green's DjangoRunner


    When using coverage with green in Django, all import statements, class heads and function heads (presumably etc.) are marked as not covered. I have no idea about the inner workings of the TestRunner, Django, Green and Coverage, but could it maybe be that coverage is "started" too late, and doesn't catch the initial import of these files? (As all those things are lines that are evaluated when the file is first read by Python.)

    It does work perfectly fine when using Django's standard test runner.

    For example, here is the test.py:

    from django.test import TestCase
    
    import main.models
    
    
    class MainTests(TestCase):
        def test_setting_and_retrieving_setting_works(self):
            """Setting and retrieving a setting work as expected.
            """
            setting_name = "some setting"
            setting_value = "a value"
            main.models.Settings.set_setting(setting_name, setting_value)
            self.assertEqual(
                setting_value, main.models.Settings.get_setting(setting_name)
            )
    
        def test_trying_to_get_an_unset_setting_returns_none(self):
            """When trying to get an unknown setting, None is returned.
            """
            self.assertIsNone(main.models.Settings.get_setting("Something"))
    
        def test_trying_to_get_an_unset_setting_returns_default(self):
            """When trying to get an unknown setting, None is returned.
            """
            self.assertEqual(
                "Default", main.models.Settings.get_setting("Something", "Default")
            )
    

    Here is the main/models.py:

    from django.db import models
    from django.db.utils import OperationalError
    
    
    class Settings(models.Model):
        """
        Stores settings as simple key/value pairs. These can then be used elsewhere in the app.
        """
    
        name = models.CharField(max_length=200, primary_key=True)
        value = models.CharField(max_length=200)
    
        @classmethod
        def get_setting(cls, name: str, default: str = None) -> str:
            """Retrieves a setting's value, or None if it doesn't exist."""
            try:
                return cls.objects.get(pk=name).value
            except (cls.DoesNotExist, OperationalError):
                return default
    
        @classmethod
        def set_setting(cls, name: str, value: str) -> None:
            """Sets the specified setting to the specified value, creates it if it doesn't exist."""
            cls.objects.update_or_create(name=name, defaults={"value": value})
    

    For completeness sake, here is the .green file:

    verbose         = 3
    run-coverage    = True
    cov-config-file = .coveragerc
    processes = 1
    

    And .coveragerc:

    [run]
    branch = True
    
    [xml]
    output = coverage.xml
    

    And coverage's xml output:

    <?xml version="1.0" ?>
    <coverage version="5.2" timestamp="1594479177454" lines-valid="14" lines-covered="5" line-rate="0.3571" branches-valid="0" branches-covered="0" branch-rate="1" complexity="0">
    	<!-- Generated by coverage.py: https://coverage.readthedocs.io -->
    	<!-- Based on https://raw.githubusercontent.com/cobertura/web/master/htdocs/xml/coverage-04.dtd -->
    	<sources>
    		<source>/home/flix/PythonProjects/imagetagger</source>
    	</sources>
    	<packages>
    		<package name="main" line-rate="0.3571" branch-rate="1" complexity="0">
    			<classes>
    				<class name="models.py" filename="main/models.py" complexity="0" line-rate="0.3571" branch-rate="1">
    					<methods/>
    					<lines>
    						<line number="1" hits="0"/>
    						<line number="2" hits="0"/>
    						<line number="5" hits="0"/>
    						<line number="10" hits="0"/>
    						<line number="11" hits="0"/>
    						<line number="13" hits="0"/>
    						<line number="14" hits="0"/>
    						<line number="16" hits="1"/>
    						<line number="17" hits="1"/>
    						<line number="18" hits="1"/>
    						<line number="19" hits="1"/>
    						<line number="21" hits="0"/>
    						<line number="22" hits="0"/>
    						<line number="24" hits="1"/>
    					</lines>
    				</class>
    			</classes>
    		</package>
    	</packages>
    </coverage>
    
    
    bug help wanted 
    opened by MxFlix 10