The successor to nose, based on unittest2

Overview

Welcome to nose2

nose2 is the successor to nose.

It's unittest with plugins.

nose2 is a new project and does not support all of the features of nose. See differences for a thorough rundown.

nose2's purpose is to extend unittest to make testing nicer and easier to understand.

nose2 vs pytest

nose2 may or may not be a good fit for your project.

If you are new to Python testing, we encourage you to also consider pytest, a popular testing framework.

Quickstart

Because nose2 is based on unittest, you can start from the Python Standard Library's documentation for unittest and then use nose2 to add value on top of that.

nose2 looks for tests in Python files whose names start with test and runs every test it discovers.

Here's an example of a simple test, written in typical unittest style:

# in test_simple.py
import unittest

class TestStrings(unittest.TestCase):
    def test_upper(self):
        self.assertEqual("spam".upper(), "SPAM")

You can then run this test like so:

$ nose2 -v
test_upper (test_simple.TestStrings) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

However, nose2 supports more testing configuration and provides more tools than unittest on its own.
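
Much of that configuration can live in a nose2.cfg file at the root of your project. Here is a minimal sketch (the values are illustrative; loading the multiprocessing plugin is just an example, and the configuration docs list the full set of options):

# in nose2.cfg
[unittest]
start-dir = tests
plugins = nose2.plugins.mp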

For example, this test exercises just a few of nose2's features:

# in test_fancy.py
from nose2.tools import params

@params("Sir Bedevere", "Miss Islington", "Duck")
def test_is_knight(value):
    assert value.startswith('Sir')

and then run this like so:

$ nose2 -v --pretty-assert
test_fancy.test_is_knight:1
'Sir Bedevere' ... ok
test_fancy.test_is_knight:2
'Miss Islington' ... FAIL
test_fancy.test_is_knight:3
'Duck' ... FAIL

======================================================================
FAIL: test_fancy.test_is_knight:2
'Miss Islington'
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/ebs/home/sirosen/tmp/test_fancy.py", line 6, in test_is_knight
    assert value.startswith('Sir')
AssertionError

>>> assert value.startswith('Sir')

values:
    value = 'Miss Islington'
    value.startswith = <built-in method startswith of str object at 0x7f3c3172f430>
======================================================================
FAIL: test_fancy.test_is_knight:3
'Duck'
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/ebs/home/sirosen/tmp/test_fancy.py", line 6, in test_is_knight
    assert value.startswith('Sir')
AssertionError

>>> assert value.startswith('Sir')

values:
    value = 'Duck'
    value.startswith = <built-in method startswith of str object at 0x7f3c3172d490>
----------------------------------------------------------------------
Ran 3 tests in 0.001s

FAILED (failures=2)

Full Docs

Full documentation for nose2 is available at docs.nose2.io.

Contributing

If you want to make contributions, please read the contributing guide.

Comments
  • tests fail

    11 tests fail for the latest release using Python 2.7 from Debian/unstable.

    Unless fixed, this means nose2 and everything that depends on nose2 will get removed from Debian.

    Example:

    ======================================================================
    FAIL: test_failure_to_read_empty_properties (nose2.tests.functional.test_junitxml_plugin.JunitXmlPluginFunctionalFailureTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "nose2/tests/functional/test_junitxml_plugin.py", line 167, in test_failure_to_read_empty_properties
        "'.*%s'" % filename_for_regex)
      File "nose2/tests/_common.py", line 64, in assertTestRunOutputMatches
        testf(util.safe_decode(cmd_stderr), stderr)
    AssertionError: Regexp didn't match: "Internal Error: runTests aborted: JUnitXML: could not decode file: '.*empty_properties/properties.json'" not found in u"test (test_junitxml_empty_properties.Test) ... ok\nInternal Error: runTests aborted: [Errno 2] JUnitXML: Properties file does not exist: '/home/brian/tree/debian/python-modules/nose2/nose2/tests/functional/support/scenario/junitxml/empty_properties/properties.json'\n"
    
    
    opened by brianmay 26
  • Is it possible to create a release of nose2 on PyPI with « "start-dir" configvar support in [unittest] section »?

    Is it possible to create a release of nose2 on PyPI with this patch: https://github.com/nose-devs/nose2/commit/ef23ff9b0155904253c40d566510693a52f929ca ?

    Best regards, Stephane

    opened by harobed 24
  • function level fixtures supported?

    i.e. as per nose v1:

    from nose.tools import with_setup

    def su():
        print('setting up')

    @with_setup(su)
    def test():
        assert True
    

    I don't see with_setup in nose2.tools, and I don't see it mentioned in the 'changes from nose1' doc either. The docs also state something like "we support the fixtures that unittest2 supports", which would appear to include test functions with setup and teardown (via unittest2.FunctionTestCase).
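
    For comparison, here is a minimal sketch of function-level setup using only the stdlib's unittest.FunctionTestCase together with the load_tests protocol (plain unittest, not a nose2-specific API; the names below are illustrative):

    # test_functions.py -- stdlib-only sketch, no nose2-specific API
    import unittest

    def setup():
        print("setting up")

    def check_something():
        assert True

    def load_tests(loader, standard_tests, pattern):
        # FunctionTestCase wraps a plain function, with optional setUp/tearDown
        suite = unittest.TestSuite()
        suite.addTest(unittest.FunctionTestCase(check_something, setUp=setup))
        return suite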

    docs accepted 
    opened by sanga 24
  • Use specific version of python for requirements

    This aims to fix #344

    For now, I'm not familiar with Jython and PyPy, so I don't know how to handle them.

    Moreover, I'm not sure that using sys.version instead of sys.version_info is the right thing to do.
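
    For what it's worth, sys.version is a free-form string while sys.version_info is a comparable tuple, so tuple comparison is generally the safer check. A small illustration (the package name is a placeholder, not part of this PR):

    import sys

    # string comparison can mislead: "3.10" < "3.9" is True lexically,
    # whereas (3, 10) < (3, 9) is False
    install_requires = []
    if sys.version_info < (3, 0):
        # would be passed to setup(install_requires=...)
        install_requires.append("some-py2-only-backport")  # placeholder name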

    opened by artragis 22
  • Add py33 to tox.ini

    py33 was added in the most recent version of tox (see http://tox.testrun.org/latest/changelog.html)

    tox output:

    ~/dev/git-repos/nose2$ tox
    ...
      py26: commands succeeded
      py27: commands succeeded
      py32: commands succeeded
      py33: commands succeeded
      pypy: commands succeeded
      docs: commands succeeded
      self26: commands succeeded
      cov26: commands succeeded
      congratulations :)
    
    ~/dev/git-repos/nose2$ pip freeze | grep tox
    Warning: cannot find svn location for INITools==0.3.1dev-r0
    detox==0.9
    -e git+https://github.com/msabramo/piptox.git@adef3b7826a42b89d74868f56c46e4a43bfd2fc6#egg=piptox-dev
    tox==1.4
    

    Passing Travis build: http://travis-ci.org/#!/msabramo/nose2/builds/1635897

    opened by msabramo 16
  • Suggested contributing guide

    Basic contributing guide with sensible ground-rules.

    Sorry if this is being too forward, but I work on some projects with fairly sane and short contributing guides. This document is a modified version of one of them.

    A personal kibitz in here which I feel pretty strongly about: "No GitHub emoji". For an example of the inanity that can strike when this isn't policy, look no further than the Atom Editor project.

    Resolves #333

    opened by sirosen 15
  • Replace cov-core with coverage package

    cov-core has been unmaintained for 3 years, but the coverage project is getting active work as of 2017. Try using coverage instead of cov-core. (This seems like a better solution than pytest-cov, mentioned in #336, which is a pytest plugin). Closes #336

    Notes:

    • coverage supports all of the reporting modes of cov-core, but how do we verify that they all work the same in all cases...?
    • Still allows "term-missing" to get "show_missing" behavior, to be backwards compatible, but otherwise will rely on coveragerc specification of "show_missing"
    • coverage matches the plugin module name, so for py2 to work it needs a future import of absolute_import
    • coverage doesn't accept reporting modes as an argument, but requires explicit calls to its reporting methods, so we imitate the same dispatch we had under cov-core (see the sketch after this list)
    • cov-core was running between plugin initialization and createdTestSuite getting called. It's not obvious from docs whether or not coverage supports multiple start/stop calls before reporting, so I've removed this. Was having coverage measurement during this period important?
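
    The dispatch in question looks roughly like this (a sketch against coverage's public API, not the exact plugin code; the mode names mirror cov-core's):

    import coverage

    cov = coverage.Coverage()
    cov.start()
    # ... the test run happens here ...
    cov.stop()
    cov.save()

    # coverage exposes one method per report type, so we dispatch on the
    # configured mode names ourselves
    for mode in ("term-missing", "html", "xml"):
        if mode == "term-missing":
            cov.report(show_missing=True)
        elif mode == "html":
            cov.html_report()
        elif mode == "xml":
            cov.xml_report()
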
    opened by sirosen 15
  • Merge of pull request #187 broke test discovery

    Check out c94dfc59a0f7dccd269de67291c39e529d921f34 (the merge of pull request #187) or a newer master. Run tox -e py27. Tests fail. It seems alarming that CI doesn't catch this.

    λ tox -e py27
    GLOB sdist-make: D:\src\nose2.orig\setup.py
    py27 inst-nodeps: D:\src\nose2.orig\.tox\dist\nose2-0.4.7.zip
    py27 runtests: PYTHONHASHSEED='1999833526'
    py27 runtests: commands[0] | python -m unittest discover
    FFFFFFFFFFF.F.FFFFFFFFFF........FF..................FFF..EEEEEE..............................................................................................................
    
    opened by kurniliya 14
  • the multiprocess plugin fails to give a useful error for unimportable test modules

    So if a test module is not importable you wind up with the following output:

    test_not_found (nose2.loader.LoadTestsFailure) ... ERROR
    test_not_found (nose2.loader.LoadTestsFailure) ... ERROR
    
    ======================================================================
    ERROR: test_not_found (nose2.loader.LoadTestsFailure)
    ----------------------------------------------------------------------
    AttributeError: 'module' object has no attribute 'ModuleImportFailure'
    

    The reason for this, as far as I can tell, is that the mp plugin first imports the tests in the main process and then distributes those tests across the child processes by name, where they are imported again. The problem arises because, for a module which can't be imported, nose2 creates a test suite with a method that will just raise the import exception when run. Like so:

    unittest2.suite.TestSuite tests=[<nose2.loader.ModuleImportFailure testMethod=v2.test_token>]
    

    mp.MultiProcess._flatten ends up turning those into the following list of 'test names':

    flat: ['nose2.loader.ModuleImportFailure.v2.test_object_relations',
    'nose2.loader.ModuleImportFailure.v2.test_devices...
    

    and when the child process tries to import the name nose2.loader.ModuleImportFailure.v2.test_object_relations it fails as that test doesn't exist.

    And so the root cause of the import error is lost.

    So how might one fix this? In a way it doesn't make sense to farm out tests that can't be imported to the child nose2 processes, since we already know they fail, so we could filter them out before the _flatten step. My attempts to do that haven't succeeded so far, though.
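
    One untested way to do that filtering with only the stdlib unittest API (illustrative only, not nose2's actual mp code) would be something like:

    import unittest

    def split_import_failures(suite):
        # separate import-failure placeholder tests from real tests
        failures, real = [], []
        for test in suite:
            if isinstance(test, unittest.TestSuite):
                sub_failures, sub_real = split_import_failures(test)
                failures.extend(sub_failures)
                real.extend(sub_real)
            elif type(test).__name__ in ("ModuleImportFailure", "LoadTestsFailure"):
                failures.append(test)
            else:
                real.append(test)
        return failures, real

    # run the failures in the parent process so the import error is reported,
    # and only distribute the real tests to the child processes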

    It might be easier to just change the test name in that case (util.test_id does that, I think), removing nose2.loader.ModuleImportFailure so that the child process can actually find the test to import and will then hit the import error itself. Note, though, that test names must match between the main and child processes.

    bug 1.0.x accepted ready 
    opened by sanga 14
  • multiprocess plugin hanging during test execution

    We've encountered an issue lately that is quite confusing. Our CI server runs tests in parallel with 4 processes. However, it has started hanging every once in a while. Some digging has allowed us to create an example test suite that highlights this behavior:

    import unittest
    
    class ThreeTests(unittest.TestCase):
    
        def test_one(self):
            pass
    
        def test_two(self):
            pass
    
        def test_three(self):
            pass
    

    running the following will show nose2 hang:

    nose2 three_tests --plugin=nose2.plugins.mp -N 4
    

    The hypothesis is that, when running the tests in bulk (10,000+ tests across many suites), a test suite with only three tests sometimes fires at a time when no event is initially sent to the process?

    We've done some experimenting with where the test suite is flattened and distributed, but this seems to have consequences for overall test run times: https://github.com/nose-devs/nose2/blob/master/nose2/plugins/mp.py#L75

            # send one initial task to each process
            for proc, conn in procs:
                if not flat and self.procs_initialized is True:
                    break
                if not flat and self.procs_initialized is False:
                    conn.send(None)
                    continue
                caseid = flat.pop(0)
                conn.send(caseid)
            self.procs_initialized = True
    

    Anyone have any other ideas?

    opened by jredl-va 13
  • Module level coverage missing

    Say I have the coverage plugin installed, configured, and enabled and I have a simple test:

    from unittest import TestCase
    
    from my_module import returns_42
    
    class MyTest(TestCase):
        def test_it(self):
            self.assertEqual(returns_42(), 42)
    

    The test passes and all is good until I have a look at the coverage report. The module-level coverage is absent, but the function-level coverage is present:

    (screenshot: coverage report)

    According to this StackOverflow question, this behavior implies that coverage isn't being collected until after the test starts.

    A workaround would be to do something like this:

    from unittest import TestCase
    
    class MyTest(TestCase):
        def test_it(self):
            from my_module import returns_42
            self.assertEqual(returns_42(), 42)
    

    which gives me the coverage I expect:

    (screenshot: the coverage report I expect)

    However, that's not what I would consider an ideal solution. Is there an option or setting that I am missing that would fix this? Is there something else that I am missing?
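
    For context, a typical nose2 coverage setup looks roughly like the following (a sketch, assuming the [coverage] section keys described in the plugin docs; it does not by itself answer the import-timing question above):

    # in nose2.cfg
    [coverage]
    always-on = True
    coverage = my_module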

    opened by frenchtoast747 13
  • License terms for unittest2?

    The license file indicates that some of the code is derived from unittest2:

    https://github.com/nose-devs/nose2/blob/5375b1d5eee5fa59a1785098fedc4441d6cfcb55/license.txt#L30-L32

    The link http://docs.python.org/license.html does not make it clear what license terms are intended here, and I’m not sure that this is the right link anyway since unittest2 was never part of the standard library.

    The pypi package for unittest2 has a “BSD” license trove classifier, and its setup.py references http://www.voidspace.org.uk/python/license.shtml, a now-dead link that previously hosted “The Voidspace Open Source License” (Wayback Machine), which was a BSD-3-Clause license. I think that license text is what should apply here.

    If you believe these are the correct terms for unittest2 code in nose2, would you consider updating the license file with the actual license text?

    opened by musicinmybrain 1
  • Test fails due to missing mod.py

    I tested with the sdist from PyPI and tox failed with this error:

    py39 run-test: commands[2] | coverage report
    No source for code: '/tmp/tmpvb1ie5o6/mod.py'.
    Aborting report output, consider using -i.
    ERROR: InvocationError for command /usr/bin/coverage report (exited with code 1)
    

    To test, I ran this: tox --current-env --no-provision --recreate -e py39.

    opened by mtelka 0
  • sdist: PKG-INFO should list license.txt in License-File

    The sdist at PyPI contains this:

    $ grep -i license nose2-0.12.0/PKG-INFO
    Classifier: License :: OSI Approved :: BSD License
    License-File: AUTHORS
    $
    

    But the actual nose2 license is in the license.txt file. Please add license.txt to PKG-INFO.

    Thank you.

    opened by mtelka 0
  • Generator tests run on the wrong instance, causing setUp to be ineffective and other issues

    This issue is a cleaned-up version of #80, meant to capture the current context in nose2. It is an intentional duplicate, to help make it easier to dive straight into this work without getting tripped up on any of the older discussion.

    There are several issues with respect to generator tests running on the wrong instance. Here's a clean reproduction:

    $ cat test_gen.py
    import unittest
    
    
    class ExampleTest(unittest.TestCase):
        def setUp(self):
            print(f"setup on {id(self)}")
    
        def test_foo(self):
            def do_foo():
                print(f"run on {id(self)}")
            yield (do_foo,)
            yield (do_foo,)
            yield (do_foo,)
    
    $ nose2 -v
    test_foo:1
     (test_gen.ExampleTest) ... setup on 139890517293904
    run on 139890517293328
    ok
    test_foo:2
     (test_gen.ExampleTest) ... setup on 139890517294720
    run on 139890517293328
    ok
    test_foo:3
     (test_gen.ExampleTest) ... setup on 139890517294768
    run on 139890517293328
    ok
    
    ----------------------------------------------------------------------
    Ran 3 tests in 0.001s
    
    OK
    

    The setUp runs on a different instance each time, which is supposed to happen. But then the test invocations all happen on the same instance (and not one of the ones which got set up), which is not supposed to happen.


    I'm going to try to make time to work on this after moving to python3-only, but I've also labelled it as 'help wanted'. If anyone wants to help try to understand and rework the generator test code, please feel free.

    bug accepted help wanted 
    opened by sirosen 1
  • 0.10.0: sphinx warnings

    + /usr/bin/python3 setup.py build_sphinx -b man --build-dir build/sphinx
    running build_sphinx
    Running Sphinx v4.1.2
    making output directory... done
    loading intersphinx inventory from http://docs.python.org/objects.inv...
    intersphinx inventory has moved: http://docs.python.org/objects.inv -> https://docs.python.org/3/objects.inv
    building [mo]: targets for 0 po files that are out of date
    building [man]: all manpages
    updating environment: [new config] 51 added, 0 changed, 0 removed
    reading sources... [100%] usage
    /home/tkloczko/rpmbuild/BUILD/nose2-0.10.0/nose2/events.py:docstring of nose2.events.Event.version:1: WARNING: duplicate object description of nose2.events.Event.version, other instance in dev/event_reference, use :noindex: for one of them
    /usr/lib64/python3.8/unittest/suite.py:docstring of unittest.suite.TestSuite:1: WARNING: duplicate object description of nose2.loader.PluggableTestLoader.suiteClass, other instance in dev/loader, use :noindex: for one of them
    /home/tkloczko/rpmbuild/BUILD/nose2-0.10.0/nose2/loader.py:docstring of nose2.loader.PluggableTestLoader:1: WARNING: duplicate object description of nose2.main.PluggableTestProgram.loaderClass, other instance in dev/main, use :noindex: for one of them
    /home/tkloczko/rpmbuild/BUILD/nose2-0.10.0/nose2/runner.py:docstring of nose2.runner.PluggableTestRunner:1: WARNING: duplicate object description of nose2.main.PluggableTestProgram.runnerClass, other instance in dev/main, use :noindex: for one of them
    /home/tkloczko/rpmbuild/BUILD/nose2-0.10.0/nose2/session.py:docstring of nose2.session.Session:1: WARNING: duplicate object description of nose2.main.PluggableTestProgram.sessionClass, other instance in dev/main, use :noindex: for one of them
    /home/tkloczko/rpmbuild/BUILD/nose2-0.10.0/nose2/events.py:docstring of nose2.events.Hook:1: WARNING: duplicate object description of nose2.events.PluginInterface.hookClass, other instance in dev/plugin_class_reference, use :noindex: for one of them
    /home/tkloczko/rpmbuild/BUILD/nose2-0.10.0/nose2/result.py:docstring of nose2.result.PluggableTestResult:1: WARNING: duplicate object description of nose2.runner.PluggableTestRunner.resultClass, other instance in dev/runner, use :noindex: for one of them
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/buffer
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/collect
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/coverage
    /home/tkloczko/rpmbuild/BUILD/nose2-0.10.0/nose2/plugins/debugger.py:docstring of nose2.plugins.debugger.Debugger.pdb:1: WARNING: duplicate object description of nose2.plugins.debugger.Debugger.pdb, other instance in plugins/debugger, use :noindex: for one of them
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/debugger
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/discovery
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/doctests
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/eggdiscovery
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/functions
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/generators
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/junitxml
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/layers
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/loadtests
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/logcapture
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/mp
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/outcomes
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/parameters
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/prettyassert
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/printhooks
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/prof
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/result
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/testcases
    <autodoc>:1: WARNING: duplicate configvar description of always-on, other instance in plugins/testclasses
    looking for now-outdated files... none found
    pickling environment... done
    checking consistency... done
    writing... python-nose2.3 { getting_started usage configuration differences plugins plugins/discovery plugins/functions plugins/generators plugins/parameters plugins/testcases plugins/testclasses plugins/loadtests plugins/dundertests plugins/result plugins/buffer plugins/debugger plugins/failfast plugins/logcapture plugins/coverage plugins/prettyassert plugins/junitxml plugins/attrib plugins/mp plugins/layers plugins/doctests plugins/outcomes plugins/collect plugins/testid plugins/prof plugins/printhooks plugins/eggdiscovery tools decorators params such_dsl changelog dev/writing_plugins dev/documenting_plugins dev/event_reference dev/hook_reference dev/session_reference dev/plugin_class_reference dev/contributing dev/internals dev/main dev/exceptions dev/loader dev/result dev/runner dev/utils } /home/tkloczko/rpmbuild/BUILD/nose2-0.10.0/nose2/plugins/result.py:docstring of nose2.plugins.result:7: WARNING: unknown option: verbose
    /home/tkloczko/rpmbuild/BUILD/nose2-0.10.0/docs/plugins/mp.rst:24: WARNING: unknown option: --plugin
    /home/tkloczko/rpmbuild/BUILD/nose2-0.10.0/docs/plugins/eggdiscovery.rst:27: WARNING: unknown option: --plugin
    done
    build succeeded, 33 warnings.
    
    bug docs help wanted 
    opened by kloczek 1
  • Test that has child process logging to stdout not consistent when run with python -m unittest vs nose2

    I have a command-line tool that I'm trying to write an end-to-end test for, to make sure everything is working well. I wanted to invoke the CLI via a multiprocessing.Process, because when the CLI starts running it configures the root logger, and I want to test that, along with everything else, without creating inconsistencies in my test suite. Below is a simple example of the kind of test I'm interested in. You can see that child, the entrypoint of my CLI, configures logging and also makes a couple of print statements. The test itself verifies that both the print statement and the log message show up in stdout, which is redirected to a file in the child process.

    When I run this test with unittest, everything passes. When I run it with nose2, I get a failure. See below for details.

    tests/test_example.py

    import logging
    import sys
    import tempfile
    import unittest
    
    from pathlib import Path
    from multiprocessing import Process
    
    
    def child():
        logging.basicConfig(
            handlers=[logging.StreamHandler(sys.stdout)],
            level=logging.INFO,
            format="%(message)s",
            datefmt="%Y-%m-%dT%H:%M:%S%z",
        )
    
        print("MY PRINT STATEMENT")
        logging.info("MY LOG MESSAGE")
    
    
    def child_wrapper(target, stdout_file, stderr_file):
        sys.stdout = stdout_file.open("w")
        sys.stderr = stderr_file.open("w")
    
        try:
            target()
        finally:
            sys.stdout.flush()
            sys.stderr.flush()
    
    
    class TestLoggingInChildProc(unittest.TestCase):
        def test_logging_in_child_proc(self):
            with tempfile.TemporaryDirectory() as d:
                stdout_file = Path(f"{d}/stdout.txt")
                stderr_file = Path(f"{d}/stderr.txt")
    
                p = Process(
                    target=child_wrapper,
                    args=[child, stdout_file, stderr_file],
                )
                p.start()
                p.join()
                p.close()
    
                self.assertEqual(
                    "MY PRINT STATEMENT\nMY LOG MESSAGE",
                    stdout_file.read_text().strip()
                )
    
                self.assertEqual("", stderr_file.read_text().strip())
    

    When I run tree ., I see:

    .
    └── tests
        ├── __init__.py
        └── test_example.py
    
    1 directory, 2 files
    

    When I run python -m unittest tests.test_example the test passes, but when I run nose2 test_example the test fails. Here is the failure text:

    F
    ======================================================================
    FAIL: test_logging_in_child_proc (tests.test_example.TestLoggingInChildProc)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/tmp/tests/test_example.py", line 47, in test_logging_in_child_proc
        self.assertEqual(
    AssertionError: 'MY PRINT STATEMENT\nMY LOG MESSAGE' != 'MY PRINT STATEMENT'
    - MY PRINT STATEMENT
    ?                   -
    + MY PRINT STATEMENT- MY LOG MESSAGE
    
    ----------------------------------------------------------------------
    Ran 1 test in 0.004s
    
    FAILED (failures=1)
    

    When I invoke this test using python -m unittest test_example.py, the test passes. However, when I invoke it using nose2, it fails as shown above.

    bug help wanted 
    opened by mikeholler 1