BDD library for the py.test runner

Overview

pytest-bdd implements a subset of the Gherkin language to enable automated testing of project requirements and to facilitate behaviour-driven development.

Unlike many other BDD tools, it does not require a separate runner and benefits from the power and flexibility of pytest. It enables unifying unit and functional tests, reduces the burden of continuous integration server configuration and allows the reuse of test setups.

Pytest fixtures written for unit tests can be reused for setup and actions mentioned in feature steps with dependency injection. This allows a true BDD just-enough specification of the requirements without maintaining any context object containing the side effects of Gherkin imperative declarations.

Install pytest-bdd

pip install pytest-bdd

The minimum required version of pytest is 4.3.

Example

An example test for blog hosting software could look like this. Note that pytest-splinter is used to provide the browser fixture.

publish_article.feature:

Feature: Blog
    A site where you can publish your articles.

    Scenario: Publishing the article
        Given I'm an author user
        And I have an article

        When I go to the article page
        And I press the publish button

        Then I should not see the error message
        And the article should be published  # Note: will query the database

Note that only one feature is allowed per feature file.

test_publish_article.py:

from urllib.parse import urljoin

import pytest
from splinter.exceptions import ElementDoesNotExist
from pytest_bdd import scenario, given, when, then

@scenario('publish_article.feature', 'Publishing the article')
def test_publish():
    pass


@given("I'm an author user")
def author_user(auth, author):
    auth['user'] = author.user


@given("I have an article", target_fixture="article")
def article(author):
    return create_test_article(author=author)


@when("I go to the article page")
def go_to_article(article, browser):
    browser.visit(urljoin(browser.url, '/manage/articles/{0}/'.format(article.id)))


@when("I press the publish button")
def publish_article(browser):
    browser.find_by_css('button[name=publish]').first.click()


@then("I should not see the error message")
def no_error_message(browser):
    with pytest.raises(ElementDoesNotExist):
        browser.find_by_css('.message.error').first


@then("the article should be published")
def article_is_published(article):
    article.refresh()  # Refresh the object in the SQLAlchemy session
    assert article.is_published

Scenario decorator

The scenario decorator can accept the following optional keyword arguments:

  • encoding - decode content of feature file in specific encoding. UTF-8 is default.
  • example_converters - mapping to pass functions to convert example values provided in feature files.
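For example (a sketch; the feature file name and the converter mapping are illustrative, not part of pytest-bdd):

```python
from pytest_bdd import scenario

@scenario(
    'publish_article.feature',
    'Publishing the article',
    encoding='utf-8',
    example_converters=dict(author_id=int),
)
def test_publish():
    pass
```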

Functions decorated with the scenario decorator behave like normal test functions, and they are executed after all scenario steps. You can treat them as normal pytest test functions: request fixtures there, call other functions and make assertions:

from pytest_bdd import scenario, given, when, then

@scenario('publish_article.feature', 'Publishing the article')
def test_publish(browser, article):
    assert article.title in browser.html

Step aliases

Sometimes one has to declare the same fixtures or steps under different names for better readability. To use the same step function with multiple step names, simply decorate it multiple times:

@given("I have an article", target_fixture="article")
@given("there's an article", target_fixture="article")
def article(author):
    return create_test_article(author=author)

Note that the given step aliases are independent and will be executed when mentioned.

This is useful, for example, when a resource may or may not be associated with an owner: an admin user can't be the author of an article, but articles should still have a default author.

Feature: Resource owner
    Scenario: I'm the author
        Given I'm an author
        And I have an article


    Scenario: I'm the admin
        Given I'm the admin
        And there's an article

Step arguments

Often it's possible to reuse steps by giving them parameters. This allows a single implementation to serve multiple uses, so less code. It also opens the possibility to use the same step twice in a single scenario with different arguments. And there are several types of step parameter parsers at your disposal (an idea borrowed from the behave implementation):

string (the default)
This is the default and can be considered as a null or exact parser. It parses no parameters and matches the step name by equality of strings.
parse (based on: pypi_parse)
Provides a simple parser that replaces regular expressions for step parameters with a readable syntax like {param:Type}. The syntax is inspired by the Python builtin string.format() function. Step parameters must use the named fields syntax of pypi_parse in step definitions. The named fields are extracted, optionally type-converted, and then used as step function arguments. Supports type conversions via type converters passed in extra_types.
cfparse (extends: pypi_parse, based on: pypi_parse_type)
Provides an extended parser with "Cardinality Field" (CF) support. Automatically creates missing type converters for related cardinality as long as a type converter for cardinality=1 is provided. Supports parse expressions like: * {values:Type+} (cardinality=1..N, many) * {values:Type*} (cardinality=0..N, many0) * {value:Type?} (cardinality=0..1, optional) Supports type conversions (as above).
re
This uses full regular expressions to parse the clause text. You will need to use named groups "(?P<name>...)" to define the variables pulled from the text and passed to your step() function. Type conversion can only be done via converters step decorator argument (see example below).
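Mechanically, the re parser plus converters amounts to matching named groups and post-processing the captured strings. The following stdlib-only sketch illustrates the idea; parse_arguments and the converters mapping are illustrative names, not pytest-bdd internals:

```python
import re

# Named groups capture the step parameters as strings.
pattern = re.compile(r"there are (?P<start>\d+) cucumbers")
# Converters post-process the captured strings (here: to int).
converters = dict(start=int)

def parse_arguments(text):
    """Return converted step arguments, or None if the step doesn't match."""
    match = pattern.match(text)
    if match is None:
        return None
    return {key: converters.get(key, str)(value)
            for key, value in match.groupdict().items()}

print(parse_arguments("there are 5 cucumbers"))  # {'start': 5}
```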

The default parser is string, i.e. a plain one-to-one match against the step definition. Parsers other than string, as well as their optional arguments, are specified like this:

For the cfparse parser:

from pytest_bdd import parsers

@given(
    parsers.cfparse(
        "there are {start:Number} cucumbers",
        extra_types=dict(Number=int),
    ),
    target_fixture="start_cucumbers",
)
def start_cucumbers(start):
    return dict(start=start, eat=0)

For the re parser:

from pytest_bdd import parsers

@given(
    parsers.re(r"there are (?P<start>\d+) cucumbers"),
    converters=dict(start=int),
    target_fixture="start_cucumbers",
)
def start_cucumbers(start):
    return dict(start=start, eat=0)

Example:

Feature: Step arguments
    Scenario: Arguments for given, when, thens
        Given there are 5 cucumbers

        When I eat 3 cucumbers
        And I eat 2 cucumbers

        Then I should have 0 cucumbers

The code will look like:

from pytest_bdd import scenario, given, when, then, parsers


@scenario("arguments.feature", "Arguments for given, when, thens")
def test_arguments():
    pass


@given(parsers.parse("there are {start:d} cucumbers"), target_fixture="start_cucumbers")
def start_cucumbers(start):
    return dict(start=start, eat=0)


@when(parsers.parse("I eat {eat:d} cucumbers"))
def eat_cucumbers(start_cucumbers, eat):
    start_cucumbers["eat"] += eat


@then(parsers.parse("I should have {left:d} cucumbers"))
def should_have_left_cucumbers(start_cucumbers, start, left):
    assert start_cucumbers['start'] == start
    assert start - start_cucumbers['eat'] == left

The example code also shows the possibility of passing argument converters, which may be useful if you need to post-process step arguments after the parser.

You can implement your own step parser. Its interface is quite simple. The code can look like this:

import re
from pytest_bdd import given, parsers


class MyParser(parsers.StepParser):
    """Custom parser."""

    def __init__(self, name, **kwargs):
        """Compile regex."""
        super().__init__(name)
        self.regex = re.compile(re.sub(r"%(.+)%", r"(?P<\1>.+)", self.name), **kwargs)

    def parse_arguments(self, name):
        """Get step arguments.

        :return: `dict` of step arguments
        """
        return self.regex.match(name).groupdict()

    def is_matching(self, name):
        """Match given name with the step name."""
        return bool(self.regex.match(name))


@given(MyParser("there are %start% cucumbers"), target_fixture="start_cucumbers")
def start_cucumbers(start):
    return dict(start=start, eat=0)

Step arguments are fixtures as well!

Step arguments are injected into the pytest request context as normal fixtures, with names equal to the names of the arguments. This opens a number of possibilities:

  • you can access a step's argument as a fixture in another step function just by mentioning it as an argument (just like any other pytest fixture)
  • if the name of a step argument clashes with an existing fixture, the fixture will be overridden by the step's argument value; this way you can set/override the value of a fixture deep inside the fixture tree in an ad-hoc way, simply by choosing the proper name for the step argument.
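For instance (a sketch; the step names and the then step are illustrative):

```python
from pytest_bdd import given, then, parsers

@given(parsers.parse("there are {start:d} cucumbers"), target_fixture="start_cucumbers")
def start_cucumbers(start):
    return dict(start=start)

# `start` was captured by the given step above; because step arguments are
# injected as fixtures, this step can request it by name like any other fixture.
@then("the initial amount should be remembered")
def initial_amount_remembered(start, start_cucumbers):
    assert start_cucumbers["start"] == start
```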

Override fixtures via given steps

Dependency injection is not a panacea when your test setup data has a complex structure. Sometimes you need a given step that imperatively changes a fixture only for a certain test (scenario), while leaving it untouched for other tests. To allow this, the given decorator has a special parameter, target_fixture:

import pytest
from pytest_bdd import given, then

@pytest.fixture
def foo():
    return "foo"


@given("I have injecting given", target_fixture="foo")
def injecting_given():
    return "injected foo"


@then('foo should be "injected foo"')
def foo_is_foo(foo):
    assert foo == 'injected foo'

Feature: Target fixture
    Scenario: Test given fixture injection
        Given I have injecting given
        Then foo should be "injected foo"

In this example, the existing fixture foo will be overridden by the given step I have injecting given, but only for the scenario it is used in.

Multiline steps

Like Gherkin, pytest-bdd supports multiline steps (aka PyStrings), but in a much cleaner and more powerful way:

Feature: Multiline steps
    Scenario: Multiline step using sub indentation
        Given I have a step with:
            Some
            Extra
            Lines
        Then the text should be parsed with correct indentation

A step is considered multiline if the line(s) following its first line are indented relative to the first line. The step name is then simply extended by joining the further lines with newlines. In the example above, the Given step name will be:

'I have a step with:\nSome\nExtra\nLines'

You can of course register the step using its full name (including the newlines), but it is more practical to use step arguments and capture the lines after the first (or some subset of them) into an argument:

from pytest_bdd import given, then, scenario, parsers


@scenario(
    'multiline.feature',
    'Multiline step using sub indentation',
)
def test_multiline():
    pass


@given(parsers.parse("I have a step with:\n{text}"), target_fixture="i_have_text")
def i_have_text(text):
    return text


@then("the text should be parsed with correct indentation")
def text_should_be_correct(i_have_text, text):
    assert i_have_text == text == 'Some\nExtra\nLines'

Note that the then step definition (text_should_be_correct) in this example uses the text fixture, which is provided by the given step (i_have_text) argument with the same name (text). This possibility is described in the Step arguments are fixtures as well! section.

Scenarios shortcut

If you have a relatively large set of feature files, it's tedious to manually bind scenarios to tests using the scenario decorator. Of course, the manual approach gives you the full power to additionally parametrize the test, give the test function a nice name, document it, and so on, but in the majority of cases you don't need that. Instead, you want to bind all scenarios found in the feature folder(s) recursively and automatically. For this there's the scenarios helper.

from pytest_bdd import scenarios

# assume 'features' subfolder is in this file's directory
scenarios('features')

That's all you need to do to bind all scenarios found in the features folder! Note that you can pass multiple paths, and those paths can be either feature files or feature folders.

from pytest_bdd import scenarios

# pass multiple paths/files
scenarios('features', 'other_features/some.feature', 'some_other_features')

But what if you need to manually bind a certain scenario while leaving the others to be bound automatically? Just declare your scenario in the normal way, but make sure you do it BEFORE calling the scenarios helper.

from pytest_bdd import scenario, scenarios

@scenario('features/some.feature', 'Test something')
def test_something():
    pass

# assume 'features' subfolder is in this file's directory
scenarios('features')

In the example above, the test_something scenario binding is kept manual; the other scenarios found in the features folder are bound automatically.

Scenario outlines

Scenarios can be parametrized to cover several cases. In Gherkin, variable templates are written using angle brackets, as in <somevalue>. Gherkin scenario outlines are supported by pytest-bdd exactly as described in the behave docs.

Example:

Feature: Scenario outlines
    Scenario Outline: Outlined given, when, thens
        Given there are <start> cucumbers
        When I eat <eat> cucumbers
        Then I should have <left> cucumbers

        Examples:
        | start | eat | left |
        |  12   |  5  |  7   |

The pytest-bdd feature file format also supports example tables in a different, vertical form:

Feature: Scenario outlines
    Scenario Outline: Outlined given, when, thens
        Given there are <start> cucumbers
        When I eat <eat> cucumbers
        Then I should have <left> cucumbers

        Examples: Vertical
        | start | 12 | 2 |
        | eat   | 5  | 1 |
        | left  | 7  | 1 |

This form allows tables with many columns while keeping the maximum text width predictable, without significantly affecting readability.

The code will look like:

from pytest_bdd import given, when, then, scenario


@scenario(
    "outline.feature",
    "Outlined given, when, thens",
    example_converters=dict(start=int, eat=float, left=str)
)
def test_outlined():
    pass


@given("there are <start> cucumbers", target_fixture="start_cucumbers")
def start_cucumbers(start):
    assert isinstance(start, int)
    return dict(start=start)


@when("I eat <eat> cucumbers")
def eat_cucumbers(start_cucumbers, eat):
    assert isinstance(eat, float)
    start_cucumbers["eat"] = eat


@then("I should have <left> cucumbers")
def should_have_left_cucumbers(start_cucumbers, start, eat, left):
    assert isinstance(left, str)
    assert start - eat == int(left)
    assert start_cucumbers["start"] == start
    assert start_cucumbers["eat"] == eat

The example code also shows the possibility of passing example converters, which may be useful if you need parameter types other than strings.

Feature examples

It's possible to declare example table once for the whole feature, and it will be shared among all the scenarios of that feature:

Feature: Outline

    Examples:
    | start | eat | left |
    |  12   |  5  |  7   |
    |  5    |  4  |  1   |

    Scenario Outline: Eat cucumbers
        Given there are <start> cucumbers
        When I eat <eat> cucumbers
        Then I should have <left> cucumbers

    Scenario Outline: Eat apples
        Given there are <start> apples
        When I eat <eat> apples
        Then I should have <left> apples

For more complex cases, you might want to parametrize at both levels: feature and scenario. This is allowed as long as the parameter names do not clash:

Feature: Outline

    Examples:
    | start | eat | left |
    |  12   |  5  |  7   |
    |  5    |  4  |  1   |

    Scenario Outline: Eat fruits
        Given there are <start> <fruits>
        When I eat <eat> <fruits>
        Then I should have <left> <fruits>

        Examples:
        | fruits  |
        | oranges |
        | apples  |

    Scenario Outline: Eat vegetables
        Given there are <start> <vegetables>
        When I eat <eat> <vegetables>
        Then I should have <left> <vegetables>

        Examples:
        | vegetables |
        | carrots    |
        | tomatoes   |

Combine scenario outline and pytest parametrization

It's also possible to parametrize the scenario on the Python side. This is useful when the example table does not need to appear in the feature file for every scenario.

The code will look like:

import pytest
from pytest_bdd import scenario, given, when, then


# Here we use pytest to parametrize the test with the parameters table
@pytest.mark.parametrize(
    ["start", "eat", "left"],
    [(12, 5, 7)],
)
@scenario(
    "parametrized.feature",
    "Parametrized given, when, thens",
)
# Note that we should take the same arguments in the test function that we use
# for the test parametrization either directly or indirectly (fixtures depend on them).
def test_parametrized(start, eat, left):
    """We don't need to do anything here, everything will be managed by the scenario decorator."""


@given("there are <start> cucumbers", target_fixture="start_cucumbers")
def start_cucumbers(start):
    return dict(start=start)


@when("I eat <eat> cucumbers")
def eat_cucumbers(start_cucumbers, start, eat):
    start_cucumbers["eat"] = eat


@then("I should have <left> cucumbers")
def should_have_left_cucumbers(start_cucumbers, start, eat, left):
    assert start - eat == left
    assert start_cucumbers["start"] == start
    assert start_cucumbers["eat"] == eat

With a parametrized.feature file:

Feature: parametrized
    Scenario: Parametrized given, when, thens
        Given there are <start> cucumbers
        When I eat <eat> cucumbers
        Then I should have <left> cucumbers

The significant downside of this approach is that the parameter table is not visible in the feature file.

Organizing your scenarios

The more features and scenarios you have, the more important the question of their organization becomes. The recommended approach:

  • organize your feature files in the folders by semantic groups:
features
│
├──frontend
│  │
│  └──auth
│     │
│     └──login.feature
└──backend
   │
   └──auth
      │
      └──login.feature

This looks fine, but how do you run tests only for a certain feature? Since pytest-bdd uses pytest, BDD scenarios are actually normal tests. But test files are separate from the feature files, and the mapping is up to the developers, so the test file structure can look completely different:

tests
│
└──functional
   │
   └──test_auth.py
      │
      └ """Authentication tests."""
        from pytest_bdd import scenario

        @scenario('frontend/auth/login.feature')
        def test_logging_in_frontend():
            pass

        @scenario('backend/auth/login.feature')
        def test_logging_in_backend():
            pass

To pick which tests to run, we can use pytest's test selection. The problem is that you have to know how your tests are organized; knowing only the feature file organization is not enough. Cucumber tags introduce a standard way of categorizing features and scenarios, which pytest-bdd supports. For example, we could have:

@login @backend
Feature: Login

  @successful
  Scenario: Successful login

pytest-bdd uses pytest markers as storage for the tags of a given scenario test, so we can use standard test selection:

py.test -m "backend and login and successful"

The feature and scenario markers are no different from standard pytest markers, and the @ symbol is stripped out automatically to allow test selector expressions. If you want BDD-related tags to be distinguishable from other test markers, use a prefix such as bdd. Note that if you use the pytest --strict option, all BDD tags mentioned in the feature files must also appear in the markers setting of the pytest.ini config. Also, use tag names that are Python-compatible identifiers, e.g. starting with a non-digit and containing only underscores and alphanumerics. That way you can safely use tags for test filtering.
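For example, when running with --strict, the tags used above would have to be registered in pytest.ini (a sketch assuming the tags from the previous example):

```ini
[pytest]
markers =
    login
    backend
    successful
```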

You can customize how tags are converted to pytest marks by implementing the pytest_bdd_apply_tag hook and returning True from it:

def pytest_bdd_apply_tag(tag, function):
    if tag == 'todo':
        marker = pytest.mark.skip(reason="Not implemented yet")
        marker(function)
        return True
    else:
        # Fall back to pytest-bdd's default behavior
        return None

Test setup

Test setup is implemented within the Given section. Even though these steps are executed imperatively to apply possible side effects, pytest-bdd tries to benefit from pytest fixtures, which are based on dependency injection, and to make the setup more declarative in style.

@given("I have a beautiful article", target_fixture="article")
def article():
    return Article(is_beautiful=True)

The target PyTest fixture "article" gets the return value and any other step can depend on it.

Feature: The power of PyTest
    Scenario: Symbolic name across steps
        Given I have a beautiful article
        When I publish this article

The when step refers to the article in order to publish it.

@when("I publish this article")
def publish_article(article):
    article.publish()

Many other BDD toolkits operate on a global context and put side effects there. This makes it very difficult to implement steps, because dependencies appear only as side effects at run time and are not declared in the code. The publish article step has to trust that the article is already in the context, and has to know the name of the attribute it is stored under, its type, and so on.

In pytest-bdd you just declare the step function's dependency as an argument, and pytest will make sure to provide it.

Still, side effects can be applied in the imperative style, by design of BDD.

Feature: News website
    Scenario: Publishing an article
        Given I have a beautiful article
        And my article is published

Functional tests can reuse the fixture libraries created for your unit tests and upgrade them by applying side effects.

@pytest.fixture
def article():
    return Article(is_beautiful=True)


@given("I have a beautiful article")
def i_have_a_beautiful_article(article):
    pass


@given("my article is published")
def published_article(article):
    article.publish()
    return article

This way, side effects were applied to our article, and pytest makes sure that all steps requiring the "article" fixture receive the same object. The values of the "published_article" and "article" fixtures are the same object.

Fixtures are evaluated only once within the PyTest scope and their values are cached.

Backgrounds

It's often the case that covering a certain feature requires multiple scenarios, and it's logical that the setup for those scenarios will have some common parts (if not be identical). For this there are backgrounds: pytest-bdd implements Gherkin backgrounds for features.

Feature: Multiple site support

  Background:
    Given a global administrator named "Greg"
    And a blog named "Greg's anti-tax rants"
    And a customer named "Wilson"
    And a blog named "Expensive Therapy" owned by "Wilson"

  Scenario: Wilson posts to his own blog
    Given I am logged in as Wilson
    When I try to post to "Expensive Therapy"
    Then I should see "Your article was published."

  Scenario: Greg posts to a client's blog
    Given I am logged in as Greg
    When I try to post to "Expensive Therapy"
    Then I should see "Your article was published."

In this example, all steps from the background are executed before each scenario's own given steps, making it possible to prepare common setup for multiple scenarios in a single feature. For background best practices, please refer to the Gherkin documentation.

Note

Only "Given" steps should be used in the "Background" section. "When" and "Then" steps are prohibited, because their purpose relates to performing actions and consuming outcomes, which conflicts with the aim of the "Background": to prepare the system for tests, i.e. to "put the system in a known state", as "Given" does. This restriction applies in strict Gherkin mode, which is enabled by default.

Reusing fixtures

Sometimes scenarios define new names for an existing fixture so it can be inherited (reused). For example, if we have the pytest fixture:

@pytest.fixture
def article():
   """Test article."""
   return Article()

Then this fixture can be reused with other names using given():

@given('I have a beautiful article')
def i_have_an_article(article):
   """I have an article."""

Reusing steps

It is possible to define some common steps in the parent conftest.py and simply expect them in the child test file.

common_steps.feature:

Scenario: All steps are declared in the conftest
    Given I have a bar
    Then bar should have value "bar"

conftest.py:

from pytest_bdd import given, then


@given("I have a bar", target_fixture="bar")
def bar():
    return "bar"


@then('bar should have value "bar"')
def bar_is_bar(bar):
    assert bar == "bar"

test_common.py:

from pytest_bdd import scenario


@scenario("common_steps.feature", "All steps are declared in the conftest")
def test_conftest():
    pass

There are no definitions of the steps in the test file. They were collected from the parent conftests.

Using unicode in the feature files

As mentioned above, utf-8 encoding is used by default for parsing feature files. For step definitions you should use unicode strings, which is the default in Python 3. If you are on Python 2, make sure you use unicode strings by prefixing them with u.

@given(parsers.re(u"у мене є рядок який містить '{0}'".format(u'(?P<content>.+)')))
def there_is_a_string_with_content(content, string):
    """Create string with unicode content."""
    string["content"] = content

Default steps

Here is the list of steps that are implemented inside of the pytest-bdd:

given
  • trace - enters the pdb debugger via pytest.set_trace()
when
  • trace - enters the pdb debugger via pytest.set_trace()
then
  • trace - enters the pdb debugger via pytest.set_trace()
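For example, to drop into the debugger in the middle of a scenario (the surrounding steps here are illustrative):

```gherkin
Scenario: Debugging a failing flow
    Given there are 5 cucumbers
    # Enters the pdb debugger before the remaining steps run
    When trace
    Then I should have 5 cucumbers
```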

Feature file paths

By default, pytest-bdd uses the current module's path as the base path for finding feature files, but this behaviour can be changed in the pytest configuration file (i.e. pytest.ini, tox.ini or setup.cfg) by declaring the new base path in the bdd_features_base_dir key. The path is interpreted as relative to the working directory when starting pytest. You can also override the features base path on a per-scenario basis, in order to use a different path for specific tests.

pytest.ini:

[pytest]
bdd_features_base_dir = features/

tests/test_publish_article.py:

from pytest_bdd import scenario


@scenario("foo.feature", "Foo feature in features/foo.feature")
def test_foo():
    pass


@scenario(
    "foo.feature",
    "Foo feature in tests/local-features/foo.feature",
    features_base_dir="./local-features/",
)
def test_foo_local():
    pass

The features_base_dir parameter can also be passed to the @scenario decorator.

Avoid retyping the feature file name

If you want to avoid retyping the feature file name when defining your scenarios in a test file, use functools.partial. This will make your life much easier when defining multiple scenarios in a test file. For example:

test_publish_article.py:

from functools import partial

import pytest_bdd


scenario = partial(pytest_bdd.scenario, "/path/to/publish_article.feature")


@scenario("Publishing the article")
def test_publish():
    pass


@scenario("Publishing the article as unprivileged user")
def test_publish_unprivileged():
    pass

You can learn more about functools.partial in the Python docs.
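As a stdlib-only illustration of what partial does here, with a hypothetical plain stand-in function instead of pytest_bdd.scenario:

```python
from functools import partial

def scenario(feature_file, scenario_name):
    """Hypothetical stand-in for pytest_bdd.scenario, just to show the binding."""
    return feature_file, scenario_name

# Bind the feature file once; later calls only need the scenario name.
bound_scenario = partial(scenario, "publish_article.feature")

print(bound_scenario("Publishing the article"))
# ('publish_article.feature', 'Publishing the article')
```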

Hooks

pytest-bdd exposes several pytest hooks which might be helpful for building reporting, visualization, etc. on top of it:

  • pytest_bdd_before_scenario(request, feature, scenario) - Called before scenario is executed
  • pytest_bdd_after_scenario(request, feature, scenario) - Called after scenario is executed (even if one of steps has failed)
  • pytest_bdd_before_step(request, feature, scenario, step, step_func) - Called before the step function is executed and its arguments are evaluated
  • pytest_bdd_before_step_call(request, feature, scenario, step, step_func, step_func_args) - Called before step function is executed with evaluated arguments
  • pytest_bdd_after_step(request, feature, scenario, step, step_func, step_func_args) - Called after step function is successfully executed
  • pytest_bdd_step_error(request, feature, scenario, step, step_func, step_func_args, exception) - Called when step function failed to execute
  • pytest_bdd_step_func_lookup_error(request, feature, scenario, step, exception) - Called when step lookup failed

Browser testing

Tools recommended for browser testing:

  • pytest-splinter - pytest integration with Splinter for real browser testing (used in the example at the top of this document)

Reporting

It's important to have nice reporting for your BDD tests. Cucumber introduced a de facto standard JSON format, which can be consumed, for example, by this Jenkins plugin.

To produce output in the JSON format:

py.test --cucumberjson=<path to json report>

This will output an expanded (meaning scenario outlines will be expanded to several scenarios) cucumber format. To also fill in parameters in the step name, you have to explicitly tell pytest-bdd to use the expanded format:

py.test --cucumberjson=<path to json report> --cucumberjson-expanded

To enable gherkin-formatted output on terminal, use

py.test --gherkin-terminal-reporter

The terminal reporter supports the expanded format as well:

py.test --gherkin-terminal-reporter-expanded

Test code generation helpers

For newcomers it's sometimes hard to write all the needed test code without getting frustrated. To simplify life, a simple code generator was implemented. It creates fully functional, but empty, tests and step definitions for a given feature file. It's provided as a separate console script in the pytest-bdd package:

pytest-bdd generate <feature file name> .. <feature file nameN>

It will print the generated code to the standard output so you can easily redirect it to the file:

pytest-bdd generate features/some.feature > tests/functional/test_some.py

Advanced code generation

For more experienced users, there's a smart code generation/suggestion feature. It generates only the test code that is not yet there, checking existing tests and step definitions the same way as during test execution. The code suggestion tool is invoked by passing additional pytest arguments:

py.test --generate-missing --feature features tests/functional

The output will be like:

============================= test session starts ==============================
platform linux2 -- Python 2.7.6 -- py-1.4.24 -- pytest-2.6.2
plugins: xdist, pep8, cov, cache, bdd, bdd, bdd
collected 2 items

Scenario is not bound to any test: "Code is generated for scenarios which are not bound to any tests" in feature "Missing code generation" in /tmp/pytest-552/testdir/test_generate_missing0/tests/generation.feature
--------------------------------------------------------------------------------

Step is not defined: "I have a custom bar" in scenario: "Code is generated for scenario steps which are not yet defined(implemented)" in feature "Missing code generation" in /tmp/pytest-552/testdir/test_generate_missing0/tests/generation.feature
--------------------------------------------------------------------------------
Please place the code above to the test file(s):

@scenario('tests/generation.feature', 'Code is generated for scenarios which are not bound to any tests')
def test_Code_is_generated_for_scenarios_which_are_not_bound_to_any_tests():
    """Code is generated for scenarios which are not bound to any tests."""


@given("I have a custom bar")
def I_have_a_custom_bar():
    """I have a custom bar."""

As a side effect, the tool validates the feature files for format errors and also for some logic bugs, for example the ordering of step types.

Migration of your tests from versions 3.x.x

Given steps are no longer fixtures. If a given step needs to set up a fixture for later steps, use the target_fixture parameter:

@given("there's an article", target_fixture="article")
def there_is_an_article():
    return Article()
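The fixture created this way is then received by later steps through ordinary pytest dependency injection. A minimal sketch (Article here is a stand-in for the project's real model, and the step texts are illustrative):

```python
from pytest_bdd import given, then


class Article:
    """Stand-in for the project's real article model (assumption)."""
    published = False


@given("there's an article", target_fixture="article")
def there_is_an_article():
    # The return value becomes the "article" fixture for this scenario.
    return Article()


@then("the article should exist")
def article_should_exist(article):
    # "article" is injected from the given step above by its fixture name.
    assert article is not None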

Given steps no longer have the fixture parameter. In fact, a step may depend on multiple fixtures; simply use a normal step declaration with dependency injection:

@given("there's an article")
def there_is_an_article(article):
    pass

The strict Gherkin option has been removed, so the strict_gherkin parameter can be removed from the scenario decorators, as can bdd_strict_gherkin from the ini files.

Step validation handlers for the hook pytest_bdd_step_validation_error should be removed.

License

This software is licensed under the MIT license.

© 2013-2014 Oleg Pidsadnyi, Anatoly Bubenkov and others

Comments
  • pytest_bdd.exceptions.GivenAlreadyUsed with step arguments

    pytest_bdd.exceptions.GivenAlreadyUsed with step arguments

    From the readme:

    Often it's possible to reuse steps giving them a parameter(s). This allows to have single implementation and multiple use, so less code. Also opens the possibility to use same step twice in single scenario and with different arguments!

    But this doesn't seem to be the case for @given. My usecase is this feature file:

    Feature: Going back and forward.
        Testing the :back/:forward commands.
    
        Scenario: Going back
            Given I open backforward/1.html
            And I open backforward/2.html
            When I run :back
            Then backforward/1.html should be loaded
    

    and this python file:

    import pytest_bdd as bdd
    
    bdd.scenarios('.')
    
    
    @bdd.given(bdd.parsers.parse("I set {sect} -> {opt} to {value}"))
    def set_setting(quteproc, sect, opt, value):
        quteproc.set_setting(sect, opt, value)
    
    
    @bdd.given(bdd.parsers.parse("I open {path}"))
    def open_path(quteproc, path):
        quteproc.open_path(path)
    
    
    @bdd.when(bdd.parsers.parse("I run {command}"))
    def run_command(quteproc, command):
        quteproc.send_cmd(command)
    
    
    @bdd.then(bdd.parsers.parse("{} should be loaded"))
    def url_should_be_loaded(httpbin, path):
        requests = httpbin.get_requests()
        assert requests[-1] == [('GET', url)]
    

    So opening those two pages is clearly "setup", i.e. I think they both belong to Given ....

    But I get:

    pytest_bdd.exceptions.GivenAlreadyUsed: Fixture "open_path" that implements this "I open backforward/2.html" given step has been already used.
    
    opened by The-Compiler 48
  • Implement datatables

    Implement datatables

    Implement datatables for Given clauses. The work for this was mainly done in https://github.com/pytest-dev/pytest-bdd/pull/180. I am recreating it since I am taking it over to help get this merged in. The changes are identical between the two. All I did was merge in the latest changes to the PR/resolved merge conflicts and added more testing.

    @youtux @olegpidsadnyi can we get this reviewed/merged in please? It looks like a lot of the community wants this feature ( myself included ) based off of the comments in the original PR.

Copied from the original PR: Fixes #150 and helps mitigate the effects of #157

    opened by gnikonorov 22
  • Added support for defining step definitions with regular expressions.

    Added support for defining step definitions with regular expressions.

    Added support for defining step definitions with regular expressions.

    See Step Definitions for Cucumber (Pythonified):

    Scenario: Some cukes
      Given I have 48 cukes in my belly
    

    The I have 48 cukes in my belly part of the step (the text following the Given keyword) will match the Step Definition below.

@given("I have (?P<cukes>\d+) cukes in my belly")
def I_have_cukes_in_my_belly(cukes):
    # Do something with the cukes
    pass
    

    Note: I've required regex step definitions to use named capture groups for function arguments. Unnamed capture groups will not be passed to the step function.
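This rule can be illustrated with plain re (illustration only, not pytest-bdd's internals): groupdict() yields exactly the named groups, so an unnamed group contributes nothing to the step arguments.

```python
import re

# Named groups become step-function keyword arguments; unnamed groups do not.
# Plain-``re`` illustration of that rule (not pytest-bdd's actual code).
pattern = re.compile(r"I have (?P<cukes>\d+) cukes in (a|my) belly")

match = pattern.match("I have 48 cukes in my belly")
assert match is not None

# groupdict() keeps only the named groups; the unnamed ``(a|my)`` is dropped.
assert match.groupdict() == {"cukes": "48"}
```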

    opened by curzona 21
  • Unable to Reuse Step with Scenario Outline Examples

    Unable to Reuse Step with Scenario Outline Examples

I have a situation where I'm unable to reuse a step when using a scenario outline. Below is an example.

    Example.feature

Scenario Outline:
    Given I have Students list in database
    When Student list is filtered with <last_name>
    And Student list is filtered with <first_name>
    And Student list is filtered with <student_age>
    Then Student count is more than zero

Examples:
    | last_name | first_name | student_age |
    | xyz_ln    | xyz_fn.    | 20          |
    | xyz_ln    | fgh_fn.    | 21          |

    Stepdef.py

@when("Student list is filtered with <filter_value>")
def filter_student_list(context, filter_value):
    print(f"logic for filtering the list with: {filter_value}")

    Error:

When I execute the above example, it throws a "step definition not found" error. The reason is that in Stepdef.py I used the generic variable <filter_value>, but pytest expects the feature file to use the same variable name. Can you please help me resolve this?

    opened by MummanaSubramanya 17
  • Regression: KeyError in replacer

    Regression: KeyError in replacer

    With #445, I'm seeing many tracebacks like this in my tests:

    ______________________ test_inserting_text_into_a_text_field_at_specific_position _______________________
    
    request = <FixtureRequest for <Function test_inserting_text_into_a_text_field_at_specific_position>>
    _pytest_bdd_example = {}
    
        @pytest.mark.usefixtures(*args)
        def scenario_wrapper(request, _pytest_bdd_example):
    >       scenario = templated_scenario.render(_pytest_bdd_example)
    
    .tox/bleeding/lib/python3.9/site-packages/pytest_bdd/scenario.py:173: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    .tox/bleeding/lib/python3.9/site-packages/pytest_bdd/parser.py:249: in render
        steps = [
    .tox/bleeding/lib/python3.9/site-packages/pytest_bdd/parser.py:251: in <listcomp>
        name=templated_step.render(context),
    .tox/bleeding/lib/python3.9/site-packages/pytest_bdd/parser.py:364: in render
        return STEP_PARAM_RE.sub(replacer, self.name)
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    m = <re.Match object; span=(18, 24), match='<Home>'>
    
        def replacer(m: typing.Match):
            varname = m.group(1)
    >       return str(context[varname])
    E       KeyError: 'Home'
    
    .tox/bleeding/lib/python3.9/site-packages/pytest_bdd/parser.py:362: KeyError
    

    There are many other such failures, but this one is from this scenario:

        Scenario: Inserting text into a text field at specific position
            When I open data/paste_primary.html
            And I insert "one two three four" into the text field
            And I run :click-element id qute-textarea
            And I wait for "Entering mode KeyMode.insert (reason: clicking input)" in the log
            # Move to the beginning and two characters to the right
            And I press the keys "<Home>"
            And I press the key "<Right>"
            And I press the key "<Right>"
            And I run :insert-text Hello world
            # Compare
            Then the javascript message "textarea contents: onHello worlde two three four" should be logged
    

    specifically, the And I press the keys "<Home>" line there.

    The underlying Python code is simple:

    @bdd.when(bdd.parsers.re('I press the keys? "(?P<keys>[^"]*)"'))
    def press_keys(quteproc, keys):
        """Send the given fake keys to qutebrowser."""
        quteproc.press_keys(keys)
    

    A slightly simpler example:

        Scenario: :selection-follow with link tabbing (without JS)
            When I set content.javascript.enabled to false
            And I run :mode-leave
            And I run :jseval document.activeElement.blur();
            And I run :fake-key <tab>
            And I run :selection-follow
            Then data/hello.txt should be loaded
    

    with this code:

    @bdd.when(bdd.parsers.parse("I run {command}"))
    def run_command(quteproc, server, tmpdir, command):
        # ...
    

results in a KeyError: tab. In other words, anything in <...> in a scenario now seems to be parsed in some special way.
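The failure mode is easy to mimic with plain re (an illustration under the assumption, based on the traceback above, that STEP_PARAM_RE is essentially an angle-bracket pattern):

```python
import re

# Assumption: STEP_PARAM_RE is essentially this angle-bracket pattern,
# judging from the traceback above (match='<Home>', group(1)='Home').
STEP_PARAM_RE = re.compile(r"<(.+?)>")

context = {"item_type": "1.html"}

def replacer(m: re.Match) -> str:
    return str(context[m.group(1)])

# An outline parameter present in the context is substituted fine...
assert STEP_PARAM_RE.sub(replacer, "I open <item_type>") == "I open 1.html"

# ...but a literal key name like <Home> raises KeyError, as reported.
raised = False
try:
    STEP_PARAM_RE.sub(replacer, 'I press the keys "<Home>"')
except KeyError as exc:
    raised = exc.args[0] == "Home"
assert raised
```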


    I've tried to write a reproducer:

    bug.feature:

    Feature: Reproducer
        Scenario: Pressing keys
            When I run :fake-key <Ctrl+c>
            And I run :fake-key <tab>
    

    test_bug.py:

    import pytest_bdd as bdd
    
    bdd.scenarios('bug.feature')
    
    @bdd.when(bdd.parsers.parse("I run {command}"))
    def run_command(command):
        pass
    

    but unfortunately, I can not reproduce the issue there. Any ideas what could be going wrong there?

    bug 
    opened by The-Compiler 17
  • FIx unicode argumented fixture

    FIx unicode argumented fixture

This solves the problem of sending Unicode in parameterized steps. For example, the step:

When вводим в поле "Email" данные "Случайно" на странице "Контактные данные"

would be matched by code like this:

@when(re.compile(r'вводим в поле "(?P<field_name>.+)" данные "(?P<input_data>.+)" на странице "(?P.+)"', re.U))

    opened by aohontsev 17
  • Reusable step functions regression: Specific step doesn't override generic one anymore

    Reusable step functions regression: Specific step doesn't override generic one anymore

    After #534, with this conftest.py:

    import pytest_bdd as bdd
    import pytest
    
    @pytest.fixture
    def value():
        return []
    
    @bdd.when(bdd.parsers.parse("I have a {thing} thing"))
    def generic(thing, value):
        value.append(thing)
    

    this test_specific.py:

    import pytest_bdd as bdd
    
    @bdd.when("I have a specific thing")
    def specific(value):
        value.append("42")
    
    @bdd.then(bdd.parsers.parse("The value should be {thing}"))
    def check(thing, value):
        assert value == [thing]
    
    
    bdd.scenarios("specific.feature")
    

    and this specific.feature:

    Scenario: Overlapping steps 1
            When I have a specific thing
            Then the value should be 42
    
    Scenario: Overlapping steps 2
            When I have a generic thing
            Then the value should be generic
    

    I would expect that specific takes precedence over generic, i.e., the value for the first test is "42", not "generic". This used to be the case, but isn't anymore after that commit:

        @bdd.then(bdd.parsers.parse("The value should be {thing}"))
        def check(thing, value):
    >       assert value == [thing]
    E       AssertionError: assert ['specific'] == ['42']
    E         At index 0 diff: 'specific' != '42'
    E         Use -v to get more diff
    

    Note, however, it does work fine when generic is moved from the conftest.py file into test_specific.py:

    bug 
    opened by The-Compiler 15
  • Fix getting module with --import-mode=importlib

    Fix getting module with --import-mode=importlib

    With the pytest 6 RC, a new --import-mode=importlib was added, which uses importlib rather than sys.path hacks to import test modules. I'd really like to use it, and in future pytest releases that'll become the default.

    Unfortunately, it fails with pytest-bdd:

    tests/end2end/features/test_backforward_bdd.py:21: in <module>
        bdd.scenarios('backforward.feature')
    .tox/py38-pyqt515/lib/python3.8/site-packages/pytest_bdd/scenario.py:296: in scenarios
        features_base_dir = get_features_base_dir(module)
    .tox/py38-pyqt515/lib/python3.8/site-packages/pytest_bdd/scenario.py:241: in get_features_base_dir
        default_base_dir = os.path.dirname(caller_module.__file__)
    E   AttributeError: 'NoneType' object has no attribute '__file__'
    

    because module here is None (relevant code):

    def scenarios(*feature_paths, **kwargs):
        frame = inspect.stack()[1]
        module = inspect.getmodule(frame[0])    # <- this is None
    
        features_base_dir = kwargs.get("features_base_dir")
        if features_base_dir is None:
            features_base_dir = get_features_base_dir(module)
    

    This fix seems to work, and is inspired from how pytest imports modules in this case. I'm not sure if it's the right solution though, there might be some easier way towards the same goal?

    cc @nicoddemus

    opened by The-Compiler 15
  • Scenario Outline placeholders not replaced with example values in argumented steps

    Scenario Outline placeholders not replaced with example values in argumented steps

    With Cucumber and Behave, scenario outline steps have their placeholders substituted with the real values from the current example before the step definition is looked up.

    We have a library of steps with arguments implementing commonly used parts of the Splinter API. So an outlined scenario might look like this:

    Feature: Generics
        Scenario Outline: Visit collection and page
            When I visit "/<item_type>/"
            When I click the link with text that contains "<link_text>"
    
        Examples: Collections
            | item_type                     | link_text                             |
            | antibody_lot                  | ENCAB000ACQ                           |
    

    Using step definitions:

    @when(parse('I visit "{url}"'))
    def when_i_visit_url(browser, base_url, url):
        full_url = urljoin(base_url, url)
        browser.visit(full_url)
    
    @when(parse('I click the link with text that contains "{text}"'))
    def click_link_with_text_that_contains(browser, text):
        anchors = browser.find_link_by_partial_text(text)
        anchors.first.click()
    

    However here , the url for When I visit "/<item_type>/" is the unsubstituted string: "/<item_type>/".

    opened by lrowe 14
  • Auto-create scenario test functions

    Auto-create scenario test functions

pytest-bdd would be much more awesome if it auto-created scenario functions, so you wouldn't have to define empty functions just to make the test run. I think this could either be implemented as a test collector (which would also make the test reporter more user-friendly, since it would show the feature file as the source of tests) or as an invocation form of the scenario function, like:

    scenario('feature/something.feature', auto=True)

    opened by santagada 14
  • Fixtures not found with pytest 3.7

    Fixtures not found with pytest 3.7

    After upgrading from pytest 3.6.4 to 3.7.1, I get this with (I think) pretty much every test:

    .tox/py37-pyqt511/lib/python3.7/site-packages/_pytest/fixtures.py:520: in _get_active_fixturedef
        return self._fixture_defs[argname]
    E   KeyError: 'pytestbdd_given_I open data/backforward/1.txt'
    
    During handling of the above exception, another exception occurred:
    .tox/py37-pyqt511/lib/python3.7/site-packages/pytest_bdd/scenario.py:92: in _find_step_function
        return get_fixture_value(request, get_step_fixture_name(name, step.type, encoding))
    .tox/py37-pyqt511/lib/python3.7/site-packages/pytest_bdd/utils.py:36: in get_fixture_value
        return getfixturevalue(name)
    .tox/py37-pyqt511/lib/python3.7/site-packages/_pytest/fixtures.py:509: in getfixturevalue
        return self._get_active_fixturedef(argname).cached_result[0]
    .tox/py37-pyqt511/lib/python3.7/site-packages/_pytest/fixtures.py:523: in _get_active_fixturedef
        fixturedef = self._getnextfixturedef(argname)
    .tox/py37-pyqt511/lib/python3.7/site-packages/_pytest/fixtures.py:382: in _getnextfixturedef
        raise FixtureLookupError(argname, self)
    E   _pytest.fixtures.FixtureLookupError: ('pytestbdd_given_I open data/backforward/1.txt', <FixtureRequest for <Function 'test_going_back_in_a_new_tab_without_history'>>)
    
    During handling of the above exception, another exception occurred:
    tests/end2end/features/test_backforward_bdd.py:49: in _scenario
        ???
    .tox/py37-pyqt511/lib/python3.7/site-packages/pytest_bdd/scenario.py:163: in _execute_scenario
        step_func = _find_step_function(request, step, scenario, encoding=encoding)
    .tox/py37-pyqt511/lib/python3.7/site-packages/pytest_bdd/scenario.py:105: in _find_step_function
        feature=scenario.feature,
    E   pytest_bdd.exceptions.StepDefinitionNotFoundError: Step definition is not found: Given "I open data/backforward/1.txt". Line 47 in scenario "Going back in a new tab without history" in the feature "/home/florian/proj/qutebrowser/git/tests/end2end/features/backforward.feature
    
    opened by The-Compiler 12
  • Framework Mobile : pytest-bdd (6.1.1) + appium (Appium v1.22.3) + python client(2.7.1) + allure-report(2.12.0)

    Framework Mobile : pytest-bdd (6.1.1) + appium (Appium v1.22.3) + python client(2.7.1) + allure-report(2.12.0)

Need help, guys. I have checked all of the GitHub samples but haven't gotten a good result, so I am doing R&D to develop this framework:

    1. I want using appium + pytest-bdd for automation testing android

Starting with the basics, it works very well:

    from appium import webdriver
    import time
    
    from selenium.webdriver.common.action_chains import ActionChains
    from selenium.webdriver.common.actions import interaction
    from selenium.webdriver.common.actions.action_builder import ActionBuilder
    from selenium.webdriver.common.actions.pointer_input import PointerInput
    from selenium.common.exceptions import ElementNotVisibleException, ElementNotSelectableException, NoSuchElementException
    from selenium.webdriver.support.wait import WebDriverWait
    from appium.webdriver.common.touch_action import TouchAction
    from appium.webdriver.common.mobileby import MobileBy
    from appium.options.android import UiAutomator2Options
    from appium.webdriver.common.appiumby import AppiumBy
    
    common_caps = {
            'appium:udid': 'emulator-5554',
            'appium:platformName': 'Android',
            'appium:automationName': 'uiautomator2',
            'appium:deviceName': 'test123',
            'appium:platformVersion': '11',
            'appium:network_speed': 'full',
            'appium:appPackage' : 'com.XXXX',
            'appium:appActivity': 'com.XXXXXX',
            'appium:noReset': 'true',
            'appium:disableWindowAnimation': 'true'
        }
    
    driver = webdriver.Remote('http://127.0.0.1:4723/wd/hub', options=UiAutomator2Options().load_capabilities(common_caps), direct_connection=True, strict_ssl=False)
    driver.update_settings({
        "waitForIdleTimeout": 500,  # 1 seconds = 1000
    })
    driver.implicitly_wait(15)
    
    driver.find_element(by=AppiumBy.ANDROID_UIAUTOMATOR, value='new UiSelector().text("Switch Store")').click()
    time.sleep(2)
    actions = ActionChains(driver)
    actions.w3c_actions = ActionBuilder(driver, mouse=PointerInput(interaction.POINTER_TOUCH, "touch"))
    actions.w3c_actions.pointer_action.move_to_location(473, 1773)
    actions.w3c_actions.pointer_action.pointer_down()
    actions.w3c_actions.pointer_action.pause(2)
    actions.w3c_actions.pointer_action.move_to_location(473, 707)
    actions.w3c_actions.pointer_action.release()
    actions.perform()
    
    driver.quit()
    

2. but when I implement it with pytest-bdd, it doesn't work. The Android app opens, but there is an error on the given step.

    import pytest
    from appium import webdriver
    from pytest_bdd import scenarios, given, when, then
    from appium.webdriver.common.mobileby import MobileBy
    from appium.options.android import UiAutomator2Options
    from appium.webdriver.common.appiumby import AppiumBy
    import time
    from selenium.webdriver.common.action_chains import ActionChains
    from selenium.webdriver.common.actions import interaction
    from selenium.webdriver.common.actions.action_builder import ActionBuilder
    from selenium.webdriver.common.actions.pointer_input import PointerInput
    # Set up the Appium options for the Android device or emulator
    # Set up the Appium options for the Android device or emulator
    
    
    
    @pytest.fixture
    def driver(request):
        simulator_caps = {
            # A real device udid could be retrieved from `adb devices -l` output
            # If it is ommitted then the first available device will be used
            'appium:udid': 'emulator-5554',
            'appium:platformName': 'Android',
            'appium:automationName': 'uiautomator2',
            'appium:deviceName': 'avana',
            'appium:platformVersion': '11',
            'appium:network_speed': 'full',
            'appium:appPackage': 'com.XXXX',
            'appium:appActivity': 'com.XXXX',
            'appium:noReset': 'true',
            'appium:disableWindowAnimation': 'true'
    
            # ...or run the test on an emulator
            # 'appium:avd': 'emulator-5554',
        }
    
        driver = webdriver.Remote('http://127.0.0.1:4723/wd/hub',
                                 options=UiAutomator2Options().load_capabilities(simulator_caps), direct_connection=True,
                                 strict_ssl=False)
        driver.update_settings({"waitForIdleTimeout": 1500, })  # 1 seconds = 1000
        driver.implicitly_wait(15)
        yield driver
        driver.quit()
    
    @scenarios('app_scenarios.feature')
    
    @given("I am on the main screen of the Android app")
    def main_screen(driver):
        #disable this activity
        #driver.start_activity("com.XXXXXX", "com.XXXXXX)
        driver.find_element(by=AppiumBy.ANDROID_UIAUTOMATOR, value='new UiSelector().text("Switch Store")').click()
        time.sleep(2)
        actions = ActionChains(driver)
        actions.w3c_actions = ActionBuilder(driver, mouse=PointerInput(interaction.POINTER_TOUCH, "touch"))
        actions.w3c_actions.pointer_action.move_to_location(473, 1773)
        actions.w3c_actions.pointer_action.pointer_down()
        actions.w3c_actions.pointer_action.pause(2)
        actions.w3c_actions.pointer_action.move_to_location(473, 707)
        actions.w3c_actions.pointer_action.release()
        actions.perform()
        pass
    
    @when("I tap on the button on the main screen")
    def tap_button(driver):
        button_element = driver.find_element(MobileBy.ID, "button_id")
        button_element.click()
    
    @then("I should see a new screen with a message")
    def new_screen(driver):
        message_element = driver.find_element(MobileBy.ID, "message_id")
        assert message_element
    

Is there any sample of appium + pytest-bdd + allure report? I have already implemented this for web-based testing and it works very well.

Here is the example issue (screenshot omitted); I am confused about where I should report it.

    thank you in advance

    opened by permanadian 0
  • pytest_bdd_after_scenario has scenario status PASSED even if it failed

    pytest_bdd_after_scenario has scenario status PASSED even if it failed

In the fixture pytest_bdd_after_scenario(scenario: Scenario), all steps and scenarios have status failed=False, even though the test failed,

while the pytest_bdd_step_error fixture worked correctly (screenshot omitted):

    [gw0] [ 25%] FAILED ui_steps/test_reports_steps.py::test_deselect_facility_daily_risk_report[chromium] <- venv/lib/python3.9/site-packages/pytest_bdd/scenario.py ##teamcity[testFailed timestamp='2022-11-21T12:32:51.881' details='cls = <class |'_pytest.runner.CallInfo|'>|nfunc = <function call_runtest_hook.. at 0x107211c10>|nwhen = |'call|'|nreraise = (<class |'_pytest.outcomes.Exit|'>, <class |'KeyboardInterrupt|'>)|n|n @classmethod|n def from_call(|n cls,|n func: "Callable|[|[|], TResult|]",|n when: "Literal|[|'collect|', |'setup|', |'call|', |'teardown|'|]",|n reraise: Optional|[|n Union|[Type|[BaseException|], Tuple|[Type|[BaseException|], ...|]|]|n |] = None,|n ) -> "CallInfo|[TResult|]":|n """Call func, wrapping the result in a CallInfo.|n |n :param func:|n The function to call. Called without arguments.|n :param when:|n The phase in which the function is called.|n :param reraise:|n Exception or exceptions that shall propagate if raised by the|n function, instead of being wrapped in the CallInfo.|n """|n excinfo = None|n start = timing.time()|n precise_start = timing.perf_counter()|n try:|n> result: Optional|[TResult|] = fu

    opened by dmitriykow7 0
  • Cucumber JSON report is generated without skipped tests info

    Cucumber JSON report is generated without skipped tests info

    Environment: platform darwin -- Python 3.9.12, pytest-7.1.2, pluggy-1.0.0 plugins: splinter-3.3.2, bdd-6.0.1, clarity-1.0.1

    Steps to reproduce:

    1. Create simple pytest-bdd setup with feature file and tests.
2. In the feature file, add the default @skip tag like this (or use any other way to skip test execution):

       @skip
       Scenario: Test1
           When blabla
           Then blablabla
    3. Run pytest with addopts = -vv -rsfx --cucumber-json=cucumber_report.json --gherkin-terminal-reporter --color=yes in pytest.ini or same alternative in CLI
    4. Gherkin terminal output will contain test_bla.py::test_bla <- venv/lib/python3.9/site-packages/pytest_bdd/scenario.py SKIPPED (unconditional skip)

Actual result: the cucumber.json file doesn't contain any info about the skipped test at all.

Expected result: the cucumber.json file contains info about the skipped test (at least ..."result": {"status": "skipped"... ).

    opened by waad19 0
  • Add support of cucumber expressions

    Add support of cucumber expressions

The Cucumber Expressions module is part of the Gherkin toolset. To make pytest-bdd compatible with the official tools, it has to support cucumber expressions.

    https://github.com/cucumber/cucumber-expressions#readme There is the official implementation for python: https://pypi.org/project/cucumber-expressions/

    An example implementation can be found here https://github.com/elchupanebrej/pytest-bdd-ng/pull/76
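For readers unfamiliar with the format: a cucumber expression such as "I have {int} cukes in my belly" corresponds roughly to a regex with typed captures. A plain-re sketch of that correspondence (not the cucumber-expressions library itself):

```python
import re

# Rough regex equivalent of the cucumber expression
# "I have {int} cukes in my belly"; {int} matches an optionally signed integer.
CUKES_RE = re.compile(r"I have (-?\d+) cukes in my belly")

match = CUKES_RE.fullmatch("I have 42 cukes in my belly")
assert match is not None
# The library would additionally convert the capture to int for the step.
assert int(match.group(1)) == 42
```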

    opened by elchupanebrej 0
Releases(6.1.1)
  • 6.1.1(Nov 9, 2022)

  • 6.0.0(Jul 5, 2022)

    This release introduces breaking changes in order to be more in line with the official gherkin specification.

    • Cleanup of the documentation and tests related to parametrization (elchupanebrej) https://github.com/pytest-dev/pytest-bdd/pull/469
    • Removed feature level examples for the gherkin compatibility (olegpidsadnyi) https://github.com/pytest-dev/pytest-bdd/pull/490
    • Removed vertical examples for the gherkin compatibility (olegpidsadnyi) https://github.com/pytest-dev/pytest-bdd/pull/492
    • Step arguments are no longer fixtures (olegpidsadnyi) https://github.com/pytest-dev/pytest-bdd/pull/493
    • Drop support of python 3.6, pytest 4 (elchupanebrej) https://github.com/pytest-dev/pytest-bdd/pull/495 https://github.com/pytest-dev/pytest-bdd/pull/504
    • Step definitions can have "yield" statements again (4.0 release broke it). They will be executed as normal fixtures: code after the yield is executed during teardown of the test. (youtux) https://github.com/pytest-dev/pytest-bdd/pull/503
    • Scenario outlines unused example parameter validation is removed (olegpidsadnyi) https://github.com/pytest-dev/pytest-bdd/pull/499
    • Add type annotations (youtux) https://github.com/pytest-dev/pytest-bdd/pull/505
    • pytest_bdd.parsers.StepParser now is an Abstract Base Class. Subclasses must make sure to implement the abstract methods. (youtux) https://github.com/pytest-dev/pytest-bdd/pull/505
    • Angular brackets in step definitions are only parsed in "Scenario Outline" (previously they were parsed also in normal "Scenario"s) (youtux) https://github.com/pytest-dev/pytest-bdd/pull/524.
  • 5.0.0(Oct 25, 2021)

    • Rewrite the logic to parse Examples for Scenario Outlines. Now the substitution of the examples is done during the parsing of Gherkin feature files. You won't need to define the steps twice like @given("there are <start> cucumbers") and @given(parsers.parse("there are {start} cucumbers")). The latter will be enough.
    • Removed example_converters from scenario(...) signature. You should now use just the converters parameter for given, when, then.
    • Removed --cucumberjson-expanded and --cucumber-json-expanded options. Now the JSON report is always expanded.
    • Removed --gherkin-terminal-reporter-expanded option. Now the terminal report is always expanded.