pytest test reporting

If you are not using pytest, see the Python docs for information about integrating with a lower-level library.

Installation

First install tesults:

pip install tesults

Then install the pytest-tesults plugin, which depends on the tesults package above:

pip install pytest-tesults

Configuration

tesults-target (Required)

You must provide your target token to push results to Tesults. If this arg is not provided, the pytest-tesults plugin does not attempt an upload, effectively disabling it. You receive your target token when you create a project or target, and you can regenerate one at any time from the configuration menu.

Inline method (Command)

pytest --tesults-target eyJ0eXAiOiJ...

In this case, the target token is supplied inline in the command-line args. This is the simplest approach.

Key method (Configuration File)

pytest --tesults-target target1

In this case, the pytest-tesults plugin automatically looks up the value of the token from a configuration file based on the key provided. Specifically, the standard pytest configuration files are checked in this order: pytest.ini, pyproject.toml, tox.ini, and setup.cfg. Note that pytest.ini is always used if found; tox.ini must contain a [pytest] section, setup.cfg must contain a [tool:pytest] section, and pyproject.toml must contain a [tool.pytest.ini_options] section. This is standard pytest configuration behavior.

Here are examples with each type of configuration file:

pytest.ini

[tesults]
target1 = eyJ0eXAiOiJ...
target2 = ...
target3 = ...
target4 = ...

You may want to be more descriptive about what each target corresponds to:

[tesults]
web-qa-env = eyJ0eXAiOiJ...
web-staging-env = ...
web-prod-env = ...

ios-qa-env = eyJ0eXAiOiK...
ios-staging-env = ...
ios-prod-env = ...

android-qa-env = eyJ0eXAiOiL...
android-staging-env = ...
android-prod-env = ...

pyproject.toml

# pyproject.toml
[tool.pytest.ini_options]
minversion = "6.0"
addopts = "-ra -q"
testpaths = [
  "tests",
  "integration",
]

[tesults]
target1 = "eyJ0eXAiOiJ..."
target2 = "eyJ0eXAiOiK..."

tox.ini

[pytest]

[tesults]
target1 = eyJ0eXAiOiJ...
target2 = eyJ0eXAiOiK...

setup.cfg

[tool:pytest]

[tesults]
target1 = eyJ0eXAiOiJ...
target2 = eyJ0eXAiOiK...


Basic configuration complete

At this point the pytest-tesults plugin has self-registered and will push results to Tesults when you run your pytest command and supply the tesults-target arg.

Enhanced reporting

file

Use Tesults to report additional properties for each test case. To upload files generated during a test case, such as logs or screenshots, import the file function in your test file:

from pytest_tesults import file

Associate a file with a test case in order to upload it for reporting. Use the built-in pytest request fixture and pass it into the file function as the first parameter. The second parameter is the full path to the generated file to upload for the test case.

def test1(request):
  file(request, '/full/path/to/file')

Example:

import pytest
from pytest_tesults import file

pytestmark = pytest.mark.suite("test_suite_a")
@pytest.mark.description("test 1 description")
def test1(request):
  file(request, '/Users/admin/log.txt')
  file(request, '/Users/admin/screenshot.png')
  assert 5 == 5
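The paths passed to file must exist at the time the test runs, so a common pattern is to write artifacts to a per-test temporary directory first and then attach them. A standard-library-only sketch (the save_test_log helper is hypothetical; the attach step is shown as a comment because it requires the plugin):

```python
import os
import tempfile

# Hypothetical helper: write a per-test log under the system temp directory
# and return the full path, which you would then pass to file(request, path).
def save_test_log(test_name, lines):
    out_dir = os.path.join(tempfile.gettempdir(), test_name)
    os.makedirs(out_dir, exist_ok=True)
    log_path = os.path.join(out_dir, "test.log")
    with open(log_path, "w") as f:
        f.write("\n".join(lines))
    return log_path

# Inside a test:
#   log_path = save_test_log("test1", ["step 1 ok", "step 2 ok"])
#   file(request, log_path)
```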

Files generated by tests

tesults-files (Optional)

This method of uploading files is no longer recommended as of pytest-tesults 1.6.0. If you are using pytest-tesults 1.6.0 or newer, use the file function described above to simplify uploading files from tests.

If you want to save files such as logs and screen captures, provide the absolute path to the top-level directory where this data is saved for the running test run. Files, including logs, screen captures and other artifacts will be automatically pushed to Tesults.

pytest --tesults-files /Users/admin/Desktop/temporary

This is one area where the pytest-tesults plugin is opinionated and requires that files generated during a test run be saved locally temporarily within a specific directory structure.

Store all files in a temporary directory as your tests run. After Tesults upload is complete, delete the temporary directory or just have it overwritten on the next test run.

Please see the markers section for more details on how to provide a suite name and also take a look at the tesults-nosuites flag below. The default behavior of the pytest-tesults plugin is to set the module name as the test suite if a test suite is not explicitly provided using a marker. Also be aware that if providing build files, the build suite is always set to [build] and files are expected to be located in temporary/[build]/buildname

Caution: if uploading files, the time taken to upload depends entirely on your network speed. Typical office upload speeds of 100 - 1000 Mbps should allow even hundreds of files to upload within a few seconds, but on a slower connection it may take hours. We recommend uploading a reasonable number of files for each test case. The upload method blocks at the end of a test run while test results and files upload. When starting out, test without files first to ensure everything is set up correctly.

During a test run, save test generated files such as logs and screenshots to a local temporary directory. At the end of the test run all files will automatically be saved to Tesults as long as you save files in the directory structure below. Omit test suite folders if not using test suites.
  • temporary folder
    • Test Suite A
      • Test 1
        • test.log
        • screenshot.png
      • Test 2
        • test.log
        • screenshot.png
    • Test Suite B
      • Test 3
        • metrics.csv
        • error.log
      • Test 4
        • test.log
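The layout above can be produced programmatically as tests run. A minimal sketch using only the standard library; the artifact_path helper is hypothetical, and the directory names simply mirror the example tree:

```python
import os

# Hypothetical helper mirroring the tree above: create
# <root>/<suite>/<test>/ if needed and return the artifact's full path.
def artifact_path(root, suite, test, filename):
    directory = os.path.join(root, suite, test)
    os.makedirs(directory, exist_ok=True)
    return os.path.join(directory, filename)

# Usage: a test in Test Suite A writes its screenshot to
# artifact_path(root, "Test Suite A", "Test 1", "screenshot.png")
```

If you are not using test suites, drop the suite level and place the per-test folders directly under the temporary folder.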

Build

tesults-build-name (Optional)

Use this to report a build version or name for reporting purposes.

pytest --tesults-build-name 1.0.0

tesults-build-result (Optional)

Use this to report the build result; it must be one of [pass, fail, unknown].

pytest --tesults-build-result pass

tesults-build-description (Optional)

Use this to report a build description for reporting purposes.

pytest --tesults-build-description 'added new feature'

tesults-build-reason (Optional)

Use this to report a build failure reason.

pytest --tesults-build-reason 'build error line 201 somefile.py'
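In a CI pipeline these build flags are typically assembled from environment variables. A sketch that only constructs the command list; the variable names BUILD_NAME and BUILD_RESULT are assumptions about your CI setup, not names the plugin defines:

```python
import os

# Sketch: assemble a pytest invocation with optional Tesults build flags.
# BUILD_NAME and BUILD_RESULT are assumed CI variable names, not plugin ones.
def build_pytest_command(target):
    cmd = ["pytest", "--tesults-target", target]
    name = os.environ.get("BUILD_NAME")
    result = os.environ.get("BUILD_RESULT")
    if name:
        cmd += ["--tesults-build-name", name]
    if result in ("pass", "fail", "unknown"):  # the only valid values
        cmd += ["--tesults-build-result", result]
    return cmd
```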

No Test Suites


tesults-nosuitesOptional

Use this flag to stop the automatic setting of the module name as the test suite when a suite is not explicitly supplied using @pytest.mark.suite("suite name here").

pytest --tesults-nosuites

tesults-save-stdout (Optional)

Use this flag to save any print output (using Python's print function) to stdout within test cases to a file and upload that file automatically as part of the results.

pytest --tesults-save-stdout

Markers

suite

Set a suite name either for a specific test case or a whole module. If a suite is not provided the pytest-tesults plugin automatically sets the module name as the suite. This behavior can be disabled using the tesults-nosuites flag.

@pytest.mark.suite("suite name here")

description

To have a test description for reporting, decorate your test functions in this way:

@pytest.mark.description("test description")

parametrize

To have the test parameters picked up for reporting, decorate your test functions in this way:

@pytest.mark.parametrize(("test_input","expected"), [("3+5", 8),("2+4", 6),("6*9", 42),])
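Each parameter tuple above is reported as a separate test case. A standard-library-only illustration of the same data, showing why the third case would fail (6*9 is 54, not 42); this is not plugin code:

```python
# The same parameter sets as the marker above; eval() stands in for a test
# body that would assert eval(test_input) == expected.
params = [("3+5", 8), ("2+4", 6), ("6*9", 42)]
outcomes = {test_input: eval(test_input) == expected
            for test_input, expected in params}
# "3+5" and "2+4" pass; "6*9" fails because 6*9 evaluates to 54
```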

custom markers

If you are using pytest 4, any custom fields can be added as custom markers.

Example 1

@pytest.mark.custommarker("this is a custom marker")

Example 2

@pytest.mark.anotherone("this is another custom field")

Result Interpretation

Result interpretation is supported for pytest. Install pytest-tesults 1.4.0 or newer. If you use result interpretation, we recommend you add these minimum mapping values:

passed → pass
failed → fail

Consolidating parallel test runs

If you execute multiple test runs in parallel or serially for the same build or release and results are submitted to Tesults within each run, separately, you will find that multiple test runs are generated on Tesults. This is because the default behavior on Tesults is to treat each results submission as a separate test run. This behavior can be changed from the configuration menu. Click 'Results Consolidation By Build' from the Configure Project menu to enable and disable consolidation by target. Enabling consolidation will mean that multiple test runs submitted with the same build name will be consolidated into a single test run.

Dynamically created test cases

If you dynamically create test cases, such as test cases with variable values, we recommend that the test suite and test case names themselves be static. Provide the variable data information in the test case description or other custom fields but try to keep the test suite and test name static. If you change your test suite or test name on every test run you will not benefit from a range of features Tesults has to offer including test case failure assignment and historical results analysis. You need not make your tests any less dynamic, variable values can still be reported within test case details.

Proxy servers

Does your corporate/office network run behind a proxy server? Contact us and we will supply you with a custom API library for this case. Without it, results will fail to upload to Tesults.

Have questions or need help? Contact us