TestCafe test reporting

Installation

Tesults provides a reporter for TestCafe to make integration easy. This guide assumes you already have TestCafe set up.

Within your TestCafe project, install using npm:

npm install testcafe-reporter-tesults

TestCafe 1.3.0 or higher is required. Please ensure you are using a compatible version.

Configuration

Given a typical TestCafe test case like this:

import { Selector } from 'testcafe';

fixture `Getting Started` // declare the fixture
  .page `https://devexpress.github.io/testcafe/example`;

test
  .meta({ description: 'test description here', key1: 'value1', key2: 'value2' })
  ('My first test', async t => {
    await t
      .typeText('#developer-name', 'John Smith')
      .takeScreenshot()
      .click('#submit-button')

      // Use the assertion to check if the actual header text is equal to the expected one
      .expect(Selector('#article-header').innerText).eql('Thank you, John Smith!');
  });

The fixture, 'Getting Started' in this case, is treated as the test suite for reporting purposes for all test cases contained within it. 'My first test' would be the test name.

Providing a field called description in the meta data, as above, causes Tesults to report it as the test case description. Any other custom fields can be passed through as meta data in the same way as key1 and key2 above.

tesults-target

Required to upload to Tesults. If this arg is not provided, the Tesults reporter does not attempt upload, effectively disabling it. Get your target token from the configuration menu in Tesults.

Inline method

testcafe -r tesults -- tesults-target=eyJ0eXAiOiJ...

In this case, the target token is supplied inline in the command line args. This is the simplest approach.

Key method

testcafe -r tesults -- tesults-target=target1 tesults-config=config.json

In this case, testcafe-reporter-tesults automatically looks up the value of the token from the configuration file, based on the property name provided.

Here is the corresponding config.json file:

{
  "target1": "eyJ0eXAiOiJ...",
  "target2": "eyJ0eXAiOiK...",
  "target3": "eyJ0eXAiOiL...",
  "target4": "eyJ0eXAiOiM..."
}

Using more descriptive names for the targets is better:

{
  "chrome-qa-env": "eyJ0eXAiOiJ...",
  "chrome-staging-env": "eyJ0eXAiOiK...",
  "chrome-prod-env": "eyJ0eXAiOiL...",
  "firefox-qa-env": "eyJ0eXAiOiM...",
  "firefox-staging-env": "eyJ0eXAiOiN...",
  "firefox-prod-env": "eyJ0eXAiOiO...",
  "safari-qa-env": "eyJ0eXAiOiP...",
  "safari-staging-env": "eyJ0eXAiOiQ...",
  "safari-prod-env": "eyJ0eXAiOiR..."
}

Putting it together, a real command to run all tests in the tests dir on the chrome browser in the qa env might look like this:

testcafe chrome tests -r tesults -- tesults-target=chrome-qa-env tesults-config=config.json

tesults-config

Provide the path, including the file name, to a .json file. All of the args below can be provided in a config file instead of on the testcafe command line.

testcafe -r tesults -- tesults-config=/path/config.json

An example of a config file:

{
  "chrome-web-qa": "eyJ0eXAiOiJ...",
  "chrome-web-staging": "eyJ0eXAiOiK...",
  "chrome-web-prod": "eyJ0eXAiOiL...",
  "tesults-files": "/paths/files",
  "tesults-build-name": "1.0.0",
  "tesults-build-desc": "Build description",
  "tesults-build-result": "pass"
}

Basic configuration complete

At this point, the reporter will push results to Tesults when you run your testcafe test command, as long as the Tesults reporter is passed through to TestCafe the same way as any other reporter (with the -r option), along with your Tesults target token:

testcafe -r tesults -- tesults-target=token

The token should be replaced with the token for your project target. You can regenerate a token at any time from the config menu.

Using the programming interface for TestCafe?

If you use the programming interface for TestCafe rather than the command line interface, pass in Tesults arguments using process.argv:

const createTestCafe = require('testcafe');
let testcafe = null;

// Pass Tesults args through process.argv before creating the runner
process.argv.push('tesults-target=token');
process.argv.push('tesults-build-name=1.0.0');
process.argv.push('tesults-build-result=pass');

createTestCafe('localhost', 1337, 1338)
  .then(tc => {
    testcafe = tc;
    const runner = testcafe.createRunner();
    return runner
      .src('tests/myFixture.js')
      .browsers('chrome')
      .reporter('tesults')
      .run();
  })
  .then(failedCount => {
    /* ... */
    testcafe.close();
  })
  .catch(error => {
    /* ... */
    testcafe.close();
  });

Enhanced reporting

Use the Tesults reporter to report additional properties for each test case. Enhanced reporting functions are supported in testcafe-reporter-tesults version 1.2.0 and higher. To begin, require the reporter in your test file:

const tesults = require('testcafe-reporter-tesults')();
// Note that the function execution () above is required.

Report custom fields, test steps, and attached files for test cases using the Tesults reporter functions: description, custom, step, and file. Call the reporter functions from within your tests (the test block). The first parameter is always the test controller, which provides the reporter with context about the test.

description

Add a description of the test case for reporting purposes. We recommend using the meta description for this, but you can also use the Tesults reporter as demonstrated here.

import { Selector } from 'testcafe';
const tesults = require('testcafe-reporter-tesults')();

fixture `Getting Started` // declare the fixture
  .page `https://devexpress.github.io/testcafe/example`;

test
  .meta({ description: 'Recommended way to set description', key1: 'value1', key2: 'value2' })
  ('My first test', async t => {
    tesults.description(t, "An alternative way to set description");
    await t
      .typeText('#developer-name', 'John Smith')
      .takeScreenshot()
      .click('#submit-button')

      // Use the assertion to check if the actual header text is equal to the expected one
      .expect(Selector('#article-header').innerText).eql('Thank you, John Smith!');
  });

custom

Add a custom field and value to the test case for reporting purposes. These fields and values can be set using meta data, just as with the description. Alternatively, use the Tesults reporter as demonstrated here.

import { Selector } from 'testcafe';
const tesults = require('testcafe-reporter-tesults')();

fixture `Getting Started` // declare the fixture
  .page `https://devexpress.github.io/testcafe/example`;

test
  .meta({ description: 'Description', custom_field_1: 'custom_value_1' })
  ('My first test', async t => {
    tesults.custom(t, "Another custom field", "Custom value");
    await t
      .typeText('#developer-name', 'John Smith')
      .takeScreenshot()
      .click('#submit-button')

      // Use the assertion to check if the actual header text is equal to the expected one
      .expect(Selector('#article-header').innerText).eql('Thank you, John Smith!');
  });

file

Associate a file with the test case in order to upload it for reporting.

import { Selector } from 'testcafe';
const tesults = require('testcafe-reporter-tesults')();

fixture `Getting Started` // declare the fixture
  .page `https://devexpress.github.io/testcafe/example`;

test
  .meta({ description: 'Description', custom_field_1: 'custom_value_1' })
  ('My first test', async t => {
    tesults.file(t, "/full/path/to/file/log.txt");
    await t
      .typeText('#developer-name', 'John Smith')
      .takeScreenshot()
      .click('#submit-button')

      // Use the assertion to check if the actual header text is equal to the expected one
      .expect(Selector('#article-header').innerText).eql('Thank you, John Smith!');
  });

Caution: If uploading files, the time taken to upload is entirely dependent on your network speed. Typical office upload speeds of 100 - 1000 Mbps should allow even hundreds of files to upload in just a few seconds, but if you have slower access it may take hours. We recommend uploading a reasonable number of files for each test case. The upload method blocks at the end of a test run while uploading test results and files. When starting out, test without files first to ensure everything is set up correctly.

step

Add test steps to the test case for reporting. Each step is an object with a name and a result (one of [pass, fail, unknown]). You can also add the optional fields desc (a step description) and reason (the failure reason, in case of step failure).

import { Selector } from 'testcafe';
const tesults = require('testcafe-reporter-tesults')();

fixture `Getting Started` // declare the fixture
  .page `https://devexpress.github.io/testcafe/example`;

test
  .meta({ description: 'Description', custom_field_1: 'custom_value_1' })
  ('My first test', async t => {
    tesults.step(t, {
      name: "First step",
      result: "pass"
    });
    tesults.step(t, {
      name: "Second step",
      desc: "Second step description",
      result: "fail",
      reason: "Error line 203 of test.js"
    });
    await t
      .typeText('#developer-name', 'John Smith')
      .takeScreenshot()
      .click('#submit-button')

      // Use the assertion to check if the actual header text is equal to the expected one
      .expect(Selector('#article-header').innerText).eql('Thank you, John Smith!');
  });

Files generated by tests

tesults-files=path

If you save logs or other files during your test run, include this option to have files uploaded. If you use the TestCafe takeScreenshot() function to take screenshots, those files are uploaded automatically and there is no need to include this option, but for all other files this option should be included.

Replace path with the top-level directory where files generated during testing are saved for the current test run. Files, including logs, screen captures, and other artifacts, will be automatically uploaded.

testcafe -r tesults -- tesults-files=/path/files

This is one area where the Tesults reporter is opinionated: it requires that files generated during a test run be saved locally, temporarily, within a specific directory structure.

Store all files in a temporary directory as your tests run. After Tesults upload is complete, delete the temporary directory or simply have it overwritten on the next test run.

Within this temporary directory, create subdirectories matching the test suite (the fixture in TestCafe) and the test case name.

Also be aware that if providing build files, the build suite is always set to [build] and files are expected to be located in temporary/[build]/buildname.


During a test run, save test-generated files such as logs and screenshots to a local temporary directory. At the end of the test run, all files will automatically be saved to Tesults, as long as you save files in the directory structure below. Omit test suite folders if not using test suites.
  • temporary folder
    • Test Suite A
      • Test 1
        • test.log
        • screenshot.png
      • Test 2
        • test.log
        • screenshot.png
    • Test Suite B
      • Test 3
        • metrics.csv
        • error.log
      • Test 4
        • test.log
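
As a sketch of what this looks like in practice, the helper below builds the temporary/fixture/test path; fileForTest is a hypothetical name, not part of the Tesults reporter, and the fixture and test names passed to it must match the names used in your tests:

const fs = require('fs');
const path = require('path');

// Hypothetical helper: ensures temporary/<fixture>/<test> exists
// and returns a full path for fileName inside it
function fileForTest(fixtureName, testName, fileName) {
  const dir = path.join('temporary', fixtureName, testName);
  fs.mkdirSync(dir, { recursive: true });
  return path.join(dir, fileName);
}

fixture `Getting Started`
  .page `https://devexpress.github.io/testcafe/example`;

test('My first test', async t => {
  // Saves to temporary/Getting Started/My first test/test.log, which the
  // reporter picks up when tesults-files points at the temporary directory
  fs.writeFileSync(fileForTest('Getting Started', 'My first test', 'test.log'), 'log output here');
  await t.typeText('#developer-name', 'John Smith');
});

Then run with tesults-files pointing at the temporary directory:

testcafe chrome tests -r tesults -- tesults-target=token tesults-files=/full/path/to/temporary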

Build

tesults-build-name=buildname

Use this to report a build version or name for reporting purposes.

testcafe -r tesults -- tesults-build-name=1.0.0

tesults-build-result=result

Use this to report the build result. It must be one of [pass, fail, unknown].

testcafe -r tesults -- tesults-build-result=pass

tesults-build-desc=description

Use this to report a build description for reporting purposes.

testcafe -r tesults -- tesults-build-desc='added new feature'

tesults-build-reason=reason

Use this to report a build failure reason.

testcafe -r tesults -- tesults-build-reason='build error line 201 somefile.py'

Result Interpretation

Result interpretation is not currently supported by this integration. If you are interested in support please contact help@tesults.com.

Consolidating parallel test runs

If you execute multiple test runs, in parallel or serially, for the same build or release, and results are submitted to Tesults separately within each run, you will find that multiple test runs are generated on Tesults. This is because the default behavior on Tesults is to treat each results submission as a separate test run. This behavior can be changed from the configuration menu: click 'Results Consolidation By Build' from the Configure Project menu to enable or disable consolidation by target. Enabling consolidation means that multiple test runs submitted with the same build name are consolidated into a single test run.
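
For example, with consolidation enabled, two runs executed in parallel against different targets but sharing the same build name (the target names here reuse the earlier config.json example) report into a single consolidated run:

testcafe chrome tests -r tesults -- tesults-target=chrome-qa-env tesults-config=config.json tesults-build-name=1.0.0
testcafe firefox tests -r tesults -- tesults-target=firefox-qa-env tesults-config=config.json tesults-build-name=1.0.0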

Dynamically created test cases

If you dynamically create test cases, such as test cases with variable values, we recommend that the test suite and test case names themselves be static. Provide the variable data in the test case description or other custom fields, but keep the test suite and test names static. If you change your test suite or test name on every run, you will not benefit from a range of features Tesults offers, including test case failure assignment and historical results analysis. You need not make your tests any less dynamic; variable values can still be reported within test case details, as in the sketch below.
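
As a minimal sketch, assuming a value that changes on every run (the generated user name below is illustrative), keep the test name static and report the value through a custom field instead:

const tesults = require('testcafe-reporter-tesults')();

fixture `Getting Started`
  .page `https://devexpress.github.io/testcafe/example`;

// A value that varies between runs
const userName = `user-${Date.now()}`;

test('Sign up', async t => { // static test name, not `Sign up ${userName}`
  // Report the run-specific value as a custom field rather than in the test name
  tesults.custom(t, "Generated user", userName);
  await t.typeText('#developer-name', userName);
});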

Proxy servers

Does your corporate/office network run behind a proxy server? Contact us and we will supply you with a custom API library for this case. Without it, results will fail to upload to Tesults.

Have questions or need help? Contact us