Swift test reporting

note
Using XCTest/XCUITest? It is strongly recommended that you use the tesults-xctest-observer package for reporting results to Tesults. The tesults-xctest-observer makes it possible to report results with minimal configuration and no code. This page documents a lower-level Swift package intended for test framework authors. If you are using a custom test framework, the documentation here may be useful to you.

Installation

Add the tesults-swift package as a dependency to your project. Search for tesults-swift in Xcode to find the package.

tesults-swift-package

Ensure you install version 1.0.7 or higher. You can verify the version you have installed by checking package dependencies.

tesults-swift-package-dependency

Package.swift manifest:

tesults-swift-package-dependency-check
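If you manage dependencies through a Package.swift manifest rather than the Xcode UI, the entry looks like the following sketch. The repository URL below is an assumption based on the package name; confirm it against the package listing before use:

```swift
// swift-tools-version:5.5
import PackageDescription

let package = Package(
    name: "YourPackage", // illustrative package name
    dependencies: [
        // Repository URL assumed from the package name; verify before use
        .package(url: "https://github.com/tesults/tesults-swift.git", from: "1.0.7")
    ],
    targets: [
        .testTarget(
            name: "YourPackageTests", // illustrative target name
            dependencies: [.product(name: "tesults", package: "tesults-swift")]
        )
    ]
)
```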

If you receive a build error about missing dependencies for tests, ensure the Tesults library is added to the Link Binary With Libraries section of Build Phases for the target:

link-tesults

Configuration

Import the package in any file where you need it:

import tesults

Usage

Upload results using the results function of the Tesults struct. This is the single point of contact between your code and Tesults.

Tesults().results(data: data)

The results function is async, so call it with await. It returns a ResultsResponse struct with four properties indicating success or failure:

print("Tesults results upload...")
let resultsResponse = await Tesults().results(data: data)
print("Success: \(resultsResponse.success)")
print("Message: \(resultsResponse.message)")
print("Warnings: \(resultsResponse.warnings.count)")
print("Errors: \(resultsResponse.errors.count)")
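If you need to call the async results function from a synchronous context (for example, a callback in a custom test framework), one option is to bridge with a Task and a semaphore. A minimal sketch, where the helper name is illustrative and not part of the tesults API:

```swift
import Foundation
import tesults

// Illustrative helper, not part of the tesults package.
// Blocks the calling thread until the async upload completes.
// Avoid calling this from the main thread.
func uploadResultsBlocking(data: [String: Any]) {
    let semaphore = DispatchSemaphore(value: 0)
    Task {
        let response = await Tesults().results(data: data)
        print("Success: \(response.success)")
        print("Message: \(response.message)")
        semaphore.signal()
    }
    semaphore.wait()
}
```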

The ResultsResponse properties are:

  • success (Bool): true if results were successfully uploaded, false otherwise.
  • message (String): if success is false, check message to see why the upload failed.
  • warnings ([String]): if the count is not zero, there may be issues with file uploads.
  • errors ([String]): if success is true, this will be empty.

The data param passed to results is a Dictionary<String, Any> containing your results data. Here is a complete example showing how to populate data with your build and test results and then upload it:

Complete example:
// Required imports:
import tesults

// Other imports:
import XCTest
@testable import app_under_test

// Dictionary to hold test cases data.
var cases : [Dictionary<String, Any>] = []

// Each test case is a Dictionary<String, Any> too. We use generic types rather than
// concrete helper classes so that if and when the Tesults service adds more fields
// you do not have to update the library.

// You would usually add test cases in a loop taking the results from the data objects
// in your build/test scripts. At a minimum you must add a name and result:

var testCase1 = Dictionary<String, Any>()
testCase1["name"] = "Test Case 1"
testCase1["result"] = "pass" // result value must be pass, fail or unknown

testCase1["suite"] = "Suite A" // Suite is usually the class name
testCase1["desc"] = "Test Case 1 description"
cases.append(testCase1)

var testCase2 = Dictionary<String, Any>()
testCase2["name"] = "Test 2"
testCase2["desc"] = "Test 2 description"
testCase2["suite"] = "Suite B"
testCase2["result"] = "pass"

// (Optional) Add start and end time for test (in milliseconds since epoch):
// In this example, end is set to 100 seconds later
testCase2["start"] = Date().timeIntervalSince1970*1000
testCase2["end"] = (Date().timeIntervalSince1970 + 100) * 1000

// An optional duration can also be set if a start or end time is unavailable
testCase2["duration"] = 100 * 1000

// (Optional) For a parameterized test case:
testCase2["params"] = ["Param 1" : "Param 1 Value", "Param 2" : "Param 2 Value"]

// (Optional) Custom fields start with an underscore:
testCase2["_Custom field"] = "Custom field value"

cases.append(testCase2)

var testCase3 = Dictionary<String, Any>()
testCase3["name"] = "Test 3"
testCase3["desc"] = "Test 3 description"
testCase3["suite"] = "Suite A"
testCase3["result"] = "fail"
testCase3["reason"] = "Assert fail in line 203 of example.swift." // Test failure reason

// (Optional) For uploading files:
testCase3["files"] = ["/full/path/to/file/log.txt", "/full/path/to/file/screenshot.png"]

// (Optional) For providing test steps for a test case:
testCase3["steps"] = [
  [
    "name":"Step 1",
    "desc":"Step 1 description",
    "result":"pass"
  ],
  [
    "name":"Step 2",
    "desc":"Step 2 description",
    "result":"fail",
    "reason":"Failure reason if result is a fail"
  ]
]

cases.append(testCase3)

// Prepare data for upload
var data = Dictionary<String, Any>()
data["target"] = "token" // The token for the target you wish to upload to
data["results"] = ["cases" : cases] // Test case data

// Upload
print("Tesults results upload...")
let resultsResponse = await Tesults().results(data: data)
print("Success: \(resultsResponse.success)")
print("Message: \(resultsResponse.message)")
print("Warnings: \(resultsResponse.warnings.count)")
print("Errors: \(resultsResponse.errors.count)")

The target value 'token' above should be replaced with your Tesults target token. If you have lost your token, you can regenerate one from the config menu.

Test case properties

This is a complete list of test case properties for reporting results. The required fields must have values otherwise upload will fail with an error message about missing fields.

  • name (required): Name of the test.
  • result (required): Result of the test. Must be one of: pass, fail, unknown. Set to 'pass' for a test that passed, 'fail' for a failure.
  • suite: Suite the test belongs to. This is a way to group tests.
  • desc: Description of the test.
  • reason: Reason for the test failure. Leave this empty or omit it if the test passed.
  • params: Parameters of the test if it is a parameterized test.
  • files: Files that belong to the test case, such as logs, screenshots, metrics and performance data.
  • steps: A list of test steps that constitute the actions of a test case.
  • start: Start time of the test case in milliseconds from Unix epoch.
  • end: End time of the test case in milliseconds from Unix epoch.
  • duration: Duration of the test case running time in milliseconds. There is no need to provide this if start and end are provided; it will be calculated automatically by Tesults.
  • _custom: Report any number of custom fields by adding a field name starting with an underscore ( _ ) followed by the field name.
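The required fields above can be checked locally before upload. Here is a small illustrative helper (not part of the tesults package) that builds a case dictionary and validates the result value:

```swift
// Illustrative helper, not part of the tesults package.
// Returns nil if result is not one of the accepted values.
func makeCase(name: String, result: String, suite: String? = nil,
              desc: String? = nil, reason: String? = nil) -> [String: Any]? {
    // result must be one of: pass, fail, unknown
    guard ["pass", "fail", "unknown"].contains(result) else { return nil }
    var testCase: [String: Any] = ["name": name, "result": result]
    if let suite = suite { testCase["suite"] = suite }
    if let desc = desc { testCase["desc"] = desc }
    if let reason = reason { testCase["reason"] = reason }
    return testCase
}
```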

Build properties

To report build information, add another case to the cases array with suite set to [build]. This is a complete list of build properties for reporting results. The required fields must have values, otherwise upload will fail with an error message about missing fields.

  • name (required): Name of the build, revision, version, or change list.
  • result (required): Result of the build. Must be one of: pass, fail, unknown. Use 'pass' for build success and 'fail' for build failure.
  • suite (required): Must be set to the value '[build]', otherwise it will be registered as a test case instead.
  • desc: Description of the build or changes.
  • reason: Reason for the build failure. Leave this empty or omit it if the build succeeded.
  • params: Build parameters or inputs if there are any.
  • files: Build files and artifacts such as logs.
  • start: Start time of the build in milliseconds from Unix epoch.
  • end: End time of the build in milliseconds from Unix epoch.
  • duration: Duration of the build time in milliseconds. There is no need to provide this if start and end are provided; it will be calculated automatically by Tesults.
  • _custom: Report any number of custom fields by adding a field name starting with an underscore ( _ ) followed by the field name.
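For example, a build entry added alongside test cases might look like the following (all values are illustrative):

```swift
var cases: [[String: Any]] = []

var buildCase = Dictionary<String, Any>()
buildCase["name"] = "1.0.0-build-45"     // illustrative build name or version
buildCase["result"] = "pass"             // pass, fail, or unknown
buildCase["suite"] = "[build]"           // required: marks this case as a build
buildCase["desc"] = "Release candidate build"
cases.append(buildCase)
```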

Consolidating parallel test runs

If you execute multiple test runs in parallel or serially for the same build or release, and each run submits results to Tesults separately, you will find that multiple test runs are generated on Tesults. This is because the default behavior on Tesults is to treat each results submission as a separate test run. This behavior can be changed from the configuration menu. Click 'Results Consolidation By Build' from the Configure Project menu to enable and disable consolidation by target. Enabling consolidation means that multiple test runs submitted with the same build name will be consolidated into a single test run.

Dynamically created test cases

If you dynamically create test cases, such as test cases with variable values, we recommend that the test suite and test case names themselves be static. Provide the variable data in the test case description, params, or other custom fields, but keep the test suite and test name static. If you change your test suite or test name on every test run, you will not benefit from a range of features Tesults has to offer, including test case failure assignment and historical results analysis. You need not make your tests any less dynamic; variable values can still be reported within test case details.
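For example, rather than embedding a variable value in the test name, keep the name static and report the value via params or a custom field. A sketch, where the field values are illustrative:

```swift
var dynamicCase = Dictionary<String, Any>()
// Static name: identical on every run, so history and failure assignment work
dynamicCase["name"] = "Login succeeds"
dynamicCase["suite"] = "Authentication"
dynamicCase["result"] = "pass"
// Variable data goes into params or custom fields instead of the name
dynamicCase["params"] = ["username": "user-1234"]  // illustrative parameter
dynamicCase["_Run environment"] = "staging"        // illustrative custom field
```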

Proxy servers

Does your corporate/office network run behind a proxy server? Contact us and we will supply you with a custom API library for this case. Without it, results will fail to upload to Tesults.

Have questions or need help? Contact us