C# test reporting

note
Using NUnit? If you use the NUnit test framework, no integration code is necessary. See the documentation for the Tesults NUnit extension and ignore the instructions below.
note
Using the Visual Studio Unit Testing Framework (MSTest)? See the documentation for integrating MSTest with Tesults and ignore the instructions below.

Installation

The package is hosted on the NuGet Gallery: https://www.nuget.org/packages/tesults/

Run this command in the Package Manager Console.

Install-Package tesults

Configuration

No additional configuration is required once the package has been installed.

Usage

Upload results using the Upload method in the Results class. This is the single point of contact between your code and Tesults.

Tesults.Results.Upload(data);

The Upload method returns a response indicating success or failure:

var response = Tesults.Results.Upload(data);

// The response value is a dictionary with four keys.

// Value for key "success" is a bool, true if successfully uploaded, false otherwise.
Console.WriteLine("Success: " + response["success"]);

// Value for key "message" is a string, useful to check if the upload was unsuccessful.
Console.WriteLine("Message: " + response["message"]);

// Value for key "warnings" is a List<string>; if non-empty there may be file upload issues.
Console.WriteLine("Warnings: " + ((List<string>)response["warnings"]).Count);

// Value for key "errors" is a List<string>; if success is true this will be empty.
Console.WriteLine("Errors: " + ((List<string>)response["errors"]).Count);
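The snippet above prints only the counts; when an upload fails it is usually more useful to print each message. A minimal sketch, assuming a response dictionary shaped as described above (the values here are made up for illustration):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical response shaped like the dictionary Upload returns.
var response = new Dictionary<string, object>
{
    { "success", false },
    { "message", "Upload failed" },
    { "warnings", new List<string>() },
    { "errors", new List<string> { "Invalid target token" } }
};

// Print every warning and error rather than just the counts.
if (!(bool)response["success"])
{
    Console.WriteLine("Message: " + response["message"]);
    foreach (var warning in (List<string>)response["warnings"])
        Console.WriteLine("Warning: " + warning);
    foreach (var error in (List<string>)response["errors"])
        Console.WriteLine("Error: " + error);
}
```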

The data param passed to Upload is a Dictionary<string, object> containing your results data. Here is a complete example showing how to populate data with your build and test results and complete the upload to Tesults with a call to Tesults.Results.Upload(data):

Complete example:
// Required namespaces:
// using System;
// using System.Collections.Generic;

// Create a list to hold your test case results.
var testCases = new List<Dictionary<string, object>>();

// Each test case is a dictionary. Usually you would
// create these in a loop from whatever data objects your
// test framework provides.
var testCase1 = new Dictionary<string, object>();
testCase1.Add("name", "Test 1");
testCase1.Add("desc", "Test 1 Description");
testCase1.Add("suite", "Suite A");
testCase1.Add("result", "pass");

testCases.Add(testCase1);

var testCase2 = new Dictionary<string, object>();
testCase2.Add("name", "Test 2");
testCase2.Add("desc", "Test 2 Description");
testCase2.Add("suite", "Suite A");
testCase2.Add("result", "pass");

// (Optional) For a parameterized test case:
var parameters = new Dictionary<string, object>();
parameters.Add("param1", "value1");
parameters.Add("param2", "value2");
testCase2.Add("params", parameters);

// (Optional) Custom fields start with an underscore:
testCase2.Add("_CustomField", "Custom field value");

testCases.Add(testCase2);

var testCase3 = new Dictionary<string, object>();
testCase3.Add("name", "Test 3");
testCase3.Add("desc", "Test 3 Description");
testCase3.Add("suite", "Suite B");
testCase3.Add("result", "fail");
testCase3.Add("reason", "Assert fail in line 203 of example.cs");

// (Optional) Add start and end time for test:
// In this example, start is offset to 60 seconds earlier;
// in practice set it to the current time when the test starts.
testCase3.Add("start", (DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond) - 60000);
testCase3.Add("end", DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond);

// Optional for file uploading
var files = new List<string>();
files.Add(@"full/path/to/file/log.txt");
files.Add(@"full/path/to/file/screencapture.png");
files.Add(@"full/path/to/file/metrics.xls");
testCase3.Add("files", files);

// Optional for adding test case steps
var steps = new List<Dictionary<string, object>>();

var step1 = new Dictionary<string, object>();
step1.Add("name", "Step 1");
step1.Add("desc", "Step 1 Description");
step1.Add("result", "pass");
steps.Add(step1);

var step2 = new Dictionary<string, object>();
step2.Add("name", "Step 2");
step2.Add("desc", "Step 2 Description");
step2.Add("result", "fail");
step2.Add("reason", "Assert fail in line 203 of example.cs");
steps.Add(step2);

testCase3.Add("steps", steps);

testCases.Add(testCase3);

// The results dictionary will contain the test cases.
var results = new Dictionary<string, object>();
results.Add("cases", testCases);

// Finally a dictionary to contain all of your results data.
var data = new Dictionary<string, object>();
data.Add("target", "token");
data.Add("results", results);

// Complete the upload.
var response = Tesults.Results.Upload(data);

// The response value is a dictionary with four keys.

// Value for key "success" is a bool, true if successfully uploaded, false otherwise.
Console.WriteLine("Success: " + response["success"]);

// Value for key "message" is a string, useful to check if the upload was unsuccessful.
Console.WriteLine("Message: " + response["message"]);

// Value for key "warnings" is a List<string>; if non-empty there may be file upload issues.
Console.WriteLine("Warnings: " + ((List<string>)response["warnings"]).Count);

// Value for key "errors" is a List<string>; if success is true this will be empty.
Console.WriteLine("Errors: " + ((List<string>)response["errors"]).Count);

The target value, 'token' in the example above, should be replaced with your target token. If you have lost your token you can regenerate one from the config menu. The cases list should contain your test cases. You would usually add these by looping through the test case objects you currently have in your build and test scripts.
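In practice the cases list is usually built in a loop over whatever result objects your test framework provides. A sketch under that assumption; the tuple type and the values in it are hypothetical stand-ins for your framework's own result data:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical framework results; substitute the objects your own
// build and test scripts already produce.
var frameworkResults = new List<(string Suite, string Name, bool Passed, string Failure)>
{
    ("Suite A", "Test 1", true, ""),
    ("Suite B", "Test 3", false, "Assert fail in line 203 of example.cs")
};

var testCases = new List<Dictionary<string, object>>();
foreach (var r in frameworkResults)
{
    var testCase = new Dictionary<string, object>();
    testCase.Add("name", r.Name);
    testCase.Add("suite", r.Suite);
    testCase.Add("result", r.Passed ? "pass" : "fail");
    if (!r.Passed)
        testCase.Add("reason", r.Failure); // only include reason for failures
    testCases.Add(testCase);
}
```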

The API library makes use of generics rather than providing classes with specific properties so that the package does not require updating often as and when the Tesults service adds fields.

Test case properties

This is a complete list of test case properties for reporting results. The required fields must have values otherwise upload will fail with an error message about missing fields.

Property | Required | Description
name | Yes | Name of the test.
result | Yes | Result of the test. Must be one of: pass, fail, unknown. Set to 'pass' for a test that passed, 'fail' for a failure.
suite | No | Suite the test belongs to. This is a way to group tests.
desc | No | Description of the test.
reason | No | Reason for the test failure. Leave this empty or omit it if the test passed.
params | No | Parameters of the test if it is a parameterized test.
files | No | Files that belong to the test case, such as logs, screenshots, metrics and performance data.
steps | No | A list of test steps that constitute the actions of a test case.
start | No | Start time of the test case in milliseconds from Unix epoch.
end | No | End time of the test case in milliseconds from Unix epoch.
duration | No | Duration of the test case running time in milliseconds. There is no need to provide this if start and end are provided; it will be calculated automatically by Tesults.
rawResult | No | Report a result to use with the result interpretation feature. This can give you finer control over how to report result status values beyond the three Tesults core result values of pass, fail and unknown.
_custom | No | Report any number of custom fields. To report custom fields add a field name starting with an underscore ( _ ) followed by the field name.
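As an illustration of the last two properties, here is a sketch of a case reporting a non-core status via rawResult alongside a custom field; the status value 'blocked' and the field name are assumptions made for the example:

```csharp
using System;
using System.Collections.Generic;

var testCase = new Dictionary<string, object>();
testCase.Add("name", "Test 4");
testCase.Add("suite", "Suite B");
testCase.Add("result", "unknown");    // must still be one of: pass, fail, unknown
testCase.Add("rawResult", "blocked"); // interpreted via the result interpretation feature
testCase.Add("_Browser", "Chrome");   // custom field: underscore followed by the field name
```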

Build properties

To report build information, add another case to the cases array with suite set to [build]. This is a complete list of build properties for reporting results. The required fields must have values otherwise upload will fail with an error message about missing fields.

Property | Required | Description
name | Yes | Name of the build, revision, version, or change list.
result | Yes | Result of the build. Must be one of: pass, fail, unknown. Use 'pass' for build success and 'fail' for build failure.
suite | Yes | Must be set to the value '[build]', otherwise the case will be registered as a test case instead.
desc | No | Description of the build or changes.
reason | No | Reason for the build failure. Leave this empty or omit it if the build succeeded.
params | No | Build parameters or inputs if there are any.
files | No | Build files and artifacts such as logs.
start | No | Start time of the build in milliseconds from Unix epoch.
end | No | End time of the build in milliseconds from Unix epoch.
duration | No | Duration of the build in milliseconds. There is no need to provide this if start and end are provided; it will be calculated automatically by Tesults.
_custom | No | Report any number of custom fields. To report custom fields add a field name starting with an underscore ( _ ) followed by the field name.
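Putting the properties above together, a build entry is just one more case in the cases list with suite set to '[build]'; the build name below is a made-up example:

```csharp
using System;
using System.Collections.Generic;

var testCases = new List<Dictionary<string, object>>();

var buildCase = new Dictionary<string, object>();
buildCase.Add("name", "1.0.0-rc2"); // example build name, revision, or version
buildCase.Add("result", "pass");    // 'pass' for build success, 'fail' for build failure
buildCase.Add("suite", "[build]");  // marks this case as build information
buildCase.Add("desc", "Release candidate build");

testCases.Add(buildCase);
```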

Files generated by tests

The example above demonstrates how to upload files for each test case. In practice you would generate the list of file paths for each test case programmatically.

To make this process simpler we suggest writing a helper function that collects the files for each test case. This is easy to do if you follow a couple of simple conventions while testing.

1. Store all files in a temporary directory as your tests run. After Tesults upload is complete you can delete the temporary directory or overwrite it on the next test run.

2. Within this temporary directory create subdirectories for each test case so that files for each test case are easily mapped to a particular test case.

  • temporary folder
    • Test Suite A
      • Test 1
        • test.log
        • screenshot.png
      • Test 2
        • test.log
        • screenshot.png
    • Test Suite B
      • Test 3
        • metrics.csv
        • error.log
      • Test 4
        • test.log
Then all your helper function needs to do is take the test name and/or suite as parameters and return an array of files for that particular test case.

using System;
using System.Collections.Generic;
using System.IO;

static List<string> filesForTest(string suite, string name) {
  var tempDir = @"C:\temp-files-dir";
  var filePaths = new List<string>();
  var dir = Path.Combine(tempDir, suite, name);
  try
  {
    var files = Directory.GetFiles(dir);
    foreach(var file in files)
    {
      filePaths.Add(file);
    }
  }
  catch (Exception)
  {
    // Test may not have files
  }
  return filePaths;
}

/*
var testCase1 = new Dictionary<string, object>();
testCase1.Add("name", "Test 1");
testCase1.Add("desc", "Test 1 Description");
testCase1.Add("suite", "Suite A");
testCase1.Add("result", "pass");
testCase1.Add("files", filesForTest("Suite A", "Test 1"));
*/
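Following convention 1 above, the temporary directory can be removed once the upload has completed. A minimal sketch, assuming the same directory path used by the helper:

```csharp
using System;
using System.IO;

var tempDir = @"C:\temp-files-dir"; // same assumed path as in filesForTest

// Delete the temporary files directory after a completed upload,
// or leave it in place to be overwritten on the next test run.
if (Directory.Exists(tempDir))
    Directory.Delete(tempDir, true); // true: delete recursively
```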

Caution: If uploading files, the time taken to upload depends entirely on your network speed. Typical office upload speeds of 100-1000 Mbps should allow even hundreds of files to upload in just a few seconds, but on a slower connection it may take much longer. We recommend uploading a reasonable number of files for each test case. The Upload method blocks at the end of a test run while test results and files upload. When starting out, test without files first to ensure everything is set up correctly.

Consolidating parallel test runs

If you execute multiple test runs in parallel, or serially for the same build or release, and each run submits results to Tesults separately, you will find that multiple test runs are generated on Tesults. This is because the default behavior on Tesults is to treat each results submission as a separate test run. This behavior can be changed from the configuration menu: click 'Results Consolidation By Build' in the Configure Project menu to enable or disable consolidation for a target. With consolidation enabled, multiple test runs submitted with the same build name are consolidated into a single test run.
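Concretely, consolidation keys on the build name, so each parallel run should report a [build] case with the same name; 'build-1024' is a made-up example:

```csharp
using System;
using System.Collections.Generic;

// Each parallel run reports a build case with the same build name, so that
// enabling 'Results Consolidation By Build' merges the runs into one.
Dictionary<string, object> BuildCase(string buildName)
{
    var b = new Dictionary<string, object>();
    b.Add("name", buildName);
    b.Add("result", "pass");
    b.Add("suite", "[build]");
    return b;
}

var run1Cases = new List<Dictionary<string, object>> { BuildCase("build-1024") };
var run2Cases = new List<Dictionary<string, object>> { BuildCase("build-1024") };
// Each run then adds its own test cases and uploads its own data dictionary.
```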

Dynamically created test cases

If you dynamically create test cases, such as test cases with variable values, we recommend that the test suite and test case names themselves be static. Provide the variable data in the test case description, params, or custom fields, but keep the test suite and test names static. If you change your test suite or test names on every test run you will not benefit from a range of features Tesults has to offer, including test case failure assignment and historical results analysis. You need not make your tests any less dynamic; variable values can still be reported within test case details.
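A sketch of this convention; the variable values below are invented for illustration:

```csharp
using System;
using System.Collections.Generic;

var testCase = new Dictionary<string, object>();
testCase.Add("name", "Login test");      // static across runs
testCase.Add("suite", "Authentication"); // static across runs
testCase.Add("result", "pass");

// Report run-specific values in params or custom fields instead.
var parameters = new Dictionary<string, object>();
parameters.Add("username", "user-4821");    // varies per run (example value)
testCase.Add("params", parameters);
testCase.Add("_Environment", "staging-03"); // varies per run (example value)
```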

Proxy servers

Does your corporate or office network run behind a proxy server? Contact us and we will supply you with a custom API library for this case. Without it, results will fail to upload to Tesults.

Have questions or need help? Contact us