Let's talk about what to consider when deciding which test automation reporting tool to use for your project.
First, a quick note on test automation. Test automation is critical for software engineering teams maintaining complex systems. Regression (breaking existing functionality) is not uncommon when implementing new changes or features, and automated tests make it possible to spot regressions efficiently. Beyond spotting regressions, automated tests enable far greater test coverage; more testing can be carried out by automation than by even an army of human testers. Testing programmatically is increasingly the only way to test systems with highly complex parallel test scenarios and large data sets.
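To make the regression-spotting idea concrete, here is a minimal sketch of an automated test. The function and its discount rule are purely hypothetical, invented for illustration:

```python
# Hypothetical pricing function under test.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage."""
    return round(price * (1 - percent / 100), 2)

def test_discount_regression():
    # These assertions pin down existing behavior. If a future change
    # breaks any of them, the automated run flags the regression
    # immediately, without a human re-checking each case.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
    assert apply_discount(50.0, 100) == 0.0

test_discount_regression()
print("regression checks passed")
```

A test runner would execute hundreds or thousands of checks like these on every build, which is exactly the volume of output that makes reporting tooling worth considering.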
For any team maintaining systems that are mission critical to a business and taking quality seriously, test automation is important. Automated tests are sometimes run ad hoc by request, but usually teams set up jobs to run on a regular interval or run tests as part of their build and continuous integration process. Having a way to manage the test results data output by test automation systems then becomes necessary, and that is the point at which you need to start considering reporting tools. If you don't have any automated tests yet, it's probably best to create a few tests first and then start looking into test automation reporting dashboards and tools.
The obvious question to begin with is why use any test automation reporting tool at all. Why not just log test results to the console and be done? This can be fine if your team is small, you don't have many tests, and you run tests only occasionally by triggering them manually. The problem with this approach becomes clear as a product's feature set grows, testing needs become increasingly sophisticated, and the number of test cases gets larger: parsing and making sense of the raw test output becomes increasingly challenging.
The scale of testing that automation makes possible means test results output can be large. Managing and analyzing large amounts of data generated by customer-facing software has become a specialized field, big data, with dedicated techniques developed in recent years. Large amounts of test automation data raise the same question: how will your team handle it? One solution is a test automation reporting tool.
Test automation reporting also serves as a feedback mechanism to your test developers and helps to keep tests up-to-date. Beyond scale and data processing, read on to understand what other benefits good test automation reporting tools provide and what to look for when considering a tool.
You'll need to decide whether you want your team to maintain the test automation reporting tool or use a cloud/web-based service. This is a choice development teams commonly have to make for many development tools including code repositories, continuous integration systems, application servers and test device farms. Tesults is a web-based tool; there is no maintenance required and updates are applied automatically. That makes it easy to utilize for resource constrained teams and efficient to run for larger organizations because engineers don't have to spend time maintaining the reporting system. Some teams will want an on-premise, self-hosted solution, in which case an alternative to Tesults is required. If you've got enough resources you could consider maintaining a custom internal solution, but for most teams and businesses this is unlikely to be cost effective.
The test cases your team maintains may change over time. Check that integrating with the reporting tool you are considering is straightforward not just for the programming language your team currently uses to write tests but also for languages and test frameworks they may start using soon. Some tools only provide APIs for C# or Java, for example; check that integration is easy for as many languages and frameworks as possible.
If you use a non-coding test automation platform where all testing is defined within the platform itself, perhaps via a UI, and a reporting dashboard is integrated with that tool then you may not have a need for a standalone test automation reporting tool. Such systems exist for some types of UI testing. However, most teams still write coded tests for flexibility, power and control over what the tests do and will have a variety of front-end and back-end end-to-end tests and internal component tests that would benefit from having a consolidated test reporting tool. Examine your needs and decide whether a standalone test automation reporting tool makes sense for your team.
Take a look at how a tool supports consolidation of results reporting from all of your test jobs and check that adding a new test job at a later time is easy to do. Since the test cases you maintain will likely increase over time as your system grows more sophisticated, your team will likely want to split test jobs up for specialization and add new ones for new features or products. The reporting tool should make it easy to add new test jobs and not constrain you with respect to how you want to set up reporting for different jobs. For example, Tesults provides Targets for this purpose and adding a new test job is as simple as generating a new token from the menu. Check that any alternative tool offers a similar mechanism. Consolidating test results output from front-end clients, back-end APIs and other areas in your system is important for quality analysis, and it should not be a big task to add additional test jobs as your team needs them.
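The per-job token idea can be sketched as follows. The payload shape and field names below are hypothetical, chosen only to illustrate how a token lets the reporting tool attribute results to the right job; a real integration would use the tool's own client library or documented upload format:

```python
# Hypothetical consolidation sketch: each test job packages its own
# results with its own token, so the reporting tool can group them
# under one project while keeping jobs separate.
def build_payload(job_token: str, cases: list) -> dict:
    """Package one job's test cases with its per-job token."""
    return {"target": job_token, "results": {"cases": cases}}

frontend = build_payload("token-frontend", [
    {"name": "login page renders", "suite": "UI", "result": "pass"},
])
backend = build_payload("token-backend", [
    {"name": "GET /users returns 200", "suite": "API", "result": "pass"},
    {"name": "POST /orders validates input", "suite": "API", "result": "fail"},
])

# Adding a new test job later is just one more token and one more payload.
for payload in (frontend, backend):
    print(payload["target"], len(payload["results"]["cases"]), "cases")
```

The design point is that the token, not the payload contents, identifies the job, so new jobs can be added without restructuring existing reporting.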
Making sense of large quantities of test data requires the ability to zoom out to see high level summary information and zoom in to see details like logs and failures for a specific test case. If you're a release manager reviewing results from several test jobs you probably want a high-level overview first to see which test job is highlighting failures and needs attention. From there it is useful to drill down to lists of test cases and then further to view details of an individual test case. Tesults provides the Dashboard, Results and Case view to handle these various levels of detail. Fortunately most test tools will provide similar views but do check to make sure you have the appropriate level of summary and detail that you require from the tooling you are considering.
There will be times when a comprehensive view of all of the test cases and suites within a test job is important. Tesults provides the Results view for this purpose. However, to save time when you have a large number of tests, automated analysis of the output becomes essential. The Supplemental view in Tesults carries out automatic analysis to enable an efficient review by the viewer. This includes automatically displaying new failures for the latest test run, failures that are carrying over from previous test runs, test cases that changed from failing to passing in the latest run, and suspected flaky test cases where results flip between success and failure often. Examine the tool you are assessing to see if it provides automated analysis, because without it your team will need to waste time on manual analysis.
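The kind of analysis described above can be sketched in a few lines. This is an illustrative toy, not how any particular tool implements it, and the flakiness threshold is an arbitrary assumption:

```python
# Sketch: compare the latest run against history to surface new
# failures, carried-over failures, fixes, and suspected flaky tests.
def analyze(history: dict) -> dict:
    """history maps test name -> list of results per run, oldest first."""
    report = {"new_failures": [], "carried_over": [], "fixed": [], "flaky": []}
    for name, results in history.items():
        latest, previous = results[-1], results[:-1]
        if latest == "fail":
            if previous and previous[-1] == "fail":
                report["carried_over"].append(name)   # still failing
            else:
                report["new_failures"].append(name)   # just started failing
        elif previous and previous[-1] == "fail":
            report["fixed"].append(name)              # failing -> passing
        # A result that flips often across runs suggests a flaky test.
        flips = sum(1 for a, b in zip(results, results[1:]) if a != b)
        if len(results) >= 4 and flips >= len(results) // 2:
            report["flaky"].append(name)
    return report

runs = {
    "checkout total": ["pass", "pass", "pass", "fail"],  # new failure
    "login redirect": ["fail", "fail", "fail", "fail"],  # carried over
    "search results": ["pass", "fail", "pass", "pass"],  # flips often
}
print(analyze(runs)["new_failures"])  # → ['checkout total']
```

Even this toy version shows why automated classification saves review time: a human no longer has to diff two long result lists by eye on every run.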
Test automation can generate artifacts such as screenshots, logs, and other files. Check how storage and data management are handled when comparing tools. Will your team be responsible for managing a database for this, or does the reporting software handle it? As a data point, Tesults stores and manages file data and test history. This can make it easy to identify when a regression was introduced and look back through logs, images and other data as necessary.
Take a look at how easy it is to share a test report with team members and stakeholders when comparing reporting tools. Ideally it is simple to add a team member to the reporting software and share links. You may want to check that the software supports SSO. Consider integrations with other applications as well. If your team primarily communicates via email then email notifications may be enough, but if your team is remote or distributed then Slack or MS Teams notifications may be preferred. If you plan to use your test automation reporting tool to monitor production systems as well then PagerDuty and other alerting integrations are also useful.
Reviewing test results and output is important. Actioning is the next step. Consider who will be responsible for investigating a test case that has started to fail in the latest build deployed to the test environment. If the person viewing the report is the same person who will be investigating the bug, which may be the case, the next steps are clear. Sometimes though, the team member who came across the failure for the first time in the report may not be the person who will be investigating. In that case it is useful to be able to assign the failing test case to someone else to look at and to be notified about it. If this sort of actioning flow would be useful to your team check that the reporting software you are considering supports it. Tesults uses Tasks for this purpose.
Consider how you will extract or export results from the reporting tool. Tesults provides an API for fetching results data. The main use case for this capability is programmatic analysis of test results to inform deployment decisions in a continuous integration or deployment system. Most reporting tools will have a solid user interface for analysis and viewing test results, and that will be the primary way teams interact with results. If your team has no need for anything beyond viewing results within the reporting tool then this will not be a major concern. If it is important that you can programmatically examine results data or send results data elsewhere, take a look at the APIs for the test automation reporting tool you are considering using.
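A deployment gate built on fetched results might look like the sketch below. In practice the case list would come from the reporting tool's API; here it is stubbed locally, and the pass-rate policy is an illustrative assumption rather than a recommendation:

```python
# Sketch of a CI gate that consumes results data programmatically.
def should_deploy(cases: list, min_pass_rate: float = 1.0) -> bool:
    """Return True when the passing fraction meets the required rate."""
    if not cases:
        return False  # no evidence of testing: block the deploy
    passed = sum(1 for c in cases if c["result"] == "pass")
    return passed / len(cases) >= min_pass_rate

# Stubbed results; a real pipeline would fetch these from the tool's API.
results = [
    {"name": "login", "result": "pass"},
    {"name": "checkout", "result": "pass"},
    {"name": "search", "result": "fail"},
]

print("deploy" if should_deploy(results) else "block")  # prints "block"
```

A CI step can run a script like this after the test job completes and fail the pipeline when the gate returns false, which is the scenario the API access is meant to enable.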
We are talking about test automation reporting tools, but most teams conduct some manual testing too. While it is not necessary for a test automation reporting tool to handle manual test case management, and you could use a separate tool for that, if your team is not already using something else, handling everything in one place can be useful. Check to see if the tool you are considering supports manual test case management. Tesults provides Lists for authoring and storing manual test cases, and Runs for executing manual test runs and assigning test cases to team members. Having one place for software engineers and QA testers to review test output may be useful when testing for a release requires examining both automated and manual test output.
While test data is unlikely to include highly sensitive data, such as real customer data in production databases, it should still be treated with the same level of security consideration. It is possible for attackers to find useful data within test output, and it is possible to leak information about unreleased products to external entities if you do not take security seriously. If your team is self-hosting a reporting tool, security will chiefly be the responsibility of the team. If you are considering a web-based/cloud solution you should ensure the provider meets the security requirements your team expects. This is something that should be done for all cloud based services of course, not just test automation reporting tools. Tesults encrypts data in transit and at rest, offers audit logs for projects and supports SSO. Check that any cloud based automation reporting tool you are considering does the same. With self-hosted or on-premise tools, work with your internal IT and security teams to ensure you are meeting appropriate safeguarding and security requirements.
There is a lot to consider when looking at test automation reporting tools. We hope the points discussed above serve as a checklist for the tooling options you are examining. We leave you with a list summarizing the key points to help you with next steps.
Ajeet Dhaliwal writing for Tesults (tesults.com)