Our newest test result analysis tools help you pinpoint issues when things go wrong during your load test. These new tools complement our industry-leading results analysis platform. Read on for more details and other recently launched features, including export to PDF, manually starting a test, and viewing histogram metrics in a table.

New Result Analysis Tools

On the test results page you will find several new ways to drill down to the results from a single virtual user performing one iteration of your test scenario. Each network request, custom metric, screenshot, and assertion captured during your load test is associated with a virtual user iteration (VUI), which is assigned a unique iteration ID. This works even with JMeter, Gatling, and Locust, where the results from each thread have a unique identifier. The result analysis tool now allows you to drill down to a specific VUI in several new ways:
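To make the association concrete, here is a minimal sketch of how captured results can be grouped under the VUI that produced them. The record shape and field names (`iterationId`, `type`, etc.) are illustrative assumptions, not Testable's actual schema:

```javascript
// Hypothetical result records; every captured item carries the ID of
// the virtual user iteration (VUI) that produced it.
const records = [
  { iterationId: 'vui-001', type: 'request', url: '/login', durationMs: 120 },
  { iterationId: 'vui-001', type: 'assertion', name: 'title matches', passed: true },
  { iterationId: 'vui-002', type: 'request', url: '/login', durationMs: 340 },
  { iterationId: 'vui-002', type: 'screenshot', name: 'after-login.png' },
];

// Group every record under its VUI so one iteration can be inspected
// in isolation.
function groupByIteration(rows) {
  const byVui = new Map();
  for (const row of rows) {
    if (!byVui.has(row.iterationId)) byVui.set(row.iterationId, []);
    byVui.get(row.iterationId).push(row);
  }
  return byVui;
}

const grouped = groupByIteration(records);
console.log(grouped.get('vui-001').length); // 2
```

Drilling down to a VUI in the UI is essentially this grouping applied across everything the test captured.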

By User

An example of drilling down by user with a JMeter scenario:

And another example with a Node.js scenario where Testable controls the number of virtual users:

Related to Assertion Failures

An example using Webdriver.io + Mochajs to capture assertions:

Related to Screenshot

For tools like Webdriver.io that support screenshot capture, the screenshots can be found in the Images widget of the results. Click on any screenshot name to see all results collected during the virtual user iteration (VUI) in which the screenshot was captured.

By Iteration ID

There are various ways to get the iteration ID of a virtual user iteration, including exporting the results to CSV or double-clicking a trace or result. Once you have the iteration ID you can enter it in the Result Analysis tab:
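If you have exported the results to CSV, you can also filter the export down to a single iteration yourself. A minimal sketch, assuming a simple comma-separated export with an `iterationId` column (check the header row of your actual export; the column names here are made up):

```javascript
// Filter an exported results CSV down to one iteration ID.
// Assumes simple comma-separated values with no quoted fields.
function rowsForIteration(csvText, iterationId) {
  const [headerLine, ...lines] = csvText.trim().split('\n');
  const headers = headerLine.split(',');
  const idCol = headers.indexOf('iterationId');
  return lines
    .map(line => line.split(','))
    .filter(cols => cols[idCol] === iterationId)
    .map(cols => Object.fromEntries(headers.map((h, i) => [h, cols[i]])));
}

// Illustrative export data.
const csv = [
  'iterationId,url,responseCode,durationMs',
  'vui-001,/login,200,120',
  'vui-002,/login,500,340',
  'vui-001,/home,200,95',
].join('\n');

console.log(rowsForIteration(csv, 'vui-001')); // both vui-001 rows
```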

Export to PDF

From any test result, choose Export to PDF in the dropdown menu in the upper right to export the results to PDF. The layout of the export will match the current view. The results view can be customized as required.

Manually Start a Test

Provisioning and initializing the EC2 test runner instances required to run your load test can take a variable amount of time. Sometimes you need the certainty of starting traffic generation at an exact time.

While configuring your test, in the Locations section, you can now check the "Start the test manually after allocating and initializing test runners" box to solve this. The EC2 instances will be initialized but will not generate any traffic until you manually start the test (via either the website or the API).

Histogram Metrics - Table View

Previously, histogram metrics like response code and HTTP request method could only be viewed as a pie chart. In any widget that displays one of these metrics, you can now configure it to display as a table as well.
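The table view amounts to turning the same bucketed counts behind the pie chart into rows with counts and percentages. A rough sketch with made-up data (the bucket names and totals are illustrative, not from a real test):

```javascript
// Convert histogram-style counts (e.g. by response code) into table
// rows with a percentage column, sorted by count descending.
function histogramToTable(counts) {
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  return Object.entries(counts)
    .sort((a, b) => b[1] - a[1])
    .map(([bucket, count]) => ({
      bucket,
      count,
      percent: +((100 * count) / total).toFixed(1),
    }));
}

// Illustrative response-code histogram from a 1000-request run.
const responseCodes = { '200': 950, '500': 30, '404': 20 };
console.table(histogramToTable(responseCodes));
```

The same transformation applies to any histogram metric; only the bucket labels change.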

Try out the new features and let us know if you have any feedback or concerns. Happy testing!