Track Runs
Written by Aaron Collier
Updated this week

This section of the Runs report provides a summary of key metrics for test runs conducted during the selected period, helping teams evaluate test execution performance and identify trends.

Overview indicators for runs in the Testlio platform.

Indicator

What it Tracks

What It Might Signal

New Runs

The total number of test runs initiated during the period. Automatically compares with how many runs were initiated during the prior period.

To be included, a run must have started within the period and be finished; it does not need to have finished within the period itself. Runs still in progress are excluded.

A significant increase or decrease in the number of new runs may indicate changes in testing frequency. Ensure testing cadence aligns with project needs and investigate any drops in volume to identify potential blockers.
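The inclusion rule above can be sketched in code. This is an illustrative example only, not Testlio's implementation; the run records and field names (`started`, `finished`) are assumptions.

```python
from datetime import datetime

# Hypothetical run records; field names are assumptions for illustration.
runs = [
    {"started": datetime(2024, 5, 3), "finished": datetime(2024, 5, 4)},
    {"started": datetime(2024, 5, 20), "finished": None},                  # still in progress
    {"started": datetime(2024, 4, 28), "finished": datetime(2024, 5, 2)},  # started before period
]

period_start = datetime(2024, 5, 1)
period_end = datetime(2024, 6, 1)

# A run counts as "new" if it started within the period and has finished,
# even if it finished after the period ended.
new_runs = [
    r for r in runs
    if period_start <= r["started"] < period_end and r["finished"] is not None
]

print(len(new_runs))  # 1: only the first run qualifies
```

Only the first run counts here: the second has not finished, and the third started before the period began.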

Avg. Run Duration (hrs)

The average length of time each test run lasted. Automatically compares with the average length during the prior period.

This metric reflects the efficiency and complexity of test execution. Differences may arise due to variations in test types (such as sanity, regression, smoke).

  • Changes may indicate a need to ensure a consistent distribution of test types across periods for accurate comparisons.

  • Increasing run durations could indicate growing scope or more thorough testing — review and optimize scope where possible.

When assessing this metric, check for discrepancies in the number of runs between periods to ensure the data remains comparable.
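As a rough sketch of how such an average could be computed (the data shape and field names are assumptions, not Testlio's implementation):

```python
from datetime import datetime

# Hypothetical finished runs; field names are illustrative assumptions.
runs = [
    {"started": datetime(2024, 5, 1, 9, 0), "finished": datetime(2024, 5, 1, 13, 0)},   # 4 hrs
    {"started": datetime(2024, 5, 2, 10, 0), "finished": datetime(2024, 5, 2, 16, 0)},  # 6 hrs
]

# Average run duration in hours: mean of (finished - started) across runs.
total_hours = sum(
    (r["finished"] - r["started"]).total_seconds() / 3600 for r in runs
)
avg_duration_hrs = total_hours / len(runs)

print(avg_duration_hrs)  # 5.0
```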

Issues per Run

The average number of issues identified per test run. Automatically compares with the average during the prior period.

This metric helps evaluate how effective test runs are at identifying issues.

  • A low average may indicate stable areas within the scope. Review these areas to confirm stability and adjust your test strategy to focus on more dynamic or high-risk sections.

  • A high average could signal potential regressions or areas needing deeper investigation. Use this as an opportunity to refine your testing approach and ensure adequate coverage.
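The average and its period-over-period comparison work out as follows. This is an illustrative sketch only; the issue counts and prior-period value are made up.

```python
# Hypothetical issue counts for each run in the current period.
issues_per_run = [3, 7, 2, 4]

current_avg = sum(issues_per_run) / len(issues_per_run)  # 4.0

# Period-over-period comparison, as the report does automatically.
prior_avg = 5.0  # assumed prior-period average
change_pct = (current_avg - prior_avg) / prior_avg * 100  # -20.0

print(current_avg, change_pct)
```

A negative change here would show on the report as a drop in issues per run versus the prior period.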

New Runs vs. Testers

A visual comparison of the number of new test runs with the average number of testers per run over time.

This metric highlights tester involvement and resource allocation trends, helping you spot opportunities to streamline test coverage and keep things consistent across your portfolio.

  • A mismatch between the number of testers and runs could be a sign to adjust scope or balance resources, especially when runs vary in size or device needs.
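One way to build this kind of over-time comparison is to bucket runs by week and compute both series per bucket. A minimal sketch, assuming each run record carries a start date and a tester count (these field names are not from the article):

```python
from collections import defaultdict
from datetime import date

# Hypothetical runs with a start date and tester count.
runs = [
    {"started": date(2024, 5, 6), "testers": 4},
    {"started": date(2024, 5, 8), "testers": 6},
    {"started": date(2024, 5, 15), "testers": 3},
]

# Bucket runs by ISO week, then compare run count with average testers per run.
by_week = defaultdict(list)
for r in runs:
    by_week[r["started"].isocalendar()[1]].append(r["testers"])

summary = {
    week: {"runs": len(testers), "avg_testers": sum(testers) / len(testers)}
    for week, testers in by_week.items()
}

print(summary)
```

Plotting `runs` and `avg_testers` per week side by side gives the kind of visual comparison this indicator provides.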

Testers per Run

The average number of testers involved in each test run. Automatically compares with the average during the prior period.

Different types of runs (such as sanity, regression, smoke) can naturally vary in tester needs. This metric provides insight into team effort distribution. It can also help optimize resource allocation and ensure the right balance of testers for each run's scope and complexity.

  • Changes in the average are a good prompt to check for consistency in run types over time.

Devices per Run

The average number of devices used per test run. Automatically compares with the average during the prior period.

This metric reflects device diversity in testing. The types of runs (such as sanity, regression, smoke) can influence the number of devices needed. It helps you optimize device usage and ensure resources are well-matched to each run’s scope.

  • Changes in device counts signal a need to check for consistency in run types and to consider whether coverage aligns with current testing needs.

Countries per Run

The average number of countries covered in each test run. Automatically compares with the average during the prior period.

This metric helps track global testing reach and supports better localization alignment.

  • Changes in the number of countries might highlight a need to focus on specific regions or adjust testing coverage to ensure all relevant locales are properly represented.
