This section of the Runs report provides a summary of key metrics for test runs conducted during the selected period, helping teams evaluate test execution performance and identify trends.
| Indicator | What it Tracks | What Signals it Might Bring |
| --- | --- | --- |
| New Runs | The total number of test runs initiated during the period. Automatically compares with how many runs were initiated during the prior period. To be included, a run must have started within the period but doesn't need to have finished within it. | A significant increase or decrease in the number of new runs may indicate changes in testing frequency. Ensure testing cadence aligns with project needs and investigate any drops in volume to identify potential blockers. |
| Avg. Run Duration (hrs) | The average length of time each test run lasted. Automatically compares with the average during the prior period. | This metric reflects the efficiency and complexity of test execution. Differences may arise from variations in test types (such as sanity, regression, or smoke). When assessing this metric, check for discrepancies in the number of runs between periods to ensure the data remains comparable. |
| Issues per Run | The average number of issues identified per test run. Automatically compares with the average during the prior period. | This metric helps evaluate how effective test runs are at identifying issues. |
| New Runs vs. Testers | A visual comparison of the number of new test runs with the average number of testers per run over time. | This metric highlights tester involvement and resource allocation trends, helping you spot opportunities to streamline test coverage and keep things consistent across your portfolio. |
| Testers per Run | The average number of testers involved in each test run. Automatically compares with the average during the prior period. | Different types of runs (such as sanity, regression, or smoke) can naturally vary in tester needs. This metric provides insight into team effort distribution, helps optimize resource allocation, and ensures the right balance of testers for each run's scope and complexity. |
| Devices per Run | The average number of devices used per test run. Automatically compares with the average during the prior period. | This metric offers an understanding of device diversity in testing. The type of run (such as sanity, regression, or smoke) can influence the number of devices needed. It helps you optimize device usage and ensure resources are well matched to each run's scope. |
| Countries per Run | The average number of countries covered in each test run. Automatically compares with the average during the prior period. | This metric helps track global testing reach and supports better localization alignment. |
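The indicators above are simple per-period aggregates over run records. A minimal Python sketch of that logic, assuming hypothetical run records (the field names and sample values are illustrative, not the product's actual schema):

```python
from datetime import datetime

# Hypothetical run records; field names are assumptions for illustration only.
runs = [
    {"started": datetime(2024, 5, 2), "duration_hrs": 6.0, "issues": 4, "testers": 3},
    {"started": datetime(2024, 5, 10), "duration_hrs": 10.0, "issues": 8, "testers": 5},
    {"started": datetime(2024, 4, 20), "duration_hrs": 8.0, "issues": 2, "testers": 2},
]

period_start = datetime(2024, 5, 1)
period_end = datetime(2024, 6, 1)

# A run counts toward the period if it *started* inside it,
# regardless of when it finished.
in_period = [r for r in runs if period_start <= r["started"] < period_end]

def avg(key):
    """Average of one field across the period's runs."""
    return sum(r[key] for r in in_period) / len(in_period)

new_runs = len(in_period)           # → 2
avg_duration = avg("duration_hrs")  # → 8.0
issues_per_run = avg("issues")      # → 6.0
testers_per_run = avg("testers")    # → 4.0
```

The same filter-then-average pattern applies to the device and country indicators; the prior-period comparison simply repeats the computation with the preceding date range.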