
Gain Insights from the Speed Report

Written by Aaron Collier
Updated this week

Get a clear overview of how fast your testing efforts are moving in a single place with the Speed report. Track key indicators to determine if they are sending any signals that require action or further investigation.

Find the report in the Testlio platform under Reports > Speed. You can filter the data for a specific date range and/or specific workspaces.

This report provides visibility into testing speed by surfacing trends in test duration and response times. Through the Speed report, teams can identify patterns that impact test efficiency, uncover opportunities to streamline execution, and make informed decisions that support faster, more effective testing cycles. This improves operational transparency, reduces reliance on manual tracking, and enables better planning and prioritization across test runs.

Monitor Execution

The Execution section of the Speed report in the Testlio platform with some sample data.

Total Test Execution

What it tracks: The total number of tests executed during the selected period. Automatically compares with the number during the prior period.

What signals it might bring: This metric helps teams understand overall test workload, track efficiency trends over time, and optimize how testing resources are allocated.

  • More tests may indicate higher testing demand, growing test complexity, or extended execution duration – potentially signaling a need to assess resource allocation.

  • Fewer tests could suggest improved efficiency or streamlined processes, but may also warrant a review to ensure testing scope hasn't been unintentionally reduced.

Total Execution Time

What it tracks: The total time spent executing all tests during the selected period. Automatically compares with the number during the prior period.

What signals it might bring: This metric helps teams monitor overall test workload, analyze efficiency trends, and manage resource allocation.

  • More testing time may indicate growing testing demand, broader coverage, or added complexity.

  • Less testing time could reflect improved efficiency, but may also result from reduced coverage or a rise in blocked or UTT (unable to test) scenarios, which should be reviewed to ensure completeness of execution.

Avg. Execution Time per Test

What it tracks: The average time it took to execute a single test during the selected period. Automatically compares with the number during the prior period.

What signals it might bring: This metric helps teams assess execution speed and track efficiency trends across test runs.

  • Higher average execution time may signal performance slowdowns, resource limits, or an expanded scope of tests under review.

  • Lower average time typically reflects improved efficiency, but could also indicate increased UTT (unable to test) or out-of-scope areas that need review.

Test Execution Overview

What it tracks: A visual comparison of the total test execution, total execution time, and average execution time per test over time during the selected period.

Use this chart to track individual indicators over time.
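
The report derives these values for you, but the arithmetic behind the three Execution indicators is straightforward. Here is a minimal Python sketch of how they relate; the TestRecord shape and field names are illustrative assumptions, not Testlio's data model:

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    duration_min: float  # execution time in minutes (illustrative field)

def execution_summary(tests: list[TestRecord]) -> dict:
    # Total Test Execution: how many tests ran in the period.
    total_tests = len(tests)
    # Total Execution Time: sum of all individual test durations.
    total_time = sum(t.duration_min for t in tests)
    # Avg. Execution Time per Test: total time divided by test count.
    avg_time = total_time / total_tests if total_tests else 0.0
    return {
        "total_test_execution": total_tests,
        "total_execution_time_min": total_time,
        "avg_execution_time_per_test_min": avg_time,
    }

# Three tests with the durations used in the worked example further below
print(execution_summary([
    TestRecord("A", 20), TestRecord("B", 10), TestRecord("C", 5),
]))
# -> 3 tests, 35 minutes total, ~11.7 minutes per test on average
```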

Track Efficiency

Avg. Turnaround Time

What it tracks: The average time between when a test run is scheduled and when it is marked as finished during the selected period. Automatically compares with the number during the prior period.

What signals it might bring: This metric helps teams evaluate the efficiency of test execution workflows and track how quickly results are delivered.

Parallel Execution Efficiency

What it tracks: The percentage of time saved by executing tests in parallel rather than sequentially during the selected period. Automatically compares with the number during the prior period.

What signals it might bring: This metric helps teams assess how effectively resources are being utilized and how well the test infrastructure supports parallelization.

  • Increased efficiency may reflect stronger infrastructure performance, more effective test distribution, or improvements in how tests are designed to run concurrently.

  • Decreased efficiency may suggest opportunities to revisit parallelization strategy, review test scheduling, or explore ways to reduce delays and maximize resource usage.

Avg. Queue Wait Time

What it tracks: The average time devices spent in the queue before test execution began during the selected period. Automatically compares with the number during the prior period. The wait time is calculated as the difference between the test run start time and the device run start time.

What signals it might bring: This metric helps assess how efficiently test runs are being scheduled and how well available resources are being utilized.

  • Longer wait times may suggest higher system load, scheduling delays, or infrastructure constraints that could benefit from tuning or scaling.

  • Shorter wait times typically indicate more efficient resource allocation and faster test start times, contributing to smoother, faster feedback cycles.
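
As a rough illustration of how the two time-based indicators in this section are measured, here is a minimal Python sketch. The timestamps and function names are hypothetical; the report averages these values across all runs in the period:

```python
from datetime import datetime

def turnaround_minutes(scheduled: datetime, finished: datetime) -> float:
    # Turnaround: from when the run is scheduled to when it is marked finished.
    return (finished - scheduled).total_seconds() / 60

def queue_wait_minutes(test_run_start: datetime, device_run_start: datetime) -> float:
    # Queue wait: device run start minus test run start (see definition above).
    return (device_run_start - test_run_start).total_seconds() / 60

scheduled = datetime(2025, 1, 6, 11, 0)
finished = datetime(2025, 1, 6, 13, 30)
print(turnaround_minutes(scheduled, finished))   # 150.0 minutes

test_run_start = datetime(2025, 1, 6, 12, 0)
device_run_start = datetime(2025, 1, 6, 12, 4)
print(queue_wait_minutes(test_run_start, device_run_start))  # 4.0 minutes
```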

Test Execution Efficiency

What it tracks: A visual comparison of the average turnaround time, parallel execution efficiency, and average queue wait time over time during the selected period.

This chart helps teams identify patterns, assess system performance, and detect emerging bottlenecks that may impact testing speed and resource utilization.

Avg. Turnaround Time with Benchmarks

What it tracks: A visual representation of the turnaround time over time. The chart includes benchmarks from all Testlio testing, with lines for the 25th, 50th (median), and 75th percentiles.

This chart helps you compare your testing efficiency against other companies testing with Testlio.
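
Conceptually, the three benchmark lines are percentiles of turnaround times across Testlio testing. A minimal sketch of that idea, assuming simple quartiles (Testlio's exact aggregation isn't documented here, and the numbers are made up):

```python
from statistics import quantiles

# Hypothetical turnaround times (hours) across many runs -- not real benchmark data
turnaround_hours = [2.5, 3.0, 4.2, 5.0, 6.1, 7.4, 8.0, 9.3, 11.0, 14.5]

# quantiles(..., n=4) returns the three quartile cut points:
# the 25th, 50th (median), and 75th percentiles
p25, p50, p75 = quantiles(turnaround_hours, n=4)
print(f"25th: {p25:.1f} h, median: {p50:.1f} h, 75th: {p75:.1f} h")
```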

More on Parallel Execution Efficiency

Example: Visualizing Parallel Efficiency

In this example, there are three tests that take different amounts of time:

  • Test A: 20 min

  • Test B: 10 min

  • Test C: 5 min

A test run included these tests, with some overlap:

  • Test A: 12:00–12:20

  • Test B: 12:05–12:15

  • Test C: 12:25–12:30

If these tests had run sequentially (one after another), the total time would have been: 20 min (A) + 10 min (B) + 5 min (C) = 35 minutes

In reality, with tests running in parallel, the actual run took from 12:00 (start of A) to 12:30 (end of C) = 30 minutes

That means 5 minutes were saved by running some tests in parallel.

➡️ Parallel Execution Efficiency = (35 - 30)/35 = 14.3%

This means the run was 14.3% faster than if the tests had run sequentially.
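
The same calculation can be written as a short sketch. This illustrates the formula, (sequential time - actual time) / sequential time, and is not Testlio's implementation:

```python
from datetime import datetime

def parallel_efficiency(intervals: list[tuple[datetime, datetime]]) -> float:
    # Sequential time: sum of each test's own duration.
    sequential = sum((end - start).total_seconds() for start, end in intervals)
    # Actual wall-clock time: earliest start to latest end across the run.
    wall_clock = (max(end for _, end in intervals)
                  - min(start for start, _ in intervals)).total_seconds()
    # Efficiency: share of the sequential time saved by overlapping tests.
    return (sequential - wall_clock) / sequential

def t(hour: int, minute: int) -> datetime:
    return datetime(2025, 1, 6, hour, minute)  # arbitrary date; only times matter

run = [
    (t(12, 0), t(12, 20)),   # Test A: 20 min
    (t(12, 5), t(12, 15)),   # Test B: 10 min
    (t(12, 25), t(12, 30)),  # Test C: 5 min
]
print(f"{parallel_efficiency(run):.1%}")  # 14.3%
```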

📊 Why It Matters

  • A higher percentage = better use of parallel execution → faster feedback

  • A lower percentage = less efficient use of parallelization → possible delays

⚠️ How Efficiency Can Be Negative

If there are gaps between tests (time where no tests are running), the total run time might actually be longer than if the tests had run back-to-back without any parallelization.

🔻 Example: Negative Efficiency

  • Three tests take 10 minutes each.

  • But they are scheduled poorly with long pauses in between them.

  • The full run ends up taking 40 minutes instead of 30.

→ In this case, the efficiency is negative: (30 - 40) / 30 = -33%

This indicates an inefficient run setup. It might be due to infrastructure delays, poor parallelization logic, or test distribution issues.
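
Plugging this scenario into the sketch above reproduces the negative value (the gap times are illustrative):

```python
# Reusing parallel_efficiency and t() from the sketch above: three
# 10-minute tests scheduled with long pauses between them.
gappy_run = [
    (t(12, 0),  t(12, 10)),
    (t(12, 15), t(12, 25)),
    (t(12, 30), t(12, 40)),
]
print(f"{parallel_efficiency(gappy_run):.0%}")  # -33%
```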
