Monitor Manual Testing
Written by Aaron Collier
Updated this week

This section of the Runs report provides insights into manual testing efforts, highlighting key metrics to evaluate tester productivity and the efficiency of manual processes.

[Image: Overview indicators for manual testing in the Testlio platform.]

Each indicator below is listed with what it tracks and what signals it might bring.

Total Manual Testing Time

The total number of hours spent on manual testing during the period. Automatically compares with the number during the prior period.

This helps assess the overall effort dedicated to manual tasks.

  • An increase may indicate expanded scope, which presents a good opportunity to review and optimize where possible.

Avg. Time per Run

The average manual testing time for each run. Automatically compares with the average during the prior period.

This metric offers insight into the complexity and duration of manual tasks across runs.

  • Longer averages can signal increased scope or problems (such as environmental issues) that delay test execution. This is a good opportunity to assess whether adjustments are needed to streamline execution or remove obstacles.

Avg. Time per Tester

The average number of manual testing hours conducted by each tester during the period. Automatically compares with the average during the prior period.

This metric helps in monitoring individual workload distribution.

  • An increase may indicate that more issues were uncovered, requiring additional time for review and resolution, or it could suggest an expansion in scope.

This is a good opportunity to review the testing strategy, confirming that time spent aligns with priorities and that any increased testing effort is addressing key areas of concern effectively.
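The time-based indicators above reduce to simple averages. A minimal sketch of how they might be derived, using hypothetical sample data rather than the Testlio API:

```python
# A minimal sketch of how the time-based indicators might be derived.
# "runs" is hypothetical sample data, not a Testlio API structure.
runs = [
    {"hours": 6.0, "testers": {"ana", "ben"}},
    {"hours": 4.0, "testers": {"ana"}},
    {"hours": 5.0, "testers": {"cara"}},
]

# Total Manual Testing Time: sum of hours across all runs in the period.
total_hours = sum(r["hours"] for r in runs)

# Avg. Time per Run: total hours divided by the number of runs.
avg_per_run = total_hours / len(runs)

# Avg. Time per Tester: total hours divided by the number of
# distinct testers active in the period.
distinct_testers = set().union(*(r["testers"] for r in runs))
avg_per_tester = total_hours / len(distinct_testers)

print(total_hours, avg_per_run, avg_per_tester)  # 15.0 5.0 5.0
```

Note that the per-tester average divides by distinct testers across the period, not by the sum of per-run tester counts, so a tester working multiple runs is counted once.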

Manual Testing Time vs. Testers by Run

A visual comparison of the total manual testing time with the number of testers per run.

This metric provides visibility into the correlation between team size and effort. It can help identify opportunities to optimize test coverage and maintain consistency across the workspace.

  • An imbalance between manual testing time and the number of testers may be due to varying test scope or device requirements. Review to ensure resources are appropriately allocated and to identify areas where scope might be streamlined for efficiency.

Total Manual Testing Tasks

The total number of manual testing tasks completed during the period. Automatically compares with the number during the prior period.

This metric provides a measure of overall output. It can help identify opportunities to optimize processes and save hours, or prompt a review of scope to ensure comprehensive coverage while maintaining efficiency.

  • More tasks can signal increased scope or deeper test coverage.

Avg. Tasks per Run

The average number of manual testing tasks executed in each run. Automatically compares with the average during the prior period.

This metric helps track the efficiency and scope of testing activities.

  • An increase may indicate expanded scope or more detailed test coverage. This helps identify opportunities to optimize processes, ensuring efficient use of hours while maintaining thorough coverage.

Avg. Testing Window per Run

The average testing window duration for each run. Automatically compares with the average during the prior period.

This metric offers insight into the time frame allocated for manual testing.

  • If the average is more than 4 hours, consider optimizing the scope or splitting tasks to maintain efficiency. Keeping testing windows at 4 hours or less helps ensure focus, thorough coverage, and timely execution.
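The 4-hour guideline above can be sketched as a simple duration check. This assumes hypothetical run windows with start and end timestamps, not Testlio data structures:

```python
from datetime import datetime, timedelta

# Hypothetical run windows (start, end); not Testlio data structures.
windows = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 12, 30)),  # 3.5-hour window
    (datetime(2024, 5, 2, 9, 0), datetime(2024, 5, 2, 15, 0)),   # 6-hour window
]

durations = [end - start for start, end in windows]

# Avg. Testing Window per Run for the period.
avg_window = sum(durations, timedelta()) / len(durations)

# Flag runs whose window exceeds the suggested 4-hour ceiling,
# as candidates for splitting tasks or trimming scope.
too_long = [d for d in durations if d > timedelta(hours=4)]
```

Here the average works out to 4 hours 45 minutes, and the 6-hour run would be flagged as a candidate for splitting.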
