Analyze Results and Quality
Written by Aaron Collier
Updated over 2 months ago

These indicators in the Tests Report show what results you are getting from your tests. Tracking them helps keep you focused on improving stability and making smart, efficient choices.

Example indicators in the Testlio Platform: failure rate, breakdown by status, and failure count
A chart from the Testlio Platform showing tests by status over time
A chart from the Testlio Platform showing failure rate vs. count over time

Indicator

What It Tracks

What Signals It Might Bring

Failure Rate

The percentage of tests that failed during execution in the period. Automatically compares to the failure rate during the same time frame immediately before the period.

Monitoring the failure rate of test cases helps prioritize testing efforts rather than assuming all failures are critical. A short calculation sketch follows the bullets below.

  • A high failure rate suggests that certain areas need further investigation to determine whether these failures point to important bugs or test gaps.

  • A consistently low failure rate might indicate that testing is well-aligned with the system’s current state, allowing the team to focus on improving automation or covering other areas.
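
To make the numbers concrete, here is a minimal Python sketch of the calculation, assuming simple pass/fail tallies; the counts and function names are illustrative, not part of the Testlio Platform.

```python
# Minimal sketch: failure rate and its period-over-period comparison.
# The counts below are made-up examples, not Testlio data.
def failure_rate(failed: int, executed: int) -> float:
    """Percentage of executed tests that failed."""
    return 100.0 * failed / executed if executed else 0.0

current = failure_rate(failed=18, executed=240)   # 7.5%
previous = failure_rate(failed=30, executed=250)  # 12.0%

# The report compares against the time frame immediately before the
# selected period; expressed here as a simple difference in points.
print(f"Failure rate: {current:.1f}% ({current - previous:+.1f} pts vs. previous period)")
```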

Breakdown by Status

The percentages of tests that passed, failed, and were blocked.

The breakdown by status helps you prioritize testing efforts and resources. A short example of the calculation follows these bullets.

  • A high pass rate indicates stable testing.

  • A high failure rate should prompt an investigation of patterns or potential problem areas.

  • A high blocked rate might suggest dependencies or external factors that need to be addressed before continuing testing. Monitoring this helps reduce blockers and shift focus where it’s needed most.
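
The same tallies can be turned into a status breakdown. A minimal sketch, assuming each test execution ends in exactly one of the three statuses; the result list is invented for illustration:

```python
from collections import Counter

# Hypothetical execution results; real data would come from your runs.
results = ["passed", "passed", "failed", "blocked", "passed", "failed"]

counts = Counter(results)
total = sum(counts.values())

# Share of tests in each status, as shown in the breakdown-by-status view.
for status in ("passed", "failed", "blocked"):
    print(f"{status}: {100.0 * counts[status] / total:.1f}%")
```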

Failure Count

The total number of test executions that failed in the period. Automatically compares to how many tests failed during the same time frame immediately before the period.

The failure count helps you assess the overall health of the test suite.

  • A high count might highlight areas that need deeper investigation or adjustments in the test cases themselves.

  • A low count could signal that the tests are stable and effective, allowing the team to focus on other areas like expanding coverage or further automation.

Tests by Status

A visual representation of the distribution of test results over time, categorized into passed, failed, and blocked statuses.

This chart helps you track trends in your testing process.

  • A consistently high pass rate is a positive signal of system stability, allowing the team to focus on other areas like expanding coverage or further automation.

  • A growing number of failures or blocks could indicate recurring issues or dependencies that need to be resolved. Monitoring these patterns allows you to adjust the focus of your testing efforts, ensuring resources are used efficiently and blockers are addressed promptly.

Failure Rate vs. Count

A visual representation of the failure rate percentage and the total number of test failures over time.

This chart helps you identify trends in test quality, allowing you to understand whether fluctuations in failure rates are linked to the volume of tests being executed or shifts in the system's stability.

  • Sudden spikes in failure rates are a signal that something needs to be addressed. Addressing these promptly can help maintain test stability and prevent small issues from escalating into larger problems.

Features Tested

The total number of features tested during the period. Automatically compares to how many features were tested during the same time frame immediately before the period.

This indicator shows the scope of testing efforts across the product’s functionality.

  • A large number of features being tested shows comprehensive coverage of the product's capabilities.

  • Few features being tested might signal a need to broaden the testing scope, or it might reflect deliberate prioritization of key features.

Test Coverage

The percentage of available features that have been tested during the period. Automatically compares to the percentage during the same time frame immediately before the period.

This metric indicates how well the testing process is covering the product’s features.

  • High coverage shows thorough testing of all product areas.

  • Low coverage may indicate gaps in the testing strategy with related risks, suggesting that additional resources or focus may be needed to cover critical areas.

Total Available Features

The total number of features in the product that are available for testing. Automatically compares to the number of features during the same time frame immediately before the period.

This indicator shows the full scope of product functionality. It is useful together with Features Tested, though it’s important to recognize that not all available features are critical.
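
Together, Features Tested and Total Available Features yield Test Coverage. A minimal sketch of that relationship, using made-up counts rather than real report data:

```python
# Hypothetical counts; in practice these come from your Tests Report.
features_tested = 42
total_available_features = 60

# Coverage: tested features as a share of all features available
# for testing in the period.
coverage = 100.0 * features_tested / total_available_features
print(f"Test coverage: {coverage:.1f}%")  # 70.0%
```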

Where Failures Happen

These indicators help you track where failing and passing tests are happening most frequently. This helps optimize testing strategy by reducing focus on stable features and reallocating resources to more problematic areas or features that haven’t been tested as thoroughly. Monitoring these trends ensures that testing efforts are balanced and aligned with the product’s needs.

A chart showing failure rate by feature, with boxes varying in size to indicate frequency and color to indicate failure rate
A table listing frequently failing tests
A table listing several tests that always pass

Indicator

What It Tracks

What Signals It Might Bring

Failure Rate by Features

This chart highlights the distribution of test failures across different product features. Each block represents a feature, with its size indicating the number of tests and its color intensity showing the failure rate.

This chart helps identify features that might need more or less attention; a small sketch of the underlying aggregation follows these bullets.

  • Dark colors (meaning high failure rates), especially on small boxes (meaning few tests), might indicate a need for further investigation and more immediate attention.

  • Light colors on big boxes (meaning low failure rates with lots of tests) might show stable areas that may not need as much testing moving forward.
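
To make the chart's encoding concrete, here is a minimal sketch of the aggregation behind it, assuming executions are available as (feature, passed) pairs; the feature names are invented for illustration:

```python
from collections import defaultdict

# Hypothetical executions as (feature, passed) pairs.
executions = [
    ("Checkout", False), ("Checkout", True), ("Checkout", False),
    ("Login", True), ("Login", True),
    ("Search", True), ("Search", False),
]

stats = defaultdict(lambda: {"tests": 0, "failed": 0})
for feature, passed in executions:
    stats[feature]["tests"] += 1
    stats[feature]["failed"] += not passed  # True counts as 1

# Block size ~ number of tests; color intensity ~ failure rate.
for feature, s in stats.items():
    rate = 100.0 * s["failed"] / s["tests"]
    print(f"{feature}: {s['tests']} tests, {rate:.0f}% failure rate")
```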

Top Frequently Failed Tests

This table lists tests with the highest failure rates. Each entry gives the name, key, failure count, and failure rate.

This information helps you quickly identify which tests are most problematic, enabling you to investigate and address the underlying issues to improve overall test stability. A brief ranking example follows these bullets.

  • High rates and high counts call for further investigation, allowing your team to identify patterns or recurring issues, trace root causes, and fix the underlying problems.

  • High rates and low counts might indicate newer tests that are doing their job well by catching issues. These tests might not need immediate attention, but they are worth monitoring over the longer term.
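
A minimal sketch of how such a ranking can be derived from per-test tallies; the test names, keys, and counts here are invented:

```python
# Hypothetical per-test tallies: (name, key, failed, executed).
tests = [
    ("Apply coupon", "TC-101", 9, 10),
    ("Reset password", "TC-042", 2, 40),
    ("Upload avatar", "TC-077", 5, 8),
]

# Rank by failure rate, breaking ties by failure count.
ranked = sorted(tests, key=lambda t: (t[2] / t[3], t[2]), reverse=True)
for name, key, failed, executed in ranked:
    print(f"{key} {name}: {failed}/{executed} failed "
          f"({100.0 * failed / executed:.0f}%)")
```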

Top Frequently Passing Tests

This table shows test cases with the highest pass rates. Each entry gives the name, key, pass count, and pass rate.

This table shows areas where the system is performing consistently well.

  • High pass rates can signal that certain features are stable and reliable.

    • This means they may be suitable candidates for automation. Automating these frequently passing tests allows the team to focus manual efforts on more complex or risk-prone areas.

    • Alternatively, the resources spent on these tests could be redirected to other areas.
