These indicators in the Tests Report show what results you are getting from your tests. Tracking them helps keep you focused on improving stability and making smart, efficient choices.
Indicator | What It Tracks | What Signals It Might Bring |
--- | --- | --- |
Failure Rate | The percentage of tests that failed during execution in the period. Automatically compares to the failure rate during the same time frame immediately before the period. | Monitoring the failure rate of test cases helps prioritize testing efforts rather than assuming all failures are critical. |
Breakdown by Status | The percentages of tests that passed, failed, and were blocked. | The breakdown by status helps you prioritize testing efforts and resources. |
Failure Count | The total number of test executions that failed in the period. Automatically compares to how many tests failed during the same time frame immediately before the period. | The failure count helps you assess the overall health of the test suite. |
Tests by Status | A visual representation of the distribution of test results over time, categorized into passed, failed, and blocked statuses. | This chart helps you track trends in your testing process. |
Failure Rate vs. Count | A visual representation of the failure rate percentage and the total number of test failures over time. | This chart helps you identify trends in test quality, allowing you to understand whether fluctuations in failure rates are linked to the volume of tests being executed or shifts in the system's stability. |
Features Tested | The total number of features tested during the period. Automatically compares to how many features were tested during the same time frame immediately before the period. | This indicator shows the scope of testing efforts across the product’s functionality. |
Test Coverage | The percentage of available features that have been tested during the period. Automatically compares to the percentage during the same time frame immediately before the period. | This metric indicates how well the testing process is covering the product’s features. |
Total Available Features | The total number of features in the product that are available for testing. Automatically compares to the number of features during the same time frame immediately before the period. | This indicator shows the full scope of product functionality. It is useful together with Features Tested, though it’s important to recognize that not all available features are critical. |
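The period-level indicators above are all simple counts and ratios over execution results. As a minimal sketch of the underlying arithmetic (the data shapes, statuses, and feature names here are illustrative assumptions, not the report's actual data model):

```python
from collections import Counter

# Assumed shape: each execution is (status, feature), with statuses
# "passed", "failed", or "blocked" as in the report.
executions = [
    ("passed", "login"), ("failed", "login"), ("failed", "checkout"),
    ("passed", "search"), ("blocked", "search"), ("passed", "checkout"),
]
available_features = {"login", "checkout", "search", "profile"}

total = len(executions)
counts = Counter(status for status, _ in executions)

failure_count = counts["failed"]                 # Failure Count
failure_rate = 100 * failure_count / total       # Failure Rate (%)
# Breakdown by Status: share of each outcome as a percentage.
breakdown = {s: 100 * counts[s] / total for s in ("passed", "failed", "blocked")}

features_tested = {feature for _, feature in executions}          # Features Tested
coverage = 100 * len(features_tested) / len(available_features)   # Test Coverage (%)

print(round(failure_rate, 1))  # 2 of 6 executions failed -> 33.3
print(coverage)                # 3 of 4 features tested -> 75.0
```

The "compares to the previous period" behavior would simply repeat the same computation over the preceding time frame and report the delta.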
Where Failures Happen
These indicators help you track where failing and passing tests are happening most frequently. This helps optimize testing strategy by reducing focus on stable features and reallocating resources to more problematic areas or features that haven’t been tested as thoroughly. Monitoring these trends ensures that testing efforts are balanced and aligned with the product’s needs.
Indicator | What It Tracks | What Signals It Might Bring |
--- | --- | --- |
Failure Rate by Features | This chart highlights the distribution of test failures across different product features. Each block represents a feature, with its size indicating the number of tests and its color intensity showing the failure rate. | This chart helps identify features that might need more or less attention. |
Top Frequently Failed Tests | This table lists tests with the highest failure rates. Each entry gives the name, key, failure count, and failure rate. | This information helps you quickly identify which tests are most problematic, enabling you to investigate and address the underlying issues to improve overall test stability. |
Top Frequently Passing Tests | This table shows test cases with the highest pass rates. Each entry gives the name, key, pass count, and pass rate. | This table shows areas where the system is performing consistently well. High pass rates can signal that certain features are stable and reliable, meaning they may be suitable candidates for automation. |
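The two "Top Frequently…" tables amount to ranking per-test tallies by failure or pass rate. A hedged sketch of that ranking, using made-up test keys and counts purely for illustration:

```python
# Assumed shape: per-test tallies of runs and failures in the period;
# the keys, names, and numbers below are hypothetical.
tests = [
    {"key": "TC-101", "name": "Login with SSO", "runs": 40, "failures": 18},
    {"key": "TC-205", "name": "Checkout with coupon", "runs": 25, "failures": 2},
    {"key": "TC-310", "name": "Search pagination", "runs": 30, "failures": 12},
]

for t in tests:
    t["failure_rate"] = 100 * t["failures"] / t["runs"]
    t["pass_rate"] = 100 - t["failure_rate"]

# Top Frequently Failed Tests: highest failure rate first.
top_failed = sorted(tests, key=lambda t: t["failure_rate"], reverse=True)
# Top Frequently Passing Tests: highest pass rate first.
top_passing = sorted(tests, key=lambda t: t["pass_rate"], reverse=True)

print(top_failed[0]["key"])   # TC-101 (45% failure rate)
print(top_passing[0]["key"])  # TC-205 (92% pass rate)
```

In practice a tool would also filter out tests with very few runs, since a single failure in two executions produces a misleading 50% failure rate.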