
Understand Insights from LeoInsights

See what the insights generated from your reports mean.

Written by Aaron Collier

When your workspace has characteristics that are outside the norm for testing, LeoInsights generates insights to let you know and help guide your testing strategy.

See Insights

To see all generated insights, navigate to the Home section of Reports in the Testlio platform.

[Screenshot: The Home section of Reports with two insights, one about an iPhone 21 being linked to many issues but few test executions and the other about having many high-severity issues.]

Get Notifications

To get notified when LeoInsights discovers unusual data, set up a Slack notification for new insights. You should set up notifications for each of your workspaces.

Available Insights

Note that the percentile thresholds listed for each insight were calculated at a fixed point in time (when the insight was developed). They are not dynamic and do not change as workspace usage changes.

Device with Many Issues in Few Test Executions

Message: Consider increasing test coverage for the device to catch issues earlier.

When It Appears

This insight is triggered when many issues are found on a given device (a specific model) even though that device is only used in a small percentage of test executions.

This insight might be triggered by problems with the device itself; it does not consider whether issues are reproducible only on that device (that data is not available). But it can also result from how the device is used within the workspace (harder flows, deeper exploration, longer sessions, a different feature focus).

Technical details on when the insight is triggered

This insight is only triggered for workspaces with at least the following within the past 60 days:

  • As many device usages (use of a device in 1 test) as 75% of all workspaces (at least 26)

  • As many issues linked to a device usage as 75% of all workspaces (at least 11)

Checks if any device has an issue-to-usage ratio higher than 95% of all devices among included workspaces. This ratio is a device’s share of the workspace issues divided by its share of the workspace device usages.
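The ratio described above can be sketched as follows. This is an illustrative calculation only, with hypothetical numbers — not the actual LeoInsights implementation.

```python
# Illustrative sketch of the issue-to-usage ratio described above.
# All figures are hypothetical examples, not real workspace data.

def issue_to_usage_ratio(device_issues, device_usages, total_issues, total_usages):
    """A device's share of workspace issues divided by its share of device usages."""
    issue_share = device_issues / total_issues    # e.g. 8 of 40 issues -> 0.2
    usage_share = device_usages / total_usages    # e.g. 2 of 50 usages -> 0.04
    return issue_share / usage_share

# A device linked to 8 of 40 issues but used in only 2 of 50 test executions
# gets a ratio of about 5: its issue share is roughly 5x its usage share.
ratio = issue_to_usage_ratio(8, 2, 40, 50)
```

A ratio well above 1 means the device accounts for a disproportionate share of issues relative to how often it is used.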

What To Do

When you see this insight, first check whether it reflects an actual device risk or simply the activity on that device. Start with a confidence check: investigate whether issues are reproducible only (or disproportionately) on that device, cluster by category (performance/UI/crash), and persist across multiple testers and runs. If so, it is worth increasing coverage of the device to catch issues sooner.

Many High-Severity Issues

Message: Consider prioritizing investigation into critical issues to improve stability.

When It Appears

This insight is triggered when a given workspace has a higher proportion of high-severity issues than most other workspaces.

This insight might mean different things depending on context. It might indicate product instability or effective testing (such as strong exploratory coverage, good severity triage, or a deliberate focus on critical paths).

Technical details on when the insight is triggered

This insight is only triggered for workspaces with at least the following:

  • As many approved issues linked to a test that were created within the past 60 days as 75% of other workspaces (at least 16)

Checks if the workspace has a higher high-severity issue ratio than 90% of included workspaces. This ratio is the number of high-severity issues divided by the total number of issues (only issues meeting the criteria).
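The high-severity ratio is a simple share of qualifying issues. The sketch below uses hypothetical severity labels and counts to show the arithmetic; it is not the actual LeoInsights code.

```python
# Illustrative sketch of the high-severity issue ratio described above.
# Severity labels and counts are hypothetical examples.

def high_severity_ratio(issues):
    """Number of high-severity issues divided by the total number of issues."""
    high = sum(1 for issue in issues if issue["severity"] == "high")
    return high / len(issues)

# 6 high-severity issues out of 20 qualifying issues -> ratio of 0.3
issues = [{"severity": "high"}] * 6 + [{"severity": "medium"}] * 14
ratio = high_severity_ratio(issues)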
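The high-severity ratio is a simple share of qualifying issues. The sketch below uses hypothetical severity labels and counts to show the arithmetic; it is not the actual LeoInsights code.

```python
# Illustrative sketch of the high-severity issue ratio described above.
# Severity labels and counts are hypothetical examples.

def high_severity_ratio(issues):
    """Number of high-severity issues divided by the total number of issues."""
    high = sum(1 for issue in issues if issue["severity"] == "high")
    return high / len(issues)

# 6 high-severity issues out of 20 qualifying issues -> ratio of 0.3
issues = [{"severity": "high"}] * 6 + [{"severity": "medium"}] * 14
ratio = high_severity_ratio(issues)
```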

What To Do

To determine actions to take, investigate the context further, looking at when these issues are being found (early vs late), whether they are regressions or new, and how many are addressed before release. Also check how issue severity is being assigned to make sure it is consistent.

Long Resolution Times for High-Severity Issues

Message: Focus on high-severity issues to avoid exposure across runs.

When It Appears

This insight is triggered when a given workspace has a longer average time for high-severity issues to be resolved than most other workspaces.

The insight means that issues that have been tagged as being high-severity are not being resolved in a timely manner.

Technical details on when the insight is triggered

This insight is only triggered for workspaces with at least the following:

  • Any approved high-severity issues that have been resolved during the past 90 days or remain unresolved

Checks if the workspace's average time from issue creation to resolution is at least as long as that of 75% of included workspaces. This number is the average resolution time across all high-severity issues.
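The average can be sketched as below. Note one assumption the article does not specify: how still-unresolved issues are counted. This sketch assumes they accrue time up to the present; the dates and field names are hypothetical.

```python
# Illustrative sketch of average resolution time for high-severity issues.
# Assumes unresolved issues accrue time up to `now` (the article doesn't
# specify this detail). Dates and field names are hypothetical examples.
from datetime import datetime, timedelta

def average_resolution_time(issues, now):
    """Mean time from issue creation to resolution across all issues."""
    durations = [(issue["resolved_at"] or now) - issue["created_at"]
                 for issue in issues]
    return sum(durations, timedelta()) / len(durations)

now = datetime(2025, 1, 10)
issues = [
    {"created_at": datetime(2025, 1, 1), "resolved_at": datetime(2025, 1, 3)},  # 2 days
    {"created_at": datetime(2025, 1, 6), "resolved_at": None},                  # 4 days so far
]
avg = average_resolution_time(issues, now)  # timedelta(days=3)
```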

What To Do

When you see this insight, start a review of your issue triage process. Make sure that the issues being marked as high severity really are important. If the issues still seem high severity after the review, check with the client if anything is blocking action on their end. Make sure the issues are being surfaced to them as high priority and that you are aligned on what priorities should be.

Execution Concentrated Within a Very Small Tester Subset

Message: Consider rotating testers, adding fresh eyes for new perspectives.

When It Appears

This insight is triggered when an individual tester has completed a higher proportion of a workspace's tests than is typical across other workspaces. It is triggered by any single tester's share of executions, not by the percentage of testers in a workspace executing tests or any similar aggregate.

The insight likely means that too many tests are being executed by a single freelancer. This creates a dependency on that freelancer and a potential point of failure if that freelancer does not accept work for a while.

Technical details on when the insight is triggered

This insight is only triggered for workspaces with at least the following:

  • Test executions created within the past 60 days

  • As many testers as 75% of other workspaces (at least 6)

  • As many test executions as 75% of other workspaces (at least 39)

Checks if any tester in the workspace has a higher proportion of test executions than 95% of testers from all workspaces.
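The "proportion" being compared is one tester's share of the workspace's executions. A minimal sketch, with hypothetical tester names and counts:

```python
# Illustrative sketch of a single tester's share of test executions.
# Tester names and counts are hypothetical examples.

def share_of_executions(tester_counts, tester):
    """One tester's proportion of all test executions in the workspace."""
    return tester_counts[tester] / sum(tester_counts.values())

counts = {"alice": 30, "bob": 6, "cara": 4}
share = share_of_executions(counts, "alice")  # 30 / 40 = 0.75
```

A share this high would then be compared against the shares of testers from all workspaces.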

What To Do

When you see this insight, consider whether you have other testers who could be doing more of the work. Try to spread testing work around so that you have contingency plans if any tester is unable to work. Consider bringing in more testers and getting them familiar with the product, such as through LeoMatch: Find Freelancers with LeoMatch and Work Opportunities.

Also look at whether participation may be limited by complex test instructions or product knowledge requirements. If so, expand workspace onboarding materials to broaden participation.

Recurring Product Fragility

Message: Review related product features for instability.

When It Appears

This insight is triggered when certain tests often alternate between passing and failing across executions and the issues from those failures are accepted, indicating the failures were valid.

This insight potentially means the underlying product feature is fragile, with frequent needs for fixes that do not last. Alternatively, there could be issues with the tests or test environments that cause this flakiness. Note that unlike the insight on flaky tests, with this insight the failures are more often connected to valid issues and so are more likely to be connected to the product.

Technical details on when the insight is triggered

This insight is only triggered for workspaces with at least the following:

  • Runs started within the past 60 days

  • As many runs as 75% of other workspaces (at least 2)

  • An acceptance rate for issues from runs at least as high as 75% of other workspaces (at least 80%)

Checks if the workspace has any tests that switch between pass and fail more frequently than 90% of tests across all workspaces.
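One way to quantify "switching between pass and fail" is to count transitions in a test's chronological result history. This is a hypothetical sketch of such a measure, not the actual LeoInsights metric:

```python
# Illustrative sketch: count pass/fail switches across a test's executions,
# in chronological order. The result list is a hypothetical example.

def flip_count(results):
    """Number of times consecutive executions differ (pass<->fail switches)."""
    return sum(1 for prev, curr in zip(results, results[1:]) if prev != curr)

history = ["pass", "fail", "pass", "pass", "fail"]
flips = flip_count(history)  # 3 switches
```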

What To Do

When you see this insight, even though the connected issues are usually valid, first check to see if the flakiness of the tests is a result of problems in the test environment or the tests themselves. Check other tests that cover the same feature to see if they have similar results. If you don’t have other tests covering this area, add some.

If your investigation shows the tests are uncovering product fragility, review issue history for the feature to identify recurring issues and share insights with the client to support deeper root-cause investigations.

Flaky Test with Low Issue Acceptance

Message: Review these tests for clarity and related environments for flakiness.

When It Appears

This insight is triggered when certain tests often alternate between passing and failing across executions and the issues from those failures are not accepted, indicating the tests more likely failed for reasons related to the test environment or the tests themselves.

This insight potentially means the tests or test environments have issues that cause this flakiness. Alternatively, the underlying product feature could be fragile, with frequent needs for fixes that do not last. Note that unlike the insight on product fragility, with this insight the failures are more often not connected to valid issues and so are more likely to be connected to the tests rather than the product.

Technical details on when the insight is triggered

This insight is only triggered for workspaces with at least the following:

  • Runs started within the past 60 days

  • As many runs as 75% of other workspaces (at least 2)

  • An acceptance rate for issues from runs at least as high as 75% of other workspaces (at least 80%)

Checks if the workspace has any tests that switch between pass and fail more frequently than 75% of tests across all workspaces.

What To Do

When you see this insight, check to see if the flakiness of the tests is a result of problems in the test environment or the tests themselves. Check other tests that cover the same feature to see if they have similar results. If you don’t have other tests covering this area, add some. Also review test steps and expected results for ambiguity, clarifying where necessary to ensure testers are always testing in the same way.

If your investigation shows no issues in the tests or test environment, look at the underlying product area to see if it might be fragile. Once verified, share potential concerns with the client.

Unbalanced Failure Rate Across Platforms

Message: Consider shifting test coverage to avoid platform-specific blind spots.

When It Appears

This insight is triggered when a workspace that is testing multiple platforms (such as Android, iOS, and web) has a substantially greater rate of failure for one platform than the others. This applies when the rate is higher for a given feature, not across all features together.

This insight potentially means that a feature is covered more completely in one platform than other platforms. Alternatively, it could mean the workspace intentionally focuses more on a single platform but occasionally tests other platforms. Alternatively, it could mean that a feature on one of the platforms has more issues than it does on the others.

Technical details on when the insight is triggered

This insight is only triggered for workspaces with at least the following:

  • Test executions created within the past 60 days

  • As many platforms as 75% of other workspaces (at least 2)

  • As many test executions as 75% of other workspaces (at least 5)

  • At least 2 test failures

Checks if the workspace has a platform whose failure rate for a feature, compared to the rate for that feature across all platforms in the workspace, is higher than the equivalent gap in 90% of all workspaces. Only triggers if, after rounding, the failure rate for the feature on that platform is higher than the failure rate for the feature across all platforms.
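The gap being compared can be sketched as the difference between two failure rates. The platforms and counts below are hypothetical examples, not the actual LeoInsights calculation:

```python
# Illustrative sketch of a platform-specific failure-rate gap for one feature.
# Platform names and execution counts are hypothetical examples.

def failure_rate(results):
    """Fraction of test executions that failed."""
    return sum(r == "fail" for r in results) / len(results)

android = ["fail"] * 4 + ["pass"] * 6    # feature X on Android: 4/10 fail
overall = ["fail"] * 5 + ["pass"] * 35   # feature X on all platforms: 5/40 fail
gap = failure_rate(android) - failure_rate(overall)  # 0.4 - 0.125
```

A large positive gap for one platform suggests the feature fails disproportionately there.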

What To Do

When you see this insight, check whether the workspace is aiming for balanced coverage across platforms. If that is the goal, review which tests are being run to make sure each feature is covered in a comparable way across platforms. Try rotating tests of the features across platforms if you can't test them all at once. You want to avoid missing issues because only one platform is being tested.

If you find that coverage is already balanced, bring up the insight with the client to discuss potential balance issues in the platforms and align on coverage goals.

Low Issue Addressal Rate

Message: Consider reviewing the process for addressing issues.

When It Appears

This insight is triggered when a workspace has fewer of its recent issues marked as addressed than most other workspaces.

This insight could arise for several reasons. For example, perhaps runs are finding more lower-priority issues that the client isn’t prioritizing to address. Or perhaps issues aren’t being raised with the client in a timely manner, for example due to integration problems.

Technical details on when the insight is triggered

This insight is only triggered for workspaces with at least the following:

  • Runs started within the past 60 days

  • As many runs as 75% of other workspaces (at least 2)

  • An acceptance rate for issues from runs at least as high as 75% of other workspaces (at least 80%)

Checks if the workspace has a lower percentage of issues linked to runs that have been addressed than 75% of all workspaces.

What To Do

When you see this insight, start by checking that issues are properly being reported to the client to be addressed. If that process seems to be working, review for alignment on the priority of issues to be reported. If runs result in many issues that the client is not likely to address, consider increasing what is out of scope for the runs or otherwise working to ensure only high-impact issues are reported.

An Issue Results in Many Tests Failing

Message: Review test coverage and consider consolidating tests that cover the same functionality to reduce redundant failures and potential duplicative execution effort.

When It Appears

This insight is triggered when a specific issue is linked to more failing tests than is true for most issues across all workspaces.

This insight could mean that some tests cover the same area multiple times, meaning testing effort is being needlessly duplicated. If the same issue is coming up repeatedly, tests might not have enough variation to cover the product fully. It is also possible that known issues are not being communicated clearly enough to testers, resulting in time spent rediscovering and re-reporting them.

Technical details on when the insight is triggered

This insight is only triggered for workspaces with at least the following:

  • Test executions created within the past 60 days

  • As many failed test executions as 75% of other workspaces (at least 3)

Checks if the workspace has any issues that are linked to more unique failed tests than 95% of all issues from all workspaces.
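The count being compared is the number of distinct failed tests linked to each issue. A minimal sketch, with hypothetical issue and test identifiers (the same counting applies to the blocked-test and passing-test insights below):

```python
# Illustrative sketch: count distinct failed tests linked to each issue.
# Issue and test identifiers are hypothetical examples.
from collections import defaultdict

def unique_failed_tests_per_issue(links):
    """Map each issue to its count of distinct linked failed tests.

    `links` is a list of (issue_id, test_id) pairs from failed executions.
    """
    tests_by_issue = defaultdict(set)
    for issue_id, test_id in links:
        tests_by_issue[issue_id].add(test_id)
    return {issue_id: len(tests) for issue_id, tests in tests_by_issue.items()}

links = [("BUG-1", "T1"), ("BUG-1", "T2"), ("BUG-1", "T2"), ("BUG-2", "T3")]
counts = unique_failed_tests_per_issue(links)  # {"BUG-1": 2, "BUG-2": 1}
```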

What To Do

When you see this insight, make sure that the referenced issue is not a known issue that should be communicated to testers as being out of scope. Review the linked tests to make sure they are not duplicating effort in specific feature areas. Review test plans to make sure coverage is balanced across features and not duplicated in one area.

An Issue Blocks Many Tests

Message: Consider diversifying testing pathways so that a blocker in one functional area doesn't halt validation of unrelated features.

When It Appears

This insight is triggered when a specific issue is linked to more blocked tests than is true for most issues across all workspaces.

This insight could mean that too many tests require the same prerequisite.

Technical details on when the insight is triggered

This insight is only triggered for workspaces with at least the following:

  • Test executions created within the past 60 days

  • As many blocked test executions as 75% of other workspaces (at least 3)

Checks if the workspace has any issues that are linked to more unique blocked tests than 95% of all issues from all workspaces.

What To Do

When you see this insight, check to make sure not all tests have the same prerequisites or that those prerequisites can be met before testing starts. Also adjust test plans to isolate unrelated feature validation where possible.

An Issue Is Linked to Many Passing Tests

Message: Analyze frequently linked issues on passing tests to determine if new tests need to be created to capture potentially unmapped functional areas.

When It Appears

This insight is triggered when a specific issue is linked to more passing tests than is true for most issues across all workspaces.

This insight could mean that tests are not sufficiently broad to capture all features within the product.

Technical details on when the insight is triggered

This insight is only triggered for workspaces with at least the following:

  • Test executions created within the past 60 days

  • As many passing test executions as 75% of other workspaces (at least 3)

Checks if the workspace has any issues that are linked to more unique passing tests than 95% of all issues from all workspaces.

What To Do

When you see this insight, investigate the referenced issue to see if it’s from a feature that needs more thorough testing. Review testing instructions to make sure everyone is aligned on what issues need to be reported or linked.

Many Test Executions with Few Unique Issues

Message: Check whether you are re-running too many low-value tests or focusing on very stable areas. Consider rebalancing toward new flows or edge cases to increase discovery value per execution.

When It Appears

This insight is triggered when a plan within a workspace is run often but results in few issues being reported when compared to plans across all workspaces. It covers tests that haven’t been updated recently.

This insight could mean that testing is overly focused on stable areas of the product. It might also mean that test coverage does not go deep enough to discover high-priority issues.

Technical details on when the insight is triggered

This insight is only triggered for workspaces with at least the following:

  • Test executions created within the past 60 days

  • As many test executions in the past 60 days as 75% of other workspaces (at least )

Checks if the workspace has any plans where the rate of unique issues found per test execution is lower than for 75% of all plans across all workspaces. Only applies to tests whose time since last update is longer than that of 75% of all tests across all workspaces.
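The per-execution discovery rate can be sketched as below. The issue identifiers and execution count are hypothetical examples, not the actual LeoInsights calculation:

```python
# Illustrative sketch: unique issues discovered per test execution for a plan.
# Issue identifiers and the execution count are hypothetical examples.

def discovery_rate(issue_ids, execution_count):
    """Distinct issues found divided by the number of test executions."""
    return len(set(issue_ids)) / execution_count

# 3 unique issues ("A", "B", "C") found across 20 executions of a plan
rate = discovery_rate(["A", "B", "A", "C"], 20)  # 0.15 issues per execution
```

A low rate for a rarely updated plan suggests repeated runs are no longer surfacing new findings.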

What To Do

When you see this insight, review test coverage to ensure you are covering features that are under development and where high-value issues might be discovered. Make sure tests go sufficiently deep into a feature to cover all essential flows and discover any impactful edge cases. Consider introducing exploratory testing or expanding test scenarios to include edge cases and new feature flows.
