Monitor Leo Usage
Written by Aaron Collier
Updated over a month ago

This section of the Runs report offers data on the usage of Leo, an AI-powered chatbot that helps testers quickly get the relevant information they need to complete testing. To see Leo usage data, Leo must first be activated in your workspace.

These metrics provide insight into Leo performance and tester engagement, highlighting Leo activity and the feedback it receives during test runs.

Overview of Leo metrics in the Testlio platform.

Workspaces where Leo is not activated see 0 for all values.

Indicator

What It Tracks

What Signals It Might Bring

Leo Active in Runs

The total number of test runs where Leo was in use during the period. Automatically compares with the number during the prior period.

Low activity is a signal to look into how to grow Leo adoption in your workspace.

Leo Answers

The total number of answers provided by Leo during the period. Automatically compares with the number during the prior period.

This metric reflects Leo contributions to resolving questions or issues.

  • More answers suggest increased reliance on Leo for resolving questions. Monitor this to assess how well Leo supports test processes and identify opportunities for further integration.

  • Few answers may signal a need to check with your Test Leads about the questions testers are currently asking, so you can update the documents Leo draws on.

Leo Answers per Run

The average number of answers given by Leo in each test run. Automatically compares with the average during the prior period.

This metric provides insight into the frequency of Leo use.

  • Frequent answers per run may indicate a high volume of queries or complexity in testing. Review this to ensure Leo responses are addressing tester needs effectively.
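For illustration, the calculation behind this metric can be sketched as follows. The data shape and field names below are assumptions for the example, not Testlio's actual API.

```python
# Hypothetical sketch of deriving "Leo Answers per Run" from raw run
# data; the "leo_answers" field name is illustrative only.

def answers_per_run(runs):
    """Average number of Leo answers across the given test runs."""
    if not runs:
        return 0.0
    total_answers = sum(run["leo_answers"] for run in runs)
    return total_answers / len(runs)

# Example: three runs this period, two runs in the prior period.
current_period = [{"leo_answers": 4}, {"leo_answers": 2}, {"leo_answers": 6}]
prior_period = [{"leo_answers": 3}, {"leo_answers": 1}]

current_avg = answers_per_run(current_period)  # 4.0
prior_avg = answers_per_run(prior_period)      # 2.0
change = current_avg - prior_avg               # the period-over-period comparison
```

The automatic comparison shown in the report corresponds to the difference between the current and prior period averages, as in the last line above.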

Leo Answers by Feedback Type

A visual categorization of Leo answers based on feedback type (no feedback, positive feedback, negative feedback) over time.

This chart helps in assessing the quality and impact of Leo responses.

  • Positive feedback indicates Leo is providing helpful responses.

  • Negative feedback highlights areas needing improvement.

Review feedback trends regularly to refine Leo accuracy and usefulness.
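The categorization behind this chart amounts to counting answers by feedback label. A minimal sketch, assuming illustrative label values rather than Testlio's actual schema:

```python
from collections import Counter

# Sketch of grouping Leo answers by feedback type, as in the
# "Leo Answers by Feedback Type" chart; labels are illustrative.
answers = ["no feedback", "positive", "no feedback", "negative", "positive"]

counts = Counter(answers)
# counts["no feedback"] == 2, counts["positive"] == 2, counts["negative"] == 1
```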

Answer Engagement Rate

The percentage of Leo answers that received feedback (positive or negative) from testers. Automatically compares with the percentage during the prior period.

This metric helps in evaluating user interaction with Leo.

  • High engagement suggests testers are actively interacting with Leo responses.

  • Low engagement may indicate the need to improve awareness or the relevance of Leo answers.
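As a worked example, the percentage can be computed like this. The feedback values below are assumptions for illustration, not Testlio's data model.

```python
# Illustrative calculation of the Answer Engagement Rate: the share of
# Leo answers that received any tester feedback (positive or negative).
# Feedback values ("positive", "negative", None) are assumed labels.

def engagement_rate(answers):
    """Percentage of answers with positive or negative feedback."""
    if not answers:
        return 0.0
    with_feedback = sum(1 for a in answers if a in ("positive", "negative"))
    return 100.0 * with_feedback / len(answers)

# Three of five answers received feedback -> 60% engagement.
feedback = ["positive", None, "negative", None, "positive"]
rate = engagement_rate(feedback)  # 60.0
```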

Negative Feedback

The number of answers rated negatively by testers. Automatically compares with the number during the prior period.

This metric helps in identifying areas for improvement in Leo functionality.

  • Consistent negative feedback highlights potential gaps in Leo responses. Use this as an opportunity to improve training data or functionality to enhance effectiveness.

Positive Feedback

The number of answers rated positively by testers. Automatically compares with the number during the prior period.

This metric provides insight into Leo effectiveness in delivering helpful responses.

  • An increase signals that Leo is successfully assisting testers. Leverage this to identify strengths and encourage broader use.
