Engagement Report Metrics Glossary

A detailed description of each metric included in the Engagement report, what it measures, and how it is calculated.

Written by Doris Sooläte
Updated over 10 months ago
  • Unless explicitly mentioned, there is no special engagement-level calculation: each metric is aggregated across all of the engagement's workspaces.

  • The Archived workspaces row aggregates data from workspaces that were active within the selected date range but are currently archived.
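As a sketch of the aggregation described above (the data structures here are hypothetical, not the actual schema), a distinct-count metric is rolled up to the engagement level by pooling the underlying IDs from every workspace before counting, rather than by summing per-workspace counts:

```python
# Hypothetical sketch: engagement-level distinct counts pool IDs across
# workspaces first, so an ID appearing in two workspaces is counted once.
workspace_issue_ids = {
    "workspace_a": {101, 102, 103},
    "workspace_b": {103, 104},  # 103 also appears in workspace_a
}

# Union all workspace ID sets, then take the distinct count.
engagement_issues = len(set().union(*workspace_issue_ids.values()))
print(engagement_issues)  # 4, not 3 + 2 = 5
```

If IDs are unique to a single workspace, this pooled count equals the simple sum; pooling first is the form that stays correct either way.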

Issues
What it measures: The total approved issues reported from manual and automated test Runs.
How it is calculated: Distinct count of issue IDs by creation date, where the issue is approved, is not deleted, and is not created by an integration.

Runs
What it measures: Manual and automated test Runs started within the selected date range.
How it is calculated: Distinct count of run IDs by start date, where the run is not deleted and has approved hours.

Tests
What it measures: Test executions finished within the selected date range. A single test is counted once for each time it is executed.
How it is calculated: Distinct count of test executions within Runs, where the test execution is finished.

Devices/OS
What it measures: Unique tested device/OS combinations.
How it is calculated: Distinct count of device/OS combinations related to approved tasks within test Runs.

Testers
What it measures: Unique QA testers who participated in test Runs started within the selected date range.
How it is calculated: Distinct count of QA tester IDs (the user ID associated with an approved task) by task start date.

Issues per Hour
What it measures: The average number of issues per hour of manual testing within the selected date range.
How it is calculated: Sum of approved Issues (see above) divided by the sum of approved manual testing hours on Runs started within the selected date range.

Locations
What it measures: Unique countries included in test tasks finished within the selected date range.
How it is calculated: Distinct count of country names of testers (see above) allocated to finished tasks for test executions where hours have been approved.
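The Issues per Hour definition is the one metric above that divides two aggregates. As a sketch under assumptions (the record layout and field names below are hypothetical), it combines the approved-issue count with the approved manual hours of Runs started in the selected range:

```python
# Hypothetical sketch of Issues per Hour: approved issues divided by the
# approved manual testing hours of runs started within the date range.
from datetime import date

runs = [
    {"start": date(2024, 5, 2), "approved_manual_hours": 12.0},
    {"start": date(2024, 5, 20), "approved_manual_hours": 8.0},
    {"start": date(2024, 6, 3), "approved_manual_hours": 5.0},  # outside range
]
# Assumed already filtered per the Issues definition (approved, not deleted,
# not created by an integration, created within the range).
approved_issue_ids = {"ISS-1", "ISS-2", "ISS-3", "ISS-4", "ISS-5"}

range_start, range_end = date(2024, 5, 1), date(2024, 5, 31)
hours = sum(r["approved_manual_hours"] for r in runs
            if range_start <= r["start"] <= range_end)

issues_per_hour = len(approved_issue_ids) / hours
print(issues_per_hour)  # 5 issues / 20 hours = 0.25
```

Note that the June run contributes no hours because the metric filters on run start date, matching the Runs definition above.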
