Define Metadata for Test Cases

Learn what metadata you need to add to your test cases.

Written by Aaron Collier
Updated over 9 months ago

The exact usage of test metadata can depend on the workspace/engagement. This page presents an example as one possible option.

Title

The test case title should indicate the feature (or a logical grouping of the functionality under test) with any additional specifications about the testable function or flow. The title is used to give quick information to stakeholders reviewing the test list.

The naming convention for titles is as follows:

[Tag][Main feature under test][Optional: Sub-feature under test] Description of the test case

| Convention | Description |
| --- | --- |
| Tag | Gives quick information about the type of test case, for example: [F] function-based, [S] scenario-based, [E] embedded test. |
| Main feature under test | The name of the main functionality. Must be aligned with a mindmap node. |
| Sub-feature under test | Optional: leave out if the test case only relates to the main feature. Must be aligned with a mindmap node. |
| Description of the test case | Short assertive description of the coverage, also mentioning any constraints. |

Example Title

[F][Tests][Export] Export multiple selected tests
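The naming convention above is regular enough to check mechanically. As a minimal sketch (the `parse_title` helper and its regular expression are illustrative, not part of any product API), a title can be split into its parts like this:

```python
import re

# Hypothetical helper: parse a title following the
# [Tag][Main feature][Optional: Sub-feature] Description convention.
TITLE_RE = re.compile(
    r"^\[(?P<tag>[FSE])\]"              # [F], [S], or [E]
    r"\[(?P<feature>[^\]]+)\]"          # main feature under test
    r"(?:\[(?P<subfeature>[^\]]+)\])?"  # optional sub-feature under test
    r"\s*(?P<description>.+)$"          # free-text description
)

def parse_title(title):
    """Return the title's parts as a dict, or None if it breaks the convention."""
    match = TITLE_RE.match(title)
    return match.groupdict() if match else None

parts = parse_title("[F][Tests][Export] Export multiple selected tests")
# parts == {"tag": "F", "feature": "Tests", "subfeature": "Export",
#           "description": "Export multiple selected tests"}
```

A check like this could run when titles are imported in bulk, flagging any test case whose title cannot be parsed.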

Platforms

When creating a test case, it is essential to specify the platforms (including operating systems, devices, browsers) that the test case applies to. Only include those platforms that are relevant to the specific test case. If the test case is applicable to all platforms, it should be marked as such.

Priority

Test case priority is determined based on the potential risk associated with failure of the functionality being tested. A higher priority indicates a higher business risk if the tested feature doesn't perform as expected or encounters issues.

  • Multiple Test Cases for the Same Functionality: You can have multiple test cases covering the same functionality, each assigned different priorities. Higher priority test cases focus on critical functionalities that pose significant business risks if they fail. Lower priority test cases extend the coverage of higher priority test cases by exploring additional scenarios or edge cases.

  • Input for Test Coverage Decisions: Test case priorities serve as valuable inputs when deciding on the coverage for different testing needs. For example, a smoke test might only contain high priority test cases and extended regression testing might include test cases with all three priorities.
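The coverage decision described above can be sketched as a simple filter over test cases. The field names (`title`, `priority`) are assumptions for illustration, not a real test-management API:

```python
# Illustrative sketch: use priorities to assemble different test runs.
test_cases = [
    {"title": "[F][Login] Valid credentials", "priority": "High"},
    {"title": "[F][Login] Expired password hint", "priority": "Medium"},
    {"title": "[F][Profile] Avatar tooltip text", "priority": "Low"},
]

def select(cases, priorities):
    """Keep only the test cases whose priority is in the given set."""
    return [case for case in cases if case["priority"] in priorities]

smoke_run = select(test_cases, {"High"})
extended_regression = select(test_cases, {"High", "Medium", "Low"})
# smoke_run contains 1 case; extended_regression contains all 3
```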

| Priority | What failure means | Execution criteria |
| --- | --- | --- |
| High | Unacceptable loss of functionality from a business perspective. Users might be blocked from completing essential activities. | High priority tests should be executed before any deployment to production and should be part of all regression runs. |
| Medium | A loss in functionality that would potentially disrupt but not fully block relevant stakeholders. | Medium priority tests are candidates for regularly scheduled regression execution. |
| Low | A minor loss of functionality that can cause some inconvenience to stakeholders but would not significantly affect their main actions. | Low priority tests are candidates for one-time execution (such as on new feature releases) and/or for execution in scenarios where extended test coverage is required. |

In addition, clients can decide whether any of the test cases rise to the level of release criteria (meaning the product isn’t released if any given test cases fail).

Execution Time

Net execution time is the actual time, in minutes, that we expect the tester to spend executing a test case, excluding any time spent reporting potential issues.

Guidelines for Scripted Tests

For scripted tests, measure net execution time by recording how long it takes to run the test steps and validate the expected results. Alternatively, take a simpler test as a baseline and estimate how much more complex and lengthy another test is relative to it. For example, in a social media app, a log-in test covering a few negative scenarios and one positive scenario might be estimated at 5 minutes; if the messaging feature is 3x as complex and lengthy, a messaging feature test could be estimated at 15 minutes.
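The baseline approach is simple multiplication. A sketch using the example numbers from the text (the values themselves are the example, not a prescribed formula):

```python
# Baseline-multiplier estimate: scale a known test's duration by a
# judged complexity factor for the test being estimated.
baseline_minutes = 5    # log-in test measured/estimated at 5 minutes
complexity_factor = 3   # messaging feature judged 3x as complex and lengthy

estimate = baseline_minutes * complexity_factor
# estimate == 15 minutes for the messaging feature test
```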

Guidelines for Exploratory Tests

Since exploratory tests are open-ended, testers could spend several hours exploring. Set a timebox for every exploratory test and write the limit explicitly in the test body so testers can see it. For example: "Timebox testing to 20 minutes."

Feature

The main feature of a test case is the primary functionality or aspect being tested. Identify sub-features as well to give test cases a hierarchical structure under the parent feature. This hierarchy allows quick and precise selection of relevant coverage, whether targeting a specific sub-feature or all test cases related to the main feature.

All feature names must correspond to mindmap nodes (and the mindmap should use the same naming as the product).
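The hierarchical selection described above can be sketched as follows. The `(main_feature, sub_feature)` pair and the helper name are assumptions for illustration, mirroring how mindmap nodes might be attached to test cases:

```python
# Hypothetical sketch: select coverage by feature hierarchy.
cases = [
    {"title": "Export multiple selected tests", "feature": ("Tests", "Export")},
    {"title": "Create a test from scratch", "feature": ("Tests", "Create")},
    {"title": "Invite a teammate", "feature": ("Members", None)},
]

def under(cases, main, sub=None):
    """All cases for a main feature, or only one sub-feature of it."""
    return [
        case for case in cases
        if case["feature"][0] == main
        and (sub is None or case["feature"][1] == sub)
    ]

len(under(cases, "Tests"))            # 2: everything under the main feature
len(under(cases, "Tests", "Export"))  # 1: only the Export sub-feature
```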

Labels

Labels are optional but valuable for categorizing and filtering test cases. They provide additional information for grouping and organizing test cases based on specific criteria. By using labels, coordinators can quickly identify and retrieve test cases that share common characteristics or attributes.

Agreement on Basic Labels

To ensure consistency and effectiveness, agree on a basic set of labels when starting to write test cases in a particular workspace. A standard set of labels enables precise filtering and grouping of test cases, making it easier for coordinators to manage and organize test case repositories efficiently.
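Label-based filtering can be sketched as a set operation. The label names and field names below are illustrative, not a mandated set:

```python
# Sketch: labels as a set per test case enable quick filtering.
cases = [
    {"title": "Smoke login", "labels": {"Function", "Automated"}},
    {"title": "Checkout happy path", "labels": {"Scenario"}},
]

def with_labels(cases, required):
    """Keep only the test cases carrying every required label."""
    return [case for case in cases if required <= case["labels"]]

automated = with_labels(cases, {"Automated"})
# one matching case: "Smoke login"
```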

An example:

| Label category | Description | Values |
| --- | --- | --- |
| Test case type | Identifies the test case type: scenario-based, function-based, or embedded. | Scenario, Function, Embedded |

Automated Flag

Set the "Automated" flag to "Yes" for test cases that have been automated.
