In most cases Testlio uses two types of task lists:

  • Structured exploratory or regression task lists, where you choose whether each step passed, failed, or could not be tested.
  • Exploratory task lists, where you fill out a text box describing your testing strategy.

(There can also be other task list forms, in which case the instructions are included in the descriptions.)

To make sure that your testing goes well and the client gets the best overview of the results, there are a couple of rules we'd like to draw your attention to.

1. Choosing "Pass", "Fail" or "Unable to test" in structured tests

In structured tests, such as regression or structured exploratory task lists, you can choose one of three options: "Pass", "Fail", or "Unable to test". You can leave feedback on a test case with any of these results.

1.1 Choose "Pass": when the functionality works properly, even if you found minor related issues

For example when:

  • User can log in.
  • User can log in, but there’s a misaligned button. 
  • User can log in, but you found some login interface issues in the forgot-password flow.

Feedback options

  • Add minor issue(s). Use this when, while executing the test case, you found an issue related to the feature under test, but it does not block the feature itself from working. Submit the issue on the platform and provide the issue ID in the minor issue field.
  • Add feedback about the test case if you have ideas on how to enhance it.

NB! Selecting “Pass” does not simply mean that you didn’t find bugs; it means the functionality works as expected. A common mistake is selecting “Pass” when you can’t test the feature (and therefore can’t find bugs either). In that case, select “Unable to test” instead, and explain the problem via the comments or "Add feedback about test case".

1.2 Choose "Fail": when the functionality is broken or the actual results don’t fully match the described expected outcome

For example when:

  • User cannot log in; the app shows a blank screen.
  • User reaches the login screen but the action cannot be completed, e.g. Facebook login works, but email login does not.

When that happens, submit the issue and provide the issue ID in the field. If the bug was reported previously, add a comment to the existing issue confirming that it still reproduces, and report its issue ID as usual.

Feedback options:

  • Report the issue ID, for example #85589. All issue-related information should be included only in the issue itself.
  • Add feedback about the test case if you have ideas on how to enhance it.

NB! A common mistake is selecting “Fail” when the feature described by the test case is functioning, but you found a separate issue while executing it. In that case, select “Pass”, add the minor issue, and/or explain the problem via "Add feedback about test case".

1.3 Choose "Unable to test": when the functionality cannot be tested or you need to leave feedback

Select a reason:

  • Blocked by another issue. There’s a bug that blocks testing the feature completely, e.g. the video player crashes while loading, so it’s not possible to test the play and pause functionality. If that happens, link the blocking issue ID in the comments. All issue-related information should be included only in the issue itself.
  • Functionality out of scope
  • Don't understand. Choose this if the test case description is too unclear to test and you did not receive help from the Test Lead.
  • Another reason. If none of the criteria above apply, specify the reason via a comment, for example: "Can't log in because of staging environment problems."

You can also add feedback about the test case to help enhance it.

NB! Commenting on why the functionality can’t be tested is important: it highlights the parts of the application that could not be covered and builds trust.

If you're still unsure or in doubt about reporting a specific result, please ask in the project chat.

2. Describing exploratory testing results

With exploratory tests, testers should independently explore specific areas of the app using their judgment and creativity. Because user scenarios and testing strategies are not specified in the tests, testers must describe their work process and result in the text area provided.

That includes:

  1. All scenarios and areas covered, in a brief, well-structured form
  2. Any issues found
  3. Any important notes that emerged

Example 1:

Task: Posting and commenting exploratory > Admin adding posts

Tester results:

Add new post with text only = OK

Add new post with photos = OK

Try to add post without internet connection = Appropriate notification is shown

Try to open added photos to the post = unable: #30066

Post > add photos > remove photos before posting = OK

Add photos > Hold on photo > Minimize app = It freezes after resuming: #30078

Example 2:

Task: Exploratory testing of the whole app for the remainder of task list time

Tester results:

Profile > Photos > Delete Photos = #27450

Feed > New Activity > Add Photos = #27457

Profile > Edit Data = OK

Profile > Activities > Turn off WiFi > Scroll down Activities screen to upload more activities = #41152

Contacts > Follow = OK

Invite Friends = OK

Comments: Generally, the UI for posting and managing comments is easy to use. Could be a bit more responsive though.
