This document outlines the standard pull request (PR) review process. The long-term aim is a fully automated review process, built on tooling being explored under the Immediate Value Workstream of the Kickstart Automation to Unlock Fused priority endeavor. For now, we still rely on manual reviews.
Following this process is a requirement for our freelancers; failure to do so may constitute a breach of the Freelancer Services Agreement.
Review Timelines
Reviewers must complete the initial review of a pull request by the end of the second business day after submission. This timeline is critical to maintaining project momentum and respecting Quality Engineers' work. If you cannot meet the deadline due to workload or other commitments, immediately reassign the PR to another available reviewer and notify your manager of the reassignment. The two-day review window applies to all standard PRs, with no exceptions unless formally approved by team leadership.
Setup
Open the test case: You will verify the test script against the corresponding test case on http://app.testlio.com.
Open coding guidelines: The language-specific guide serves as your checklist: https://testlions.atlassian.net/wiki/spaces/SER/pages/5008392229/Automation+Coding+Guidelines.
Set up your IDE: Use an IDE with IntelliSense and plugins like ESLint/TSLint or Prettier.
Utilize GitHub tools: Use GitHub's built-in PR review tools (diff view, review comments, suggested changes).
Checkout and test: Checkout the branch, pull changes, configure environment settings, and perform a test run.
Verify repository files: Check the versions of AppObjectRepository.xlsx and Suites.xlsx. Roll back any local changes before pulling the latest version, or download the files directly from the PR on GitHub.
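The checkout and rollback steps above can be sketched with plain git commands. This is an illustrative sketch only: a scratch repository and dummy file contents stand in for the real automation repository.

```shell
# Illustrative only: a scratch repo stands in for the automation repo.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email reviewer@example.com
git config user.name "Reviewer"
echo "v1" > AppObjectRepository.xlsx           # stand-in for the real workbook
git add . && git commit -qm "initial version"
echo "local edit" >> AppObjectRepository.xlsx  # simulate an accidental local change
git checkout -- AppObjectRepository.xlsx       # roll back the local change before pulling
cat AppObjectRepository.xlsx                   # prints: v1
```

After the rollback, `git pull` (or re-downloading the files from the PR) brings in the latest versions without conflicts from stray local edits.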
Familiarize yourself with general PR review practices: Google's Engineering Practices documentation is one good starting point.
Perform the Review
Validate PR Submission Correctness
Verify branching correctness: Ensure the PR follows Testlio's guidelines for branching and PR creation:
Staging branch is used
Branch is named with test case ID
No more than 2 scripts in a single PR
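The branch-naming rule above can be checked mechanically. A minimal sketch, assuming a `TC-<digits>` test case ID convention; in practice the branch name would come from the PR header or `git rev-parse --abbrev-ref HEAD`.

```shell
# Hypothetical branch-name check; the "TC-<digits>" prefix is an assumed convention.
branch="TC-1042-login-flow"    # example name; take the real one from the PR
case "$branch" in
  TC-[0-9]*) echo "branch name OK" ;;
  *)         echo "branch name is missing a test case ID" ;;
esac
```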
Validate Alignment with Scripting Guidelines
Validate adherence to every requirement provided in https://testlions.atlassian.net/wiki/spaces/SER/pages/5008392229/Automation+Coding+Guidelines
Validate Code Execution
Validate the following criteria:
Does the project build without any errors?
Does the test script execute?
Does the test script report the results at the end of the execution?
Are the test results generated?
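The criteria above can be checked mechanically after a local run. In the sketch below, the build/run commands and the `test-results/` path are assumptions about the project layout, and a stub result file stands in for a real execution.

```shell
# Hypothetical post-run checks; command names and paths are assumptions.
set -e
# A real run would be something like:
#   npm ci && npm run build && npm test
# Here a stub stands in so the result check below is demonstrable.
mkdir -p test-results
echo '{"passed": 3, "failed": 0}' > test-results/summary.json
# Were the test results generated and reported?
if [ -s test-results/summary.json ]; then
  echo "results generated"
else
  echo "results missing"
fi
```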
Conclude the Review
Leave Feedback:
Be constructive and straightforward in your feedback; every PR takes time, and time is money.
The goal is also to coach QEs and Scripters to do better next time.
Provide clear comments with reasons, examples, links, and references.
Offer suggestions instead of enforcing changes.
Appreciate good solutions with positive comments.
Explain your disagreements and provide alternatives.
Attach test results, logs, stack traces, or any relevant data.
Reply to every comment, even with a simple acknowledgment.
Utilize pair review and involve colleagues when needed.
Submit your review by clicking the "Submit Review" button.
Handling Common Conflicts
Merge Conflicts
In the past, we have minimized merge conflicts for freelancers through planned work allocation:
Platform-based team division: Divide freelancer groups into iOS and Android teams with separate test case assignments.
Phased reallocation strategy: Once platform-specific scripts are completed, switch assignments at 50% capacity to adapt completed scripts for the additional platform.
Feature-based grouping: Structure assignments based on feature groups to prevent conflicts between teams working on similar functionalities.
Capacity planning: Carefully plan assignment distribution to optimize freelancer productivity and reduce governance efforts.
This approach significantly accelerates progress across both platforms while minimizing merge conflicts through strategic work allocation.
Test Data Conflicts
To avoid test data conflicts:
Group prerequisite-dependent scripts: Ensure that prerequisite-specific test scripts are grouped within a single test suite, allowing them to be executed on a single instance.
Enforce data lifecycle management: Always implement data generation at the start and cleanup at the end of each test execution.
State validation: Verify the application state before test execution to ensure no residual data remains from previous runs.
Roll back changes: Ensure all changes are rolled back after test completion to prevent interference with subsequent test runs.
Residual data left behind could lead to conflicts in subsequent script runs, affecting test reliability and results interpretation.
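The lifecycle rules above can be sketched as a thin wrapper around a test run: generate data at the start, and guarantee cleanup at the end even if the run fails. The file-based "test data" below is a placeholder for whatever data the real scripts create.

```shell
# Illustrative data lifecycle: generate at start, clean up at the end, even on failure.
set -e
DATA_FILE=$(mktemp)                    # stand-in for generated test data
cleanup() { rm -f "$DATA_FILE"; }      # rollback step
trap cleanup EXIT                      # cleanup runs even if the test fails midway
echo "test-user-001" > "$DATA_FILE"    # data generation at the start
# ... the actual test script would run here ...
echo "test run finished"
```

Using an EXIT trap (rather than calling cleanup only at the end) is what keeps a failed run from leaving residual data behind for the next script.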