Python Scripting Guidelines

See what standards to maintain while creating test automation using Python.

Written by Aaron Collier
Updated over 2 weeks ago

This page describes the required guidelines for Python test automation scripting, covering both general and Python-specific aspects.

These guidelines are a requirement for our freelancers. Failure to follow them may constitute a breach of the Freelancer Services Agreement.

Code Quality and Maintainability

  • Consistent naming: Ensure all variables, functions, and class names are descriptive and follow a consistent naming convention:

    • Be consistent

    • Avoid special characters

    • Don't use reserved keywords

    • Keep names concise but descriptive

    • Use domain-specific terminology when appropriate

    • Avoid numbers in names unless necessary

    • Don't use Hungarian notation unless required

  • No code duplication: Refactor duplicate code into reusable functions or modules to improve maintainability - follow OOP principles.

  • Single responsibility principle: Verify that each function, class, or module adheres to the Single Responsibility Principle (SRP).

  • Comments: Write comments that clarify the purpose and reasoning behind complex logic, avoiding redundant explanations of the code itself.

    • Examples of comments that should be avoided:

      • Inappropriate Information

        • Comments shouldn't contain system information (commit history, timestamps)

        • Keep metadata in source control systems, not code

        • Avoid redundant documentation better suited for other tools

      • Obsolete Comment

        • Old, irrelevant, or incorrect comments

        • Comments that don't match current code

        • Delete or update outdated comments immediately

        • Comments that drift from code are worse than no comments

      • Redundant Comment

        • Comments that state the obvious

        • Comments that repeat what self-documenting code already makes clear

      • Poorly Written Comment

        • Unclear or ambiguous comments

        • Grammar/spelling errors

        • Comments that require other comments to understand

        • Be brief, precise, and professional

      • Commented-Out Code

        • Dead code left as comments

        • Use version control instead

        • Delete commented code; it can be retrieved from history if needed

        • Creates confusion about code status

  • Commit messages: Ensure commit messages are clear, concise, and follow a consistent format, and use a structured branching strategy aligned with the team’s workflow.

  • Proper grammar and spelling: Ensure that comments, variable names, function names, and commit messages use correct English grammar and spelling for clarity and professionalism.

  • No error masking: Don't mask errors (e.g. broad try/except blocks that swallow exceptions and hide real failures).

  • Follow PEP 8 and PEP 257: Adhere to style and documentation standards.

  • Use type hints: Implement type annotations (def func(x: int) -> str).

  • Prefer dataclasses: Utilize @dataclass for simple data structures.

  • Avoid mutable default arguments: Prevent unintended side effects (def func(lst=[]): is bad practice).

  • Use context managers: Handle resources safely with with open(...) as f:.

  • Leverage f-strings: Prefer f"Hello, {name}!" over concatenation or .format() (see the sketch after this list).
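
A minimal sketch that ties several of these points together (type hints, a @dataclass, avoiding a mutable default argument, a context manager, and an f-string). The UserRecord class and the file path are illustrative only:

from dataclasses import dataclass, field


@dataclass
class UserRecord:
    """Simple data structure for test input (illustrative only)."""
    user_id: int
    name: str
    roles: list[str] = field(default_factory=list)  # avoids a mutable default argument


def describe_user(user: UserRecord) -> str:
    """Return a readable summary of a user, using type hints and an f-string."""
    return f"User {user.user_id} ({user.name}) has roles: {', '.join(user.roles) or 'none'}"


def load_fixture_text(path: str) -> str:
    """Read a test data file with a context manager so the handle is always closed."""
    with open(path, encoding="utf-8") as f:
        return f.read()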

Coding Structure and Organization

  • Modular organization: Organize test code into modular, maintainable components.

  • Page Object Model (POM): Use the Page Object Model (POM) for UI automation (see the sketch after this list).

  • API testing: Use an API Object Model (AOM), which follows the same principle as POM by encapsulating API request logic into reusable objects.

  • Database testing: Use the Repository Pattern, which abstracts database queries into reusable methods and makes database interactions easier to test.

  • Secrets: Store secrets in environment variables. Don't commit environment files (e.g. .env) to git, to avoid leaking secrets.

  • Configuration management: Use configuration files for values that may change between environments.

  • Store non-sensitive test data externally: Move test data to static files.

  • Error handling: Implement robust error handling mechanisms to catch and log unexpected failures without causing false negatives.

  • Parametrize instead of duplicating: Parametrize tests to support multiple input values and improve coverage. Do not duplicate tests.

  • Maintain responsibility separation between tests: Tests should not be interdependent. To promote code reusability, separate common functionality into its own scope (POM, test file).

  • Use __init__.py: Define package structure where necessary.

  • Prefer absolute imports: Avoid relative imports for maintainability.

  • Use pytest fixtures: Implement @pytest.fixture for setup/teardown logic.

  • Organize test files in tests/ directory: Keep test code separate from application code.

  • Use Python logging: Replace print() statements with logging for better control.
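
A minimal sketch, assuming Selenium WebDriver and pytest, of how a small Page Object, a fixture, and a parametrized test might fit together. All names (LoginPage, the locators, the URL, and the credentials) are illustrative only:

# tests/test_login.py (illustrative layout)
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Page Object encapsulating locators and actions for the login screen."""

    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "submit")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username: str, password: str) -> None:
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()


@pytest.fixture
def driver():
    """Setup/teardown for the browser session."""
    driver = webdriver.Chrome()
    yield driver
    driver.quit()


@pytest.mark.parametrize("username", ["standard_user", "admin_user"])
def test_login_succeeds(driver, username):
    driver.get("https://example.com/login")  # illustrative URL
    LoginPage(driver).log_in(username, "not-a-real-password")
    assert "dashboard" in driver.current_url, (
        f"Expected user {username} to land on the dashboard, got {driver.current_url}"
    )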

Mapping to Manual Test

  • Linking the manual test: Adding a link to the manual test is mandatory.

  • Flow matching: Every step in the script has to match a test step in the manual test.

    • If an underlying manual test does not allow clear assertions to be written (it contradicts our test writing guidelines - e.g. it has exploratory steps), go back to QEL/TM to get the test fixed first.

  • Form: The Pass/Fail steps commonly follow an Action/Expected Result format that should be replicated in your code. Actions commonly translate to calling a method of a POM, while Expected Results are the assertions performed to confirm the action achieved the expected outcome.

    • To promote reusability, common steps can be grouped together, at the test level, in functions/methods.

    • Make use of class inheritance to share common functionality between tests and POMs (e.g. have a class Widget that is extended by all widget components).

  • Examples:

Manual test:
Navigate to Terms & Conditions
The T&C content should be visible and readable.

Automated code (pseudo-code):

// Step
Allure.step('Navigate to Terms & Conditions') {
  // Action
  termsPage = homePage.clickTerms()
  // Expected Result
  assert(termsPage.header).isVisible('T&C page was not loaded')
  visuallyAssert(termsPage.screenshot())
    .comparesWith(expectedContent.png)
    .threshold(15)
    .message('T&C content is not visible/readable')
}

Manual test:
Upload a picture of a document
A toast message appears indicating success.

Automated code (pseudo-code):

// Step
Allure.step('Upload a picture of a document') {
  // Action
  uploadWidget.upload(mockdoc.pdf)
  // Expected Result
  assertExists(uploadWidget.toastSuccess).message('Toast message was not displayed')
}

Manual test:
Do not enter any name, then click OK button
The field is highlighted in red, and the user cannot proceed.

Automated code (pseudo-code):

// Step
Allure.step('Do not enter any name, then click OK button') {
  // Actions
  formWidget.enterName("")
  formWidget.clickOk()
  // Expected Results
  // - Ensure the user wasn't navigated away from the form.
  assert(formPage.header).isNotVisible('User was able to proceed without a name')
  // - Check the color of the highlighted field
  visuallyAssert(formWidget.nameField.screenshot())
    .comparesWith(redHighlight.png)
    .threshold(0)
    .message('Incorrect highlighting when name is not entered')
}
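
For reference, a Python/pytest sketch of the first example above, assuming allure-pytest is installed. The home_page fixture and the click_terms/header_is_visible methods are hypothetical page-object members, not a prescribed API:

import allure


def test_terms_and_conditions(home_page):
    # Step
    with allure.step("Navigate to Terms & Conditions"):
        # Action
        terms_page = home_page.click_terms()
        # Expected Result
        assert terms_page.header_is_visible(), "T&C page was not loaded"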

Assertions

  • Clear assertions: Write descriptive failure messages that explain what was expected, show what actually occurred, and help identify the root cause. Generic messages such as 'Test failed' or 'Assert true' are not acceptable. You should also include relevant context and values in failure messages:

    • Example

      Instead of:

      assert(user.isActive)

      Write (pseudo-code):

      assert(user.isActive,
        `Expected user ${user.id} to be active, but status was ${user.status}`)

      Python:

      assert user.is_active, \
          f"Expected user {user.id} to be active, but status was {user.status}"
  • Do not over-rely on hard assertions: Use hard assertions only for critical paths (e.g., checking if the navigation button is present) to ensure execution stops if they fail, as continuing would be pointless. Use soft assertions everywhere else.

  • Use assertions in your tests, not in objects: The assertions are used to test if the expected behavior was achieved, and should be linked to the action being performed. Make sure the assertions are located in the test file, not on the page object models.

    • Keep SRP (Single Responsibility Principle) when designing the test files

    • Use class inheritance to avoid code duplication when multiple tests cover similar functionality/steps

      • Take advantage of Allure annotations/methods to encapsulate and reuse logic

    • Keep test files at a consistent abstraction level (e.g. don’t mix POMs with direct calls to driver functions)

  • Use pytest assertions: Prefer assert actual == expected, 'Failure message' over unittest assertions.

  • Utilize pytest.raises: Test expected exceptions efficiently (see the sketch after this list).

  • Create custom matchers: Improve assertion clarity and reusability.
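
A minimal sketch of a pytest-style assertion with a descriptive failure message, plus pytest.raises for an expected exception. The validate_email helper is hypothetical and only stands in for the code under test:

import pytest


def validate_email(address: str) -> str:
    """Hypothetical helper under test: raises ValueError for malformed addresses."""
    if "@" not in address:
        raise ValueError(f"Invalid email address: {address}")
    return address.lower()


def test_valid_email_is_normalized():
    actual = validate_email("QA@Example.com")
    assert actual == "qa@example.com", (
        f"Expected normalized address 'qa@example.com', got '{actual}'"
    )


def test_invalid_email_raises():
    # pytest.raises both expects the exception and checks its message.
    with pytest.raises(ValueError, match="Invalid email address"):
        validate_email("not-an-email")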

Locator Strategy

By default, the Testlio team should work with developers to introduce strong locators that can then be used to automate the tests.

  • Stable locators: Select locators that are unique, stable, and resilient to UI changes (e.g. accessibility IDs, resource IDs, and other similar, intentional, values).

  • Avoid text-based locators: Minimize reliance on text-based locators as they are prone to changes; prefer attribute-based or structural locators instead.

  • Use robust selectors: Prioritize IDs and data-test attributes over dynamic class names or element indexes.

  • Consistent locator strategy: Maintain a structured approach to defining and managing locators, within a centralized repository or Page Object Model.

  • Framework-specific locator practices: Refer to the framework-specific locator strategy guidelines.

  • Use Selenium or Playwright locators: Use the appropriate framework API (webdriver.find_element in Selenium or the Playwright locator API); see the sketch after this list.
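
A minimal sketch of attribute-based locators in both frameworks, assuming Selenium and Playwright for Python. The element IDs and data-testid values are illustrative only:

from selenium.webdriver.common.by import By
from selenium.webdriver.remote.webdriver import WebDriver
from playwright.sync_api import Page


def selenium_locators(driver: WebDriver):
    """Prefer stable IDs and data-test attributes over text or element indexes."""
    submit_button = driver.find_element(By.ID, "submit-order")  # illustrative ID
    status_badge = driver.find_element(
        By.CSS_SELECTOR, "[data-testid='order-status']"  # illustrative attribute
    )
    return submit_button, status_badge


def playwright_locator(page: Page):
    """The Playwright locator API resolves elements lazily and auto-waits on use."""
    return page.get_by_test_id("checkout-button")  # illustrative test id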

Performance and Scalability

  • Efficient waiting: Use dynamic waits to synchronize test execution with application state instead of relying on fixed delays.

  • Independency:

    • Design tests to be independent, ensuring they leave no residual state that affects subsequent executions.

    • Structure tests so that each test can run independently without reliance on other tests.

    • If an underlying manual test violates these principles, go back to QEL/TM to get the test fixed first.

  • Parallel execution: Ensure test scripts are structured to allow safe and efficient parallel execution.

  • Retry mechanism: Implement a retry strategy to improve test reliability while ensuring it does not mask underlying issues.

    • Allure: Allure results should only list one attempt.

    • Limit retries: Avoid excessive retries (e.g., 2-3 attempts) to prevent masking genuine issues.

    • Log retry reasons: Record logs to analyze why retries are triggered and proactively resolve flaky tests.

    • Use selectively: Apply retries only to tests prone to transient failures; don’t overuse for stable test cases.

    • Combine with fail-fast strategy: Abort retries if a critical failure occurs to save execution time.

  • Storage of locators/suites: The default choice should be JSON; for Playwright/Cypress, Page Classes. Other exceptions require separate approval from QEM. Excel should not be used.

  • Use pytest-xdist: Run tests in parallel (pytest -n auto).

  • Prefer explicit waits: Use WebDriverWait instead of sleep() (see the sketch after this list).

  • Leverage async execution: Utilize Python’s asyncio when applicable.

  • Optimize resource usage: Implement lazy initialization for better performance.
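
A minimal sketch of an explicit wait with Selenium and a bounded, logged retry, assuming the pytest-rerunfailures plugin and a driver fixture like the one sketched earlier. The locator, URL, and timeout are illustrative only:

import pytest
from selenium.webdriver.common.by import By
from selenium.webdriver.remote.webdriver import WebDriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def wait_for_dashboard(driver: WebDriver) -> None:
    """Explicit wait: synchronize on application state instead of time.sleep()."""
    WebDriverWait(driver, timeout=10).until(
        EC.visibility_of_element_located((By.ID, "dashboard-header"))  # illustrative locator
    )


# Bounded retry for a test known to suffer transient failures
# (requires the pytest-rerunfailures plugin; keep reruns low to avoid masking issues).
@pytest.mark.flaky(reruns=2, reruns_delay=1)
def test_dashboard_loads(driver):
    driver.get("https://example.com/dashboard")  # illustrative URL
    wait_for_dashboard(driver)

Such tests can then be run in parallel with pytest-xdist, e.g. pytest -n auto.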

Data Handling

  • Independent data: Use data that is unique to the current test run so that the same test can execute in parallel (see the sketch after this list).

  • Data integrity: Use proper test data setup and teardown mechanisms to maintain data integrity across test runs.

  • Avoid races: When tests access the same data set or files, use appropriate techniques (e.g. semaphores) to avoid race conditions.

  • Use factory-based test data: Implement factory_boy for dynamic test objects.

  • Prefer pytest fixtures: Manage test data with @pytest.fixture.

  • Utilize structured data formats: Store test data in JSON or YAML.
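
A minimal sketch of unique per-run data via fixtures, a factory_boy factory for dynamic objects, and externally stored static test data. The UserRecord model, field names, and JSON path are illustrative only:

import json
import uuid
from dataclasses import dataclass
from pathlib import Path

import factory
import pytest


@dataclass
class UserRecord:
    username: str
    email: str


class UserFactory(factory.Factory):
    """factory_boy factory producing a fresh, unique user per call."""

    class Meta:
        model = UserRecord

    username = factory.Sequence(lambda n: f"qa_user_{n}")
    email = factory.LazyAttribute(lambda obj: f"{obj.username}@example.com")


@pytest.fixture
def unique_order_id() -> str:
    """Unique value per test run so parallel executions never collide."""
    return f"order-{uuid.uuid4()}"


@pytest.fixture
def static_test_data() -> dict:
    """Non-sensitive, static test data kept in an external JSON file (illustrative path)."""
    return json.loads(Path("tests/data/orders.json").read_text(encoding="utf-8"))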

Logging and Reporting

  • No logging of credentials or client-sensitive data: Do not capture credentials or any other sensitive client data in the logs.

  • No excessive screenshot creation: You can add extra screenshots per step temporarily for debugging, but not as a permanent solution.

  • Use Python’s logging module: Replace print() with configurable logging (see the sketch after this list).

  • Configure log levels: Use DEBUG, INFO, WARNING, ERROR appropriately.

  • Integrate Allure reports: Enhance reporting with pytest-allure.
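
A minimal sketch of configurable logging with appropriate levels, plus an Allure attachment on failure, assuming allure-pytest is installed. The logger name, order ID, and message contents are illustrative and deliberately contain no credentials:

import logging

import allure

logger = logging.getLogger("checkout_tests")  # illustrative logger name
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)


def log_checkout_attempt(order_id: str, success: bool) -> None:
    """Log the outcome at the right level; never log credentials or client-sensitive data."""
    if success:
        logger.info("Checkout completed for order %s", order_id)
    else:
        logger.error("Checkout failed for order %s", order_id)
        # Attach supporting evidence to the Allure report instead of flooding the log.
        allure.attach(
            f"Checkout failed for order {order_id}",
            name="checkout-failure",
            attachment_type=allure.attachment_type.TEXT,
        )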

Security and Compliance

  • AI usage for coding: If a customer has opted out from AI usage, do not use AI-based tools (like Copilot) for script creation.
