This page presents the required guidelines for TypeScript test automation scripting, covering both general and TypeScript-specific aspects.
This article describes a requirement for our freelancers. Failure to follow the process may constitute a breach of the Freelancer Services Agreement.
Code Quality and Maintainability
Consistent naming: Ensure all variables, functions, and class names are descriptive and follow a consistent naming convention:
Be consistent
Avoid special characters
Don't use reserved keywords
Keep names concise but descriptive
Use domain-specific terminology when appropriate
Avoid numbers in names unless necessary
Don't use Hungarian notation unless required
No code duplication: Refactor duplicate code into reusable functions or modules to improve maintainability - follow OOP principles.
Single responsibility principle: Verify that each function, class, or module adheres to the Single Responsibility Principle (SRP).
Comments: Write comments that clarify the purpose and reasoning behind complex logic, avoiding redundant explanations of the code itself.
Examples of comments that should be avoided:
Inappropriate Information
Comments shouldn't contain system information (commit history, timestamps)
Keep metadata in source control systems, not code
Avoid redundant documentation better suited for other tools
Obsolete Comment
Old, irrelevant, or incorrect comments
Comments that don't match current code
Delete or update outdated comments immediately
Comments that drift from code are worse than no comments
Redundant Comment
Comments that state the obvious
Unnecessary when the code is already self-documenting
Poorly Written Comment
Unclear or ambiguous comments
Grammar/spelling errors
Comments that require other comments to understand
Be brief, precise, and professional
Commented-Out Code
Dead code left as comments
Use version control instead
Delete commented code; it can be retrieved from history if needed
Creates confusion about code status
Commit messages: Ensure commit messages are clear, concise, and follow a consistent format, and use a structured branching strategy aligned with the team’s workflow.
Proper grammar and spelling: Ensure that comments, variable names, function names, and commit messages use correct English grammar and spelling for clarity and professionalism.
No error masking: Don't swallow errors (e.g. with empty try/catch blocks); let failures surface, or rethrow them with added context.
Enable strict mode: Use `"strict": true` in `tsconfig.json` for better type safety.
Use interfaces and types: Prefer interfaces over `any` for structured data.
✅ Good practice:

```typescript
interface User {
  username: string;
  password: string;
}

const login = (user: User) => {
  console.log(`Logging in ${user.username}`);
};

login({ username: "testUser", password: "securePass" });
```
Prefer readonly properties: Prevent accidental mutations where applicable.

```typescript
interface TestConfig {
  readonly baseUrl: string;
  readonly timeout: number;
}

const config: TestConfig = {
  baseUrl: "https://example.com",
  timeout: 5000,
};

// config.baseUrl = "https://malicious-site.com"; // ❌ TypeScript error: Cannot assign to 'baseUrl' because it is a read-only property.
```
Avoid enums where possible: Use union types instead for better maintainability.
```typescript
type TestStatus = "passed" | "failed" | "skipped";

function reportTest(status: TestStatus) {
  console.log(`Test ${status}`);
}

reportTest("passed"); // ✅ Works
reportTest("unknown"); // ❌ Should not work — compile-time error
```
Use explicit return types: Improve code readability and debugging.
Utilize `unknown` instead of `any`: Ensures better type safety when handling generic values and prevents runtime errors.
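A short sketch combining both points; the payload shape is illustrative:

```typescript
// Explicit return type: the signature documents the contract
function buildLoginUrl(baseUrl: string, username: string): string {
  return `${baseUrl}/login?user=${encodeURIComponent(username)}`;
}

// `unknown` forces a runtime check before use, where `any` would not
function extractId(payload: unknown): number {
  if (
    typeof payload === "object" &&
    payload !== null &&
    typeof (payload as { id?: unknown }).id === "number"
  ) {
    return (payload as { id: number }).id;
  }
  throw new Error(`Payload has no numeric "id": ${JSON.stringify(payload)}`);
}
```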
Coding Structure and Organization
Modular organization: Organize test code into modular, maintainable components.
Page Object Model (POM): Use Page Object Model (POM) for UI automation.
API Testing: use an API Object Model (AOM), which follows a similar principle to POM by encapsulating API request logic into reusable objects.
Database Testing: use the Repository Pattern, which abstracts database queries into reusable methods, making it easier to test database interactions.
Secrets: Store secrets in environment variables. Don't add environment files (e.g. `.env`) to git commits, to avoid leaking secrets.
Configuration management: Use configuration files for values that may change between environments.
Store non-sensitive test data externally: Move test data to static files.
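For example, environment-dependent values can be resolved from environment variables with explicit defaults; the variable names and defaults below are illustrative:

```typescript
interface EnvConfig {
  readonly baseUrl: string;
  readonly timeoutMs: number;
}

// Resolve settings from environment variables with explicit defaults;
// in a real suite you would call loadConfig(process.env)
function loadConfig(env: Record<string, string | undefined>): EnvConfig {
  return {
    baseUrl: env.BASE_URL ?? "https://staging.example.com",
    timeoutMs: Number(env.TEST_TIMEOUT_MS ?? 5000),
  };
}
```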
Error handling: Implement robust error handling mechanisms to catch and log unexpected failures without causing false negatives.
Parametrize instead of duplication: Parameterize tests to support multiple input values and improve coverage. Do not duplicate tests.
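A sketch of the idea with a hypothetical `canLogin` check; real frameworks provide helpers such as `test.each`/`it.each` for this:

```typescript
// Hypothetical validation under test
function canLogin(username: string): boolean {
  return username.length > 0 && username !== "locked_out_user";
}

// One table of cases drives a single test body instead of three copies
const loginCases: Array<{ username: string; expected: boolean }> = [
  { username: "standard_user", expected: true },
  { username: "locked_out_user", expected: false },
  { username: "", expected: false },
];

for (const { username, expected } of loginCases) {
  if (canLogin(username) !== expected) {
    throw new Error(`canLogin("${username}") should be ${expected}`);
  }
}
```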
Maintain responsibility separation between tests: Tests should not be inter-dependent. To promote code reusability, prefer separating common functionality in its own scope (POM, Test File).
Use barrel exports (`index.ts`) to simplify module imports.
Prefer `import` over `require` for module loading.
Leverage TypeScript decorators for metadata and test organization.
Use ES6 modules (`import`/`export`) for a cleaner structure.
Favor named exports over default exports for maintainability.
Mapping to Manual Test
Linking Manual Test: Adding a link to the manual test is mandatory.
Flow Matching: Every step in the script has to match a test step in the manual test.
If an underlying manual test does not allow creation of clear assertions (it contradicts our test writing guidelines, e.g. has exploratory steps), go back to QEL/TM to get the test fixed first.
Form: The Pass/Fail steps commonly follow an Action/Expected Result format that should be replicated in your code. Actions commonly translate to calling a method of a POM, while Expected Results are the assertions performed to confirm the action achieved the expected outcome.
To promote reusability, common steps can be grouped together, at the test level, in functions/methods.
Make use of class inheritance to share common functionality between tests and POMs (e.g. have a Widget class that is extended by all widget components).
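A minimal sketch of shared widget behavior via inheritance (class names and selectors are illustrative):

```typescript
// Base class holds behavior shared by every widget component
abstract class Widget {
  constructor(protected readonly rootSelector: string) {}

  describe(): string {
    return `${this.constructor.name} at ${this.rootSelector}`;
  }
}

// Concrete widgets only add what is specific to them
class UploadWidget extends Widget {
  constructor() {
    super("#upload-widget");
  }
}

class FormWidget extends Widget {
  constructor() {
    super("#form-widget");
  }
}
```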
Examples:
Each example pairs a manual test step with its automated code (pseudo-code):

Manual Test: Navigate to Terms & Conditions

```
// Step
Allure.step('Navigate to Terms & Conditions') {
  // Action
  termsPage = homePage.clickTerms()
  // Expected Result
  assert(termsPage.header).isVisible('T&C page was not loaded')
  visuallyAssert(termsPage.screenshot())
    .comparesWith(expectedContent.png)
    .threshold(15)
    .message('T&C content is not visible/readable')
}
```

Manual Test: Upload a picture of a document

```
// Step
Allure.step('Upload a picture of a document') {
  // Action
  uploadWidget.upload(mockdoc.pdf)
  // Expected Result
  assertExists(uploadWidget.toastSuccess).message('Toast message was not displayed')
}
```

Manual Test: Do not enter any name, then click OK button

```
// Step
Allure.step('Do not enter any name, then click OK button') {
  // Actions
  formWidget.enterName("")
  formWidget.clickOk()
  // Expected Results
  // - Ensure the user wasn't navigated away from the form.
  assert(formPage.header).isNotVisible('User was able to proceed without a name')
  // - Check the color of the highlighted field
  visuallyAssert(formWidget.nameField.screenshot())
    .comparesWith(redHighlight.png)
    .threshold(0)
    .message('Incorrect highlighting when name is not entered')
}
```
Assertions
Clear assertions: Write descriptive failure messages that explain what was expected, show what actually occurred, and help identify the root cause. Generic messages such as 'Test failed' or 'Assert true' are not acceptable. You should also include relevant context and values in failure messages:
Example
Instead of:

```typescript
assert(user.isActive)
```

Write:

```typescript
assert(user.isActive, `Expected user ${user.id} to be active, but status was ${user.status}`)
```

Python equivalent:

```python
assert user.is_active == True, f"Expected user {user.id} to be active, but status was {user.status}"
```
Do not over-rely on hard assertions: Use hard assertions only for critical paths (e.g., checking if the navigation button is present) to ensure execution stops if they fail, as continuing would be pointless. Use soft assertions everywhere else.
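Frameworks such as Playwright offer this via `expect.soft()`; the collector below sketches the same idea with no dependencies:

```typescript
// Collects failures instead of stopping at the first one
class SoftAssert {
  private readonly failures: string[] = [];

  check(condition: boolean, message: string): void {
    if (!condition) this.failures.push(message);
  }

  // Call once at the end of the test: reports every collected failure
  assertAll(): void {
    if (this.failures.length > 0) {
      throw new Error(`Soft assertion failures:\n${this.failures.join("\n")}`);
    }
  }
}
```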
Use assertions in your tests, not in objects: Assertions test whether the expected behavior was achieved and should be linked to the action being performed. Make sure assertions are located in the test file, not in the page object models.
Keep SRP (Single Responsibility Principle) when designing the test files
Use class inheritance to avoid code duplication when multiple tests cover similar functionality/steps
Take advantage of Allure annotations/methods to encapsulate and reuse logic
Keep test files at a consistent abstraction level (e.g. don’t mix POMs with direct calls to driver functions)
Use type-safe assertions: Utilize native assertion functions provided by the chosen test framework.
Leverage custom assertion functions: Improve clarity and reusability.
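For instance, a custom assertion helper with a contextual failure message (names are illustrative):

```typescript
// Reusable assertion that reports expected vs actual with context
function assertStatus(actual: number, expected: number, endpoint: string): void {
  if (actual !== expected) {
    throw new Error(
      `Expected ${endpoint} to respond with ${expected}, but got ${actual}`
    );
  }
}
```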
Locator Strategy
By default, the Testlio team should work with developers to introduce strong locators that can then be used to automate the tests.
Stable locators: Select locators that are unique, stable, and resilient to UI changes (e.g. accessibility IDs, resource IDs, and other similar, intentional, values).
Avoid text-based locators: Minimize reliance on text-based locators as they are prone to changes; prefer attribute-based or structural locators instead.
Use robust selectors: Prioritize IDs and data-test attributes over dynamic class names or element indexes.
Consistent locator strategy: Maintain a structured approach to defining and managing locators, within a centralized repository or Page Object Model.
Framework-specific Locator Practices: Refer to framework-specific locator strategy guidelines:
Store locators correctly:
Use Page Object classes to store locators in TypeScript projects for test automation.
For Playwright, declare the locator as `readonly getStartedLink: Locator;` and assign it inside `constructor(page: Page)`, e.g. `this.getStartedLink = page.locator('a', { hasText: 'Get started' });`. Example: https://playwright.dev/docs/pom
For WebDriverIO, define locators with getter functions: `get username () { return $('#username') }`. Example: https://webdriver.io/docs/pageobjects/
Prefer Playwright's `locator` API over direct XPath for better readability.
Performance and Scalability
Efficient waiting: Use dynamic waits to synchronize test execution with application state instead of relying on fixed delays.
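A generic polling helper sketches the idea; framework-native waits (e.g. Playwright's auto-waiting locators) should be preferred when available:

```typescript
// Polls a condition until it holds, instead of sleeping a fixed amount
async function waitUntil(
  condition: () => boolean | Promise<boolean>,
  timeoutMs = 5000,
  pollMs = 100
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return;
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
  throw new Error(`Condition not met within ${timeoutMs}ms`);
}
```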
Independency:
Design tests to be independent, ensuring they leave no residual state that affects subsequent executions.
Structure tests so that each test can run independently without reliance on other tests.
If an underlying manual test violates these principles, go back to QEL/TM to get the test fixed first.
Parallel execution: Ensure test scripts are structured to allow safe and efficient parallel execution.
Retry mechanism: Implement a retry strategy to improve test reliability while ensuring it does not mask underlying issues.
Allure: Allure results should list only one attempt.
Limit retries: Cap retries at a small number (e.g., 2-3 attempts) to prevent masking genuine issues.
Log retry reasons: Record logs to analyze why retries are triggered and proactively resolve flaky tests.
Use selectively: Apply retries only to tests prone to transient failures; don’t overuse for stable test cases.
Combine with fail-fast strategy: Abort retries if a critical failure occurs to save execution time.
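A bounded retry wrapper illustrating these points; in practice the framework's built-in retry setting (e.g. Playwright's `retries`) is usually preferable:

```typescript
// Bounded retries with logged reasons; the last error is rethrown so
// genuine failures are never masked
async function withRetries<T>(
  action: () => Promise<T>,
  maxAttempts = 3
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await action();
    } catch (err) {
      lastError = err;
      console.warn(`Attempt ${attempt}/${maxAttempts} failed: ${err}`);
    }
  }
  throw lastError;
}
```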
Storage of Locators/Suites: The default choice should be JSON; for Playwright/Cypress, Page Classes. Other exceptions are subject to separate approval from QEM. Excel should not be used.
Use Playwright's `waitFor` methods: Instead of arbitrary timeouts.
Leverage TypeScript Promises and async/await: Improve asynchronous execution control.
Data Handling
Independent data: Make sure to use data that is unique to the current instance of running test - this would allow execution of the same test instance in parallel.
Data integrity: Use proper test data setup and teardown mechanisms to maintain data integrity across test runs.
Avoid races: When tests access the same data set or files, put the correct techniques in place to avoid race conditions (e.g. semaphores).
Use TypeScript's `readonly` modifier: Prevent unintended data modifications.
Utilize TypeScript interfaces: Structure test data representation.
Leverage Faker.js (`@faker-js/faker`): Generate dynamic test data.
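A sketch of per-run uniqueness using only built-ins; Faker.js adds realistic values on top of this idea:

```typescript
// Unique-per-run identifiers let the same test execute safely in parallel
function uniqueUsername(prefix = "testuser"): string {
  const stamp = Date.now().toString(36);
  const nonce = Math.random().toString(36).slice(2, 8);
  return `${prefix}_${stamp}_${nonce}`;
}
```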
Logging and Reporting
No credentials or any client sensitive data logging: Do not capture credentials in the logs.
No excessive screenshot creation: You can use additional screenshots per step temporarily for debugging, but not as a permanent solution.
Use structured logging (`console.debug`, `console.warn`) for meaningful debugging.
Integrate Allure reports: Utilize Allure step decorators for better test documentation.
Implement log levels (`info`, `warn`, `error`) using TypeScript enums.
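A minimal sketch of enum-based log levels; the formatting and the mapping to console methods are illustrative:

```typescript
enum LogLevel {
  Info = "info",
  Warn = "warn",
  Error = "error",
}

// Formats a log line; a real logger would route Warn/Error to
// console.warn/console.error respectively
function formatLog(level: LogLevel, message: string): string {
  return `[${level.toUpperCase()}] ${message}`;
}
```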
Security and Compliance
AI usage for coding: If a customer has opted out from AI usage, do not use AI-based tools (like Copilot) for script creation.
Use ESLint security rules (`eslint-plugin-security`) for static analysis.
Enforce strict typing (`noImplicitAny`) to prevent vulnerabilities.