
Generated code not matching your expectations



Optimizing Automation with Additional Instructions

This documentation provides a comprehensive guide on using Additional Instructions within SQAI Suite to fine-tune AI-driven test automation. It covers strategies for aligning manual test case language with technical repository standards, offers structural prompting examples for major frameworks (Selenium, Playwright, Cypress, etc.), and explains the best practice of implementing a Rules File to maintain coding standards.

While SQAI Suite uses advanced AI agents and similarity search to automate your tests, the gap between a manual test case (written in business language) and your code repository (written in technical logic) can sometimes lead to generic results.

The Additional Instructions field allows you to bridge this gap, acting as a "Lead Engineer" guiding the AI to specific files, patterns, and rules.

1. Why Similarity Search Sometimes Needs Help

SQAI operates by converting your manual test case into a vector (a numerical representation of its meaning) and searching your repository for code that falls within the broad semantic area around that meaning.

  • The Challenge: A manual test might say "Verify the user can purchase a premium subscription."

  • The Code: Your repository might not have the word "subscription" in the code files; instead, it might use abstract technical terms like billingService.upgradePlan(PlanID.TIER_2).

If the Domain Language gap is too wide, SQAI might miss a specific Page Object or Helper Class you want it to use. Additional Instructions let you explicitly point the AI to the right assets without burning an excess of SQAI Tokens.
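To see why the gap matters, consider a toy similarity search. The sketch below uses a bag-of-words "embedding" and cosine similarity purely for illustration (real vector search uses learned embeddings, and the example strings are our own): the raw code identifiers share no vocabulary with the test case, while a short domain-language hint scores far higher.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; real systems use learned vectors.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine_similarity(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

test_case = "Verify the user can purchase a premium subscription"
code_only = "billingService.upgradePlan(PlanID.TIER_2)"
with_hint = "subscription purchase flow for premium users"

# No shared vocabulary with the test case: similarity is 0.0.
print(cosine_similarity(embed(test_case), embed(code_only)))
# The hint overlaps on "subscription", "purchase", "premium": much higher.
print(cosine_similarity(embed(test_case), embed(with_hint)))
```

Additional Instructions effectively supply that domain-language hint, steering the search toward the assets you already know are relevant.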

2. When to Use Additional Instructions

You should use this field in the generation modal when:

  1. Strict Design Patterns: You adhere to a strict Object Model, Architectural Pattern, or specific folder structure.

  2. Abstract Naming: Your code uses technical IDs or variable names that do not match the Domain Language in the test case.

  3. New Features: You are automating a test for a feature that has no existing code in the repo yet (for example, in a mono-repo), so similarity search will fail to find a precedent. You need to tell SQAI which similar feature to copy as a template.

  4. Custom Rules: You want to enforce specific linting rules or a coding style guide (e.g., "Always use data-testid selectors").

3. Structural Examples by Framework

The table below provides specific examples of how to prompt SQAI using Additional Instructions for the most popular frameworks supported by our platform.

| Category | Framework | Context | Instruction Example |
|---|---|---|---|
| WebUI | Selenium WebDriver (Java) | POM & Fluent Assertions | "Generate this test using the Page Object Model. Use LoginPage.java as a reference. Ensure all assertions use AssertJ. Initialize driver via BaseTest.java." |
| WebUI | Playwright (TS/JS) | Fixtures & Locator API | "Use custom fixtures from fixtures.ts. Rely on auto-waiting (no page.wait()). Use HomePage.ts as the template for the new Page Object." |
| WebUI | Cypress (JS) | Custom Commands | "For login, use cy.loginViaAPI() from commands.js. Ensure selectors target data-cy attributes only. Follow the structure of checkout.cy.js." |
| Mobile | Appium (Java/Kotlin) | Driver Capabilities & Screens | "Use AndroidDriver capabilities from MobileCapabilityFactory.java. Use the Screen pattern shown in DashboardScreen.java. Handle waits via WaitHelper." |
| Mobile | Maestro (YAML) | Declarative Flows | "Follow the flow structure in login_flow.yaml. Use appId variables from env.yaml. Ensure you use assertVisible for UI validation." |
| API | Rest-Assured (Java) | Request Specifications | "Use RequestSpecBuilder from ApiConfig.java for base URL/auth. Use POJO classes like UserPayload.java. Validate responses against user-schema.json." |
| API | Karate DSL | Reusable Features | "Call the auth feature classpath:common/auth.feature before the scenario. Use the matching logic from products.feature. Validate status using the match keyword." |
| BDD | Cucumber (Java/JS) | Step Definitions | "Map Gherkin steps to OrderSteps.java. If a new step is needed, create it following the pattern in CommonSteps.java using dependency injection." |
| BDD | SpecFlow (.NET) | C# Bindings | "Generate Step Definitions in C#. Use ScenarioContext to share data. Follow the naming conventions in LoginSteps.cs." |
| Python | Pytest | Fixtures | "Use the authenticated_user fixture from conftest.py. Do not create a setup method inside the test class. Use standard Python assert statements." |
| Python | Robot Framework | Keywords | "Use keywords defined in BrowserKeywords.robot. Follow the Upper Case With Spaces variable naming from Variables.py. Tag the test with Regression." |
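Several of these instructions reference the Page Object Model. As a framework-agnostic sketch in Python (the class name, selectors, and driver API are illustrative only, not part of SQAI or any specific framework), this is the shape of code such an instruction steers the AI toward:

```python
class LoginPage:
    """Page Object sketch: raw selectors live only here, never in tests.

    Tests call intent-level methods such as login_as() instead of
    driving low-level mechanics like typing into individual fields.
    """

    USERNAME = '[data-testid="username"]'
    PASSWORD = '[data-testid="password"]'
    SUBMIT = '[data-testid="login-submit"]'

    def __init__(self, driver):
        self.driver = driver

    def login_as(self, username, password):
        self.driver.type(self.USERNAME, username)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)
        return self  # fluent-style chaining
```

A test would then read `LoginPage(driver).login_as("alice", "secret")`, with no selectors in sight.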

4. The SQAI Rules File: Persistent Context

Tip: Instead of typing long, repetitive instructions for every test generation job, we strongly recommend creating a dedicated SQAI Rules File in the root of your repository (name it sqai.rules.md).

Why is this important?

Think of this file as the equivalent of the unwritten agreements you make with your developer and tester colleagues about how to work in your repo.

It serves as the "Constitution" for your repository, providing persistent, project-wide context that the AI should always respect.

By centralizing your standards, you ensure consistency across all AI-generated code without having to micromanage every prompt.

This rules file should not be confused with a README file: in most setups SQAI has access to the README and can add, change, or delete content in it. The SQAI Rules File, by contrast, is locked; SQAI cannot edit it, and it should only be modified by the human in the loop.

Examples of what to include in your Rules File:

  1. Naming Conventions: "Always use snake_case for Python files and PascalCase for Java classes."

  2. Selector Strategy: "Prefer data-testid over CSS classes or XPaths."

  3. Forbidden Patterns: "Never use Thread.sleep(); always use explicit waits."

  4. Architectural Rules: "All API calls must go through the APIService class; do not make direct HTTP calls in test files."
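A rule like "never use Thread.sleep(); always use explicit waits" can be made concrete. Below is a minimal, framework-agnostic sketch of an explicit wait in Python (the helper name, timeout, and polling interval are our own choices, not SQAI defaults):

```python
import time

def wait_for(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Unlike a fixed sleep, this returns as soon as the condition holds and
    fails with a clear error if it never does.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"Condition not met within {timeout}s")
```

Real frameworks ship equivalents (Selenium's WebDriverWait, Playwright's auto-waiting); the rules file simply tells the AI which one to reach for.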

Here is an example rules file:

---
description: Universal standards for UI and API test automation. Applies to all frameworks (Playwright, Selenium, Cypress, RestAssured, etc.).
globs: ["**/*test*", "**/*spec*", "**/tests/**", "**/e2e/**"]
alwaysApply: false
---

# Role & Objective
You are a Senior Test Automation Architect. Your goal is to generate robust, maintainable, flake-free automation code. You do not just write scripts; you build scalable test frameworks.

# 1. Universal Testing Principles (Apply to ALL Tests)
- **AAA Pattern:** Strict adherence to **Arrange-Act-Assert** in every test case. Visually separate these sections with newlines.
- **Independence:** Every test must be self-contained. Never rely on the state left by a previous test. Use setup/teardown hooks (e.g., `beforeEach`, `afterEach`) for state management.
- **Descriptive Naming:**
  - Test files: `feature-name.spec.ext` or `feature-name.test.ext`.
  - Test names: sentences describing the behavior (e.g., "User should be redirected to dashboard after valid login").
- **Soft vs Hard Assertions:** Use hard assertions for critical checks (blocking further execution) and soft assertions (where available) for validating multiple fields in a single object.
- **Data Management:** Never hardcode sensitive data or volatile test data. Use data fixtures, factories, or environment variables.

# 2. UI Automation Standards
- **Page Object Model (POM):**
  - **Strict Separation:** Tests describe *what* is being tested; Page Objects describe *how* to interact with the UI.
  - **No Selectors in Tests:** Never use raw selectors (CSS/XPath) inside a test file. They must exist only within Page Classes.
  - **Public Methods:** Page classes should expose user-intent methods (e.g., `loginAs(user)`) rather than low-level mechanics (e.g., `typeIntoUsernameField`).

- **Locator Strategy:**
  - **Priority 1:** Stable attributes (e.g., `data-testid`, `data-cy`, `aria-label`).
  - **Priority 2:** Text content (if unique and unlikely to change).
  - **Forbidden:** Brittle XPaths, absolute paths, or CSS classes coupled to styling (e.g., `.btn-blue`, `div > div > span:nth-child(3)`).

- **Waits & Synchronization:**
  - **Forbidden:** Never use hard sleeps / fixed pauses (e.g., `Thread.sleep`, `cy.wait(5000)`, `time.sleep`).
  - **Preferred:** Use dynamic, explicit waits that wait for a condition (visibility, clickability, API response) with a reasonable timeout.

# 3. API Automation Standards
- **Client Abstraction:**
  - Wrap your HTTP client (Axios, RestAssured, Requests) in a Service/Client class.
  - Endpoints should be methods (e.g., `UserService.createUser(payload)`).
- **Assertions:**
  - **Status Code:** Always verify the HTTP status code first.
  - **Schema Validation:** Validate the response structure (schema) before validating specific data values.
  - **Response Time:** (Optional) Warn if response time exceeds thresholds.
- **Payloads:** Use builder patterns or factory functions to construct request payloads dynamically.

# 4. Code Style & Error Handling
- **DRY (Don't Repeat Yourself):** Abstract repeated logic into Helper/Utility classes.
- **Logging:** Tests must log critical actions. If a test fails, the error message must clearly state *expected* vs *actual* values.
- **Type Safety:** If the language supports it (TypeScript, Java, C#, Go), use strict typing for API responses and Page Object methods.

# 5. Behavior Constraints
- **If generating a UI test:** Ask if a Page Object already exists for the feature. If not, generate the Page Object Class first.
- **If generating an API test:** Ask for the Swagger/OpenAPI definition or a sample cURL request to ensure payload accuracy.
- **If modifying existing code:** Do not break existing patterns. Match the indentation, naming convention, and structure of the surrounding file.


How to use it:

Once this file exists in your repo, your Additional Instructions can be extremely concise:

"Generate this test following the architectural rules and coding standards defined in sqai.rules.md located in the root of the repo."

(with some small test-focused additions if needed)

SQAI will read that file, ingest your specific coding standards, then perform a targeted similarity search and generate code that looks as if it were written by your Senior Automation Engineer.
