At SQAI Suite, we work with multiple large language models (LLMs) behind the scenes to help you streamline and enhance your Quality Assurance processes. Each of these models has its own way of calculating token usage for both inputs (e.g., reading or analyzing content) and outputs (e.g., generating test cases or scripts).
To simplify things for you, we’ve introduced SQAI Tokens — a unified, average-based token system that represents how much processing power you're using across all the smart features in our platform, including the usage of our agents.
What Are SQAI Tokens?
SQAI Tokens are a standardized unit we use to measure LLM usage across different tasks in SQAI Suite, performed by different agents. Whether you're pulling in user stories from JIRA, chatting with our AI assistant, generating test cases, analyzing code repositories, or producing automated test scripts — SQAI Tokens are what fuel those actions.
Because different LLMs handle tokens differently (some are more efficient for inputs, others for outputs), we’ve averaged out the usage across all models to give you one simple and fair metric: SQAI Tokens.
Keep in mind that SQAI Tokens also cover the work performed by our platform's agents.
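To make the averaging idea more concrete, here is a minimal sketch of how raw token counts from different models could be normalized into a single unit. The model names, rates, and numbers are hypothetical placeholders for illustration only; they are not SQAI Suite's actual conversion values.

```python
# Illustrative only: hypothetical per-model rates, not SQAI Suite's real conversion table.
# Each model counts raw input/output tokens in its own way; normalizing them yields one unit.

HYPOTHETICAL_RATES = {
    # model name: (input-token weight, output-token weight) relative to an "average" model
    "model-a": (0.8, 1.2),
    "model-b": (1.1, 0.9),
    "model-c": (1.0, 1.0),
}

def to_sqai_tokens(model: str, input_tokens: int, output_tokens: int) -> float:
    """Convert one model's raw token counts into a unified (hypothetical) SQAI Token figure."""
    in_rate, out_rate = HYPOTHETICAL_RATES[model]
    return input_tokens * in_rate + output_tokens * out_rate

# Example: a test-case generation request that read 1,200 tokens and produced 800 tokens.
print(to_sqai_tokens("model-a", 1_200, 800))  # 1200*0.8 + 800*1.2 = 1920.0
```

Whichever model handles a request, the result is expressed in the same unit, so your balance always reflects one consistent measure of usage.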
What Do You Get on Average for Your Tokens?
The value you get from your SQAI Tokens depends on several factors, such as the complexity of your test scenarios, the size of your code repository, and the type of automation or documentation being generated.
In general, your tokens can be exchanged for a combination of test cases, automation scripts, and other QA deliverables. The exact output may vary depending on:
How detailed or complex your user stories and acceptance criteria are
The volume and structure of your source code or repositories
The efficiency of prompts and reusability of previous outputs
All estimates are based on average project configurations with teams of around three QA consultants. While every use case is unique, SQAI optimizes token efficiency to ensure you get the highest possible output quality per token spent.
Where Are Tokens Used?
Your SQAI Tokens are consumed during various activities across the platform, including:
Reading user stories from tools such as JIRA, Confluence, and Azure Wiki
Communicating via the conversational interface
Generating detailed test cases
Analyzing/reading code repositories
Creating automated test scripts
Every action that requires language model processing uses SQAI Tokens, making it easy to keep track of how much processing power you're consuming. On the statistics page, you can always check how many SQAI Tokens you have used so far.
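As a rough mental model, the total shown on the statistics page is simply the sum of the SQAI Tokens spent on each of these activities. The sketch below uses made-up activity names and numbers purely for illustration; it is not part of the platform's API.

```python
from collections import defaultdict

# Hypothetical usage log: (activity, SQAI Tokens spent). All values are made up for illustration.
usage_log = [
    ("read_user_story", 350.0),
    ("generate_test_cases", 1920.0),
    ("analyze_repository", 4100.0),
    ("create_test_script", 2750.0),
]

# Aggregate per activity, then total everything, the way a usage overview would.
per_activity = defaultdict(float)
for activity, tokens in usage_log:
    per_activity[activity] += tokens

total = sum(per_activity.values())
print(dict(per_activity))  # breakdown by activity
print(total)               # running total: 9120.0
```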
Keep in mind that within every subscription plan, token usage is subject to our fair use policy.
If the fair use policy doesn't cover your needs, check with your commercial contact about our upgrade packages for extra tokens.