
BrowserStack Workflows (MCP)

Learn to link the SQAI Suite and BrowserStack MCP servers for an agentic QA workflow. This guide enables your AI to manage the full testing cycle, from pulling requirements to executing tests, in one workspace.

Updated this week

By integrating both the SQAI Suite and BrowserStack MCP servers into a single AI-powered client (like VS Code, Cursor, or Claude Desktop), you create a unified Agentic QA workflow. This setup allows your AI assistant to act as an end-to-end orchestrator: it can pull tests from SQAI, generate automation logic, and then immediately execute those tests on BrowserStack's real-device cloud.

The Orchestration Advantage

This means you can manage test cases, run manual and automated tests, debug issues, and even get AI-suggested code fixes all by writing simple, natural language prompts. By bringing BrowserStack’s tools directly to SQAI, the MCP server helps you test faster, reduce context switching, and ship code with confidence.

When these two servers work together, your AI assistant can coordinate complex cross-system tasks within a single chat session:

  • Requirement-to-Execution: Ask the AI to "Find the latest user story in SQAI and run a cross-browser smoke test for it on BrowserStack".

  • Context-Aware Debugging: If a test fails on BrowserStack, the AI can fetch the logs from the session and compare them against the original documentation stored in SQAI to identify if it is a bug or a requirement change.

  • Automated Syncing: As testing completes on real devices, the AI can automatically update the test results and traceability matrices within the SQAI platform.

Combined Setup (VS Code / GitHub Copilot)

To use them together, list both servers in your mcp.json file. This gives the AI simultaneous access to the tools of both platforms. (The client does not have to be VS Code; any other supported AI-powered Copilot or IDE works as well.)

Configuration Snippet

Add this to your .vscode/mcp.json (Workspace) or your User Configuration file:

JSON

{
  "inputs": [
    {
      "type": "promptString",
      "id": "sqai_api_key",
      "description": "API key for SQAI MCP",
      "password": true
    }
  ],
  "servers": {
    "SQAI-Suite": {
      "type": "http",
      "url": "https://api.sqai-suite.com/mcp",
      "headers": {
        "Authorization": "Bearer ${input:sqai_api_key}"
      }
    },
    "browserstack": {
      "command": "npx",
      "args": ["-y", "@browserstack/mcp-server@latest"],
      "env": {
        "BROWSERSTACK_USERNAME": "<your_username>",
        "BROWSERSTACK_ACCESS_KEY": "<your_access_key>"
      }
    }
  }
}
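A common setup failure is malformed JSON (for example, a trailing comma after merging the two server blocks), which silently prevents both servers from loading. Before reloading your client, you can sanity-check the merged config with a short script. This is a minimal sketch, not part of either product: it parses the config and confirms both server entries are present. In practice you would read the file from .vscode/mcp.json instead of the inline sample used here.

```python
import json

# Sample config pasted inline so the script is self-contained; in a real
# workspace you would instead do: raw = open(".vscode/mcp.json").read()
raw = """
{
  "servers": {
    "SQAI-Suite": {"type": "http", "url": "https://api.sqai-suite.com/mcp"},
    "browserstack": {"command": "npx", "args": ["-y", "@browserstack/mcp-server@latest"]}
  }
}
"""

# json.loads raises an error on malformed JSON (e.g. trailing commas),
# which is exactly the failure mode we want to catch early.
config = json.loads(raw)

servers = config.get("servers", {})
missing = [name for name in ("SQAI-Suite", "browserstack") if name not in servers]
print("missing servers:", missing or "none")
```

If the script reports any missing servers (or raises a parse error), fix the file before troubleshooting anything inside the AI client itself.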

Example Multi-Server Prompts

Once both servers are active (indicated by green status in your client), you can use "interconnected" prompts:

  • Test Generation:

    "Retrieve the 'Checkout' user story from SQAI. Based on its criteria, generate a Playwright test and run it on BrowserStack using an iPhone 15 Pro."

  • Documentation Sync:

    "Run an accessibility scan on the login page via BrowserStack. If violations are found, create a new test in SQAI with the scan details attached."

  • Impact Analysis:

    "I've updated the button component. Use SQAI to find which test cases are affected, then run only those specific tests on BrowserStack across Chrome and Safari."


Troubleshooting Multi-Server Workflows

  • Conflicting Tools. Cause: tool name overlap. Solution: refer to tools by their full names when prompted (e.g., sqai-suite.get-test-case vs. browserstack.createTestCase).

  • Context Overload. Cause: too much data in context. Solution: if the AI gets confused, explicitly tell it which server to use first: "First, use SQAI to get the story, then use BrowserStack to run it".

  • Auth Failures. Cause: the two servers authenticate differently. Solution: remember that SQAI typically uses a promptString for keys, while BrowserStack often requires env variables in the JSON.

  • Missing Tools. Cause: Agent mode is not active. Solution: both servers require Agent mode to be active in your chat client to perform autonomous cross-server orchestration.
