Devs use AI to ship more code. That code still needs testing. If your team writes E2E tests by hand, you have a problem - same QA capacity, way more surface to cover.
AI agents can write E2E test code, but you're stuck describing flows in text - the agent clicks around via Playwright MCP, takes wrong turns, you re-prompt, it retries. Thirty minutes for a flow you could click through in 30 seconds.
Qure works differently. You record the scenario in Qure's built-in browser by just using your product. The AI turns that recording into code. No prompt engineering, no MCP setup, no explaining your repo in chat - point it at your project and go. Beyond recording, you can also refactor tests, update them, or write new ones from a description.
What keeps the AI output grounded:
- We match the recording against your codebase - we find your page objects, helpers, and constants and feed them to the agent instead of hoping it figures out your repo on its own
- When the agent runs the test, it reads the real failure output and fixes the test using the actual error and app context
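To make the first point concrete, here's a rough sketch of what "matching the recording against your codebase" could look like at its simplest: scan the repo for exported page-object classes and helper functions and hand that map to the agent as context. Everything here - the regexes, the TypeScript-only scan, the function name - is an illustrative assumption, not Qure's actual implementation:

```python
import re
from pathlib import Path

# Hypothetical sketch, NOT Qure's real matcher: collect exported classes
# (typical page objects) and exported functions (typical helpers) from
# .ts files so they can be included in the agent's prompt context.
CLASS_RE = re.compile(r"export\s+class\s+(\w+)")
HELPER_RE = re.compile(r"export\s+(?:async\s+)?function\s+(\w+)")

def collect_context(root: str) -> dict[str, list[str]]:
    """Return {relative_path: [exported symbols]} for .ts files under root."""
    context: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.ts"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        symbols = CLASS_RE.findall(text) + HELPER_RE.findall(text)
        if symbols:
            context[str(path.relative_to(root))] = symbols
    return context
```

The real system has to go further than name scraping (resolving which page object actually covers the screens in the recording), but the shape is the same: ground the agent in what already exists instead of letting it invent parallel helpers.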
This is a closed beta of an experimental product. Web only, works best with Playwright. If your project has only a few dozen tests, Claude Code will honestly get you there. Qure makes a difference on larger codebases with existing test infrastructure.
5-min demo: https://www.youtube.com/watch?v=4CZw4bSSDCE
Try the beta: https://quretests.com
Happy to answer any questions about the approach, product, or where it breaks - I'm the dev on the Qure team. Egor (@250xp), who leads the project, is in the thread too.