One script tag. No APIs to expose. No code to maintain.
We built Rover because we think websites need their own conversational agentic interfaces: users don't want to figure out how your site works. Sites that don't offer one risk being disintermediated by Chrome's or Comet's agents.
We are the only web agent with a DOM-only architecture, which lets us ship an embeddable script that acts as a harness for taking actions on your site. Our DOM-native approach scores 81.39% on WebBench.
Beta with embed script is live at rtrvr.ai/rover.
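For illustration, the one-line embed boils down to a loader like this; the URL and data attribute are assumptions, not our actual snippet (the real one is on rtrvr.ai/rover):

```typescript
// Hypothetical embed loader -- equivalent to pasting one <script> tag
// into your page's <head>. The src URL and attribute name are assumptions.
const rover = document.createElement("script");
rover.src = "https://rtrvr.ai/rover-embed.js"; // assumed URL
rover.async = true;
rover.dataset.siteId = "YOUR_SITE_ID"; // assumed site identifier
document.head.appendChild(rover);
```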
Built by two ex-Google engineers. Happy to answer architecture questions.
The message is clear: ______ isn't a nice-to-have. It's a ______.
But here's what nobody's talking about: ...
I can't force myself to spend more thought reading than was spent writing.
We completely rewrote our launch blog post and included a live demo video of the embedded agent!
The underlying agentic performance, though, is undeniable.
Also, what's the point of objecting if something is AI-generated? We do a thorough review to ensure readability, coherence, and accuracy.
So it's a bunch of tools that Gemini can call, but the tools involve low-level interactions with the page structure in the end-user's browser.
What is the moat? What is an "agent" when you take away the powerful LLM?
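For concreteness, the pattern being described is roughly the following; every name and shape below is hypothetical, a sketch of the general tool-calling-over-DOM idea rather than Rover's actual harness:

```typescript
// Minimal sketch of an LLM tool-calling harness over the DOM.
// All identifiers are hypothetical illustrations of the pattern.

type ToolCall =
  | { name: "click"; args: { selector: string } }
  | { name: "type"; args: { selector: string; text: string } }
  | { name: "read"; args: { selector: string } };

function runTool(call: ToolCall): string {
  switch (call.name) {
    case "click": {
      const el = document.querySelector<HTMLElement>(call.args.selector);
      el?.click(); // ordinary event dispatch in the page, no CDP
      return el ? "clicked" : "not found";
    }
    case "type": {
      const el = document.querySelector<HTMLInputElement>(call.args.selector);
      if (!el) return "not found";
      el.value = call.args.text;
      el.dispatchEvent(new Event("input", { bubbles: true }));
      return "typed";
    }
    case "read": {
      const el = document.querySelector<HTMLElement>(call.args.selector);
      return el?.innerText ?? "not found";
    }
  }
}

// The model (e.g. Gemini) emits a ToolCall as JSON; the embed script
// executes it in the end-user's browser and returns the result as text.
```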
Rover lives inside your website
Rover does not just live "inside" my website, because you are using Gemini 3 Flash to do all the heavy lifting.

Who is the audience here? It sounds like you're addressing people who don't know how the technology works, but the cutesy concept is borderline misleading.
Also, can you back up this claim with a human-written response? (emphasis mine)
When rtrvr.ai interacts with a webpage, there is zero automation fingerprint:
No navigator.webdriver flag
No CDP-specific JavaScript objects
*No detectable automation patterns in network requests*
*Identical timing characteristics to human interaction*

So our core technical moat is building an agentic harness that can represent and take actions on any webpage without any screenshots. With this approach we even beat custom-trained models like OpenAI Operator and Anthropic CUA: https://www.rtrvr.ai/blog/web-bench-results
Everyone else in the space just takes a screenshot and asks a model what coordinates to click; our core thesis is that LLMs understand semantic representations fundamentally better than vision. The DOM approach does come with a long tail of HTML/DOM edge cases, which we have built out coverage for with our 20k+ users surfacing them.
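As a rough illustration of what a screenshot-free representation can look like, a DOM-native harness serializes interactive elements into text the model can reference by id; the format below is an assumption, not our actual wire format:

```typescript
// Sketch of serializing interactive DOM elements into a compact text
// form an LLM can act on -- the general idea behind a DOM-native
// (screenshot-free) harness. The representation is an assumption.

interface ElementSummary {
  id: number;          // stable handle the model references in actions
  tag: string;
  role: string | null;
  label: string;       // accessible name or visible text
}

function snapshotInteractive(): ElementSummary[] {
  const nodes = document.querySelectorAll<HTMLElement>(
    "a, button, input, select, textarea, [role]"
  );
  return Array.from(nodes).map((el, id) => ({
    id,
    tag: el.tagName.toLowerCase(),
    role: el.getAttribute("role"),
    label: el.getAttribute("aria-label") ?? el.innerText.trim().slice(0, 80),
  }));
}

// The summaries go to the model as text; the model replies with an
// action referencing an element id, never pixel coordinates.
```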
Soon you will be able to record demonstration tasks via our partner Chrome Extension and set up knowledge bases scraped by our cloud browsers to provide additional context to the agent. So there is a platform moat as well.
The audience is website owners who want to increase visitor engagement and conversion via a conversational interface for users.
This is more about our cloud browser platform, where we launch cloud browsers for vibe scraping, controlled via a custom extension instead of CDP. You can try it out at rtrvr.ai/cloud, where we can successfully scrape sites with strong anti-bot detection, like google.com.
On the reliability front, we offer integrations like Recordings, which ground the agent on demonstrated trajectories even as the underlying website changes, and a Knowledge Base built from your whole domain.
You, the website owner, can provide additional guidance to the agent.
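As a sketch of what that owner-provided guidance might look like at embed time (every field name here is hypothetical, for illustration only):

```typescript
// Hypothetical configuration a site owner might pass to the embedded
// agent. All field names are assumptions, not Rover's actual options.
interface RoverConfig {
  siteId: string;
  guidance: string[];       // owner-written hints for the agent
  recordings: string[];     // ids of recorded demonstration trajectories
  knowledgeBase?: string;   // domain whose scraped pages ground answers
}

const config: RoverConfig = {
  siteId: "YOUR_SITE_ID",
  guidance: [
    "Checkout requires a signed-in account.",
    "Never modify orders that have already shipped.",
  ],
  recordings: ["add-to-cart-demo"],
  knowledgeBase: "docs.example.com",
};
```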
"Sync your logged-in browser sessions from the Extension to Cloud browsers. Access authenticated sites at scale."
Genuine question: have you seen 'cookie syncing' before (at Google, perhaps)? If so, what was it used for? I could understand if they were cookies for your service only. It sounds like a security nightmare waiting to happen.

There are a lot of users on our rtrvr.ai/cloud platform who want to automate sites they have to log in to, and right now they resort to including usernames/passwords in prompts (presumably these are non-critical sites that they don't really care about). We are offering a more secure option: syncing cookies from our Chrome Extension to our cloud browsers so the agents are always logged in without any credential exposure.
You choose the specific domains whose cookies you want synced, so it's not as if all your cookies are syncing.
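Mechanically, domain-scoped syncing is a narrow read. A minimal sketch using the standard chrome.cookies extension API follows; the upload endpoint and payload shape are assumptions, and the extension would need the "cookies" permission plus host permissions for each chosen domain:

```typescript
// Minimal sketch of domain-scoped cookie sync from a Chrome extension.
// chrome.cookies.getAll is the real API; the endpoint is an assumption.

async function syncCookiesForDomain(domain: string): Promise<void> {
  // Only cookies for the explicitly chosen domain are read.
  const cookies = await chrome.cookies.getAll({ domain });
  await fetch("https://rtrvr.ai/api/cloud/cookies", { // assumed endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ domain, cookies }),
  });
}

// e.g. sync only the one site the user opted in to:
void syncCookiesForDomain("example.com");
```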