1) Scrapers just run a full browser and wait for the page to stabilize. They were already doing this before Anubis launched, so it probably never worked.
2) The AI reading the page needs something like 5 seconds * 1600 W to process it. Even assuming my phone could perform that much compute as efficiently as a server-class machine, it'd take a large multiple of five seconds to do it, and get stupidly hot in the process.
Note that (2) holds even if the AI is doing something smart like batch processing 10-ish articles at once.
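The arithmetic behind (2) is easy to sanity-check. A back-of-envelope sketch: the 1600 W and 5 s figures are the comment's own numbers, while the ~5 W sustained phone power budget is an assumption I'm adding for illustration:

```python
# Energy to "read" one page with an LLM, per the comment's figures.
server_power_w = 1600   # assumed draw of a server-class inference box
server_time_s = 5       # assumed wall-clock time to process one page
energy_j = server_power_w * server_time_s   # 8000 J per page

# A phone SoC can sustain only a few watts; ~5 W is an assumption.
phone_power_w = 5
phone_time_s = energy_j / phone_power_w     # 1600 s, i.e. ~27 minutes

print(energy_j, phone_time_s)
```

So even granting the phone server-class efficiency, per-page processing on-device is minutes, not seconds, which is the "large multiple of five seconds" point above.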
Yes. Obviously dumb but also nearly 100% successful at the current point in time.
And it's likely to stay successful, as the non-protected internet still provides enough information to dumb crawlers that it's not financially worth it to even vibe-code a workaround.
Or in other words: Anubis may be dumb, but the average crawler that completely exhausts some sites' resources is even dumber.
And so it all works out.
And so the question remains: how dumb was it really, if it works so well and keeps on working?
Only if you don't care about negatively affecting real users.
I'm not convinced that makes sense.
Now ideally you would have the resources to serve all users and all the AI bots without performance degradation, but for some projects that’s not feasible.
In the end it’s all a compromise.
Regarding the authentication mentioned elsewhere: passing cookies is no big deal.
https://dukespace.lib.duke.edu/server/api/core/bitstreams/81...
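To illustrate why cookie passing is no big deal for a scraper: once the challenge has been solved once, the auth cookie sits in a cookie jar and rides along on every later request automatically. A stdlib-only sketch; the cookie name and value here are made up for illustration, not Anubis's real ones:

```python
import http.cookiejar
import urllib.request

jar = http.cookiejar.CookieJar()

# Simulate the auth cookie a server sets after the challenge is solved.
# (Name "anubis-auth" and value are hypothetical placeholders.)
cookie = http.cookiejar.Cookie(
    version=0, name="anubis-auth", value="solved-token",
    port=None, port_specified=False,
    domain="example.org", domain_specified=True, domain_initial_dot=False,
    path="/", path_specified=True,
    secure=False, expires=None, discard=True,
    comment=None, comment_url=None, rest={},
)
jar.set_cookie(cookie)

# Any later request to the same site gets the cookie attached for free.
req = urllib.request.Request("https://example.org/some/page")
jar.add_cookie_header(req)
print(req.get_header("Cookie"))
```

In practice a scraper would just use a session object (or `HTTPCookieProcessor`) so this bookkeeping is entirely automatic.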
And of all the high-profile projects implementing it, like the LKML archives, none have backed down yet, so I'm assuming the initial improvement in the numbers is holding up, or it would have been removed by now.
if you want to save some $$$ you can spend like 30 minutes making a cracker like the one in the article. just make it multi-threaded, add a queue, and boom: your scraper nodes can go back to their cheap configuration. or, since these are AI orgs we're talking about, write a gpu cracker and laugh as it solves challenges far faster than any user could.
custom solutions aren't worth it for individual sites, but with how widespread anubis is it's become worth it.
And frankly, a single page of text fits within a single token window, so it's likely processed in a blink (milliseconds) before the pipeline moves on to the next data entry. The kicker is that it's processed potentially thousands of times over, depending on your training strategy.
At inference time there's now often a dedicated tool that may perform a "live" request to scrape the site's contents. But the result is just pushed into a massive context window to produce the next token anyway.
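To put rough numbers on the "blink per pass, thousands of passes" point above (every figure here is an assumption for illustration, not a measurement):

```python
# Cost of one page during training, under made-up but plausible numbers.
page_tokens = 2_000             # a typical article's length in tokens (assumed)
throughput_tok_per_s = 50_000   # assumed per-accelerator training throughput

per_pass_s = page_tokens / throughput_tok_per_s   # 0.04 s: "a blink"
passes = 3_000                                    # "potentially thousands of times"
total_s = page_tokens * passes / throughput_tok_per_s

print(per_pass_s, total_s)
```

So each pass really is milliseconds-scale, and even thousands of revisits of one page amount to a couple of minutes of accelerator time, which is a very different cost profile from the 5 s * 1600 W per-read figure claimed earlier in the thread.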