In the RFC's architecture, the request flow is as follows:
1. CLIENT sends anonymous request to ORIGIN
2. ORIGIN sends token challenge to CLIENT
3. CLIENT uses its identity to request token from ISSUER/ATTESTER
4. ISSUER/ATTESTER issues token to CLIENT
5. CLIENT sends token to ORIGIN
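The five steps above can be sketched in Python. This is an illustrative toy, not any real Privacy Pass implementation: the class and method names are invented, and a real ISSUER would blind-sign tokens rather than hand out random strings.

```python
import secrets

class IssuerAttester:
    """Toy combined ISSUER/ATTESTER (hypothetical, for illustration)."""
    def __init__(self):
        self.issued = set()

    def issue_token(self, client_identity: str) -> str:
        # (3)/(4): the client authenticates with its identity and
        # receives an opaque token. A real issuer would blind-sign
        # the token so it cannot be linked back to client_identity.
        token = secrets.token_hex(16)
        self.issued.add(token)
        return token

class Origin:
    """Toy ORIGIN that challenges anonymous requests."""
    def __init__(self, issuer: IssuerAttester):
        self.issuer = issuer

    def handle_request(self, token=None) -> str:
        if token is None:
            return "challenge"               # (1) -> (2): token challenge
        if token in self.issuer.issued:      # (5): redeem token
            self.issuer.issued.discard(token)  # tokens are single-use
            return "content"
        return "rejected"

issuer = IssuerAttester()
origin = Origin(issuer)
assert origin.handle_request() == "challenge"            # steps (1)-(2)
token = issuer.issue_token("alice@example.com")          # steps (3)-(4)
assert origin.handle_request(token) == "content"         # step (5)
assert origin.handle_request(token) == "rejected"        # tokens are spent
```

Note that the anonymity rests entirely on the ORIGIN being unable to connect the token it receives in step (5) back to the identity presented in step (3).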
You can see how the ISSUER/ATTESTER can identify the client as the source of the "anonymous request" to the ORIGIN: because the ISSUER, ATTESTER, and ORIGIN are the same entity, it can use a timing attack to correlate the request to the ORIGIN (1.) with the request to the ISSUER/ATTESTER (3.).
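A toy illustration of that timing correlation (hypothetical logs and a made-up `correlate` helper, purely to show the idea): when the same entity sees both logs, an identified token request arriving moments after an anonymous request betrays the link.

```python
# Hypothetical request logs: (label, arrival time in seconds).
origin_log = [("anon-req-1", 100.0), ("anon-req-2", 500.0)]   # step (1)
issuer_log = [("alice", 100.3), ("bob", 500.2)]               # step (3)

def correlate(origin_log, issuer_log, window=1.0):
    """Pair each anonymous request with any identified token
    request that arrived within `window` seconds after it."""
    return [
        (anon, identity)
        for anon, t_origin in origin_log
        for identity, t_issuer in issuer_log
        if 0 <= t_issuer - t_origin <= window
    ]

# Both anonymous requests are deanonymized by timestamp alone.
assert correlate(origin_log, issuer_log) == [
    ("anon-req-1", "alice"),
    ("anon-req-2", "bob"),
]
```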
However, you can also see that if a lot of time passes between steps (1.) and (3.), such an attack becomes infeasible. Reading past your quote from RFC 9576 § 4.1, the document goes on to state:
> Origin-Client, Issuer-Client, and Attester-Origin unlinkability requires that issuance and redemption events be separated over time, such as through the use of tokens that correspond to token challenges with an empty redemption context (see Section 3.4), or that they be separated over space, such as through the use of an anonymizing service when connecting to the Origin.
In Kagi's architecture, the "time separation" requirement is met by having the client generate a large batch of tokens up front, which are then slowly redeemed over a period of two months. The "space separation" requirement is also satisfied by the introduction of the Tor service.
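The time-separation idea can be sketched as follows. This is not Kagi's actual code; the batch size and the even spacing of redemptions are assumptions for illustration. The point is that a single issuance event at day 0 is followed by redemptions spread across a long window, so most redemptions are far removed in time from issuance.

```python
import secrets

BATCH_SIZE = 60              # hypothetical: roughly one token per day
REDEMPTION_WINDOW_DAYS = 60  # the ~2-month redemption period

def issue_batch(n: int) -> list:
    """Single issuance event (step 3/4) producing n single-use tokens."""
    return [secrets.token_hex(16) for _ in range(n)]

def redemption_schedule(tokens: list, window_days: int) -> list:
    """Spread redemptions evenly over the window.
    Returns (day_offset, token) pairs; day 0 is the issuance event."""
    step = window_days / len(tokens)
    return [(round(i * step), token) for i, token in enumerate(tokens)]

batch = issue_batch(BATCH_SIZE)
schedule = redemption_schedule(batch, REDEMPTION_WINDOW_DAYS)

# The final redemption happens ~2 months after the one issuance event,
# so timing correlation between steps (1.) and (3.) tells the
# ISSUER/ATTESTER almost nothing.
assert schedule[0][0] == 0
assert schedule[-1][0] >= REDEMPTION_WINDOW_DAYS - 1
```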
There is some more discussion in RFC 9576 § 7.1. "Token Caching" and RFC 9577 § 5.5. "Timing Correlation Attacks".
One question you may have is: Why wasn't this solution used in the RFC?
This can be understood by looking at the mentions of "cross-ORIGIN" in the RFC. The RFC was written by Cloudflare, which envisioned its use across the whole Internet. Different ORIGINs would trust different ISSUERs, so tokens from one ORIGIN<->ISSUER network might not work in another ORIGIN<->ISSUER network. This made it infeasible for clients to mass-generate tokens in advance, as a client would need to generate tokens across many different ISSUERs.
Of course, adoption was weak and there ended up being only one ISSUER, Cloudflare, so they adopted the same architecture as Kagi, where clients batch-generate tokens in advance (albeit with a batch size of only 30 tokens).
RFC 9576 § 7.1. also mentions a "token hoarding" attack, which Cloudflare felt particularly threatened by. Cloudflare's Privacy Pass system worked in concert with CAPTCHAs. Users could trade a completed CAPTCHA for a small batch of tokens, allowing a single CAPTCHA completion to be split into multiple redemptions across a longer time period.
However, rudimentary "hoarding"-like attacks were already in use against CAPTCHAs through "traffic exchanges". Opening up another avenue for hoarding through Privacy Pass would have only exacerbated the problem.