Huh? It absolutely is a temporal consistency problem. The ordering was clear and consistent (most to least popular at the time of each request), but the popularity scores were changing so rapidly that “at time of request” makes the ordering unstable. If the ordering had been determined by popularity at the time I accessed page 1, it would have been stable.
Sure, that popularity score would be stale. But who cares?
Think of it this way: suppose you’re viewing your Twitter timeline in recency order, suppose the pagination (lazy loaded on scroll) worked this way, and suppose new Tweets arrive at the same rate you scroll. You would see the same page of Tweets repeat forever (well, until you hit the pagination cap).
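To make the pathology concrete, here’s a minimal sketch (hypothetical feed, page size of 10) of offset pagination where the ordering is recomputed at each request while new items arrive between requests:

```python
def page(items, offset, size):
    # Re-sort newest-first at the time of each request -- the problematic behavior.
    ordered = sorted(items, reverse=True)
    return ordered[offset:offset + size]

feed = list(range(100))          # item ids; higher id = newer
SIZE = 10

page1 = page(feed, 0, SIZE)      # the ten newest items: ids 99..90
feed += range(100, 110)          # ten new items arrive while we "scroll"
page2 = page(feed, SIZE, SIZE)   # offset 10 now lands on ids 99..90 again

print(page1 == page2)            # True: the same page repeats
```

If items keep arriving at exactly the page rate, every subsequent offset lands on the same ten items.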
This is why people come up with solutions like cursors. But what I was suggesting is that you can instead continue to use offsets (for the benefits discussed in the article like parallelism and predictability) if you paginate on the state of the data at the time you began (edit: or on the state of your sorting criteria at the time you began, which allows for the mitigations I described upthread).
That’s not to suggest that once you begin a pagination, you’ll forever access stale data. It’s to suggest that a set of pagination requests can be treated as a session accessing a stable snapshot.
This can also be totally transparent to the client, and entirely optional (e.g., pagination is performed with an offset and an optional validAt token).
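A rough sketch of what I mean, with a hypothetical server-side paginator (the `validAt` token, `SnapshotPaginator` name, and in-memory snapshot store are all illustrative assumptions, not anyone’s actual API):

```python
import uuid

class SnapshotPaginator:
    """Offset pagination against an ordering frozen at session start."""

    def __init__(self, get_current_ordering):
        # Callable returning item ids in the current sort order.
        self._get_ordering = get_current_ordering
        # validAt token -> frozen ordering. A real system would expire these.
        self._snapshots = {}

    def page(self, offset, size, valid_at=None):
        if valid_at is None:
            # First request of a session: freeze the ordering as of now
            # and hand the client a token identifying that snapshot.
            valid_at = str(uuid.uuid4())
            self._snapshots[valid_at] = self._get_ordering()
        ordering = self._snapshots[valid_at]
        return ordering[offset:offset + size], valid_at

feed = list(range(100))                       # higher id = newer
pager = SnapshotPaginator(lambda: sorted(feed, reverse=True))

page1, token = pager.page(0, 10)              # no token: fresh snapshot
feed += range(100, 110)                       # data changes mid-session
page2, _ = pager.page(10, 10, valid_at=token) # same snapshot: no repeats

print(set(page1) & set(page2))                # set(): pages don't overlap
```

Offsets stay meaningful within the session (so you keep parallelism and predictability), a client that omits the token gets a fresh snapshot, and staleness is bounded by how long you let snapshots live.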