Reading a million individual keys would be quite rare, I'd guess, but that isn't really the issue for a large range read anyway. For a range, only the boundary keys are recorded: if you read the range A-Z, the conflict range stored is just the two endpoints A and Z, not every key in between, so its size doesn't grow with the number of keys covered.
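To make that concrete, here's a toy sketch (hypothetical names, not FoundationDB's actual implementation) of a read conflict set that stores a range as its two boundary keys, so the stored size is independent of how many keys fall inside the range:

```python
# Toy model: a conflict set records a range by its endpoints only.

class ReadConflictSet:
    def __init__(self):
        self.ranges = []  # list of (begin, end) boundary pairs

    def add_range(self, begin, end):
        # Only the two endpoints are recorded, not every key between them.
        self.ranges.append((begin, end))

    def size(self):
        # Two boundary keys per range, regardless of the range's width.
        return 2 * len(self.ranges)

s = ReadConflictSet()
s.add_range(b"A", b"Z")   # covers many keys, but stores only 2 boundaries
print(s.size())           # 2
```

Conflict checking against a committed write then only needs an interval comparison against those boundaries, which is why wide range reads stay cheap to track.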
More relevant for the current storage engines (though this appears to be changing in a future storage engine, judging from the code and the abstract for an upcoming talk) is the five-second transaction duration limit. That limit exists simply because the multi-version data structure only retains the last five seconds of versions.
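A minimal sketch of that idea, assuming a simplified in-memory MVCC store (hypothetical names, not FDB's real code): versions older than a five-second window are pruned, so a transaction whose read timestamp has aged out of the window can no longer be served a consistent snapshot and fails, analogous to FDB's transaction_too_old error.

```python
# Toy MVCC store with a 5-second version retention window.

MVCC_WINDOW_SECONDS = 5.0

class VersionedStore:
    def __init__(self):
        self.versions = []  # list of (timestamp, data), oldest first

    def commit(self, now, data):
        self.versions.append((now, data))
        # Drop versions that fell out of the retention window.
        self.versions = [(t, d) for (t, d) in self.versions
                         if now - t <= MVCC_WINDOW_SECONDS]

    def read_at(self, read_timestamp, now):
        if now - read_timestamp > MVCC_WINDOW_SECONDS:
            # The snapshot this transaction needs has been discarded.
            raise RuntimeError("transaction_too_old")
        # Return the newest version at or before the read timestamp.
        eligible = [d for (t, d) in self.versions if t <= read_timestamp]
        return eligible[-1] if eligible else None

store = VersionedStore()
store.commit(now=0.0, data="v0")
store.commit(now=1.0, data="v1")
print(store.read_at(read_timestamp=1.0, now=2.0))   # v1
try:
    store.read_at(read_timestamp=1.0, now=7.0)      # aged past the window
except RuntimeError as e:
    print(e)                                        # transaction_too_old
```

The real system is of course far more involved, but the shape is the same: keeping only a short window of versions bounds memory, at the cost of capping how long any one transaction can stay open.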