That doesn't mean it is "frequently" faster to send packets to other continents than to change pixels on a screen. It doesn't even hold for modern TVs set up for low latency, let alone computer monitors.
There are just so many ways to accidentally get many frames of input lag. OS window compositors generally add a whole frame of input lag globally to every windowed app. Anything running in a browser has a second compositor in between it and the display that can add more frames. GPU APIs typically buffer one or two frames by default. And all of that is on top of whatever the app itself does, and whatever the monitor does (and whatever the input device does if you want to count that too).
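To put rough numbers on how those stages stack up, here's a back-of-the-envelope budget at 60 Hz. The per-stage frame counts are illustrative assumptions pulled from the description above, not measurements of any particular system:

```python
# Back-of-the-envelope input-lag budget for a windowed app at 60 Hz.
# Stage counts are illustrative assumptions, not measurements.
FRAME_MS = 1000 / 60  # ~16.7 ms per frame at 60 Hz

stages = {
    "OS compositor": 1,       # one buffered frame, added globally
    "browser compositor": 1,  # only if the app runs in a browser
    "GPU swap chain": 2,      # typical default API buffering
    "app render loop": 1,     # input is drawn on the *next* frame
}

total_frames = sum(stages.values())
total_ms = total_frames * FRAME_MS
print(f"{total_frames} frames ≈ {total_ms:.0f} ms, "
      "before the monitor and input device add their own")
```

Five frames is already over 80 ms, and that's before counting monitor processing or input-device polling, which is why "accidentally" accumulating many frames is so easy.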
A game that runs at half the frame rate of a cheap TV, with an architecture deliberately designed not to draw frames immediately, doesn't support your point. That would be like someone deciding to send packets only every 100 ms and then claiming the network added 100 ms of latency.
All of this also overlooks that packets can be fired off at any moment, whereas with vsync on, a frame has to wait for the next vblank. Take vsync away and you can set pixels with less latency.
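The cost of that wait is easy to quantify: assuming rendering itself is instant, an input arriving at a random time has to sit until the next 16.7 ms vblank boundary, which averages out to about half a frame. A quick sketch:

```python
# Extra latency added purely by waiting for vblank at 60 Hz, assuming
# the frame itself renders instantly. With vsync off (tearing allowed),
# new pixels can instead go out on the very next scanline.
FRAME_MS = 1000 / 60

def vsync_wait(t_ms: float) -> float:
    """Time from an input at t_ms until the next vblank boundary."""
    return (FRAME_MS - t_ms % FRAME_MS) % FRAME_MS

# Average over many uniformly distributed input arrival times.
samples = [vsync_wait(t / 100) for t in range(int(FRAME_MS * 100))]
avg = sum(samples) / len(samples)
print(f"average vblank wait ≈ {avg:.1f} ms")  # roughly half a frame
```

That's ~8 ms of unavoidable average delay from the vsync wait alone, on top of whatever the buffering stages add.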
Is there a way to verify that this is the case, specifically on X11 Linux?
Also, does variable refresh rate (e.g. FreeSync) help with this?