Actual 1:1 dedicated geostationary service, which is very expensive in $/Mbps, has a fixed, flat 492 to 495ms RTT, plus or minus a tiny bit either way depending on the modem's FEC encode/decode type.
Consumer-grade geostationary can be anywhere from 495ms in the middle of the night local time to 1350ms or worse.
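As a quick sanity check on that 492-495ms floor, here's the geometry, assuming the satellite is directly overhead (best case; real slant paths and modem processing add the remaining milliseconds):

```python
# Back-of-the-envelope minimum GEO round-trip time.
C_KM_S = 299_792        # speed of light in vacuum, km/s
GEO_ALT_KM = 35_786     # geostationary altitude above the equator

# A full RTT crosses the up/down hop four times:
# ground -> sat -> ground (request), then the same again (reply).
rtt_ms = 4 * GEO_ALT_KM / C_KM_S * 1000
print(f"minimum GEO RTT: {rtt_ms:.0f} ms")  # ~477 ms before slant angle and modem overhead
```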
Re: the figure for terrestrial fiber service, I'm curious how the presumed residential last-mile "fiber" link in Geoff's example, which is not real gigabit service, would compare to one of the symmetric gigabit last-mile operators that exist in some cities, where you can see actual 980 x 980Mbps speed test results from fast.com or speedtest.net in a browser.
I'm always very suspicious of anything that calls itself fiber but is limited to something like 25Mbps up: either it's a totally artificial limit, or in reality it's a VDSL2 link, or "fiber" delivered over DOCSIS 3 copper cable with limited upstream RF channel allocation, etc.
Well, if you pay for gigabit over GPON it means you have at worst a 1:2 split, which gives you 1.2/1.2 Gbps. Even assuming they're still using an MPoA/ATM transfer layer like they did on DSL (keep in mind these are ITU standards; GPON is not Ethernet. There are fiber networks which are Ethernet, but those are AONs by definition, with no splitters possible, and use 1000BASE-BX10), that doesn't have nearly enough overhead to reduce 1.2 Gbps to below 1 Gbps.
Not sure what you mean by "at worst a 1:2 split", since GPON last mile can be implemented at the physical fiber level in many configurations, such as a 1:8, 1:16, or 1:32 split. Your ISP isn't likely to tell you the optical link budget and split ratio of your segment.
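For a feel of what the split ratio does to the per-subscriber ceiling, here's a rough worst-case calculation (everyone pulling at once), using the ITU-T G.984 GPON line rates; real usable throughput is a bit lower after framing overhead:

```python
# Worst-case per-subscriber share of a GPON tree at various split ratios.
GPON_DOWN_GBPS = 2.488   # ITU-T G.984 downstream line rate
GPON_UP_GBPS = 1.244     # ITU-T G.984 upstream line rate

for split in (2, 8, 16, 32, 64):
    down = GPON_DOWN_GBPS / split * 1000  # Mbps per user
    up = GPON_UP_GBPS / split * 1000
    print(f"1:{split:<2} split -> {down:7.1f} Mbps down / {up:6.1f} Mbps up per user")
```

In practice oversubscription works because subscribers rarely saturate their links simultaneously, which is why a 1:32 split can still be sold as "gigabit".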
But within the Starlink network, signals will travel at essentially 100% of c, and as the constellation approaches design capacity the paths will get closer to ideal too (at least to the nearest ground station). At long enough range the ~40% speed advantage alone will make up for the orbital RTT penalty, even before path savings, which means Starlink will be able to offer much lower latency than fiber. I think it'll be the first time we see a weird split where your local connection speed is no longer the sole deciding factor, and you can actually see a radical latency difference between local and very-long-range traffic for two different WAN types.
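The crossover point can be sketched numerically. Light in single-mode fiber travels at roughly 0.68c (refractive index of silica), while inter-satellite laser links run at ~c; the shell altitude and the 10% routing detour below are illustrative assumptions, not Starlink's actual topology:

```python
# Fiber vs LEO-laser-link RTT over increasing ground distance.
C_KM_S = 299_792
FIBER_FACTOR = 0.68      # light in glass travels at roughly 0.68c
LEO_ALT_KM = 550         # assumed Starlink shell altitude

def fiber_rtt_ms(km):
    return 2 * km / (C_KM_S * FIBER_FACTOR) * 1000

def leo_rtt_ms(km, detour=1.1):
    # up to the shell + along the constellation (with routing detour) + down,
    # counted both ways for a round trip
    one_way_km = 2 * LEO_ALT_KM + km * detour
    return 2 * one_way_km / C_KM_S * 1000

for km in (1_000, 5_000, 10_000, 15_000):
    print(f"{km:>6} km: fiber {fiber_rtt_ms(km):6.1f} ms  vs  LEO {leo_rtt_ms(km):6.1f} ms")
```

Under these assumptions the LEO path loses at short range (the up/down hops dominate) but wins somewhere past a few thousand kilometres, which is exactly the "radical latency split" described above.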
I have strong doubts this capacity will be used for random user traffic. It's worth much more to use it to serve oceans, poles, islands, and areas where ground stations can't be built (yet). Other capacity could easily be sold to HFT firms, etc.
https://www.google.com/maps/d/viewer?mid=1H1x8jZs8vfjy60TvKg...
Personally, I'm excited to see a latency reduction for across-the-pond comms.
I don't see much reason to be pessimistic about it, we shall see!
Wait, what happened to the missing 225 Starlink spacecraft?
Yesterday was the very first time I have seen any issues at all; during a pretty ugly freak wind/snow storm, things got a little spotty off and on for an hour, but service was flawless throughout the rest of it.
Could be a regional thing too I suppose? I'm around 51N 111W and generally speaking it seems like I have really good coverage and maybe a relatively empty cell?
Actually figuring out the max speed for a connection is a surprisingly hard problem. Here are some of the things I found:
1. Latency is absolutely everything. With sub-2ms latency I could get 8.5Gbps downloads in a browser via JS over 10GbE on a MacBook Pro. Bump that up to 100ms and throughput plummets. I forget the exact numbers, but this has real-world consequences. Australia, for example, rolled out its ridiculous NBN network with a max speed of 100Mbps. Australia has a built-in latency of 150-200ms to the US just by distance, so the max effective download speed would be a mere fraction of that;
2. Larger blobs are better for overall throughput, but depending on your device this may blow up your browser. Unfortunately, on the Internet you're never really going to reliably get an MTU >1500 unless you control every node on the path;
3. This sort of traffic exposed a lot of weird browser bugs, even with Chrome. For example, Chrome could get into a state where, despite all my efforts, the temporary traffic would get cached, fill up your /tmp partition on Linux, and blow up with weird errors that don't really give you any clue that that's the problem; only restarting Chrome would solve it. I could never figure out why. Not sure if it's still an issue;
4. The author, I guess, was talking about Linux defaults, but there are a lot of kernel parameters that affect this (e.g. RPS [2] is absolutely essential for high-throughput TCP beyond a certain point);
5. BBR was in development at the time (ironically, I sat next to that team for a few months), so I can't really speak to how it changes things. I was doing this development back in 2016-2017;
6. Among people who knew more about this than me, the consensus seemed to be that BSD's TCP stack was superior to Linux's. Anecdotally this is backed up by real-world examples like Facebook having extreme difficulty moving WhatsApp from FreeBSD to Linux; that apparently took many years; and
7. I agree with the author here on the impact of packet loss. Its effect on throughput can be devastating, and (again, pre-BBR) the recovery time back to maximum throughput could be really long.
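Point 1 above is simple arithmetic: a single TCP stream can have at most one receive window in flight per RTT. A sketch with an unscaled 64 KiB window shows how the Australia numbers fall out:

```python
# Single-stream TCP throughput ceiling: window / RTT.
WINDOW_BYTES = 64 * 1024   # classic 64 KiB window, no window scaling

def max_mbps(rtt_ms):
    return WINDOW_BYTES * 8 / (rtt_ms / 1000) / 1e6

for rtt in (2, 20, 100, 200):
    print(f"RTT {rtt:>3} ms -> at most {max_mbps(rtt):8.1f} Mbps per connection")
```

With window scaling (on by default in modern stacks) the window can grow far beyond 64 KiB, but the ceiling always remains window/RTT, so latency still dominates when anything clamps the window.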
[1]: http://speed.googlefiber.net/
[2]: https://access.redhat.com/documentation/en-us/red_hat_enterp...
Regarding BBR I've also found it to be a lifesaver for individual streams over high latency internet links, particularly when there is loss.
Also, kudos (?) -- it outlived Google+, which is still linked from the footer :-D
https://en.wikipedia.org/wiki/Bandwidth-delay_product
https://networklessons.com/cisco/ccnp-route/bandwidth-delay-...
Every now and then some network middlebox ends up screwing with your TCP window sizing and then the bandwidth delay product rears its ugly head.
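The flip side of the bandwidth-delay product is the window a path needs to keep the pipe full; a middlebox that clamps the advertised window below this silently caps throughput. A quick sketch:

```python
# Bytes that must be in flight to sustain a given rate over a given RTT.
def bdp_bytes(mbps, rtt_ms):
    return mbps * 1e6 / 8 * (rtt_ms / 1000)

for mbps, rtt in ((100, 20), (1000, 20), (1000, 60), (1000, 660)):
    kib = bdp_bytes(mbps, rtt) / 1024
    print(f"{mbps:>4} Mbps at {rtt:>3} ms RTT needs {kib:10.1f} KiB in flight")
```

Note the last row: at geostationary latencies, gigabit needs tens of megabytes in flight, which is why buffer sizing and window scaling matter so much on those links.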
Terrestrial Fibre: 20.6ms IPv4 / 21.5ms IPv6
Geo-Stationary Satellite Service: 660ms IPv4
LEO Satellite Service: 60.6ms IPv4
https://linux.die.net/man/8/ping
Sure, it is not 100% reliable, and in some cases it can show values that don't correspond to the real delay.
GSS ~650 ms
LEO ~60 ms