If a big player like Google or Netflix decided to, say, double or triple its initcwnd overnight, the result would probably be more congestion (more overloaded nodes across the network), higher packet loss, increased traffic at peering points, and lower overall throughput to their users than before. At the same time it would put extra load on the network, so even users not using Netflix or Google would be affected and suffer lower throughput.
You can also view the whole situation through the lens of game theory: if everybody in a movie theater got up at the end of the film and ran to the exit at the same time, the result would be a clogged door, and people would get out of the theater more slowly than in the current situation, where people look at the congestion and say "nah, too many people at the exit, I'll sit a while longer". TCP has built-in backoff, and there is a wide range of algorithms that solve the problem of "ramp up until I can send as much as possible as quickly as possible, but if packet loss occurs, slow down so as not to put even more pressure on the network". Search for TCP Reno, Tahoe, Vegas, CUBIC, BIC etc., or look at the Wikipedia page for an overview of what is available: https://en.wikipedia.org/wiki/TCP_congestion_control
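The "go faster while things work, back off hard on loss" idea can be sketched as additive-increase/multiplicative-decrease (AIMD), the scheme behind Reno-style TCP. This is a toy model, not any real kernel's implementation; the function name, the window units, and the per-RTT stepping are my own simplifications:

```python
def aimd_step(cwnd, ssthresh, loss_detected, mss=1):
    """Return the next (cwnd, ssthresh) after one round trip (toy model)."""
    if loss_detected:
        # Multiplicative decrease: halve the window on loss.
        ssthresh = max(cwnd // 2, 2 * mss)
        cwnd = ssthresh
    elif cwnd < ssthresh:
        # Slow start: double the window every RTT (exponential growth).
        cwnd *= 2
    else:
        # Congestion avoidance: additive increase, one segment per RTT.
        cwnd += mss
    return cwnd, ssthresh

# Simulate ten round trips with a single loss event at RTT 5.
cwnd, ssthresh = 1, 64
history = []
for rtt in range(10):
    cwnd, ssthresh = aimd_step(cwnd, ssthresh, loss_detected=(rtt == 5))
    history.append(cwnd)
print(history)  # → [2, 4, 8, 16, 32, 16, 17, 18, 19, 20]
```

The trace shows the characteristic sawtooth: exponential ramp-up, a sharp cut when loss signals congestion, then cautious linear growth. Reno, CUBIC, Vegas etc. differ mainly in how they grow the window and what they treat as a congestion signal.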
EDIT: regarding ISPs applying backpressure: they do. In fact, every router (even the ones in your home) does this automatically: it has a buffer into which packets scheduled for delivery are placed. If the buffer fill rate exceeds the buffer drain rate (i.e. the incoming data rate is higher than the outgoing rate), the buffer runs full and packets get dropped. But this alone does not mitigate the problem: every dropped packet gets retransmitted and comes back a few milliseconds later. So if the sending side didn't back off (i.e. throttle down its transmission rate) and just retransmitted at the same rate as before, the buffer would permanently run full, packet drops would keep occurring, and the connection would basically fail.
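A minimal tail-drop queue sketch (hypothetical, not any real router's code) makes the point concrete: as long as packets arrive faster than the link drains them, the buffer saturates and drops continue every tick; only matching the send rate to the link capacity stops the losses.

```python
def simulate(arrival_rate, drain_rate, capacity, ticks):
    """Count packets a FIFO tail-drop buffer discards over `ticks` steps."""
    queue_len = 0
    dropped = 0
    for _ in range(ticks):
        for _ in range(arrival_rate):   # packets arriving this tick
            if queue_len < capacity:
                queue_len += 1          # room left: enqueue
            else:
                dropped += 1            # buffer full: tail drop
        queue_len = max(0, queue_len - drain_rate)  # link drains packets
    return dropped

# A sender that never backs off: 10 packets in, 6 out, buffer of 20.
print(simulate(arrival_rate=10, drain_rate=6, capacity=20, ticks=50))
# A sender throttled to the link rate: the buffer never overflows.
print(simulate(arrival_rate=6, drain_rate=6, capacity=20, ticks=50))
```

In the first run the buffer fills within a few ticks and then sheds packets at a steady rate forever; in the second run nothing is ever dropped. That steady-state loss is exactly what TCP's backoff exists to avoid.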