They haven't improved the underlying link rate at all. In fact, the FEC overhead is going to reduce the effective link rate. However, in some edge-case high packet loss scenarios, the reduced packet loss will more than make up for the reduced effective link rate.
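A back-of-the-envelope sketch of that trade-off (a Python illustration; the 2% loss rate, 10% FEC overhead, and residual-loss figures are all made-up assumptions, and the Mathis et al. approximation for TCP throughput ignores the link-rate ceiling):

```python
import math

MSS = 1460    # bytes per TCP segment
RTT = 0.05    # round-trip time in seconds

def tcp_goodput(loss_rate):
    # Mathis et al. approximation: throughput ~ MSS / (RTT * sqrt(p)).
    # Ignores the link-rate cap; good enough for comparing loss regimes.
    return MSS / (RTT * math.sqrt(loss_rate))

plain = tcp_goodput(0.02)                          # 2% loss, no FEC
fec_overhead = 0.10                                # 10% of capacity spent on repair data
coded = (1 - fec_overhead) * tcp_goodput(0.0001)   # FEC masks almost all losses

assert coded > plain   # the sqrt(loss) term dominates the overhead term
```

Because TCP throughput scales with 1/sqrt(loss), knocking loss down by two orders of magnitude buys far more than the fixed FEC overhead costs, at least in this lossy regime.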
I remember doing research on FEC codes back in 2003-2004 for developing a protocol for sending large files over satellite links to multicast recipients when I was working for SmartJog.
Interestingly, this tool looks like it is useful for the problem you describe:
udpcast (http://www.udpcast.linux.lu/satellite.html) -> Sends/receives files over UDP. Supports FEC.
netcat -> Join file I/O (from udpcast) to local TCP/UDP sockets
openvpn / iptables userspace -> Provide connection routing.
Seems like an evening's work.

Edit: udpcast might not be suitable for this. I am surprised no one has already built a simple UDP FEC tunnel program...
Edit: Not RFPs, duh
Universities desperately need to commercialise their research in order to survive.
Basically, you split your data into blocks, XOR random blocks together, and the client can recreate the data by solving the equations describing which blocks were XORed with which.
A good tutorial is here: http://blog.notdot.net/2012/01/Damn-Cool-Algorithms-Fountain...
And a fast implementation: http://en.wikipedia.org/wiki/Raptor_code
This problem is very common, but it wants fixing at L2, not in TCP. Turning up the FEC at L2 would reduce its capacity even further, though, since more of the bandwidth gets taken up by the FEC (and the same goes for this TCP-level FEC).
3G gets it wrong at the other extreme: it pretends to always have 0% packet loss; your packets just sometimes show up 30-100 seconds late, in order.
Let's suppose you have a mathematical process that outputs a stream of [useful] data. The description of the process is much, much smaller than the output. You can "compress" the data by sending the process (or equation) instead. Think π. Do you transmit a million digits of π or do you transmit the instruction "π to a million digits"? The latter is shorter.
Now, reverse the process: given an arbitrary set of data, find an equation (or process) that represents it. Not easy for sure. Perhaps not possible. I recall as a teenager reading an article about fractals and compression that called on the reader to imagine a fractal equation that could re-output your specific arbitrary data.
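The forward direction can be toy-modeled with a seeded PRNG (Python; the seed plays the role of the "process description", and going the other way, finding a seed for arbitrary given data, is exactly the hard and perhaps impossible direction described above):

```python
import random

SEED, N = 42, 100_000   # the whole "message": a seed and a length

def expand(seed, n):
    """The deterministic process: expand a few bytes into n bytes."""
    rng = random.Random(seed)
    return bytes(rng.getrandbits(8) for _ in range(n))

data = expand(SEED, N)
# Sending (SEED, N) instead of `data` "compresses" 100 kB into a few
# bytes, but only because the data was generated by a known process.
assert expand(SEED, N) == data   # the receiver regenerates it exactly
```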
If I've totally missed the article's point, please correct me, but explain why it also talks about algebra.
EDIT: I re-read and noticed this: "If part of the message is lost, the receiver can solve the equation to derive the missing data." I can see the FEC nod here.
Guh. I guess I'm blind tonight. "Wireless networks are in desperate need for forward error correction (FEC), and that’s exactly what coded TCP provides." I cannot for the life of me understand why they'd need to keep this a secret.
They have a patent on this: http://www.google.com/patents/US20120201147
(Disclosure: I was an intern there in 2009, when it was IPeak Networks.)
Obviously this is a much weaker and less efficient solution than what is proposed in the paper, but it would be trivial to implement. I believe netem allows you to simulate this.
It also means less power consumption on mobile phones. There will be no need to increase signal power to get better speed or voice quality.
Also, how were we not doing this already?
Also, I need a writer. Whoever wrote this up made it sound WAY cooler than when I explain error correcting codes.
TCP is old and reliable and there are massive costs associated with switching to anything else. I think that's why we weren't doing this yet, and I think that's why this initiative will fail too.
I wonder if something along the lines of old-school parity (PAR) files would work in the packet world? Basically just blast out the packets, and reconstruct any that were lost using the parity data sent with the other packets.
The way coding works is that you divide the data into n data packets, calculate c coding packets, and then you can lose any c out of the (n+c) packets and still reconstruct the data. You need to see all the data packets in a group before you can finish calculating any of the coding packets, but nothing stops you from sending the data packets immediately.
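The simplest instance of this, analogous to a parity file, is c = 1: one coding packet that is the XOR of the n data packets, so any single loss among the n+1 transmitted packets is recoverable. A toy Python sketch with made-up 4-byte packets:

```python
def xor_packets(packets):
    """XOR equal-length packets together byte by byte."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            out[i] ^= byte
    return bytes(out)

data = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]   # n = 4 data packets
parity = xor_packets(data)                    # c = 1 coding packet

# Suppose data[1] is dropped in transit; XOR-ing everything that
# did arrive (including the parity packet) rebuilds it.
received = [data[0], data[2], data[3], parity]
assert xor_packets(received) == data[1]
```

Tolerating c > 1 losses needs a real erasure code (Reed-Solomon, for example), which is what the general c-coding-packets scheme above amounts to.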
On the other hand, non-hardware-based error correction that allows recovery from a large amount of data loss is relatively processor intensive, and the processors in a lot of network equipment are very slow. So you can easily see an increase in latency, not because the packets take longer to send, but because the processor in the network equipment is too busy calculating error correction to forward your packets promptly.
I imagine the higher latency was used to offset lower bandwidth due to FEC encoding loss, by amortizing it over more bits.