I have a checksum, uh, dream for lack of a better word, but I fear I lack the talent to pull it off. The problem with a checksum is that it only tells you that an error exists (hopefully), but it does not tell you where.
Imagine writing out your stream of bytes in an m x n grid. You could then compute a checksum for each of the m rows and each of the n columns, costing an additional (m + n) checksum bytes of storage. A single error is localized as the intersection of the row and column with bad checksums. One could simply iterate through the other 255 possible byte values and correct the issue. Two errors give one of two situations. The more likely is four bad checksums (two rows, two columns), and you could again iterate. The less likely is three bad checksums, because the two errors share a row or column.
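Here's a minimal sketch of that single-error case, assuming a plain XOR checksum per row and per column (my own choice of checksum; the function names are made up for illustration). It locates the bad byte at the row/column intersection and then tries the other 255 values, exactly as described:

```python
from functools import reduce

def xor_sum(byte_seq):
    # XOR all bytes together; a stand-in for any per-line checksum
    return reduce(lambda a, b: a ^ b, byte_seq, 0)

def make_checksums(grid):
    rows = [xor_sum(row) for row in grid]          # one checksum per row
    cols = [xor_sum(col) for col in zip(*grid)]    # one checksum per column
    return rows, cols

def locate_and_fix(grid, rows, cols):
    """Correct a single corrupted byte using stored row/column checksums."""
    new_rows, new_cols = make_checksums(grid)
    bad_r = [i for i, (a, b) in enumerate(zip(rows, new_rows)) if a != b]
    bad_c = [j for j, (a, b) in enumerate(zip(cols, new_cols)) if a != b]
    if not bad_r or not bad_c:
        return                      # nothing detected
    r, c = bad_r[0], bad_c[0]       # error sits at the intersection
    for candidate in range(256):    # iterate the other 255 possibilities
        if candidate == grid[r][c]:
            continue
        grid[r][c] = candidate
        if make_checksums(grid) == (rows, cols):
            return                  # both checksums satisfied again

grid = [[1, 2, 3], [4, 5, 6]]       # a 2 x 3 grid of bytes
rows, cols = make_checksums(grid)
grid[1][2] ^= 0xFF                  # inject a single-byte error
locate_and_fix(grid, rows, cols)
```

With XOR specifically you don't even need the 256-way loop (the bad byte is just `grid[r][c] ^ rows[r] ^ new_rows[r]`), but the brute-force version works for any one-byte checksum.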
I ran the math out for the data rewritten as a volume (three dimensions) and a hypervolume (four dimensions). I think the hypervolume was "too much" checksum, but the 3-D version looked ... doable.
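The overhead arithmetic bears that out. Assuming a d-dimensional cube of side s (my framing, not necessarily the exact math the author ran): it holds s**d data bytes and needs one checksum byte per axis-aligned line, which is d * s**(d-1) checksums, so the overhead ratio is d/s. For a fixed total size, higher dimensions force a smaller side length and the ratio climbs fast:

```python
def overhead(n_bytes, d):
    # Side length of a d-cube holding roughly n_bytes of data
    s = round(n_bytes ** (1 / d))
    data = s ** d
    checks = d * s ** (d - 1)       # one checksum per axis-aligned line
    return checks / data            # checksum bytes per data byte

for d in (2, 3, 4):
    print(f"{d}-D overhead for ~64 KiB: {overhead(65536, d):.1%}")
```

For ~64 KiB this gives roughly 0.8% overhead in 2-D, about 7.5% in 3-D, and a full 25% in 4-D, which lines up with the hypervolume being "too much" checksum while 3-D stays doable.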
Someone smarter than I has probably already done this.