At the very least, this should use SHA-256.
If they really did it right, though, the protocol would use a secure tree hash. The construction they're using has trivial collisions, which are only avoided because the size of the file comes from a trusted source. A good hash (e.g. the Sakura construction) doesn't have this problem. Fixing that would make the resulting torrent files or URLs a bit shorter, as the size could potentially be omitted.
Could somebody elaborate on this? I assume that you're referring to the fact that (without the file size information) somebody could pretend that the concatenation of the child hashes at an inner node is actually the file content in this position. Is there anything else?
It seems that this could be trivially fixed by adding a single bit to the data hashed in each node to indicate whether the node is a leaf or an inner node, or by just adding the size information to the hash data in the root node.
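A minimal sketch of that fix, assuming SHA-256 and the 0x00/0x01 leaf/inner prefixes used by RFC 6962 (Certificate Transparency); the chunking and odd-node handling here are illustrative, not from the BEP-30 spec:

```python
import hashlib

LEAF, INNER = b"\x00", b"\x01"  # domain-separation prefixes, as in RFC 6962

def leaf_hash(chunk: bytes) -> bytes:
    return hashlib.sha256(LEAF + chunk).digest()

def inner_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(INNER + left + right).digest()

def merkle_root(chunks: list[bytes]) -> bytes:
    level = [leaf_hash(c) for c in chunks]
    while len(level) > 1:
        nxt = [inner_hash(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])  # promote the unpaired node to the next level
        level = nxt
    return level[0]
```

With the prefix in place, a leaf whose content happens to equal the concatenation of two child hashes no longer collides with the inner node above them, because the two hashes are computed over different first bytes.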
Actually, you want to know the file size very early anyway, since this simplifies the data structures required to keep track of chunks you already have, allows you to already reserve hard disk space, and so on.
The file size being there does complicate an attack - but with the weaknesses in SHA-1, I certainly wouldn't feel comfortable with it.
This is a disaster of a spec; we already had TTH at that point, and it at least did this better. The proposal needed revising and should not be implemented by anyone.
Today, you should consider using BLAKE2b's tree hash for this purpose. It walks all over this construct from every direction.
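Python's hashlib exposes BLAKE2's tree-mode parameters directly. Here's a minimal two-leaf sketch; the fanout/depth/leaf-size values are illustrative, and a real implementation would follow the full BLAKE2 tree-hashing rules for arbitrarily many leaves:

```python
import hashlib

FANOUT, DEPTH, LEAF_SIZE, INNER_SIZE = 2, 2, 4096, 32

def _node(data: bytes, offset: int, node_depth: int,
          last: bool, digest_size: int) -> bytes:
    # All nodes share the same tree parameters; position is encoded in
    # node_offset/node_depth, so leaves and inner nodes can't be confused.
    return hashlib.blake2b(
        data, digest_size=digest_size,
        fanout=FANOUT, depth=DEPTH, leaf_size=LEAF_SIZE,
        inner_size=INNER_SIZE, node_offset=offset,
        node_depth=node_depth, last_node=last,
    ).digest()

def tree_root(chunk0: bytes, chunk1: bytes) -> bytes:
    h0 = _node(chunk0, 0, 0, False, INNER_SIZE)   # leaf 0
    h1 = _node(chunk1, 1, 0, True, INNER_SIZE)    # last leaf
    return _node(h0 + h1, 0, 1, True, 64)         # root over leaf digests
```

Because the node depth, offset, and last-node flag are mixed into each hash's parameter block, the leaf-versus-inner ambiguity discussed above can't arise by construction.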
> the protocol [sh]ould use a secure tree hash
Indeed, these are two huge flaws in the proposal that would surely bite us sooner or later.
Moreover, these issues aren't even mentioned in the "Discussion" section. Instead, they discuss pretty minor stuff such as binary versus n-ary trees or how to interface legacy clients.
Realistically, these are bits of a video stream, not your Bitcoin wallet or some other bits where security is of the utmost concern. We're talking millions of dollars of equipment to find a collision in SHA-1 today...
How exactly is a 160-bit hash too short? Collisions can be had after 2^80 tries in the naive scenario and 2^57.5 with an active attacker; not exactly easy...
Breaking crypto, especially new crypto, should pass a much higher bar than "not exactly easy". 2^57.5 is not all that large by the standards of a big cloud provider or a government.
https://datatracker.ietf.org/wg/ppsp/documents/
Basically, instead of the stream source having to sign every single new chunk (so peers can verify that they're getting the right data), the source signs subtree hashes of the new data and slowly builds up a larger hash tree. Once the stream is over, the complete hash tree is instantly seedable by anybody in the original stream.
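A rough sketch of that idea. HMAC stands in here for a real asymmetric signature (e.g. Ed25519), and the batch size and hash choice are illustrative, not taken from the PPSP documents:

```python
import hashlib, hmac

SECRET = b"demo-signing-key"  # stand-in: a real source would sign with a private key

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def sign(root: bytes) -> bytes:
    # Placeholder for an asymmetric signature over the subtree root.
    return hmac.new(SECRET, root, hashlib.sha256).digest()

def subtree_root(chunks: list[bytes]) -> bytes:
    level = [h(c) for c in chunks]
    while len(level) > 1:
        nxt = [h(level[i] + level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

# Instead of signing each of N chunks, sign one root per batch of, say, 64:
batch = [b"chunk%d" % i for i in range(64)]
signature = sign(subtree_root(batch))  # one signature covers the whole batch
```

The signing cost drops from one signature per chunk to one per subtree, and the subtree roots later slot into the full hash tree for post-stream seeding.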
BitTorrent traditionally solves this using a hash list. All of the data in a torrent is broken up into pieces of a chosen size, and a hash is calculated for each piece individually. (Piece boundaries are generally not aligned with file boundaries, which is why, even if you tell your torrent client to download only a certain set of files, you may still end up with some data from adjacent files.)
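In code, the hash list is just the SHA-1 digests of fixed-size slices of the concatenated file data, one after another (the 256 KiB piece size below is a common choice, not mandated):

```python
import hashlib

PIECE_SIZE = 256 * 1024  # 256 KiB; real torrents pick a size per torrent

def piece_hashes(data: bytes) -> bytes:
    # Pieces are cut from the concatenation of all files in the torrent,
    # so piece boundaries generally ignore file boundaries.
    hashes = b""
    for off in range(0, len(data), PIECE_SIZE):
        hashes += hashlib.sha1(data[off:off + PIECE_SIZE]).digest()
    return hashes  # 20 bytes per piece, stored as one long string
```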
This entire hash list is included in the torrent file. For torrents with a lot of data, it can be 10MB or more.
If you're using a magnet link, that torrent file needs to be downloaded from peers. This brings back the original problem: you need to download this entire large file before you know that the peer isn't just sending you random data.
BEP-30 proposes a solution: generate a binary hash tree whose leaves are the torrent pieces, and include only the single root hash in the torrent file to keep it small. When you're getting pieces from a peer, they send you the missing inner hashes of the tree that you need to verify the piece.
The minimum data transfer in ideal circumstances is increased a bit, but the peer-to-peer system is made more robust, able to identify invalid data much faster.
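The verification step can be sketched like this, assuming SHA-1 leaves, a binary tree, and sibling hashes supplied bottom-up by the peer (the function name and argument order are my own, not from BEP-30):

```python
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def verify_piece(piece: bytes, index: int,
                 uncles: list[bytes], root: bytes) -> bool:
    """Recompute the path from a leaf to the root using the sibling
    hashes the peer sent, ordered from the leaf level upward."""
    node = sha1(piece)
    for sibling in uncles:
        if index % 2 == 0:
            node = sha1(node + sibling)   # we are the left child
        else:
            node = sha1(sibling + node)   # we are the right child
        index //= 2
    return node == root
```

Only log2(n) sibling hashes are needed per piece, so a bad piece is rejected as soon as it arrives rather than after downloading a multi-megabyte hash list.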
I think it's a great modification to the protocol. Unfortunately it isn't widely-enough supported to be practical for general use.
There should be roughly as many internal nodes as there are leaves, so there is a linear space increase. As the leaves are much bigger than the internal nodes, the linear factor is small.
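To put a number on that linear factor, here's a quick back-of-the-envelope calculation (20-byte SHA-1 hashes and a 256 KiB piece size are assumptions for illustration):

```python
def tree_overhead(n_pieces: int, hash_len: int = 20,
                  piece_size: int = 256 * 1024) -> float:
    internal = n_pieces - 1  # a full binary tree with n leaves has n-1 internal nodes
    metadata = (n_pieces + internal) * hash_len
    data = n_pieces * piece_size
    return metadata / data   # grows linearly in n, but with a tiny constant

# e.g. 4096 pieces (1 GiB at 256 KiB per piece): ~160 KiB of hashes total,
# an overhead on the order of 0.015% of the payload.
```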
but bup[0] does so within a git repository to make incremental backup more efficient.
Finally! BitTorrent's designers are acknowledging the centralization deficiency in the torrent protocol and implementing the Merkle/root-hash distribution model that eDonkey/eMule have used since the 2000s:
https://en.wikipedia.org/wiki/Ed2k_URI_scheme#eD2k_hash_algo...
15 years later, everything old is new again.
Thus it didn't have some of the benefits of a full tree that this 2009 BitTorrent spec was hoping to achieve, such as verifying smaller chunks without a metadata cost that grows linearly with the size of the full resource.
(AFAIK, the 1st application of multi-level Merkle trees to P2P filesharing was the TigerTree hash I wrote up with Justin Chapweske in 2002. At first glance, it looks like this proposal makes the same mistake we did in our first draft, not distinguishing between leaf and node hashes, corrected in the final TigerTree spec version of March 2003.)
To recap:
ICH : Intelligent Corruption Handling : also known as the old, original ed2k method. The root hash is MD4 and is generated from a series of MD4 hashes computed over 9.5MB chunks. If a file is below 9.5MB, then the ed2k root hash is just the plain MD4 hash of the file.
AICH : Advanced Intelligent Corruption Handling : a full Merkle tree using SHA-1, where chunks are 180KB (with an exception for the chunk on the 9.5MiB boundary). This odd chunking lets 53 of these AICH chunks map exactly onto one ed2k chunk.
More on this available at :
http://www.emule-project.net/home/perl/help.cgi?l=1&rm=show_...
9 years.