Tus, however, works as a thin layer on top of HTTP, so it’s easy to drop into existing websites/load balancers/auth proxies/firewalls. BitTorrent ports are often closed off at airports/hotels/corporate networks. But websites work. And if you can access a website, you will be able to upload files to it with tus.
Another difference is that tus assumes classic client/server roles. The client uploads to the server. Downloading is done via your regular HTTP stack and is not facilitated by tus. BitTorrent facilitates both uploading and downloading in a single client. It is more peer-to-peer and decentralized in nature, whereas tus clients typically upload to a central point (e.g. many video producers uploading to Vimeo; not a contrived example, as Vimeo adopted tus).
There are more differences (Discoverability, trackers, pull vs push, pulling from many peers at once) but the comment is getting very long so I hope this already helps a bit :)
Happy to dive deeper into this at request tho :)
https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview....
* chunks need to be >5 MB, which can be problematic on flaky/poor connections (rural areas, tunnels, clubs/basements, people on the move switching connections all the time)
* your S3 bucket needs to be world-writable, or you need to deploy signature authentication
* there’s S3 vendor lock-in some might worry about
* it’s not an open protocol, so there’s no chance of advancing it with the community
That said, that still leaves a large audience for direct S3 resumable uploads, and I’m thankful AWS offers it!
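To make the chunk-size constraint concrete, here’s a rough Python sketch of how a client might plan part boundaries for an S3 multipart upload (the helper name `split_parts` is mine, not from any AWS SDK). S3 requires every part except the last to be at least 5 MiB:

```python
MIN_PART = 5 * 1024 * 1024  # S3's minimum size for all parts except the last

def split_parts(size: int, part_size: int = MIN_PART) -> list[tuple[int, int]]:
    """Return (offset, length) pairs covering `size` bytes.

    Every part except the final one is exactly `part_size` bytes;
    only the last part may be smaller.
    """
    if part_size < MIN_PART:
        raise ValueError("part_size below S3's multipart minimum")
    parts = []
    offset = 0
    while offset < size:
        length = min(part_size, size - offset)
        parts.append((offset, length))
        offset += length
    return parts
```

On a flaky connection, losing a part mid-transfer means re-sending at least 5 MiB, which is exactly the pain point for the rural/tunnel/on-the-move cases above.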
It would seem logical and practical to allow PATCH to modify any part of a resource that is already present on the server and/or to extend it by appending. This would also make the whole thing useful beyond resuming of interrupted uploads, e.g. to allow for rsync-style updating of existing files.
My point is that you appear to be pushing for adoption of an extension that handles one specific use case for PATCH, when a more general extension is trivially possible with little to no extra effort.
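To illustrate what such a general PATCH could mean, here’s a hypothetical sketch (my own semantics, not from any spec) of applying a byte-range patch to a server-side resource. It covers both cases mentioned above: modifying any part that is already present, and extending by appending:

```python
def apply_range_patch(resource: bytes, offset: int, chunk: bytes) -> bytes:
    """Overwrite `resource` starting at `offset` with `chunk`,
    extending the resource if the chunk runs past its current end."""
    if offset > len(resource):
        raise ValueError("offset beyond end of resource (no gaps allowed)")
    return resource[:offset] + chunk + resource[offset + len(chunk):]
```

Resuming an interrupted upload is then just the special case where `offset` equals the current length of the resource.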
I don't think I have noticed Firefox getting worse at this over time, but I'm not downloading large files every day. Would you be willing to share where you're noticing this?
[0]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requ...
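For reference, resuming a download with range requests [0] boils down to asking for everything past the bytes already on disk, then checking the server’s answer. A small sketch (the helper names are mine):

```python
import re

def resume_range_header(bytes_on_disk: int) -> dict:
    """Open-ended byte range per RFC 7233: 'bytes=N-' means
    from offset N through the end of the representation."""
    return {"Range": f"bytes={bytes_on_disk}-"}

def parse_content_range(value: str) -> tuple[int, int, int]:
    """Parse a 206 response's 'Content-Range: bytes first-last/total' header."""
    m = re.fullmatch(r"bytes (\d+)-(\d+)/(\d+)", value)
    if not m:
        raise ValueError(f"unsupported Content-Range: {value!r}")
    first, last, total = (int(g) for g in m.groups())
    return first, last, total
```

A 206 Partial Content response then carries a `Content-Range` confirming what the server actually sent, which the client should verify before appending to the local file.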
Seems like over-design. The list will get very long over time.
Just use a single integer instead and have the header include the min and max supported versions. E.g.
Tus-Version: 1-4
meaning it supports versions 1 through 4. There's no reason to be able to say you support versions 1 and 4 but not 2 and 3.
> An origin server that allows PUT on a given target resource MUST send
> a 400 (Bad Request) response to a PUT request that contains a
> Content-Range header field (Section 4.2 of [RFC7233]),
https://tools.ietf.org/html/rfc7231#section-4.3.4

Does it use anything fancy like fountain codes, or does it just renegotiate chunks each time, or something else?
1. The client POSTs; this allocates a unique Location, which the server returns, and
2. the client saves this (e.g. in localStorage) along with local file identifiers so it can be looked up later, and can
3. query that URL to check how many bytes were already received, and then
4. PATCH the remaining bytes.
Repeat steps 3 & 4 on failures/resumes.
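To sketch step 4 as a pure helper: given the offset the server reported (in tus 1.0 the step-3 check is a HEAD request whose response carries `Upload-Offset`), build the PATCH headers and the remaining body. The header names below are from the tus 1.0 spec; the function itself is just an illustration:

```python
def build_patch(file_bytes: bytes, server_offset: int) -> tuple[dict, bytes]:
    """Given the offset the server says it has (step 3), produce the
    headers and body for the PATCH that sends the rest (step 4)."""
    if server_offset > len(file_bytes):
        raise ValueError("server claims more bytes than we have locally")
    headers = {
        "Tus-Resumable": "1.0.0",
        "Upload-Offset": str(server_offset),
        "Content-Type": "application/offset+octet-stream",
    }
    return headers, file_bytes[server_offset:]
```

On a resume, only the suffix past `server_offset` crosses the wire, which is the whole point of the protocol.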
Yes I know this is mainly for browsers.
Tus is also used in datacenters for high-throughput and reliable transmission. In most cases rsync is probably a sensible choice, but sometimes you may already have tus, HTTP-based auth, load balancing, etc. in place that you want to leverage, or maybe you want to avoid exchanging SSH secrets.