Innovation in this sector doesn't mean nicer fonts; there are real avenues for meaningful change that actually improve users' lives
- torrents work if they are shared more than they are leeched, so it's just easier to have it run in the background. There's very little reason not to use existing client/server models like Transmission or tTorrent
- indexing and search still rely too much on third parties. Integrate magnetico (passive indexing), BEP 33 (DHT-based scraping) and BEP 51 (DHT-based indexing), and users gain an order of magnitude more autonomy because they no longer rely on centralized authorities
- is there a possibility to go further? Make sharing files easier with BitTorrent, or something like that? BitTorrent clients should help with that
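For context on what the DHT-based indexing in BEP 51 looks like on the wire: a client sends a `sample_infohashes` query to a DHT node, which replies with a sample of infohashes it has seen announced. A minimal sketch of building that query with a hand-rolled bencoder (the node and target IDs here are random placeholders, not real DHT peers):

```python
import os

def bencode(obj):
    # Minimal bencoder (BEP 3) for the int/bytes/dict subset DHT messages use.
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, dict):
        # BEP 3 requires dictionary keys sorted as raw byte strings.
        items = sorted(obj.items())
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(f"cannot bencode {type(obj)}")

def sample_infohashes_query(node_id: bytes, target: bytes, tid: bytes = b"aa") -> bytes:
    # BEP 51 'sample_infohashes' query: asks a DHT node for a sample of
    # the infohashes announced to it, enabling passive index building.
    assert len(node_id) == 20 and len(target) == 20
    return bencode({
        b"t": tid,                 # transaction id
        b"y": b"q",                # message type: query
        b"q": b"sample_infohashes",
        b"a": {b"id": node_id, b"target": target},
    })

msg = sample_infohashes_query(os.urandom(20), os.urandom(20))
print(len(msg))  # a small bencoded dict, ready to send over UDP
```

The response (not shown) carries a `samples` field of concatenated 20-byte infohashes plus a `num` estimate; tools like magnetico crawl the DHT with queries like this instead of scraping websites.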
I don't think decentralized indexing is a solvable problem. There's too much crap, and too little in the way of useful signal to tell it apart.
What would be really useful, though, would be a way to separate indexing from the actual torrents.
Let me give you an example: say you're a member of a private tracker, and you see some release that looks nice. You click it in your torrent client. It then tries to find the same torrent on a public tracker (e.g. by hash), and downloads from and seeds to both.
If a system like this existed, then you could use private trackers just for the indexing, and then the computers could be left to do the actual work of finding the data.
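The cross-tracker lookup described above works because the infohash identifies the torrent data itself, not any particular tracker. A hedged sketch of the core primitive: turning a known infohash into a magnet URI (BEP 9) that any client can resolve via the DHT. The tracker URL is purely illustrative:

```python
from urllib.parse import quote

def magnet_from_infohash(infohash_hex: str, name: str = "", trackers=()) -> str:
    # A magnet URI identifies a torrent by infohash alone, so the same
    # link works whether the torrent was found on a private or public index.
    uri = "magnet:?xt=urn:btih:" + infohash_hex.lower()
    if name:
        uri += "&dn=" + quote(name)
    for tracker in trackers:
        uri += "&tr=" + quote(tracker, safe="")
    return uri

link = magnet_from_infohash(
    "c12fe1c06bba254a9dc9f519b335aa7c1367a88a",  # placeholder infohash
    name="example release",
    trackers=["udp://tracker.example.org:1337/announce"],  # hypothetical tracker
)
print(link)
```

A client implementing the idea in the parent comment would take the infohash from the private tracker's .torrent, emit a magnet like this, and let DHT/PEX discover additional public peers for the same data.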
Also, as a nice side bonus, this would allow for the "long tail" style of seeding that the more old-fashioned protocols have. If you're not seeding a whole torrent, but just a series of single files, nothing prevents you from opening up e.g. the 'Downloads' folder to the public and letting people download any file for which they can provide the hash. This would make it much easier to find seeds, since the limit usually isn't bandwidth but storage.
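The "serve any file whose hash you know" idea boils down to a hash-to-path index over a folder. A minimal sketch, using plain SHA-1 per file for illustration (BitTorrent v2, BEP 52, would instead key files by their per-file merkle roots):

```python
import hashlib
import os

def index_folder(root: str) -> dict:
    # Walk a folder and build a SHA-1 -> path map, so a peer who presents
    # a file hash can be served that file without seeding whole torrents.
    index = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            index[digest.hexdigest()] = path
    return index
```

A daemon built on this would answer "do you have hash X?" with a simple dictionary lookup, which is why storage, not lookup cost, becomes the limiting factor the comment mentions.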
I've been using decentralized indexing for more than a year and I haven't needed to go back to ad- and malware-ridden search sites. I find the number of seeders a good enough indicator of quality, but of course that only works for popular content.
EDIT: thanks for formatting, I always forget this double linefeed rule
Edit: slight rewording
Suggest you remove the examples of downloading copyrighted content and the yify website from your readme.
I suppose we could make Nerd Fonts opt-in via the config, but I don't think it's that unusual an ask. See: https://starship.rs
The images in the README have been updated.
Nerd Font is pretty cool, I'd give it a go - I don't know how terminals fall back gracefully if the glyphs don't exist.
1: https://github.com/smmr-software/mabel/blob/main/desert.png
OP: does this support version 2 of the spec?
I recently learned that Transmission got hacked (more than once, apparently), so I decided to stop using it. The alternatives seem to be Deluge or qBittorrent. I picked the latter because it has labels (and supports moving finished downloads to different folders depending on label), which is a feature I'd always wanted in a torrent client. But my point is that all the torrent clients seem very similar, barring very minor features.
Anyway, what do you people use and why?
i use it because of drm.
You can probably debate endlessly piracy, ethics, "backups of media that have been purchased but have rotted away due to disc rot", etc. But fundamentally torrents are just a mechanism for delivery of large files or large sets of files, and they work extremely well for things in the tens-of-gigabytes range.
Yet the overwhelming use by far is just to pirate vids.
I don't think it's anything to be sad about; people use it because it works. It would be much sadder if it weren't used at all.
That's pretty much all I use it for. Great for grabbing the latest shiny Linux iso image.
Wish more stuff used it.
it's durable (as long as the process is still running, it will keep sending on reconnection) and more private