Since IPFS is a global swarm (or, in principle, "interplanetary"), your download will fetch the files I'm seeding. Indeed, you can fetch chunks that come from different files, if those files happen to share some content (e.g. if a Fedora release patches the end of a script, you can still fetch the unmodified initial part from seeders of the original).
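This chunk-level sharing can be sketched with a toy version of content-addressed chunking. This is not IPFS's actual chunker (IPFS defaults to 256 KiB chunks and builds a Merkle DAG over them); the tiny fixed-size chunks and bare SHA-256 hashes here are just to make the deduplication visible:

```python
import hashlib

CHUNK_SIZE = 4  # absurdly small, purely for illustration; IPFS defaults to 256 KiB


def chunk_hashes(data: bytes) -> list:
    """Split data into fixed-size chunks and hash each one."""
    return [hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
            for i in range(0, len(data), CHUNK_SIZE)]


original = b"#!/bin/sh\necho hello\n"
patched = b"#!/bin/sh\necho hello, patched\n"

a = chunk_hashes(original)
b = chunk_hashes(patched)

# The unmodified prefix chunks hash identically, so a downloader of the
# patched file can still fetch those chunks from seeders of the original.
shared = sum(1 for x, y in zip(a, b) if x == y)
print(shared, "of", len(a), "chunks shared")  # → 5 of 6 chunks shared
```

Note that with fixed-size chunks, only the portion before the first edit stays aligned; that is exactly the "patched the end of a script" case above.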
Since we're not artificially partitioning/siloing our data, two people who happen to share the same file will end up generating the same URL: fetching that URL can get chunks from either. This is nice since it avoids any need to coordinate, or even know of each other's existence: we can just share whatever we like, and the network will ensure downloaders find seeders. This can even happen across time: if some files lose all their seeders then their URLs will stop resolving; but if someone later happens to share the same files, then those same URLs will start resolving again.
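The reason no coordination is needed is that the address is a pure function of the content. The following sketch fakes this with a plain SHA-256 digest; real IPFS CIDs wrap a multihash plus codec metadata (and the `Qm-demo-` prefix here is made up), but the principle is the same:

```python
import hashlib


def fake_cid(data: bytes) -> str:
    """Toy content address: derived only from the bytes themselves.
    Real IPFS CIDs encode a multihash plus codec info, but are likewise
    a deterministic function of the content."""
    return "Qm-demo-" + hashlib.sha256(data).hexdigest()[:16]


# Two people, unaware of each other, happen to share the same file...
alice_copy = b"The quick brown fox\n"
bob_copy = b"The quick brown fox\n"

# ...and independently derive the same address for it, so fetchers of
# that address can be served by either of them.
print(fake_cid(alice_copy) == fake_cid(bob_copy))  # → True
```

The same determinism explains the across-time behaviour: re-sharing identical bytes later regenerates the identical address, so dead URLs come back to life.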
This makes it easy to host Web sites without needing a reliable machine or connection (just seed them from a few machines; as long as one's up, it should work); it also lets us refer to URLs that are host-agnostic, and which we can even seed ourselves (rather than e.g. npm.org URLs, which may be deleted, as "left-pad" was; or github.com URLs, which may disappear, e.g. when projects jumped ship after Microsoft bought the site).