That decision-making is quite widely distributed: email service providers make their own realtime or near-realtime determinations at both the individual-message and network-provider levels (e.g., IP- or netblock-based quality determinations), and also rely on third-party reputation services such as Spamhaus and SenderBase / IronPort (long since part of Cisco's Talos).
Increasingly, even general Web traffic is subject to similar decision-making, as with reCAPTCHA, Cloudflare, and other services.
Individual decision-making simply does not scale to billions (or, under IPv6, vastly more) relationships.
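To make the IP-reputation mechanism above concrete, here is a minimal sketch of how a DNSBL (DNS blocklist) lookup works, the query style Spamhaus supports. The zone name `zen.spamhaus.org` is Spamhaus's real combined zone, but the helper function names are my own, and a production mail server would use a dedicated resolver library rather than this bare-bones approach.

```python
import ipaddress
import socket

# Spamhaus's combined blocklist zone (an assumption that this is the zone
# you'd query; other DNSBLs use the same reversed-octet convention).
ZONE = "zen.spamhaus.org"

def dnsbl_query_name(ip: str, zone: str = ZONE) -> str:
    """Build the DNS name to query: reverse the IPv4 octets, append the zone."""
    ipaddress.IPv4Address(ip)  # raises if not a valid IPv4 address
    reversed_octets = ".".join(reversed(ip.split(".")))
    return f"{reversed_octets}.{zone}"

def is_listed(ip: str, zone: str = ZONE) -> bool:
    """Return True if the IP appears on the blocklist (makes a network call)."""
    try:
        answer = socket.gethostbyname(dnsbl_query_name(ip, zone))
        # DNSBLs answer listed queries with 127.0.0.x codes encoding the
        # listing reason; an NXDOMAIN response means "not listed".
        return answer.startswith("127.")
    except socket.gaierror:
        return False
```

A mail server can run this check per connection before accepting a message, which is what lets the decision scale: one cached DNS lookup replaces any human judgment about the sender.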
There's a give-and-take of blocking practices with Fediverse instances. I'm on a smaller instance maintained by someone I've known online for a decade or so, and who is highly principled in their decisions, though some do rub a bit raw on me. I've brought this up, and may yet decamp to another instance (or spin my own), though I'll also note that blocking fuckwits is a highly effective s/n preservation strategy. (The concept is highlighted in my Fediverse profile as a pinned post: <https://toot.cat/@dredmorbius/104371585950783019>).
And I've been online for going on 40 years. Many of the naive presumptions of kumbaya and universal brotherhood have proved grossly misdirected. I'd once subscribed to many of them. I've grown up (or old).
Email spam is a problem because it directly attacks the utility and value of the communications channel, driving people to other alternatives (or none at all in some cases). Similar issues exist with telephony abuses (robocalls, scam calls, spoofing, privacy invasion and surveillance, etc.).
In the case of group discussion / social / microblogging platforms, a key dynamic is the Nazi bar problem (let one in and you're now running a Nazi bar), and the race-to-the-bottom dynamic of various forms of harassment and intimidation: those voices which don't feel safe talking on a platform or channel won't talk on that channel. They're denied a platform, and the platform is denied their voice.
(The Fediverse is actually under fairly sustained criticism by those voices for not having sufficient tools, policies, and/or enforcement.)
For commercial, advertising-supported platforms, an additional consideration is advertisers' sensibilities, and the fact that high-value advertising, brand-safe content, and attractive advertising audiences are all factors which are dependent in large part on content moderation policies. This doesn't apply generally to the Fediverse (though individual ad-supported instances might appear within it, as with Threads). It does strongly apply to Twitter and Facebook's properties generally, however.
There's also the observation that clue flees stupidity and/or banality. The more a channel is taken over by any low-signal content (whether that's overtly abusive or not), the less that intelligent and substantive contributors will care to engage with that channel.
That again is a dynamic I've observed for many decades now online, and am coming to appreciate has a long prior offline history before that.
And also, again, these are all cases where systemic abuse requires systemic response. Your initial comment is not only naive but demonstrably infeasible. It's been tried, repeatedly, and it simply does not work.
The fact that we're having this discussion on a forum in which there are, in fact, system-level controls over what does and does not appear, and no individual user tools to accomplish the same (bar hiding specific stories), somewhat underlines my point.