This model makes "bad citizens" a nightmare because they are not isolated bits that fail individually but things that fail inside the system. This might lead to slow development of revolutionary things and very quick development of incremental evolution, which is probably very good.
Unfortunately this model means end-user programming: the power of computing in users' hands, not a dumb human operating, like a monkey, an endpoint of a remote service in a mainframe-like model. That is something most IT giants really dislike, because it could easily erase their business...
The problem you describe is solved by introducing capabilities, as in seL4: without first holding a capability, a component cannot act on anything else.
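To make the idea concrete, here is a minimal object-capability sketch in Python, purely illustrative and not seL4's actual API: a component can only act on resources whose capability it was explicitly handed, and has no ambient way to name anything else.

```python
class ReadCap:
    """An unforgeable read-only handle to exactly one key of a store.

    Holding this object IS the permission; there is no global lookup
    a component could use to reach other keys."""

    def __init__(self, store, key):
        self._store, self._key = store, key

    def read(self):
        return self._store[self._key]


def shout(cap):
    # This component receives only a capability; it cannot enumerate
    # or touch any other entry in the store on its own.
    return cap.read().upper()


store = {"greeting": "hello", "secret": "s3cret"}
cap = ReadCap(store, "greeting")   # grant access to one key only
```

Here `shout(cap)` returns `"HELLO"`, while the `"secret"` entry stays unreachable from inside `shout` unless someone mints and passes a capability for it.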
About security: yes, the classic model means full trust, which in the modern world is a big issue, but not really that big, because it also means FLOSS (even if back at Xerox that was not advertised at all, and their target was commercial customers). FLOSS at scale leaves little room for hostile actors, since the code is born and evolves under many eyes. While it is perfectly possible to inject malicious code (as the recent XZ attack showed very well), it is hard to keep it unseen for long, and even harder at scale, since such ecosystems tend to be far less uniform/consistent than modern commercial OSes, which are almost the official ISO with marginal changes per host.
About capacity: on my personal EXWM/Emacs desktop I can link an email (notmuch-managed) in a note (org-mode), and while composing an email I can preview a LaTeX fragment inside it, or solve an ODE, simply because the system I'm in offers such functionality without tying it to a specific UI, a set of custom APIs, and limited IPC (essentially just drag-and-drop and cut-and-paste, since Unix pipes, redirections, etc. are not available in GUIs). In eshell I can also use a different kind of IPC, like redirection to buffers, de facto creating a 2D CLI, which happens to be a GUI, a DocUI. Long story short: we can build modern software with classic ecosystems. It is definitely time-consuming, but doable, and keeping up the current Babel tower of pseudo-isolated bits is no less time-consuming.
I call this phenomenon a cultural clash. The modern model is the ignorant model, where anyone can step in, like Ford assembly-line workers only able to do a 1/4 turn of a key, but where doing anything is a continuous struggle and anything learned is short-lived. The classic model is the cultural model: stepping in takes long and demands effort, but anything learned is an investment for life, and piled-up knowledge pays back all the time, making everything easier, or at least far less hard than in the ignorant model... Now, take a look at our society: we have schools, meaning years-long periods of learning before being active in society, learning things that theoretically will be valid and useful for a lifetime. So, if in society we follow the acculturated model, why not do the same in the nervous system of our society, which is IT?
It's interesting to think of how this sort of "neighborhood watch" could be incentivized, since it's probably way too big of a task for purely volunteer work. It's tricky though because any incentive to remove dependencies would automatically be a perverse incentive to ADD dependencies (so that you can later remove them and get the credit for it).
I've thought a little about, for example, building something that could slice just the needed utility functions out of a Shell utility library. (Not really for minimizing the dependency graph--just for reducing the source-time overhead of parsing a large utility library that you only want a few functions from.)
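As a rough sketch of that slicing idea, the extractor below pulls one `name() { ... }` function out of a shell library by line scanning. It assumes the common formatting convention where the body's closing brace sits alone in column zero; anything fancier (nested braces in column zero, here-docs, subshell bodies) would need a real shell parser. The `library` string is a made-up example, not any particular utility library.

```python
import re


def slice_function(source: str, name: str) -> str:
    """Extract one `name() { ... }` function from shell source text.

    Assumes the closing brace of the body sits alone at column zero,
    as most utility libraries format it; this is a heuristic slicer,
    not a shell parser."""
    out, inside = [], False
    for line in source.splitlines():
        # Match the function header, e.g. `die() {`
        if re.match(rf"^{re.escape(name)}\s*\(\)\s*{{", line):
            inside = True
        if inside:
            out.append(line)
            if line.rstrip() == "}":   # end of the function body
                break
    return "\n".join(out)


# Hypothetical utility library to slice from:
library = """\
log() {
  echo "$@" >&2
}
die() {
  log "fatal: $@"
  exit 1
}
"""
```

For example, `slice_function(library, "die")` yields just the `die` definition, without `log`'s body, which is exactly the "only the functions you want" source you could then feed to a smaller sourced file. (Note that it does not chase callees: `die` still calls `log`, so transitive slicing would need another pass.)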
Would obviously need a lot of toolchain work to really operationalize broadly.
I can at least imagine the first few steps of how I might be able to build a Nix expression that, say, depends on the source of some other library and runs a few tools to find and extract a specific function and the other (manually-identified) bits of source necessary to build a ~library with just the one function, and then let the primary project depend on that. It smells like a fair bit of work, but not so much that I wouldn't try doing it if the complexity/stability of the dependency graph was causing me trouble?
That said, I hesitate to put too much emphasis on this. I largely suspect that we have such granular dependency graphs because we can. There is virtually no harm in having such large graphs of things. Not that it helps, of course, but what is the harm?
Nor are you lacking much resilience here. If you know what you want your website to be, you can probably recreate it faster than you'd suspect. Especially if you can target more modern browsers.