Caching dependencies is very different from general reproducibility. Committing node_modules guarantees the app works even if the npm registry were to implode. Try to deploy your deno thing from a cold state (e.g. maybe you're moving to a different AWS region or a different provider or whatever) during a deno.land outage and it will blow up. I'm actually curious what this caching story looks like for large cloud fleet deployments. Hopefully you don't have every single machine individually and simultaneously trying to warm up its own cache by calling out to domains on the internet, because that's a recipe for network flakeouts. At least with something like Yarn PnP, you can control exactly how dep caches get shuttled in and out of tightly controlled storage systems in e.g. a cloud CI/CD setup using AWS spot instances to save money.
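For what it's worth, you can sort of approximate the Yarn PnP story by warming the cache once in CI and shipping it as an artifact, so deploy targets never call out to deno.land at cold start. A rough sketch (the entrypoint `main.ts` and the artifact paths are made-up placeholders):

```shell
# Point DENO_DIR at a directory we control instead of the default cache.
export DENO_DIR="$(pwd)/.deno-cache"

# Resolve and download every remote dependency into $DENO_DIR.
deno cache main.ts

# Bundle the warmed cache into an artifact CI can push to
# controlled storage (S3, an internal artifact store, etc.).
tar -czf deno-cache.tar.gz -C "$(pwd)" .deno-cache

# On a deploy target: unpack, then run fully offline.
# --cached-only makes deno fail fast instead of reaching for the network.
#   tar -xzf deno-cache.tar.gz
#   DENO_DIR="$(pwd)/.deno-cache" deno run --cached-only main.ts
```

There's also `deno vendor`, which checks dependencies into the repo itself, which is basically the committing-node_modules approach. But either way it's something you have to wire up yourself, not something the runtime does for you.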
These deno discussions frankly feel like they're trying too hard to justify themselves. It's always like, hey look, TypeScript out of the box. Um, sure, CRA does that too, and it does HMR out of the box to boot. But so what? There's a bunch of streamlined devexp setups out there, from Svelte to Next.js to vite-* boilerplates. To me, deno is just another option in that sea of streamlined DX options, but it isn't (yet) compatible with much of the larger JS ecosystem. </two-cents>