I presume you meant "verify" in the last sentence.
So I completely agree with you that throwing away classical knowledge won't go very far, which is why that's not what we're doing. We're using neural networks within and on top of classical methods to try to solve problems where those methods have not traditionally performed well, or to cover epistemic uncertainty arising from model misspecification.
I think it would be a good topic for a blog post or teaching paper that shows how to do this "end-to-end" for very simple problems (e.g. the advection equation, diffusion equation, advection-diffusion equation, Burgers' equation, Poisson equation, etc.).
I see the appeal in showing that these can be used for very complex problems, but what I want to understand is the trade-offs for the most basic hyperbolic, parabolic, and elliptic one-dimensional problems. What's the accuracy? What's the order of convergence in practice? Are there tight upper bounds (and does that even matter)? What's the performance, and how does it scale with the number of degrees of freedom? What does a good training pipeline look like, and what are the costs of training and inference?
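For concreteness, here's a minimal sketch of the kind of empirical convergence study I mean, run against the classical baseline: a standard second-order finite-difference solve of the 1D Poisson equation with a manufactured solution. The choice of u(x) = sin(πx) and the grid sizes are illustrative assumptions on my part; the same harness is what you'd point at an NN-based solver to get a like-for-like comparison.

```python
# Empirical convergence study for a classical second-order
# finite-difference solve of -u''(x) = f(x) on [0, 1], u(0) = u(1) = 0.
# Manufactured solution u(x) = sin(pi x), so f(x) = pi^2 sin(pi x).
import numpy as np

def solve_poisson_1d(n):
    """Solve -u'' = pi^2 sin(pi x) with n interior grid points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)           # interior points only
    f = np.pi**2 * np.sin(np.pi * x)         # forcing for u = sin(pi x)
    # Standard tridiagonal stencil [-1, 2, -1] / h^2
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f)

def max_error(n):
    x, u = solve_poisson_1d(n)
    return np.max(np.abs(u - np.sin(np.pi * x)))

# n chosen so that h exactly halves each time: h = 1/32, 1/64, 1/128.
# For a second-order method, halving h should divide the error by ~4.
errors = [max_error(n) for n in (31, 63, 127)]
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print(errors)   # errors shrink under refinement
print(orders)   # empirical order, close to 2.0
```

The same loop — solve at successive refinements, log-ratio the errors — answers "what's the order of convergence in practice" for any solver you drop into `solve_poisson_1d`, including a learned one, and timing each solve gives the cost-vs-degrees-of-freedom scaling for free.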
There are well-understood methods that are optimal for all of the problems above. Knowing that you can apply these NNs to problems without optimal methods is good, but I'd be more convinced that this is not just "NN-all-the-things" hype if I understood how these methods fare against problems for which optimal methods are indeed available.