And of course Python has a memory model. While that model is not as well understood as C's, it is key to Python's success and popularity both as a general-purpose programming language and as a numeric/scientific one. Python's memory model, unlike C's or Fortran's, isn't designed for high performance. It's designed for rich abstractions, high ergonomics, and high interoperability with those performant languages. For most people, the processing time lost executing Python code is an acceptable tradeoff for the highly expressive control that Python gives them over the scheduling of lower-level operations.
This does not mean Fortran is bad (obligatory disclaimer for Fortran fans).
One of the best improvements to Python's memory model was extending it to allow sensible access to blobs of memory containing regular structures such as vectors and matrices. That made it relatively easy to interface with, and steer, well-tried, highly optimised algorithms that were already available in lower-level languages.
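Presumably this refers to the buffer protocol (PEP 3118) and `memoryview`, which let pure Python look at a flat blob of memory as a typed, shaped structure without copying it. A minimal sketch using only the stdlib `array` module (the variable names here are illustrative, not from the original text):

```python
import array

# A contiguous blob of six doubles, laid out the way C or Fortran would.
buf = array.array('d', [1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

# memoryview exposes the raw memory via the buffer protocol, zero-copy.
# cast() requires going through a byte view before reshaping.
matrix = memoryview(buf).cast('B').cast('d', shape=[2, 3])

# The flat buffer can now be indexed as a 2x3 matrix.
print(matrix[1, 2])   # 6.0

# Writes through the view hit the original memory.
matrix[0, 0] = 42.0
print(buf[0])         # 42.0
```

This is the same mechanism NumPy arrays use to hand their memory to (and accept memory from) C and Fortran libraries without copies.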
There is an interesting writeup by dvyukov that shows the potential cost of "thin veneers" :) https://github.com/dvyukov/perf-load