I agree with your entire post
but, and I'm saying this as a full-time Python dev, there's often a point where it starts being bothersome, and that point usually only comes later in the lifecycle of an application, after some organic growth. One day a sales manager comes down to your lair and asks whether you couldn't just also parse this little 200 MB Excel spreadsheet once it arrives over the network, so your ETL process can save it into a new table. And boom, you're in CPU-bound land now. Often it's fine; you can wait those 1-2 minutes for a daily occurring process. But what if, for example, you put this whole component behind a REST API that sits behind a load balancer with a fixed timeout? There are even hard upper limits if you chose, say, AWS Lambda for your stuff (15 minutes per invocation).
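The usual escape hatch is to decouple the request from the work: answer immediately with a job id and run the parse in the background. A minimal sketch of that pattern; all names here (parse_spreadsheet, submit, status) are made up for illustration, and it uses ThreadPoolExecutor just to stay self-contained, where for genuinely CPU-bound Python you'd swap in ProcessPoolExecutor to get around the GIL:

```python
from concurrent.futures import ThreadPoolExecutor
import uuid

# ThreadPoolExecutor keeps the sketch self-contained; for truly CPU-bound
# Python work you'd use ProcessPoolExecutor to sidestep the GIL.
executor = ThreadPoolExecutor(max_workers=2)
jobs = {}  # job_id -> Future

def parse_spreadsheet(path):
    # stand-in for the real 1-2 minute CPU-bound Excel parse
    return f"parsed {path}"

def submit(path):
    # what the HTTP handler would do: return a 202 plus a job id right away,
    # never holding the connection open for the whole parse
    job_id = str(uuid.uuid4())
    jobs[job_id] = executor.submit(parse_spreadsheet, path)
    return job_id

def status(job_id):
    # polled by the client (or replaced by a callback/webhook in a real setup)
    fut = jobs[job_id]
    return fut.result() if fut.done() else "pending"
```

Which is exactly the kind of thing that's trivial to build but non-trivial to justify upstairs.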
And suddenly you need to introduce quite a bit more technical complexity into the story, which is gonna be hard to explain to management - all they see is that you can now insert a couple million DB rows, and their Big Data consultants[TM] told them that's not even worth thinking about these days.
Point being: If your performance ceiling is low, you're gonna hit it sooner.