I once owned a small business server with a Xeon processor, running Linux. Just for kicks I wrote a C program that would loop over many thousands of files, read their contents, sort them in memory, and dump the result into a single output file.
I ran the program, and the moment I ran it, it was done. I kept upping the scope and load, but it seemed I could throw anything at it and the response time was zero, or at least perceived as zero.
Meanwhile, it's 2022 and we can't even have a text editor place a character on screen without noticeable lag.
Shit performance is even ingrained in our culture. If your web shop's "submit order" button instantly responded with "thanks for your order" when clicked, people would call you. They'd wonder if the order actually got through.
Or the always-fun "profile it!" or "the runtime will optimize it" responses when discussing new language features and systems.
So often performance isn't just ignored, it's actively preached against. Don't question how that new runtime feature performs today or even dare to ask. No no no, go all in on whatever and hope the JIT fairy is real and fixes it. Even though it never is and never does.
There's a place for all the current tech, of course. Developer productivity can be more important at times. But the tradeoffs should be far better known, and there should be far more rough optimization guides than there are.
Take my simple example of reading a file, processing it in memory, writing output. A process that should be instant in almost any case.
A commonly used implementation of such a process in the front-end world is CSS compilation, where an SCSS file (which is 90% CSS) is compiled into normal CSS output. The computation is pretty simple: it's all in-memory, mostly some reshuffling of values.
In terms of what is actually happening (if we take the shortest path to solve the problem), this process should be instant. Not only that, it could probably handle a thousand such files per second.
Instead, just a handful of files takes multiple seconds, possibly a thousand times slower than the shortest path. That's because the process is a node package with dependencies 17 layers deep, running an interpreted language. Worse, the main package requires a Ruby runtime (no longer true for this example, but it was), which then loads a gem and is then finally ready to discover alien life, or...do simple string manipulation.
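For a sense of what the shortest path looks like, the core of that kind of compile step is in-memory string rewriting. A toy single-variable version (my own illustration; a real Sass compiler parses properly and handles far more) could be:

```c
#include <string.h>

/* Toy illustration: expand one known variable (e.g. "$accent") in an
   SCSS-like input string, writing the result to out. This is the kind
   of in-memory rewriting at the heart of such a compile step. */
static void expand_var(const char *in, const char *name,
                       const char *value, char *out, size_t outsz)
{
    size_t used = 0;
    size_t nlen = strlen(name);

    while (*in && used + 1 < outsz) {
        if (strncmp(in, name, nlen) == 0) {
            /* Matched the variable: copy its value instead. */
            size_t vlen = strlen(value);
            if (used + vlen >= outsz)
                break;
            memcpy(out + used, value, vlen);
            used += vlen;
            in += nlen;
        } else {
            out[used++] = *in++;
        }
    }
    out[used] = '\0';
}
```

Calling `expand_var("a { color: $accent; }", "$accent", "#ff0000", buf, sizeof buf)` produces `a { color: #ff0000; }` in well under a microsecond; run over whole files, this kind of work is nowhere near a multi-second job.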
To appreciate the absurdity of this level of waste, I'd compare it to bringing the full force of the US army in order to kill a mosquito.
It's in end-user apps too, and spreading. Desktop apps like Spotify, parts of Photoshop, parts of Office 365...all rewritten in Electron, React, etc.
I can understand the perspective of the lone developer needing productivity. What I cannot understand is that the core layers are so poor. It means millions of developers are building millions of apps on this poor foundation. It's waste on a planetary scale.
As an industry we're not qualified to even start caring about performance when our record on correctness is so abysmal. If you have a bug then your worst-case runtime is infinity, and so far almost all nontrivial programs have bugs.
Similarly, "profile it!" is often the answer given when the person answering doesn't actually know themselves, and is just shutting down the discussion without meaningfully contributing. It also offers no commentary on why something performs the way it does, or whether the cost is reasonable.
I agree it would be nice to value performance a bit more, but not at all costs, and, depending on the use case and context of the application, not necessarily as a priority over security, maintainability, velocity, reliability, etc.
That's what's preached against in theory. But in practice, any performance discussion is immediately met with that answer. The standing recommendation is to build fully ignorant of all things performance, then hope you can somehow profile and fix it later. But you probably can't, because your architecture and APIs are now fundamentally wrong. Or you've become so pervasively infested with slow patterns that you can't reasonably dig out of them after the fact. Like, say, if you went all in on Java Streams and stopped using for loops entirely, which is something I've seen more than a few times. Or if you actually listened to all the lint warnings yelling at you to use List<T> everywhere instead of the concrete type. That pattern doesn't meaningfully improve flexibility, but it does cost you performance everywhere.
No, I can tell you this same record has been stuck on repeat since at least the mid-1990s. People want to shut down conversations or assign homework because it gets them out of having to think. Not because they're stupid (though occasionally...), but because you're harshing their buzz, taking away from something that's fun to think about.