Right, exactly. It's obvious, too, that software bloat has scaled faster than hardware, in the sense that an equivalent task — say, booting to a usable state — takes orders of magnitude longer today than it used to, despite hardware that's also orders of magnitude faster.
So when I see a demo of ported software doing something computers could already do back in the 90s (just slowly), I'm really only impressed by the massive towers of abstraction we're building on these days; what we can actually accomplish isn't all that much better. To think that I'm sitting at a machine capable of billions of instructions per second, and watching it perform like one doing millions, is frankly depressing.
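You can see the cost of those abstraction towers even in a toy case. A hypothetical sketch in Python: the same computation done directly versus paid for through a few gratuitous wrapper layers (all names here are made up for illustration).

```python
# Illustrative only: measure the overhead of stacking needless
# abstraction layers around a trivial computation.
import timeit

def direct(n):
    # The computation done with no indirection at all.
    total = 0
    for i in range(n):
        total += i
    return total

# Four gratuitous wrapper layers around the same addition.
def add(a, b): return a + b
def layer1(a, b): return add(a, b)
def layer2(a, b): return layer1(a, b)
def layer3(a, b): return layer2(a, b)

def layered(n):
    # Identical result, but every addition pays for four extra calls.
    total = 0
    for i in range(n):
        total = layer3(total, i)
    return total

n = 100_000
t_direct = timeit.timeit(lambda: direct(n), number=10)
t_layered = timeit.timeit(lambda: layered(n), number=10)
print(f"direct: {t_direct:.3f}s  layered: {t_layered:.3f}s  "
      f"slowdown: {t_layered / t_direct:.1f}x")
```

Same answer, several times the cycles — and real software stacks pile up far more than four layers.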
All of this is really to make programmers more efficient, because programmer time is expensive (and getting stuff out the door quickly is important), but the time (and money) lost on the users' end, waiting for these monstrosities of abstraction to compute something, must far exceed those savings.
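A back-of-envelope calculation makes the asymmetry plausible. Every number below is an assumption I've made up for illustration, not a measurement:

```python
# Hypothetical figures throughout — adjust to taste.
dev_hours = 500             # extra engineering hours spent optimizing (assumed)
dev_rate = 100              # loaded cost per engineer-hour, dollars (assumed)
dev_cost = dev_hours * dev_rate

users = 100_000             # active users (assumed)
seconds_saved_per_day = 30  # per-user daily waiting eliminated (assumed)
days = 365
user_rate = 30              # value of a user-hour, dollars (assumed)

user_hours_saved = users * seconds_saved_per_day * days / 3600
user_value = user_hours_saved * user_rate

print(f"optimization cost:   ${dev_cost:,.0f}")
print(f"user time reclaimed: {user_hours_saved:,.0f} hours/year")
print(f"value of that time:  ${user_value:,.0f}/year")
```

With these (invented) numbers, shaving 30 seconds a day off 100,000 users' waiting reclaims about 300,000 user-hours a year — orders of magnitude more than the engineering cost, even before counting goodwill.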
I'm actually of the opinion that developers should work on, or at least target, much lower-end machines, to force them to think about speed and memory optimizations. Users will thank them, the products will simply be better, and they'll keep getting better automatically as machines improve.