But even more importantly, with a slightly better solution I don’t get woken up in the night once a week because some buffoon left behind a hundred of these little performance landmines that worked great when the table had ten rows on their dev box but cause an outage in prod the second we hit some critical threshold of data.
This takes all sorts of forms, from people using quadratic time or quadratic memory to my personal favorite: pulling entire database tables across the network to do basic analytics.
The authors always have the same excuse, that “it worked well enough for what was needed at the time”, ignoring the simple fact that the version which wouldn’t have caused an outage would have been just as easy to write from day one.
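To make that last pattern concrete, here is a minimal sketch of both versions side by side. The table and column names are hypothetical, and an in-memory SQLite database stands in for the remote production one; the point is only that the safe version is no harder to write than the landmine.

```python
import sqlite3

# Hypothetical 'events' table standing in for a large production table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany(
    "INSERT INTO events (value) VALUES (?)",
    [(float(i),) for i in range(1000)],
)

# The landmine: pull every row across the network, then aggregate in the
# application. Fine with ten rows on a dev box; with millions of rows it
# saturates the network and the app server's memory.
rows = conn.execute("SELECT value FROM events").fetchall()
total_in_app = sum(v for (v,) in rows)

# Just as easy on day one: let the database do the aggregation and ship
# back a single number instead of the whole table.
(total_in_db,) = conn.execute("SELECT SUM(value) FROM events").fetchone()

print(total_in_app, total_in_db)
```

Both queries are one line each; only the second one still works when the table grows by six orders of magnitude.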