The halting problem bit is a shower thought with no supporting evidence whatsoever, so your complexity-lowering scenario may well be doable. However, paring complexity is a strictly developer-side measure of goodness (assuming, that is, that the low-complexity result is still readable, maintainable...) - we can agree that reducing bugs is also a very good user-side metric, but that tells only a (small) part of the story.
In my experience, developer-side evaluation has a very low impact (I was about to write: zero) on the perceived and actual goodness of the software itself, which is tied mostly to factors such as user experience, fit to the problem it was designed for, and fit to the organization(s) it is going to live in (user experience again). These properties do not strike me as amenable to algorithmic improvement, any more than "pleasant body lines and world class interiors" in the original car analogy. But they are a (big) part of good software design, besides being the 'raison d'être' of the darned thing to begin with.
But let's forget cars, as hard as that is. A few months ago HN ran a story about developing software at Oracle. Now, Oracle may by now be a little soft around the edges, but I think most would agree that it has set the standard for (R)DBMSs for decades. Success may not in itself be the tell-all measure of software goodness, but the number of businesses that have been willing to stake the survival of their data on Oracle is surely a measure of its perceived goodness (as that other elusive factor - hipness - tends not to be paramount in the DBMS business).
The development-side story, taken at face value, was pure horror (https://news.ycombinator.com/item?id=18442941). Everything in it spoke of bad, outdated, rotting design. The place must be teeming with ideas on how to improve just about everything in that environment. And yet if that came to pass, maybe via some nifty edge-pruning algorithm, it would do nothing to improve the goodness-to-the-world measure of the software, not until the internals' improvement translated to observables in the user base's experience. That type of improvement will still require vast amounts of non-algorithmic design and, in the meantime, will run a very concrete risk of deteriorating the overall user experience (because hey, snafus will happen).
This (internals are just a small part of the story) is one of the reasons why so many reimplementations I have seen failed ("hey, let's rewrite this piece of shit and make it awesome") and the reason why everyone resists the move from IPv4 to IPv6. I could think of many more examples.