Congratulations, you discovered that generating code is only part of the software development process. If you don't understand what the code is actually doing, good luck maintaining it. And if it's never reviewed, how do you know these tests even test anything? Because they say "test passed"? I could write you a script that prints "test passed" a billion times - would you believe you have a billion unit tests? If you didn't review them, you don't have tests; you have a pile of code that looks like tests. And "it takes too long to review" is not an excuse - that's like saying "it's too hard to make a car, so I took a cardboard box, wrote FERRARI on it and sat inside making car noises". Fine, but it's not a car. If it's not properly verified, what you have is not tests - it's just pretending.
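To make that concrete, here's a deliberately silly sketch (names and numbers are mine, purely illustrative) of the difference between output that *looks* like testing and a test that can actually fail:

```python
# A "test suite" that only prints success - it asserts nothing,
# so it "passes" no matter how broken the code under test is.
def fake_tests(n: int) -> None:
    for i in range(n):
        print(f"test {i} passed")  # nothing is actually checked

# A real test, by contrast, makes a claim that can fail:
def add(a: int, b: int) -> int:
    return a + b

def test_add() -> None:
    assert add(2, 2) == 4  # fails loudly if add() is wrong

fake_tests(3)   # prints three green-looking lines, verifies nothing
test_add()      # raises AssertionError if the behaviour is wrong
```

The first function could print a billion lines of "passed" over an empty codebase; only the second one tells you anything.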
1. A more biological approach to programming - instead of reviewing every line of code in a self-contained way, the system is viewed holistically: observe its behaviour and test whether it works for the inputs you care about. If it does, great, ship it; if not, fix it. This includes a greater openness to throwing code away or massively rewriting it instead of tinkering with it. The "small, self-contained PRs" culture worked well when coding was harder and humans needed to retain knowledge of all the details. This leads to the next point:
2. Smaller teams and less fungibility-oriented practices. Most software engineering practices are centred around raising the bus factor, speeding up onboarding and decreasing the volatility in programmers' practices. With LLM-assisted programming this changes quite a bit: a smaller, more skilled team can more easily match the output of a larger, more sluggish one, thanks to the reduced communication overhead and the ability to skip the practices that slow development velocity down in favour of actually doing things. A day ago, some good old Arthur Whitney-style C programming was posted to this site (https://news.ycombinator.com/item?id=45800777) and most commenters were horrified. Yes, it's definitely a mouthful on first read, but this style of programming does have value - it's easier to survey and easier to modify than a 10KLOC interpreter spanning 150 separate files, and it's quite token-efficient too. Personally, I'd add some comments, but I see why this style is the way it is.
Same with style guides and whatnot - the value of a code style guide (beyond basic stuff like whitespace formatting or word-wrapping at 160) drops drastically when you don't have to ask people to maintain the same part of the codebase for years. You see this discussion playing out: "the code formatter destroyed my code and made it much less readable" - "don't despair, it was for the greater good, for the sake of codebase consistency!". Again, far less of a concern when you can just tell an LLM to reformat/rename/add comments whenever you want.
I'd definitely say that getting the architecture right is what matters most, and the details can be left to play out organically - unless you're talking about safety-critical software. LLM-written code is "eventually correct", and that is a huge paradigm shift from "I write code and I expect the computer to do what I have written".