“I'm not sure the end result would be lower scoring/smartness than an LLM could achieve in the same duration.” It probably wouldn’t be, with current models. That’s exactly why I said we need smarter models - not more speed.
Unless you want to “use that for iteration, writing more tests and satisfying them, especially if you can pair that with a concurrent test runner to provide feedback” - which I personally don’t.