From what I've seen, you've used tcc for the benchmark, right? If so, that's kinda cheating, since tcc is written to be as simple as possible (and therefore as fast as possible), so the compilation time is essentially zero. Benchmarking with gcc with optimization turned on should give you a more realistic result.
> We have thousands of tests, lots of big projects written in V (V itself is 220k loc, Vinix OS, Ved editor, Gitly, vsql etc), and they all work fine, no compiler errors.
To quote Dijkstra:

> Program testing can be used to show the presence of bugs, but never to show their absence.

It has tradeoffs: for the project developer, the burden of maintaining two significantly different codegen paths; for the project user, the need to trust that these different codegens are largely functionally equivalent (a bug exposed and fixed in a debug build would have been present in release, but now will not be). But the tcc approach is arguably one of the better ways to minimize the negatives in this tradeoff, compared to cranelift, etc., as tcc is a small but popular enough project in general usage, meaning it has had some battle-testing.
And I think the point about projects that "work fine" was quite reasonable, just showing there has been a non-trivial amount of battle-testing. There is now some basis to claim that the project is not so full of trivial bugs as to be useless. Nowhere was it claimed this was a proof of the absence of all bugs -- that you included the Dijkstra quotation in this context is honestly quite humorous.
My understanding is that there has been some drama in the V community in the past, especially around the feature set and release timeline and promises from its author, but I don't see the justification for all this pompous negativity when it is finally out in the wild, warts and all, but showing some nice capabilities at the same time. I don't plan to use it for any hobby projects myself right now, but if I wasn't in the middle of using a different up-and-coming language with some of the same goals, I might.
It would be absurd for someone to write a frontend to LLVM then claim that their compiler is as fast as LLVM. V uses TCC - TCC is fast at compilation, not V. V is fast at transpilation, but that's not what the author has claimed.
> that you included the Dijkstra quotation in this context is honestly quite humorous
Proving compiler correctness with tests only is like proving that your regex parses html correctly with tests. It's never gonna work.
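To make the analogy concrete, here's a toy illustration (my own sketch, not from the thread): a naive regex for extracting tag contents passes a simple test case while being fundamentally unable to handle nesting.

```python
import re

# A naive regex to pull the contents of <div> tags. It passes the
# simple test case but silently mishandles nesting, since regular
# expressions cannot match arbitrarily nested structure.
pattern = re.compile(r"<div>(.*?)</div>")

simple = "<div>hello</div>"
nested = "<div>outer <div>inner</div> tail</div>"

print(pattern.findall(simple))  # ['hello'] -- looks correct
print(pattern.findall(nested))  # ['outer <div>inner'] -- wrong, yet the simple test still passes
```

No finite test suite would redeem the regex here; the approach itself is inadequate, which is the point being made about proving correctness with tests alone.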
The tcc backend is just a smart choice, and in TFA it is mentioned up front:
> V compiles ≈110k (Clang backend) and ≈1 million (x64 and tcc backends) lines of code per second per CPU core. (Intel i5-7500, SM0256L SSD, no optimization)
Thank you for a voice of reason :)
Compilation time is very important during development for the quick dev cycle (change, build, test).
You don't need to do -O2 builds dozens of times per day.
-prod (-O2) builds are definitely an order of magnitude slower, that's a fact.
I also disagree with others that using a different compiler is somehow cheating. Engineering is about winning by choosing the right tradeoffs. Being able to choose a faster backend for debug builds is one that all compilers offer. Most people spend most of their time living with -O0 builds because they need to debug the code. Heck, 90% of complaints about Rust compile times are about debug-build performance, and that's where the dev team spends its effort. This criticism seems misplaced, although it's not clear to me why there's this much pushback that most other languages don't see.
There's a 20%-100% compile-time difference between gcc -O0 and -O2.
To the end user it doesn't matter how the binary was generated. Via LLVM or via a bundled C compiler.
The only thing that matters: getting that binary after compilation and getting it fast.
Writing an interpreter is a huge task. V works well without one. Actually, I'd say it's quicker this way, because interpreters have startup costs.