Pro-LLM coding agents: look! a working compiler built in a few hours by an agent! this is amazing!
Anti-LLM coding agents: it's not a working compiler, though. And it doesn't matter how few hours it took, because it doesn't work. It's useless.
Pro: Sure, but we can get the agent to fix that.
Anti: Can you, though? We've seen that the more complex the code base, the worse the agents do. Fixing complex issues in a compiler seems like something the agents will struggle with. Also, if they could fix it, why haven't they?
Pro: Sure, maybe now, but the next generation will fix it.
Anti: Maybe. While the last few generations have been getting better and better, we're still not seeing them deal with this kind of complexity better.
Pro: Yeah, but look at it! This is amazing! A whole compiler in just a few hours! How many millions of hours were spent getting GCC to this state? It's not fair to compare them like this!
Anti: Anthropic said they made a working compiler that could compile the Linux kernel. GCC is what we normally compile the Linux kernel with. The comparison was invited. It turned out (for whatever reason) that CCC failed to compile the Linux kernel when GCC could. Once again, the hype of AI doesn't match the reality.
Pro: but it's only been a few years since we started using LLMs, and a year or so since agents. This is only the beginning!
Anti: this is all true, and yes, this is interesting. But there are so many other questions around this tech. Let's not rush into it and mess everything up.
The PR author had zero understanding of why their entirely LLM-generated contribution was viewed so suspiciously.
The article validates a significant point: it is one thing to have passing tests and output that resembles correctness; it is something else entirely for that output to be good and maintainable.
>Beats me. AI decided to do so and I didn't question it.
Haha that's comedy gold, and honestly a good interview screening situation - you'd instantly pass on the candidate!
[0] https://github.com/ocaml/ocaml/pull/14369#issuecomment-35565...
People aren't prompting LLMs to write good, maintainable code though. They're assuming that because we've made a collective assumption that good, maintainable code is the goal then it must also be the goal of an LLM too. That isn't true. LLMs don't care about our goals. They are solving problems in a probabilistic way based on the content of their training data, context, and prompting. Presumably if you take all the code in the world and throw it in a mixer, what comes out is not our Platonic ideal of the best possible code, but actually something more like a Lovecraftian horror that happens to get the right output. This is quite positive because it shows that with better prompting+context+training we might actually be able to guide an LLM to know what good and bad looks like (based on the fact that we know). The future is looking great.
However, we also need to be aware that 'good, maintainable code' is often not what we think is the ideal output of a developer. In businesses everywhere the goal is 'whatever works right now, and to hell with maintainability'. When a business is 3 months from failing, spending time to write good code that you can continue to work on in 10 years feels like wasted effort. So really, for most code that's written, it doesn't actually need to be good or maintainable. It just needs to work. And if you look at the code that a lot of businesses are running, it doesn't. LLMs are a step forward in just getting stuff to work in the first place.
If we can move to 'bug free' using AI, at the unit level, then AI is useful. Above individual units of code, things like logic, architecture, and security still have to come from the developer, because AI can't have the context of a complete application yet. When that's ready then we can tackle 'tech debt free', because almost all tech debt lives at that higher level. I don't think we'll get there for a long time.
Adding AI-generated comments is IMHO one of the rudest uses of AI.
I wouldn't call this a fiasco; it reads to me more as: being able to create huge amounts of code - whether the end result works well or not - breaks the traditional model of open source. Small contributions can be verified, and the merit-vs-maintenance-effort can at least be assessed somewhat more realistically.
I have no horse in the "vibe coding sucks" vs "vibe coding rocks" race, and I'm reading that thread as an outsider. I cannot help but find the PR author's attitude absolutely okay while the compiler folks are very defensive. I do agree with them that submitting a huge PR without prior discussion cannot be the way forward. But that's almost orthogonal to the question of whether AI-generated code is or is not of value.
If I were the author, I would probably take my 13k loc proof-of-concept implementation and chop it down into bite-size steps that are easy to digest, and try to get them integrated into the compiler successively, while being totally upfront about what the final goal is. You'd need to be ready to accept criticism and requests for change, but it should not be too hard to have your AI of choice incorporate these into your code base.
I think the main mistake of the author was not the use of vibe coding; it was dreaming up his own personal ideal of a huge feature, and then going ahead and single-handedly implementing the whole thing without involving anyone from the actual compiler project. You cannot blame the maintainers for not being crazy about accepting such a huge blob.
- Ohh look it can [write small function / do a small rocket hop] but it can't [ write a compiler / get to orbit]!
- Ohh look it can [write a toy compiler / get to orbit] but it can't [compile linux / be reusable]
- Ohh look it can [compile linux / get reusable orbital rocket] but it can't [build a compiler that rivals GCC / turn the rockets around fast enough]
- <Denial despite the insane rate of progress>
There's no reason to keep building this compiler just to prove this point. But I bet it would catch up real fast to GCC with a fraction of the resources if it was guided by a few compiler engineers in the loop.
We're going to see a lot of disruption come from AI assisted development.
> - <Denial despite the insane rate of progress>
Sure, but the progress doesn't match what was actually promised. There may also be fundamental limitations to what the current architecture of LLMs can achieve. The vast majority of LLMs are still based on Transformers, which were introduced almost a decade ago. If you look at the history of AI, it wouldn't be the first time that a roadblock stalled progress for decades.
> But I bet it would catch up real fast to GCC with a fraction of the resources if it was guided by a few compiler engineers in the loop.
Okay, so at that point, we would have proved that AI can replicate an existing software project using hundreds of thousands of dollars of computing power and probably millions of dollars in human labour costs from highly skilled domain experts.
Yeah but the speed of progress can never catch the speed of a moving goalpost!
Human crews on Mars is just as far fetched as it ever was. Maybe even farther due to Starlink trying to achieve Kessler syndrome by 2050.
Interesting that people call this "progress" :)
The problem is that it is absolutely indiscernible from the Theranos conversation as well…
If Anthropic stopped lying about the current capabilities of their models (like “it compiles the Linux kernel” here, and it's far from the first time they've done that), maybe neutral people would give them the benefit of the doubt.
For every grifter who happens to succeed at delivering his grandiose promises (Elon), how many grifters will fail?
But "reliable, durable, scalable outcomes in adversarial real-world scenarios" is not convincingly demonstrated in public, the asterisks are load bearing as GPT 5.2 Pro would say.
That game is still on, and AI assist beyond FIM is still premature for safety critical or generally outcome critical applications: i.e. you can do it if it doesn't have to work.
I've got a horse in this race which is formal methods as the methodology and AI assist as the thing that makes it economically viable. My stuff is north of demonstrated in the small and south of proven in the large, it's still a bet.
But I like the stock. The no free lunch thing here is that AI can turn specifications into code if the specification is already so precise that it is code.
The irreducible heavy lift is that someone has to prompt it, and if the input is vibes, the output will be vibes. If the input is rigorous... you've just moved the cost around.
The modern software industry is an expensive exercise in "how do we capture all the value and redirect it from expert computer scientists to some arbitrary financier".
You can't. Not at less than the cost of the experts if the outcomes are non-negotiable.
In 1935 the Auburn 851 S/C Speedster hit 100mph
In 1955 the Mercedes-Benz 300 SL Gullwing did 161mph
In 2025 the Yangwang U9 Xtreme hit 308mph
progress is a decaying exponential - Tsiolkovsky's tyranny
Do we need a c2 wiki page for "sufficiently smart LLM" like we do for https://wiki.c2.com/?SufficientlySmartCompiler ?
This is awkward
Maybe one of those companies will come out on top, and the others produce garbage in comparison. Capital loves a single throat to choke and doesn't gently pluralise. So of course you buy the best service. And it really can generate any code, get it working, bug free. People unlearn coding at this level. And some day, poof, Microsoft comes around, and it's only a tiny problem for it to generate a working Office clone. Or whatever, it's just an example.
This technology will never be used to set anyone free. Never.
The entity that owns the generator owns the effective means of production, even if everyone else can type prompts.
The same technology could, in a different political and economic universe, widen human autonomy. But that universe would need strong commons, enforced interoperability, and a cultural refusal to outsource understanding.
And why is this different from abstractions that came before? There are people out there understanding what compilers are doing. They understand the model from top to bottom. Tools like compilers extended human agency while preserving a path to mastery. AI code generation offers capability while dissolving the ladder behind you.
We are not merely abstracting labor. We are abstracting comprehension itself. And once comprehension becomes optional, it rapidly becomes rare. Once it becomes rare, it becomes political. And once it becomes political, it will not be distributed generously.
I am "pro" in the sense that I believe that LLM's are making traditional programming obsolete. In fact there isn't any doubt in my mind.
However, I am "anti" in the sense that I am not excited or happy about it at all! And I certainly don't encourage anyone to throw money at accelerating that process.
That's a valid take. The problem is that there are, at this time, so many valid takes that it's hard to determine which are more valid/accurate than the other.
FWIW, I think this is more insightful than most of the takes I've seen, which basically amount to "side-1: we're moving to a higher level of abstraction" and "side-2: it's not higher abstraction, just less deterministic codegen".
“It will get better, and then we will use it to make many of you unemployed”
Colour-me-shocked that swathes of this industry might have an issue with that.
How many developers do you think are solving truly novel problems? Most like me are CRUD bunnies.
I'd rather get really good at leveraging AI now than to bury my head in the sand hoping this will go away.
I happen to agree with the saying that AI isn't going to replace people, but people using AI will replace people who don't. So by the time you come back in the future, you might have been replaced already.
Unless you need a correctly compiled Linux kernel. In that case this one gets exhausting real quick.
That's a really nice fictitious conversation, but in my experience "anti-AI" people would be prone to say "This is stupid, LLMs will never be able to write complex code and attempting to do so is futile". If your mind is open to exploring how LLMs will actually write complex software, then by definition you are not "anti".
I think the pro would tell you that if GCC developers could leverage Opus 4.6, they'd be more productive.
The anti would tell you that it doesn't help with productivity, it makes us less versed in the code base.
I think the CCC project was just a demonstration of what Opus can do now autonomously. 99.9% of software projects out there aren't building something as complex as a C compiler that can build Linux.
I personally hope that that happens, but I doubt it will. Note also that processors still continued to improve even without Dennard Scaling due to denser, better optimized onboard caches, better branch prediction, and more parallelism (including at the instruction level), and the broader trend towards SoCs and away from PCB-based systems, among other things. So at least by analogy, it's not impossible that even with that conjectured roadblock, Big AI could still find room for improvement, just at a much slower rate.
But current LLMs are thoroughly compelling, and even just continued incremental improvements will prove massively disruptive to society.
As someone who leans pro in this debate, I don't think I would make that statement. I would say the results are exactly as we expect.
Also, a highly verifiable task like this is well suited to LLMs, and I expect within the next ~2 years AI tools will produce a better compiler than gcc.
That's what always puts me off: when AI replaces artists, SO, and FOSS projects, it can only feed into itself and deteriorate.
Building a "better compiler than gcc" is a matter of cutting-age scientific research, not of being able to write good code
Right.
and the "anti" crowd will point to some exotic architecture where it is worse
Yes it will be far easier than if they did it without AI, but should we really call it “produced by AI” at that point?
> Anti-LLM coding agents: it's not a working compiler, though. And it doesn't matter how few hours it took, because it doesn't work. It's useless.
Also, from the Anti-LLM perspective: did the coding agent actually build a working compiler, or just plagiarize prior art? C compilers are certainly part of the LLM's training set.
That's relevant because the implication seems to be: "Look, the agent can successfully develop really advanced software!" when the reality may be that it can plagiarize existing advanced software, and will fall on its face if asked to do anything not already done before.
A lot of propaganda and hype follows the pattern of presenting things in a way that creates misleading implications in the mind of the listener that the facts don't actually support.
Reminds me so much of the people posting their problems with the Tesla Cybertruck and ending the post with "still love the truck though"
> Anti-LLM coding agents: it's not a working compiler, though. And it doesn't matter how few hours it took, because it doesn't work. It's useless.
Pro-LLM: Read the freaking article, it's not that long. The compiler made a mistake in an area where only two compilers are up to the task: compiling the Linux kernel.
It turns out that isn't true in all instances, as this article demonstrates. I'm not nearly expert enough to be able to decide if that error was simple, stupid, irrelevant, or whatever. I can make a call on whether it successfully compiled the Linux kernel: it did not.
The freaking article omits several issues in the "compiler". My bet is that's because they didn't actually challenge the output of the LLM, as usually happens.
If you go to the repository, you'll find fun things, like the fact that it cannot compile a bunch of popular projects, and that it compiles others but the code doesn't pass the tests. It's a bit surprising, especially when they don't explain why those failures exist (are they missing support for some extensions? any feature they lack?)
It gets less surprising, though, when you start to see that the compiler doesn't actually do any type checking, for example. It allows dereferences to non-pointers. It allows calling functions with the wrong number of arguments.
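For illustration, here is the kind of invalid C being described (hypothetical snippets in the spirit of the reported issues; both lines are constraint violations that GCC and Clang reject, so a compiler that accepts them is skipping type checking):

    int add(int a, int b) { return a + b; }

    int main(void)
    {
        int x = 5;
        int y = *x;     /* dereferencing a non-pointer */
        return add(1);  /* calling with the wrong number of arguments */
    }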
There's also this fantastic part of the article where they explain that the LLM got the code to a point where any change or bug fix breaks a lot of the existing tests, and that further progress is not possible.
Then there's the fact that this article points out that the kernel doesn't actually link. How did they "boot it"? It might very well be possible that it crashed soon after boot and wasn't actually usable.
So, as usual, the problem here is that a lot of people look at LLM outputs and trust what they're saying they achieved.
> it's not a working compiler, though. And it doesn't matter how few hours it took, because it doesn't work. It's useless.
It works. It's not perfect, but Anthropic claims to have successfully compiled and booted 3 different configurations with it. The blog post failed to reproduce one specific version on one specific architecture. I wish Anthropic gave us more information about which kernel commits succeeded, but still. Compare this to the years it took for clang to compile the kernel, yet people were not calling that compiler useless.
If anyone thinks other compilers "just work", I invite them to start fixing packages that fail to build in nixos after every major compiler change, to get a dose of real world experience.
The billion dollar question is, can we get from 80% to 100%? Is this going to be a situation where that final gap is just insurmountable, or will the capabilities simply keep increasing?
"The source code of gcc is available online"
"Pro": give me tons of money to keep going this endeavour.
More seriously: if some LLM or other can _assist_ with a C++ to plain-and-simple-C port...
Honestly? I think if we as a society could trust our leaders (government and industry) to not be total dirtbags the resistance to AI would be much lower.
Like imagine if the message was “hey, this will lead to unemployment, but we are going to make sure people can still feed their families during the transition; maybe look into ways to subsidize retraining programs for people whose jobs have been impacted.” Seems like a much more palatable narrative than, “fuck you pleb! go retrain as a plumber or die in a ditch. I’ll be on my private island counting the money I made from destroying your livelihood.”
First, remember when we had LLMs run optimisation passes last year? AlphaEvolve doing square packing, and optimising ML kernels? The "anti" crowd was like "well, of course it can automatically optimise some code, that's easy". And things like "wake me up when it does hard tasks". Now, suddenly when they do hard tasks, we're back at "haha, but it's unoptimised and slow, laaame".
Second, if you could take 100 juniors, 100 mid level devs and 100 senior devs, lock them in a room for 2 weeks, how many working solutions that could boot up linux in 2 different arches, and almost boot in the third arch would you get? And could you have the same devs now do it in zig?
The thing that keeps coming up is that the "anti" crowd is fighting their own demons, and has kinda lost the plot along the way. Every "debate" is about promises, CEOs, billions, and so on. Meanwhile, at every step of the way these things become better and better. And incredibly useful in the right hands. I find it's best to just ignore the identity folks, and keep on being amazed at the progress. The haters will just find the next goalpost and the next fight with invisible entities. To paraphrase - those who can, do; those who can't, find things to nitpick.
Codex frustratingly failed at refactoring my tests for me the other day, despite me trying many, many prompts of increasing specificity. A task a junior could've done
Am I saying "haha it couldn't do a junior level task so therefor anything harder is out of reach?" No, of course not. Again, it's not a human. The comparison is irrelevant
Calculators are superhuman at arithmetic. Not much else, though. I predict this will be superhuman at some tasks (already is) and we'll be better at others
The second depends. If you told them to pretrain for writing a C compiler however long it takes, I could see a smaller team doing it in a week or two. Keep in mind LLMs pretrain on all OSS, including GCC.
> Meanwhile, at every step of the way these things become better and better.
Will they? Or do they just ingest more data and compute?[1] Again, time will tell. But to me this seems more like speed-running into an Idiocracy scenario than a revolution.[2]
I think this will turn out to be another driverless-car situation where the last 1% needs 99% of the time. And while it might happen eventually, it's going to take an extremely long time.
[1] Because we don't have many more computing jumps left, nor will future data be as clean as it is now.
[2] Why idiocracy?
Because they are polluting their own corpus of data. And by replacing thinking about computers, there will be no one to really stop them.
We'll equalize the human and computer knowledge by making humans less knowledgeable rather than more.
So you end up in an Idiocracy-like scenario where a doctor can't diagnose you, nor can the machine because it was dumbed down by each successive generation, until it resembles a child's toy.
I mean, this compiler is the equivalent of handing someone a calculator when it was first invented and seeing that it took 2 hours to multiply two numbers together. I would go "cool that you have a machine that can do math, but I can multiply faster by hand, so it's a useless device to me".
At the same time - you could direct Claude to review the register spilling code and the linker code of both LLVM/gcc for potential improvements to CCC and you will see improvements. You can ask it not to copy GPL code verbatim but to paraphrase and tell it it can rip code from LLVM as long as the licenses are preserved. It will do it.
You might only see marginal improvements without spending another $100K on API calls. This is about one of the hardest projects you could ask it to bite off and chew on. And would you trust the compiler output yet over GCC or LLVM?
Of course not.
But I wager, that if you _started_ with the LLVM/gcc codebases and asked it to look for improvements - it might be surprising to see what it finds.
Both sides have good arguments. But this could be a totally different ball game in 2, 5 and 10 years. I do feel like those who are most terrified by it are those whose identity is very much tied to being a programmer, and seeing the potential for their role to be replaced and I can understand that.
Me personally - I'm relieved I finally have someone else to blame and shout at rather than myself for the bugs in the software I produce. I'm relieved that I can focus now on the more creative direction and design of my personal projects (and even some work projects on the non-critical paths) and not get bogged down in my own perfectionism with respect to every little component until reaching exhaustion and giving up.
And I'm fascinated by the creativity of some of the projects I see that are taking the same mindset and approach.
I was depressed by it at first. But as I've experimented more and more, I've come to enjoy seeing things that I couldn't ever have achieved even with 100 man years of my own come to fruition.
I learned about it from HackerNews and ChatGPT.
But lying and hype is baked into the DNA of AI booster culture. At this point it can be safely assumed anything short of right-here-right-now proof is pure unfettered horseshit when coming from anyone and everyone promoting the value of AI.
It seemed pretty unambiguous to me from the blog post that they were saying the kernel could boot on all three arch's, but clearly that's not true unless they did some serious hand-waving with kernel config options. Looking closer in the repo they only show a claimed Linux boot for RISC-V, so...
[0]: https://www.anthropic.com/engineering/building-c-compiler - "build a bootable Linux 6.9 on x86, ARM, and RISC-V."
[1]: https://github.com/anthropics/claudes-c-compiler/blob/main/B... - only shows a test of RISC-V
In the specific case of __jump_table I would even guess there was some work in getting the Clang build working.
I'm not dissing CCC here; rather, I'm impressed with how much speed GCC squeezes out of what is assumed to be an already intrinsically fast language.
The primitives are directly related to the actual silicon. A function call is actually going to turn into a call instruction (or get inlined). The order of bytes in your struct is how they exist in memory, etc. A pointer being dereferenced is a load/store.
The converse holds as well. Interpreted languages are slow because this association with the hardware isn't there.
When you have a poopy compiler that does lots of register shuffling, you lose this association.
Specifically, the constant spilling in those specific functions that caused the 1000x slowdown makes the C code look a lot more like Python code (where every variable is several dereferences away).
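To make the analogy concrete: a spilled variable behaves roughly as if it were declared volatile, with every use becoming a memory load and every assignment a store (an illustrative sketch of the effect, not actual CCC output):

    long sum_fast(long n)          /* acc and i live in registers */
    {
        long acc = 0;
        for (long i = 0; i < n; i++)
            acc += i;
        return acc;
    }

    long sum_spilled(long n)       /* every access goes through the stack */
    {
        volatile long acc = 0;
        for (volatile long i = 0; i < n; i++)
            acc += i;
        return acc;
    }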
> The compiler did its job fine
> Where CCC Succeeds / Correctness: Compiled every C file in the kernel (0 errors)
I don't think that follows. It's entirely possible that the compiler produced garbage assembly for a bunch of the kernel code that would make it totally not work even if it did link. (The SQLite code passing its self tests doesn't convince me otherwise, because the Linux kernel uses way more advanced/low-level/uncommon features than SQLite does.)
Whenever I've done optimisation (e.g. genetic algorithms / simulated annealing) before you always have to be super careful about your objective function because the optimisation will always come up with some sneaky lazy way to satisfy it that you didn't think of. I guess this is similar - their objective was to compile valid C code and pass some tests. They totally forgot about not compiling invalid code.
Indeed. For a specific example of it not erroring out:
https://www.reddit.com/r/Compilers/comments/1qx7b12/comment/...
The assembler is harder than it looks. It needs to know the exact binary encoding of every instruction for the target architecture. x86-64 alone has thousands of instruction variants with complex encoding rules (REX prefixes, ModR/M bytes, SIB bytes, displacement sizes). Getting even one bit wrong means the CPU will do something completely unexpected.
The linker is arguably the hardest. It has to handle relocations, symbol resolution across multiple object files, different section types, position-independent code, thread-local storage, dynamic linking and format-specific details of ELF binaries. The Linux kernel linker script alone is hundreds of lines of layout directives that the linker must get exactly right.
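Whatever the correct difficulty ranking, the encoding detail in that claim is real. For example, hand-encoding a single register-to-register move on x86-64 (a standard encoding fact, not CCC code):

    /* mov rax, rbx -- encoded by hand:                         */
    /*   REX.W prefix (64-bit operand size)         = 0x48      */
    /*   opcode MOV r/m64, r64                      = 0x89      */
    /*   ModR/M: mod=11, reg=rbx(011), rm=rax(000)  = 0xD8      */
    unsigned char mov_rax_rbx[3] = { 0x48, 0x89, 0xD8 };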
I worked on compilers, assemblers and linkers and this is almost exactly backwards
This explanation confused me too:
> Each individual iteration: around 4x slower (register spilling)
> Cache pressure: around 2-3x additional penalty (instructions do not fit in L1/L2 cache)
> Combined over a billion iterations: 158,000x total slowdown
If each iteration is X percent slower, then a billion iterations will also be X percent slower. I wonder what is actually going on.

Supporting linker scripts is marginally harder, but having manually written compilers before, my experience is the exact opposite of yours.
I thought it was just the compiler that Anthropic produced.
Imagine five years ago saying that you could have a general purpose AI write a c compiler that can handle the Linux kernel, by itself, from scratch for $20k by writing a simple English prompt.
That would have been completely unbelievable! Absurd! No one would take it seriously.
And now look at where we are.
And that’s where my suspicion stems from.
A human expert producing an equivalent original piece of work wouldn't be able to do this without all the context. By that I mean all the shared insights, discussion and design that happened when making the compiler.
So to do this without any of that context is likely just very elaborate copy pasta.
You’re very conveniently ignoring the billions in training and that it has practically the whole internet as input.
Just because we're here doesn't mean we're getting to AGI or software developers begging for jobs at Starbucks
That said, I think the framing of "CCC vs GCC" is wrong. GCC has had thousands of engineer-years poured into it. The actually impressive thing is that an LLM produced a compiler at all that handles enough of C to compile non-trivial programs. Even a terrible one. Five years ago that would've been unthinkable.
The goalpost everyone should be watching isn't "can it match GCC" — it's whether the next iteration closes that 158,000x gap to, say, 100x. If it does, that tells you something real about the trajectory.
It says that a nested query does a large number of iterations through the SQLite bytecode evaluator. And it claims that each iteration is 4x slower, with an additional 2-3x penalty from "cache pressure". (There seems to be no explanation of where those numbers came from. Given that the blog post is largely AI-generated, I don't know whether I can trust them not to be hallucinated.)
But making each iteration 12x slower should only make the whole program 12x slower, not 158,000x slower.
Such a huge slowdown strongly suggests that CCC's generated code is doing something asymptotically slower than GCC's generated code, which in turn suggests a miscompilation.
I notice that the test script doesn't seem to perform any kind of correctness testing on the compiled code, other than not crashing. I would find this much more interesting if it tried to run SQLite's extensive test suite.
It could have spotted out GCC source code verbatim and matched its performance.
(a small remark, but to be clear, I'm not terribly impressed by the AI showcase of the C compiler, nor by the browser before that, as it stands)
I'd like to see someone disagree with the following:
Building a C compiler, targeting three architectures, is hard. Building a C compiler which can correctly compile (maybe not link) the modern linux kernel is damn hard. Building a C compiler which can correctly compile sqlite and pass the test suite at any speed is damn hard.
To the specific issues with the concrete project as presented: This was the equivalent of a "weekend project", and it's amazing
So what if some gcc is needed for the 16-bit stuff? So what if a human was required to steer claude a bit? So what if the optimizing pass practically doesn't exist?
Most companies are not software companies; software is a line-item, an expense, an unavoidable cost. The amount of code (not software engineering, or architecture, but programming) developed tends towards glue of existing libraries to accomplish business goals, which, in comparison with a correct modern C compiler, is far less performance critical, complex, broad, etc. No one is seriously saying that you have to use an LLM to build your high-performance math library, or that you have to use an LLM to build anything, much in the same way that no one is seriously saying that you have to rewrite the world in rust, or typescript, or react, or whatever is bothering you at the moment.
I'm reminded of a classic slashdot comment--about attempting to solve a non-technical problem with technology, which is doomed to fail--it really seems that the complaints here aren't about the LLMs themselves, or the agents, but about what people/organizations do with them, which is then a complaint about people, but not the technology.
I mean, $20k in tokens, plus the supervision by the author to keep things running, plus the number of people that got involved according to the article... doesn't look like "a weekend project".
> Building a C compiler which can correctly compile (maybe not link) the modern linux kernel is damn hard.
Is it correctly compiling it? Several people have pointed out that the compiler will not emit errors for clearly invalid code. What code is it actually generating?
> Building a C compiler which can correctly compile sqlite and pass the test suite at any speed is damn hard.
It's even harder to have a C compiler that can correctly compile SQLite and pass the test suite but then the SQLite binary itself fails to execute certain queries (see https://github.com/anthropics/claudes-c-compiler/issues/74).
> which, in comparison with a correct modern C compiler, is far less performance critical, complex, broad, etc.
That code might be less complex for us, but more complex for an LLM if it has to deal with lots of domain-specific context and without a test suite that has been developed for 40 years.
Also, if the end result of the LLM has the same problem that Anthropic concedes here, which is that the project is so fragile that bug fixes or improvements are really hard/almost impossible, that still matters.
> it really seems that the complaints here aren't about the LLMs themselves, or the agents, but about what people/organizations do with them, which is then a complaint about people, but not the technology
It's a discussion about what the LLMs can actually do and how people represent those achievements. We're pointing out that LLMs, without human supervision, generate bad code, code that's hard to change, with modifications specifically made to address failing tests without challenging the underlying assumptions, code that's inconsistent and hard to understand even for the LLMs.
But some people are taking whatever the LLM outputs at face value, and then claiming some capabilities of the models that are not really there. They're still not viable for using without human supervision, and because the AI labs are focusing on synthetic benchmarks, they're creating models that are better at pushing through crappy code to achieve a goal.
But they instead made a blog post about how it would cost you twenty thousand dollars to recreate a piece of software that they do not, with a straight face, actually recommend that you use in any capacity beyond as a toy.
[0] I am categorically not talking about anything AI related or anything that is directly a part of their sales funnel. I am talking about a piece of software that just efficiently does something useful. GCC is an example, Everything by voidtools is an example, Wireshark is an example, etc. Claude is not an example.
I'd challenge anyone who is negative about this to try to achieve what they did by hand, with the same restrictions (e.g. generating full SSA form instead of just directly emitting code, capable of compiling Linux), and log their time doing it.
Having written several compilers, I'll say with some confidence that not many developers would succeed. Far fewer would succeed fast enough to compete with the $20k cost. Even fewer would do that and deliver decent quality code.
Now notice the part where they've done this experiment before. This is the first time it succeeded. Give it another model iteration or two, and expect quality to increase, and price to drop.
This is the new floor.
From the original blog post:
>Every agent would hit the same bug, fix that bug, and then overwrite each other's changes. Having 16 agents running didn't help because each was stuck solving the same task.
>The fix was to use GCC as an online known-good compiler oracle to compare against. I wrote a new test harness that randomly compiled most of the kernel using GCC
The blog post used the word autonomous a lot, which I suppose is true if Nicholas Carlini is not a human being but in fact a Claude agent.
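For what it's worth, a GCC-as-oracle harness is conceptually a small differential-testing loop; something like this sketch (invented file names and commands, not the author's actual harness):

    #include <stdlib.h>

    int main(void)
    {
        /* 1. Compile the same translation unit with oracle and candidate. */
        if (system("gcc -O0 -o ref test.c") != 0) return 1;  /* known-good oracle */
        if (system("ccc -o out test.c") != 0) return 1;      /* compiler under test */

        /* 2. Run both binaries and diff their observable output. */
        if (system("./ref > ref.txt") != 0) return 1;
        if (system("./out > out.txt") != 0) return 1;
        return system("diff -q ref.txt out.txt") != 0;  /* nonzero on mismatch */
    }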
>I'd challenge anyone who is negative about this to try to achieve what they did by hand, with the same restrictions (e.g. generating full SSA form instead of just directly emitting code, capable of compiling Linux), and log their time doing it.
Why would anyone do that? My point was: why does the company _not_ make a useful tool? I feel like that is a much more interesting topic of discussion than “why aren’t people that aren’t impressed by this spending their time trying to make this company look good?”
>This is the new floor.
Aside from the notion that they maybe intentionally set out to create the least useful or valuable output from their tooling (eg ‘the floor’) when they did not say that they did that, my question was “Why do they not make something genuinely useful?”. Marketing speak and imaginary engineers failing at made up challenges does not answer that question.
If I deliver my work with lots of bugs and serious caveats, I will be sacked. Stretching the definition of "working", especially in times where code quality is going down, does not help.
As far as I understand from the comments, Anthropic released a "compiler" that translates C code to some assembly, which might or might not be valid input for a linker.
They claimed they were able to compile the Linux kernel (which version? which config?) and boot it (was the boot successful? were all devices correctly initialized? is userland running without problems?)
At the moment it really looks like a political farce with no real outcome except some promises.
Is there a code repository so I can test it?
What is the licence of this compiler ?
I don't think that's a valid explanation. If something takes 8x as long then if you do it a billion times it still takes 8x as long. Just now instead of 1 vs 8 it's 1 billion vs 8 billion.
I'd be curious to know what's actually going on here to cause a multiple order of magnitude degradation compared to the simpler test cases (ie ~10x becomes ~150,000x). Rather than I-cache misses I wonder if register spilling in the nested loop managed to completely overwhelm L3 causing it to stall on every iteration waiting for RAM. But even that theory seems like it could only account for approximately 1 order of magnitude, leaving an additional 3 (!!!) orders of magnitude unaccounted for.
I think there's a lot more to the story here.
I wonder if there could be a bug where extra code runs but the result is discarded (and the code that runs happens to have no side effects).
The post also says
> That is roughly 1 billion iterations
but that doesn't sound right because GCC's version runs in only 0.047s, and no CPU can do a billion iterations that quickly.
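Rough arithmetic supports that (assuming a ~4 GHz core, my assumption): 0.047 s at 4x10^9 cycles/s is about 1.9x10^8 cycles, so a billion iterations would require retiring more than 5 iterations per cycle, which is implausible for an interpreter dispatch loop. Either the iteration count is off, or GCC's build is doing far less work per "iteration" than claimed.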
These kinds of tasks are relatively easy for LLMs, they’re operating in a solved design space and recombining known patterns. It looks impressive to us because writing a compiler from scratch is difficult and time consuming for a human, not because of the problem itself.
That doesn’t mean LLMs aren’t useful, even if progress plateaued tomorrow, they’d still be very valuable tools. But building yet another C compiler or browser isn’t that compelling as a benchmark. The industry keeps making claims about reasoning and general intelligence, but I’d expect to see systems producing genuinely new approaches or clearly better solutions, not just derivations of existing OSS.
Instead of copying a big project, I'd be more impressed if they could innovate in a small one.
Generally after the SSA pass, you convert everything into a register transfer language (RTL) and then do a register allocation pass. In GCC's case it is even more extreme -- you have GIMPLE in the middle that does more aggressive optimization, similar to rustc's MIR. CCC doesn't have all that, and for register allocation you can do a simple linear scan just as the usual JIT compiler would (and, from my understanding, something CCC could do at little cost), but most of the "hard part" of a compiler today is actually optimization -- the frontend is mostly a solved problem if you accept some hacks, unlike me, who is still looking for an elegant academic solution to the typedef problem:
    typedef int AA;

    void foo()
    {
        AA AA;           /* OK - define variable AA of type AA */
        int BB = AA * 2; /* OK - AA is just a variable name here */
    }

    void bar()
    {
        int aa = sizeof(AA), AA, bb = sizeof(AA);
    }
https://eli.thegreenplace.net/2011/05/02/the-context-sensiti...

I don't know off the top of my head whether there's a parser framework that makes this parse "straightforward" to express.
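The usual workaround is the classic "lexer hack": the parser records typedef names in a symbol table, and the lexer consults it to decide whether an identifier should be tokenized as a type name. A minimal sketch (invented names, ignoring the scoping and shadowing that the AA examples above exercise):

    #include <string.h>

    enum token_kind { TOK_IDENT, TOK_TYPE_NAME };

    /* Toy flat symbol table; a real compiler tracks scopes. */
    static const char *typedef_names[64];
    static int n_typedefs;

    void register_typedef(const char *name)  /* called by the parser */
    {
        typedef_names[n_typedefs++] = name;
    }

    enum token_kind classify_identifier(const char *name)  /* called by the lexer */
    {
        for (int i = 0; i < n_typedefs; i++)
            if (strcmp(typedef_names[i], name) == 0)
                return TOK_TYPE_NAME;
        return TOK_IDENT;
    }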
1. In the real world, for a similar task, there is little reason not to give the compiler access to all the papers about optimizations, ISA PDFs, and MIT-licensed compilers of all kinds. It would perform much better, and this is proof that "uncompressing GCC" is just a claim (but see point 2 for even more).
2. Of all the tasks, the assembler is the part where memorization would help the most. Instead, the LLM can't perform without the ISA documentation that it saw repeated an infinite number of times during pre-training. Guess what?
3. Rust is a bad language for the test as a first target; if you want an LLM-coded Rust C compiler, and you have LLM experience, you would go C compiler -> Rust port. Rust is hard when there are mutable data structures with tons of references around, and a C compiler is exactly that. Composing complexity from different layers is an LLM anti-pattern that anyone who has worked a lot with automatic programming knows very well.
4. In the real world, you don't do a task like that without steering. And steering will do wonders. That's not to say the experiment was ill-conceived; the fact is that the experimenter was trying to make a different point from the one the Internet took away (as usual).
All of your points are important, but I think this is the most important one.
Having written compilers, $20k in tokens to get to a foundation for a new compiler with the feature set of this one is a bargain. Now, the $20k excludes the time to set up the harness, so the total cost would be significantly higher, but still.
The big point here is that the researchers in question demonstrated that a complex task such as this could be achieved shockingly cheaply, even when the agents were intentionally forced to work under unrealistically harsh conditions, with instructions to include features (e.g. SSA form) that significantly complicated the task but made the problem closer to producing the foundation for a "proper" compiler rather than a toy compiler, even if the outcome isn't a finished production-ready multi-arch C compiler.
While I agree that the technology behind this is impressive, the biggest issue is license infringement. Everyone knows there's GPL code in the training data, yet there's no trace of acknowledgment of the original authors.
What’s the big deal about that?
Large language models and small language models are very strong for solving problems, when the problem is narrow enough.
They are above human average for solving almost any narrow problem, independent of time, but when time is a factor, let's say less than a minute, they are better than experts.
An OS kernel is exactly a problem, that everyone prefers to be solved as correct as possible, even if arriving at the solution takes longer.
The author mentions stability and correctness of CCC; these are properties of Rust and not of vibe coding. Still an impressive feat of Claude Code, though.
Ironically, if they had first populated the repo with objects, functions and methods with just todo! bodies, made sure the architecture compiles and is sane, and only then let the agent fill in the bodies with implementations, most features would work correctly.
I am writing a program to do exactly that for Rust, but even then, how would the user/programmer know beforehand how many architectural details to specify using todo!, to be sure that the problem the agent tries to solve is narrow enough? That's impossible to know! If the problem is not narrow enough, then the implementation is gonna be a mess.
"The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time."
Sure, these things can technically frontload a lot of work at the beginning of a project, but I would argue the design choices made at the beginning of a project set the tone for the entire project, and it's best those be made with intention, not by stochastic text extruders.
Let's be real: these things are shortcut machines that appeal to people's laziness, and as with most shortcuts in life, they come with consequences.
Have fun with your "Think for me SaaS"; I'm not going to let my brain atrophy to the point where my competency is 1:1 correlated with the quantity and quality of tokens I have access to.
However, looking at the assembly, it's clear to me the opt passes do not work, and I suspect it contains large amounts of 'dead code' - where the AI decided to bypass non-functioning modules.
If a human expert were to write a compiler not necessarily designed to match GCC, but provide a really good balance of features to complexity, they'd be able to make something much simpler. There are some projects like this (QBE,MIR), which come with nice technical descriptions.
Likewise there was a post about a browser made by a single dude + AI, which was like 20k lines, and worked about as well as Cursor's claimed browser. It had like 10% of the features, but everything there worked reasonably well.
So while I don't want to make predictions, it seems for now that the human-in-the-loop method of coding works much better (and cheaper!) than getting AI to generate a million lines of code on its own.
Per the article from the person who directed this, the user directed the AI to use SSA form.
> However, looking at the assembly, it's clear to me the opt passes do not work, and I suspect it contains large amounts of 'dead code' - where the AI decided to bypass non-functioning modules.
That is quite possibly true, but presumably at least in part reflects the fact that it has been measured on completeness, not performance, and so that is where the compiler has spent time. That doesn't mean it'd necessarily be successful at adding optimisation passes, but we don't really know. I've done some experiments with this (a Ruby ahead-of-time compiler) and while Claude can do reasonably well with assembler now, it's by no means where it's strongest (it is, however, far better at operating gdb than I am...), but it can certainly do some of it.
> So while I don't want to make predictions, it seems for now that the human-in-the-loop method of coding works much better (and cheaper!) than getting AI to generate a million lines of code on its own.
Yes, it absolutely is, but the point in both cases was to test the limits of what AI can do on their own, and you won't learn anything about that if you let a human intervene.
$20k in tokens to get to a surprisingly working compiler from agents working on their own is at a point where it is hard to assess how much money and time you'd save once considering the cleanup job you'd probably want to do on it before "taking delivery", but had you offered me $20k to write a working C-compiler with multiple backends that needed to be capable of compiling Linux, I'd have laughed at the funny joke.
But more importantly, even if you were prepared to pay me enough, delivering it as fast while writing it by hand would be a different matter. Now, if you factor in the time used to set up the harness, the calculation might be different.
But now that we know models can do this, efforts to make the harnesses easier to set up (for my personal projects, I'm experimenting with agents to automatically figure out suitable harnesses), and to make cleanup passes to review, simplify, and document, could well end up making projects like this far more viable very quickly (at the cost of more tokens, certainly, but even if you double that budget, this would be a bargain for many tasks).
I don't think we're anywhere near taking humans out of the loop for many things, but I do see us gradually moving up the abstraction levels, and caring less about the code at least at early stages and more about the harnesses, including acceptance tests and other quality gates.
The generated code's quality is more in line with an 'undergrad course compiler backend': basically doing as little work on the backend as possible, and always doing all the work conservatively.
Basic SSA optimizations such as constant propagation, copy propagation or common subexpression elimination are clearly missing from the assembly, and the register allocator is also pretty bad, even though there are simple algorithms for that sort of thing that perform decently.
So even though the generated code works, I feel like something's gone majorly wrong inside the compiler.
The 300k LoC thing isn't encouraging either; it's way too much for what the code actually does.
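For reference, the simplest of those missing passes is only a few lines over a three-address IR. A toy constant-folding sketch (invented IR, assuming SSA-style single assignment; not CCC's actual representation):

    #include <stddef.h>

    /* Toy three-address instruction: dst = lhs op rhs, or dst = const. */
    typedef struct {
        enum { OP_CONST, OP_ADD, OP_MUL } op;
        int dst, lhs, rhs;  /* virtual register numbers (0..255) */
        int value;          /* used when op == OP_CONST */
    } Instr;

    /* One forward pass: if both inputs are known constants,
       rewrite the instruction as a constant definition. */
    void fold_constants(Instr *code, size_t n)
    {
        int known[256] = {0}, val[256];
        for (size_t i = 0; i < n; i++) {
            Instr *ins = &code[i];
            if (ins->op != OP_CONST && known[ins->lhs] && known[ins->rhs]) {
                int a = val[ins->lhs], b = val[ins->rhs];
                ins->value = (ins->op == OP_ADD) ? a + b : a * b;
                ins->op = OP_CONST;
            }
            if (ins->op == OP_CONST) {
                known[ins->dst] = 1;
                val[ins->dst] = ins->value;
            }
        }
    }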
I just want to point out, that I think a competent-ish dev (me?) could build something like this (a reasonably accurate C compiler), by a more human-in-the-loop workflow. The result would be much more reasonable code and design, much shorter, and the codebase wouldn't be full of surprises like it is now, and would conform to sane engineering practices.
Honestly I would certainly prefer to do things like this as opposed to having AI build it, then clean it up manually.
And it would be possible without these fancy agent orchestration frameworks and spending tens of thousands of dollars on API.
This is basically what went down with Cursor's agentic browser, vs an implementation that was recreated by just one guy in a week, with AI dev tools and a premium subscription.
There's no doubt that this is impressive, but I wouldn't say that agentic software engineering is here just yet.
Am I just old? "How did they fit those people into the television?!"
The article is clear about its limitations. The code README opens by saying “don’t use this” which no research paper I know is honest enough to say.
As for hype, it’s less hyped than most university press releases. Of course since it’s Anthropic, it gets more attention than university press.
I think the people most excited are getting ahead of themselves. People who aren’t impressed should remember that there is no C compiler written in Rust for it to have memorized. But, this is going to open up a bunch of new and weird research directions like this blog post is beginning to do.
By evaluating the objective (successful compilation) in a loop, the LLM effectively narrows the problem space. This is why the code compiles even when the broader logic remains unfinished/incorrect.
It’s a good example of how LLMs navigate complex, non-linear spaces by extracting optimal patterns from their training data. It’s amazing.
p.s. if you translate all this to marketing jargon, it’ll become “our LLM wrote a compiler by itself with a clean room setup”.
Edit: typo
I'm surprised that this wasn't possible before with just a bigger context size.
I have, cough, AI-generated an x86-to-x86 compiler (takes x86 in, replaces arbitrary instructions with functions, and spits x86 out). At first it was horrible, but after letting it work for 2 more days it was actually down to only a 50% to 60% slowdown even when every memory read instruction was replaced.
Now that's when people should get scared. But it's also reasonable to assume that CCC will look closer to GCC at that point, maybe influenced by other compilers as well. Tell it to write an ARM compiler and it will never succeed (probably; maybe it can use an intermediary and shove it into LLVM and it'll work, but at that point it is no longer a "C" compiler).
This is what I've noticed about most LLM-generated code: it's about the quality of an undergrad, and I think there's a good reason for this - most of the code it's been trained on is of undergrad quality. Stack Overflow questions, a lot of undergrad open source projects; there are some professional-quality open source projects (eg SQLite) but they are outweighed by the mass of other code. Also, things like SQLite don't compare to things like Oracle or SQL Server, which are proprietary.
Having an LLM generate a first complete iteration of a C compiler in Rust is super useful if the code is of good enough quality that it can be maintained and improved by humans (or other AIs). It is (almost) completely useless otherwise.
And that is the case for most of today's code generated by AIs. Most of it will still have to be maintained by humans, or at least a human will ultimately be responsible for it.
What I would like to see is whether that C compiler is a horrible mess of tangled spaghetti code with horrible naming, or something with a clear structure, good naming, and sensible comments.
There is the additional problem that LLM comments often represent what the code is supposed to do, not what it actually does. People write comments to point out what was weird during implementation and what they found out while testing the implementation. LLM comments seem instead to reflect the information present before writing the implementation, i.e. they use them as an internal checklist of what to generate.
In my opinion deceiving comments are worse than no comments at all.
It seems to run.
Testing will be implemented in another release.
Looking at Readme.md it downloads a particular kernel version with a particular busybox version and runs them in qemu.
A parody.
This was the aim. The reality is far away from it.
That’s the whole promise of reaching AGI: that it will be able to improve itself.
I think Anthropic ruined this by releasing it too early; it would have been way more fun to see a live website where you could watch it iterating and the progress it was making.
It would be interesting to compare the source code of CCC to other projects. I have a slight suspicion that CCC stole a lot of code from other projects.
Nevertheless, the victories continue to be closer to home.
We'll see how fun that will be for these big corporations.
For example: "Hey, Claude, re-implement Adobe Photoshop in Rust."
I am curious what the results would be for something like lexer + parser + abstract machine code generator generation for a made-up language
Perhaps that would be a more telling benchmark to evaluate the Claude compiler against.
- Deal with legacy code from day one.
- Have a mess of a codebase that is most likely 10-20x the amount of LOC compared to human code
- Have your program be really slow and filled with bugs and edge cases.
This is the battlefield for programmers. You either just build the damn thing or fix bugs for the next decade.
Quite well, possibly.
Look, I wasn't even aware of this until it popped up a few days ago on HN. I am not privy to the details of Anthropic's engineers in general, or of the specific engineer who curated this marathon multi-agent dev cycle, but I can tell you how anyone familiar with compilers or programming language development will proceed:
1. Vibe an IL (intermediate language) specification into existence, even if it is only held in RAM as structures/objects (a minimal sketch follows this list)
2. Vibe some utility functions for the IL (dump, search, etc)
3. Vibe a set of backends, that take IL as input and emit ISA (Instruction Set Architecture), with a set of tests for each target ISA
4. Vibe a front-end that takes C language input and outputs the IL, with a set of tests for each language construct.
(Everything from #2 onwards can be done in parallel)
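To make step 1 concrete, the in-RAM IL can be as simple as a tagged three-address instruction; a hypothetical minimal shape (my invention, not CCC's actual IL):

    /* Hypothetical minimal IL node (not CCC's actual design). */
    typedef enum { IL_CONST, IL_ADD, IL_LOAD, IL_STORE, IL_CALL, IL_BRANCH } il_op;

    typedef struct il_instr {
        il_op op;
        int dst, src1, src2;    /* virtual registers (SSA-numbered) */
        struct il_instr *next;  /* linear list within a basic block */
    } il_instr;

    /* Each backend (#3) lowers il_instr sequences to a target ISA;
       the front-end (#4) emits them from the C AST. */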
I have no reason to believe that the engineer who vibe-coded CCC is anything other than competent and skillful, so let's assume he did at least the above (TBH, he probably did more)[1].
This means that CCC has, in its code, everything needed to vibe a never-before-seen ISA, given the ISA spec. It also means it has everything needed to support a new front-end language as long as it is similar enough to C (i.e. language constructs can map to the IL constructs).
So, this should be pretty easy to expand on, because I find it unlikely that the engineer who supervised/curated the process would be anything less than an expert.
The only flaw in my argument is that I am assuming that effort from CC was so large because it did the src -> IL -> ISA route. If my assumption is wrong, it might be well-nigh impossible to add support for a new ISA.
------------------------------
[1] When I agreed to a previous poster on a previous thread that I can recreate the functionality of CCC for $20k, these are the steps I would have followed, except I would not have LLM-generated anything.
I am pretty sure everybody agrees that this result is somewhere between slop code that barely works and the pinnacle of AI-assisted compiler technology. But discussions should not be held from the extreme points. Instead, I am looking for a realistic estimation from the HN community about where to place these results in a human context. Since I have no experience with compilers, I would welcome any of your opinions.
I offered to do it, but without a deadline (I work f/time for money), only a cost estimation based on how many hours I think it should take me: https://news.ycombinator.com/item?id=46909310
The poster I responded to had claimed that it was not possible to produce a compiler capable of compiling a bootable Linux kernel within the $20k cost, nor for double that ($40k).
I offered to do it for $40k, but no takers. I initially offered to do it for $20k, but the poster kept evading, so I settled on asking for the amount he offered.
We act so superior to LLMs but I'm very unimpressed with humanity at this stage.
Disclaimer: I have near-zero competence in compilers and compiler-building, but I just want to summarize what's going on in my opinion.
It's the same thing as if I was given millions of repos of already-built compilers and only had the ability to weld these parts together. Yeah, it TECHNICALLY will work, but what's the point of building on top of the garbage afterwards?
You'll definitely want to refactor it, and it will not really be a pleasant experience to begin with. You have to have a certain amount of dedication and knowledge to contribute to this compiler, which you don't have if you're a plain vibe-coder. The most difficult parts of C compilers (and basically any compiler whatsoever) are optimizations and portability. Will you be able to have these things in a fully Claude-generated repo? Who knows! Maybe you'll cause irreversible damage to the system of the end user, no one knows! There are so many snippets of code in the world, and you can't just filter out the malicious and stupid ones.
The thing is, LLMs are stupid. I partially agree with Richard Stallman's take on the current state of AI: these are not intelligence, more like bullshit generators if improperly used. Come to think of it, humans are partially LLMs themselves, but we have much more than that. An LLM can only be used as a tool to help developers. My bet: LLMs will never be able to supply 100% prod-ready code by themselves. They are just not capable of that; it's in their nature to mimic and not to think.
LLMs in education and fast information fetching are a blessing. It's the best thing that's happened since the invention of search engines. But never in my life will I blindly copy-paste some shell script or code when I don't know that it's harmless, or when the snippet lacks a hyperlink to its original source.
Vibe-coders IMO are the guys that have copy-pasted stuff from the internet since... well, any time since the 2000s. They just evolved into guys that blindly copy-paste the average result of their requests, given by more convenient search engines. Not that it's a bad evolution step; it's pretty much the same thing, but maybe it's less harmful to the copy-pasters themselves.
THE BAD THING in CCC's creation is that some non-technical people are degenerates that will take this repo and say "LOOK, A COMPILER BUILT BY AN AI. AI!!! IT'S LIKE... A REALLY TEDIOUS TASK TO BUILD A COMPILER YKNOW. AND IT WAS BUILT (welded from other people's repos) BY AI WITH NO HUMAN INTERVENTION. AND IT WORKS!!!!". No, it kind of doesn't. It even lacks "--help", lol. With every update, every pull request, there is no guarantee that it will not become such an unstable codebase that any of its future extensions will either fail or misbehave. AI is only an option when ruled by one who knows their stuff. They'll look at the code and say "well, that part is crappy, we need to refactor it", or "hey, that snippet is pretty good, didn't know you could do it that simply".
LLMs are just a big dictionary that you can either use to expand your knowledge about certain things you're interested in, or to blindly look up stuff you urgently need to use once. If you want to ask somebody Polish if you can borrow their phone, you certainly can grab a Polish dictionary, go to the part with sentences, and read aloud: "Czy mogę skorzystać z twojego telefonu?". Will it help you learn? Technically yes; realistically, absolutely not. These snippets are only useful if you know how to use them right, how to form something with meaning out of them.
Pro-LLM people are dumb. But so are the Anti-LLM people. And what I mean by that is not "WE NEED AI EVERYWHERE!", but that we should acknowledge AI as a tool, not the worker.
As a post-scriptum I want to add one thing: the Pro-LLM mindset is a lot worse than the Anti-LLM one. AI guys, don't you see that the Bubble has already grown and becomes bigger and bigger as we go on? AI integration as of today is a really dumb and frightening process. When you want to debate with Pro-LLM folks, please don't act all high and mighty; you're not really in a position to forbid someone from using something, especially CEOs, ESPECIALLY CEOs. With this attitude you're only contributing to building a walled echo chamber for vibe coders. Monkey (CEO) sees AI is capable of building something; monkey fires an entire department to save money on the development team. Is the end result worse? Yes. But does it really bother mister Monkey? No; for him it's just another win for the company's profit. He will not hear your point of view unless you prove him the opposite - and yet again, you cannot do this if you're going to act like he doesn't know shit about business. It's literally the same thing that's happened to tons of job positions before in human history, but with one small change: now it's tech, and every businessman thinks they know tech because they use technical devices (idk, their smartphone or PC). BUSINESS DEMANDS PROFIT RAISES - always has. You're going to have to stand for your right to integrate AI only wisely, not to push it everywhere, and it is really important that you know how to do it.
If you're capable of boosting yourself with a bit of AI - why not? The performance boost will bend the learning curve in your favor; you're only going to win from that. And when the bubble pops, the demand for real workers who know their stuff and who know how to boost themselves with the right tools will skyrocket. That is my bet.
/s
This is actually a nice case study in why agentic LLMs do kind of think. It's by no means the same code or compiler. It had to figure out lots and lots of problems along the way to get to the point of tests passing.
Why the sarcasm tag? It is almost certainly trained on several compiler codebases, plus probably dozens of small "toy" C compilers created as hobby / school projects.
It's an interesting benchmark not because the LLM did something novel, but because it evidently stayed focused and maintained consistency long enough for a project of this complexity.