Impressive. As a practical matter, one wonders what the point would be in creating a new programming language if the programmer no longer has to write or read code.
Programming languages are, after all, the interface a human uses to give instructions to a computer. If you're not writing or reading it, the language, by definition, doesn't matter.
There may actually be more value in creating specialized languages now, not less. Most new languages historically go nowhere because convincing human programmers to spend the time it would take to learn them is difficult, but every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.
If there are millions of lines on GitHub in your language, sure. Otherwise, the 'teaching the AI to write your language' part will occupy so much context that it will be far less efficient than just using TypeScript.
How will it "learn" anything if the only available training data is on a single website?
LLMs struggle to follow instructions even when their training set is massive. The idea that they will be able to produce working software from just a language spec and a few examples is delusional. It's a fundamental misunderstanding of how these tools work. They don't understand anything. They generate patterns based on probabilities and fine tuning. Without massive amounts of data to skew the output towards a potentially correct result, they're not much more useful than a lookup table.
That's assuming your new, virtually unknown language gets slurped up in the next training run, which seems unlikely. Couldn't you use RAG, or have an LLM read the docs for your language?
I think this is right. Strategically, do you have a mental model of some key elements such a new programming language should exhibit? I'm curious about which existing programming languages might be best suited or where the opportunity is for designing something new that could throw away all the optimizations we've done for humans and instead optimize for AI programmers.
Programming languages function in large parts as inductive biases for humans. They expose certain domain symmetries and guide the programmer towards certain patterns. They do the same for LLMs, but with current AI tech, unless you're standing up your own RL pipeline, you're not going to be able to get it to grok your new language as well as an existing one. Your chances are better asking it to understand a library.
Roughly: machine code --> assembly --> C --> high-level languages --> frameworks --> visual tools --> LLM-assisted coding. Most of those transitions were controversial at the time, but in retrospect they mostly expanded the toolbox rather than replacing the lower layers.
One workflow I’ve found useful with LLMs is to treat them more like a code generator after the design phase. I first define the constraints, objects, actors, and flows of the system, then use structured prompts to generate or refine pieces of the implementation.
I'm being slightly facetious of course, I still use sequence diagrams and find them useful. The rest of its legacy though, not so much.
On a different but related note, it's almost the same as pairing django or rails with an LLM. The framework allows you to trust that things like authentication and a passable code organization are being correctly handled.
I'm working on a language as well (hoping to debut by end of month), but the premise of the language is that it's designed like so:
1) It maximizes local reasoning and minimizes global complexity
2) It makes the vast majority of bugs / illegal states impossible to represent (see the sketch after this list)
3) It makes writing correct, concurrent code as expressive as possible (where LLMs excel)
4) It maximizes optionality for performance increases (it's always just flipping option switches - mostly at the class and function input level, occasionally at the instruction level)
The idea is that it should be as easy as possible for an LLM to write it (especially convert other languages to), and as easy as possible for you to understand it, while being almost as fast as absolutely perfect C code, and by virtue of the design of the language - at the human review phase you have minimal concerns of hidden gotcha bugs.
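To make point 2 concrete: this is just the generic "make illegal states unrepresentable" idea shown in plain Rust, not my language's actual syntax (which isn't public yet). The names here are invented for illustration.

// Bad shape: two loosely related fields permit nonsense combinations,
// e.g. connected == true while addr == None.
#[allow(dead_code)]
struct ConnBad {
    connected: bool,
    addr: Option<std::net::SocketAddr>,
}

// Better shape: each variant carries exactly the data that is valid for it,
// so the nonsense combinations cannot even be constructed.
enum Conn {
    Disconnected,
    Connected { addr: std::net::SocketAddr },
}

fn describe(c: &Conn) -> String {
    match c {
        Conn::Disconnected => "not connected".to_string(),
        Conn::Connected { addr } => format!("connected to {addr}"),
    }
}

fn main() {
    let conns = [
        Conn::Disconnected,
        Conn::Connected { addr: "127.0.0.1:8080".parse().unwrap() },
    ];
    for c in &conns {
        println!("{}", describe(c));
    }
}

The point is that a reviewer (human or LLM) never has to check the "connected but no address" case, because the type system already rules it out.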
So yeah, for some things we are already at the point of "I am no longer the coder, I am the architect"... and it's scary.
By what definition? It still matters whether I write my app in Rust or, say, Python, because the Rust version still has better performance characteristics.
That is the part of the post that stuck with me, because I've also picked up impossible challenges and tried to get Claude to dig me out of a mess without giving up from very vague instructions[1].
The effect feels like the Loss-Disguised-As-Win feeling from the video games I used to work on at Zynga.
Sure it made a mistake, but it is right there, you could go again.
Pull the lever, doesn't matter if the kids have Karate at 8 AM.
If you can write a blog post about this, I'd like to read it.
> This sounds like the Loss Disguised as a Win concept from gambling addiction. Consider the hundreds of lines of code, all the apps being created: some of these are genuinely useful, but much of this code is too complex to maintain or modify in the future, and it often contains hidden bugs.
That said, the core value of the software wouldn't exist without a human at the helm. It requires someone to expend the energy to guide it, explore the problem space, and weave hundreds of micro-plans into a coherent, usable system. It's a symbiotic relationship, but the ownership is clear. It’s like building a house: I could build one with a butter knife given enough time, but I'd rather use power tools. The tools don't own the house.
At this point, LLMs aren't going to autonomously architect a 400+ table schema, network 100+ services together, and build the UI/UX/CLI to interface with it all. Maybe we'll get there one day, but right now, building software at this scale still requires us to drive. I believe the author owns the language.
Going into the vault!
Not according to the US Copyright Office. It is 100% LLM output, so it is not copyrighted, thus it's free for anyone to do anything with it and no claimed ownership or license can stop them.
I have yet to see a study showing something like a 2x or better boost in programmer productivity through LLMs. Usually it's something like 10-30%, depending on what metrics you use (which I don't doubt). Maybe it's 50% with frontier models, but seeing these comments on HN where people act like they're 10x more productive with these tools is strange.
I guess you're just not going to believe what anyone says.
I've been trying a new approach I call CLI-first. I realized CLI tools are designed to be used both by humans (the command line) and machines (scripting), and they're perfect for LLMs since they're a text-only interface.
Essentially, instead of trying to get the LLM to generate a fully functioning UI app, you focus on building a local CLI tool first.
A CLI tool is cheaper and simpler, but it still has a real human UX that pure APIs don't.
You can get the LLM to actually walk through the flows and journeys like a real user, end to end, and it will actually see the awkwardness or gaps in the design.
Your command structure will very roughly map to your resources or pages.
Once you are satisfied with the capability of the CLI tool (which may actually be enough on its own, or just a local UI), you can get it to build the remote storage, then the APIs, and finally the frontend.
All the while, you can still tell it to use the CLI to test through the flows and journeys against real tasks that you have, and iterate on it.
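To sketch the shape this takes, here is a minimal, hypothetical starting point; the tool name and subcommands (mytool, add, list, report) are invented and it only prints placeholders rather than doing real work.

use std::env;

// Hypothetical skeleton for the "CLI first" approach: one binary with a few
// subcommands that roughly map to the resources an eventual UI would have.
fn main() {
    let args: Vec<String> = env::args().skip(1).collect();
    match args.first().map(String::as_str) {
        Some("add") => println!("add: record a new entry: {:?}", &args[1..]),
        Some("list") => println!("list: show recorded entries"),
        Some("report") => println!("report: summarize entries"),
        _ => {
            eprintln!("usage: mytool <add|list|report> [args...]");
            std::process::exit(2);
        }
    }
}

An agent can exercise these commands end to end from a shell, which is exactly the "walk through the flows like a real user" step above, and the subcommands later become the API endpoints and pages.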
I did this recently for pulling some of my personal financial data and reporting on it. And now I'm doing the same for a TTS automation I've wanted for a while.
It’s missing all the heart, the soul, of deciding and trading off options to get something to work just for you. It’s like you bought a rat bike from your local junkyard and are trying to pass it off as your own handmade cafe racer.
Also, you decide how much in control you are. Want to provide a hand-made grammar? Go ahead. Want the agent to come up with it just from chatting and pointing it at other languages? OK too. Want to program just the first arithmetic operator yourself and then save the tedium of typing all the others so you can go on to the next step? Fine...
So you can have a huge toy language in mere days and experiment with stuff you'd have to build for months by hand to be able to play with.
Mine is an Io- and Rebol-inspired language that uses SQLite and LuaJIT as a runtime.
1.to 10 .map[n | n * n].each[n | n.say!]
Like, I've had it build a full APL interpreter and half an optimizer, and it started on a copy-and-patch JIT compiler, but it completely fails at "read the spec and make sure the test suite ensures compliance". Plus some additional artifacts which are genuinely useful on their own, as I now have an Automated Yak Shaver™, which is where most of my projects ended up dying, since the yaks are a fun bunch to play with.
That said, it's a lot of words to say not a lot of things. Still a cool post, though!
I believe we're at a point where it's not possible to accurately decide whether text is completely written by human, by computer, or something in between.
If this blog post is unedited LLM output, the blog owner needs to sell whatever model, setup and/or prompt he used for a million dollars, since it's clearly far beyond the state-of-the-art in terms of natural-sounding tone.
This is such an interesting statement to me in the context of leftpad.
Black Mirror did it first https://en.wikipedia.org/wiki/Hang_the_DJ
In all seriousness, this is great, and why not? As the post said, what once took months now takes weeks. You can experiment and see what works. For me, I started off building a web/API framework with certain correctness built in, and kept hitting the same wall: the guarantees I wanted (structured error handling, API contracts, making invalid states unrepresentable) really belonged at the language level, not bolted onto a framework. A few Claude Code sessions later, I had a spec, then a tree-sitter implementation, then a VM/JIT... something that, given my sandwich-generation-ness, I never would have done a few months ago.
However, I fear that agents will always work better on programming languages they have been heavily trained on, so for agent-based development, inventing a new domain-specific language (e.g. for use internally in a company) might not be as efficient as using a generic programming language that models are already trained on and just living with the extra boilerplate.
I really liked that part - the house always wins.
It's not an abstract thing they can't do, you just have to tell them to.
I haven't read any farther than this, yet, but this made me stutter in my reading. Isn't a comparison just a function that takes two arguments and returns a third? How is that different from "+"?
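For what it's worth, in most languages both really are ordinary binary functions; they only differ in what they return. A trivial Rust illustration (names mine):

// "+" and "<" both take two operands; one returns a number, the other a bool.
fn add(a: i32, b: i32) -> i32 { a + b }
fn less_than(a: i32, b: i32) -> bool { a < b }

fn main() {
    println!("{} {}", add(2, 3), less_than(2, 3)); // prints "5 true"
}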
It has not had any issues at all writing objc3 code
> a vibe coded programming language
I would ask my LLM. Not go on HN.
Congratulations on getting to the front page ;)
fn read_float_literal(&mut self) -> &'a str {
    let start = self.pos;
    while let Some(ch) = self.peek_char() {
        if ch.is_ascii_alphanumeric() || ch == '.' || ch == '+' || ch == '-' {
            self.advance_char();
        } else {
            break;
        }
    }
    &self.source[start..self.pos]
}
Admittedly, I do have a very idiosyncratic definition of a floating-point literal for my language (I have a variety of syntaxes for NaNs with payloads), but... that is not a usable definition of a float literal.

At the end of the day, I threw out all of the code the AI generated and wrote it myself, because the AI struggled to produce code that was functional to spec, much less code that would let me easily extend it to the other kinds of operators I knew I would need in the future.
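For contrast, the conventional digits/dot/exponent shape looks more like the sketch below. It's written as a standalone function so it runs on its own, and it is deliberately not my actual grammar (no NaN-with-payload syntaxes); it just shows what the generated method above fails to be.

// Conventional float-literal scan: integer part, optional fraction, optional
// signed exponent. It stops at anything else instead of swallowing '+', '-'
// and arbitrary alphanumerics.
fn read_float_literal(source: &str, start: usize) -> &str {
    let bytes = source.as_bytes();
    let mut pos = start;
    while pos < bytes.len() && bytes[pos].is_ascii_digit() {
        pos += 1;
    }
    if pos < bytes.len() && bytes[pos] == b'.' {
        pos += 1;
        while pos < bytes.len() && bytes[pos].is_ascii_digit() {
            pos += 1;
        }
    }
    if pos < bytes.len() && (bytes[pos] == b'e' || bytes[pos] == b'E') {
        pos += 1;
        if pos < bytes.len() && (bytes[pos] == b'+' || bytes[pos] == b'-') {
            pos += 1;
        }
        while pos < bytes.len() && bytes[pos].is_ascii_digit() {
            pos += 1;
        }
    }
    &source[start..pos]
}

fn main() {
    assert_eq!(read_float_literal("3.25e-4 rest", 0), "3.25e-4");
    assert_eq!(read_float_literal("42+1", 0), "42"); // '+' is no longer swallowed
    println!("ok");
}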
This latest fever for LLMs simply confirms that people would rather do _anything_ other than program in a (not necessarily purely) functional language that has meta-programming facilities. I personally blame functional fixedness (psychological concept). In my experience, when someone learns to program in a particular paradigm or language, they are rarely able or willing to migrate to a different one (I know many people who refused to code in anything that did not look and feel like Java, until forced to by their growling bellies). The AI/LLM companies are basically (and perhaps unintentionally) treating that mental inertia as a business opportunity (which, in one way or another, it was for many decades and still is -- and will probably continue to be well into a post-AGI future).
I mean, they may be right, but there is also a big chance this is Gell-Mann amnesia: "The phenomenon of a person trusting newspapers for topics which that person is not knowledgeable about, despite recognizing the newspaper as being extremely inaccurate on certain topics which that person is knowledgeable about."
TL;DR: I don't think an LLM can create a language from scratch better than what we have. To LLMs, everything already operates on something like a Huffman-coded format (a generalization). For them, you probably could communicate directly in the token representation and you'd be better off. The LLM's actual understanding of a language is probably very inefficient.
For human languages, I think there is an opportunity here: you can build up intelligence on common reusable patterns and find places to optimize their usage, or break them down in a more CPU-efficient, readable way.
Step #2 is: get real people to use it!
Who the hell is going to use it then? You certainly won't, because you're dependent on AI.
It's a deep dismissal that gets right to the heart of the matter in a few succinct sentences.
The "more on that later" was unit tests (also generated by Claude Code) and sample inputs and outputs (which is basically just unit tests by a different name).
This is... horrifically bad. It's stupidly easy to make unit tests pass with broken code, and even more stupidly easy when the test is also broken.
These "guardrails" are made of silly putty.
EDIT: Would downvoters care to share an explanation? Preferably one they thought of?
While I agree "AI is bad", well-written posts like this one can provide real insight into the process of using them, and reveal more about _why_ AI is bad.