Go is an excellent language for LLM code generation. It has a large, stable training corpus; one way to write it; one build system; one formatter; static typing; and CSP-style concurrency without the C++ footguns.
The language hasn't had a breaking version in over a decade, and there's minimal framework churn. When I advise teams adopting agentic coding workflows at my consultancy [0], Go delivers consistent results via Claude and Codex far more regularly than I see with clients using TypeScript and/or Python.
When LLMs have to navigate Python and TypeScript there is a massive combinatorial space of frameworks, typing approaches, and utility libraries.
Too much optionality in the training distribution. The output is high entropy and doesn't converge. Python only dominated early AI coding because ML researchers write Python and trained on Python first. It was path dependence, not merit.
The thing nobody wants to say is that the reason serious programmers historically hated Go is exactly why LLMs are great at it: There's a ceiling on abstraction.
Go has many many failings (e.g. it took over a decade to get generics). But LLMs don't care about expressiveness, they care about predictability. Go 1.26 just shipped a completely rewritten go fix built on the analysis framework that does AST-level refactoring automatically. That's huge for agentic coding because it keeps codebases modern without needing the latest language features in training data or wasting tokens looking up new signatures.
I spent four years building production public key infrastructure in Golang before LLMs [1]. After working with coding agents like everyone else and domain-switching for clients, I've become more of a Go advocate because the language finally delivers on its promise. Engineers have a harder time complaining about the verbose, boilerplate-heavy syntax when an LLM writes it correctly every single time.
It's an even more popular language with even more training data, and it also has a better type system, so you get more validation of LLM output, etc.
I personally think neither Go nor Java would be good for "agents". Better to have them sandboxed in WASM.
Java is a fine language, tech stack, and ecosystem, but I agree with the author and parent commenter that this is a sweet spot for Go. Their decision to use it makes a lot of sense.
Go has _none_ of this nonsense, and it's better for it. Nor does Rust, FWIW, which IMO is also a better target language than Java for just about everything.
You've almost certainly got more experts-exchange.com than github.com
It really felt like using the AI tooling of a year or two ago. It wasn't understanding my prompts, it went on tangents, it didn't follow the existing style and idioms. Maybe Claude was hungover or doesn't like Mondays, but the contrast with Go was surprising.
One example is that I wanted to add an extra prometheus metric to keep track of an edge case in some for loop. All it had to do was define a counter and increment it. For some reason it would define the counter on the line before incrementing it, instead of defining it next to the other counters outside of the for loop. Technically not wrong (defining a counter is idempotent), but who does that? Especially when the other counters are defined elsewhere in the same function?
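For what it's worth, the placement the commenter expected looks like this. A minimal runnable sketch with a stand-in counter type (the real code would use prometheus's client library, which isn't reproduced here):

```go
package main

import "fmt"

// counter stands in for a prometheus.Counter; only Inc matters here.
type counter struct{ n int }

func (c *counter) Inc() { c.n++ }

func countEdgeCases(items []int) int {
	// Idiomatic placement: define the counter once, alongside the
	// others, outside the loop -- not on the line before each Inc.
	edgeCases := &counter{}
	for _, v := range items {
		if v < 0 { // the edge case being tracked
			edgeCases.Inc()
		}
	}
	return edgeCases.n
}

func main() {
	fmt.Println(countEdgeCases([]int{1, -2, 3, -4}))
}
```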
Anyway, n=1 but I feel it has an easier time with Go.
Python is an interesting one because it's not always obvious the program is wrong or will fail, thanks to dynamic typing.
But another problem is the Python philosophy since 3.0. Backwards compatibility was once treated as almost sacrosanct; in 3.0+ it is not. 2.7 persisted for so long for this reason. Even minor releases in 3.x make breaking changes, and it's wild to me.
I just wish Go had cooperative async/await rather than channels because (IMHO) cooperative async/await is a vastly superior abstraction to unbuffered channels in particular.
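For readers who haven't hit the distinction: an unbuffered channel is a rendezvous point, so a send blocks until a receiver is ready. A minimal sketch of that behavior:

```go
package main

import "fmt"

func main() {
	ch := make(chan int)        // unbuffered: send and receive must meet
	done := make(chan struct{})

	go func() {
		fmt.Println(<-ch) // this receive unblocks the send below
		close(done)
	}()

	ch <- 42 // blocks here until the goroutine is ready to receive
	<-done   // wait for the goroutine to finish printing
}
```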
Now the case for Go or for other tightly standardised languages is that whatever the LLM produces, you're likely to be familiar with and make sense of its decisions. With C++, you can generally steer the LLM to refactor things in a certain way but it's extra steps. With Ruby it works surprisingly well too. I'm a lot less happy with their results in Lisp or in Bash/zsh for instance, and mixed results in C depending on what you give them to start - they just come with such random stuff. But it may be just a matter of training set and the relative free-form of those languages.
I think this is true, but it misses a very key point. Go does an impressively bad job of designing APIs that are hard to misuse, so LLMs will misuse them, and you also end up writing unit tests just to validate that the libraries were used correctly. This isn't always possible (or is awkward/cumbersome) for certain scenarios like database queries.
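To make the database example concrete: the iteration contract in `database/sql` (Next/Scan/Err/Close) relies entirely on caller discipline. A sketch with a hypothetical stand-in rows type, so it runs without a real database:

```go
package main

import "fmt"

// fakeRows mimics the database/sql iteration contract: Next/Scan/Err/Close.
// Nothing in the API forces correct use -- forgetting Close leaks the
// connection, and skipping the Err check silently drops errors.
type fakeRows struct {
	data   []string
	i      int
	closed bool
}

func (r *fakeRows) Next() bool { return r.i < len(r.data) }
func (r *fakeRows) Scan(dst *string) error {
	*dst = r.data[r.i]
	r.i++
	return nil
}
func (r *fakeRows) Err() error   { return nil }
func (r *fakeRows) Close() error { r.closed = true; return nil }

func collect(rows *fakeRows) ([]string, error) {
	defer rows.Close() // easy to forget; nothing forces it
	var out []string
	for rows.Next() {
		var s string
		if err := rows.Scan(&s); err != nil {
			return nil, err
		}
		out = append(out, s)
	}
	// Also easy to forget: the loop exits the same way on error and on
	// completion, so Err() must be checked explicitly after the loop.
	return out, rows.Err()
}

func main() {
	rows := &fakeRows{data: []string{"a", "b"}}
	got, err := collect(rows)
	fmt.Println(got, err)
}
```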
All of the reasons people argue Go is good for LLMs are more true for Rust. You and the LLM can design libraries to be difficult to misuse, and then get instant feedback from the compiler to the LLM about what it did wrong, and often with suggestions about how it should fix them! This also makes RL deriving from compiler feedback more effective.
This allows the LLMs to reason more abstractly at larger scales, since the abstractions are less leaky (unlike in Go). The ceiling on abstraction screws you here, since troubleshooting requires more deep diving. It's the same reason Go projects become difficult for humans at large scales, too.
Rust got a severe case of npm-itis
Golang is the best language there is for most workflows that aren't bare-metal embedded or real-time, and this is coming from a 20+ year C++ dev.
Python doesn't need path dependence to prove its merit. There's a reason why it is one of the major programming languages and was number one for a while.
What would be the best language properties for LLM assisted coding?
This lines up neatly with the kind of low‑abstraction systems I like running: 2021 HP PC with i7, bare‑metal‑ish, Crunchbang++, no desktop, openbox window manager. Boots to login in 17 seconds. Terminal front and center — local AI bare-metal inference, no wrapper, ffmpeg, ffplay, etc.
Go’s “no abstraction ceiling” feels like the same preference at the language level: shallow stack, no indirection, and code that stays close to the metal. That’s why LLMs work so well on Go: it’s opinionated, predictable, and there’s usually one obvious way to do things. Personally, I've come to love a LACK of abstraction.
I would say Rust is quite good for just letting something churn through compiler errors until it works, and then you're unlikely to get runtime errors.
I haven't tried Haskell, but I assume that's even better.
With other languages, whether it's TypeScript/Go/Python, even if you explicitly ask agents to write/run tests, after a while agents just forget to do that, unless they cause build failures. You have to constantly remind them to do that as the session goes. Never happens with Rust in my experience.
Add a single task using your project's preferred task-runner that performs all the checks you want the agent to adhere to: linting, test coverage, style checks, tests, etc., and add a rule in AGENTS.md that agents should always run this task after edits and fix any warnings or errors produced.
Add the same task to your version management's pre-merge checks, in case the agent (or a colleague) forgets to run it before pushing. This was good practice before LLMs too, but I was never a fan of attaching such checks to pre-commit hooks.
I am trying out building a toy language hosted on Haskell and it's been a nice combo - the toy language uses dependent typing for even more strictness, but simple regular syntax which is nicer for LLMs to use, and under the hood, if you get into the interpreter, you can use the full richness of Haskell with fewer of the safety guardrails of dependent typing. A bit like safe/unsafe Rust.
I haven't had this problem with Opus 4.5+ and Haskell. In fact, I get the opposite problem and often wish it was more capable of using abstractions.
- I can build SPAs with typescript and offload expensive operations to a rust implementation that targets wasm
- I can build a multi-platform bundled app with Tauri that uses TS for the frontend, rust for the main parts of the backend, and it can load a python sidecar for anything I need python for (ML stuff mainly)
- Haven't dived too much into games but bevy seems promising for making performant games without the overhead of using one of the big engines (first-class ECS is a big plus too)
It ended up solving the problem of wanting to use the best parts of all of these different languages without being stuck with the worst parts.
- Rust: nearly universally compiles and runs without fault.
- Python,JS: very often will run for some time and then crash
The reason I think is type safety and the richness of the compiler errors and warnings. Rust is absolutely king here.
Not that I disagree; I am sure that with Rust it would be even more stable.
not borne out by evidence. rust is bottom-mid tier on autocodebench. typescript is marginally better than js
shifting to compile time is not necessarily great, because the llm has to vibe its way through code in situ. if a compiler has to check your code it's already too late, and the llm does not have your codebase in its weights; a fetch to read the types of your functions is context-expensive since it's nonlocal.
If you're running good agentic AI it can read the compile errors just like a human and work to fix them until the build goes through.
I've been successful with each, I think there's positives and negatives to both, just wanted to mention that particular one that stands out as making it relatively more pleasant to work with.
Let's set aside the fact that Go is a garbage collected language while Rust is not for now...
Do you prefer to let LLM reason about lifetimes, or debugging subtle errors yourself at runtime, like what happens with C++?
People who are familiar with the C++ safety discussion understand that lifetimes are like types -- they are part of the code and are just as important as the real logic. You cannot be ambiguous about lifetimes yet be crystal clear about the program's intended behavior.
As a human I can just decide to write quality code (or not!), but LLMs don't understand when they're being lazy or stupid and so need to have that knowledge imposed on them by an external reviewer. Static analysis is cheap, and more importantly it's automatic. The alternative is to spend more time doing code review, but that's a bottleneck.
Lifetimes are a global property and LLMs are not particularly good at reasoning about them compared to local ones.
Most applications don't need low level memory control, so this complexity is better pushed to runtime.
There are lots of managed languages with good/even stronger type systems than Rust, paired with a good modern GC.
Huh? Lifetime analysis is a local analysis, same as any other kind of type checking. The semantics may have global implications, but exposing them locally is the whole point of having dedicated syntax for it.
I wouldn't use it for the galaxy brain libraries or explorations I like to do for my blog but for production Haskell Opus 4.5+ is really good. No other models have been effective for me.
- Rust code generates absolutely perfectly in Claude Code.
- Rust code will run without GC. You get that for free.
- Rust code has a low defect rate per LOC, at least measured by humans. Google gave a talk on this. The sum types + match and destructure make error handling ergonomic and more or less required by idiomatic code, which the LLM will generate.
I'd certainly pick Rust or Go over Python or TypeScript. I've had LLMs emit buggy dynamic code with type and parameter mismatches, but almost never statically typed code that fails to compile.
It's a weird-ass Forth-like but with a strong type system, contracts, native testing, fuzz testing, and a constraint solver for integer math backed by z3. Interpreter implemented in Elixir.
In about 150 commits, everything it has done has always worked without runtime errors, both the Elixir interpreter and the examples in the hallucinated language, some of them non-trivial for a week-old language (JSON parser, DB-backed TODO web app).
It's a deranged experiment, but on the other hand seems to confirm that "compile" time analysis plus extensive testing facilities do help LLM agents a lot, even for a weird language that they have to write just from in-context reference.
Don't click if you value your sanity; the only human-generated thing there is the About blurb:
In particular the whole stack based thing looks questionable.
In fact the very first answer by Gemini proposed an APL-like encoding of the primitives for token saving, but when I started the implementation Claude Code pushed back on that, saying it would need to keep some sane semantics around the keywords to be able to understand the programs.
The very strict verification story seems more plausible, tracks with the rest of the comments here.
What has surprised me is that the language works at all; adding todo items to a web app written in a week-old language felt a bit eerie.
I have written about three Forth implementations by hand over the years for fun, but I have never been able to really program in it, because the stack wrangling confuses me enormously.
So for me anything vaguely complex is unreadable, but apparently not for the LLMs, which I find surprising. When I have interrogated them they say they like the lack of syntax more than the stack ops hamper them, but it might be just a hallucinated impression.
When they write Cairn I sometimes see stack related error messages scroll by, but they always correct them quickly before they stop.
- Strongly typed, including GADTs and various flavors of polymorphism, but not as inscrutable as Haskell
- (Mostly) pure functions, but multiple imperative/OO escape hatches
- The base language is surprisingly simple
- Very fast to build/test (the bytecode target, at least)
- Can target WASM/JS
- All code in a file is always evaluated in order, which means it has to be defined in order. Circular dependencies between functions or types have to be explicitly called out, or build fails.
I should add, it's also very fun to work with as a human! Finding refactors with pure code that's this readable is a real joy.
But I don't believe the effects are tracked in the type system yet, though that's on its way.
Well if it's a choice between these 4, then sure. Not sure that really suffices to qualify Go as "the" best language for agents
“Why Elixir is the best language for AI” https://news.ycombinator.com/item?id=46900241
- for comparison of the arguments made
- features a bit more actual data than “intuitions” compared to OP
- interesting to think about in an agent context specifically is runtime introspection afforded by the BEAM (which, out of how it developed, has always been very important in that world) - the blog post has a few notes on that as well
https://autocodebench.github.io/#:~:text=Experimental%20Resu...
Go as a language for LLM generation actually trails a lot of other languages, at least according to this research...
I’m not sure about cargo audit specifically, but most other security advisories are package scoped and will warn if your code transitively references the package, regardless of which symbols your code uses.
As a human programmer with creative and aesthetic urges, as well as being lazy and having an ego, I love expressive languages that let me describe what I want in a parsimonious fashion, i.e. as few lines of code as possible and no boilerplate.
With the advances in agent coding none of these concerns matter any more.
What matters most is that you can easily look at the code and understand the intent clearly. That the agent doesn't get distracted by formatting. That the code is relatively memory-safe, type-safe, avoids null issues, and cannot ignore errors.
I dislike Go but I am a lot more likely to use it in this new world.
But for how long will it matter? I do wonder if programming languages as we know them today will lose relevance as all this evolves.
On the other hand I think Rust is better by some margin. The type system is obviously a big gain, but Rust is very fast-moving. When APIs change, LLMs can't follow, and it takes many tries to get things right, so it kind of levels out. Code might compile, but only on some god-forgotten crate version everybody (but the LLM) forgot about.
From personal experience Haskell benefits the most. Not only does it lean on the type system more than Rust, its APIs move at a snail-like pace, which means it doesn't suffer from the outdated-API problem, and code that compiles will work just fine. Also I think that Haskell code in training sets is guaranteed to be safe because of the language extension system.
Rust is great, but there's no need to manage memory manually if you don't need to.
So for general mainstream languages, that leaves ... Python. Sure, it's ok but Go has strong typing from the start, not bolted on with warts.
(I realized how incredibly subjective this comment turned out to be after I had written it. Apologies if I omitted or slighted your fave. This is pretty much how I see it).
But it does have the benefit of a very strong "blessed way of doing things", so agents go off the rails less, and if Claude is writing the code and the endless "if err != nil" blocks, then the syntax bothers me less.
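The boilerplate in question is at least mechanical. A sketch of the shape agents reproduce so reliably (the function and error messages here are illustrative):

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort shows the ubiquitous check-wrap-return pattern: verbose,
// but uniform enough that agents reproduce it without drifting.
func parsePort(s string) (int, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parse port %q: %w", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, fmt.Errorf("port %d out of range", p)
	}
	return p, nil
}

func main() {
	p, err := parsePort("8080")
	fmt.Println(p, err)

	_, err = parsePort("http")
	fmt.Println(err)
}
```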
I've no idea myself, I just thought it was interesting for comparison.
https://news.ycombinator.com/item?id=47222705
Edit: cool article. I have myself speculated that we will get a new language made for/by LLMs that will be torture to write by hand/in an IDE, but easy to read/follow/navigate/check for a human and super easy for LLMs to develop and maintain.
https://bernste.in/writings/the-unreasonable-effectiveness-o...
The part that surprised me: the bottleneck wasn't AI capability. It was that the tooling wasn't designed for AI as the builder. Once I locked architectural decisions upfront and enforced a single way to do everything, the AI stopped hallucinating boilerplate and started making genuinely good decisions.
Zero ambiguity in the codebase = zero drift in AI-generated code.
The bottleneck in agent systems is almost never your language runtime. It's LLM API latency (200-2000ms per call), external service I/O, and retry/error handling across unreliable tool calls. Whether your orchestration loop runs in 2ms (Go) or 15ms (Python) is irrelevant when you're waiting 800ms for Claude to respond.
What actually matters for production agent systems: (1) state management across multi-step workflows that can fail at any point, (2) graceful degradation when one tool in a chain times out, (3) observability into what the agent decided and why. These are design problems, not language problems.
Python wins on ecosystem breadth — every LLM provider ships a Python SDK first, every embedding model has Python bindings, and the tooling around prompt engineering and evaluation is Python-native. When you're iterating on agent behavior (which is 80% of the work), that ecosystem advantage compounds fast.
That said, Go's argument is strongest for the "agent runtime" layer — the part that manages concurrency, schedules tool calls, and handles streaming. If you separate the orchestration runtime from the AI logic, Go for the former and Python for the latter isn't a bad split.
I expect rust to gain some market share since it's safe and fast, with a better type system, but complex enough that many developers would struggle by themselves. But IME AI also struggles with the manual memory management currently in large projects and can end up hacking things that "work" but end up even slower than GC. So I think the ecosystem will grow, but even once AI masters it, the time and tokens required for planning, building, testing will always exceed that of a GC language, so I don't see it ever usurping go, at least not in the next decade.
I wish the winner would be OCaml, as it's got the type safety of rust (or better), and the development speed of Go. But for whatever reason it never became that mainstream, and the lack of libraries and training data will probably relegate it to the dustbin. Basically, training data and libraries >>> operational characteristics >>> language semantics in the AI world.
I have a hard time imagining any other language maintaining a solid advantage over those two. There's less need for a managed runtime, definitely no need for an interpreted language, so I imagine Java and Python will slowly start to be replaced. Also I have to imagine C/C++ will be horrible for AI for obvious reasons. Of course JS will still be required for web, Swift for iOS, etc., but for mainstream development I think it's going to be Rust and Go.
Syntax. Syntax is the reason. It's too foreign to be picked up quickly by the mass of developers that already know a C style language. I would also argue that it's not only foreign, it's too clunky.
I’ve tried LLM-assisted development across Java, JavaScript, Python, Rust, C++ and Go. The difference in how well models hold the system in their “head” becomes obvious once the codebase grows beyond a few thousand lines.
With most ecosystems the entropy explodes. Python and TypeScript in particular have an enormous combinatorial space: frameworks, build systems, typing styles, dependency patterns, project layouts. Two codebases solving the same problem can look completely different. That variability leaks directly into the training distribution and the output starts to drift.
Go sits at the opposite end of that spectrum.
There’s basically:
- one formatting style
- one standard build system
- one dependency mechanism
- one dominant project layout
- a very small set of concurrency primitives
- a language that has barely changed in a decade
That constraint is exactly what LLMs thrive on. The solution space is narrow, so the model converges instead of wandering. In my own experiments, once a codebase passes ~8k lines, most languages start to show cracks: incorrect imports, wrong framework idioms, subtle API hallucinations. With Go the agents stay coherent much longer. The tooling helps too — extremely fast builds, deterministic formatting, and batteries-included tooling mean the feedback loop is tight.
Java often gets suggested as an alternative because it’s statically typed and mature. But in practice the ecosystem is a maze: Maven vs Gradle, Spring everything, annotation magic, layers of frameworks, multiple architectural styles. The language may be stable, but the surrounding universe isn’t.
I’m also a big Rust fan. If you actually need Rust’s capabilities it’s fantastic. But it’s slower to compile and significantly more complex for LLMs to work with. Beyond small codebases the difference becomes obvious.
One final point: architecture matters as much as language. Strong modular boundaries, API-first feature access, and good code maps help enormously when working with LLMs.
Used carefully, these tools are a serious productivity multiplier. The interesting question now isn’t whether they work - it’s which languages and system designs allow them to work reliably.
The most important downside of Python is that it doesn't compile to a native binary the OS can execute directly, and it's much slower. However, it's a great "glue" for different binaries or languages like Rust and Go.
Rust is the increasingly popular language for AI agents to choose from, often integrated into Python code. The trend is on the side of Rust here. I don't want to mention all the great points from the original poster. One technical point that wasn't mentioned, from my experience, is that the install size is too large for embedded systems. As the article mentioned, the build times are also longer than Go and this is an even worse bottleneck on embedded systems. I prefer Go over Rust in my research and development but I yield to other developers on the team professionally.
What about C/C++? At the moment, I've had great success implementing C++ code through agentic AI. However, there is a dearth of frameworks for things like web development. Because CPython is implemented in C, and integrating C modules into Python is relatively straightforward, I find myself taking the NumPy approach, where C is the backbone of performance-critical features.
Personally, I still actively utilize code I've written more than 10 years ago that's battle tested, peer reviewed, and production ready. The above comments are for the current state, but what about the future? Another point that wasn't mentioned was the software license from Go. It's BSD3 with a patent grant which is more permissive than Rust's MIT + Apache 2.0 licenses. This is very important to understand the future viability of software because given enough time and all other things the same, more permissive software will win out in adoption.
The rabbit hole goes deeper. I think we will sacrifice Rust as the "good-enough" programming language to spoil the ecosystem with Agentic AI before its redemption arc. Only time will tell, but Python's inability to compile to a native binary makes it a bad choice for malware developers. You can fill in the blank here. Perhaps the stage has already been set, and it looks like Rust will be the opening act now that the lights are on.
I actually spent some time trying to get to the bottom of what a logical extension of this would be. An entirely made up language spec for an idealized language it never saw ever, and therefore had no bad examples of it. Go is likely the closest for the many reasons people call it boring.
That's our take anyway, but we are just a sample.
But what makes Go useful is the fact that it compiles to an actual executable you can easily ship anywhere - and that is actually really good considering that the language itself is super easy to learn.
I've recently started building some AI agent tools with it and so far the experience has been great:
https://github.com/pantalk/pantalk https://github.com/mcpshim/mcpshim
But that's because it's tight, token efficient, and above all local. Pure functions don't require much context to reason about effectively.
However, you do miss the benefit of types, which are also good for LLMs.
The "ideal" LLM language would have the immutability and functional nature of Clojure combined with a solid type system.
Haskell or OCaml immediately come to mind, but I'm not sure how much the relative lack of training data hurts... curious if anyone has any experiences there.
Stack overflow tags:
17,775 Clojure
74,501 Go
I’m not finding a way to get any useful information from GitHub, e.g. count of de-duplicated lines of code per language. There might be something in their annual “Octoverse” report but I haven’t drilled into it yet: https://github.blog/news-insights/octoverse/octoverse-a-new-...

- structurally edited, ensuring syntactic validity at all times
- annotated with metadata, so that agents can annotate the code as they go and refer back to accreted knowledge (something Clojure can do structurally using nodepaths or annotations directly in code)
- put into any environment you might like, e.g. using ClojureScript
I haven't proven to myself this is more useful/results in better code than just writing code "the normal way" with an agent, but it sure seems interesting.
I love the expressivity of Rust, but compile times are a problem.
Someone with some sway, please convince a hyperscaler to support something like https://borgo-lang.github.io/. I think it may be the AST that we all need.
I've started what I'm calling an agent first framework written in Go.
It's just too easy to get great outputs with Go and Codex.
https://github.com/swetjen/virtuous
The key is blending human observability with agent ergonomics.
Golang just gets bogged down in irrelevant details way too easily for this.
Though, I have found both to be better at C# than Swift, for example.
I really love this point. Not always an easy sell upstream, but a big factor in happy + productive teams.
On the other hand, if there are good conventions, that's also a benefit - for example, Ruby on Rails.
Maybe this is a good incentive to improve error handling in Go.
- I agree that go's syntax and concepts are simpler (esp when you write libraries, some rust code can get gnarly and take a lot of brain cycles to parse everything)
- > idiomatic way of writing code and simpler to understand for humans - eh, to some extent. I personally hate go's boilerplate of "if err != nil" but that's mainly my problem.
- compiles faster, no question about it
- more go code out there allowing models to generate better code in Go than Rust - eh, here I somewhat disagree. The quality of the code matters as well. That's why a lot of early python code was so bad. There just is so much bad python out there. I would say that code quality and correctness matters as well, and I'd bet there's more "production ready" (heh) rust code out there than go code.
- (go) it is an opinionated language - so is rust, in a lot of ways. There are a lot of things that make writing really bad rust code pretty hard, and you get lots of protections for foot-meets-gun situations. AFAIK in go you can still write code that deadlocks using channels. I don't think you can do that in rust.
- something I didn't see mentioned is error messages. I think rust errors are some of the best in the industry, and they are sooo useful to LLMs (I've noticed this ever since coding with gpt4 era models!)
I guess we'll have to wait and see. There will be a lot of code written by agents going forward, we'll be spoiled for choice.
Reduce entropy, increase probability of the correct outcome.
LLMs are surfing higher dimensional vector spaces, reduce the vector space, get better results.
Code is free, sure, but it's not guaranteed to be correct, and review time is not free.
... write the code yourself?
With Go it will increasingly be the case that one has to write the design doc carefully, with constraints; for semi-technical/coder folks that does make a lot of sense.
With Python, make-believe is easy (I've seen it multiple times myself), but don't you think a coding agent/LLM has to be quite a bit more malicious to put make-believe logic into a compiled language than into an interpreted one?
---
# Author likes go
Ok, cool story bro...
# Go is compiled
Nice, but Python also has syntax and type checking -- I don't typically have any more luck generating more strictly typed code with agents.
# Go is simple
Sure. Python for a long time had a reputation as "pseudocode that runs", so the arguments about go being easy to read might be bias on the part of the author (see point 1).
# Go is opinionated
Sure. Python also has standards for formatting code, running tests (https://docs.python.org/3/library/unittest.html), and has no need for building binaries.
# Building cross-platform Go binaries is trivial
Is that a big deal if you don't need to build binaries at all?
# Agents know Go
Agents seem to know python as well...
---
Author seems to fall short of supporting the claim that Go is better than any other language by any margin, mostly relying on the biases they have that Go is a superior language in general than, say, Python. There are arguments to be made about compiled versus interpreted, for example, but if you don't accept that Go is the best language of them all for every purpose, the argument falls flat.
1) Go runs faster, so if you're not optimizing for dev time (and if you're vibe coding, you're not) then it's a clear winner there
2) Python's barrier to entry is incredibly low, so intuitively there's likely a ton of really terrible python code in the training corpus for these tools