With minor exceptions, moving from one language to another isn’t a matter of simple syntax, trivia, or swapping standard libraries. Certainly, expert beginners espouse that all the time, but languages often have fundamental concepts they’re built on, and those concepts need to be understood in order to be effective with them. For example: have you ever seen a team move from Java to Scala, JS to Java, or C# to Python (all of which I’ve seen) where the developers didn’t try to understand the language they were moving to? Without fail, they tried to force the concepts that were important in their prior language onto the new one, to abysmal results.
If you’re writing trivial scripts or one-off utils, it probably doesn’t add up enough to matter, and it feels great, but you don’t know what you don’t know, and you don’t know what to look for. Offloading the understanding of the concepts that matter in a language to an LLM is a recipe for a bad time.
I completely agree. That's another reason I don't feel threatened by non-programmers using LLMs: to actually write useful Go code you need to figure out goroutines, for React you need to understand the state model and hooks, for AppleScript you need to understand how apps expose their features, etc etc etc.
All of these are things you need to figure out, and I would argue they are conceptually more complex than for loops etc.
But... you don't need to memorize the details. I find understanding concepts like goroutines to be a very different mental activity to memorizing the syntax for a Go for loop.
I can come back to some Go code a year later and remind myself how goroutines work very quickly, because I'm an experienced software engineer with a wealth of related knowledge about concurrency primitives to help me out.
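To make the distinction concrete, here's a minimal sketch (my own illustration, not from any of the commenters' codebases) of the kind of goroutine pattern being described: the for-loop syntax is trivia you can look up, but knowing why the WaitGroup and channel are there is the conceptual part that transfers from general concurrency knowledge.

```go
package main

import (
	"fmt"
	"sync"
)

// sumOfSquares fans work out to goroutines and collects the results
// over a buffered channel. The syntax is forgettable; the concept
// (independent workers, a synchronization point, a communication
// channel) is what an experienced engineer re-derives a year later.
func sumOfSquares(n int) int {
	results := make(chan int, n) // buffered so senders never block
	var wg sync.WaitGroup

	for i := 1; i <= n; i++ {
		wg.Add(1)
		go func(x int) { // each iteration runs concurrently
			defer wg.Done()
			results <- x * x
		}(i)
	}

	wg.Wait()      // wait for every goroutine to finish
	close(results) // then it's safe to drain the channel

	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	// Deterministic result regardless of goroutine scheduling order.
	fmt.Println(sumOfSquares(3)) // 1 + 4 + 9 = 14
}
```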
I’ve been encouraging the new developers I work with to ensure they read the docs and learn the language to ensure the LLM doesn’t become a crutch, but rather a bicycle.
But it occurred to me recently that I probably learned most of what I know from examples of code written by others.
I’m certain my less experienced colleagues are doing the same, but from Claude rather than Stack Overflow…
Code is a practical realization of concepts that exist outside of code. Let's take concurrency as an example. It's something that is common across many domains, where 2 (or more) independent actors suddenly need to share a single resource that can't be used by both at the same time. In many cases, it can be resolved (or not) in an ad-hoc manner. And sometimes there's a basic agreement that takes place.
But for computers, you need the solution to be precise, eliminating all error cases. So we go on to define primitives, how they interact with each other, and the sequence of actions. These are still not code. Once we are precise enough, we translate it to code.
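As a sketch of that translation step (my own example, in Go since it comes up elsewhere in the thread): the abstract concept is "two actors, one shared resource, no simultaneous use"; the precise primitive is a mutex; only then does it become code.

```go
package main

import (
	"fmt"
	"sync"
)

// count has two "actors" (goroutines) incrementing one shared
// counter. The mutex is the precise primitive that eliminates every
// bad interleaving; drop it and the final count becomes unpredictable.
func count(increments int) int {
	var mu sync.Mutex
	counter := 0
	var wg sync.WaitGroup

	for a := 0; a < 2; a++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < increments; i++ {
				mu.Lock()   // only one actor may hold the resource
				counter++
				mu.Unlock() // hand it back
			}
		}()
	}

	wg.Wait()
	return counter
}

func main() {
	fmt.Println(count(1000)) // always 2000 with the mutex in place
}
```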
So if you're learning from code, you are tracing back to the original concept. And we can do so because we are good at grasping patterns.
But generated code is often a blend of several patterns at once, some of which aren't even relevant. It's just that in the training phase, the algorithm detected similarity (and we know things can look similar but actually be very different).
So I'm wary of learning from LLMs, because it's a land of mirages.
LLMs are like a map of everything, even if it's a medieval map with distorted country sizes and dragons in every sea. Still, it can be used to get a general idea of where to go.
In software these days that means learning from many different sources: documentation and tutorials and LLM-generated examples and YouTube talks and conference sessions and pairing with coworkers and more.
Sticking to a single source will greatly reduce the effectiveness of how you learn your craft. New developers need to understand that.
That's one of the arguments that never seems to make any sense to me. Were people actually memorizing all this stuff, and are they now happy that they don't have to? Because that's what books, manuals, references, documentation, wikis,... are there for.
I do agree with you both that you need to understand concepts. But by the time I need to write code, I already have a good understanding of what the solution is like, and unless I'm creating the scaffolding of the project, I rarely need to write more than ten lines at a time. If I need to write more, that's when I reach for generators, snippets, code examples from the docs, or switch to a DSL.
Also by the time I need to code, I need factual information, not generated examples which can be wrong.
The reason I used to stick to just a very small set of languages that I knew inside out is that, because I was using them on a daily basis, I was extremely productive writing code in them.
Why write code in Go, where I have to stop and look things up every few minutes, when I could instead use Python, where I have to look things up about once an hour?
LLMs have fixed that for me.
> Also by the time I need to code, I need factual information, not generated examples which can be wrong.
If those generated examples are wrong I find out about 30 seconds later when I run the code: https://simonwillison.net/2025/Mar/2/hallucinations-in-code/
I run Python workshops on weekends, and honestly, I'm speechless at the things students are already doing. But is it realistic to encourage them to study computer science in a year or two, when it's still a 4-year degree? By the time they start uni, LLMs will be 100 times more powerful than they are today. And in 6 years?
Any advice?
I think this is not true for most programming languages. Most of the ones we use today have converged around an imperative model with fairly similar functionality. Converting from one to another can often be done programmatically!
You can’t tell me that precisely the same paradigms, mechanics, patterns and practices will be used to develop software in both Java/Spring and Ruby/Rails, for example. The way you need to (be able to) think about how you write the software using those two languages will be non-trivially different.
Like, languages aren't just different ways to express the same underlying concepts? It's clearly not the case that you can just transliterate a program in language X to language Y and expect to get the same behavior, functionality, etc. in the end?
You most definitely can; if you have Turing complete languages A and B, you can implement an interpreter for A in B and B in A; you can implement a simple X86/aarch64 interpreter in either A or B and use that to run compiled/interpreted programs in the other; you can also write higher-level routines that map to language constructs in one or the other to convert between the two.
Some elements are necessities of the language, while others are more for the human / team.
That's from 6 months ago and the tooling today is almost night and day in terms of improvements.
Also:
> This wasn't something that existed; it wasn't regurgitating knowledge from Stackoverflow. It was inventing/creating something new.
Didn't he expressly ask the LLM to copy an established, documented library?
It’s odd the post describes what it’s doing as creating something new - that’s only true in the most literal (intellectually dishonest) sense.
I find it kind of funny (or maybe sad?) that you say that, then your examples are all languages that basically offer the same features and semantics, but with slightly different syntax. They're all Algol/C-style languages.
I'd understand that moving from Java to/from Haskell can be a bit tricky since they're actually very different from each other, but C# to/from Java? Those languages are more similar than they are different, and the biggest changes are basically just names and trivia, not the concepts themselves.