LLMs are just a faster, more wrong version of the copy-paste Stack Overflow workflow; now you don't even need to ask the right question to find the answer.
You have to teach students and new engineers never to commit a piece of code they don't understand. If you stop at "I don't know why this works," you will never get out of the famous multi-hour debug loop that LLMs pull you into, or similarly the multi-day build-debugging loop that everyone has been through.
The real thing LLMs do that is bad for learning is that you don't need to ask the right question to find your answer. This is good if you already know the subject, but if you don't, you're not getting that reinforcement in your memory, and you'll find that things learned through LLMs are not retained as long as things you worked through yourself.
Humans suffer from automation bias and other cognitive biases.
Anything that causes disengagement from a process can be a problem, especially for long-term maintainability and architectural erosion, which is mostly what I actively watch for to try to avoid complacency with these tools.
But it takes active effort to avoid for all humans.
IMHO, writing the actual code has always been less of an issue than focusing on domain needs, details, and maintainability.
Since distrusting automation is unfortunately one of the best methods of fighting automation bias, I try to balance encouraging junior engineers to use productivity-boosting tools with making sure they still maintain ownership of the delivered product.
If you use the red-green refactor method, avoiding generative tools for the test and refactor steps seems to work.
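A minimal sketch of that loop (function and test names are hypothetical): the domain test comes first and fails (red), the simplest passing implementation comes next (green, the only step where a generative tool might be allowed), and the refactor is done by hand with the tests as a safety net.

```javascript
// Red: a hand-written test describing a domain need ("slugs are URL-safe"),
// not an implementation detail.
function slugify(title) {
  // Green: the simplest code that makes the tests pass.
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")  // collapse non-alphanumeric runs to "-"
    .replace(/^-+|-+$/g, "");     // trim leading/trailing dashes
}

// Refactor: restructure manually; these assertions must keep passing.
console.assert(slugify("Hello, World!") === "hello-world");
console.assert(slugify("  LLMs & Learning  ") === "llms-learning");
```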
But selling TDD in general can be challenging, especially given Holmström's theorem and the tendency of people to write implementation tests instead of focusing on domain needs.
It is a bit of a paradox that the better the tools become, the higher the risk. I would still encourage people to try the above; just don't make the mistake of keeping the prompts required to get from red to green as the domain tests, since there is a serious risk of coupling to the prompts.
We will see if this works for me long term, but I do think beginners manually refactoring, with thought, could be an accelerator.
But only with intentionally focusing on learning why over time.
Well, in modern software we stand on the shoulders of many, many giants; you have to start somewhere. Some things you may never need to learn (say, git at a deep level, when the concepts of add, commit, rebase, pull, push, cherry-pick, and reset are enough, even if you use a GUI), and some things you might invest in over time (like learning about your OS so you can optimize performance).
The way you use automation effectively is to automate the things you don't want to learn about and work on the things you do want to learn about. If you're a backend dev who wants to learn how to write an API in Actix, go ahead and copy-paste some ChatGPT code; you just need to learn the shape of the API and the language first. If you're a Rust dev who wants to learn how Actix works, don't just copy and paste the code: get ChatGPT to give you a tutorial, then write your API yourself using the docs.
"C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off." -- Bjarne Stroustrup

It is… not a terribly effective approach to programming.
It's one thing to have the LLM generate a function call for you when you don't remember all the parameters. That's a low enough abstraction that it serves as a turbocharged doc lookup. It's also probably okay for getting a basic setup (toolchain etc. for an ecosystem you're unfamiliar with). But to have it solve entire problems for you, especially when you're learning, is a disaster.
Or not.
All hail the mashup.
I'm reminded of the Wat talk: https://www.destroyallsoftware.com/talks/wat
Is it worth learning the mental model behind this system? Or am I better off just shoveling LLM slop around until it mostly works?
For example, if you are a frontend developer doing TypeScript in React, you could learn how React's renderer works, or how TypeScript's type system works, or how the browser's event listeners work. Over time you accumulate this knowledge through the projects you work on and the things you debug in prod. Or you can purposefully learn it through projects and studying. We also build up mental models of the way our product and its dependencies work.
The reason a coworker might appear to be 10x or 100x more productive than you is that they are able to predict things about the system and arrive at solutions faster. Why are they able to do that? It's not because they use vim or type at 200 wpm. It's because they have a mental model of the way the system works that might be more refined than your own.
JavaScript is weird, but show me a language that doesn’t have its warts.
If you want to learn javascript, then yes, obviously. You also need to learn the model to be able to criticise it (effectively) -- or to make the next wat.
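A few of the classic coercion surprises from that genre, all verifiable in any JS engine:

```javascript
// Implicit coercion follows consistent (if surprising) rules.
console.log([] + []);      // ""                : arrays coerce to strings
console.log([] + {});      // "[object Object]" : object's default string form
console.log(1 + "1");      // "11"              : + with a string concatenates
console.log("1" - 1);      // 0                 : - always coerces to numbers
console.log(typeof null);  // "object"          : a famous historical quirk
console.log(NaN === NaN);  // false             : NaN is never equal to itself
```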
> am I better off just shoveling LLM slop around until it mostly works?
Probably not, but this depends on context. If you want a script to do a thing once in a relatively safe environment, and you find the method effective, go for it. If you're being paid as a professional programmer I think there is generally an expectation that you do programming.
If you just follow a few relatively simple rules, JS is actually very nice and "reliable". Those rules are also relatively straightforward: let/const over var, === unless you know better, make sure you know about Number.isInteger, isSafeInteger, isObject, etc. (There were a few more rules like this; I fail to recall all of them, as it's been a few years since I touched JS.) Hope you get the idea.
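A quick sketch of what following those rules looks like in practice (all behavior here is standard JS semantics):

```javascript
// Rule: let/const over var (block scoping, no accidental redeclaration).
const limit = 10;
let count = 0;
console.assert(count < limit);

// Rule: === unless you know better; == coerces in surprising ways.
console.assert("" == 0);   // true with == (both coerce to 0)
console.assert("" !== 0);  // strict comparison keeps types apart

// Rule: know the Number helpers before trusting arithmetic.
console.assert(Number.isInteger(42));
console.assert(!Number.isInteger(0.1 + 0.2));      // 0.30000000000000004
console.assert(Number.isSafeInteger(2 ** 53 - 1)); // largest safe integer
console.assert(!Number.isSafeInteger(2 ** 53));    // precision loss begins
```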
Also when I looked at JS I was just blown away by all the things people built on top of it (babel, typescript, flowtype, vue, webpack, etc etc).
If we are calling software developers who don’t understand how things work, and who can get away with not knowing how things work, engineers, then that’s a separate discussion of profession and professionalism we should be having.
As it stands there’s nothing fundamentally rooted in software developers having to understand why or how things work, which is why people can and do use the tools to get whatever output they’re after.
I don’t see anything wrong with this. If anyone does, then feel free to change the curriculum so students are graded and tested on knowing how and why things work the way they do.
The pearl clutching is boring and tiresome. Where required we have people who have to be licensed to perform certain work. And if they fail to perform it at that level their license is taken away. And if anyone wants to do unlicensed work then they are held accountable and will not receive any insurance coverage due to a lack of license. Meaning, they can be criminally held liable. This is why some countries go to the extent of requiring a license to call yourself an engineer at all.
So where engineering, actual engineering, is required, we already have protocols in place that ensure things aren’t done on a “trust me bro” level.
But for everyone else, they're not held accountable whatsoever, and there's nothing wrong with using whatever tools you need or want to use, right or wrong. If I want to butt-splice a connector, I'm probably fine. But if I want to wire in a 3-phase breaker on a commercial property, I'm either getting it done by someone licensed, or I'm looking at jail time if things go south. And engineering is no different.
The idea that wanting to become better at using something is pearl clutching is frankly why everything has become so mediocre.
AI empowers normal people to start building stuff. Of course it won’t be as elegant and it will be bad for learning. However these people would have never learned anything about coding in the first place.
Are we senior dev people a bit like carriage riders that complain about anyone being allowed to drive a car?
Engineering is also the process of development and maintenance over time. While an AI tool might help you build something that functions, that's just the first step.
I am sure there are people who leverage AI in such a way that they build a thing and also ask it a lot of questions about why it is built a certain way, and seek to internalize that. I'd wager this is a small minority.
To learn effectively you need to challenge your knowledge regularly, elaborate on that knowledge and regularly practice retrieval.
Building things solely relying on AI is not effective for learning (if that is your goal) because you aren’t challenging your own knowledge/mental model, retrieving prior knowledge or elaborating on your existing knowledge.
In contrast, if you're a frontend developer, you don't need to know C++ even though browsers are implemented in it. If you're a C++ developer, you don't need to know assembly (unless you're working on JIT).
I am convinced AI tools for software development will improve to the point that non-devs will be able to build many apps now requiring professional developers[0]. It's just not there yet.
[0] We already had that. I've seen a lot of in-house apps for small businesses built using VBA/Excel/Access in Windows (and HyperCard etc on Mac). They've lost that power with the web, but it's clearly possible.
When I want a quick hint for something I understand the gist of, but don’t know the specifics, I really like AI. It shortens the trip to google, more or less.
When I want a cursory explanation of some low level concept I want to understand better, I find it helpful to get pushed in various directions by the AI. Again, this is mostly replacing google, though it’s slightly better.
AI is a great rubber duck at times too. I like being able to bounce ideas around and see code samples in a sort of evolving discussion. Yet AI starts to show its weaknesses here, even as context windows and model quality have evidently ballooned. This is where real value would exist for me, but progress seems slowest.
When I get an AI to straight up generate code for me I can’t help but be afraid of it. If I knew less I think I’d mostly be excited that working code is materializing out of the ether, but my experience so far has been that this code is not what it appears to be.
The author’s description of ‘dissonant’ code is very apt. This code never quite fits its purpose or context. It’s always slightly off the mark. Some of it is totally wrong or comes with crazy bugs, missed edge cases, etc.
Sure, you can fix this, but it feels a bit too much like using the wrong tool for the job and then correcting it after the fact. Worse still, in the context of learning you're getting false-positive signals all the time that X or Y works (the code ran!!), when in reality it's terrible practice, or not actually working for the right reasons, or not doing what you think it does.
The silver lining of LLMs and education (for me) is that they demonstrated something to me about how I learn and what I need to do to learn better. Ironically, this does not rely on LLMs at all, but almost the opposite.
Is this "AI is good" or "Google is shit" or "Web is shit and Google reflects that"?
This is kind of an important distinction. Perhaps I'm viewing the past through rose-tinted glasses, but it feels like searching for code stuff was way better back about 2005. If you searched and got a hit, it was something decent as someone took the time to put it on the web. If you didn't get a hit, you either hit something predating the web (hunting for VB6 stuff, for example) or were in very deep (Use the Source, Luke).
Hmmm, that almost sounds like we're trying to use AI to unwind an Eternal September brought on by StackOverflow and Github. I might buy that.
The big problem I have is that AI appears to be polluting any remaining signal faster than AI is sorting through the garbage. This is already happening in places like "food recipes" where you can't trust textual web results anymore--you need a secondary channel (either a prep video or a primary source cookbook, for example) to authenticate that the recipe is factually correct.
My biggest fear is that this has already happened in programming, and we just haven't noticed yet.
I started my career around 2005 and honestly, I did great finding what I needed without an LLM. I recall being absolutely amazed with how broad and deep the reservoir of knowledge was, and as my skills in searching improved my results generally got better.
And then they didn’t. It has gotten worse over time now, in part due to Google itself and in part because of how the Internet tailors content to Google. Getting a Quora link in response to a technical question is a crazy farce. Stuff like that didn’t really happen the same way 20 years ago, much like the Pinterest takeover of Google images.
The signal pollution also seems like a huge problem to me. I’ve doubled down on creating real content in the form of writing and software in an attempt to kind of rekindle and revive the craft of, I don’t know, ‘caring enough to do things yourself’ in my life. I miss it the more I read generated text or see generated images and video. I like LLMs and other generative AI, I see the value, but their rapid takeover is seriously disheartening. It just makes me want to make real things more.
It's an "I can ask it questions based on the results, and/or ask it to tweak things, and/or ask it dumb what-if questions, which I can't do on the web" kind of good.
Honestly, one of the major pluses of LLMs is that they are immediate and interactive.
>Hmmm, that almost sounds like we're trying to use AI to unwind an Eternal September brought on by StackOverflow and Github. I might buy that.
I mean, what's the alternative? You're not going back to web of 90s (and despite nostalgia, web of 90s was fairly bad).
>you can't trust textual web results anymore--you need a secondary channel
I wonder how good LLMs would be at verifying other LLMs that are trained differently - eg sometimes I switch endpoints in LibreChat midpoint during a problem after I am satisfied with an answer to a problem, and ask a different LLM to verify everything. It's pretty neat at catching tidbits.
But after a year of autocomplete/code generation, I chucked that out the window. I switched it to a command (ctrl-;) and hardly ever use it.
Typing is the smallest part of coding but it produces a zen-like state that matures my mental model of what's being built. Skipping it is like weightlifting without stretching.
Or I'll go to great pains to be explicit about what I want (no code snippets unless I ask for them specifically, responses hundreds of lines long with dozens of steps it wants me to take), and for a little while it does that, and then boom, back to barfing out code snippets.
People talk about this tool as some kind of miracle worker and to me while it is helpful it is also a source of major frustration for me, because it cannot do these most basic things. When I hear about people talking about how amazing LLMs are, I'm extremely confused. What am I doing wrong? I really would like to know.
There was a time I really liked going through books and documentation, learning and running the code, etc., but those days are gone for me. I prefer to enjoy my free time and go to the gym now.
Some may ruminate and pontificate and lament the loss of engineering dignity, maybe even the loss of human dignity.
But some will realize this is an inevitable result of human nature. They accept that the minds of their fellow men will atrophy through disuse, that people will rapidly become dependent and cognitively impaired. A fruitful stance is to make an effort to profit from this downfall, instead of complaining impotently. See https://news.ycombinator.com/item?id=41733311
There's also an aspect of tragic comedy. You can tell that people are dazzled by the output of the LLM, accepting its correctness because it talks so smart. They have lost the ability to evaluate its correctness, or will never develop said ability.
Here is an example from yesterday. This is totally nonsensical and incorrect yet the commenter pasted it into the thread to demonstrate the LLM's understanding: https://news.ycombinator.com/item?id=41747089
Grab your popcorn and enjoy the show. "Nothing stops this train."
I don't know where your interpretation came from. Perhaps you're an LLM and you hallucinated it? :)
By all means urge folks to learn the traditional, arguably better, way but also teach them to use the tools available well and safely. The tools aren't going away and the tools will continue to improve. Endeavour to make coders who use the tools well to produce valuable well written code 2x, 5x, 8x, 20x the amount of code as those of today.
I hear this so often, that I have to reply. It's a bad argument. You do need to learn longhand math - and be comfortable with arithmetic. The reason given was incorrect (and a bit flippant), but you actually do need to learn it.
Anyone in any engineering, or STEM based field needs to be able to estimate and ballpark numbers mentally. It's part of reasoning with numbers. Usually that means mentally doing a version of that arithmetic on rounded version of those numbers.
Not being comfortable doing math, means not being able to reason with numbers which impacts every day things like budgeting and home finances. Have a conversation with someone who isn't comfortable with math and see how much they struggle with intuition for even easy things.
The reason to know those concepts is because basic math intuition is an essential skill.
But... this applies to engineering and/or webdev too. You can't just expect to copy-paste a solution limited to 4096 output tokens (or whatever) and have it work in the huge system you have at your job, which the LLM has zero context of.
Smaller problems, sure, but they're also YMMV. And honestly if I can solve smaller irritating problems using LLMs so I can shift my focus to more annoying, larger tasks, why not?
What I am saying is that you also do need to know fundamentals of webdev to use LLMs to do webdev effectively.
That's a shitty argument, and it wasn't even true back in the day (cause every engineer had a computer when doing their work).
The argument is that you won't develop a mental model if you rely on the calculator for everything.
For example, how do you quickly make an estimate if the result you calculated is reasonable, or if you made an error somewhere? In the real world you can't just lookup the answer, because there isn't one.
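As an illustration (the numbers here are arbitrary), a mental round-off catches slipped digits before any calculator does:

```javascript
// Mental check: 412 × 49 ≈ 400 × 50 = 20,000, so an exact answer
// far from ~20,000 (say 2,018 or 201,880) signals an error somewhere.
const exact = 412 * 49;     // 20188
const estimate = 400 * 50;  // 20000
const relativeError = Math.abs(exact - estimate) / exact;
console.assert(relativeError < 0.05); // the ballpark is within 5%
```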
If you have an engineering culture at your company of rubber-stamp reviews, or no reviews at all, change that culture or go somewhere better.
But I realized a few things. While they are phenomenally great for starting new projects and on small codebases:
1) one needs to know programming/software engineering in order to use these well. Otherwise, blind copying will hurt, and you won't know what's happening when the code doesn't work
2) production code is a whole different problem that one will need to solve. Copy pasters will not know what they don’t know and need to know in order to have production quality code
3) Maintenance of code, adding features, etc is going to become n-times harder the more the code is LLM generated. Even large context windows will start failing, and hell hallucinations may screw up without one even realizing
4) debugging and bug fixing, related to maintenance above, is going to get harder.
These problems may get solved, but till then:
1) we’ll start seeing a lot more shitty code
2) the gap between great engineers and everyone else will become wider
The truth of the matter is: yes, organisations are likely to see less uniformity in their codebases, but issues are likely to be more isolated and less systemic. More code will also be pushed faster. So yes, there is some additional complexity.
However, as they say, if you can't beat 'em, join 'em. The easiest way to stay on top of this will be to use LLM's to review your existing codebase for inconsistencies, provide overviews and commentary over how it all works, basically simplifying and speeding up working with this additional complexity. The code itself is being abstracted away.
We’ll have a resurgence of “edge-cases” and all kinds of security issues.
LLMs are a phenomenal Stackoverflow replacement and better at creating larger samples than just a small snippet. But, at least at the moment, that’s it.
This one feels like it could be true, but only in terms of the number of lines of code, because more code will be written now that AI makes generation faster.
But so much of the code I’ve seen is shitty, I can’t believe we can get materially worse in terms of percentage. Especially because LLM stuff is often well commented to what it’s attempting to do.
Test-driven development can inherently be cycled until correct (that's basically equivalent to what a Generative Adversarial Network does under the hood anyhow).
I heard a lot of tech shops gutted their QA departments. I view that as a major error on their part, provided the QA folks are current with modern tooling (not only GenAI) and not trying to do everything manually.
Blackbox QA was entirely gutted, leaving only some whitebox QA. Their titles were changed from QA engineer to software engineer. Devs were supposed to do TDD and that's it, and there's a fundamental issue there that people don't even seem to realize.
Anyway, we digress.
What do you mean by that?
I’m assuming that an LLM will have to ingest that entire code base as part of the prompt to find the problem, refactor the code, add features that span edits across numerous files.
So, at some point, even the largest context window won’t be sufficient. So what do you do?
Perhaps a RAG of the entire codebase, I don’t know. Smarter people will have to figure it out.
But of course we don't need to regulate this space. Just let it go, all in wild west baby.
Now a few more years down the line, you might find yourself in a mess and productivity grinds to a halt. Most of the decision makers who caused this situation are typically not around anymore at that point.
https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml...
https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Ve...
Going through the actual real-world issues of web development with an LLM on the side that you can query about any issue is infinitely more valuable than taking a course in web development, IMO, because you actually learn how to DO the things, instead of getting a toolbox half of whose items you never use and a quarter of which you have no idea how to use. I strongly support learning by doing, and I also think the education system should be changed in support of that idea.
Better: use an LLM, get something working, realize it doesn't work _exactly_ as you need it to, then refer to docs. Beginners will need to learn this, however. They still need to learn to think. But practical application and a true desire to build will get them to where they need to be.
For a period of time there was a segment of development cleaning up this slop or just redoing it entirely.
The AI-generated slop reminds me of that era.
25 years ago it was font tags and table layouts, now it’s a styled components div soup generated by 2-3 orders of magnitude more code, in a sprawling NPM dependency tree that takes several minutes to install on modern hardware.
It’s all crap once you get away from the simplest prototype and single-person project.
My experience is that React is pretty much standard these days. People create new frameworks still because they're not fully satisfied with the standard, but the frontend churn is basically over for anyone who cares for it to be. The tooling is mature, IDE integration is solid, and the coding patterns are established.
For databases, Postgres. Just Postgres.
If you want to live in the churn you always can and I enjoy following the new frameworks to see what they're doing differently, but if you're writing this live in 2024 and not stuck in 2014 you can also just... not?
Perhaps people using it just think this is normal and required. So yes the one true modern web framework has been anointed, but it is better than what came before or just what everyone else is using?
Adding generative AI to the mix will just make it worse, as the code generated will be an average of all the below-average code out there, and the groupthink will be further encouraged.
If you're looking for a sane tech stack there are plenty of languages to use which are not javascript and plenty of previous frameworks to look at.
Very little javascript is needed for a useful and fluid front-end experience and the back end can be whatever you like.
Web development is full of arbitrary, frustrating nonsense, layered on and on by an endless parade of contributors who insist on reinventing the wheel while making it anything but round. Working with a substantial web codebase can often feel like wading through a utility tunnel flooded with sewage. LLMs are actually a fantastic hot blade that cuts through most of the self-inflicted complexities. Don't learn webpack, why would you waste time on that. Grunt, gulp, burp? Who cares, it's just another in a long line of abominations careening towards a smouldering trash heap. It's not important to learn how most of that stuff works. Let the AI bot churn through that nonsense.
If you don't have a grasp on the basics, using an LLM as your primary coding tool will quickly leave you with a tangle of incomprehensible, incoherent code. Even with solid foundations and experience, it's very easy to go just a little too far into the generative fairytale.
But writing code is just a small part of software development. While reading code doesn't seem to get talked about as much, it's the bread and butter of any non-solo project. It's also a very good way to learn -- look at how others have solved a problem. Chances that you're the first person trying to do X are infinitesimally small, especially as a beginner. Here, LLMs can be quite valuable to a beginner. Having a tool that can explain what a piece of terse code does, or why things are a certain way -- I would've loved to have that when I was learning the trade.
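For example, a beginner might paste a terse one-liner like the following (illustrative, not from the original thread) and ask an LLM to unpack it:

```javascript
// Terse: sums an array via reduce, with an initial accumulator of 0.
const sum = (xs) => xs.reduce((acc, x) => acc + x, 0);

// What an explanation would unpack: reduce walks the array carrying
// `acc` forward, and the explicit 0 makes empty arrays return 0
// instead of throwing.
console.assert(sum([1, 2, 3]) === 6);
console.assert(sum([]) === 0);
```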
I personally have banned myself from using any in editor assistance where you just copy the code directly over. I do still use chatGPT but without copy pasting any code, more along the lines of how I would use search.
I find supermaven helps keep me on track because its suggestions are often in line with where I was going, rather than branching off into huge snippets of slightly related boilerplate. That’s extremely distracting.
Just had a glimpse at supermaven and not sure why that would be better, the site suggest it is a faster copilot.
All the LLM is doing is helping people make the same mistakes... faster.
I really doubt that the LLM is the root cause of the mistake, because (pre LLM) I've come across a lot of similar mistakes. The LLM doesn't magically understand the problem; instead a noob / incompetent programmer misapplies the wrong solution.
I once had a newcomer open up a PR that completely bypassed the dependency injection system.
> Vanilla JavaScript loaded in via filesystem APIs and executed via dangerouslySetInnerHTML
I wish I had more context on this one, it looks like someone is trying to bypass React. (Edit) Perhaps they learned HTML, wanted to set the HTML on an element, and React was getting in the way?
> API calls from one server-side API endpoint to another public API endpoint on localhost:3000 (instead of just importing a function and calling it directly)
I once inherited C# code that, instead of PInvoking to call a C library, pulled in IronPython and then used the Python wrapper for the C library.
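The localhost round-trip anti-pattern quoted above boils down to something like this (all names hypothetical):

```javascript
// Anti-pattern: a server-side handler calling its own process over HTTP,
// adding a network hop, serialization, and a hard-coded port dependency.
async function getUserSlow(id) {
  const res = await fetch(`http://localhost:3000/api/users/${id}`);
  return res.json();
}

// Better: import the function and call it directly, in-process.
function findUser(id) {
  return { id, name: "example" }; // stand-in for the real lookup
}

function getUserFast(id) {
  return findUser(id); // same result, no HTTP round trip
}

console.assert(getUserFast(7).id === 7);
```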
> LLMs are useful if you already have a good mental model and understanding of a subject. However, I believe that they are destructive when learning something from 0 to 1.
Super useful if you have code in mind and you can get an LLM to generate that code (eg, turning a 10 minute task into a 1 minute task).
Somewhat useful if you have a rough idea in mind, but need help with certain syntax and/or APIs (eg, you are an experienced python dev but are writing some ruby code).
Useful for researching a topic.
Useless for generating code where you have no idea if the generated code is good or correct.
IME there is a lot more arcana and trivia necessary to write frontend/web applications than most other kinds of software, mostly because it’s both regular programming and HTML/CSS/browser APIs. While you can build a generalized intuition for programming, the only way to master the rest of the content is through sheer exposure - mostly through tons of googling, reading SO, web documentation, and trial and error getting it do the thing. If you’re lucky you might have a more experienced mentor to help you. And yes, there are trivia and arcana needed to be productive in any programming domain, but you can drop a freshly minted CS undergrad into a backend engineering role and expect them to be productive much faster than with frontend (perhaps partly why frontend tends to have a higher proportion of developers with non-CS backgrounds).
It doesn’t help that JavaScript and browsers are typically “fail and continue”, nor that there may be several HTML/CSS/browser features all capable of implementing the same behavior but with caveats and differences that are difficult to unearth even from reading the documentation, such as varying support across browsers or bad interactions with other behavior.
LLMs are super helpful dealing with the arcana. I’m recently writing a decent amount of frontend and UI code after spending several years doing backend/systems/infra - I am so much more productive with LLMs than without, especially when it comes to HTML and CSS. I kind of don’t care that I’ll never know the theory behind “the right way to center a div” - as long as the LLM is good enough at doing it for me why does it matter? And if it isn’t, I’ll begrudgingly go learn it. It’s like caring that people don’t know the trick to check “is a number even” in assembly.
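(For the curious: that assembly trick is just testing the lowest bit, which works the same way in JS.)

```javascript
// An even binary number always has a 0 in its lowest bit,
// so the check is a single AND against 1 instead of a division.
const isEven = (n) => (n & 1) === 0;

console.assert(isEven(42));
console.assert(!isEven(7));
console.assert(isEven(0));
console.assert(!isEven(-3)); // two's complement keeps the low bit set
```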
I always wanted to program (and deeply understand) the web and could not. I bought books and videos, I went to courses with real people, but I could not progress. I had limited time, and there were so many different things, like CSS and JS and HTML and infinite frameworks, that you had to learn all at once.
Thanks to ChatGPT and Claude I have understood web development, deeply. You can ask both general and deep questions and it helps you like no teacher could (the teachers I had access to).
Something I have done is creating my own servers to understand what happens under the hood. No jQuery teacher could help with that. But ChatGPT could.
AI is a great tool if you know how to use it.
Also ask yourself this the next time you feel compelled to just take an AI solution: what value are you providing, if anyone can simply ask the AI for the solution? The less your skills matter, the more easily you'll be replaced.
If you delegate a task to people working under you, would it be that different? They can also learn and replace you. Maybe even with more productivity since they may be willing to use AI.
[0] which isn't that impressive, as I've done very little assembly.
I use LLMs sometimes to understand a step by step mathematical process (this can be hard to search google). I believe getting a broad idea by asking someone is the quickest way to understand any sort of business logic related to the project.
I enjoyed your examples, and maybe there should be a dedicated site just for examples of web-related code where an LLM generated the logic. The web changes constantly, and I wonder how these LLMs will keep up with the specs, specific browsers, frameworks, etc.
And I have this to say: any impediment to learning web development is probably a good thing insofar as the most difficult stumbling block isn't the learning at all, but the unlearning. The web (and its tangential technologies) are not only ever-changing, but ever-accelerating in their rate of change. Anything that helps us rely less on what we've learned in the past, and more on what we learn right in the moment of implementation, is a boon to great engineering.
Every one of the greatest engineers I've worked with doesn't actually know how to do anything until they're about to do it, and they have the fitness to forget what they've learned immediately so that they have to look at the docs again next time.
LLMs are lubricating that process, and it's wonderful.
To be fair, having non programmers learn web development like that is even more problematic than using LLMs. What about teaching actual web development like HTML + CSS + JS, in order to have the fundamentals to control LLMs in the future?
I mean, a lot of effort has gone into making poorly formed HTML work, and JavaScript has some really odd quirks that will never be fixed because of backwards compatibility, and every computer runs a slightly different browser. True mastery of such a system sounds like a nightmare. Truly understanding different browsers, and CSS, and raw DOM APIs, none of this feels worth my time. I've learned Haskell even though I'll never use it because there are useful universal ideas in Haskell I can use elsewhere. The web stack is a pile of confusion; there's no great insight that follows from learning how JavaScript's if-statements work, just more confusion.
If there was ever a place where I would just blindly use whatever slop a LLM produces, it would be the web.
So he's not wrong: you still have to ask the right questions. But with later models that reason about what they do, this could become a non-issue sooner than those now breathing a sigh of relief think.
We are bound to a maximum of around eight items of working memory in our brains; a machine is not. Once AI builds a structure graph like Wikidata next to the attention vectors, we are so done!
I do write my code myself, but LLMs offer help: less with suggesting code and more with comments, improvements, or catching common errors. They aren't entirely reliable, so you need to know what you are doing anyway.
That said, there is little investment in new developers, and some rely on AI too much. Companies need to invest more in training across all engineering domains. This is where most of a company's value resides.
I’m sure this is true today, but over time I think this will become less true.
Additionally, LLMs will significantly reduce the need for individual humans to use a web browser to view advertisement-infested web pages or bespoke web apps that are difficult to learn and use. I expect the commercial demand for web devs is going to slowly decline for these college-aged learners as the internet transitions, so maybe it’s ok if they don’t become experts in web development.
https://web.archive.org/web/20080405212335/http://webmonkey....
This all sounds on par with the junior-level developers that I feel AI has pretty quickly replaced.
I still feel sad though :( how are people meant to get good experience in this field now?
This is somewhat analogous to learning arithmetic but just using a calculator to get the answer. But coding is more complicated, at least in terms of the sheer number of concepts.
Yet we don't ban calculators from the classroom; we just tell students to use them mindfully. The same should apply to LLMs.
I wish I had had LLMs when I was learning to code. But you do need the desire to understand how something really works. And the pain of debugging something trivial for hours does help retention :).
If I have a structured code base and understand the patterns and the errors to look out for, something like Copilot is useful for banging out code faster. Maybe the frameworks suck, or the language could be better and require less code, but eh. A million dollars would be nice to have too.
But I do notice that colleagues use it to get some stuff done without understanding the concepts. Or in my own projects where I'm trying to learn things, Copilot just generates code all over the place I don't understand. And that's limiting my ability to actually work with that engine or code base. Yes, struggling through it takes longer, but ends up with a deeper understanding.
In such situations, I turn off the code generator and at most, use the LLM as a rubber duck. For example, I'm looking at different ways to implement something in a framework and like A, B and C seem reasonable. Maybe B looks like a deadend, C seems overkill. This is where an LLM can offer decent additional inputs, on top of asking knowledgeable people in that field, or other good devs.
I use GPT all the time. But I do very little learning from it. GPT is like having an autistic 4-year-old with a vast memory as your sidekick. It can be like having a superpower when asked the right questions. But it lacks experience. What GPT does is let you get from some point As to other point Bs faster. I work in quite a few languages. I like that when I haven't done Python for a few days, I can ask "what is the idiomatic way to flatten nested collections in Python again?". I like that I can use it to help me prototype a quick idea. But I never really trust it. And I don't ship that code until I've learned enough to vouch for it myself or can ask a real expert about what I've done.
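For the record, the answer to that flattening question depends on the nesting: one level is just a comprehension, while arbitrary depth usually gets a small recursive generator. A sketch of one common idiom (not the only one):

```python
from collections.abc import Iterable

def flatten(items):
    # Recursively yield leaf values, treating strings and bytes as
    # leaves rather than iterables (a classic gotcha, since they are
    # iterable and would otherwise recurse forever).
    for item in items:
        if isinstance(item, Iterable) and not isinstance(item, (str, bytes)):
            yield from flatten(item)
        else:
            yield item

print(list(flatten([1, [2, [3, 4]], (5,)])))  # [1, 2, 3, 4, 5]

# One-level flattening is usually just a comprehension:
one_level = [x for sub in [[1, 2], [3]] for x in sub]  # [1, 2, 3]
```

This is exactly the kind of answer that's easy to verify once it's in front of you, which is what makes the "ask, then vouch for it yourself" workflow safe.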
But for young programmers, who feel the pressure to produce faster than they can otherwise, GPT is a drug. It optimizes getting results fast. And since there is very little accountability in software development, who cares? It's a short term gain of productivity over a long term gain of learning.
I view the rise of GPT as an indictment of how shitty the web has become, how sad the state of documentation is, and what a massive sprint of layering crappy complicated software on top of crappy complicated software has wrought. Old timers mutter "it was not always so." Software efforts used to have trained technical writers producing the documentation. "Less is more" used to be a goal of good engineering. AI tools will not close the gap in having well-written, concise documentation. They will not simplify software so that the mental model needed to understand it is more approachable. But they do give us a hazy approximation of what the current internet content has to offer.
(I mean no offense to those who have autism in my comment above, I have a nephew with severe autism, I love him dearly, but we do adjust how we interact with him)
If you use AI to teach you HTML / programming concepts first, then support you using them, that is learning.
Having AI generate an answer that then doesn't satisfy you usually means the prompt could use improvement. In that case, the prompter (and perhaps the author) may not know the subject well enough.
It's like using a calculator for your math homework. You still need to understand the concepts, but the details can be offloaded to a machine. I think the difference is that the calculator is always correct, whereas ChatGPT... not so much.
As it is, you're getting a smeared average of every bit of similar code it was exposed to: likely wrong, inefficient, and certainly not a good tool for learning at present. Hopefully they'll improve somehow.
I think in the future it'll be pretty interesting to see how this changes regular blue collar or secretarial work. Will the next future of startups be just fresh grads looking for B2B ideas that eliminate the common person?
Folks in an academic setting particularly will sneer at those who don't build everything from first principles. Go back 20 years, and the same article would read "IDEs are an impediment to learning web development"
And if you know what you're doing, you don't need the AI.
"How tall is the Leaning Tower of Pisa?"
- "13ft tall"
"Okay well that's obviously wrong..."
>LLMs will obediently provide the solutions you ask for. If you’re missing fundamental understanding, you won’t be able to spot when your questions have gone off the rails.
This made me think: most of the time, when we write code, we have no idea (and don't really care) what kind of assembly the compiler will generate. If a compiler expert looked at the generated assembly, they’d probably say something similar: "They have no idea what they’re doing. The generated assembly shows signs of a fundamental misunderstanding of the underlying hardware," etc. I'm sure most compiled code could be restructured or optimized in a much better, more "professional" way and looks like a total mess to an assembly expert—but no one has really cared for at least two decades now.
At the end of the day, as long as your code does what you intend and performs well, does it really matter what it compiles to under the hood?
Maybe this is just another paradigm shift (forgive me for using that word) where we start seeing high-level languages as just another compiler backend—except this time, the LLM is the compiler, and natural human language is the programming language.
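The "nobody reads the compiler output" point is easy to demonstrate in Python itself: the interpreter compiles every function to bytecode that almost no one ever inspects, and the stdlib `dis` module will show it on demand:

```python
import dis

def add_one(x):
    return x + 1

# Print the "assembly" the interpreter actually runs; most Python
# programmers go years without ever looking at this listing.
dis.dis(add_one)

# Programmatic access to the opcode names behind the function
# (exact names vary across interpreter versions).
ops = [ins.opname for ins in dis.get_instructions(add_one)]
print(ops)
```

We already trust one opaque translation layer; the open question is whether an LLM is a reliable enough second one.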
Also, you may not care now what assembly instructions your function gets compiled down to, but someday you might care a great deal, if for example you need to optimize some inner loop with SIMD parallelism. To do these things you need to be able to reliably control the output using source code. LLMs cannot do this.
Lack of debuggability is a good argument. Maybe it's only a problem if you want a human to debug the generated code? How about let an LLM iteratively run the code and figure out where it goes wrong by itself (o1 style).
The example you quoted could trigger a DDoS if a page using that code got popular.
LLMs are a tool. If you can't make them work for you or learn from using them, sorry, but it's just a skill/motivation issue. If the interns are making dumb mistakes, then you need to guide them better: chop the task into smaller segments and contextualize it for them as needed.