One thing I loved about doing technical enterprise sales is that I’d meet people doing work I knew little or nothing about. They didn’t really understand what we offered, but they had a problem they could explain, and our offering could help with it.
They’d have deep technical knowledge of their domain and we had the same in ours, and there was just enough shared knowledge at the interface between the two that we could have fun and useful discussions. Lots of mutual respect. I’ve always enjoyed working with smart people even when I don’t really understand what they do.
Of course there were also idiots, but generally they weren’t interested in paying what we charged, so that was OK.
> Helping a customer solve challenges is often super rewarding, but only when I can remove roadblocks for customers who can do most of the work themselves.
So I feel a lot of sympathy for the author — that would be terribly soul sucking.
I guess generative AI has increased the number of "I have a great idea for a technical business, I just need a technical co-founder" people who think that an idea is 90% of it and have no idea what technical work actually involves.
Before that, I worked in a digital print organisation with a factory site. This factory did huge volumes on a daily basis, and it was full of machines. They had built up a tech base over years, decades, and it was hyper-optimised - woe betide any dev who walked into the factory thinking they could see an inefficiency that could be refactored out. It happened multiple times - quite a few devs, myself included, learned this lesson the hard way - on rare occasions thousands of lines of code had to be thrown out because the devs hadn't run it past the factory first.
It's an experience I'd recommend to any dev - build software for people who are not just "users" of technology but builders themselves. It's not as "sexy" as building consumer-facing tech, but it is so much more rewarding.
if [ -z "${Var}+x" ]
I can see what the author was trying to do, but the code is just wrong.
I don't mind people not knowing stuff, especially when it's essentially Bash trivia. But what broke my heart was when I pointed out the problem and linked to the documentation, but received the response "I don't know what it means, I just used Copilot", followed by him just removing the code.
What a waste of a learning opportunity.
There were many times in my career when I had what I expected to be a one-off issue that needed a quick solution, and I would look for a quick and simple fix with a tool I'm unfamiliar with. I'd say that 70% of the time the thing "just works" well enough after testing; 10% of the time it doesn't quite work, but it feels like a promising approach and I'm motivated to learn more in order to get it working; and the remaining 20% of the time I discover that it's just significantly more complex than I thought it would be, and prefer to abandon the approach in favor of something else. I never regretted the latter.
I obviously lose a lot of learning opportunities this way, but I'm also sure I saved myself from going down many very deep rabbit holes. For example, I accepted that I'm not going to try to master sed & awk - if I see it doesn't work with a simple invocation, I drop into Python.
It is not nonsense. You use that expansion - written correctly as `${Var+x}`, with the `+x` inside the braces - when you want to check whether a variable is set at all (as opposed to set to an empty string), which is an extremely common need.
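For anyone curious, the behaviour is easy to verify by driving a shell from Python. This is just an illustrative sketch: the `sh_reports` helper is made up, and `/bin/sh` is assumed to exist (true on typical Unix systems).

```python
import subprocess

# The idiom under discussion: ${Var+x} expands to "x" when Var is set
# (even to the empty string) and to nothing when it is unset, so
# [ -z "${Var+x}" ] is true only for an unset variable. Putting the +x
# outside the braces, as in "${Var}+x", breaks this entirely.
def sh_reports(setup):
    script = setup + '; if [ -z "${Var+x}" ]; then echo unset; else echo set; fi'
    # env={} ensures a stray Var in our own environment can't leak in
    out = subprocess.run(["/bin/sh", "-c", script],
                         capture_output=True, text=True, env={})
    return out.stdout.strip()

print(sh_reports("unset Var"))   # unset
print(sh_reports('Var=""'))      # set  (empty, but set)
print(sh_reports("Var=hello"))   # set
```
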
I think the problem with "AI" code is that many people have an almost religious belief in it. There are weirdos on the internet who say that AGI is a couple of years away. And by extension, current AI models are seen as something incapable of making a mistake when writing code.
His clients were usually older small business owners that just wanted a web presence. His rate was $5000/site.
Within a few years, business dried up and he had to do something completely different.
He also hosted his own SMTP server for clients. It was an old server on his cable modem in a dusty garage. I helped him prevent spoofing/relaying a few times, but he kept tinkering with the settings and it would happen all over again.
Arguably the term for a bad idea that works is "good idea"
Asking it to refactor / fix it made it worse because it'd get confused and merge them into a single variable - the problem was they had slightly different uses, which broke everything.
I had to step through the code line by line to fix it.
Using Claude is still faster for me, as it'd probably take me a week to write the code in the first place.

BUT there are probably a lot of traps like this hidden everywhere, and they will rear their ugly heads at some point. Wish there were a good test generation tool to go with the code generation tool...
Having mistakes in context seems to 'contaminate' the results and you keep getting more problems even when you're specifically asking for a fix.
It does make some sense as LLMs are generally known to respond much better to positive examples than negative examples. If an LLM sees the wrong way, it can't help being influenced by it, even if your prompt says very sternly not to do it that way. So you're usually better off re-framing what you want in positive terms.
I actually built an AI coding tool to help enable the workflow of backing up and re-prompting: https://github.com/plandex-ai/plandex
https://news.ycombinator.com/item?id=40922090
LSS: metaprogramming tests is not trivial but straightforward, given that you can see the code, the AST, and associated metadata, such as generating test input. I've done it myself, more than a decade ago.
I've referred to this as a mix of literate programming (noting the traps you referred to, and their anachronistic quality relative to both the generated tests and the generated code under test) and human-computer sensemaking: what the AI sees is often, at best, a gap in its symbolic representation - imaginary, not real - so it requires iterative correction to hit its user's target, just like a real test team interacting with a dev team.
In my estimation, it's actually harder to explain than it is to do.
True, but the anecdote doesn't prove the point.
It's easy to miss that kind of difference even if you wrote the code yourself.
The developer in the story had no idea what the code did, hence they would not have written it themselves, making it impossible for them to “miss” anything.
Yes.
AI as a faster way to type: Great!
AI as a way to discover capabilities: OK.
Faster way to think and solve problems: Actively harmful.
Artificial Incompetence indeed!
Hear hear!
I feel like genAI is turning devs from authors to editors. Anyone who thinks the latter is lesser than the former has not performed both functions. Editing properly, to elevate the meaning of the author, is a worthy and difficult endeavor.
Maybe you don't want to pursue a career in software, but anyone can spend a week learning Python or JavaScript. I suspect/hope a lot of these people are just kids who haven't gotten there yet.
On the other hand, I have a wealth of other general computer and protocol knowledge and have been working circles around most of my coworkers since day one. In the typical tech startup world I _rarely_ encounter coworkers with truly deep knowledge outside of whatever language they work in.
IMO the skill isn't about being able to "write code", it's about being able to model how things work.
So, if you're solo dev'ing, you can get away with making things work with what you've learned in a week. You just wouldn't be hired by anyone else of a serious nature. So it just depends on the individual and the projects being worked on.
Still the most insane thing I've seen, but I know there are a lot of kids out of college who got used to gen AI for code writing who put out a lot of this kind of code. Also, coincidentally, we haven't hired any US college interns in about 3 years or so.
You just don't know how much they actually learned about programming as a discipline in its own right and it very well could be functionally zero. I've seen recent CS grads who didn't know how to use git, didn't know how to split code across multiple files or understand why you would even want to.
I think there's a fairly sound argument for these being different degrees - a certain kind of researcher doesn't necessarily need these skills. But since they aren't, there's just a huge range in how schools reconcile it.
I feel like a week isn't anywhere near enough, but depending on what you want to do, it gets you started tinkering. Ironically, I wish I had started working on embedded with microcontrollers rather than starting with web, purely because there isn't space there for absurd abstractions.
On web even the DOM API is a huge abstraction over rendering calls to OpenGL/DirectX/Vulkan and I never could grok what is happening with the DOM API until I played with the underlying tech and learnt about trees and parsers and how that would be stored.
I still use the DOM and love the abstraction, but sometimes I just wish I could use an immediate mode approach instead of the retained mode that the DOM is...
Someone with a week of knowledge, or even someone who has spent 10 years building React, may not understand half of that unless they have actively tried to learn it. They might have an idea if they had a formal education, but not a self-taught programmer. They have been building houses out of Lego blocks; if you give them mortar and bricks, you are setting them up for failure.
At least computers don't think it's their god-given right to treat you like garbage.
Games tend to attract young people (read: beginners), but at the same time game programming's barrier to entry is pretty high, with maths and physics, low-level graphics programming, memory management, low-level languages and dependencies, OOP... It's almost obvious that this should be the case: every kid interested in coding I've talked to wants to do something with games.
In fact, there are so many beginner-friendly game engines out there for most languages that I am convinced we should start using games as the entry point for teaching programming. It is a beautifully self-contained domain.
How many arguments have we heard here along the lines of "why teach algorithms? Universities should be teaching _insert_fad_technology_of_the_day_". Big O and time complexity are a special favourite for people to pick on and accuse of being useless or the like. You see it in arguments around SQL vs document databases, in people not being willing to recognize that their for loops are in fact performing joins, and in people unwilling to recognize that yes, they have a data schema even if they don't write it down.
So I'm not surprised at all that people would use AI as a substitute for learning. Those same people have likely gotten by with stackoverflow copypasta before gen AI came about.
I mess around in Godot and Game Maker and can write some shitty code there, but I’ve never written a line of code for work. I just like messing around for fun.
Do you know how many botnets exist because of shit game engines that are easily exploited, connected peer-to-peer, etc.?
Lovely frameworks and engines... but really harmful if you ask me. People are unwitting victims of other peoples hobby projects.
You need to be responsible in this day and age. If you don't want to do the due diligence, or are strapped for resources (time, knowledge, etc.), then it's best to, for example, make a singleplayer game, or make it LAN-only and disallow any IP addresses in your game that aren't in internal/private ranges.
Do have a hobby, and do have fun, but do so responsibly. You wouldn't want one of your works of love and passion to end up hurting someone, would you? Simple steps can be taken to protect the consumers of your craft from the vicious world out there. It's sad that this is needed, but that's no excuse not to do it. Humans should be better, but they are not.
Before LLMs you'd probably have to reach for learning Python or Javascript sooner, at least if StackOverflow didn't have the right code for you to copy/paste, but I expect anyone who sticks with it will get there eventually either way.
A mere ten or so years ago I only wrote firmware in C and MacOS apps in objective-c. That was my world and it was all I knew, and all I’ve done for a long time.
Then something happened. Small startup. Website needs urgent fix and the web guy is MIA, so what the hell, I can take a stab at it.
Literally had no idea where to start. I didn’t know about npm, minification, or transpiling, much less actual testing and deployment. Could not make sense of anything anywhere. Hopelessly lost. I even grepped the JavaScript for “void main()” out of desperation. Just ridiculous.
The result is people who have no technical background, no real interest in it either, and no basic framework to start from, learning to use a specific set of tools with only a very basic understanding of programming.
I cringe every time they mention using the bots, but luckily it has been controllable.
If you're surprised by reality, that says something about your mental model, not about reality. People don't want to "learn programming", they want to "make a thing". Some people learn programming while making a thing, but why learn about how the sausage is made when all you want is to eat it?
Hahaha, I wish that were true, but it’s not. Lots of people want to learn programming for learning and programming’s sake, or because programming is more lucrative than sausage making. I think I’ve worked with more programmers who care about programming itself than about making something. It’s constantly frustrating and a huge source of the over-engineering mistakes that have proliferated through engineering.
> why learn about how the sausage is made when all you want is to eat it?
Then why do sausages get made? It’s because not everyone only wants to eat them. There’s a variety of reasons people make sausages and also like eating them, from making money, to making high quality or interesting variety sausages, to being self-sufficient and not paying others for it, to learning about how it’s done. It’s been my experience that the more someone cares about eating sausages, the more likely they are to dabble in making their own.
Stop poisoning the well and then complaining that you have to drink from it.
Something like:
Some of you may use ChatGPT/Copilot to help with your coding. Please understand this service is designed for professional programmers and we cannot provide extensive support for basic coding issues, or code your app for you.
However if you do want to use ChatGPT here is a useful starting prompt to help prevent hallucinations - though they still could happen.
Prompt:
I need you to generate code in ____ language using only the endpoints described below
[describe the endpoints and include all your docs]
Do not use any additional variables or endpoints. Only use these exactly as described. Do not create any new endpoints or assume any other variables exist. This is very important.
Then give it some examples in curl / fetch / axios / python / etc.
Maybe also add some instructions to separate out code into multiple files / endpoints. ChatGPT loves to make one humungous file that’s really hard to debug.
ChatGPT works fairly well if you know how to use it correctly. I would definitely not trust it with my crypto though, but I guess if some people wanna do that, that’s up to them. May as well try and help them!
Considering that LLMs can hallucinate in different ways unpredictably, I'm not sure whether it will work in practice.
Also commonly referred to as the robustness principle: “be conservative in what you do, be liberal in what you accept from others”. An approach that is now often considered an anti-pattern, for exactly such future-compatibility reasons.
The AI of tomorrow will be trained on the output of AI today.
(Somewhere in my memory, I hear an ex-boss saying "Well that's good - isn't it?")
There might be other ways of achieving the same goal. Also, LLM hallucination is changing/improving quite rapidly.
That said, "Designed for LLM" is probably a very productive pursuit. Puts you in the right place to understand the problems of the day.
In my opinion, using code from LLMs, which might see your program come to life a bit quicker, will only increase the time needed for debugging and testing, as there might be bugs and problems in there ranging from trivial things (unused variables) to very subtle and hard-to-find logic issues which require a deeper knowledge of the libraries and frameworks cobbled together by the LLM.

Additionally, it erodes a lot of the knowledge of these things in the long run, so people will find it more and more challenging to properly do this testing and debugging phase.
Let's say you've got some REST API. Are they trying to send PATCH requests when you only support POST? Do they keep trying to add pagination or look for a cursor? Do they expect /things to return everything and /things/id to return a single one, but you have /things and /thing/1?
None of this is to say you must do whatever they are trying to do, but it's a very cheap way of seeing what introducing your API/library to a new user might be like. And, moreover, you can only ever introduce someone once, but you can go again and again with an LLM. Review the things they're doing and see whether you've actually deviated from broad principles, and whether a change or new endpoint would make things much easier.
Fundamentally you can break this down to "if there's someone who can do some coding but isn't a top class programmer, how much do I need to explain it to them before they can solve a particular problem?"
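That kind of signal is cheap to collect. A sketch, with a made-up log format and hypothetical paths, of tallying the endpoints LLM-written clients expected but didn't find:

```python
from collections import Counter

# Tally the (method, path) pairs behind your 404s to see which endpoints
# generated clients *expect* you to have. Log format is illustrative only.
def missing_endpoints(log_lines):
    counts = Counter()
    for line in log_lines:
        method, path, status = line.split()
        if status == "404":
            counts[(method, path)] += 1
    return counts.most_common()

log = [
    "GET /things 200",
    "PATCH /things/1 404",         # clients assume PATCH; you only take POST
    "PATCH /things/1 404",
    "GET /things?cursor=abc 404",  # clients keep looking for pagination
]
print(missing_endpoints(log))
# [(('PATCH', '/things/1'), 2), (('GET', '/things?cursor=abc'), 1)]
```

The most frequent entries are exactly the "what a new user expects" data the comment above describes, gathered without talking to anyone.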
The other day my friend showed me a curious screenshot where an, ahem, dev influencer showed all the courses he's done.
Problem is, just one of those six courses amounted to two average net salaries in the region and they were fairly basic.
Either the guy in question had a year's worth of runway (then why even bother getting into IT?), got into debt or... didn't actually pay for any of this and it was not disclosed.
I do my best to dissuade people from spending their hard earned money on such things, but a significant chunk unfortunately does not listen.
After being stuck on a project with a few of them recently, I've started to figure it out.
They know they are incompetent, and are afraid of being around competent people who would call them out. So they link up with other likewise incompetents to maintain a diffusion of responsibility, so the finger can't be pointed at any one person. And when the system they are building inevitably fails horribly, management can't really fire the whole team. So they bumble along and create "make work" tickets to fill time and look like something is happening, or they spend their time "fixing" things that should have never existed in the first place by layering mountains of hacks on top, rather than reassessing anything or asking for help. Rinse and repeat once the project has clearly failed or been abandoned.
Cheap version offers minimal support. (Although you still have to sift "bug reports" into my problem/ your problem.)
Standard version allows for more support (but still rate limited.)
Developer version charges for time spent, in advance.
This helps because people only expect free support if you don't explicitly offer something else. If you offer paid support, then it's reasonable and expected that there are limits on free support.
In some cases, you might be able to use this to your advantage to improve the product.
When working with third-party APIs, I've often run into situations where my code could be simplified greatly if the API had an extra endpoint or parameter to filter certain data, only to be disappointed when it turns out to have nothing of the sort.
It's possible that ChatGPT is "thinking" the same thing here; that most APIs have an X endpoint to make a task easier, so surely yours does too?
Over time I've sent in a few support tickets with ideas for new endpoints/parameters and on one occasion had the developer add them, which was a great feeling and allowed me to write cleaner code and make fewer redundant API calls.
While this is possible, I would caution with an anecdote from my ongoing side project of "can I use it to make browser games?", in which 3.5 would create a reasonable Vector2D class and then get confused and try to call .mul() and .sub() instead of the .multiply() and .subtract() that it had just created.
Sometimes it's exceptionally insightful, other times it needs to RTFM.
And I assume that's just because, y'know, ChatGPT can write sonnets and translate Korean to Swahili and whatnot. It's amazingly broad, but it's not the right tool for the comparatively narrow problem of code generation.
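To illustrate the .mul()/.multiply() mix-up from that anecdote, and one pragmatic workaround: alias the short names the model keeps reaching for. The class here is a made-up stand-in, not the actual generated code.

```python
# A minimal Vector2D like the one the model generated, plus aliases for
# the method names it later hallucinated.
class Vector2D:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def multiply(self, k):
        return Vector2D(self.x * k, self.y * k)

    def subtract(self, other):
        return Vector2D(self.x - other.x, self.y - other.y)

    # aliases so both the defined and the hallucinated names work
    mul = multiply
    sub = subtract

v = Vector2D(3, 4).mul(2).sub(Vector2D(1, 1))
print((v.x, v.y))  # (5, 7)
```

Whether you'd actually want to keep such aliases is debatable, but it stops the immediate breakage while you fix the call sites.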
A lot of businesses are going to either think they can have generative AI create all of their apps with the help of a cousin of the wife of the accountant, or they unknowingly contract a 10x developer from Upwork or the like who uses generative AI to create everything. Once they realize how well that's working out, the smart ones will start from scratch; the not-so-smart will attempt to fix it.
Develop a reputation for yourself for getting companies out of that pickle, and you can probably retire early.
I suspect this quickly will be like specializing in the repair of cheap, Chinese made desk lamps from Walmart.
If the cheap desk lamp breaks, you don't pay someone to fix it. You buy another one, and often it will be a newer, better model. That is the value proposition.
Of course, the hand crafted, high end desk lamp will be better but if you just want some light, for many use cases, the cheap option will be good enough.
The real savvy ones will use later generations of LLMs to fix the output of early ones
I can't tell in a pull request what someone wrote themselves, or to what level of detail they have pre-reviewed the AI code which is now part of the pull request before allowing it to get to me. I can tell you I don't want to be fixing someone else's AI-generated bugs though... Especially given that AI writes less DRY, more verbose code, and increases code churn in general.
[edit] unfortunately I think I need to point out the above is sarcasm. Because there really are people using AI to review AI-generated code, and they do not see the problem with this approach.
The way I see it is that OP can either complain about customers being annoying, which will happen whether or not OP does anything about his problem, or OP could proactively produce something to make their product better for the demographic they're targeting. At this point it's pretty clear that the users would rather be helped than help themselves, so meeting them where they're at is going to be more productive than trying to educate them on why they suck at both coding and asking for help (or free work).
If someone asked "I wrote some rough designs and interfaces, can you write me an app for free?", the author could easily detect it as a request for free work. But because the first part is hidden behind some ChatGPT-generated code and the second part is disguised as a request for help, the author gets tricked into doing this free work, until they detect the pattern and write a blog post about it.
So what was a difficult problem can quickly become insurmountable.
In volume, this turns into support writing the code.
I think of how the South Park movie folks sent so much questionable content to the censors that the compromise in the end let through lots of the puppet sex.
There have been multiple stories and posts here on HN about issues with AI-generated PRs for open source repos, because of people using them to game numbers or clout. This is a similar problem: you have a bunch of people using your API and then effectively trying to use you for free engineering work.
On the other hand, I look around the room I am in and it is filled mostly with "low quality" Chinese made products. While you can't compare these products to expensive, expertly crafted, high end products, there is another scaling law at play when it comes to price. The low quality Chinese products are so cheap that I don't even consider the more expensive options. When the low quality desk lamp is close enough and 20x cheaper than the well made desk lamp, there is no decision point for me.
If it breaks, I will just buy another one.
I think amateur "trading" attracts a specific brand of idiot that sits high on the left of the Dunning-Kruger curve. While taking money from idiots is a viable (even admirable) business strategy, you may want to fully automate customer service to retain your sanity.
https://www.mcgill.ca/oss/article/critical-thinking/dunning-...
> The two papers, by Dr. Ed Nuhfer and colleagues, argued that the Dunning-Kruger effect could be replicated by using random data.
I can also generate random data that looks like any distribution by carefully choosing the random distribution. What's their point?
AI is a great tool to help someone start an idea. When it goes past that, please don't ask us for help until you know what the code is doing that you just generated.
The single best thing to help someone in my place is clear and coherent documentation with working examples. It's how I learned to use Turbo Pascal so long ago, and generally the quickest way to get up to speed. It's also my biggest gripe with Free Pascal... their old help processing infrastructure binds them to horrible automatically generated naming of parameters as documentation, and nothing more.
Fortunately, CoPilot doesn't get impatient, and I know this, so I can just keep pounding away at things until I get something close to what I want, in spite of myself. ;-)
Providing an API service to customers means you will get terrible client code. Sometimes it's dumb stuff like not respecting rate limits and responding to errors by just trying again. One option, if you don't want to fix it yourself, is to partner with a consultant who will take it on, send your customers to them. Bill (or set your API access rates) appropriately.
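For the rate-limit half of that, the fix you (or the consultant) end up prescribing is usually plain exponential backoff with jitter instead of blind immediate retries. A hedged sketch, with RuntimeError standing in for an HTTP 429 response:

```python
import random
import time

# Retry a callable with exponentially growing, jittered pauses.
# RuntimeError is a stand-in for "the API said slow down" (HTTP 429).
def with_backoff(call, max_attempts=5, base=0.5):
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            # 0.5s, 1s, 2s, ... plus jitter so clients don't retry in lockstep
            time.sleep(base * (2 ** attempt) + random.uniform(0, 0.1))
```

Called as `with_backoff(lambda: client.get("/things"))` for some hypothetical client, it spaces retries out instead of hammering the API the moment an error comes back.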
Sometimes you have to fire customers. Really bad ones that cost more than they bring in are prime candidates for firing, or for introducing to your "Enterprise Support Tier".
ATMs were meant to kill banking jobs, but ah, there are more jobs in banking than ever.
The Cloud was meant to automate away tech people, but all it did was create new tech jobs. A lot of which is cleaning up after idiots who think they can outsource everything to the cloud and get burned.
LLMs are no different. The "Ideas Man" can now get from 0 to 30% without approaching a person with experience. Cleaning up after him is going to be lucrative. There are already stories about businesses rehiring graphic designers they fired, because someone needs to finish and fine tune the results of the LLM.
ATMs only handle basic teller functions and since COVID in NYC I had to change banks twice because I couldn't find actual tellers or banks with reasonable open hours. BoA had these virtual-teller only branches and the video systems were always broken (and the only option on Saturday). This was in Midtown Manhattan and my only option was basically a single branch on the other side of town 8-4 M-F.
I'm now happily with a credit union but at least since moving to the south things are generally better because customers won't tolerate not being able to deal with an actual person.
US banks, who are surprisingly behind the times as far as automation goes. Here, a lot of banks used automation to reduce the number of manual jobs needed, to the degree that many offices are now closing as everything can be done online.
And no, there is no need to visit banks here as I get the impression it is in the US. We don't even have physical checks anymore.
This is an opportunity. Add another tier to your pricing structure that provides an AI assistant for coding the API connection. A really simple Llama 3.1 that RAGs your docs. Or perhaps your docs fit in the context already, if the API is as simple as it sounds.
I'm half joking, but in most cases I found that GPT was able to fix its own code. So this would reduce support burden by like 90%.
I hate chatbot support as much as the next guy, but the alternative (hiring a programmer to work as a customer support agent on AI-generated code all day?) sounds not only cruel, but like bad engineering.
Hanging out at the DSPy discord, we get a lot of people who need help with "their" code. The code often uses hallucinated API calls, but they insist it "should work" and that our library is simply buggy.
The next decade of support business is here. Fix my "hallucination" market. Thank you, Microsoft. You did it again.
Some people will enthusiastically fix other customers' generated crap for free, let them.
In this case, you could create examples for your API in common programming languages, publish them on the product website, and even implement an automatic test that would verify that your last commit didn't break them. So, most of the non-programmer inquiries can be answered with a simple link to the examples page for the language they want.
As a bonus point, you will get some organic search traffic when people search for <what your API is doing> in <language name>.
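The automatic check could be as small as running every published example and failing CI on a non-zero exit. A sketch, assuming the examples live as standalone Python files in an examples/ directory (the layout is an assumption):

```python
import pathlib
import subprocess
import sys

# Run every example script and collect the names of those that fail.
# CI can then fail the build if this list is non-empty.
def broken_examples(example_dir):
    failures = []
    for path in sorted(pathlib.Path(example_dir).glob("*.py")):
        result = subprocess.run([sys.executable, str(path)],
                                capture_output=True, timeout=60)
        if result.returncode != 0:
            failures.append(path.name)
    return failures
```

Wire `assert not broken_examples("examples")` into your test suite and the examples page can never silently rot after an API change.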
Annoying, but a good way to weed out bad customers. Life’s too short to deal with people who feel this entitled.
> I’ve gotten a number of angry messages from customers who essentially want me to build their whole app for free.
I guess AI maybe encourages more people to think they can build something without knowing what they are doing. I can believe that for sure. There was plenty of that before AI too. Must be even worse now.
Nice. The time has come that we need to design APIs with the LLM in mind. Or, to rephrase that, regularly test that your API works with the popular LLM tools.
It's your fault really. You don't build custom software for guys having the intellectual capacity and budget of a tractor driver unless you enjoy pain.
Welcome to the life of any consultant/support agent/open source maintainer ever. AI isn't the problem here, managing expectations is.
And when you use a language that has better checks all around.
Make the type of support you are providing paid, this could be in tiers.
Outsource support as needed.
If you know how to use it, it will make you 100x more efficient.
OTOH, see script kiddies, WYSIWYG Stack Overflow C&P, etc.
It's just the way things are now.
Well, that didn't take long . . .
If there are more people able to prototype their ideas with AI, sooner or later they will need a real engineer to maintain/improve that code.
That will lead to more jobs and higher demand for SWEs.
It used to be that a 'bad' document had things 'missing', which were easy to spot and rephrase / suggest improvements.
Now a 'bad' document is 50 pages of confusing waffle instead of 5, you need to get through the headache of reading all 50 to figure out what 'sentence' the person was trying to express at the place where an autogenerated section has appeared, figure out if the condensed document is something that actually had any merit to begin with, THEN identify what's missing, and then ask them to try again.
At which point you get a NEW 50-page autogenerated document and start the process again.
F this shit.
Welcome to enterprise sales, where building custom solutions atop of your sprawling BaaS empire is exactly what you do.
Your customers have a need and you’re basically turning down an upsell opportunity that also buys you lock in to your platform.
Another way to look at it: are they all trying to build something similar or something with similar components? Maybe it’s time to expand the features of your product to reduce the amount of work they need to do.
It's a fuckin perfect hacker news bingo.
Magnificent!
It won't be flawless but if the issues are as basic as stated in this article, then it seems like a good option.
It will cost money to use the good models though. So I think the first step would be to make sure that if they ask for an invalid endpoint, the API response says so in so many words, and if they ask for an invalid property, it states that also.
Then if that doesn't give them a clue, an LLM scans incoming support requests and automatically answers them, or flags them for review. To prevent abuse, it might be a person doing it but just pressing a button to send an auto-answer.
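The invalid-endpoint response can even point at the nearest real route. A sketch with an illustrative, made-up route table:

```python
import difflib
import json

KNOWN_ROUTES = ["/things", "/things/{id}", "/orders"]  # illustrative only

def unknown_endpoint_response(path):
    # Say "no such endpoint" in so many words, and suggest the nearest
    # real route -- useful to the human, and to the LLM the error message
    # will inevitably be pasted into.
    body = {"error": f"Unknown endpoint: {path}. This API has no such route."}
    match = difflib.get_close_matches(path, KNOWN_ROUTES, n=1)
    if match:
        body["hint"] = f"Did you mean {match[0]}?"
    return 404, json.dumps(body)

status, body = unknown_endpoint_response("/thing/1")
print(status, body)
```

A response like this closes the loop: the user pastes the error back into ChatGPT, which can then correct the hallucinated endpoint on its own.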
But maybe a relatively cheap bot in a Discord channel or something connected to llama 3.1 70b would be good enough to start, and people have to provide a ticket number from that bot to get an answer from a person.
Maybe the chat bot can recognize a botched request and even suggest a fix, but what then? It certainly won't be able to convert the user's next request into a working application of even moderate complexity. And do you really want a chat bot to be the first interaction your customers have?
I think this is why we haven't seen these things take off outside of very large organizations who are looking to save money in exchange for making customers ask for a human when they need one.
But, I mean, that doesn't make sense even for humans, right? 99% of the errors I make, I can easily correct myself because they're trivial. But you still have to go through the process of fixing them, it doesn't happen on its own. Like, for instance, just now I typoed "stil" and had to backspace a few letters to fix it. But LLMs cannot backspace (!), so you have to put them into a context where they feel justified in going back and re-typing their previous code.
That's why it's a bit silly to make LLM code, try to run it, see an error and immediately run to the nearest human. At least let the AI do a few rounds of "Can you spot any problems or think of improvements?" first!
AI isn't necessarily saying "this is the one endpoint that will always be generated". Unless it is - if the customer-generated code always uses the same endpoints/properties, then it'd definitely make sense to also support those.