> Putting aside existential risks, I don't see a future where a lot of jobs don't cease to exist.
I'm personally betting on the plateau effect with LLMs. There are two plateaus I see coming that will require humans to fix no matter what we do:
1. The LLMs themselves plateau. We're already seeing new models get worse, not better, at writing code (e.g., Sonnet 3.5 seems to be better than 3.7 at coding). This could be a temporary fluke or an inherent reality of how LLMs work (which is where I tend to land).
2. Humans will plateau. First, humans themselves will see their skills atrophy as they defer more and more to AI rather than struggling to solve problems (and, by extension, learn new things). Second, humans will be disincentivized to create new forms of programming and write about them, so eventually the inputs to the LLM become stale.
Short-term, this won't appear to be true, but long-term (on the author's 10+ year scale), it will be frightening. Doubly so when systems that were primarily or entirely "vibe coded" start to break in ways that the few remaining humans responsible for maintaining them don't understand (and can't prompt their way out of).
And that's where I think the future work will be: in fixing or replacing systems unintentionally broken by the use of AI. So, you'll either be an "AI mess fixer" or, more entrepreneurially, build "artisan, hand-crafted software."
Either of those I expect to be fairly lucrative.
- "Profession", by Isaac Asimov: http://employees.oneonta.edu/blechmjb/JBpages/m360/Professio...
- "Pump Six" by Paolo Bacigalupi (the story of that title)
I see it a bit like the creator economy, where you have these maker vs consumer tranches of people.
Go to a 4-day, 24-hour work week and you will find almost everyone creating again.
Seems plausible to me that they could just keep writing Python 3.13 till the end of time.
Take assembly, say: we didn't stop writing it because it stopped working.
As a functional building block, programming seems feature-complete.
This might be one of the more fascinating things I've read in a long time. Care to expand on it? I'd be genuinely curious.
1. Plateau != Regress. Why point to regressions as evidence of a plateau? Why only look at a single model and minor version? We are clearly still in AI's infancy; regressions are to be expected from time to time.
2. Where's the evidence of this? Humans are using AI to branch out and dip their toes into things that they wouldn't have fathomed doing before. How would that lead you to "disincentivized"?
> Doubly so when systems that were primarily or entirely "vibe coded" start to break in ways
So in this fantasy, everybody is vibe coding resilient code/systems that last 10+ years, everybody stops learning how to code, and after a decade or so, things start breaking and everybody is in trouble? This world you're creating wouldn't stand up to the critique of sci-fi readers.
I'm sorry but if we can vibe code systems that last 10+ years and nobody is learning anything because they are performing so well, then that's a job well done by OpenAI and co. We're probably set as a civilization.
> I literally can't distinguish between reddit commenters and HN commenters.
No need to condescend. I have a fair amount of experience building with and using these tools daily. I'm not just some "reddit idiot."
> So in this fantasy everybody is vibe coding code that lasts for 10+ years and everybody stops learning how to code
I'm extrapolating. Look at what happened in the wake of the industrial revolution. Most people don't know how to fix or create anything today, and instead rely on fast-and-cheap products or services made or offered by other people. Hence the panic over China and tariffs. The AI-ification of everything is just another, modern version of a similar thing.
I could absolutely be wrong (and hope I am). But when you track human laziness over time, it leads to deterioration and incompetence. I view this as a "gradually, then all of a sudden" type of problem. One that will be incredibly difficult to dig ourselves out of later.
Much of this feels like the studies on people who take mushrooms: they feel like they are more productive, but when you actually measure it, they aren't. It's just their perception.
To me the biggest issue is that search has been gutted out and so for many questions the best results come from asking an LLM. But this is far different from using it to generate entire codebases.
Power looms were probably the first devices like this. Somebody has to thread the loom, but then it mostly runs by itself.[1] Production lines with lots of stations will have shutdowns, where a drill bit broke or there's dirt on a lens or some consumable ran out. Exceptions are hard to automate, and factory design focuses on minimizing exceptions and bypassing stuck cells.
It's helpful to understand how a factory works when watching how software development is changing. There's commonality.
So the phrase "vibe coding" is only two months old.[2] How widespread will it be in two years?
Software engineers ultimately are people with “will to build”. Just as hedge fund people have a “will to trade”. The code or tooling is just a means to an end.
Your car has way more code than a decade ago and so does your TV.
These things might make you miserable but it’s still “demand” in the economic sense of the term. It keeps developers employed.
Most thorny bugs fall into the latter in my experience.
It couldn't do it. I prefilled all the fields (hundreds) and told it just to populate them, but it tried to hallucinate new fields; it would do one or two, then both delete the fields I had added and leave a comment saying 'then do the rest'. I tried a bunch of different prompts.
I can see how some vibe coders could make useful things, but most of my attempts to use LLMs in anything not-from-scratch are exercises in frustration.
Can we please make it a convention that whenever anybody posts about some LLM experience they had, they include which model they used and which UI was driving it?
Parent's post is like saying: I tried to send an email with a new email program and it didn't work.
(Not glue, but close enough.)
If you build stuff for others, AI (mostly) removes typing and debugging from the equation; that frees you to think harder about what you're building and how to make it most useful. And because you're generally done sooner, you can get the thing into your users' hands sooner, increasing the iterations.
It’s win-win.
A lot of us are stationary, thinking the stuff and people around us will be automated, but not us: "I am special." Well, I fear a lot of people will find out just how special they unfortunately are (not).
And the complete lack of a game plan at a societal level is starting to get worrying.
If we're going to UBI this, then we're going to need a bit more of a plan than some toy studies.
The right way is to have it autocomplete a few lines at a time for you. You avoid writing all the boilerplate, you don't need to look up APIs, you get to write lines in a tenth of the time it would normally take, but you still have all the context of what's happening where. If there's a bug, you don't need to ask the LLM to fix it, you just go and look, you spot it, and you fix it yourself, because it's usually something dumb.
The second way wins because you don't let the LLM make grand architectural choices, and all the bugs are contained in low-level code, which is generally easy to fix if the functions, their inputs, and their outputs are sane.
I like programming as much as the next person, but I'm really not lamenting the fact that I don't have to look up parameters or exact function names any more. Something like Cursor especially makes this much easier, because it can autocomplete as you type, rather than in a disconnected "here's a diff you won't even read" way.
Then you get into boilerplate code, and if you find yourself writing it a lot, that's a signal to start refactoring, add some snippets to your editor (e.g., error handling in Go), write some code generators, or lament the fact that your language can't do metaprogramming.
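As a hypothetical sketch of what folding boilerplate into metaprogramming can look like (in Python, since a Go example wouldn't fit the rest of this thread's examples), a small decorator can replace a try/except/log block repeated across dozens of functions; `log_errors` and `parse_port` are illustrative names, not from any library:

```python
import functools
import logging

# Instead of repeating try/except/log in every function (boilerplate),
# fold the error handling into one reusable decorator.
def log_errors(default=None):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                logging.exception("error in %s", fn.__name__)
                return default
        return wrapper
    return decorator

@log_errors(default=0)
def parse_port(value):
    return int(value)

print(parse_port("8080"))  # 8080
print(parse_port("oops"))  # logs the ValueError, returns 0
```

The point isn't this particular decorator; it's that repeated boilerplate is a prompt to reach for your language's abstraction tools before reaching for an LLM to type it out again.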
> but I'm really not lamenting the fact that I don't have to be looking up parameters or exact function names any more.
That's a reckless attitude to have, especially if the function has a drastic behavioral switch, like mutating its argument versus returning a fresh copy. All you do is assume it behaves a certain way, while the docs you haven't read carry the relevant warning.
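A classic instance of that mutate-versus-copy trap, in Python, is `list.sort()` versus `sorted()`: near-identical names, very different behavior, and the distinction lives in the docs you'd skip if autocomplete filled in the call for you:

```python
xs = [3, 1, 2]
result = xs.sort()   # mutates xs in place and returns None
print(result)        # None -- easy to miss if you never read the docs
print(xs)            # [1, 2, 3]

ys = [3, 1, 2]
zs = sorted(ys)      # returns a fresh sorted list, leaves ys untouched
print(ys)            # [3, 1, 2]
print(zs)            # [1, 2, 3]
```

Code that does `result = xs.sort()` type-checks and runs, and only fails later when `result` turns out to be `None`.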
The only difference in a “vibe coding” world is that now these “instructions” that we pass to the computer are in English, not Java.
Abstract
Artificial intelligence (AI) and psychedelic medicines are among the most high-profile evolving disruptive innovations within mental healthcare in recent years. Although AI and psychedelics may not have historically shared any common ground, there exists the potential for these subjects to combine in generating innovative mental health treatment approaches. https://nyaspubs.onlinelibrary.wiley.com/doi/10.1111/nyas.15...