That comes down to why I hire developers in the first place: To share my responsibilities with people I can trust.
I don't hire them to write code or to close tickets. The act of programming I consider an exercise that helps them understand the problems we solve and the logic of our solutions. I'm always excited when I have a well-specified ticket I can hand to a new hire to learn the ropes. So the kind of thing I can imagine Devin pulling off at some point would actually be detrimental to the kinds of teams I build.
I don't think I represent the majority of why people hire developers though, so I guess tools like that may well have a big impact on the industry. Nobody can predict that though.
Uncertainty sucks, but it's how things are. I find the best way to deal with uncertainty is to become better at adapting to unforeseen circumstances. Programmers have quite a bit of experience with that, for what it's worth.
Is there such a job? Even the tiniest bit of code I had to write involved learning about the problem and the platform to come up with the solution. There were some pieces I could hand over to ChatGPT, but it always felt like when somebody gives you their vim or zsh config: you wonder what's going on in there, and you might just as well read the documentation.
I'm sure it's not the only way, not in the long run anyway.
If by these days you mean the two days after it came out several weeks ago, and by everyone you mean a handful of people on social media, sure.
People are hard up for content, much of their audience doesn't have employment anymore and is scared, and they need to put things out at a regular cadence. It's not (necessarily) a conspiracy that you see the same copy-pasted tech thing appear everywhere simultaneously, and because it does, it makes you think everyone in real life knows and cares, but that's far from reality. Even someone who's terminally online and chronically checking these things could have taken a long weekend away from their phone and not have heard about it.
As an aside, no employer really gives a shit about your curiosities, so you need to separate all that chronic consumption from what is efficient and practical to do in a job if you have one, rather than leaning into what someone online thinks is the best way or whatever you think is the future.
Several subreddits were overtaken by Devin bots for a few days. When the dust cleared everyone realized what had happened.
And here we are again.
There was a study by Google years back that showed the exact contrary: https://catonmat.net/programming-competitions-work-performan...
It frees us up from doing menial tasks and is a great help for stuff like that.
It's not perfect, but it's a glimpse of the future that definitely needs to be noticed.
I've used Cody and Copilot, and they just get in the way: I know exactly what I need to write, and neither really helped me.
However, as I was researching, I found a few interesting ideas in this space that might help these LLMs solve more complex problems in the future. Post here if interested: https://kshitij-banerjee.github.io/2024/04/30/can-llms-produ...
When I'm creating a CRUD API I know exactly what I want, and I know exactly what it should look like.
Do I want to spend 15-30 minutes typing furiously adding the endpoints? No.
I can just tell Copilot to do it and check its work. I'll be done in 5 minutes doing something more engaging like adding the actual business logic.
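To be concrete about the kind of boilerplate I mean: it's stuff like the following, a minimal in-memory CRUD sketch in Python (all class and field names here are invented for illustration); the endpoints I'd have Copilot churn out are just thin wrappers around operations like these:

```python
# A toy in-memory CRUD store -- the shape of the repetitive code I mean.
# Names are illustrative, not from any real codebase.

class ItemStore:
    def __init__(self):
        self._items = {}
        self._next_id = 1

    def create(self, data):
        """Insert a new item and assign it an id."""
        item_id = self._next_id
        self._next_id += 1
        self._items[item_id] = dict(data, id=item_id)
        return self._items[item_id]

    def read(self, item_id):
        """Return the item, or None if it doesn't exist."""
        return self._items.get(item_id)

    def update(self, item_id, data):
        """Merge new fields into an existing item; None if missing."""
        if item_id not in self._items:
            return None
        self._items[item_id].update(data)
        return self._items[item_id]

    def delete(self, item_id):
        """Remove the item; return True if it existed."""
        return self._items.pop(item_id, None) is not None
```

None of this is hard to write, it's just tedious, which is exactly why handing it off and then reviewing it works.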
Like you say, it makes the most sense for repetitive or easy tasks.
Checking someone else's code is not trivial and is very error-prone.
I get what you're saying, but I have my doubts whether doing the whole thing manually would really be slower than asking an assistant and then doing an extensive code review.
I've been toying around with embedded development for some art projects, and LLMs were invaluable as a kickstart: a glimpse of the knowledge I'd need to explore, some useful quick results. But when I got into more complex tasks it just broke down: non-compiling code, missing steps, hallucinations (even references to variables that were never declared), reformatting non-functioning code instead of rewriting it.
As complexity grows the tool simply cannot handle it. As you said, it's a good sparring partner for new territory, but after that you'll rely on your own skills to move into intermediate/advanced stuff.
Future hypothetical AI coding assistants that don't exist yet? While I won't say it's philosophically impossible that they'll move beyond extreme autocomplete with security holes, I'll say it's not up to me to disprove someone else's hypothetical. Show me the thing.
Anecdotally, I've had it mispredict from very simple contexts, such as skipping numbers in series where the pattern should have been extremely obvious.
I've had it sneak in subtle and obvious bugs on a regular basis, to the extent that I don't have much confidence in any code beyond what I can grasp in a single look and be sure is correct. Sorry bros, I'm not on the hype train this time. Feels like crypto all over again.
1) Many different AIs with different roles: business analyst, tester, developer. You as the developer are treated as the customer and write a simple prompt, and the business-analyst AI turns it into a proper step-by-step prompt for the developer AI, so you don't have to be very good at prompt engineering
2) Bigger context for the LLM, so you can feed it up-to-date documentation and the full repo
3) The LLM having access to RAG over web search to get up-to-date information
4) The LLM having access to a terminal and debugger, so the tester/developer AI can automatically see how the code executes and the state of variables during execution
5) Faster and cheaper LLMs, so that you can hand over a task before you go to sleep and all those AIs in a loop try to solve it, trying many different options until all tests pass.
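Point 5 is essentially a generate-and-test loop. A minimal sketch of what I have in mind, where `generate_patch` and `run_tests` are stand-ins for a real model call and a real test runner (both invented here for illustration):

```python
# Hypothetical overnight loop: keep asking the model for a patch,
# feeding back the test failures, until the suite passes or we give up.

def solve_overnight(task, generate_patch, run_tests, max_attempts=100):
    feedback = None
    for attempt in range(1, max_attempts + 1):
        patch = generate_patch(task, feedback)  # model call, given prior failures
        ok, feedback = run_tests(patch)         # (passed?, failure output)
        if ok:
            return patch, attempt
    return None, max_attempts
```

The expensive part today is that each iteration of this loop costs real money and real time, which is why the "faster and cheaper" half of the point matters as much as the loop itself.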
Essentially I now just architect and review. Cursor has good context, so if that gets extended to the way Devin operates I think this could go pretty far.
In the short term (5-10 years; I can't see them autonomously producing products), it will need an experienced programmer to interpret and use the output effectively.
An implication of this is that, in the short term, developers become even more valuable. You still need them, and these tools will make the developer significantly more productive.
I was reading Melanie Mitchell's book 'Artificial Intelligence: A guide for thinking humans' recently (which I'd recommend). She has this chapter on computer-vision. And as an example, she shows a photograph of a guy in military clothing, wearing a backpack, in what looks like an airport, and he's embracing a dog. She makes an insightful point, that our interpretation of this photograph relies a lot on living-in-the-world experience (soldier returning from service, being met by his family dog). And the only way for AI to come close to our interpretation of this, is maybe to have it live in the world, which is obviously not such an easy thing to achieve. Maybe there's an analogy there with software development, to develop software for people, there's a lot of real-world interaction and understanding required.
In terms of autonomously producing products, I see these tools as they are now a bit like software wizards, or a website that Wordpress will create for you. You get a 'product' up-and-running very quickly, and it looks initially fantastic. But when you want to refine details of it, this is where you get into trouble. AI has an advantage over old-fashioned wizards, in that you can interact with it after the initial run, and refine it that way. But I'm not sure this is so easy, to have that fine-grained control you have with code. This is where I see the challenge being, to develop tools to talk to it, and refine the product sufficiently.
It is a tool for building software. You still need to know software development to use the tool.
You might not need to actually write code in the future - just like very few write Assembly today.
But you still need to know and understand system requirements, systems architectures, integrations, distribution, deployment, maintenance, etc.
Software Engineering is more than just coding.
If AI speeds up software development by 3x and demand for software doesn't increase by 3x, then I don't see why the above wouldn't apply. And I'm talking about a longer timeframe, like 10 years, since this is more relevant for people who are just starting down the software path at college.
I think we're in different spaces, because I barely hear anything about it. That said, I think the LLMs-replacing-jobs train was blown out of proportion. I heard so much about them replacing developers, but I've seen time and time again that they output code with subtle bugs (I'd argue worse than obvious bugs) and can't operate with more than a little bit of context.
I think we're in a Pareto-distribution situation right now. Getting an LLM to write the majority of the code was pretty quick to achieve; getting it to do anything a moderately skilled dev can do will take decades.
I've seen it multiple times over the last 15ish months where I'm reviewing code and I spot a subtle bug in an htaccess file or a bash script that doesn't make any sense. The PR comment follow up is then "oh I got it from ChatGPT". I think these tools become assistants to a human developer who can guide them. That use case is already available and seems to be pretty decent for a lot of folks. Full replacement is so far away that I don't have a single thought about it.
I contend that, given that the universe of computing (i.e. the capabilities of the processor) is finite, all software is engaged in essentially the same activities (write to memory here, a file over there, etc.), such that the difference between any two programs could be reduced to a matter of interpretation. If so, any automated agent capable of assigning some meaning to these activities should be able to produce a sound program, in whatever computer language, even one very proprietary to the agent.
If we are talking about a specific use case black box to automate the work, then it might be possible up to a certain limit.
Because when we are talking about training neural networks, we are looking for the best numbers. The so-called "emergent abilities" claim - that increasing model size makes it smart - may be true, but what's the probability of getting most of the parameters to their correct values? There are billions of them.
(Total) replacement? I highly doubt that.
Maybe the pool of work for developers will increase proportionally to the productivity increase, so there's no layoffs, but there's no guarantee of that.
Although AI is advancing rapidly, a person who learns to adapt in such situations by broadening their learning scope can navigate these hype cycles smoothly.
The current LLMs are not perfect, but I recommend anyone try them out - they really speed up development, and they're great for creating boilerplate or when trying to use a language you have little knowledge of.
Architecting, writing requirements, debugging nasty issues and optimizing tricky problems will remain valuable.
Like all the other AI codebots, it's a tool that can potentially optimize a developer's workflow, in the same way that a nail gun optimizes a carpenter's workflow. But sometimes the carpenter might just use a hammer.
> This riff derives from a recent "AI Programmer" story that's making people in my corner of the nerdiverse sit up and talk, at a time when hot new AI happenings have become mundane.
> ...
> It is yet another prompt for me to take the lowkey counterfactual bet against the AI wave, in favour of good old flesh and blood humans, and our chaotic, messy systems.
So - do FSD coding assistants pose a threat to developers? Sure, just like any tool using GPT-3+ class engines does; we are in revolutionary times.
But the revolution here is the engines now available thanks to OpenAI and successors, not the wrappers like Devin and others.
If you're not concerned already, if you're not pondering the future already, then you've missed the point by seeing it only in the likes of Devin.
Personally, I think there is both risk and opportunity - it really depends on what sort of mindset you have as to whether you need to feel threatened.
The real question is, what will be the rate of progress from this point forward?
That's L5, and it may well be the future, but as it stands today, I believe L3-L4 is a more productive target for a coding agent. Before building full autonomy, we first need the infrastructure to precisely guide the models and iterate efficiently on our interactions with them.
Once that foundation is in place, it will then be possible to build more and more robust layers of autonomy on top. But crucially, the developer will always be able to drop a few layers down and take over the controls.
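To make the layering idea concrete, here's a toy sketch (every name in it is my own invention, nothing from a real agent framework): a higher "agent" layer plans edits, but it applies them through the same manual layer the developer uses, so the developer can always bypass the agent and edit directly.

```python
# Toy model of layered autonomy over a file tree (a plain dict here).

def manual_edit(files, path, text):
    """Lowest layer: the developer (or any caller) edits a file directly."""
    files[path] = text
    return files

def agent_edit(files, task, planner):
    """Higher layer: a planner (stand-in for a model) proposes (path, text)
    edits, which are applied through the same manual layer underneath."""
    for path, text in planner(task):
        manual_edit(files, path, text)
    return files
```

Because the agent layer is just a loop over the manual layer, "taking over the controls" means calling `manual_edit` yourself; nothing about the higher layer has to be unwound first.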