It's not all that different from the state of big corp software today! Large organizations with layers of management tend to lose all ability to keep a consistent strategy. They tend to go all in on a single dimension, such as ROI for the next quarter, and miss the bigger picture. Good software is about creating longer-term value, and that takes consistent skill & vision to execute.
Those software engineers who focus on this big picture thinking are going to be more valuable than ever.
>Those software engineers who focus on this big picture thinking are going to be more valuable than ever.
Not to rain on our hopes, but AI can give us some options and we can pick the best. I think this eliminates all mid-level positions. Newbies are low cost and make low-stakes decisions. The most senior of seniors can make 30 major decisions per day when AI lays them out.
I own a software shop, and my hires have been interns and people with the specific skill of my industry (mechanical engineers).
Two years ago, I hired experienced programmers. Now I turn my mechanical engineers into programmers.
Not to rain on our hopes, but AI can give us some options and we can pick the best.
a.k.a. greedy algorithms, a subject those of us on HN should be well acquainted with. You can watch the horizon effect play out frequently in corporate decision-making. This is why all the arguments about context windows and RAG exist: even if you asked the question of a human with all the context, there are such things as opinions, stated vs. unstated goals, functional vs. non-functional requirements, etc., which will give you wildly different answers.
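The horizon effect mentioned above can be sketched in a few lines. This is a toy illustration with hypothetical numbers, not anyone's real planning model: a greedy chooser compares only the immediate reward (the "next quarter" view), while a farsighted chooser sums rewards over the whole horizon.

```python
# Each option is a sequence of per-step rewards over three quarters
# (numbers are made up for illustration).
paths = {
    "ship quick feature": [10, 1, 1],   # big win this quarter, little after
    "pay down tech debt": [2, 8, 9],    # small win now, compounds later
}

def greedy_pick(paths):
    # Compare only the first step's reward.
    return max(paths, key=lambda p: paths[p][0])

def farsighted_pick(paths):
    # Compare total reward over the whole horizon.
    return max(paths, key=lambda p: sum(paths[p]))

print(greedy_pick(paths))      # -> ship quick feature
print(farsighted_pick(paths))  # -> pay down tech debt
```

The greedy choice wins on step one (10 vs. 2) but loses on the full horizon (12 vs. 19), which is the "tactical choices that don't add up to a strategy" failure mode in miniature.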
Most of the time people don't even know the questions they want to ask.
But that's kind of my point. A bunch of decisions like that tend to end up with a "random walk" effect. It's a bunch of tactical choices which don't add up to something strategic. It could be, but it takes the human in the loop to hold onto that overall strategy.
I think if anything, we have a better chance in the little picture: you can go to lunch with your engineering coworkers or talk to somebody on the factory floor and get insights that will never touch the computers.
Giant systems of constraints, optimizing many-dimensional user metrics: eventually we will hit the wall where it is easier to add RAM to machines than humans.
Because LLMs don't understand things to begin with.
Because LLMs only have access to source code and whatever .md files you've given them.
Because they have biases in their training data that overfit them on certain solutions.
Because LLMs have a tiny context window.
Because LLMs largely suck at UI/UX/design, especially when they don't have reference images.
Because...
The only thing to add: maybe we'll have the most senior of seniors verifying the decisions of AI.
There are just so many small decisions that add up to a consistent vision for a piece of software. It doesn't seem like LLMs are going to be able to meaningfully contribute to that in the near future.
I tried vibecoding my own workout tracker, but there were so many small details to think through that it was frustrating. I gave up and found an app that is clearly made by a team of experienced, thoughtful people, and AI can't replicate the sheer thoughtfulness of every decision that went into creating it. The inputs for reps/sets, the algorithms for adjusting effort on the fly, an exercise library with clear videos and explanations: there's just no way to replicate that without people who have been trainers and sport scientists for decades.
LLMs can help increase the speed at which these details turn into something tangible, but you definitely can't "skip all that crap and just jump to the end and get on with it."
There’s good reason to think that they could understand the big picture just fine, even today, except that they’re currently severely constrained by what we choose, or have time, to tell them. They can already easily give a much more comprehensive survey of suitable options for solving a given problem than most humans can.
If they had more direct access to the information we have access to, that we currently grudgingly dole out to them in dribs and drabs, they would be much more capable.
While I do tend to believe you, what evidence-based data do you have to prove this is true?
IMO the onus is on you to prove that they can be strategic. Otherwise you're asking me to prove a negative.
What is definitely going to be abundantly clear is just how much better machines can get at creating correct code and how bad each of us truly is at this. That's an ego hit.
The loving effort an artisan puts into a perfect pot still has wabi-sabi from the human error, whereas a factory-produced pot is far more perfect and possesses both a Quality from closeness to Idealism and an eeriness from its unnaturalness.
However, the demand for artisan pottery has niched out compared to Ikea bowls, so that's just how it is.
Exactly how I feel. AI has allowed me to work on projects that I've wanted to work on but didn't have the time/energy for.
So let's flood the world with projects nobody, including their authors in the first place, cared for enough to dedicate time and energy to.
Not that I can complain much, worse things have happened to better people, bla bla. But it's disorienting. I still have non-automatable skills and enjoy learning, but who says they aren't going to come up with Claude Opus 4.9 or something, and it turns out it can do that too, ha ha.
How is a young person supposed to establish themselves in this new world?
Given the models are unlikely to stop getting better, I think it is fair to say the human contribution is going to keep getting "leaner".
That is going to change the job, but also head count.
But I agree harnessing models opens up opportunities for better product design, ... but only ... everywhere.
The people who design the most usable software have always been in a minority. They will be valuable for some time.
That's all that is really required. I mean, look at the Microslop fiasco. They ruined a perfectly good editor, Notepad, with AI slop. But this is not reflected in their sales; they're still showing record revenues.
Just because a competing product exists does not mean your product is suddenly obsolete. There will always be people who want to buy (provided the market is not oversaturated), because that is how humans do things. AI won't change that behavior overnight [1]. Look around and you will see that every product you hold in your hand has at least 5-10 competitors.
[1] Think about all the things that are still not computerized and which require you to fill out some form of paperwork or other. We have had computers for nearly six decades now, and we STILL fill out physical forms from time to time. Computerization was touted to revolutionize this, and yet here we are, still short of 100% digitalization. The same will happen with AI. There is an initial burst of excitement (the phase we are in now) until reality sets in, and that's when people will learn how to best use the technology. What you are seeing today (vibe coding et al.) is NOT IT.
Can you email me? Would love to chat/get more tunes from you.
Mob currently controls my email