I know one project that didn’t have my involvement and couldn’t have succeeded without my knowledge. They were so bad they would casually work questions to me into their actual work.
I started avoiding all of them when I found out management had been dumping on my team and praising theirs. It’s such a slap in the face, because they could not have done well without me and their implementation was horrible.
I often say "Sometimes, you have to let the manager fail."
Some managers don't like being told their ideas won't work. If you refuse or argue, you are seen as the reason their idea failed. I've found what works best with them is to proceed with the work, but keep them informed very frequently, so they can see how things evolve and will be able to spot the failure you anticipated long before it is too late.
Then you're seen in a positive light, and they'll separate you from the project failure.
What keeps you motivated?
This strategy is one that I would expect to work on children, not adults. But it does actually work, I know because I've done it too.
It shouldn't be the case that criticism and compromises are seen as attacks. Everyone wants to succeed, so just help each other succeed. But it's never that easy.
There have been a bunch of times in my career where I've allowed people under me to "fail". Oftentimes, an individual failing at something is just not that expensive, while being highly educational. Sometimes, it turns out that their approach actually worked, and we as a group gained a new bit of institutional knowledge.
Forced executive churn has been higher than for individual engineers at a lot of my past jobs. Especially for disciplines like marketing/advertising/sales.
Gotta accept that a likely outcome is that they do fail and they don’t learn and you have to let them go. But if you tried to support them beforehand, did what you could, at least you can have a clear conscience.
Yup -- I've learned a lot from my failures. Far be it from me to deny others that experience. Assuming their failures won't result in the company imploding or other serious harm, of course.
Similar to one I heard about navigating this sort of thing: “People have to gather their own data.”
I can’t emphasize this part enough.
I’ve been part of some projects where someone external to the team went on a crusade to shut our work down because they disagreed with it. When we pushed through, shipped it, and it worked well, they lost a lot of credibility.
Be careful about what you spend your reputational capital on.
I went through this at a corporate job not too long ago for the first time. One of the more insane things I've dealt with so far in my career. I know life isn't always sunshine and rainbows, but up to that point I'd never seen that level of naked, antisocial self-interest in an org that was ostensibly created to allow people to cooperate toward solving some problem for our customers. Call me naive, but it was really disheartening.
If I see something heading toward failure, I let people know they may want to consider a different approach. That’s it. There’s no need to be harsh or belabor the point but it’s better to speak up than to quietly watch a train wreck unfold.
This clause is doing a lot of heavy lifting. One needs to have good judgement about when and how to help. A lot of people can imagine how things could go better if a bunch of other people changed their behavior in surprisingly simple ways. It's a much smaller subset of people who can correctly push the right buttons to actually get those other people to make those changes at a systemic level.
In a small org it's actually not too hard for good ideas and feedback to get traction. In a larger org, for broad concerns, it can be fiendishly difficult. Often the reason why a large project will fail is only truly knowable by a few senior technical people with enough experience and broad context to see the forest for the trees. Past a certain volume of people involved, you cannot explain to people why it will fail fast enough to offset the army of clueless stakeholders incentivized to socialize a good-sounding narrative convincing everyone that we need to try. In these cases reductive explanations with the right counter-narrative can work, but they require significant reputational and/or hard authority to pull off.
This is why the article advocates picking your battles in a large org. Often the chance of actually helping is much lower than destroying your own reputation, even if you're right.
The point the author makes is that sometimes you are not in control of those projects. Therefore "letting them fail" seems a false choice constructed by the author.
A better title: "You don't know what other people are doing and you don't know why, unless it is your job to do so."
Yes, it seems cruel and also counter to ensuring the org succeeds. Your perceived ability as an engineer might go up if your colleagues fail, but your colleagues failing when you knew a possible way for things to go better is harmful to your org's goals and culture. It only takes a small few failures for the bar to be lowered to the point that you yourself may not want to work there.
Even sometimes when other people's projects are NOT your problem and they aren't seeking feedback, sometimes you SHOULD make their flaws your problem if it is of crucial importance to your org. Knowing when you should expend your energy on an initiative like that is in itself a mark of seniority.
The blog itself mentions this a bit.
In hypothetical situations where every single person has good intentions, sure. Human beings are complex and sometimes this doesn’t sit well with others. I personally know of someone who, when they did this, ended up with a manager escalation and eventually lost their job. Because someone else felt their competence was being questioned, and took it as an opportunity to get the person who tried to help fired.
Sometimes no good deed goes unpunished. Corporate culture mostly dictates that you only help when asked, or when you know the people being helped closely; otherwise it will come back to bite you. Everything else, don’t get involved.
There are places where this doesn’t happen and I’d argue you learn a lot more at them.
This is what I do as well, in writing. Then I drop it. Professionalism demands that I say something. That's part of what I'm being paid to do. But experience has taught me that it's almost certainly not going to change anything, so I just do my duty and move on.
Don't assume that. Why would you assume that? The entire thesis of the article is that you do in fact get penalized for that. Even if you don't care about anything else, you're penalized by losing the ability to make people take you seriously on other problems.
Funny enough, 2 years after I was told to get on board or keep my mouth shut, customers complained about the very thing I said they would complain about. I felt slightly vindicated, and they had to rearchitect the whole thing to try and accomplish it. It’s been 5 years since the project started and they still haven’t fully shipped the feature.
I think you both are right in different ways.
You weren't asking me, but I'll chime in anyhow. If by "backfire" you mean have I suffered any adverse consequences, then no.
Interestingly, in several cases, I've had other engineers talk to me privately to express gratitude that I said something. They had the same concerns as I, but were too afraid to speak up for fear of consequences.
My attitude has always been that if I'm being punished for doing my job then I'm in the wrong job anyway, so I don't worry about it.
Some people don’t actually want advice. In those cases, the issue isn’t technical, it’s interpersonal. In my experience, engineers who refuse to hear advice tend to struggle the most for obvious reasons.
Where I’ve gone wrong is taking on the emotional weight of other people’s projects. When I do that, the balance shifts toward more bad outcomes than good ones.
If you have the power (as the post mentions - like a CEO) you can suggest, direct or butcher a project and no one would see you as a negative person.
But you can get butchered when you don't have the authority to poke around your concerns.
I would prefer to see the ship sink instead of shooting myself in the foot and risking my influence and credibility - as another comment on this thread said "Sometimes, you have to let people fail".
When I was officially the architect at two companies between 2016-2020, I felt comfortable stating my opinions on the “how” of the underlying infrastructure and cross cutting concerns and shared code. But even then I didn’t give my unsolicited opinion to the team leads who built for instance the user interface or the business logic when that wasn’t my responsibility.
The second saying is “The avalanche has already started. The pebbles no longer have a vote”. If a decision was made by my skip manager or above, I’m not saying anything. I’m going to go along with the program.
I’ve been working for consulting departments (at AWS) and then (full time) at consulting companies since 2020. When I was a mid level consultant (L5) at AWS, for larger projects where I was assigned to lead one slice of work (a workstream), if it didn’t affect me, I said nothing. I was just trying to keep my head down to get through my four year initial contract.
I definitely didn’t stick my nose into projects I wasn’t assigned to.
Now I’m a staff consultant at a third-party consulting company. I still go by the same rule: I keep my mouth shut about internal corporate decisions, I toe the company line, I don’t give unsolicited advice about other projects, and I lead my own projects.
There is one specific speciality I’m working on building up within the company where I will subtly interject. But even then, it’s only because I have the blessing of C suite and they reached out to me.
In any other setting you can't afford to watch money and motivation burn just to stay 'politically solvent'.
(Lalit is very good at fitting complex corporate dynamics in a single blog post though.)
If you’re constantly nitpicking and expressing concerns, you become “that person” who’s constantly negative about other people’s ideas. After a while people tune out; they already know that you’ll find “problems.” We all know these people. No one really likes working with them. Thus they’re _not effective_ at what they’re trying to do.
Ultimately you mostly get credit for shipping things that work, and only rarely for preventing the mistakes of other people.
At its core, what the blog post is saying is: keep your powder dry for when it matters. Not every problem is going to make the company insolvent. Not every concern will prove correct. Pick your battles strategically.
It’s good advice no matter the size or nature of the org.
The only alternative is to advocate for inaction, but then why are they paying you? Those kind of bets can make sense for private equity investors, but not for employees, and my builder-brain just finds them dull and annoying.
At large companies, I've rarely found a reason to speak out on a project. Unless it has a considerable effect on my team/work (read: peace of mind), it just doesn't make sense to be the person casting doubt. There's not much ROI for being "right".
If you manage to kill the project before it starts, no one will ever know how bad of a disaster you prevented. If the project succeeds despite your objections, you look like an idiot. And if it fails - as the author notes, that doesn't get remembered either.
As a senior IC, the only real ROI I've found in these situations is when you can have a solution handy if things fail. People love a fixer. Even if you only manage to pull this off once or twice, your perception in the org/company gets a massive boost. "Wow, so-and-so is always thinking ahead."
A basic example I saw at my last company was automated E2E testing in production. My teammate had suggested this to improve our ability to detect regressions, but it was ultimately shot down as not being worth the investment over other features.
A few months later, we had seen multiple instances of users hitting significant issues before we could catch them. My teammate was able to whip out the test framework they had been building on the side, and was immediately showered with praise/organizational support (and I'm sure a great review as well).
> You put more effort, you take responsibility for stupid people's decisions, and then you get a disproportionately small reward
On that I disagree. Managers might have to take responsibility for bad decisions, sure, but get a disproportionately larger reward than those under them. It's certainly less stressful at the bottom of the ladder, but don't expect to get much praise or monetary reward, and you're the first to go as soon as something goes wrong. There's a reason why late-stage companies are full of middle managers, and few people actually doing the work.
That's true. And it is currently one of the main reasons why startups are so efficient compared to MegaCorps.
In small companies, it takes a few engineers saying "this is bullshit" to stop a disaster.
In large corps, it takes 2y, 10M USD and a team in burnout to reach the same result.
And the main reason is the usual source of all sins: *Politics*.
Whereas if real existential need is on the line then people are incentivized to give a shit about the outcome more.
Tech is so rich in general that the norm is to just shut up and enjoy your upper-middle-class existence instead of caring about the details. After all, if this company blows up, there's another one on the way that will take most of you.
Not that this excludes the same behavior in industries that are less lucrative. There's cultural inertia to contend with, plus loads of other effects. But I have noticed that this attitude seems to spontaneously arise whenever a place is sufficiently cushy.
Also, this take doesn't (on its own) recommend one strategy or the other. Maybe it makes the most sense to go along with things or fight them for personal reasons, uncorrelated to the economic ones. But it's good, I think, to recognize that the impulse is somewhat biased by the risk-reward calculation of a rich workplace. Basically it is essentially coupled to a sort of privilege.
The engineers' role should mostly be as technical advisors, i.e. calling out bad projects for technical reasons (UX, architecture, etc.) But even the seniormost engineers do not have the corporate standing, let alone political cachet, to call out or fix political issues (empire building, infighting between orgs, etc.) They can and should point out these conflicts to leadership (very diplomatically, of course) but should bear no responsibility for the outcomes.
However, as an engineer you should ABSOLUTELY be aware of these dynamics because they will impact your career. Like when the project is canceled with no impact delivered.
The example given of the latent turf war between the product and platform teams might have been avoided via a very clear mandate from senior leadership about who owns what exactly. This would probably have involved some horse-trading about what the org giving up its turf gets in return. (BTW if you've ever wondered "Why so many re-orgs" this is why.) That this didn't happen is a failure on the execs' part.
As an aside, I know this happens in every large company, but somehow it appears to be a lot more common at Google? Or at least Googlers are more open about it. E.g. I observed something similar on that recent post about lessons from Google: https://news.ycombinator.com/item?id=46488819
* Know your audience. Saying things they are unable to hear is a waste of energy.
* Choose your battles carefully.
The flip side:
* Trust your gut
* Speak authentically and with an aim to help (not convince)
* Don’t be overly invested or dependent on the actions and reactions of others (can be hard to do if someone has power over you)
Balancing these things is something I’m learning about…
Thank you for that wording.
I've never worked at a company as large as Google, but in my experience things can be a little more optimistic than the post suggests. When you earn enough trust with your leadership, such as at the staff/architect level, you'll be able to tell them they are wrong more often, and they'll listen. It doesn't have to be a "$50,000 check" every time.
That leads to a very important question - Why doesn't leadership always trust their engineers? And there's a very important answer that isn't mentioned in the blog post - Sometimes the engineers are wrong.
Engineers are extremely good at finding flaws. But not so good at understanding the business perspective. Depending on the greater context there are times where it does make sense to move forward with a flawed idea.
So next time you hear an idea that sounds stupid, take a beat to understand more where the idea is coming from. If you get better at discerning the difference between ideas that are actually fine (despite their flaws), versus ideas that need to die, then you'll earn more trust with your org.
I’ve lost too much sleep and fought too many battles and lost too much clout over the years trying to make sure bad things didn’t happen. “Nobody could have foreseen this” is still said, even if there’s a ton of evidence, recommendations, pleading, etc, to keep it from happening.
Everyone likes to pretend it doesn't happen. But ask around and you'll find many people have experienced it
This also applies to the capacity of the industry to generate bad (and evil) ideas.
Now that we're one of the biggest-money fields, there is no end of people thinking/behaving badly.
You'll wear yourself out, calling out all of it.
For example, I fled cryptocurrency entirely when it got overrun with bad faith. But I don't intend to flee AI, and so will have to ration the criticism I have for abuses there.
> The nuclear option is [...]
BTW, be careful in what context you use this idiom. It doesn't always translate well outside the US. (I realized this as soon as the words came out of my mouth, under perhaps the worst possible circumstances.)
Upper management agreed to GeoIP blocking of the app without consulting engineering. Why this matters: GeoIP blocking is at best a game of whack-a-mole, with constantly updating lists and probabilistic blocklists, and it is easy to route around with VPNs.
The verbiage they approved was "geoblocking", not "best-effort geoblocking". Clients expected a 100% success rate.
When that didn't work, management had to walk it back. We showed proof that what we did was the best reasonably doable. That finally taught upper management to at least consult before making grandiose plans.
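To make the best-effort nature concrete, here's a minimal sketch using only the Python standard library. The CIDR ranges are made up (they're the reserved documentation blocks, not real GeoIP data): the point is that any client whose VPN exit IP falls outside the snapshot sails through.

```python
import ipaddress

# Hypothetical, simplified country blocklist. Real GeoIP data is just a
# snapshot of CIDR-to-country mappings that goes stale as addresses move
# between providers and regions.
BLOCKED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # reserved documentation range
    ipaddress.ip_network("198.51.100.0/24"),  # reserved documentation range
]

def is_blocked(client_ip: str) -> bool:
    """Best-effort check: returns False for any IP not in the snapshot,
    including VPN exit nodes hosted in non-blocked ranges."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_RANGES)

# A client in the blocked region is caught...
assert is_blocked("203.0.113.42")
# ...but the same client behind a VPN whose exit IP is outside the list
# sails through. The check can never promise 100% coverage.
assert not is_blocked("192.0.2.7")
```

That last assertion is exactly the gap between "geoblocking" and "best-effort geoblocking" that the clients never understood.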
You might be able to get the engineers to tweak the design, but actually getting it canceled can be hopeless. You'll get told the CEO approved it.
I often use the term "social capital." You have to be careful with how you spend it.
The "price tag" of voicing concerns is lower, yet raise them too often and you still earn a reputation as obstructionist. Meanwhile, the cost of accepting problematic changes can be higher—you may end up maintaining that code long after changing jobs. And unlike corporate politics, the "influence bank account" is public: communications are archived indefinitely.
There is a fascinating shift in how "withdrawals" are calculated: In a corporate hierarchy, the cost of dissent feels exponential: something like `cost = exp(their_level - your_level)`. Say, as a Google L3/L4/L5 engineer, opposing L6-L8 feels like trying to make a massive withdrawal with a tiny balance. In contrast, in OSS the cost almost stays constant despite the corporate level difference.
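Taking the metaphor literally for a moment, here is a toy sketch of the two cost curves. The function names and numbers are mine and purely illustrative; the only point is the shape of the curves.

```python
import math

def corporate_cost(their_level: int, your_level: int) -> float:
    """Felt cost of dissent in a hierarchy: grows exponentially
    with the level gap, per the metaphor above."""
    return math.exp(their_level - your_level)

def oss_cost(their_level: int, your_level: int) -> float:
    """In OSS the cost stays roughly flat regardless of title."""
    return 1.0

# An L4 pushing back on an L7 pays roughly e^3 ~ 20x the peer-to-peer
# baseline; the same disagreement in OSS costs about the same either way.
assert corporate_cost(7, 4) > corporate_cost(5, 4) > corporate_cost(4, 4)
assert oss_cost(7, 4) == oss_cost(4, 4)
```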
This created a paradox for me: leaving Google means less time for LLVM maintenance, but it also lets me voice objections more freely, without the shadow of internal performance ratings or hierarchical friction.
That said, I know I've been "withdrawing" heavily, including from a lot of previous colleagues. In a recent LLVM Project Council meeting:
> There is a pattern of behavior here of blocking contributions due to concerns about maintenance cost and design simplicity.
(I appreciate the transparency of making these meetings public, by the way.)
I had to respond at https://discourse.llvm.org/t/llvm-project-council-meeting-no...
This results in a net loss of ROI: we risk being seen as a negative person, while no one acknowledges our goodwill or instincts, because the project lead presents the improvement as their own win through a strategic change of plan.
- It will adversely affect me directly (e.g. cause me to get paged a lot)
- It will harm users or other people outside of the org (various kinds of externalities)
Otherwise it's the company's problem. (Of course, I'm generally happy to give advice and critiques if asked.)
If you are simply the wrong person in a "toxic" culture, there is no action that can increase your social capital. In a well-functioning culture, constructive criticism would be investment, rather than spending.
I think it speaks poorly of their manager's professionalism, and what sort of behavior they consider to be acceptable with regard to colleagues.
If you clean it up, you're taking responsibility for it that might not be yours to take, and in an organization with many managers, that can permanently wreck your chances for advancement if those above you perceived your involvement as intruding on their territory, or trying to make them look bad, or trying to make the culprit look bad, and so on, and so forth.
Rarely is it "wow, there was a problem and they fixed it, without even being asked!"
Organizations that are rational and have good management let people take responsibility like that, and it's a good thing. Most organizations are not like that, and the bigger they get, the more likely it is you'll have an adversarial, territorial, hyper-political environment with saccharine smiles and backstabbing, and doing anything that even hints at negatively framing a manager, even just in their own minds, is sufficient reason to make it not your problem.
If you have good reasons to fix it, or if it's your problem for reasons that make management look good, you have the opportunity to fix an issue and be appreciated for it. Otherwise, it's just not worth jumping on other teams' grenades.
It'd be nice if everyone was rational and competent and secure and anti-fragile, but humans kinda suck in groups.
I’ve seen people who played the game well at Google or Amazon fall completely flat on their ass at a different company, thinking the game hasn’t changed (or that there even is a game), barely lasting a few months on the C suite before being softly moved along.
When the game rules shift, those people flail. Recent example was that in 2021-2 nobody could hire fast enough, but now staff expansion is not common. Managers who excelled at coming up with reasons to spawn new teams did great until the money dried up. Some of them shifted gears and adapted, but others just couldn't get the message.
> You rarely get credit for the disasters you prevented. Because nothing happened, people forget about it quickly.
There is another problem left implicit in the article: clueless people doing drive-by project reviews without any context or understanding of the whole problem domain, and proceeding to give unsolicited and unreflected advice supported by partial knowledge.
Also, sometimes projects with a perfect design end up failing for some reason or another, and projects doomed to fail end up pulling through and succeeding, even if they pile up technical debt. The truth of the matter is that software is soft and can adapt to changes in requirements and design, and with enough work anything can be made to work. Thus any observation on "failure" ends up being superficial opinions based on superficial observations.
In this case, I try to question the project owners on their assumptions and whether they have validated them. Usually this line of questioning reveals whether they have "done their homework".
Some part of it is that we are perceived as lazy, obstructionist, naysayer dinosaurs when we point out any flaws in new projects, as the article warns. But the rest is that, because some of the elders were effectively semi-retired and doing little, anyone over 40 has been uncritically lumped in with them.
So we keep the lights on while all the new shiny stuff is given to fresh juniors that don't ever push back and are happy to say yes, but also can't do it alone, and are lost at sea.
So they don't get anything shipped while we keep polishing our legacy turds and wince every time we accidentally get a glimpse of what they are doing.
Imagine if instead of having to speak up, and risk political capital, you could simply place a bet, and carry on with your work. Leadership can see that people are betting against a project, and make updates in real time. Good decision makers could earn significant bonuses, even if they don't have the title/role to make the decisions. If someone makes more by betting than their manager takes home in salary, maybe it's time for an adjustment.
Such a system is clearly aligned with the interests of the shareholders, and the rank-and-file. But the stranglehold that bureaucrats have over most companies would prevent it from being put in place.
Aside from that, perverse incentives are a real problem with these systems, but not an insurmountable one. Everyone on the project should be long on the project, if they don't think it will work, why are they working on it? At the very least, people working on the project should have to disclose their position on the project, and the project lead can decide whether they are invested enough to work on it. Part of the compensation for working on the project could be long bets paid for by the company, you know like how equity options work, except these are way more likely to pay out.
If no one wants to work on a project, the company can adjust the price of the market by betting themselves. Eventually it will be a deal that someone wants to take. And if it's not, then why is the project happening? clearly everyone is willing to stake money that it will fail.
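One common way to price such an internal market is Hanson's logarithmic market scoring rule (LMSR), which gives a well-defined price at every moment and bounds the company's subsidy. A minimal sketch, with all variable names and numbers mine and purely illustrative of the mechanism, not of the commenter's exact proposal:

```python
import math

B = 100.0  # liquidity parameter: higher values make prices move more slowly

def cost(q_yes: float, q_no: float) -> float:
    """LMSR cost function: C(q) = b * ln(e^{q_yes/b} + e^{q_no/b}).
    A trade's charge is C(after) - C(before)."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def price_yes(q_yes: float, q_no: float) -> float:
    """Instantaneous price of a YES share, i.e. the market's current
    probability estimate that the project ships."""
    e_yes = math.exp(q_yes / B)
    return e_yes / (e_yes + math.exp(q_no / B))

# The market opens at 50/50; engineers buying NO push the implied
# probability of success down, which leadership can read in real time.
assert abs(price_yes(0, 0) - 0.5) < 1e-9
assert price_yes(0, 60) < 0.5            # 60 NO shares bought: odds drop
charge = cost(0, 60) - cost(0, 0)        # total paid by the NO bettors
assert 0 < charge < 60                   # always less than the max payout
```

The bounded-loss property is what makes the "company adjusts the price by betting themselves" step affordable: the subsidy can never exceed B * ln(2) per binary market.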
Knowing a project is bad only requires knowledge of the product itself. But fixing it requires understanding of the whole infrastructure and the people around the project.
An outsider can't do it. And the insiders don't necessarily think the project is bad from their perspective. You would have to argue with them to convince them the project is bad, which really doesn't bring any value to the outsider, and can even be harmful.
There's a good chance that when it inevitably fails, the team will be laid off.
So beware of shiny new projects until they've proven themselves.
Simple as that. You can offer people your opinion on the matter but that's it. Some people invest way too much on what is essentially someone else's business. You are a replaceable cog, never forget that.
The reality is that a lot of people put ideas forward solely for the purpose of getting attention and trying to get promoted. A lot of those ideas are full of hope and enthusiasm and lacking in fundamentals. But to shoot it down makes you come across as a nay-sayer, or a Debbie-downer. Even though you are sure it’s going nowhere, the reality is that it’s a lot easier to let the market prove you right.
Fewer hurt feelings and less interpersonal drama that way. Seems wasteful, but at the end of the day you got data and learnings that you didn’t have otherwise. So hey, silver linings.
In this case it was an incompetent VP of Engineering who was seriously lacking domain knowledge when a new set of projects outside the norm came into the company. Instead of having a professional attitude, understanding his limitations and convening domain experts to help him and the team move forward, he actively opposed and derailed the project.
What's sad is that we, as external engineering consultants, were yelling at the top of our lungs trying to make management understand the serious liability this had revealed. They were absolutely blind to it until even a toddler could recognize the issue.
This cost the company millions of dollars as well as market reputation.
I think he is an Uber driver now, it's been a few years.
From a creator's standpoint, a software project exists to solve a problem, or at least make the lives of its users easier. But the moment a company bigwig clique decides to make money out of the company, "bad" projects pop up.
As luck would have it, I experienced this three times. The signs are nearly the same. The company has a lot of workflows, usually handled by Excel and/or internally developed apps that actually reflect those workflows. Then comes the buzzword team proclaiming miracles, snake oil, and an app that will even cure your dandruff, just sign here. Of course, the clique has their cut; that's why they say yes or advise the board to say yes.
Then begins the grueling process of "analyzing workflows". Do they contact the actual users who are doing the work? Hell no. What they do is create a "Project Team", usually hired anew with no knowledge of how the company does its work, who try to "understand" the workflows. Then it becomes like a game of telephone: the user says one thing, the project team understands another and says a different thing, and the outcome is a product that solves a problem, but not the user's problem.
Of course, this process burns money. You gotta do development, you gotta have a server to run the app, you have to book meeting rooms in hotels to train the users, you have to create fliers internally to promote the app - and create pdfs, many many pdfs to make the users understand how the app works. And no one asks "hey, if this app is reflecting our workflows... why are we getting this training?"
Because at the end of the day, this app only exists to make some people money. And after a certain point, no one dares to say anything because of all the money spent. An ambassador who says "the app we spent $10M on does not work" will get shot. People retire with the f-you money they gained, and the company tries to work with the app they "built". Usually it ends up hiring an internal team and redoing it from scratch, and the expensive mess becomes a thing nobody talks about, a company omerta so to speak.
Of course, the wisdom of taking the personal risk is a continuum. In some cases it's worth it and in some it isn't. But to omit the ethical angle entirely seems like a bad take.
Getting personally attached and emotionally invested in work you get paid for is a risk too. There's nothing wrong with that. But there's also nothing wrong with putting your time in and churning out requirements, if that's what you want.
Not to say that there aren't experiments worth running. But in my experience (and in the example in the OP's article), the experiment often isn't even worth running. Intelligent people knew from the jump that it was a bad idea. No experiment necessary. Just pointless waste, enabled by hubris and apathy.
Is it? We live in a world in which social safety nets are eroding; an economically-divided one in which the middle class is rapidly disappearing.
These things (e.g. bullshit projects/jobs) are a form of "white collar welfare", no?
That's not bad. It's not like we're actually going to fix the underlying problem.
Perhaps another bored patent clerk will use his downtime to change the world.
Depending on scale, a couple large train wrecks may take the company out and leave you unemployed.
Employment is a business transaction not a transaction based on ethics viewpoints
(also that of a "non-ethical" person, like an animal or a person with no agency in the matter, if you want to make the distinction. I'm not sure we should but I guess it's an interesting question)
I mean, the supply and demand problem was so bad that you had people so narrowly focused that their expectations were absolutely wild. Expecting wild titles right out of college, or practically out of college. How many "senior" engineers are there with 3 years of experience or less? How many principals? CTOs? Tons. It's a horrible expectation. Then this cohort of people couldn't possibly fathom actually learning and putting in time, so they attacked the people who had simply been around longer. I guess it was ageism, but really it was that naive and toxic phrase you'd hear all over: "jack of all trades, master of none." People just couldn't get over the fact that others knew more than a week's worth of YouTube content... and I get it, the large companies hired many people to do one specific job. It's just how the industry supported the insane demand. You "specialize", if you want to call it that. As a result, I do not often hire people from large companies, because they expect way too much money and do far too little.
So all of this is to say quality has fallen off a cliff and very few know any better. It's really a result of industry demand.
I think many senior engineers let bad projects fail because they don't actually know how to save them. But yes, I agree, there's also no incentive.
Here's where I love AI. I'm hoping that AI can help fill some skill gaps, provide education, and separate the people who have motivation from those who don't. At the end of the day, I hate to say it, but many engineers took advantage of people. Maybe not intentionally, of course; it was what the market would bear. I think AI is going to put an end to the gravy train, and as a programmer of over 20 years? I'm thrilled.
Will AI prevent bad projects though? No. Because we still have the same problems. Few programmers are going to plan, communicate, and even bother to put forth the effort to ask about software design. They're going to crank out AI slop.
You know my bet? My bet is that product-minded people who didn't understand coding will end up outperforming most programmers. In fact, the more junior people on my team absolutely shred many of the "senior" programmers. So much so that I'm faced with a very, very difficult, gut-wrenching challenge: upskill the "senior" programmers or let them go, because it's just bad for the business when you look at the numbers. I wouldn't be doing my job or protecting the company if one of those two outcomes didn't happen. I'm not going to protect people who don't want to lift a finger.
A great reckoning is coming. I think there's going to be a Renaissance from an unexpected cohort of people who will produce good projects again. It won't be the "senior" programmers.