Maybe it's still insufficient advice, but it hasn't worked for them at least in part because they haven't figured out how to apply it.
From the post, I see low empathy and an air of superiority (perhaps earned by genuinely being smarter than their peers; that doesn't make it any more attractive).
That's going to cause friction because a team is a _social_ construct.
Your comment is hilarious on a meta-level: it's an example of exactly the sort of socially-mediated gatekeeping the author of the article (machine or human, I don't care) criticizes. It is, in fact, essential to match authority and responsibility to achieve excellence in any endeavor, and it's a truth universally acknowledged that vague consensus requirements are tools socially adept cowards use to undermine excellence.
Competent dictatorship is effective. Look at how much progress Python made under GVR. People who rail against hierarchy and authority, even when deployed correctly, are exactly the sort of people who should be nowhere near anything that requires progress.
Imagine running a military campaign by seeking consensus among the soldiers.
Or, you know, Linus Fucking Torvalds. If you were carrying the success or failure of most of the world's digital infrastructure on your shoulders, you also might be grating to some.
In the first example, they suggested a new metric to track added warnings in the build, then there was a disagreement in the team, and then, as a footnote, someone went and fixed the warnings anyway? That sounds like the author might be missing something from their story.
I do not find anything missing here. This is how things often play out in reality, both in your retelling and in what was actually written in the article.
Your retelling: some people agree with the new metric and some disagree. That is completely normal. Then someone who agrees, or wants to keep the peace, or just temporarily doesn't feel like doing "real Jira" tasks, fixes the warnings. The team moves on.
Actual article: the warnings get solved when it becomes apparent that one of them caused a production issue. That is when the "this new process step matters" side wins.
Before the end, I had them all fixed. Zero is far easier to deal with…
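For what it's worth, the "track added warnings" metric from the story usually ends up as a ratchet in CI. A minimal sketch (all names and the log format here are hypothetical, not from the article):

```python
#!/usr/bin/env python3
"""Hypothetical warning ratchet: fail the build if the warning count
exceeds a recorded baseline, and tighten the baseline as warnings
get fixed, driving toward the "zero is far easier" state."""
import re

WARNING_RE = re.compile(r"\bwarning\b", re.IGNORECASE)

def count_warnings(build_log: str) -> int:
    """Count lines in the build log that look like compiler warnings."""
    return sum(1 for line in build_log.splitlines() if WARNING_RE.search(line))

def check_ratchet(build_log: str, baseline: int) -> tuple[bool, int]:
    """Return (ok, new_baseline): fail if warnings grew, tighten if they shrank."""
    current = count_warnings(build_log)
    if current > baseline:
        return False, baseline   # new warnings introduced -> fail the build
    return True, current         # fewer (or equal) warnings -> ratchet down

log = "main.c:12: warning: unused variable 'x'\nmain.c:40: note: see declaration\n"
print(check_ratchet(log, baseline=2))  # (True, 1)
```

The ratchet sidesteps the consensus fight: nobody has to fix old warnings, but nobody gets to add new ones either.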
> Organizations don't optimize for correctness. They optimize for comfort
...do I need to say it?
Stopped here. That pattern.
I recognize this pattern from this AI "companion" my mate showed me over Christmas. It told a bunch of crazy stories using this "seize the day" vibe.
It had an animated, anthropomorphized animal avatar. And that animal was an f'ing RACCOON.
- It is not X. It is Y.
- X [negate action] Y. X [action] Z.
The titles are giveaways too: Comfort Over Correctness, Consensus As Veto, The Nuance, Responsibility Without Authority, What Changes It. Has that bot taste.
If you want I can compile a list of cases where this doesn't happen. Do you want me to do that?
Your efforts to improve quality could be vetoed by your coworkers for a variety of reasons: they don't care, they don't trust your judgement, they see other things as a higher priority... the list goes on and on. Some of these things can't be changed by you, but some can, and that's where the soft skills come into play.
That's only marginally sped up even if you could generate the code with a click of a button.
This was somehow related to the "social activity" part :D
If it was better specified I'd be done already, but instead I've had to go back and forth with multiple people multiple times about what they actually wanted, and what legacy stuff is worth fixing and not, and how to coordinate some dependent changes.
Most of this work has been the oft-derided "soft skills" that I keep hearing software engineers don't need.
Bad advice given to them:
> The standard advice is always "communicate better, get buy-in, frame it differently." [...] The advice for this position is always the same: communicate better. Get buy-in. Frame it as their idea. Pick your battles. Show, don't tell.
That sort of naive kindergarten advice is how people want things to work, not how they usually work. Literally the only functional part of it is "pick your battles". That one is necessary, but not sufficient. The listed advice will make you be seen as a nice, cooperative person. It is not how you achieve the change.
So OP comes to the "the problem isn't communication. It's structural." conclusion.
You're right that organizations do often become consensus-driven. It's a failure mode, not something to which we should aspire. And we certainly shouldn't tell people to deal with a shortfall of authority in an organization by becoming social slime balls that get their way through manipulating emotions and not atoms. People who advise doing this ruin good technologists by turning them into middling politicians.
"Disagree and commit" is a good thing. Escalating disagreement to a "single threaded owner" for a quick decision is a good thing. It avoids endless argumentation and aligns incentives the right way. Committees (formal or not) diffuse responsibility. Maturity is understanding that hierarchy is normal and desirable.
> The "soft skills" framing is wild. You're supposed to learn to communicate your way out of a structural problem. Like taking a public speaking class to fix a broken org chart.
If learning to communicate well wouldn't fix a structural problem, then communicating well wouldn't fix it either.
As years pass, you are judged against the standard you set, and if you do not keep raising that standard, you start being seen as average, even if you are performing the same as when you joined.
I've seen this play out many, many times.
When an incompetent person is hired, even if issues are acknowledged, if they somehow stay, expectations of them are set to their level. The feedback stops, because if you complain about the same issues or the same person's work every time, people start seeing it as a you problem. Everyone quietly avoids them, so the person stays.
When a competent person is hired, it plays out the same. After 3/5/10 years, you get the same recognition and rewards as the incompetent person, as long as you both maintain your respective levels.
However, I've seen (very few) people who consistently raised their own standards and improved their impact and they've climbed quickly.
I've seen people lowering their own standards and they were quickly flagged as under-performers, even if their reduced impact was still above average.
Basically manager asks me something and asks AI something.
I'm not always following so-called "common wisdom". I might decide to use a library or framework that AI won't suggest. I might use a technology that AI considers too old.
For example, I suggested writing a small Windows helper program in C, because it needs access to WinAPI; I know C very well; and we need to support old Windows versions back to Vista at least, preferably back to Windows XP. However, AI suggests using Rust, because Rust is, well, today's hotness. It doesn't really care that I know very little Rust, and it doesn't really care that I would need to jump through certain hoops to build Rust for old Windows (if that's even possible).
So in the end I suggest to use something that I can build and I have confidence in. AI suggests something that most internet texts written by passionate developers talk about.
But the manager probably has doubts about me, because I'm not a world-level, trillion-dollar-worth celebrity, I'm just some grumpy old developer, so he might use AI to question my expertise.
Maybe he's even right, who knows.
- Luke 4:24
It's why people often trust consultants over the people inside the organization. It's why people often want to elect new leaders even if the current leaders are doing a decent job.
The baby almost always gets thrown out with the bath water.
https://en.wikipedia.org/wiki/Don't_throw_the_baby_out_with_...
Given the standard advice to job-hop every 1-3 years, and the intern/co-op work pattern of semester-long stints, is this not just a structural consequence?
Do you gain competitive advantage as a company with longer tenures? Or shorter, even?
Or is it an attitude problem, compare with old people planting shade trees:
“Codebases flourish when senior devs write easily maintainable modules in whose extensions they will never work”
At other places I've worked, it is usually another engineer throwing a spanner in the works. Smaller companies have a lot of pets in the code and architecture. But if you avoid the pets, you can change things.
I’m confused. The polite way to say no at work is to make it about not having time.
But if your idea blows out the quarter, it had better be game-changing!
How about "have you tried unionizing?" Because the common theme here is lack of respect which is ultimately limited by your own bargaining power. That means it's only your individual value against the collective will of the company, and the individual is going to lose that fight more often than not (with very rare exceptions for extremely talented and smart people who won the life lottery who are smarter than everyone at a company).
That's a very strong foundational claim right at the start. And in my experience, a completely false one. Which makes the whole argument that follows it completely unsound.
Also, the author seems to treat the terms "consensus" and "buy-in" as synonymous. They're not, and the distinction can make a huge difference in how healthy teams operate. Patrick Lencioni covers this well in his classic book, "The Five Dysfunctions of a Team".
Can you explain more? I'm not familiar with that distinction, nor that book.
EDIT: I asked ChatGPT, and it came up with this [0]. Please let me know if it's accurate (I don't necessarily dislike LLMs, I just think they're wildly oversold, and also value human input).
0: https://chatgpt.com/share/69a05ce2-95e4-8006-ae56-bd51472894...
If my memory serves correctly, the book puts a bit more emphasis and nuance on this point: for buy-in to exist in the first place, all team members who have a strong opinion prior to a decision need to feel that they've been given an opportunity to voice that opinion, that it has been heard, and that its merits and demerits were weighed earnestly against all other options on the table before a decision was made by the team (or, in the case of a stalemate, by the ultimate decision-maker), and that there is a clear rationale for going with the final decision and rejecting the others. The rationale may well acknowledge the risks of not following other opinions, but cite other operational reasons for the decision.
A healthy team then should proceed as if this was a team decision that they all commit to. But consensus isn't required, only buy-in. In a healthy team, all members should trust each other (in that, they are all working towards a common purpose), and not fear "creative conflict". This enables them to share their opinions in the first place, and the buy-in discussion helps everyone in the team to feel accountable for the joint decision, as well as hold each other accountable in a healthy way. If this doesn't happen, the usual outcome is that decisions are made, but people feel "I didn't agree with this decision and nobody asked me, so I won't really engage with it".
I really do recommend the book. It's a very light read, presented as a story (with a final chapter examining the concepts a bit more theoretically), with excellent insights. You can probably find it online for free if you must, but it's a good book to have on any bookshelf.
So, if I understood correctly, he is complaining that his architectural advice to other teams/people was constantly ignored, and his solution is the same thing he was complaining about.
i.e., the teams he was advising also thought authority should match responsibility, so they did what they wanted and ignored him?
> Authority matching responsibility. That's the only fix I've seen work. Either you get decision-making power that matches the decisions you're already making, or you find a place that treats your judgment as an asset instead of something to manage.
I don't think the solution is to become some kind of dictator. And I don't think it's about not valuing your judgement.
The key issue is a fundamental misalignment of core values. In the examples given, the culture is such that quality is not the highest priority. A system based on consensus only really works if core values are shared, or there will always be discontent. Consensus won't work under these circumstances. You'll never be able to 'trust' your colleagues to 'do the right thing'.
If you care about quality, you have to look for another organisation, and ask a lot of questions about how they assure quality.
Agreed, but my main frustration is what glitchc wrote a few comments down: "No one actually claims their product is crap and quality doesn't matter."
I have never met anyone in management who will admit that they value velocity over correctness and uptime, but their actions do. If you want to optimize for velocity, growing your user base, expanding your features, that's fine - but you need to acknowledge that you're making a trade-off in doing so. If you're a solo dev, or working at an extremely small shop with high trust, it's possible that you can have high velocity and high quality, but the combination is vanishingly rare at most places.
Not the framework you developed. Not the fact that your work powers millions of users. To them, you're just a replaceable worker bee, only needed when something breaks. Architectural decisions are made from their anecdotal experiences, and it's just rock, paper, scissors all over again.
And when shit blows up right in their faces, it will not be about their judgement or lack thereof - it will be about how you didn't communicate about the issue properly. It will always be you who will be under the bus. And then the bunch of these clowns go and vibe code some stupid-ass product and sell it to gullible investors "wHo NeEds EnGiNeErs?"
And then you read about how 1000s of users' information went public all over the internet post their launch...the very next day.
/endrant
Go beyond identifying all these problems towards solving them. Choose a small problem, where you won’t have to fight and argue, just a little dust bunny you can sweep out of the way. Do it again, and again, and again. This is how you build trust. As you build trust, it becomes easier to seek change.
Additionally, you may also find that not all the little problems are worth solving, and what’s more interesting are the bigger problems around product-market fit, usability, and revenue.
The TFA author (and I) have wildly different motivations from you. I don't know the author, but I have said verbatim much of what they wrote, so I feel like I can speak on this.
Beyond the fact that I recognize the company has to continue exist for me to be employed, none of those hold the slightest bit of interest for me. What motivates me are interesting technical challenges, full stop. As an example, recently at my job we had a forced AI-Only week, where everyone had to use Claude Code, zero manual coding. This was agony to me, because I could see it making mistakes that I could fix in seconds, but instead I had to try to patiently explain what I needed to be done, and then twiddle my thumbs while cheerful nonsense words danced around the screen. One of the things I produced from that was a series of linters to catch sub-optimal schema decisions in PRs. This was praised, but I got absolutely no joy from it, because I didn't write it. I have written linters that parse code using its AST before, and those did bring me joy, because it was an interesting technical challenge. Instead, all I did was (partially) solve a human challenge; to me, that's just frustration manifest, because in my mind if you don't know how to use a DB, you shouldn't be allowed to use the DB (in prod - you have to learn, obviously).
I am fully aware that this is largely incompatible with most workplaces, and that my expectations are unrealistic, but that doesn't change the fact that it is how I feel.
I also share some of your philosophy — life is too short for us not to find joy at work, if we can. It’s a lot easier to find that joy when the team’s shipping valuable software, of course.
Business outcome comes first, and it is only rarely aligned with technical excellence. Closing a deal might involve making an unreasonable promise, and implementing it might not require more than an ugly hack, so you go with the ugly hack and make the money.
Comfort can be important, but many people don't perform well when comfortable, so the organisation has to add some degree of confusion and pressure to keep them at a productive equilibrium where they neither fall into apathy nor burst into flames.
And yes, the boss decides, not because they are especially accountable or responsible, but because the power comes from ownership. In some organisations this is veiled and workers get a say most of the time, but in a pinch it'll be the higher-ups that actually have that power.
I literally had this discussion with my boss yesterday. I spent time writing up what I already knew to be true (we have systemic issues which are unsolved, because we only ever fix symptoms, not root causes), replete with 10+ incidents all pointing to the same patterns, and was told I need to get the opinions of others on my team before proceeding with the fixes I recommended. “I can do that, but I also already know the outcome.”
> Responsibility Without Authority
This. So much this. Every time I hear someone excitedly explain that their dev teams “own their full stack,” I die a little inside. Do they fix their [self-inflicted] DB problems, or do they start an incident, ask for help, and then refuse to make the necessary structural changes afterwards? Thought so.
Some organizations do in fact optimize for correctness, and some people are good at it.
Some people are good at everything (totally possible; the universe doesn't care about keeping dichotomies). Maybe that technical guy was only technical up until now because that was what added the most value. People often don't consider that.
Right now, we're seeing some small changes in value dynamics. It makes us foster those (mostly pointless) meta-conversations about what organizations are and how people fit in them. But the truth stays the same, both are incredibly diverse.
* Their codebase is written in something relatively obscure, like Elixir or Haskell.
* They're an infrastructure [0] or monitoring provider.
* They're running their code on VMs, and have a sane instantiation and deployment process.
* They use Foreign Key Constraints in their RDBMS, and can explain and defend their chosen normalization level.
* They're running their own servers in a colo or self-owned datacenter.
And here are some anti-signals. Same disclaimers apply.
* Their backend is written in JS / TS (and to a somewhat lesser extent, Python [1]).
* They're running on K8s with a bunch of CRDs.
* They've posted blog articles about how they solved problems that the industry solved 20 years ago.
* They exclusively or nearly exclusively use NoSQL [2].
0: This is hit or miss; reference the steady decline in uptime from AWS, GitHub, et al.
1: I love Python dearly, and while it can be made excellent, it's a lot easier to make it bad.
2: Modulo places that have a clear need for something like Scylla - use the right tool for the job, but the right tool is almost never a DocumentDB.
Look at any high quality open source software, and the care people put into them. Those are organizations, made up of people, some of them highly technical.
Startups often don't optimize for correctness. They can't afford to. But that's a niche. Funny enough, it's the one most affected by the shift in value dynamics right now, so I understand that some people here might see the world as just this, but it isn't.
and then the blame can be shifted to future generations; it's their incompetence, after all.
> Correctness wins when the cost of ignoring it becomes impossible to miss: an outage, a customer complaint, data loss. Until then, comfort wins every time.
Those who tolerate comfort-winning aren't engineers and shouldn't be allowed anywhere near engineering systems, especially outside the software industry.
Insert fire writing gif here.
some situations are just fundamentally broken.