In my opinion the universal variable across all of these is risk, and everyone grasps what we mean when we say the word "risk".
There is delivery risk: will we get this out in a timely fashion for the market?
There is operational risk: will this fall over if we get 10x users, or if someone looks at it funny?
There is market-fit risk: are we building the right thing for the customer?
Framing these conversations with the business as exercises in risk analysis and management is, in my opinion, a fundamental part of leadership.
It's not really about finding the right metaphor; it's about lacking a common, shared unit of account.
Risks can't really be factored into decision-making unless you can measure them. They're not risks otherwise; they're just black swans waiting to happen (or not).
Technical debt and code coverage "as risks" can't be factored in either. Instead of trying to cargo-cult the way "the business" talks, or coming up with yet another metaphor, we should be coming up with better ways to measure these things so that they can be plugged into an Excel spreadsheet.
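For illustration, "plugging risk into a spreadsheet" can be as crude as probability times impact. A toy sketch, where every risk name and number is invented for the example:

```python
# Toy expected-loss model: each risk becomes a line item of
# (annual probability, cost in dollars if it happens). All numbers
# here are made up for illustration.
risks = {
    "prod rollback":        (0.30, 25_000),
    "data-loss incident":   (0.02, 400_000),
    "missed launch window": (0.15, 120_000),
}

# Expected annual loss per risk, and the total: the shared
# "unit of account" is simply dollars.
expected_loss = {name: p * cost for name, (p, cost) in risks.items()}
total = sum(expected_loss.values())
```

Even a model this crude gives the business a number to argue with, which is the point.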
This is done incredibly badly right now. Most measurable code metrics that proxy things we care about are downright terrible proxies (e.g. test coverage). In place of working metrics, most businesses (in my experience, at least) rely on guesswork and trust in high-level executives.
They can be measured. Lots of companies don't, which is mind-boggling.
When a release goes sideways and has to be rolled back, you figure out why. Ah, you launched a feature that revealed an intersection of edge cases your testing missed? That's n developers * m hours * p dollars of blended dev salary down the drain.
When your feature delivery slows to a crawl over time, you dig in – your programmers aren't getting worse over time... are they? No, you find that a ticket that took your average developer x hours to deliver at the beginning of development now takes 2x or 3x. Make a value stream map and you'll find that your test suite has become so sluggish that your developers can't iterate quickly, that QA now measures regression time in days, not hours, and that as a result your developers are taking on less work to compensate. That's x dev hours * y blended rate in ongoing waste, plus the value of missed sales because of missed features, if you really want to put a fine point on it.
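The back-of-the-envelope math above can be sketched directly; every figure here is hypothetical:

```python
# Hypothetical team and slowdown figures, purely illustrative.
devs = 8                        # developers on the team
tickets_per_dev_per_week = 5    # average throughput per developer
extra_hours_per_ticket = 4      # a ticket that took 4h now takes 8h
blended_rate = 90               # fully loaded dollars per dev hour

weekly_waste = devs * tickets_per_dev_per_week * extra_hours_per_ticket * blended_rate
annual_waste = weekly_waste * 48  # roughly 48 working weeks per year

print(f"${weekly_waste:,}/week, ${annual_waste:,}/year")  # → $14,400/week, $691,200/year
```

The specific numbers don't matter; what matters is that the output is in dollars, the unit of account the rest of the business already uses.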
> It's not really about finding the right metaphor it's about lacking a common, shared unit of account.
Name the local currency you get paid in – dollars, Euros, pesos. That is the shared unit of account. If you don't care about it, start walking up the org chart. You won't have to go far before you realize that's what actually matters, and that engineers who can translate technical risks and inefficiencies in their world to dollar values are highly valued.
I've seen it frequently slow feature development down, though.
But... I've also seen a lot of rewrites fail to improve feature development speed.
So until we, as a discipline, can quantify and predict development speed w.r.t. shitty vs good code, it's going to be a tough conversation that'll rely on persuasion and gut estimates.
We normally can't even predict how expensive (time-consuming) doing that rewrite that we want to do would be! Let alone the benefit!
I've built a software business over the last 15 years using this exact communications theme, and whilst my little business is a sample size of 1, we've certainly found that framing conversations with customers in terms of "risk" has kept us all on the same page.
I'd also add that by adopting this communication style, one can then look at opportunities in a new light as well. On that note, I was lucky to read a book called "IT Risk" (by Westerman & Hunter, HBS Press [0]) back when I first started the company, and it gave me an interesting perspective on risk.
In a nutshell, once you identify and minimise/eliminate all the usual risks (many of which you identified in your list), you can then reorient your business in such a way as to start actively taking risks which stand to improve your overall offering.
This in turn allows you to build a strategic moat of sorts, because whilst your competitors are still scrambling to address the usual risks, you're actively taking on opportunistic risks which at times reap tremendous rewards.
[0] https://www.amazon.com/Risk-Turning-Business-Competitive-Adv...
On the other hand, in no small number of cases, you'll be working with people who get all the upside when things go right and successfully blame the tech side when they don't. Their interests don't align with the interests of the business either, but since the business side consists of their bros, they all instinctively align on that.
So you keep talking about "risk", and that gets portrayed upstairs as "I am doing my best, but so-and-so here is throwing technical minutiae at me, which is slowing us down."
They are used to being graded on the curve where your position gets better if everyone else does worse and taking advantage of that system.
That's among the many reasons I never graded on a curve, but you can't opt out of it with upper management in business.
I say this as a person who believes business cost/benefit calculations trump everything else. However, decisions must be made by people who understand the tradeoffs and are accountable.
Perhaps we should go with "needs repair" or something like that?
"Risk" feels like insurance territory (which goes along with "there is always some risk"), and a lot of people beautify the notion of taking risks to get higher gains.
There are laws against bad cars; there are no laws against bad software.
That said, I feel that car shops just work because, well, it's the law, especially in some countries.
In the end "needs repair" is the same as saying "mitigating accident risk".
And if you have a 15-year-old car with a clapped-out motor, you are not going to achieve modern safety and fuel efficiency by swapping the worn-out motor for a NOS replacement, so there's a point where investing money in the old solution isn't going to move you into the future you need.
The real enemy I ran into was the SVP's desire to please the CEO, who wanted to please the board, who wanted to please the investment-PR racket and make sure we could tell Gartner that X feature would be ready by Y date to ensure we were included in their Magic Quadrant. The answer to the bosses had to be "yes, it will be released by Y date". I saw that pattern repeated, realized "Agile" was just a word for waterfall, and quit for a startup.
http://web.mnstate.edu/alm/humor/ThePlan.htm
This poem is my favorite description of this pattern, because it focuses on the bad communication that creates the problem. Regardless of the intent of the people involved, gradually filtering out important information at each level as people try to please their superiors guarantees a GIGO mess for the people at the top. As the poem says, "this ... is how shit happens".
Adopting something like the airline industry's "no-blame" culture that focuses on getting accurate reports by explicitly not focusing on blame might help avoid the natural tendency to eventually fall into this pattern.
That is, in that environment the downside of accepting risk tends to outweigh the upside of achieving a goal at the cost of running over schedule.
This is one of the ways to learn how to use language that gets the point across to the business.
“Son, I’d like to work on that painting forever, because for me it’s never finished. But I have to stop and sell it because I have to feed the 3 of you children”.
His sole source of income was his art, and we grew up not wanting anything, because he was pragmatic enough to understand at some point he had to ship his art to survive - even if to him, it was not completed.
That’s why when I heard Steve Jobs say “real artists ship”… I immediately understood at a visceral level what he meant. And iPhones over the years have had countless bugs.
Upper management values – and pays accordingly – people who go beyond that. Learn some finance, learn to mentor others, learn to talk with a customer and understand what they're really after. You'll gain trust and find that you're being pulled into things far earlier in the development pipeline, which often puts you in a place to build better products.
Or, possibly upper management is just clueless to the cost of the bugs that are shipping. But you still probably won't get far if you just say "this is too buggy" and can't show them why they're wrong and you're right about it harming the business. Maybe you can persuade them if you go get some data. Maybe not.
Ultimately, if the kind of software they want doesn't line up with the kind of software you want to build, don't expect to get rewarded for constantly complaining about it. You're probably better off finding a place where you align better in terms of what type of software they want to ship.
>Upper management values – and pays accordingly – people who go beyond that
They love when many do that while they pay accordingly only to the very few in order to entice the others.
You apparently have never been sued for your manager's decisions.
Here is how that was for me:
My manager released something I worked on, despite me saying that it would need a few more months of work. Apparently that would have made it unprofitable, so he disregarded my advice and they released it anyway. I joined the company when this project was already a year late, so it was not surprising that it was close to costing more than the customer would pay.
Except for a few interns, I was the newest dev in the company, so I didn't push super hard for more work on this project. However I was the most experienced dev on the team, so I should have pushed more.
Or I should at least have gotten it in writing that they decided against my recommendation.
When it eventually crashed and caused a 30-minute outage, I was fired.
Why? Because it was my code that was the problem, of course. Never mind the fact that I told them it wasn't ready, and that the error wasn't even in code that I WROTE. Some intern wrote that part. I wasn't a lead or a manager and had nothing to do with it.
They then tried to sue me for more money than they had ever paid me. Thirty minutes of factory outage can be very expensive, which they of course tried to recoup out of my bank account.
I was very lucky that I had a good lawyer. Actually I could only afford a very junior lawyer, but he was sick on the day of trial and asked his professor to go instead. I was probably super lucky with that one.
Also, not having agency about how you spend your energy eats away at the soul. That's more subjective, but I'll do a way better job with way less effort if I am a primary decision maker of what work I'm doing instead of an intake machine.
If your code blows up, it will come back to haunt you. Now you get to fix your code, except it involves a bunch of other teams. Maybe that's what you want, but most likely not.
That being said, if you're at said company/have said manager, you should get a new job. I hear the market is hot.
Upper management doesn't go around looking at what individual people are doing. They just see that Team X as a whole keeps having problems, and conclude that Team X is literally sitting around doing nothing.
There's no need to talk to people and communicate. Just fire them. There's no other possible reason they're having problems other than lazy engineers not doing anything.
The first one is called requirements gathering (the discipline: requirements engineering). I was wondering how far I'd have to read before someone mentioned it.
It can be as simple as writing down a list of things the software has to do. It could be one afternoon brainstorming meeting, to start. But we don't even bother. Why? Because this discipline is not engineering, not science, not manufacturing. On my darker days, I think we are just playing in the mud.
I am grateful to work on virtual machines and compilers. They at least have functional requirements--an input specification--and a well-specified target machine. The rest in the middle is a fun design exercise, but there is a crucible at the end of the day. If you can't run the programs without bugs, you have failed. That makes it easy to put functional correctness first and performance second, as it should be. We need to find more ways of translating problems into requirements in order to reduce black art to science to practice.
Requirements gathering works wonderfully for internal projects with a fixed set of stakeholders and a well defined problem with measurable outcomes. Product development is a whole different world whose outcome is the much more vague "generate profit for the developers". Particularly in the realm of B2B software.
You need to find some local minimum that will meet the needs of thousands of organizations in order for development to be worthwhile.
You need to get many busy people who don't know you to actually talk to you.
You need to talk to multiple different stakeholders within the organization.
And even after you group all your stakeholders into some kind of user personas, you still haven't accounted for what the larger organizational behavior will be, as B2B products generally require large-scale buy-in.
And at the end of the day, no matter what people say they will do, the true test is when a company actually opens their coffers and pays your invoice.
We've arrived down here in the mud through years of experience of doing requirements gathering, building to a spec, and then finding out that we built the wrong thing and nobody will pay us for all our hard work.
Sometimes you really do have to build it to see if they will come. And the more formalized (ie expensive) you make the requirements engineering process, the more compelling the iterate and start getting paying customers asap plan becomes.
That only works when performance isn't a core deliverable. In my area, functionality and performance have to be co-developed, because it is too expensive to build something merely functional and make it fast later.
I guess if your work has no consequences, correctness can be discarded or rendered a secondary criterion for your programs.
It’s OK to not have every feature implemented. But it’s not OK to ship half-baked, untested functionality, no matter what the “code” looks like.
It also reads like the Agile manifesto from 20 years ago, which caused a huge splash when it was first published - before it was misinterpreted by everybody who might have been able to actually apply it to obtain results as "do exactly what you've always been doing (especially in terms of fixing a delivery date long before you describe what the software is supposed to do), but use terminology like 'sprints' and 'standups' to do it".
I know you were just posting a funny Dilbert quote but I don't respect Scott Adams anymore so I was triggered, please accept my apologies.
If you release a really shitty product and your customer doesn't have a choice (has no competitor, made a large upfront payment, has fallen into vendor lock-in, etc.), you don't have to respond.
You can translate an unhappy customer into a compliant one by putting up barriers to the reporting and documentation of the issue. Make automated resolutions that don’t quite fit the situation, put the issue into a ticketing system that never addresses the issue, give employees roles that either overlap with each other or don’t intersect at all over the customers issue, etc.
These are the situations that Scott Adams parodies with Dilbert.
Not in my experience. Our team has a couple core perf metrics that are alarmed, like page load and missed frames, but it's easy to do really bad things that won't trigger the alarms. Or do them such that the automated tests are using different content for the pages they test than the real users who will see the commits weeks later. E.g. someone commits a change to feature x that locks up the screen, but the test user pool never uses feature X, or never puts content into it and just sees the empty state screen.
It's quite common for developers here to write stuff that works for 99% of users but falls over otherwise. Like today, I fixed an issue where tapping a button to go from one screen to another really fast, under a second, crashes the app because of a race condition. Testers aren't going to notice that; it just shows up in the company's overall crash rate, which is spread across 4000 developers. Automated UI tests caught it, but the responsible team had just filed the crash's stack-trace JIRA into their backlog and left it to sit for months. Similarly, today we had a production issue because someone wrote code that only works for users who had already accepted a certain terms-of-service screen.
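One common guard for that kind of double-tap navigation race is a cooldown on the tap handler. A minimal, platform-agnostic sketch; the class and parameter names are made up for illustration:

```python
import time

class TapGuard:
    """Ignores repeated taps that arrive within a cooldown window,
    a cheap guard against fire-twice navigation races."""

    def __init__(self, cooldown_s=1.0, clock=time.monotonic):
        self.cooldown_s = cooldown_s
        self.clock = clock          # injectable clock, useful for testing
        self._last_tap = None

    def allow(self):
        """Return True if this tap should be handled, False if it
        falls inside the cooldown window of the previous tap."""
        now = self.clock()
        if self._last_tap is not None and now - self._last_tap < self.cooldown_s:
            return False
        self._last_tap = now
        return True
```

A real fix would also make the navigation itself idempotent, but a guard like this catches the easy case without touching the navigation stack.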
Shipping a feature is rewarded heavily. Not screwing up the app for edge cases and perf and people who have to implement the next feature after you? Good test coverage? Not at all. If you dare to give an estimate that includes full test coverage, PMs will just take you off the project and pick a developer who doesn't do that.
This is wrong. You can and should deliver "by the date", but you should also prioritize ruthlessly, and ship at least the top of most-important-features list with reasonable quality when time runs out.
Of course, this implies that both managers and engineers have to understand deeply, what value each feature brings, and communicate among themselves and with their client continuously.
The list of new feature requests from product is literally a mile long. So where do you draw the line? And once you draw the line, how can you justify pushing off everything else that needs to get done while you harden and make sure that what you have decided to ship is actually ready to go?
The problem statement resonates with me, but I’d love more help on the solution. What is a viable way to get out of date driven development? Please help!
Establish the core needs, what must be shipped as a basic system, and ship it. Then extend over a series of iterations where you incorporate more needs and wants. You won't totally avoid the deadline, but you will mitigate one of the deadline problems: All-or-nothing.
If you establish a deadline 1-3 years from now it is expected the entire system will be finished on that date. Well what if it's not? Establish quarterly releases (or more often, especially if you can work with the team deploying the system like if you're an internal dev shop) and aim for a first release in a few months that hits the essentials. Everything else is an extension, you may still take 1-3 years (or longer) but you'll be releasing and getting value from the system right away.
I've even done this with safety critical real-time systems in my work. We built what was necessary for flight test, but not desirable for operations (missed a few needs, a lot of wants). By the time the plane was rolling off the factory floor for release to airlines, we had all the needs covered and most of the wants. The rest of the wants were covered in the next year or so as well as some newly discovered needs. But if we'd waited to have everything done, the plane wouldn't have been delivered to customers for several more years (because flight testing would have been delayed by about 3 years).
The last one is so different than the rest. Bugs shouldn't even "ship" internally in that they shouldn't really be making it out of feature branches often for some reasonable value of often.
But back to the last one, that's something that should be decided with high confidence before the first line of code is written!
If your business is agile, it begins with the style of engagement between you and the customer. Really agile businesses don't deal with absolutes (such as fixed price contracts), instead they accept change at the core.
Accounting hates that, so most businesses aren't agile.
Then, the rational manager would also manage expectations with other stakeholders, not committing to things they cannot see the end to.
But most managers are not rational and do the complete opposite. They want to "appear" amazing managers to other stakeholders while simultaneously bringing their "ego" in discussions with team leads and senior engineers.
Nobody is happy with such managers. I think a better org structure would be managers reporting to senior engineers instead of senior engineers reporting to managers.
Senior engineers are responsible for the implementation. Managers merely act as assistants delivering messages everywhere.