Get a jar full of jelly beans, and every time you have to make a compromise (e.g. adding some hacky code or introducing some problem you'll have to fix later) take a bean out of the jar. If it's a terrible hack, then take out several. For the first few jelly beans, you won't even notice the change to the jar, and the first few compromises aren't going to be a problem for the code either.
Once the decrease in the jar contents starts to become noticeable, though, use the fraction of empty space as an enlargement factor for any estimate. So if the jar is 10% empty, consciously and vocally add a 10% additional "interest payment" on every estimate you give. If the jar is half empty, then you can double your estimates.
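The jar rule above can be sketched in a few lines. The function name and interface here are mine, not the commenter's, and it uses the simple additive rule (so a half-empty jar gives 1.5x, a slightly less generous markup than the comment's rounder "double"):

```python
def adjusted_estimate(base_estimate, beans_left, jar_capacity):
    """Inflate an estimate by the jar's empty fraction.

    A 10%-empty jar adds a 10% "interest payment" to the estimate.
    """
    empty_fraction = 1 - beans_left / jar_capacity
    return base_estimate * (1 + empty_fraction)

# A 10-day estimate with the jar 10% empty (90 of 100 beans left)
# becomes roughly 11 days:
print(adjusted_estimate(10, 90, 100))
```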
This is all very approximate and not rigorous at all, but it does help a team to visibly keep track of the amount of compromises they are making, and does surface the hidden cost to future development that gets ignored by a "normalization of deviance" bias.
At the very least, even if a team isn't allowed to change their estimates based on this model, you still get to eat a jelly bean or two every time you add a hack to your code, which can make the decision less stressful (unless it becomes an incentive to deliberately add hacks).
Realistically, something like 10-20% of the engineering resources at a company like Google or Microsoft go to servicing technical debt at an institutional level. Sometimes it's literally all an engineer does, like maintaining a deprecated system that can't be turned down for five years, or a team migrating large systems from the old and busted to the new hotness; sometimes it's just the tax on engineers doing other projects who have to work around the technical debt. It's just what happens when you have tens of thousands of engineers working for decades.
Whenever I am done with a cord I just throw it in there... it gets all tangled up with all the others. When I inevitably need one of those cords I impatiently pull it out and it makes all the other cords more tangled.
Here I am needing an HDMI cable that won't just come out easily, I have to pay off my past laziness. But I have choices/tradeoffs/opportunities here.
1) I can just hurry up, get the minimum untangled, and get back to watching TV.
2) I could untangle all of them (untangling one helps untangle the others), then wrap and label them.
3) I could untangle just the minimum, but also throw a roll of tape and a marker in there and wrap and label all future cords that go into that drawer; eventually they'll all be nicely wrapped up and well documented.
¯\_(ツ)_/¯
The reason this is often possible is that it's _less_ work to start with, compared to adding in the layers...
... and then when unexpected requirements turn up, there is less complexity to deal with, there are fewer layers to work/hack around, and refactoring those that _are_ there to be more suitable to the current set of requirements is often easier, and so more likely to fit into time constraints.
I like to think of it as:
> Just about everything is cheaper than the wrong abstraction
(With a clear nod to Sandi Metz)
We are, and it is.
>I think there _is_ a technique that is possible in a lot of situations: not putting in abstractions/complexity that don't really do any of transformation/transfer required of the underlying requirements.
That's no good, because we can pile up debt even when we don't put in "abstractions/complexity" that aren't required.
Plus, the abstractions we built a year ago (that were 100% necessary then) can impose a big debt on the requirements we have now.
Debt comes from (a) the mismatch between the cleanest/fittest implementation and what we have done instead (which will always be there), (b) things set rigidly, even if they were OK back then, and (c) changing requirements that force us to work around our imperfect and rigid implementation.
So, it's multifold:
1) We'll never be able to write the cleanest implementation (due to time/skill/etc. constraints)
2) Even if we get close to the cleanest implementation, we'll never be able to foresee all kinds of future needs (and if we try to plan ahead for "possible cases" before we know what they actually are, we're just making things more verbose and rigid if we were wrong).
3) When the new requirements actually come, we won't have time to rebuild everything, so we need to build on top of what we have, adding things that go in new directions and for which our previous structures are not ideal.
Rinse and repeat.
The only way to avoid debt is to work on smaller, constrained problems that don't change and don't have to adapt much to changing environments either (e.g. write the 'grep' program). This gives us all the time to polish our implementation, rewrite everything we want, etc.
Besides, to get the right abstraction you often need to iterate a few times. It actually requires failure.
Asking for a friend.
But bad comments (even if it's just grammar) and misspelled variables are indeed part of debt. To see whether something is debt or not, imagine it piling up everywhere in the code.
What if 60% of variables in the code base had "incorrect capitalization"? What if most comments had the "wrong grammar"?
It would be more difficult to understand, refactor, and extend the program. Well, that's debt.
Debt can be negligible when it's like one dollar, but it can creep up and accumulate if we let it.
Also think of the "broken window" theory. Sloppy comments here, mistyped vars there, send a signal that "anything goes".
For example, if all variables in a code base are camelCase but one variable StartsWithUpper, chances are that every time you reference that variable in a future change you'll type startsWithUpper, hit an error (at compile time or run time, depending on the language), and have to go back and fix it. So it adds a little time to development. Negligible, but non-zero.
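As a tiny illustration of the point above (all names here are hypothetical), in a case-sensitive language the odd-one-out name trips up exactly the habit the convention trained:

```python
# Codebase convention is camelCase, but one variable StartsWithUpper:
retryCount = 3
MaxRetries = 5  # the outlier

# A later change assumes the convention and fails at runtime:
caught = None
try:
    remaining = maxRetries - retryCount  # meant MaxRetries
except NameError as exc:
    caught = exc

print(caught)  # name 'maxRetries' is not defined
```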
Personally I would consider a stupid name or out-of-place capitalization to be tech debt if it leaks out into config parameters or something, such that it can confuse more than just the people looking at one isolated chunk of code.
(Or call it "tech necrosis", because Americans with college debt tend to view "debt" as an inevitability in life, and that's not a helpful attitude for getting stuff done.)
All tech debt, like a loan, has some interest rate. Some debt has a great rate, say 3%, and other debt has unmanageable rates, say 18%. Once we attain some stability and begin paying down our debt, it's necessary to pay down the higher-interest debt first (architecture, libraries, speed, etc.). Once we whittle our way through the high-interest debt and are left with low-rate debt, it may make sense to invest new capital into the market instead (novel product/feature development). The new investment could return 8-10% annually, which garners higher annual net returns (10-3 > 3) than continuing to pay down all debt.
TLDR: Some technical debt affects critical systems and flows and needs to be paid down immediately. Some technical debt is cosmetic or semantic, and therefore less critical; it can continue to accrue interest.
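The triage above can be sketched as a comparison against the expected return on new feature work. All the names and rates below are hypothetical, just to make the arithmetic concrete:

```python
# Hypothetical debts, each with an "interest rate": the ongoing drag
# it puts on development velocity, in percent per year.
debts = [
    ("legacy architecture", 18),
    ("outdated library", 12),
    ("cosmetic naming issues", 3),
]
feature_return = 9  # expected % return on new product/feature work

# Pay down any debt whose rate exceeds what new investment would earn;
# let the rest keep accruing interest.
pay_now = [name for name, rate in debts if rate > feature_return]
keep = [name for name, rate in debts if rate <= feature_return]

print(pay_now)  # ['legacy architecture', 'outdated library']
print(keep)     # ['cosmetic naming issues']
```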
Had me thinking about different strategies you might use in Tetris and how they mirror development.
For example, when you do the big setup to drop a vertical 4x1...and it doesn't arrive.
Classic case of YAGNI or Premature Optimisation leading to technical debt.