But if you plow through a feature and get it "working," you'll do much of that work cleaning up the logic and refactoring during your first pass. What rewriting allows you to do is crystallize the logic flow you developed the first time and cherry-pick in a more linear fashion to meet the blueprint. It also tends to reduce the urge (and the need) for larger-scale refactorings later on.
As I'm writing, I do go back and make changes as they pop into my head. But once I'm done writing it, I'm done unless I notice an obvious mistake after the fact.
To be fair, it makes everything twice as expensive. Managers are always going to reflexively push back against that, even if the new feature covers that cost and more.
The article argues it makes it less expensive to reach any specific quality level (above some threshold).
The threshold isn’t really addressed in the article, but it is implied that for any realistic quality need, the write twice approach will be cheaper.
To conclude it makes everything twice as expensive, you have to ignore any cost except the initial write. That’s not realistic.
> Rewriting the solution only took 25% the time as the initial implementation
Seems reasonable.
Unfortunately the business wants features, more of them, and if possible for free.
Silo-isation compounds this. If the maintenance costs are borne by another team, or if any rework will be funded out of a different project, the managers are not going to care about quality beyond the basic "signed off by UAT".
I should probably mention that I was doing consulting engineering here, because no employees would work for the guy...
PSA: if you are a project manager / owner or in some other similar position, you do not get to ask this. This is a personal educational exercise, not a way to get stuff done faster.
But I've also worked at places where things were underbuilt (e.g. zero test environments whatsoever except prod). If there was a gun to my head to finish something in 1 hour, I'd test in prod.
So I think advice that sometimes is useful, sometimes is damaging, isn't really helpful. Not unless there's an easy way to tell which situation is which.
"I set aside the slides for the pointless CEO presentation tomorrow and work exclusively on this."
"No, you can't cancel on the CEO. Let's say you have two guns to your head and 24 hours, what do you do?"
"I take lots of coffee, skip sleeping tonight, cancel the group status meeting for Wednesday and focus on these two things."
"If you do that we'll look bad in front of the whole group. Let's say you have three guns to your head..."
In other engineering disciplines, like say civil or architecture, this problem is solved by using a good blueprinting paradigm like CAD layouts, but I find a distinct lack of this in software[1]. Ergo this advice, which is a rephrasing of "know first and build later". But it is also equally easy to lose oneself in what's called analysis paralysis, i.e. get stuck finding the best design instead of implementing a modest one. In the end, this is what experience brings to the table, I suppose: balance.
[1] The closest I can think of are various design diagrams, like class diagrams etc.
(The calculus here is a little different when you are doing something truly novel, as long periods of downtime are required for your brain to understand how the solution and the boundary conditions affect each other. But for creating variations of a known solution to known boundary conditions, speed is essential.)
There's an enhancement in a piece of software I use/maintain that I wrote once and lost (the PC I wrote it on went kaput, and I was coding offline, so I also had no backup). It was an entire weekend of coding where I got very in the zone and happily coded.
After I lost that piece of code I never could summon the will to write it again. Whenever I try to start that specific enhancement I get distracted and can't focus, because I also can't remember the approach I took to get it working, and I can't bring myself to figure out again how it was done. It's been two years now.
I remember rewriting some piece of infrastructure once when I moved to another job, but I failed to summon the energy to rewrite it a second time at another job.
really good, this is key. building a 'vocabulary' of tools and sticking to it will keep your velocity high. many big techs lose momentum because they don't
> "for each desired change, make the change easy (warning: this may be hard), then make the easy change"
(earliest source I could find is @KentBeck on X)
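A toy sketch of what that can look like in practice (the discount example and all names here are hypothetical, not from the quote):

```python
# Desired change: support a new "vip" discount tier.

# Step 1 - make the change easy (a behavior-preserving refactor):
# replace a hard-coded if/else chain with a rule table.
DISCOUNTS = {
    "none": 0.00,
    "member": 0.10,
}

def price(amount: float, tier: str) -> float:
    """Apply the discount for the given tier."""
    return amount * (1 - DISCOUNTS[tier])

# Step 2 - make the easy change: the new tier is now a one-line addition.
DISCOUNTS["vip"] = 0.25
```

The refactor in step 1 may itself be hard, as the quote warns, but it's where the risk is contained; step 2 is then trivially reviewable.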
I love the idea of that vocabulary of tools and libraries, too. I strongly resist attempts to add to or complicate it unnecessarily.
He has some good visuals that illustrate how wrong abstractions can accumulate incorrect dependencies and become impossible to unwind.
I'd say "write everything three times", because it usually takes three versions to get it right: the first is under-engineered, the second is over-engineered, and the third is hopefully just-right engineering.
Of course, there are exceptions. ClickHouse implemented dozens of variations of HashTable just to squeeze out as much performance as possible. The algorithms used in ClickHouse came from many recent papers that are heavy and deep on math, which few people can even understand. That said, that's the exception rather than the norm.
Don't get me wrong. Having a stable list of algorithms is arguably a hallmark of modern civilization and everyone benefits from it. It's just that I started studying CS in the early 2000s, and at that time we still studied Knuth because knowing algorithms in-depth was still a core advantage to ordinary programmers like me.
1. First, write down a bunch of ideas for how I might tackle the problem, including lists of stuff I might need to find out.
2. Look at ways to break the task down into chunks that are 'complete-able in a session'.
3. Implement, in a way that the code is always 'working' at the end of a session.
4. Always do a brain dump into a comment/readme at the end of the session, to make it easy to get going again.
Pretend to be capable of doing this, and in the short moment where the other person is not attentive, get the gun and kill him/her. This satisfies the stated criteria:
> The purpose here is to break their frame and their anchoring bias. If you've just said something will take a month, doing it in a day must require a radically different solution.
> The purpose of the thought experiment isn't to generate the real solution.
:-)
---
Lesson learned from this: if you can't solve the problem that the manager asks you for, a solution is to kill the manager (of course you should plan this murder carefully so that you don't become a suspect).
:-) :-) :-)
What you should be worried about is the code that hasn't been rewritten in ten years.
[1] https://github.com/spc476/mod_blog
[2] As therapy for stuff going on at work.
>What you should be worried about is the code that hasn't been rewritten in ten years.
Why would I worry? It's been running for 10 years without significant changes. Isn't that a sign it's more or less accomplishing its purpose?
Needs shift. Expectations shift. The foundations that the code relies upon shift.
And familiarity with how things actually work inside of the black box evaporates, leaving things distressingly fragile when the foundation finally gives way.
It's like when an old dam has "stood the test of time". More and more people (and business practices) wind up naively circling their wagons around the presumption that it will remain in operation forever, and the consequences of what will happen when it finally does fail add up faster than unchecked credit card debt.
> A spike is a product development method originating from extreme programming that uses the simplest possible program to explore potential solutions.
In my career, I have often spiked a solution, thrown it away, and then written a test to drive out a worthy implementation.
0. https://en.wikipedia.org/wiki/Spike_(software_development)
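As a minimal, hypothetical illustration of that spike-then-test rhythm (the parsing example is mine, not from the comment):

```python
# Throwaway spike: the quickest thing that answers "can this work at all?"
# It ignores edge cases; its only job is to teach me the shape of the problem.
def spike_parse(line):
    return line.split(",")

# Once the spike has done its job, it gets deleted, and a test drives out
# the implementation worth keeping.
def parse_record(line: str) -> dict:
    name, age = line.strip().split(",")
    return {"name": name, "age": int(age)}

def test_parse_record():
    assert parse_record("Ada,36\n") == {"name": "Ada", "age": 36}

test_parse_record()
```

The point is that the spike buys knowledge cheaply, and the test captures that knowledge before the real implementation begins.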
Design by Contract + system tests are a far superior technique that takes less time and finds more bugs.
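For what it's worth, a minimal sketch of the Design by Contract idea using plain assertions (a hypothetical example; dedicated DbC tooling offers more, such as inherited contracts and class invariants):

```python
def sqrt_floor(n: int) -> int:
    """Integer square root, with its contract stated inline."""
    # Precondition: the caller's obligation.
    assert n >= 0, "n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: the function's guarantee to every caller.
    assert r * r <= n < (r + 1) * (r + 1)
    return r
```

System tests then exercise the whole program, and any contract violated along the way fails loudly at the boundary where the bug lives, rather than surfacing as a mysterious wrong answer downstream.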