> If Jeffries started with a different core representation, then it's likely his subsequent design decisions would also change. The bookkeeping for constraint propagation might push him towards Norvig's relational approach to the rules of Sudoku; rather than continually recomputing the rows, columns, and boxes, he could simply have a map of each cell onto its peers. He could distill every lesson of the previous posts, creating something simpler and faster.
> But Jeffries isn't in the business of starting over. He not only believes in incremental design, but in using the smallest possible increments. In his posts, he regularly returns to GeePaw Hill's maxim of "many more much smaller steps." He is only interested in designs that are reachable through a series of small, discrete steps.
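The peers bookkeeping the quote describes can be sketched in a few lines of Python (a sketch of the idea, not Norvig's actual code):

```python
# Precompute each cell's 20 peers (same row, same column, same 3x3 box)
# once, instead of recomputing rows, columns, and boxes on every pass.
cells = [(r, c) for r in range(9) for c in range(9)]

def same_unit(a, b):
    """True if two cells share a row, column, or 3x3 box."""
    return (a[0] == b[0] or a[1] == b[1]
            or (a[0] // 3 == b[0] // 3 and a[1] // 3 == b[1] // 3))

peers = {cell: {other for other in cells
                if other != cell and same_unit(cell, other)}
         for cell in cells}
```

With this map in hand, constraint propagation for a cell is just a lookup of its 20 peers.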
Jeffries has radicalized me. This sort of puttering around with "incremental design" is all too pervasive in the corporate world. In software we have the luxury of rethinking from first principles, and we must use it. Death to MMMSS.
What does this even mean? I'm aware of the meaning of "first principles," and it doesn't seem to relate to software development at all. I can imagine using it when trying to figure out how code works, but I can't figure out how it relates to building the software to begin with. What questions are you trying to answer where first principles would even come up?
> Both Norvig and Jeffries are genre programmers; they have spent most of their career solving a specific kind of problem. And that problem, inevitably, lends itself to a particular kind of solution.
I wish more people would be circumspect about the genre of problem they're trying to solve and how their framing railroads them into a particular type of solution that might have a better method.
Do you mean "explicit?" "Circumspect" seems like not what you want.
> Carefully aware of all circumstances; considerate of all that is pertinent.
"I wish more people would put more consideration of all that is pertinent about the genre of problem they're trying to solve and how their framing railroads them ..."
But re: design vs. increment, I do think incremental TDD is pretty useful in domains where you have low confidence that you could come up with a good design. If you asked me to implement an LLM today, there's a 0% chance I could design a good one. But given enough time I could slowly start to implement one (maybe?).
The two quotes that got me are
Jeffries:
> So I try to make small decisions, simple decisions, decisions that will be easy to change when, not if, a better idea comes along.
Norvig, about Jeffries:
> He didn't know that so he was sort of blundering in the dark even though all his code "worked" because he had all these test cases.
I fear someone could say this about me, about almost everything I've ever built. I guess something like "I just kept crawling and it just kept working!"
Of course, with past experience, my first 'simple' thought is usually quite practical or sensible. In the same way that it's hard to invent an inefficient algorithm that does no useless busywork, we've wired our brains toward good solutions.
Design is crucial. The optimal time for that design may be different for different things, but I do know that incremental design can, and often does, fail.
I did do that to design the UX, but for algorithms, I am not so sure. I am really bad at coding competitions, for example.
I have never mentioned it by name, I think. Only linked to it occasionally.
Edit: Another reason I am surprised is that Yore became its name only a few months ago.
But then again, you know I was referring to Jeffries' SOP of designing while coding and refactoring incrementally.
Sucks, as it is largely a dunk on the author. It really is a sobering experience to attempt something like that using what are advertised as good tools, only to fall flat on your face.
I think what people often fail to appreciate is that if you see ANY strategy work, it has almost certainly been rehearsed. Many, many times. Even when you are using exactly the right tools for the job, for things to work smoothly pretty much requires rehearsal.
And this is exactly why you do gamedays for how to react to failures. If you have not practiced it, then you should not expect success.
TFA's thesis is roughly that incremental design dooms you to a local maximum:
Since Jeffries (the TDD/Sudoku guy you seem to be aware of) starts out with a suboptimal representation of the board, there is no small change that can turn the bad code into good code. At some point along the line, he makes motions in the direction of the design that Norvig used, but since there is no incremental way to get there (maintaining two representations in parallel was a dead end, as it hurt performance so much), he never arrives.
I'm curious about the thesis. I'm assuming increments that are "locked in by tests" are the problem? Why couldn't you treat this like any other learning task, where you occasionally take what are effectively larger steps to see where they can get you?
I should also note that I'm not sure I understand how bad a representation of the board you could get locked into. I got a working solver years ago with what is almost certainly a poor representation: https://taeric.github.io/Sudoku.html
My data model was X,Y,V. Nothing nullable. Separately you need a table (possibly generated on the fly) of range 1 to 9. You wind up joining that a lot.
The whole program consisted of running an INSERT INTO ... SELECT ... in a loop until 0 rows were inserted, indicating you were either done or had hit a point where no cell had a single solution. I'll spare everyone the rest of the details.
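A hypothetical reconstruction of that loop (my SQL, not the commenter's actual code), driven from Python via sqlite3: `cells(x, y, v)` holds only decided cells, `digits(d)` is the 1..9 range table, and one INSERT ... SELECT fills every empty cell that has exactly one non-conflicting candidate:

```python
import sqlite3

# Hypothetical sketch of the described approach: loop an
# INSERT INTO ... SELECT until an iteration inserts no new rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cells (x INTEGER, y INTEGER, v INTEGER);
    CREATE TABLE digits (d INTEGER);
""")
conn.executemany("INSERT INTO digits VALUES (?)",
                 [(d,) for d in range(1, 10)])

# Candidate (x, y, v) survives if cell (x, y) is empty and no peer
# (same row, column, or 3x3 box) already holds v; keep cells with
# exactly one surviving candidate.
STEP = """
INSERT INTO cells (x, y, v)
SELECT x, y, MIN(v) FROM (
    SELECT ax.d AS x, ay.d AS y, av.d AS v
    FROM digits ax, digits ay, digits av
    WHERE NOT EXISTS (SELECT 1 FROM cells c
                      WHERE c.x = ax.d AND c.y = ay.d)
      AND NOT EXISTS (SELECT 1 FROM cells c
                      WHERE c.v = av.d
                        AND (c.x = ax.d OR c.y = ay.d
                             OR ((c.x - 1) / 3 = (ax.d - 1) / 3
                                 AND (c.y - 1) / 3 = (ay.d - 1) / 3)))
)
GROUP BY x, y
HAVING COUNT(*) = 1
"""

def run_to_fixpoint(conn, givens):
    """Insert the given cells, then repeat STEP until nothing changes."""
    conn.executemany("INSERT INTO cells VALUES (?, ?, ?)", givens)
    while True:
        before = conn.execute("SELECT COUNT(*) FROM cells").fetchone()[0]
        conn.execute(STEP)
        after = conn.execute("SELECT COUNT(*) FROM cells").fetchone()[0]
        if after == before:
            return
```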
Incomplete, I know, but it fired the neurons, particularly with respect to the utility of the EXISTS expression in SQL.
I had no idea about things like "naked pairs" at that time, but I'm sure I could extend it to support them.
It's interesting that if I translate that to a more traditional language, I independently came up with what is a cousin of Norvig's solution. I sure don't have his background; in fact, my background is probably closer to Jeffries'.
The main difference is that Norvig pre-enumerates all 9 possible values for all 81 cells then sieves them out, whereas my SQL constructs the 9*4 matrix from a temporary range 1..9 table, discovers the "must be" values, inserts those, then just repeats the process. Basically I'm Map<Coordinate, Integer> whereas Norvig is Map<Coordinate, Set<Integer>> and the algorithm is slightly different.
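The contrast can be sketched in plain Python (my names, nobody's actual code): `decided` is the Map<Coordinate, Integer> view, and each round rebuilds the Map<Coordinate, Set<Integer>> candidate view by sieving out peers' decided values, then commits every cell whose set shrank to one value and repeats:

```python
def peers(cell):
    """Cells sharing a row, column, or 3x3 box with `cell`."""
    r, c = cell
    row = {(r, cc) for cc in range(9)}
    col = {(rr, c) for rr in range(9)}
    box = {(r - r % 3 + dr, c - c % 3 + dc)
           for dr in range(3) for dc in range(3)}
    return (row | col | box) - {cell}

def solve_singles(decided):
    """Repeatedly fill cells whose candidate set shrinks to one value."""
    decided = dict(decided)
    while True:
        # The Map<Coordinate, Set<Integer>> view, recomputed each round.
        candidates = {
            cell: set(range(1, 10)) - {decided[p] for p in peers(cell)
                                       if p in decided}
            for cell in ((r, c) for r in range(9) for c in range(9))
            if cell not in decided
        }
        singles = {cell: vals.pop()
                   for cell, vals in candidates.items() if len(vals) == 1}
        if not singles:
            return decided  # solved, or stuck with no forced cell
        decided.update(singles)
```

This only handles "naked singles," matching the described SQL; Norvig's real solver also propagates eagerly and searches when propagation stalls.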
My experience agrees with the author's. Incremental design does not work. Prototyping and being willing to throw away your prototype and do something wildly different has always been the better approach in my experience.
You wouldn't believe how many less experienced engineers I help with problems by coming in and approaching the problem with a fresh set of eyes. It takes years to build the skill and willpower to incinerate days or weeks worth of work in favor of an alternative solution you can write in an afternoon. But it's not a waste! If you gained enough insight from doing it the wrong way to be able to write it the right way, then you maximized the value of your time. It is actually a waste of your time to keep iterating on a fundamentally flawed design.
From that mental model, the choice of data structure would seem to follow directly, which would tie nicely with the subthesis of programming within your genre.
https://coin-or.github.io/pulp/CaseStudies/a_sudoku_problem....
One nice thing about this approach is that by adding each solution in as a constraint and re-running, you can exhaustively enumerate all possible solutions for a given puzzle.
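That enumeration trick generalizes beyond PuLP. A toy sketch (hypothetical names, with a brute-force search standing in for the LP solver): each time a solution comes back, append a "no-good" constraint forbidding exactly that assignment and re-solve until the problem is infeasible.

```python
from itertools import product

def solve(domains, constraints):
    """Toy stand-in for an LP/CP solver: return the first assignment
    satisfying every constraint, or None if infeasible."""
    names = sorted(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            return assignment
    return None

def enumerate_all(domains, constraints):
    """Enumerate every solution by re-solving with a 'no-good' cut
    excluding each solution already found -- the same trick as adding
    a found Sudoku solution back in as a constraint."""
    constraints = list(constraints)
    solutions = []
    while (sol := solve(domains, constraints)) is not None:
        solutions.append(sol)
        # Forbid exactly this assignment on the next solve.
        constraints.append(lambda a, sol=sol: a != sol)
    return solutions

# Tiny example: x, y in {1, 2, 3} with x < y has three solutions.
all_sols = enumerate_all({"x": {1, 2, 3}, "y": {1, 2, 3}},
                         [lambda a: a["x"] < a["y"]])
```

In the binary-variable LP setting, the exclusion cut is a linear constraint over the variables set to 1 in the found solution rather than a blanket inequality, but the loop is the same.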
https://coin-or.github.io/pulp/CaseStudies/a_sudoku_problem....
This example helped me enormously in developing my understanding of how to use binary variables in an LP solver.
If the account is accurate, this Jeffries guy wasn't getting the ball through the hoop, whether or not LeBron was around.