From an example I'm currently working through on a hobby project... do I use an RS-485 transceiver with a custom line code, or do I use a 10base-T1 PHY? Ethernet, especially single-pair Ethernet, is undoubtedly more /complex/, with echo cancellation, a complicated line code, etc; but if I roll my own line code on the RS-485 transceiver, then /I own/ that complexity, whereas the PHY vendor owns theirs.
(For pure software folks, the equivalent question is internal implementation vs external dependencies. Do you implement 1D barcode reading yourself, or do you import a third-party dependency for QR code reading?)
The problem is that answering this depends not on some objective notion of simplicity, but on a realistic assessment of /where time will go/ during the development process. If development time dominates, then exporting complexity is a win. But if testing time dominates, then in-house simplicity wins, because exported complexity still has to be fully understood during testing.
And which of these cases dominates is very much a project-by-project and team-by-team decision.
"Complex comes from the Latin complecti, which means 'to entwine around, to embrace'"
Simplicity requires layering, so in your examples the main requirement for simplicity is how intertwined your hobby project becomes with the transceiver code or the Ethernet code.
As long as the abstraction layer works well for you without getting too much into the details of the implementation, it's a simple solution.
Of course there's no clear-cut answer to whether you should do things yourself or use a third party, but if the third party works perfectly for your use case without significant tradeoffs in your system, then of course it's better to use it.
But this is where the engineering intuition has to come in. "As long as you will not end up spending more time debugging the system than implementing it" is an equivalent statement -- and that requires prediction of the future. If I'm going to spend hours staring at signals on a 'scope to debug the system, I'd way rather they be RS-485 than 10base-T1, for reasons of simplicity -- but I don't know, today, if I will or not.
Layering works /great/ during implementation. Layering is a strong impediment to understanding during testing and debugging. Debugging a system efficiently requires being able to bridge between layers of the system and see where your assumptions break down. And once you're going between those layers, you're exposed to the complexity within them.
So: simplicity in implementation, or simplicity in debugging?
You can have a very large number of layers, and understanding the inner workings and interconnections of all of them becomes very hard.
I highly doubt you can equate a "set of superb interfaces" with simplicity.
Many people, when they say "do the simplest thing", really mean "do the easiest thing". That's fine if that's what you want, but if you find yourself talking past someone else who means "do the simplest thing", that's why.
I want a bagel. Is it simplest for me to start tilling the land and looking for wild wheat relatives to breed, or to drive my incredibly complex car built in centuries of industrialization to the corner store and buy (using money, one of the most complex concepts we've developed!) a bagel, bring it home in a plastic (!!!) bag, and stick it in the toaster?
If I should, during my lifetime, succeed in completing a bagel with the former, I have reasonable confidence it can't be reduced further without changing the output.
But I disagree that it's the simplest way /for me/ to get breakfast.
I don't think anyone mentions time as a proxy for simplicity. At least, the article certainly doesn't. You're right that the author doesn't objectively define simplicity, but I don't think anyone can. What is simple tends to differ from person to person and team to team, based on the skills, tools, etc., available.
I know what's simple to me. I know it may not be simple to you. I know what's simple for a team in my org, and I know it may not be simple for another team in another org. But I do know what skills someone in my position and in my org is expected to have, and I know what tools are available to us, so I can make some real assertions here about what is "simple". Worry beyond that, and you get bogged down in unknown unknowns.
This is a lot of work. And your prediction can end up wrong anyway (by your mistake or by the world changing).
How are we then to make choices? Perhaps just, if one solution seems clearly simpler (to you), then choose that. If one looks unnecessarily complex, don't choose that.
"Simpl-est" derails us perfectionist programmers. So maybe: "Do the simpler thing that could possibly work."
"You don't need to know a man's weight to know that he's fat" - Benjamin Graham.
EDIT "Could possibly work" also implies a lack of foreknowledge as to its actual simplicity, or whether it will function correctly... or at all.
In the simple case of a solo project, as much complexity as you understand is fine; in a team you obviously need some notion of a threshold, even if you can't quantitatively define it. Complexity isn't necessarily a problem. Complication, on the other hand, is always bad: it makes things hard to reason about. But it may be necessary if the only alternative is adding unacceptable complexity.
The problem with discussing 'simplicity' is that it's an antonym for both complexity and complicatedness.
8N1 as a line code introduces all sorts of other issues, assuming you're passing messages instead of byte streams over it. In particular, how do you do packetization? How do you synchronize? So many "serial interfaces" have implicit timers (corresponding to interpacket gap for ethernet) used for sync'ing, or play horrible games with embedded CRCs… there's a huge amount of hidden complexity here, especially if you do it implicitly without understanding the dependencies.
By the time you've solved reliable packetization over 8N1, you're going to have something that looks a lot more like an Ethernet-level-complexity line code.
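To make that concrete, here's a minimal sketch of delimiter-plus-CRC framing over a raw byte stream, in the spirit of HDLC/PPP-style byte stuffing. The constants and helper names are hypothetical, not any particular protocol, and a real link would also need retransmission, flow control, and addressing on top:

```python
import struct
import zlib

# Hypothetical constants, borrowed from HDLC/PPP-style byte stuffing.
SYNC = 0x7E  # frame delimiter
ESC = 0x7D   # escape byte

def frame(payload: bytes) -> bytes:
    """Delimiter + byte-stuffed (length, payload, CRC32)."""
    body = struct.pack(">H", len(payload)) + payload
    body += struct.pack(">I", zlib.crc32(body))
    stuffed = bytearray([SYNC])
    for b in body:
        if b in (SYNC, ESC):
            stuffed += bytes([ESC, b ^ 0x20])  # stuff so SYNC never appears inside a frame
        else:
            stuffed.append(b)
    return bytes(stuffed)

def deframe(stream: bytes) -> list[bytes]:
    """Split a raw byte stream on SYNC; keep only frames whose length and CRC check out."""
    payloads = []
    for chunk in stream.split(bytes([SYNC])):
        body = bytearray()
        i = 0
        while i < len(chunk):
            if chunk[i] == ESC and i + 1 < len(chunk):
                body.append(chunk[i + 1] ^ 0x20)  # undo byte stuffing
                i += 2
            else:
                body.append(chunk[i])
                i += 1
        if len(body) < 6:
            continue  # too short to hold length + CRC
        (length,) = struct.unpack(">H", body[:2])
        if len(body) != 2 + length + 4:
            continue  # truncated or garbage
        if zlib.crc32(body[:2 + length]) == struct.unpack(">I", body[2 + length:])[0]:
            payloads.append(bytes(body[2:2 + length]))
    return payloads
```

Note how much is already hiding in those forty lines: a length field, an escape mechanism, a checksum, and a policy for silently dropping damaged frames. Each of those is a decision Ethernet has already made for you.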
Yeah, it'll be a project-by-project and team-by-team decision, and that's as it should be.
https://www.youtube.com/watch?v=SxdOUGdseq4
Simple is a matter of intuition, and that can't be transmitted to others easily, or with a single class or book.
At one particular job we got punished by the business for calling things 'easy' when what we meant was that we understood the problem and all of the steps were (mostly) known. Our boss coached the hell out of us to say 'straightforward' when we meant 'understood', instead of using 'easy' as an antonym for 'quagmire' or 'scary'.
Certainly there are some things I've just forgotten, and others I just wasn't ready to hear.
Liabilities. Take the Windows EULA: it's a contract stating that MS is not liable for anything, and standard software contracts say the same. So it boils down to being able to prove negligence, which can be sued over.
For example, do you trust the suppliers? If they are in a different country, what's the chance of legal recourse even if negligence can be proved, given possible political interference when the entity is valuable enough?
So yes, I agree: how do you assess simplicity? As Billy Gates would say... it's complicated!
Simple isn't the same as easy, and it isn't always obvious where the complexity is. One should beware of "simple" solutions that either hide the complexity, or shove it someplace else. The skill is to identify and minimize unnecessary complexity, which is another way of phrasing "Do The Simplest Thing That Can Possibly Work".
Something we can ship very fast; then we can add the banners, tracking for marketing, account creation, user ratings, community forums, results commenting and sharing, image carousels, and a mobile app with push notifications that the results changed. You know, the regular MVP stuff.
So many people think agile means waterfall using sprints.
The greatest example of this is Unix.
Multics was a huge project that failed (initially). Bell Labs washed their hands of it and didn't want anything to do with operating systems again.
Ken Thompson wrote an initial scrappy version of Unix in 3 weeks. Rewriting it in C was a tremendous move because it meant that Unix could be ported easily to many other systems.
I heard someone say that the genius of Dennis Ritchie was that he knew how to get 90% of the solution using only 10% of the work.
I'm working my way through Unix Haters Handbook [1], and it's a good read, even for someone like myself who really likes Unix.
Unix and C are the ultimate computer viruses -- Lawrence Krubner
Unix first appeared on a PDP-7 (not PDP-11). PDP-7 was pretty old even by the standards of the time.
"Originally, UNIX was written in PDP-7 assembly, and then in PDP-11 assembly, but then when UNIX V4 began to be re-written in C in 1973 and was run mostly on the PDP-11.
So far as I can tell, there is no Ancient C compiler that targets the PDP-7, nor any provision for running UNIX V4 or later on the PDP-7" [0] The link also contains some other interesting commentary.
I seem to recall that Thompson wanted to write code in Fortran.
I'm probably getting a few details wrong. The systems were extraordinarily constrained, something like 4K of RAM. "++" exists because it was more concise than "+= 1" (although K&R C uses "=+ 1", I think). They really wanted to make every byte count.
[0] https://retrocomputing.stackexchange.com/questions/6194/why-...
As a relatively senior software developer, I'd say don't worry about it too much. The article accepts that reducing complexity is hard, and it's ok if you can't make it any simpler. Try not to add intentional complexity when you can, because statistically speaking, YAGNI.
This industry is full of clowns trying to upsell things that nobody needs, just don't fall for it.
And then find yourself surrounded by tech debt and a system that was cobbled together, not designed.
This takes a ton of discipline, but in my experience the only alternatives are to either build up a ton of tech debt, or build things extremely well from day 1, only to end up dying due to low velocity (even if you get some critical decisions spot on in the beginning, no PM or engineering team that I've ever seen has been able to make only good decisions over several years...).
A lot of software lacks a clear plan: a big patchwork of local-maxima commits that won't get you where you need to go.
So I say go ahead and draw some pretty pictures. What's the overall vision here?
If you're working on things that are intended to be short lived, then just do whatever is needed to get the job done and move on. If you're working with something where you know there's a good chance it'll be around for some time, then every once in a while, someone will have to take on the role of saying "no, we're not gonna do the simplest possible thing right now".
And it's not really about the fallibility of people. Often in engineering you can be designing in a space with a lot of unknowns, that simply can't be resolved without building out a bit to explore the space more. In such case some level of future proofing is warranted.
I'm kind of suspicious of adages like these that assume perfect information.
Sometimes the difficulty in distinguishing what the simplest thing could be comes from being in a group setting where people have equal say in the matter.
I think everyone has personal anecdotes to support the idea of doing the simplest thing suitable for that moment. But how to convince the group? I'm not sure, I don't always succeed.
A situation where I did do just the simplest thing is when I was asked to use project management software and a build server for a very early stage project with only myself as a developer. I declined. Instead I made a script to compile and package everything and emailed that to the others. We used an instant messenger for communication. It worked great for the early stage when the focus is on the MVP, though the project didn't go anywhere due to business reasons.
It will always still be possible to use the project management software and build server later. But it wasn't necessary at the very start.
I do think that many people make the wrong tradeoff in terms of complexity to features ratio though.
Even "passes all the tests" isn't a great definition. What are you testing?
For example think about build systems. "Works" could be "builds everything correctly" in which case the simplest thing is just a shell script with all the commands written out.
That's obviously terrible, so then "works" becomes "doesn't unnecessarily repeat work" and you end up with Make.
But then Make doesn't scale to large monorepos with CI so then "works" becomes "and doesn't allow undeclared dependencies" and you come up with Bazel.
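Make's central trick can be stripped down to a few lines. This is a hypothetical helper sketching the idea, not Make's actual algorithm: rebuild a target only when it's missing or older than one of its declared sources.

```python
import os

def needs_rebuild(target: str, sources: list[str]) -> bool:
    """Make's core rule: rebuild if the target is missing or any source is newer."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(src) > target_mtime for src in sources)
```

Note that this sketch already contains the failure mode Bazel exists to close: nothing stops a build step from reading a file it never declared in `sources`, so "doesn't repeat work" quietly becomes "sometimes skips work it shouldn't".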
So the same meaningless advice can justify wildly different solutions.
I think better advice is just "try to keep things simple where possible". It's vague because it requires experience and design skill.
It is far more active than the imperative version.
One might say the interrogative is the simplest thing that could possibly work...
I think those using safe languages and broad frameworks have a much greater ability to execute on "keep it simple" than those who use something like C and build 100% of their code in-house.
https://c2.com/xp/DoTheSimplestThingThatCouldPossiblyWork.ht...
"XP" almost, but not quite, became a real cult.
But it is a nice counter to some people's decisions to go for overly complex or risky (unproven?) technologies or designs.
So, grain of salt, and all that.
Other times, do the simplest thing that will most simplify similar tasks in the future.
Keep It Simple Stupid