Realistically you should engineer for the problem you have or can reasonably expect you are going to have pretty soon. You can solve future problems in the future. I'm also not saying to write horrible unmaintainable code, but don't try to abstract away complexity you don't actually have yet. Abstractions and where to separate things should become apparent as you build the system, but it's really hard to know them until you are actually using it and see it come together.
Yes, someone who argues that "over-engineering" leads to "future-proofing" has fallen into exactly that trap.
When you future-proof something, that's called "engineering". Over-engineering is, by definition, failing to foresee future needs: imagining generic future needs ten steps ahead instead of the less ambitious future needs two steps ahead.
It is easier to modify early, simplistic assumptions than it is to walk back from premature generalisations over the wrong things.
Kinda sorta. It's not a binary: you can "predict the future," just not too far out and not with complete certainty. The art is figuring out what the practical limits are, and not going past them.
> Realistically you should engineer for the problem you have or can reasonably expect you are going to have pretty soon. You can solve future problems in the future.
Another factor is comprehensibility. Sometimes it makes sense to solve problems you don't technically have, because solving them makes the thing complete (or a better approximation thereof) and therefore easier to reason about later.
When I think "Under engineer", I think "keep it simple, because you can't predict the future". Simplicity is a great enabler of flexibility and tends to go hand in hand with scalability.
It's often much harder to make something that's too complex more simple.
In an overly simplified textbook example of designing/building a CPU, you have an ISA you're building the CPU to support. The ISA defines a finite set of operations and their inputs, outputs, and side effects (like storing a value in a particular register). Then you build the CPU to fulfill those criteria.
In my experience, designers that want reusability usually don't have enough precision in how they want to reuse a system so an ISA-like design can't be created.
And practically, it's the rare (I might even say non-existent) day-to-day business problem that needs CPU-like flexibility. Usually a system just needs to support a handful of use cases, like integrating with different payment providers. An interface will suffice.
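For the payment-provider case above, "an interface will suffice" might look like the following sketch. The provider names and `charge` signature are invented for illustration; a real integration would call each provider's API.

```python
from typing import Protocol


class PaymentProvider(Protocol):
    """The one small seam the system actually needs; no plugin framework."""

    def charge(self, amount_cents: int, token: str) -> str:
        """Charge the customer; return a provider transaction id."""
        ...


class StripeProvider:
    def charge(self, amount_cents: int, token: str) -> str:
        # A real implementation would call the provider's API here.
        return f"stripe-{token}-{amount_cents}"


class PaypalProvider:
    def charge(self, amount_cents: int, token: str) -> str:
        return f"paypal-{token}-{amount_cents}"


def checkout(provider: PaymentProvider, amount_cents: int, token: str) -> str:
    # Call sites depend only on the Protocol, not on any concrete provider.
    return provider.charge(amount_cents, token)
```

Adding a third provider means writing one more class, not generalising the whole system.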
Building EV chargers is a good dose of electrical engineering combined with talking to dozens of car models and their own particular quirky interpretations of common protocols, which is like designing websites for a market with dozens of unique browser implementations.
In spite of that, it seems half of the complexity is making sure people pay.
I doubt this for most of us. Computers have been around for a long time. Most of us are not working on new problems. By now we have a pretty good idea of what will be needed and what won't be. There are a lot of things left that haven't been done yet, but if you understand the problem at all you should have a good idea of what those things will be. You won't be 100% correct, of course, and exactly when any particular thing you design for will actually get implemented is unknown, but you should already have a good idea, at a high enough level, of what your users will want.
Of course, if we ever get something genuinely new, you will be wrong. 10 years ago I had no idea that LLM-type AI would affect my program, but it is now foreseeable, even though I don't really know which of its capabilities will turn out useful vs. what will just be a passing fad. Science fiction has 3D displays, holographic interfaces, teleportation, and lots of other interesting ideas that may or may not work out.
Likewise, 20 years ago you could be forgiven for not foreseeing the effects that privacy legislation would have on your app, but you better assume it will exist now and the laws will change.
Keeping code around is not free. Cost of carrying refers to the ongoing efforts of maintaining code, not to mention side effects such as increased complexity and cognitive load.
If you over-engineer a system you aren't getting value out of the extra bits, but you are still paying the cost of keeping them around.
A good example of this would be AvE's teardown of the Juicero, in which IIRC he described it as under-engineered despite its expensive, ultra-durable components. The rationale being that rather than build a design suited to the purpose of squeezing juice out of bags, they built a machine specced for a much more demanding task, driving up costs and wasting materials. The implication being that if they'd spent more time or care engineering it, they wouldn't have ended up with over-specced, poorly engineered components.
Perhaps a preferred term should be "well engineered" or "poorly engineered". A well engineered thing is something that is well suited in a number of different dimensions, including product capability, business needs, cost (and its impact on end users in terms of price), etc. That sometimes means ugly code, it sometimes means technical debt, but it always implies elegance at a higher level than just the code or components: an elegance that encompasses a holistic understanding of the context in which that code exists.
In the software world some examples of poor engineering might be using kubernetes for a small internal app that could run well on a single VM or container. Or, in a different context NOT using kubernetes for the exact same app, but in an organization where k8s is standardized, thus creating more inconsistency and driving up organizational complexity in order to reduce local complexity.
- Fragile code
- Technical debt
- Reduced agility
- Increased difficulty of understanding
Over-engineering can be abused as an excuse for poorly engineered solutions and cutting corners. The future being hard to predict is often used as justification, but this swings both ways: you also often won't know what code is going to get built upon. Frequently an obscure one-off piece of code becomes more useful than expected, with functionality tacked on over time, until the point where an entire product is resting on really shaky foundations.
Build a culture of quality engineering. Build the minimal solution but build it well. Have a strong (and flexible) product vision as a guiding light but always take small steps towards it. Optimize towards understandability and replaceability.
While not the intended audience, Systemantics is one of the most educational books on software architecture in existence.
My gold standard for well-written tests: if the engineer who broke the test can fix their regression without looking at the test implementation, you have achieved nirvana.
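One way to hit that bar is to put the broken invariant in the test name and the assertion message, so the failure output alone says what to fix. The function and values below are invented for illustration:

```python
def normalize_email(raw: str) -> str:
    """Canonicalise an email address for lookups (invented example)."""
    return raw.strip().lower()


def test_normalize_email_lowercases_and_strips_whitespace():
    got = normalize_email("  Alice@Example.COM ")
    # The message restates the contract, so a failing run explains itself.
    assert got == "alice@example.com", (
        f"normalize_email must lowercase and strip whitespace, got {got!r}"
    )
```

Whoever breaks `normalize_email` sees the contract in the test name and message, never the test body.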
My gold standard for well factored code is if people add a feature exactly where you would have added it. But that can be arrived at through socialization or by leaving spots where a feature would need to go if you actually need it.
You don’t need to build in conditionals for speculative features. You can just think about how you would start. What’s the first refactor? Can I arrange the code so that’s not a pain in the ass?
Bertrand Meyer felt that actions and decisions should not be mixed. For one, mixing them makes testing a pain. It also increases the lines of code in impure functions, which reduces the scalability of your system. A common effect of new features is adding more complexity to the decision process, and it's easier to add 3 lines to an 8-line function than 3 lines to a 40-line function.
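That separation can be sketched as a pure decision function next to a thin impure action. The discount rules here are an invented example, not something from the thread:

```python
def discount_rate(is_member: bool, order_total: float) -> float:
    """Pure decision: trivial to unit-test, and new rules land here."""
    if is_member and order_total >= 100:
        return 0.15
    if is_member:
        return 0.10
    return 0.0


def apply_discount(order: dict) -> None:
    """Impure action: stays small because the decision lives elsewhere."""
    rate = discount_rate(order["is_member"], order["total"])
    order["total"] *= (1 - rate)
```

When the next feature adds a rule, it's 3 lines in the short pure function; the action never grows.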
You can design your code so that it'll be easier to evolve into the most likely path. However, you don't actually implement the future cases.
Example: it doesn't take a lot more effort to create a configuration struct instead of hardcoding a value. However, you don't want to implement handling of any values other than the one you planned on using. You can easily throw a "value not supported" error if the configuration has anything else.
And it will greatly help any newcomer to the codebase understand what possibilities your component offers and how it can evolve.
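The configuration-struct idea above might look like this sketch; the `ExportConfig` name and the JSON-only format are invented for illustration:

```python
import json
from dataclasses import dataclass


@dataclass
class ExportConfig:
    # "json" is the only format needed today; the field makes the
    # extension point visible without implementing the alternatives.
    output_format: str = "json"


def export(data: dict, config: ExportConfig) -> str:
    if config.output_format != "json":
        # Fail loudly instead of half-implementing speculative formats.
        raise ValueError(f"output format not supported: {config.output_format}")
    return json.dumps(data)
```

The struct documents where a future format would plug in, while the error keeps today's implementation honest about what it actually supports.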
But given the choice of over- vs. under-engineering, overshooting a modest amount absorbs the inevitable scope creep more readily.
I have had this fight with Agile absolutists. They claim you should only think about the next sprint and not design any further.
However, work with really bad (or just extremely inexperienced) devs and/or people who are not primarily coders (e.g. data scientists), and you'll see a lot of under-engineering, to the point that it's almost impossible to figure out what the code is actually doing. Especially when it's coupled with random code inserted without understanding, just because it makes things work somehow, at least on the dev's machine.
(Shout out to Don Martini, who helped me more than he knew in my first job, as I was trying to grow into a professional programmer. Ditto Steve Hanka, who helped me in the same way on my second job. If either of you see this, thanks!)
I appreciate the attempt to carve a positive definition of over-engineering. However I think most people will disagree that there are any pros to it as their definitions tend to be quite negative.
While I tend to agree that anticipating future needs is best avoided, there are situations where it can be done successfully. It’s good to recognize that these situations are rare before considering it. When building a system for a project where you have literally built the same thing before more than once and the team you’re working on lacks the experience to understand or appreciate the complexity of the problem, this is one area where you can get away with what appears like, “anticipating future needs,” but to you is, “solving the problem that will come up before it’s a problem.” There are times where you can avoid learning a lesson the hard way (again).
Update: However, one must also be mindful to avoid chasing ghosts. It can be detrimental to progress to treat problems you've encountered before as "the same as" ones you're facing now, or to imagine they are there. Always good to pause and get a rubber-duck session going to bounce your ideas off of before heading into the weeds.
When the author says over-engineered stuff is future-proof and all of that, he's implying _successful_ over-engineered stuff. Same for under-engineering.
Of course most over-engineered projects fail. Most under-engineered too. That's why we learned all those things people are parroting in this thread (we parrot because it's useful).
However, _some_ over-engineered projects will succeed. Those will most likely present a nice combination of future-proofing, scalability, and reusability at the core of their success. Some under-engineered projects will iteratively limp their way into excellence as well.
He's not asking you to think of scalability and such; on the contrary, his recipe for following the middle way is quite reasonable (although a bit generic, like these kinds of things always are).
It doesn't seem like Glassonos is describing over engineering as I understand it. Possibly he's describing scope creep? In general, people should actively discourage extra scope from creeping in to their projects. But I need some examples to illustrate what I'm talking about, as does the OP.
If what you are building will be public, then: "over-engineer the concept, under-engineer the implementation". When you over engineer the concept, you start thinking about what might come next, you start seeing different applications on top of your solution. But deliver only what's needed now.
If what you are building is fully internal: "under-engineer; move as fast as you can, so you have an idea of how to under-engineer the next rewrite of the system".
Exactly. Design in a way that it's possible to add future requirements.
However: If you have senior devs knowledgeable in the domain who can realistically show how the system has changed in the past, write code to accommodate those types of changes because it is far more likely that the system will change again similarly than change some other way. If someone pipes up with some "what if" hypothetical, shut them down.
https://news.ycombinator.com/item?id=22062590
DonHopkins on Jan 16, 2020, on: Reverse engineering course
Will Wright defined the "Simulator Effect" as how game players imagine a simulation is vastly more detailed, deep, rich, and complex than it actually is: a magical misunderstanding that you shouldn’t talk them out of. He designs games to run on two computers at once: the electronic one on the player’s desk, running his shallow tame simulation, and the biological one in the player’s head, running their deep wild imagination.
"Reverse Over-Engineering" is a desirable outcome of the Simulator Effect: what game players (and game developers trying to clone the game) do when they use their imagination to extrapolate how a game works, and totally overestimate how much work and modeling the simulator is actually doing, because they filled in the gaps with their imagination and preconceptions and assumptions, instead of realizing how many simplifications and shortcuts and illusions it actually used.
https://www.masterclass.com/classes/will-wright-teaches-game...
>There's a name for what Wright calls "the simulator effect" in the video: apophenia. There's a good GDC video on YouTube where Tynan Sylvester (the creator of RimWorld) talks about using this effect in game design.
https://en.wikipedia.org/wiki/Apophenia
>Apophenia (/æpoʊˈfiːniə/) is the tendency to mistakenly perceive connections and meaning between unrelated things. The term (German: Apophänie) was coined by psychiatrist Klaus Conrad in his 1958 publication on the beginning stages of schizophrenia. He defined it as "unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness". He described the early stages of delusional thought as self-referential, over-interpretations of actual sensory perceptions, as opposed to hallucinations.
RimWorld: Contrarian, Ridiculous, and Impossible Game Design Methods
https://www.youtube.com/watch?v=VdqhHKjepiE
5 game design tips from Sims creator Will Wright
https://www.youtube.com/watch?v=scS3f_YSYO0
>Tip 5: On world building. As you know by now, Will's approach to creating games is all about building a coherent and compelling player experience. His games are comprised of layered systems that engage players creatively, and lead to personalized, sometimes unexpected outcomes. In these types of games, players will often assume that the underlying system is smarter than it actually is. This happens because there's a strong mental model in place, guiding the game design, and enhancing the player's ability to imagine a coherent context that explains all the myriad details and dynamics happening within that game experience.
>Now let's apply this to your project: What mental model are you building, and what story are you causing to unfold between your player's ears? And how does the feature set in your game or product support that story? Once you start approaching your product design that way, you'll be set up to get your customers to buy into the microworld that you're building, and start to imagine that it's richer and more detailed than it actually is.
Also:
Will Wright on Designing User Interfaces to Simulation Games (1996) (2023 Video Update)
https://donhopkins.medium.com/designing-user-interfaces-to-s...
I would say that nothing in this article is interesting or new. IMO, it is not helpful in literally any single way. (That's not me trying to be rude; I believe this is more so a function of how people perceive information, and there are certain types of information that click vs. not, i.e., the wrong kind of fuel.)
If I had to pick the right balance on any of the projects or companies I start... it always begins with pacing around my house in a deep sort of trance... pacing around endlessly for hours, standing in the shower while I mumble over and over, trying to visualize the entire "problem space." Of course with complex problems you can't do that, so in a sense the visualization I have will undoubtedly "fade" and I must rebuild it again in my mind, again, and again, and again. Each time trying to iteratively hold more in my mind simultaneously, all the different interwoven layers, without it crumbling.
I also use a huge amount of notebooks to draw different diagrams of the problem, usually messy to start but then becoming more refined as I start to slice and dice the different dimensionality of the problem in different ways. How to hierarchize (is this a word?) the different facets, how to split the functionality up cleanly and elegantly.
For me, anyway, I always focus on just one word: elegant, as I feel like that's a great "sweet spot" for me. I try to find elegant solutions to hard problems. I like this word a lot because there exist solutions which cleanly slice the dimensionality of information, either by jiggling the information in slightly different ways (which has a net positive effect on the end user), or where the cost/benefit is such that it improves the business side as well. Always a sort of cost/benefit, cause/effect energy going on. Shifting these building blocks (which also equate to the actual code/features) until there is a nice fit. A good example might be org structure for a company that has all kinds of reselling/affiliate/white-labeling options, and how accounts and financials/books are set up, where they live, at what level, etc.
They will undoubtedly become more complicated later, as more layers of features and such get piled on, but I just intuitively piece through those scenarios not by reading articles like this and being like "GEE I GUESS I BETTER INCREASE AGILITY!" but by just feeling out the energy of the problem in my mind, poking at it, following it around like a hard to catch elusive rabbit... slowly hiding behind bushes and trees trying to sneak up on it. This is the idea of chasing the elegant solution which almost always (for me) is very painful and hard to find, with a great amount of stress and energy (unable to sleep or eat until I find the solution). It will haunt every waking second of my mind until I have it, and when I do it hits me like a freight train and I exclaim (!FUCK!) and I quick write it down, and the process repeats, usually from 7-30 days depending on how hard the problem is, how hard the app is, how hard the idea is.
In this way, I've nailed the balance of over-/under-engineering nearly perfectly in each of my later projects, a few of which have become absolutely immense.
I’d add “staring into the distance” alongside the mumbles (yes, when pacing, that often means I run into corners of walls, couches, etc).
I think of it as iterative development in my head, and it’s often focused on finding the fastest path to proving that an architecture or implementation is wrong; when I find it hard to prove why something won’t work, then I begin to explore in depth the idea and have a pretty big running start on the code, which flows a lot more quickly when I’ve done this than when I just sit down and start typing.