An engineer who doesn't overengineer will first ask a lot of questions about why you need X, then build X, and only after you have tweaked the problem requirements will they generalize it into a framework. They are likely to think this is a bad interview question, which it is, and push back to get the actual concrete requirements. That's exactly what you want; overengineering happens when people guess at the requirements instead of making sure of them.
An engineer with a propensity toward overengineering will love this question and immediately jump into all the features their framework will support. They'll happily get lost in thought brainstorming new features that might be cool, but when it comes time to implement it, they'll either completely fail to reconcile all the contradictions that have crept into the design, or they'll come up with a very complicated mess that does everything but trades away unspoken requirements (like performance or simplicity) to get there.
This mirrors how frameworks actually work in the real world: basically every framework that people actually use was extracted out of a working app that solved one problem well, while every framework designed from scratch is an overengineered mess.
Have an upvote, I love advice like this: a good engineer can get the person posing the problem to reflect on the problem and its requirements, and should never just say 'ok, let's do this' without making sure it is really needed. Because, face it, there are a lot of people out there coming to engineers with a question and a 'solution' which is actually already partly an implementation, in a domain they're not really good at. (Example of that style: I want a sandwich! Engineer, build me a sandwich vending machine!) Good engineers know how to deal with that as well, and will ask as many questions as needed to figure out the optimal solution (the best combination of good enough, least work, and least technical debt) in a given situation. Which, sometimes, might be as simple as 'won't fix, no real need for it'.
I'd say the test shouldn't be, how can you find out if they are in the overengineering stage or not. Instead, I'd filter for engineers who can be reasoned with. If you are an engineer in stage 3, it's okay to have a stage 2 engineer on your team, but if he can't take direction then he'll be a liability.
If they choose unit testing, the solution might be over-engineered to be testable at a much finer level than would be necessary in production code, and have more abstractions than usual.
Or the candidate might be trying to communicate some kind of design aesthetic, whether it's familiarity with functional programming, object orientation, design patterns, or some framework du jour that requires a lot of boilerplate. All of these will smell a lot like over-engineering on the typical toy problems seen in programming interviews.
IMO it's very easy to get a mismatch in expectations and presentation in the solution to a programming problem, such that the company and the candidate aren't communicating on the same wavelength. Unambiguous, very simple problems with binary solutions may be better as a basic filter (e.g. Codility or something like it), followed up with pair programming in something more similar to a working environment, where pair communication and direction can get people on the same page.
> Tell me about the project you are most proud of, which best demonstrates what you're capable of. It could be academic, it could be professional, but be careful to not share any proprietary information from previous employers.
Then I proceed to ask a bunch of code, design, and implementation questions based on that scenario/project. If runtime is important, how they optimized for it; if scaling seems difficult, how they approached that; etc.
Note: This is a varsity-level interviewing question, since you need to be able to quickly come up with meaningful and consistent questions based on their proffered product. It works very well though when interviewing intermediate/senior engineers.
I've seen both in real-life scenarios, and usually you just throw the underengineered code away and start over with the knowledge gained from the previous solution. And the reason you can do that is that you usually realize you have an underengineered solution fairly early.
If someone wants to do something just because of a vague one-liner about "future needs" or "best practices", and when you press them on why they don't really have any coherent reasons, that's overengineering. It's typically mediocre engineers who don't really understand the concepts and are just applying a formulaic solution.
E.g. "how would you count the number of lines in a file?", "given a regex, please print lines in a file matching the regex", "given N lines, sort them by numeric value" etc.
The most egregious over-engineering I see is someone writing a 100 line Python script for something that could be at most a 100 character shell one-liner with a couple of pipes between different programs.
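For instance, each of the toy problems above has a stock-tooling answer. A minimal sketch (the sample file and the regex here are my own illustrations, not from the thread):

```shell
# Work in a scratch directory with a hypothetical sample file
cd "$(mktemp -d)"
printf '3 pears\n1 apple\n2 plums\n' > fruit.txt

# Count the number of lines in a file
wc -l < fruit.txt

# Given a regex, print lines in the file matching it
grep '^2' fruit.txt

# Given N lines, sort them by numeric value
sort -n fruit.txt
```

A candidate who reaches for a class hierarchy or a parser framework here is exactly the signal the parent comment describes.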
If they pass that, ask them about a problem that would be trivially solved by resisting the urge to reinvent make, an HTTP server, or an SMTP server.
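As a concrete instance of not reinventing make (my own hypothetical example, not from the thread), a two-line Makefile already buys you dependency tracking and incremental rebuilds:

```shell
cd "$(mktemp -d)"

# A two-line Makefile: out.txt is rebuilt only when in.txt changes.
# printf is used so the recipe line gets its required leading tab.
printf 'out.txt: in.txt\n\ttr a-z A-Z < in.txt > out.txt\n' > Makefile

echo hello > in.txt
make            # builds out.txt
cat out.txt     # HELLO
make            # reports up to date; nothing is rebuilt
```

Someone who instead writes their own file-watching rebuild daemon for this is the candidate the parent is trying to screen out.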
Edit: Re downthread: Yes obviously only in the context of someone expected to know *nix in the case of those examples. But the general approach is very transferable.
Ask the candidate about some trivial problem solvable in 1 minute with standard tooling they use every day; you'd be amazed at the architecture astronauts that come out of the woodwork, keen to waste their time on some needlessly over-engineered solution.
Also, any interview where I'm expected to have memorized obscure shell flags and one-liners with no access to so much as a man page can go fuck itself. That's just an IT/DevOps spelling bee.
The overengineer will introduce a build system, several functional programming utilities, an HTTP client wrapper library, a DOM abstraction library, and maybe a full blown framework.
The competent developer will write a <form> element with inputs.
I found it pretty bizarre. My guess was that he simply applied a different standard to code than most other people. Instead of preferring simple and obvious code, there was some other arbitrary criteria of cleverness he was looking for. This guy was very experienced as well. I think it may just have been boredom.
If you communicate your priorities properly and put real emphasis on not creating unneeded code, people aren't stupid; they won't do that. Most projects don't live in a vacuum, and the difference between an overengineered solution and a good one depends on outside requirements; it's every team member's job to keep things on track.
Some people have 10 years of coding experience. Others have 1 year of experience 10 times.
$ perl -wle "print 'hello world'"

I'd also look for people who are constantly starting from scratch. The complex system builders I've known are _terrible_ at maintaining systems. So, they throw them away every 12 to 18 months and start over. If you think they're complecting too much, ask them how they supported one of their creations over _years_.
Lastly, ask them what customers thought of the result. Look for specifics here. They should give names, use cases, etc... Really, they should understand the business problem their customer needed solved and be able to communicate why the complexity of their solution was necessary to solve the customer's problem.
I also ask about things like test coverage and programming paradigms they have an affinity for. Some things I look for that can be negative signals are:
* Propensity to choose classes and OO programming in a domain it isn't really suited for, or functional/procedural programming where OO might fit better
* Propensity to build for what might be rather than what is (e.g. always have an abstract class even if only one concrete class is necessary)
* Propensity to maximize test coverage (e.g. splitting the logic of functions into smaller ones just to generate test cases, even when those smaller functions are only called at one location) rather than to design for a solution
Let me explain: a person who thinks through a solution that should only take 1 hour and spends an entire day on it is overengineering. What goes through this person's mind is something like: "Maybe solution X is better?", "Is my code good enough?", "What happens in area X?", "Maybe add more tests to cover X?" ...
You get the point. That's why I always focus on the end result and try to be as pragmatic as possible. Would this solution be enough to solve the problem? If yes, then move on. Don't get me wrong: performance should also be taken into consideration, just not to the point that it consumes all your time.
I often do "cool" stuff only to realize that it was way too complex for the problem. But I make a point to trim it down later.
I would add a step toward the end of the project that focuses on deleting unused stuff and simplifying as much as possible. As always, there is no hard rule, and you often can't predict at the start whether something is a good idea or not.
On the flip side people that immediately cry "YAGNI" and never try anything can be a drag too.
Here's a comment of mine, from a month ago, pertaining to interview coding exercises. My example accomplishes the goal you've described; however, I'm not sure it's an exact fit for your situation, given the context you've provided. Perhaps some more information about your particular case would help identify good solutions.
Because it seems that todo apps are incredibly susceptible to overengineering?
$ npm install
Another indicator is how elaborate their build process is to munge whatever they wrote into whatever a machine understands.

FYI, by using webpack, a couple of loaders, babel, etc., I net hundreds of dependencies just for the build. YMMV.
We all work with really complex systems. The trick is to abstract them to the point where they're simple to understand and use. Thousands of dependencies don't even have to be complex if you use simple patterns to manage them.