Companies have not found a process at scale where they can train their employees to systematically gauge candidates.
Small and startup companies where leadership is still involved in the interview process can gauge candidates based on their own technical intuition. So it's false to say NOBODY has a proxy for good software engineering skills. The founders can also look at the candidate's public code contributions.
The problem is that this method does not scale. If you trust employees to hire based on feel and intuition, you open the door for people to bring in incompetent friends.
The real problem here is incentives. Founders have a financial incentive to not hire bad candidates. Employees don't, and that's why a standardized process is needed.
Is there any proof that this method actually works better though? I've heard plenty of stories of people in smallish (30 FTE or so) startups working alongside some really disastrous hires.
As a manager in a scale up, my incentive is to hire competent people into my team so we can meet our goals. Sure, it's not the same as the company being mine, but it's a good enough incentive. I think a standardized process is better because it reduces unconscious biases.
That results in biased hiring based on "I like this person, they think/talk like me".
> look at the candidate's public code contributions
That also biases hiring toward candidates who have the time & inclination to do extracurricular coding at the required level.
> Founders have a financial incentive to not hire bad candidates
All companies have financial incentives to not hire bad candidates; the cost of a bad hire to productivity, plus the cost of managing them out, is high. In fact, the incentives are so strong that the entire hiring system is optimized to avoid bad hires at the cost of missing out on good ones.
That's why the system is so frustrating - only strong candidates with clear signals get hired.
But if we look from the perspective of the company – maybe it's just a filter to get someone very dedicated and nerdy about programming. Less churn, fewer questions.
Leetcode is useless in the real world, everybody knows it. But one would find that people who are able to sit down and learn something useless and stupid just because their boss asked for it make for better employees.
The modern corporate world does not like free thinkers, except within easily controllable parameters.
The solution, for those who have better things to do than memorize dumb quizzes for weeks only to get paid to write basic React components all day, is to go work for yourself, become a consultant, or move to another career.
So I think leetcode interviews get both those groups - the ones you mentioned, who work hard and study (and thus demonstrate good qualities of being employees), _or_ those who are so good at this stuff that leetcode won't even make them sweat.
Dedicated is a nice way of putting it.
The school system was funded by industrialists who needed well-behaved and obedient workers. Academic and leetcode problems which exist in a vacuum select for those who follow the rules. Tech companies don't actually want people with a mind of their own.
It's not a perfect proxy for how well they'd solve real user-facing problems, but so far it has given me a decent indicator of their motivation/passion and appetite for problem solving. There are a dozen other, mostly non-coding questions in the interview, but this one has been the best predictor for me.
(The problem is closer to a more elaborate fizzbuzz with some math parts, than it is to a memorizable b-tree or other leet-code algorithm question)
However, those activities tend to reveal information about their abilities that is useful in an actual game.
Similarly, while you may not be doing those coding problems in day-to-day development, those coding problems can (ideally) reveal information about your knowledge and abilities in what does apply to daily coding.
Candidates: you need to appreciate how difficult it is to hire good people, the dishonesty of many candidates, and how limited the tools are for identifying talent and filtering out weasels. It is true: party trick Big O questions have low utility in most jobs. It is true: there is talent out there which sucks at Big-O questions. It is true: there are people who are good at little else than the party tricks. Nevertheless: those questions provide an objective judgement about a candidate. That objective judgement is valuable.
You spent several years doing a degree to make yourself employment-compatible. You will be able to achieve this standard in far less time, and there are cash rewards at the end of it. Accept that you will need to burn 1-2 months to develop this capability, and that you will feel awesome once you have.
Get /Cracking the Coding Interview/. I was not impressed by the object-oriented design chapter, but seek to master the other first-11 chapters. Take what you need from the remaining chapters. Develop a set of flashcards. Find problems online. Work intensively for at least two weeks but ideally a month. Then book some interviews at firms where you don't mind if you fail. Some of the FAANGs have awesome pre-interview study material. Try to get into process with a firm like that. Keep training on your flashcards and doing a few problems a day to keep in form. Expect to struggle with early interviews. Learn from those experiences. Revise what you made a mess of, keep working with your flashcards until you have mastered your weak areas. Now start applying at the firms where you want to succeed.
That objective judgment is valuable at something different from finding out whether people can actually do the job. So, not very valuable at the purpose that people try to use it for.
My last job interview cycle (January 2023), they asked me a ton about the specific details of C++. It was going to be very difficult to do as a dishonest candidate. I mean, you might make it as a language lawyer who doesn't actually code, but you'd have to be a genuine language lawyer. One phone interview, then a 2-hour in-person interview, then another phone interview with higher level people. That was it. There was one "what does this code do" problem, which takes people 5-15 minutes.
So my point is, you don't have to grind leetcode to get a software engineering job. You don't have to use leetcode to hire good software engineers, either. And if you do use leetcode as an interview filter, then you hire people who are good at grinding leetcode, which is not the same skill as writing software.
But I seem to understand from your comment that we should just all accept that they are terrible and keep doing things the same way?
https://blog.plan99.net/in-defence-of-the-technical-intervie...
It is, natch, hosted on Medium, so you will see a banner, but unlike this paywalled article it's dismissable. Just click the X to make it go away and then you can read the whole thing for free.
The summary is that many interviewers ask "unrealistic" algorithmic questions because:
1. They fit in the short amount of time available.
2. They get people writing programs that cover all the basic features of the language.
3. They don't ask people to do excessive amounts of work (e.g. takehome assignments)
4. They wash out people who lack basic skills you'd expect programmers to have, like actually starting a new project and being able to compile/run it in their self-chosen editor.
5. They are general and don't tend to require knowledge of specific frameworks or even specific languages.
The questions are unrealistic because they're designed to be fast ways to extract information in an interview setting, not to actually be an accurate sample of the daily work (which in the time available may only cover a fraction of the skills required of a working programmer).
So yeah I'm sure some firms do this, but there are plenty of others that don't.
The issue at hand is that the "unrealistic" algorithm questions aren't fizzbuzz.
They would ask me to write Fizzbuzz. I'd see how simple the problem is and immediately think of something like this (assuming they want it in Python):
    def fb(n):
        for i in range(1, n+1):
            if i % 15 == 0: print("fizzbuzz")
            elif i % 3 == 0: print("fizz")
            elif i % 5 == 0: print("buzz")
            else: print(i)

    fb(100)
But then I'd think that this is too simple. That's something you could write just after glancing through a Python book at a bookstore. Surely I should use more language features to try to stand out among the candidates, right? So I'd probably end up trying something like this:
    def fizzbuzz(n, *args):
        cur = ['' for x in range(1, n+1)]
        for m, postfix in args:
            cur = [y+postfix if x % m == 0 else y for x, y in zip(range(1, n+1), cur)]
        cur = [str(x) if y == '' else y for x, y in zip(range(1, n+1), cur)]
        return cur

    print("\n".join(fizzbuzz(100, (3, 'fizz'), (5, 'buzz'))))
But Python isn't my main language and if this was a whiteboard problem I'd probably make some mistakes and they would be very unimpressed.

The leetcode issue is to a large extent caused by, literally, leetcode.com the website. When I first started doing technical interviewing this website didn't exist, so if you developed a question that wasn't too easy or too hard, covered the bases, fitted well into 45 minutes, that you could calibrate the difficulty of on the fly etc - basically a good question - then you could make it last quite a long time.
At some point people started sharing questions and pre-canned solutions online, and many candidates will cheat by looking up answers and copying them from another screen if they can. Also some shady recruiters started leaking questions to candidates ahead of time. That forced interviewers to start constantly inventing new questions.
One of the points I used to make in interview training is that questions are like programs, you have to design them. Some are better than others, they can vary in efficiency etc. Often you can't really know how a new question will work out until you try it a few times. Making a good interview experience for the candidate is hard work and it's easy to screw up, lots of interviewers do and it leads to a lot of bitterness (e.g. questions that are far too hard or specific or don't fit in the available time).
Unfortunately if you're constantly having to invent new questions to keep ahead of the people sharing them and grinding the process, then the quality will inherently get way more variable. By the time you've calibrated the question it's been leaked already. And there are a finite number of programs that it's reasonable to ask someone to write in a short amount of time, so by now these giant databases of interview questions, and candidates memorizing the answers at scale, make it harder to get accurate information, which was the only goal in the end. Because indeed the specific tasks themselves don't matter much.
It is very hard to change established processes - especially when the people doing the interviews have gone through the process themselves.
They had to suffer, so why should the people after them get an easier ride?
It's an extremely poor predictor of a candidate's quality.
It should only be used at companies where the number of applicants is way larger than the number of open positions, where you can accept the trade-off of losing a few great applicants as long as you also screen out the many more bad ones.
Just because you say those questions are an extremely poor predictor of a candidate’s quality doesn’t actually make it so. It might just be one of the best and most cost effective ways of finding good candidates.
Now, about the scale. Anything run at scale needs standardisation. You need to hire 100 senior developers. How do you know they are at more or less the same level? If every single interview is hand-crafted, you'll either need to get all devs involved (and not everybody is good at coming up with interview questions) or use standard questions and answers so a smaller group of people can deal with candidates, and coding puzzles fit great there because they're self-contained and isolated. Every realistic question has multiple facets, and that's what you get at the system design interview step.
Another problem I've observed is that the more you give the same puzzles to candidates, the easier they look to you. What that means is that, as an interviewer, you either need to keep yourself in check regarding your reactions, or you risk giving a bad interview score where it isn't really warranted; or the next question you pick will be one you find more complex (to match the position level in your head), and that pushes the level of questions up. I observed this when some interview questions became so complex that I knew some of the existing devs would fail the interview for sure.
Does that give good results on the other end? I don't know, but what we definitely know is that there is a whole industry around leetcode to train people to pass these challenges specifically. That means the only thing the interviewer knows is that the candidate has put some effort into preparing for the interview, meaning they're motivated to join the company. And maybe that's not the worst data point! Some companies actually explicitly mention this fact on their hiring pages.
To add to it, big corps have different ways to make interviews objective in whatever sense they define, and that by definition reduces the personal impact. "Why did you ask this question? We've never done it before, we'll need to have a group call to calibrate." After a handful of conversations like this you'll just stick to the standard process.
Is there a way to come up with a more human approach? Personal recommendations with some skin in the game, I guess. I'm sure in some niche areas like browser engines all the good devs know each other well, and often no interview is necessary.
Sometimes they are not worried about the specific answer; they're trying to find out what your (the software engineer's) unconscious mind reveals during the process.
For many decades, Floyd's cycle detection algorithm was unknown to academic circles, despite being the most efficient method of cycle detection.
It is not uncommon to encounter this problem in an interview. There's nothing revealing about the unconscious mind and process, other than that the candidate remembered the solution to this problem.
If it were so obvious to derive, you would have expected the army of PhDs and academics to have done so.
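For context, here's a minimal sketch of the tortoise-and-hare algorithm being discussed, on a singly linked list (illustrative Python; the `Node` class is just scaffolding for the example, not from the thread):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    # Tortoise moves one step per iteration, hare moves two.
    # If the list loops, the hare eventually laps the tortoise
    # and the two pointers meet; if not, the hare hits the end.
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False

# Build 1 -> 2 -> 3 -> 4, then close a loop by pointing 4 back at 2.
nodes = [Node(i) for i in range(1, 5)]
for a, b in zip(nodes, nodes[1:]):
    a.next = b
print(has_cycle(nodes[0]))   # no cycle yet
nodes[3].next = nodes[1]
print(has_cycle(nodes[0]))   # now there is one
```

The appeal of the trick is that it uses O(1) extra space, which is exactly the kind of insight that's hard to rediscover under interview pressure but trivial to recall once you've seen it.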
I thought this was well known.
No Thanks.
I find it hard to trust the opinion of any “techie” who writes on Medium personally. It reeks of “I want to be an influencer!” type personalities.
(Take this with a grain of salt, I don't want to tell people what to do, I just stop reading your blogpost when it fades out and asks me to sign in...)
https://scribe.rip/why-do-big-companies-ask-unrealistic-soft...
Edit: just saw that it won't load the entire article, sad.
To criticize it for doing what it was created to do, seems odd.
Sure, there are a lot of software packages out there so any dev can create a new Medium. Who cares? The point was to pull a lot of people to one site to monetize it. They would browse around and find other authors, which is not possible if every dev out there creates their own site.
Just like Twitch isn't it?
Are you saying it is being censored? To what end? What ideology cares about programming? "Those Democrats and their Objects are using the wrong patterns again." "Republicans and their Emacs, holding on to the past."
If this is about screening potential software engineers with programming questions during an interview, hasn't this always been a thing even before Google popularised it?
I started my career after the dot-com era but it was in Systems Administration, so I didn't get to experience any programming style interviews until I changed careers and attempted an interview in 2008 with Google.
But hasn't this been the standard modus operandi for big tech companies like Microsoft, Sun, Oracle, IBM, etc. even in the 90s? I recall reading this [1] article from Casey Muratori about programming questions for a Software Engineering intern at Microsoft.
I'm not against this style of interviewing, but I do also think that some questions can be absurd or unnecessarily tricky. I've had my fair share of programming interview questions, and my solve rate was around 3/7 in my last interview with Google back in 2015. Some of the questions posed were really tricky, and I just don't have the ability to solve them in a timely manner without properly experimenting with the problem. From my perspective, interviewers would sometimes choose questions with a high coolness/leetness factor instead of choosing something more practical for a 30-50 minute session.
[1]: https://www.computerenhance.com/p/the-four-programming-quest...