It's an artificial test that doesn't reflect your working environment at all, so you never actually see what a candidate would be capable of when faced with real-world coding work.
It's a discriminatory practice that has been shown to disadvantage a lot of neurodivergent candidates, like folks with autism or ADHD.
You end up eliminating a whole lot of good candidates through the structure alone, then pick whoever happens to do well from that small subset, and it still won't stop you from hiring developers who are terrible.
One of the worst developers I've ever worked with will absolutely sail through leetcode exercises. He's brilliant at it. Something about his brain excels in that environment. If only work were like a leetcode interview, filled with leetcode-style questions and someone watching and evaluating his work as he did it, he'd be a fine hire anywhere.
He can't write actual production software to save his arse. He needs deadline pressure breathing down his neck, and what he produces at the last minute (always technically making the deadline) will probably work, but it's the most unmaintainable nightmare, needing to be rejected and redone by another developer.
When layoff time came, everyone had to scramble to move themselves to a small selection of teams. The second I heard that guy was interviewing for the same team as me (I had already gotten an offer at that point), I told the hiring manager: me or him. That's how bad working with those kinds of people can be. He ended up elsewhere and I'm still on that team. I just could not deal with him anymore. Perfect interviewee, but couldn't write production code to save his life... He's still employed, by the way, bumbling around from team to team, because the consequences of his incompetence take months or years to be felt...
What tech companies did right was find a ridiculously profitable business model. It is not clear that their success is correlated with their hiring practices. Likely other hiring practices would have worked just as well.
>> Literally, all the best tech firms and ibanks do it. They must be doing something right.
Reasoning by first principles isn't exactly the software industry's strong point.
Agreed, though I'm not sure I'd be as generous as you are about their business models being that great in absolute terms.

Strip away all the confirmation and survivorship bias, and IMO it's pretty obvious that a lot of tech's success for multiple decades running was almost entirely the result of the free-money lottery system funded by:

- someone else's money (ZIRP, middle-eastern sugar daddies, juicy government contracts)

- adverts
All the best banks and tech firms do a lot of things that could be categorized as wasteful, useless, inertia-maintaining, etc. Adopting their practices without a thorough understanding of whether they apply to your business is plainly just stupid.

Your business is not structured like those big businesses. You are not as averse to risk as they are (otherwise you wouldn't have created your business in the first place), you don't have their cash, and you don't have their ability to spend an enormous amount of time on every single hire, because you don't have profits cushioning you from all your dumb decisions.
Edit: To add some color, I want a candidate who is excited to program. Beyond having the basics, which I find pretty easy to gauge in an initial conversation, I don't care as much about raw ability. Candidates who are excited about the opportunity are generally the ones I find excel in the long run.
I've found that accepting mediocre or sub-par leetcode results gives you professionals who spend their time getting good at the profession instead of getting good at leetcoding.
I have never and will never hire code monkeys. AI already takes care of that.
I've had better success doing this, by a wide margin, than any code challenge ever gave me.
I don't know why the industry is so averse to this. It really does work, and I know others who also swear by it. You can find the bullshitters pretty quickly, even in this ChatGPT-driven world.
1. Because it is so dynamic and subjective, it is very hard to systematize this kind of interview, which makes it very hard to work into a repeatable or scalable process. The need to compare incoming candidates is an unfortunate reality of the industry for many companies.
1b. It is basically impossible to control for bias and for things like "was the interviewer having a good day".
2. This kind of interview overly rewards charismatic speakers -- this is partially ok, depending on the role, because being able to speak accurately and cogently about the work you're doing and are going to do is part of the job (especially for staff+ engineering). It isn't necessary for all jobs, however.
3. Many people aren't good at driving this kind of conversation, for whatever reason. When it goes well, it goes well, but when it goes poorly, it reflects badly on the company.
4. People Ops want more control than this gives them, across many dimensions.
> deep technical questions
Can you provide some examples?

The fact that this question needs to be asked really reinforces the parent's point.
Perhaps we should examine how other respectable fields validate their candidates and follow suit?

If we don't have any form of standardization for this stuff, I think that speaks more to the lack of maturity of our field than anything else.
I guess bridges, buildings and houses generally don't fall.
Is that really down to hiring, though? It seems more like physics doesn't change. And the people who do audits and inspections are probably pretty good.
I went through an audit of my software at work. It was like talking to a child. Talking to a home inspector is a way different experience.
So do all the worst ones. Look at the extent to which having hired this Soham Parekh fellow has become a badge of honour instead of an abject failure.
Instead of correcting themselves, those interviewers chose to dive deeper into delusion.