This is why I've always been so confused: why is the software engineering interview wildly different from the traditional engineering interview, where seniors sit down with candidates and discuss how to solve a proxy for a problem the team is currently facing? (That format has the side benefit of making the interview potentially fruitful even if you don't go with the candidate, though it can be, and sometimes is, abused.) I mean... we all speak the same language, and that isn't standard English... right?
I have my personal theory.
1) Top companies receive way more applications than they have positions open, so they standardised on very technical interviews as a way to eliminate false positives. I think these companies know this method produces plenty of false negatives, but the trade-off (eliminating candidates who wouldn't make it versus missing out on great ones) is favourable enough that it's fine. It does lead to some absurd results, though, such as someone interviewing to maintain a library being judged unqualified despite being its very author.
2) Most of these top companies grew at such rates, hiring so aggressively from top colleges, that eventually the interview process was built by fairly fresh grads for other fresh grads.
3) Many companies assumed that replicating the brilliant results of these unicorns meant copying them. So you get OKR nonsense and interviews like these.
And even for Google, LeetCode has become noise because people simply cram it. When Microsoft started using leetcode-style interviews, there were no interview-prep sites, and later there was Cracking the Coding Interview at most. So the people who aced the interview were either naturally talented or so geeky that they devoured math and puzzle books. Unfortunately, we have lost that signal nowadays.
And the great irony is that most software is slow as shit and resource intensive. Yeah, worst-case performance is good to know, but what about the mean? Or what you actually expect users to be doing? These can completely change which algorithm you want.
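A toy illustration of that point (my sketch, not from the thread): insertion sort is O(n²) in the worst case, yet on nearly-sorted input it runs in near-linear time, so if your users mostly re-sort data that is already almost in order, the "worse" algorithm can be the right choice.

```python
def insertion_sort(a):
    """O(n^2) worst case, but roughly O(n) when input is already nearly sorted."""
    a = list(a)
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        # Shift larger elements right; on nearly-sorted data this loop
        # almost never runs, which is where the speedup comes from.
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

# Nearly-sorted input: one out-of-place pair in 1000 elements.
nearly_sorted = list(range(1000))
nearly_sorted[500], nearly_sorted[501] = nearly_sorted[501], nearly_sorted[500]
assert insertion_sort(nearly_sorted) == sorted(nearly_sorted)
```

The worst-case analysis alone would reject this algorithm; the expected workload is what justifies it.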
But there's the long-running joke: "10 years of hardware advancements have been completely undone by 10 years of advancements in software."
Because people now rely on the hardware to do things rather than trying to make the software more optimal. It amazes me that even gaming companies do this! The root of the issue is pushing things out quickly, so a lot of software is really just a Lovecraftian monster made of spaghetti and duct tape. And for what? Apple released the M4 today, and who's going to use that power? Why did it take years for Apple to develop a fucking PDF reader I can edit documents in? Why is it still a pain to open a PDF on my MacBook and edit it on my iPad? It constantly fails and is unreliable, disconnecting despite the devices being <2 ft from one another. Why can't I use an iPad Pro as my glorified SSH machine? Fuck, man, that's why I have a laptop: so I can log in to another machine and code there. The other things I need are LaTeX, Word, and a browser. I know I'm ranting a bit, but I just feel like we in computer science have really lost the hacker mentality that made the field so great in the first place (and that brought about so many innovations). It just feels like there's too much momentum now and no one is __allowed__ to innovate.
To bring it back to interviewing signals, I do think the rant relates. This same degradation makes it harder to differentiate candidates when there's so much pressure to perform like a textbook. But I guess this is why so many ML enthusiasts compare LLMs to humans: because we want humans to be machines.
The problem I have with it is that for this to be a reasonably effective strategy, you'd have to change the arbitrary metric every few years; otherwise it is likely to be hacked and can turn into a negative signal rather than a positive one. Essentially, your false positives can come to dominate by "studying to the test" rather than studying.
I'd say the same is true for college admissions... because let's be honest, I highly doubt a randomly selected applicant would be significantly more or less successful than one chosen by the current process. I'd imagine the simple act of applying is a strong enough natural filter to make this hypothesis hold up in practice (but see my prior argument).
People (and machines) are just fucking good at metric hacking. We're all familiar with Goodhart's Law, right?
If it were otherwise, and those trendsetting companies actually believed LeetCode tested programming ability, then why isn't LeetCode used in ongoing employee evaluation? Surely programming ability a) varies over an employee's tenure at a firm and b) is a strong predictor of near-term impact. So I surmise that such companies don't believe this, and that LeetCode therefore serves some other purpose, in some semi-deliberate way.
I did interviews for a senior engineer position and had people fail to find the second-biggest number in a list, in a programming language of their own choosing. The failure rate was depressingly high.
One angle is that SWE is one of the very few professions where you don't need a formal degree to have a career. It's also a common hobby among a sizable population.
I think this is truly great. A holdout breathing hole where people can have lucrative careers without convincing and paying off a ton of gatekeepers!
But I also think that when you hire in other industries, you can get much more mileage out of looking at a candidate's formal degrees and certifications.
In our industry, you kinda have to start from scratch with every person.
> In our industry, you kinda have to start from scratch with every person.
Not really: in software, people leave a larger and more easily traceable track record than in any other engineering field, from previous work experience to open-source contributions, from personal projects to the communities a person belongs to. A lot of it is directly visible on the Internet. In other engineering fields, you have to trust what applicants say in their resumes, and at most you can call their previous companies for references. In software, much of the trail is online or easy to verify, and you can still make those calls.
Even for totally new graduates, it is better in software: it's much easier for a software undergrad to work part-time, build a hobby project, or contribute to open source and produce something before graduating, so you can assess their skills. It's much harder for a mechanical or civil engineer to do that, so there you have to rely solely on the candidate's university and grades.
That only applies to software people who either (a) are getting paid to work on open source or (b) have enough spare time to work on open source as a hobby after hours. Option (b), in particular, usually implies having no children or other family responsibilities.
In general, though, there is (or should be) a round of interviews that covers architecture/system design. It's just that the coding interview is a different interview type that gives a different kind of signal, which is still important. It doesn't replace the architecture interview; it complements it.
Why's that a problem? What you'll be doing on the job changes at the exact same rate. And people tend to talk about recent problems, which may be only a month old. Honestly, the questions are about seeing how the person would approach them, not about solving them, because on the job you'll be doing things you don't know the answers to beforehand anyway.
> It's just that the coding interview is a different interview type
For what reason? "Because"?
The first half of the sentence you're responding to already answers this: you can't compare candidates fairly if you ask everyone a different question. Is a candidate who aced an easy question better or worse than a candidate who struggled with a difficult one?
> For what reason? "Because"?
What are you asking? Why is an interview where you ask about high level design different from an interview where you ask to write code? Isn't that like asking why an apple is different from an orange? They just are, by definition.
Basically the equivalent of simple algorithmic questions. Not "real" because it's impossible to share enough context about a real problem in an interview to make it practical. Short, testing principles, but most importantly basic thinking and problem-solving faculties.
I've been an engineer in the past (physics undergrad -> aerospace job -> grad school/ML). I have never seen or heard of an engineer being expected to solve equations on a whiteboard during an interview. It's expected that you already know these things; honestly, it's expected that you have a reference for these equations and have memorized the ones you use most.
As an example, when I was finishing my undergrad I got a call about a job from Raytheon. I was supposedly the only undergrad being interviewed, and the first round was a phone interview. I got asked an optics question and said to the interviewer, "Mind if I grab my book? I have it right next to me and I bookmarked that equation thinking you might ask, and I'm blanking on the coefficients" (explaining the form of the equation while opening the book). He was super cool with that, and at the end of the interview he said I was on his short list.
I see no problem with this method. We live in the age of the internet. You shouldn't be memorizing a bunch of stuff on purpose; you should be memorizing by accident (i.e., through routine usage). You should know the abstractions and core concepts, but the details aren't worth knowing off the top of your head (though obviously you should have known them at some point) unless you're actively using them.
For a proper engineering question (as in, not software), I'd expect the intended answer to be naming the reference book where you'd look up the formula. The last thing you want is someone overconfident in their from-memory version of physics.
"How do you know your memory was infallible at that moment? Would you stake other people's lives on that memory?"
So what you did on that phone interview was probably the biggest green-flag they'd seen all day.
Being asked a theoretical chemistry question at a job interview would be...odd.
You can be asked about your proficiency with some lab equipment, your experience with various procedures, and whatnot.
But the very thought of being asked theoretical questions is beyond ridiculous.
My degree is in computational/theoretical chemistry. Even before I went into software engineering, it would have been really odd for me to be asked questions about wet chemistry.
Admittedly it would have been odd to be quizzed on theory out of the blue as well.
What would not have been odd was to give a job talk and be asked questions based on that talk; in my case this would have included aspects of theory relevant to the simulation work and analysis I presented.
https://thenewstack.io/joel-spolsky-on-stack-overflow-inclus...
“I think you need a better system, and I think it’s probably going to be more like an apprenticeship or an internship, where you bring people on with a much easier filter at the beginning. And you hire them kind of on an experimental basis or on a training basis, and then you have to sort of see what they can do in the first month or two.”
Well, if he fucked it up, I don’t see any reason why his ideas can’t also fix it.
I'm guessing here, but wouldn't a candidate for a traditional engineering role normally hold a college degree in a relevant field, so that part of the quality assurance is expected to have been done by the college?
Being able to evaluate a person is a difficult soft skill. An interviewer can't learn or improve it overnight, or even over months or years; it's basically being good at reading people. Not to mention it's highly subjective and prone to bias.
If an interviewer isn't good at this, the solution would still be to supplement your evaluation with a coding interview.