I think the concerns people have about the efficacy of this interview style are valid, but extending that to claims that people who can pass them aren't as good is ridiculous.
The types of candidates who spend the time necessary to memorize algorithm trivia for the sake of passing these exams are exactly like overfitted learning algorithms: what they happen to know is unlikely to generalize well. Of course you could get lucky and hire someone like that who does generalize, but that's rare. More often, since hiring is political, you pat yourself on the back for how "good" the candidate is (based on some trivia), make excuses when their on-the-job performance isn't what you'd hoped, and find ways to deflect attention from that so that you, as an ineffective hirer, won't be called out on it.
Willingness to waste time overfitting yourself to algorithm trivia absolutely predicts worse on-the-job performance than demonstrated experience and pragmatism do (e.g. "I'm not wasting my time memorizing how to solve tricky things that rarely matter; I'll look them up / derive them / figure them out when and if I need them").
If given the choice between hiring a math/programming olympiad winner and a MacGyver/Edison-like tinkerer who may not be able to explain how to convert between Thevenin and Norton circuit forms, but who took the family radio apart and put it back together, MacGyver/Edison wins every time (unless you're hiring for bullshit on-paper prestige, and of course many places are while proclaiming loudly that they aren't).
But I grant this is reasoning just from the anecdata that I have. I can believe that winners perhaps represent a higher degree of skill, but then we're talking about an extremely small number of people.
Generally you're facing a tradeoff: you have to choose between a rustic self-reliance skill set and a bookworm skill set. People from either group can learn the other over time, but you can't predict how well by testing them solely on the trivia of their current group. My preference is to hire for self-reliance and teach the bookworm stuff later. I used to believe the opposite (e.g. hire someone good at math because they can always learn to be an effective programmer later), but my job experience reversed that: it's actually pretty easy to teach people stochastic processes, machine learning, or cryptography, and incredibly hard to teach people how to be good at creative software design.
Source? Or are you just trying to justify your own shortcomings?
[0] < https://en.wikipedia.org/wiki/Overfitting >
> Overfitting generally occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfit will generally have poor predictive performance, as it can exaggerate minor fluctuations in the data.
[1] < https://en.wikipedia.org/wiki/Generalization_error#Relation_... >
> The concepts of generalization error and overfitting are closely related. Overfitting occurs when the learned function f_S becomes sensitive to the noise in the sample. As a result, the function will perform well on the training set but not perform well on other data from the joint probability distribution of x and y. Thus, the more overfitting occurs, the larger the generalization error.
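A toy illustration of the quoted definitions (my own sketch, not from either article): fit the same noisy samples with a 2-parameter line and an 8-parameter polynomial, then compare error on the training set against error on fresh draws from the same distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying relationship y = x.
x_train = np.linspace(0.0, 1.0, 8)
y_train = x_train + rng.normal(scale=0.1, size=x_train.size)

# Fresh samples from the same joint distribution of x and y.
x_test = np.linspace(0.05, 0.95, 8)
y_test = x_test + rng.normal(scale=0.1, size=x_test.size)

def mse(coeffs, x, y):
    """Mean squared error of the polynomial 'coeffs' on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

simple = np.polyfit(x_train, y_train, deg=1)   # 2 parameters
overfit = np.polyfit(x_train, y_train, deg=7)  # 8 parameters for 8 points

# The degree-7 fit chases the training noise (near-zero training error)
# but typically does much worse on the held-out samples.
print("train:", mse(simple, x_train, y_train), mse(overfit, x_train, y_train))
print("test: ", mse(simple, x_test, y_test), mse(overfit, x_test, y_test))
```

The "too many parameters relative to the number of observations" case from the first quote is exactly the degree-7 fit: it can exaggerate every fluctuation in the sample.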
Shall I fetch a ruler?
Also, the real question is whether the ability to write a BFS on a whiteboard is a useful way to screen programming candidates for most positions. It is not.
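For context, the whiteboard exercise in question is roughly this much code (a minimal Python sketch; the adjacency-list representation and names are my choice):

```python
from collections import deque

def bfs(graph, start):
    """Return nodes reachable from start, in breadth-first visit order."""
    order = [start]
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                order.append(neighbor)
                queue.append(neighbor)
    return order

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs(graph, "a"))  # -> ['a', 'b', 'c', 'd']
```

Whether producing this under whiteboard pressure says anything about day-to-day work is the whole disagreement.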
Could I have dug up a library to do what I needed? Sure, I guess. It was easier to take 20 minutes and just write the tool.
What? That would be a huge red flag in my book. Where are your unit tests? I'm not trusting your 20 minute off the cuff reproduction of classic algorithms in any business critical piece of the code, not ever.
This would get you booted from a lot of places, or at least earn you a stern talking-to, for doing something that seems slick, cool, and time-saving in the short term (yay, let's roll our own!) when it's really immature and time-wasting in the long run.
Never (!) homebrew that shit unless you have to (like, you're in an embedded environment or your use case requires some bleeding edge research algorithm).
It's like seeing someone write their own argument-parsing code. Holy shit, what a bad idea. Never (!) do that.
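The stdlib alternative being pointed at, sketched in Python with argparse (the tool's arguments and flags here are made up for illustration):

```python
import argparse

# Instead of hand-rolling a sys.argv loop, declare the interface and let
# the standard library handle type conversion, defaults, error messages,
# and --help for free.
parser = argparse.ArgumentParser(description="Example tool")
parser.add_argument("input", help="path to the input file")
parser.add_argument("-n", "--count", type=int, default=1, help="repeat count")
parser.add_argument("-v", "--verbose", action="store_true")

args = parser.parse_args(["data.txt", "-n", "3"])
print(args.input, args.count, args.verbose)  # prints: data.txt 3 False
```

Every mainstream language has an equivalent (getopt, clap, commander, etc.), which is why hand-rolled parsers read as a red flag.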