Nitpick, but I'd love to see a citation on that. Google, like every technology company on the planet (modulo a small error value), goes to great lengths to recruit and hire lots and lots of people.
Here's another one showing 75,000 resumes received in one week: http://www.sfgate.com/business/article/Google-gets-record-75...
When you have the luxury to pick the best, from the best of the best, you don't have to follow the advice given by HN armchair quarterbacks like us.
And engineer headcount costs are a huge component of their business --- so much so that they've been accused of breaking the law to collude with other companies to avoid competition in hiring!
Google is succeeding in spite of the waste and unreliability of their hiring processes, not because of them.
But there's no evidence that switching to self-paced work samples would cost them less. With Google's popularity, they'd get more false positives from candidates who copied the code from widely disseminated previous projects. False positives cost money.
Your medium-sized firm, with its smaller volume of candidates, won't have that problem of increased false positives.
Sure, with whiteboard interviews, the rejected candidates (and even ex-Googlers) can write a "brain dump" blog with blow-by-blow algorithm questions, but history seems to show that these don't work very well as cheating mechanisms.
What kind of work sample project could Google realistically design for 10,000 programmers to complete? (It can't be as hard as "solve this Clay Millennium problem" or as easy as "reverse this string", and anything between those two extremes is trivial to copy to github.) How often would they need to redesign the work sample? What about objective "comparisons", which were touted as a feature of that method? What about the programmers who don't want to do a work sample? (They do exist!) Is there also a cost to filtering them out?
It's great that you're really enthusiastic about work samples and want more companies to adopt them, but I see no slam-dunk evidence that they are the universal best method for every company.
One thing that really rustles my jimmies is the constant assertion that "false negatives are (effectively) free". I think Google and the companies who hire like them seriously underestimate how much this costs them, both the direct costs of spending so much to ultimately reject people and the indirect costs from the work that is not getting done or being foisted on another overloaded engineer.
SELECT TOP 1000 * FROM resumes ORDER BY received_date
will produce 1000 job offers but speaks nothing to selectivity.

But that's what the definition of "selectivity" is in database retrieval: selectivity == n_rows_selected / n_row_count. The "larger number" was the denominator and the "small number" was the numerator.
Your example SQL is not consistent with your previous sentence:
SELECT TOP 1000 * FROM resumes ORDER BY received_date
Notice that nowhere in your isolated example is the total row count for resumes known? So yeah, we don't have the denominator needed to determine selectivity. For the Harvard and Google examples, we do know the denominators (the total applications and the total resumes received), and therefore we know the selectivity.
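To make the point concrete, here's a toy sketch in Python with an in-memory SQLite database (the `resumes` table and its contents are made up for illustration, and SQLite writes LIMIT where SQL Server writes TOP). The SELECT gives you the numerator; only a separate COUNT over the whole table gives you the denominator:

```python
import sqlite3

# Build a throwaway resumes table with 70,000 rows (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resumes (id INTEGER PRIMARY KEY, received_date TEXT)")
conn.executemany(
    "INSERT INTO resumes (received_date) VALUES (?)",
    [(f"2016-01-{d:02d}",) for d in range(1, 29)] * 2500,  # 28 * 2500 = 70,000
)

# The "take the first 1000" query: this alone never reveals the table size.
selected = conn.execute(
    "SELECT COUNT(*) FROM (SELECT id FROM resumes ORDER BY received_date LIMIT 1000)"
).fetchone()[0]

# The denominator has to come from somewhere else entirely.
total = conn.execute("SELECT COUNT(*) FROM resumes").fetchone()[0]

selectivity = selected / total
print(f"{selected}/{total} = {selectivity:.4f}")  # prints "1000/70000 = 0.0143"
```

The same arithmetic applies to the hiring analogy: 1000 offers is meaningless as a selectivity figure until you also know how many resumes came in.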
I suspect you're mixing up "mathematical selectivity" with "decision-process selectivity", because Google's internal decision tree for hiring might look like a nonsensical black box to outsiders.