I've long wanted to see more experimental testing of selection variations. What if YC's (or some other funder's) candidates were selected completely at random from the applications? What if they were selected solely by some dumb criterion, like taking everyone with the most degrees, or the longest CV, or the most GitHub LoC? What if they were selected purely on the applications (without the dumb-criterion requirement) but without interviews? For a few tens of thousands of $$, someone willing to try those kinds of things out could get some pretty interesting information about how reliable different selection methods actually are.
My own hypothesis is a negative one: beyond screening out a few obviously bad candidates and taking a few obviously good ones, the bulk of the YC selection process is essentially uncorrelated with outcomes, and YC's mentoring/contacts/press/etc., rather than any predictive value in the selection process itself, is the main driver of their generally strong outcomes. But I can't prove that. :)
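For what it's worth, the hypothesis is easy to play with in simulation before anyone spends real money. Here's a minimal Monte Carlo sketch of the idea, with every number (noise levels, cohort sizes, screening threshold) an invented assumption, not data about YC: each applicant has a latent quality, the selector only sees a noisy signal of it, and outcomes are mostly noise even given quality. It compares pure random selection, careful top-of-signal selection, and "screen out the obvious duds, then pick at random":

```python
import random
import statistics

random.seed(0)

# All parameters below are illustrative assumptions, not real data.
N_APPLICANTS = 1000
N_SELECTED = 50
SIGNAL_NOISE = 2.0    # how weakly the application predicts latent quality
OUTCOME_NOISE = 3.0   # how much startup outcomes vary even given quality

# Each applicant: (latent quality, noisy signal the selector observes).
applicants = []
for _ in range(N_APPLICANTS):
    quality = random.gauss(0, 1)
    signal = quality + random.gauss(0, SIGNAL_NOISE)
    applicants.append((quality, signal))

def outcome(quality):
    # Outcome is quality plus a lot of luck.
    return quality + random.gauss(0, OUTCOME_NOISE)

def mean_outcome(selected):
    return statistics.mean(outcome(q) for q, _ in selected)

# Method 1: pick a cohort completely at random.
random_pick = random.sample(applicants, N_SELECTED)

# Method 2: pick the applicants with the best observable signal
# (the "careful selection process").
top_signal = sorted(applicants, key=lambda a: -a[1])[:N_SELECTED]

# Method 3: screen out only the obviously bad tail, then pick at random.
screened = [a for a in applicants if a[1] > -2.0]
screen_then_random = random.sample(screened, N_SELECTED)

print("random:        ", round(mean_outcome(random_pick), 2))
print("top-signal:    ", round(mean_outcome(top_signal), 2))
print("screen+random: ", round(mean_outcome(screen_then_random), 2))
```

With a weak enough signal and noisy enough outcomes, single-cohort differences between the methods drown in variance, which is exactly why you'd want the real experiment run across many cohorts rather than eyeballing one batch.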