* Able to clearly communicate technical concepts. Evidenced by seeing them display, in writing, a logical ordering of thought; separation of complex pieces into smaller, less complicated, and clearly delineated pieces; and effective, accurate command of technical vocabulary.
* Able to code. Evidenced by watching them code.
* Familiarity with the data structures and algorithmic approaches native to the problem domain. Evidenced by discussion around that domain, perhaps a pseudo-code exercise with a relevant problem paired with discussion of the design tradeoffs of different approaches.
* Understanding of the cross-cutting concerns related to maintainable software: testing, documentation, modularity, etc. Evidenced by Socratic discussion of said topics. "Given this problem common to sustainable software development, what would you do/have you done?"
I still believe in the value of meritocracy. Actually pursuing meritocracy solves a lot of the inclusivity problems we think we have. The problem is that people are inherently biased, and unless we are purposeful in accounting for these biases it is easy to weave them into any system you design, no matter the name or stated goals.
Doesn't change the value of an actual meritocracy. Just highlights one of the challenges of being human.
All of these can be objectively measured to a degree if you actually care to take the time:
* Logical ordering of thought: identify and diagram the main ideas in the text. Identify transitions in the text. Identify explicitly named connections between pieces. Multiple people can do this and expect a high degree of similarity in their results.
* Separation of components: similarly, identify and diagram the components they list by name, the relationships they identify by name, and the responsibilities they identify by name.
* Technical vocabulary: list all of the technical terms. Compare their usage against a dictionary.
* Ability to code: run their code. Does it complete and produce the expected output? This is absolutely objective. You can add further constraints and retain absolute objectivity: does it complete within a certain time, stay within a certain memory budget, stay within a certain cyclomatic complexity threshold, have a certain percentage of test coverage, etc.
* Familiarity with data structures and algorithms common to the problem domain: list the major constraints of the problem domain, list the data structures according to the features that address those constraints, and similarly list algorithms. Compare to the candidate's answers. How many of the major concerns did they address? How many of the applicable data structures/algorithms did they know? Did they volunteer anything new, and were they able to explain how it addressed the problem constraints?
* Understanding of the cross-cutting concerns: this could almost be a checklist, though I would make it a little more involved. As I mentioned, Q&A to see what solutions they present; but to have a quantifiable metric, we can identify the major components and the major concerns each of those addresses, see how many the candidate reached, and give bonus points for valuable concerns they addressed that we didn't.
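The "run their code" checks above are the easiest to automate. Here's a minimal sketch of what such a harness might look like; the command, expected output, and time limit are illustrative assumptions, not part of any real interview kit:

```python
import subprocess

def evaluate_submission(cmd, expected_output, time_limit_s=5.0):
    """Run a candidate's program and apply objective pass/fail checks.

    cmd: command-line invocation of the submission (hypothetical).
    expected_output: the exact stdout we expect, stripped of whitespace.
    time_limit_s: hard wall-clock budget; exceeding it fails objectively.
    """
    try:
        result = subprocess.run(
            cmd, capture_output=True, text=True, timeout=time_limit_s
        )
    except subprocess.TimeoutExpired:
        # Did not complete within the budget: an objective failure.
        return {"completed": False, "correct": False}
    correct = (result.returncode == 0
               and result.stdout.strip() == expected_output)
    return {"completed": True, "correct": correct}
```

Memory budgets, cyclomatic complexity thresholds, and coverage percentages could be bolted on the same way (via `resource` limits or external linters); the point is just that each check is a yes/no question with no room for interviewer taste.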
I'm sure if I spent more time I could expand both of these lists.
I will concede that this is still subjective in many ways, especially in the interviewer's choices of what is "correct" (what are the problem constraints, etc.) and which parts of the answers are important.
In that regard I will concede to you that there is an ultimately subjective nature to most of this, because deciding what is valuable has an element of subjectivity, but that is going to be true of pretty much any pursuit outside of pure mathematics (and I'm not convinced we have entirely objective values there either). However, once we have decided what we value it's possible to eliminate a lot of the subjectivity from measuring it. In most interview processes it's not a lack of ability to be objective, it's a lack of concern about being objective.
And actually, I'm not too bothered by that. A healthy meritocracy does not require absolute objectivity. What it requires is an explicit statement of what the values are and a transparent means of evaluating people against those values, and not according to any other values. The values can be subjective. The evaluation can be subjective. As long as the values are known and the evaluation process is transparent, it can function as intended. Even better, by clearly communicating the values of the system you send a strong signal to others so they can determine if your organization is something they want to be a part of.
Objectivity is a good tool to help maintain that transparency. But I'm not worried so much about the subjectivity of it as I am hidden values and opaque evaluations tied to things that should be irrelevant according to the stated values.
Defining the "best people" is _obviously_ subjective. _People_ are subjective. There isn't just one "best"-- there is a set of "bests" that you can strive for. Just like the above example, it depends on your requirements, your priorities, etc.-- but most importantly, it doesn't need to be objective to work well, which brings us full circle to:
> "Best people" can mean the best team.
If you prioritize teamwork among individual contributors, this is what best people would imply.
The awesome part about a capitalist system is that companies have the freedom to experiment with these configurations of how they define "best". GitHub may define it differently from you, but that doesn't make their definition less valid.
Meritocracy is an idea, not a specification-- there is no one true meritocracy implementation. The discussion needs to start from there.
I'm not convinced it does. If you want to say that meritocracy merely says we should try to hire the best people, all things considered, then no one would disagree. The disagreement is precisely about which things it's appropriate to consider.
Typically meritocratic systems in practice make the assumption that it is possible to determine merit outside the context of a specific team. I think this assumption is highly suspect. Merit is not a fixed characteristic of the individual but rather an emergent property of them in their context and in relationship with those around them.
And what about in reverse? What if, rather than finding the "best", we merely have a metric (or metrics) that weeds out the worst? If I remove the bottom 15% effectively and replace them with average performers, the net gain is massive, especially as each extra bug introduced is a serious time sink for any team, and poor developers are a major cause of that.
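The arithmetic behind that claim is easy to sketch. The productivity numbers below are invented purely for illustration (negative values model developers whose bugs cost the team more than they produce), not measurements of any real team:

```python
# Assumed net productivity per developer, in arbitrary units.
# Negative scores represent people whose introduced bugs cost
# more team time than their output is worth.
team = sorted([
    -30, -10, 0, 20, 40, 45, 50, 50, 55, 60,
    60, 65, 70, 80, 90, 95, 100, 100, 110, 120,
])

cutoff = int(len(team) * 0.15)        # bottom 15% of 20 people -> 3
average = sum(team) / len(team)       # an "average performer" score

# Replace the bottom 15% with average performers.
replaced = [average] * cutoff + team[cutoff:]

print(f"before: {sum(team):.1f}, after: {sum(replaced):.1f}")
```

Because the bottom tail sits far below the mean (and can be net-negative), swapping in merely average people moves total output disproportionately, which is the whole point of a weed-out-the-worst metric.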