The author made a couple of fundamental mistakes. The first is assuming employees are (or should be) paid according to how much they "individually" earned the company. Employers strive to pay employees the minimum they will bear, on the employer's terms. Those terms rest on information asymmetry and an assumed Gaussian distribution. Fairness is the last thing one should expect from employers, but being honest about this is bad for morale, so instead they rely on keeping employees uninformed, while colluding to gather everyone's remuneration history via The Work Number.
The second mistake is assuming that companies would prioritize being lean and trimming the mediocre and the bottom 5%. There are other considerations: combined productivity is more important than having individual superstars working on the shiniest features. How much revenue do you think a janitor or café staffer generates? Close to zero. The same goes for engineering. Someone has to do the unglamorous stuff, or you end up with a dysfunctional company with amazing talent (on paper).
Edit: there's an infamous graph that shows aggregate worker productivity and average income. The two tracked closely, rising in tandem until the 1970s, when they decoupled: income became much flatter while productivity continued to rise. That's how the world has been for the past 50 years, on the macro and the micro level.
I'll add a perverse incentive too that I've talked about elsewhere – hiring is a goddamn mess right now.
If I trim the bottom 5% of my org (in my case, 2-3 engineers), I may not get a backfill for them. Or I'll have to drop their level from L5 to L4 to make finance happy, hire overseas, or convert an FTE to a contractor.
I also have to be ready for the possibility of RIFs happening, which means having an instantly identifiable bottom 5% gives me the advantage of being ready when my boss says "give me your names".
So the time value of a staffed engineer is way higher right now than it might be in a few months. It'll never be zero, because proactively managing people out makes all of our managers happy. But for now, I definitely need my low performers.
However, low performers are not always toxic. Often, low performers are just kind of lazy, or they take longer than they should to finish their work, or they take too long to reply to emails or messages, or their work needs extra review and checks and balances, or they are only capable of delivering on a relatively small set of fairly simple tasks, or they just want to work on the same part of the same product forever and can’t emotionally handle change, or …
Non-toxic low performers can be great because they’ll often do the unglamorous work for you for relatively low pay, and all you have to do is not bother them too much. The worst thing you can do with non-toxic low performers is try to force them into becoming high performers. It won’t work, because they’re either not capable or they just don’t care. For some people, their work just isn’t that important to them, and there’s nothing you can do to change their perception of the relative importance of their job to the other aspects of their life. What might look like low performance in a corporate environment can just be someone setting boundaries and refusing to let work infringe too much on their personal life.
Not to take away from any of your points...
But this statement has been made every year for as long as I've been in the industry (about twenty years). I suspect it's been made much before that too.
>hiring is a goddamn mess right now.
Any insight you can give on why? I know enough from the hirees end, but how's it on the other side?
From the view of senior management (and yours), would these layoffs adversely harm your business model or profitability? If the answer is no, then layoffs are probably the economically correct decision. (Of course, there are many other factors to consider.)
Firing people if you can't get backfill is illogical, obviously. Once a company institutes a hiring freeze, low performers get locked in until forced layoffs. You'll see some people stop working and start job searching because they know that any contribution they make at all is better for their manager than having them fired.
However, deliberately keeping low performers around as a buffer becomes a self-own on a longer time horizon. Smart managers will negotiate hiring exceptions to replace a low performer now rather than keep that headcount occupied for safety. Yes, it's frustrating to have to lay off a good performer, but it's more frustrating for everyone to have a poor performer dragging the team down for some invisible game of chess that goes on for potentially years without resolution.
Paying what employees "earn" for the company is incompatible with our economic system where companies want to be profitable. Paying employees what they "deserve" based on contribution is probably also undesirable. I think you'd get the same income inequality dynamics but within companies. There is an averaging effect when you work at large corporations. That's either a good thing or a bad thing depending on the person. Individual contributions are averaged out, but so are responsibilities. I think Paul Graham articulated this wonderfully in his essay on what a job is and why some prefer to work for startups [2].
[1] https://www.kienbaum.com/blog/prices-law-and-the-trouble-of-...
Here's an opinion piece in the Harvard Business Review in 2022: https://hbr.org/2022/01/we-need-to-let-go-of-the-bell-curve
Here's another article on the same topic from 2014: https://www.forbes.com/sites/joshbersin/2014/02/19/the-myth-...
Here's more press on the same topic from 2012: https://www.npr.org/2012/05/03/151860154/put-away-the-bell-c...
From a data science point of view, if you want to compare how well different distributions fit the data, compute goodness-of-fit criteria such as AIC or BIC for each candidate. An ordinary Gaussian outperforms skew-normal and log-normal in many settings where the physics of the measurements would suggest otherwise.
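As a rough sketch of that kind of comparison (all data here is made up, and the candidate set is just the three distributions mentioned above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=50, scale=10, size=500)  # hypothetical per-person "output"

def aic(loglik, k):
    # Akaike information criterion: 2k - 2*log-likelihood (lower is better)
    return 2 * k - 2 * loglik

# Fit each candidate by maximum likelihood and score it.
mu, sigma = stats.norm.fit(data)
a, loc, scale = stats.skewnorm.fit(data)
s, _, lscale = stats.lognorm.fit(data, floc=0)  # pin loc=0; data is positive

scores = {
    "normal":      aic(stats.norm.logpdf(data, mu, sigma).sum(), 2),
    "skew-normal": aic(stats.skewnorm.logpdf(data, a, loc, scale).sum(), 3),
    "log-normal":  aic(stats.lognorm.logpdf(data, s, 0, lscale).sum(), 2),
}
best = min(scores, key=scores.get)
print(best, {k: round(v, 1) for k, v in scores.items()})
```

Lower AIC wins; the skew-normal's extra shape parameter has to buy more than one unit of log-likelihood to beat the plain Gaussian.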
However, it matters what you are measuring.
Here's a summary quote that explains what this Pareto versus Gaussian stuff is talking about:
> "We found that a small minority of superstar performers contribute a disproportionate amount of the output."
That is very different than saying that employee performance is Pareto rather than Gaussian distributed. "Output" and "employee performance" measure two different things. If there is any big-picture flaw to all of this, it is that conflating output with employee performance is a quintessentially Individual Contributor mistake.
Another POV is that people who get fired from IC jobs understandably lament a lot of the details of their circumstances. One detail that comes up is other people taking credit for their work, which should illuminate how output and employee performance measure different things, in a way that cuts against what the article is advocating.
If a corporation lays off any people in a particular job category/title, that corporation should not be allocated ANY H1B visas for that job category/title for the next year.
If a corporation institutes any policy that requires decimation (or any other statistic-based termination program) of employees with a particular job category or title, or if IN EFFECT they perform this (because they will just hide it otherwise), then they will not be allocated any H1B visas for that job category or title, for the next year following any such act.
In essence, the point here is that if a corporation decides it can live without X% of their workforce, then they don't get to go bring in foreign workers. The H1B program is to help find workers for positions that can't be filled; if you're laying off or mass firing people then obviously you CAN find people to fill those jobs.
The open secret is that layoffs are also used as a gentle way to fire low performers.
By including people in layoffs, you can give them a potentially very generous severance package and you allow them the courtesy of saying they were laid off as opposed to being fired. They get mixed in with all of the good performers who were laid off due to budget cuts.
Putting a lot of restrictions on a company that does layoffs creates a perverse incentive to fire these people explicitly instead of giving them a gentle landing with a layoff. You would see far more people fired instead of "laid off".
At the extreme, you incentivize companies to start firing people to make budget cuts.
So, this is actually a very bad idea. You do not want to start putting handcuffs on companies who do layoffs instead of constant firings.
I feel like I'd prefer some balance. There are superstars. We know them. We can easily point them out in peer reviews. But we're also a team, there's lots to do, and not everyone gets to work on the high-profile, easy-to-identify "impact" parts.
There are two ways to make a profit: gaining more revenue, and not losing revenue. Those kinds of staff are the latter, as are functions like HR (preventing lawsuits and settlements, which are expensive).
But yes, there are so many hidden factors in measuring "productivity". That's why stack ranking is a bit stupid in the long run. Some people aren't just producing value themselves but bringing out productivity in others, and a stacked system treats that as pure opportunity cost. Such individuals should be considered for management, not kicked out.
>The two tracked closely, rising in tandem until the 1970s, where they got decoupled. With income becoming much flatter, and productivity continuing to rise. That's how the world has been for the past 50 years on the macro and the micro
Yup, very well known that we really should be close to that ideal John Maynard Keynes predicted all the way in 1930 of 15 hour workweeks by 2030. Instead, I believe the average work week in the US is 50 hours and it's still a very controversial battle to get to a 4 day work week.
> Economists will teach you something called the Marginal Productivity Theory of Wages, the idea being that the amount of money that a company is willing to spend on an employee is essentially the value that the company expects to get out of their work. This strikes me as mostly true, most of the time, and likely to be the case in the corporate world that we’re considering here.
This is false. Supply and demand is a factor. I could clean the toilets at the office; if janitors were in short supply my boss might set up a rotation schedule - nobody wants to, but it must be done, and so he would pay me. However, because janitors are cheaper than me, he doesn't. This isn't just theoretical - McDonald's mostly has the crew clean the floors, and janitors make more money than McDonald's crew.
It is pretty clear that the employment market suffers from severe inefficiency and information asymmetry. It takes a pretty bad economist to look at a market like that and think that its pricing is accurate.
Employees often don't know how much value they bring and thus are severely limited as a counterparty, and other companies have a hard time predicting how much value you'll be able to add for them. These (plus many other factors) mean that you should expect significant mismatches between pay and performance.
Edit: None of this is evidence against performance being a Pareto distribution (which makes sense to me), but we're gonna need more than just pay data to determine that.
...marginally more. Still nowhere near the actual value their labor brings in. We simply don't have a competitive enough employer market to provide the upward wage pressure that would be sufficient to pay people fairly.
- IQ is Gaussian
- IQ correlates well with performance
hiring practices would probably produce an employee population that went through some right-tail cutoff test, meaning most people would be much closer to the hiring threshold, with a few positive outliers.
For arbitrarily chosen cutoff values, you could massage the distribution and make it look Pareto, but I'd be hard pressed to come up with a reason why that makes rational sense.
This is one of the things I try to drive home when I mentor young people. Employment is a market and it responds to the forces of supply and demand. Never think that your relationship with a company is anything other than a business transaction.
It's a hard lesson for young people to accept these days, but everything becomes much more clear once you stop fighting the idea.
From the article:
Economists will teach you something called the Marginal Productivity Theory of Wages, the idea being that the amount of money that a company is willing to spend on an employee is essentially the value that the company expects to get out of their work. This strikes me as mostly true, most of the time
From internet: The marginal productivity theory of wages states that under perfect competition, workers of the same skill and efficiency will earn a wage equal to the value of their marginal product. The marginal product is the additional output from employing one more worker while keeping other factors constant. However, the theory has limitations as it assumes perfect competition, homogeneous labor, and other unrealistic conditions. In reality, competition is imperfect, labor is not perfectly mobile, and other factors like capital and management efficiency affect productivity.
The marginal argument is confusing to me. When economists say “marginal” they usually mean what an engineer would call a “derivative”. “Marginal cost”, for example, is usually “d(cost)/d(production)” or “d(cost)/d(sales)”. Similarly, marginal productivity means “d(output)/d(workers)”.
Usually this pops up in ideal economics because under ideal circumstances, maximizing revenue and productivity and so on means “set the derivative of something to zero” to find the optimum point.
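For instance, with a made-up production function f(L) = A·√L, output price p, and wage w, "set the derivative to zero" picks the headcount where the value of the marginal product equals the wage:

```python
# Toy profit maximization: output f(L) = A * sqrt(L), price p, wage w.
# Profit(L) = p*A*sqrt(L) - w*L; the optimum is where d(profit)/dL = 0,
# i.e. where the value of the last worker's marginal product equals the wage.
A, p, w = 10.0, 100.0, 50.0  # made-up numbers

def marginal_product_value(L, h=1e-6):
    f = lambda L: A * L ** 0.5
    return p * (f(L + h) - f(L)) / h  # numerical derivative, d(revenue)/dL

# Closed form: p*A/(2*sqrt(L)) = w  =>  L* = (p*A/(2*w))**2
L_star = (p * A / (2 * w)) ** 2
print(L_star, marginal_product_value(L_star))  # marginal value ≈ w at L*
```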
(Disclaimer: I’m a physicist not an economist, but I’ve taken an intro economics course. The above was my main takeaway from that…)
“ It’s my opinion that the biggest factor in an employee's performance – perhaps bigger than the employee’s abilities and level of effort – is whether their manager set them up for success “
At a very junior level it's up to your manager. But as time goes on, even as an IC you have a lot of agency. It's not just company selection and team selection, but also which part of the project you work on and how you approach solving it.
Of course, "if everyone does this, who will fix the bugs?" However, the quickest-promoted people I've seen are the ones who were excellent at politicking (and sometimes foresight) to get the best work assigned to them.
If you've ever worked in tech management, your experience likely was "IDK, you're senior, you vaguely have an idea what we should do, here, go manage a few folks".
No training, or minimal training. Often with an expectation that of course you can still be a strong technical contributor, because how much time could managing folks possibly take. And then mostly being evaluated based on how your reports delivered.
As long as we follow that approach, we'll struggle with managers doing the right thing, because they neither have learned it, nor have they seen it modelled.
Sure, that shows up as bad manager performance, but often nobody can really see it or tell people what they should do better. Performance review is too late to fix that. (This is, btw, mostly true for employees as well - if you only talk about performance 1-4 times a year, people are being set up to fail.)
I don't think it's worth thinking like this. An employee's salary is floored by their value to any company, and ceilinged by their value to the company currently employing them.
That said, do you care to guess where in this floor-to-ceiling range the employers' ideal would fall? Does that answer conflict with my thesis?
This completely depends on how you do your internal revenue accounting.
From the perspective of an employee and/or human, that does seem like the most fair way of distributing what the company earns, sans the money that gets reinvested straight back into the business itself. But I'd guess that'd be more of a co-operative, and less like the typical for-profit company most companies are today.
Hypothetical two-person cooperative that produces simple hammers: one specializes in the wooden part, the other in the metal part. How much did each of them earn for the company? (Or one produces and the other sells; or one spends his life savings to buy pricey hammer-making equipment while the other presses buttons on said equipment.)
The problem with calculating based on value provided not market rate is value provided easily sums to more than one unless you consider replacement cost.
What you're describing, that money would go to whoever brings in revenue directly, is the myopic viewpoint of Sales with an emphasis on closing deals with nothing else. If it wasn't for the rest of the work, there'd be nothing to sell!
TL;DR: Productivity–Pay Tracker: Change 1979q4–2024q1; Productivity +80.9%; Hourly pay: +29.4%; Productivity has grown 2.7x as much as pay
Also, if people are unfamiliar with the economics term "productivity": in these statistics it means output per hour worked, not profit per employee or return on equity (ROE).
The problem is that bad econ like that latches onto certain untrained brains the way quack cures latch onto an antivaxxer, because it reinforces unfounded beliefs (beliefs formed by consuming too much of this kind of content).
The actual items under consideration are far less spectacular and don’t support the conspiracy theories, so the truth doesn’t spread as fast. It’s not as shiny or conspiracy-reinforcing.
If you have 1000 possible IQ questions, you can ask a bunch of people those questions, and then pick out 100 questions that form a Gaussian distribution. This is how IQ tests are created.
This is not unreasonable... if you picked out 100 super easy questions you wouldn't get much information, everyone would be in the "knows quite a lot" category. But you could try to create a uniform distribution, for instance, and still have a test that is usefully sensitive. But if you worry about the accuracy of the test then a Gaussian distribution is kind of convenient... there's this expectation that 50th percentile is not that different than 55th percentile, and people mostly care about that 5% difference only with 90th vs 95th. (But I don't think people care much about the difference between 10th percentile and 5th... which might imply an actual Pareto distribution, though I think it probably reflects more on societal attention)
Anyway, kind of an aside, but also similar to what the article itself is talking about
To go from an IQ of 100 to 130 might require an increase in brainpower of x, and from 130 to 170 might require 3x for example, and from 170-171 might be 9x compared to 100.
We have to have a relative scale and contrive a Gaussian from the scores because we don’t have an absolute measure of intelligence.
It would be a monumental achievement if computer science ever advances to the point where we have a mathematical way of determining the minimum absolute intelligence required to solve a given problem.
While that would be nice, it's likely a pipe dream :( There's a good chance "intelligence" is really a multi-dimensional thing influenced by a lot of different factors. We like pretending it's one-dimensional so we can sort folks (and money reinforces that one-dimensional thinking), but that means setting ourselves up for failure.
It doesn't help that the tests we currently have (e.g. IQ) are deeply flawed and taint any thinking about the space. (Not least because folks who took a test and scored well are deeply invested in that test being right ;)
For a huge number of problems (including many on IQ tests) computer science does in fact have a mathematical way of determining the minimum absolute amount of compute necessary to solve the problem. That's what complexity theory is. Then it's just a matter of estimating someone's "compute" from how fast they solve a given class of problems relative to some reference computer.
Might be a mix, because quite a number of older or overweight people run very slowly and some can't run at all.
To the point of being accurate predictors of these things even when controlling for things like socioeconomic background.
It's used because it works as a measuring tool, how the tests are constructed is largely irrelevant to the question of if the outcome of the test is an accurate predictor of things we care about.
If you think you have a better measuring tool you should propose it and win several awards and accolades. No one has found one yet in spite of many smart people trying for decades.
The distribution implies something like "someone at 50% is not that different than someone at 55%" but "someone at 90% is very different from 95%". That is: the x axis implies there's some unit of intelligence, and the actual intelligence of people in the middle is roughly similar despite ranking differences. That distribution also implies that when you get to the extremities the ranking reflects greater differences in intelligence.
This doesn't say anything in particular about whether it's useful, just that people should be careful interpreting the values directly.
I’m sure we agree that doesn’t constitute “intelligence”, but it’s more than disability.
You missed an extremely important final step. People's scores on those 100 questions still aren't going to form a Gaussian distribution. You have to rank-order everyone's scores, then assign the final IQ scores based on each person's ranking, not their raw score.
If you rank-order scores and fit to the distribution after the fact, the questions are nearly irrelevant, as long as you have a mix of easy, medium and hard questions.
Join those two, and the test only becomes reasonable near the middle. But the middle is exactly where the choice of questions makes the most difference.
All said, this means that IQ is kinda useful for sociological studies with large samples. But if you use it you are adding error, it's not reasonable to expect that error not to correlate with whatever you are looking at (since nobody understands it well), and it's not reasonable to expect the results to be stable. And it's really useless to make decisions based on small sample sizes.
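A minimal sketch of that rank-then-assign step (made-up raw scores, scipy's normal quantile function, and the conventional mean-100 / SD-15 scale):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
raw = rng.exponential(scale=20, size=1000)  # raw test scores, deliberately non-Gaussian

# Rank-order everyone, convert ranks to percentiles, then read the
# corresponding point off a Gaussian with mean 100 and SD 15.
ranks = raw.argsort().argsort()                # 0 = lowest raw score
percentiles = (ranks + 0.5) / len(raw)         # midpoints avoid 0 and 1 exactly
iq = 100 + 15 * norm.ppf(percentiles)

print(round(iq.mean(), 1), round(iq.std(), 1))  # ≈ 100 and ≈ 15 by construction
```

Note that the final scores come out bell-shaped no matter what the raw-score distribution looked like, which is the point being made above.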
How they’re actually made is that a batch of questions thought to exercise some form of reasoning is curated, then ALL of those questions are used in the test. It is an empirical fact that the scores of decent-sized groups of people will form a bell curve, in exactly the same way humans do on hard calc exams, on hard writing tasks, on chess problems, and across a bewildering number of mental tasks, none of which are preselected and fiddled with to fake a Gaussian.
A simple example: see how many simple arithmetic problems people can do in a fixed time. What do you find? A Gaussian. No need to fiddle with removing pesky problems. Try reading comprehension. Try repeating back sequences of a given length. Just about any single class of questions produces the same bell-curve output in human mental ability. The curve may bend based on some inherent difficulty, say addition versus calculus, but there will be a bell curve.
Now take plenty of types of questions to address various wobbles in people’s knowledge, upbringing, culture, etc., giving a host of bell curves per category (and those are also correlated within individuals). Then the sum of Gaussians is Gaussian. All IQ tests do is shift the mean score to be called 100 (normalized) and scale the standard deviation to a preset value so such tests can be compared over time.
And the empirical evidence is these curves do strongly correlate over time, so scaling a test to align with this underlying g factor is well founded.
This latter fact, that score on one form of intelligence seems to transfer well to others, forms the basis of modern intelligence research on the g factor. IQ tests correlate well with this g factor. And across all sorts of things the results are bell curves.
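That final shift-and-scale is just a linear renormalization, and the sum-of-many-subtests point can be seen in a few lines (made-up subtest scores; even deliberately non-Gaussian per-subtest scores sum to something bell-shaped):

```python
import numpy as np

rng = np.random.default_rng(2)
n_people, n_subtests = 10_000, 8

# Each subtest score is itself non-Gaussian (uniform here), but the sum
# across subtests is already close to a bell curve (central limit theorem).
subtests = rng.uniform(0, 10, size=(n_people, n_subtests))
total = subtests.sum(axis=1)

# Normalize: call the mean 100, and scale the spread to SD 15.
iq = 100 + 15 * (total - total.mean()) / total.std()
print(round(iq.mean()), round(iq.std()))  # 100, 15 by construction
```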
For anyone wanting to hear all this and a ton more, Lex Fridman has an excellent interview with a state of the art intelligence researcher at https://www.youtube.com/watch?v=hppbxV9C63g. The researcher goes into great depth on what researchers do know, how they know it, what they don’t know, and what has been proven wrong. This is all there.
For example, if we postulate that height is Gaussian, we could measure people's heights and, just by ordering them, construct a Gaussian distribution. We could then verify the hypothesis of height being Gaussian by mapping the probability distribution's parameter to a linear value (cm) and finding that the two approaches line up experimentally.
We could do the same thing with any comparable quantity, ordering the values and mapping them to a Gaussian distribution, but we would have no way of knowing whether the result actually corresponded to a linear quantity.
This is a serious issue, as basically making any claim like 'group A scores 5 points higher than group B' is automatically, mathematically invalid.
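A sketch of the verification step being described: simulate heights that really are Gaussian, build the rank-based score, and check that it is a (nearly) linear function of the physical measurement. All numbers here are made up.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
heights_cm = rng.normal(175, 7, size=2000)  # hypothetical measured heights

# Rank-based "score": where each person falls on a standard Gaussian.
ranks = heights_cm.argsort().argsort()
z = norm.ppf((ranks + 0.5) / len(heights_cm))

# If the underlying quantity really is Gaussian, the rank-derived score is
# (nearly) a linear function of the physical measurement.
r = np.corrcoef(z, heights_cm)[0, 1]
print(round(r, 3))  # close to 1 for truly Gaussian data
```

For IQ there is no "cm" column to correlate against, which is exactly the problem being raised.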
Even if human talent have a Pareto distribution (which is not clear), the people employed by a company are a selected sub-set of that population, which would likely have a different distribution depending on how they are selected and the task at hand.
I think that any of these simplified distributions are likely not generalizable across companies and industries (e.g. productivity of AWS or Google employees is likely not distributed like that of McDonald's or Walmart employees, because of the difference in hiring procedures and the nature of the tasks).
Get hard data within the companies and industry you are in and then you can make some arguments. Otherwise, I feel it is too easy to just be talking up a sand castle that has no solid footing.
This exact statement applies to the practice of Gaussian performance ranking. It is pure corporate politics, it isn't founded in sound statistics.
The present author at least provides multiple sources of statistical evidence for their beliefs, if you read the footnotes.
IQ is famously Gaussian distributed... mainly because it's defined that way, not because human "intelligence" (good luck defining that) is Gaussian.
If you look at board game Elo ratings (poor test for intelligence but we'll ignore that), they do not follow a Gaussian distribution, even though Elo assumes a Gaussian distribution for game outcomes (but not the population). So that's good evidence that aptitude/skill in intellectual subjects isn't Gaussian (but it's also not Pareto iirc).
E.g. if there are N loci, and each locus has X alleles, and some of those alleles increase the trait more than others, the trait will ultimately present in a Gaussian distribution.
i.e. if there are lots of genes that affect IQ, IQ will follow a Gaussian curve across the population.
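A minimal simulation of that claim, with made-up locus counts and effect sizes:

```python
import numpy as np

rng = np.random.default_rng(4)
n_people, n_loci = 20_000, 100

# Each locus contributes one of a few allele effects, drawn independently.
allele_effects = rng.uniform(0.0, 1.0, size=(n_loci, 4))  # 4 alleles per locus
choices = rng.integers(0, 4, size=(n_people, n_loci))
trait = allele_effects[np.arange(n_loci), choices].sum(axis=1)

# With many independent additive loci, the trait is approximately Gaussian
# (central limit theorem), whatever the per-locus effect distribution.
z = (trait - trait.mean()) / trait.std()
skew = (z ** 3).mean()
print(round(skew, 2))  # near 0 for a symmetric, bell-shaped trait
```

This is the additive-effects case; dominance, epistasis, or a few large-effect loci would pull the trait away from a clean Gaussian.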
100%. I was going to write something similar.
> If you look at board game Elo ratings (poor test for intelligence but we'll ignore that), they do not follow a Gaussian distribution, even though Elo assumes a Gaussian distribution for game outcomes (but not the population). So that's good evidence that aptitude/skill in intellectual subjects isn't Gaussian (but it's also not Pareto iirc).
Interesting, yeah, Elo is quite interesting. And one can view hiring in a company as something like selecting people for Elo above a certain score, but with some type of error distribution on top of that, probably Gaussian error. So what does a one sided Elo distribution look like with gaussian error in picking people above that Elo limit?
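One can simulate that question directly: hire everyone whose measured rating clears a bar, where the measurement is true skill plus Gaussian noise. All numbers here are made up:

```python
import numpy as np

rng = np.random.default_rng(5)

true_skill = rng.normal(1500, 300, size=100_000)  # population "Elo"
noise = rng.normal(0, 100, size=true_skill.size)  # interview measurement error
measured = true_skill + noise
cutoff = 1800

hired = true_skill[measured >= cutoff]

# The hired pool is not a clean one-sided truncation of true skill:
# noise lets some people below the bar in and keeps some above it out.
print(len(hired), round(hired.min()), round(hired.mean()))
```

The lower edge of the hired pool's true-skill distribution gets smeared: some people below the bar slip in on a lucky measurement, and some above it are missed.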
It’s going to be multivariate statistics with dependent variables. The quality of non developers at company affects the quality of developers they can retain, and the quality of the developers you have affects the quality of developers you can recruit and improve. Almost all the people I’d want to work with again left my last employer before I did.
You can take on more and more work yourself, but it causes everyone around you to disengage. At some point you have to realize it’s more fruitful, emotionally and mathematically, to help coworkers produce one more unit of forward progress a month than to produce it yourself. Because it’s 2% for the team one way and 5-10% the other.
Uh. Not really. Our industry is notoriously bad at measuring productivity.
And the bigger problem is that when we try to measure it - "performance review" - we like grading on a gaussian curve. We'll never know if that's correct because we put our thumb on the scale.
An even bigger problem is that productivity is strongly influenced by completely non-technical factors. How enthusiastic are folks about what they are doing[1], how much variety do their tasks have [2], what are their peers like, etc. (Of course, that whole field of study has issues rooted in the inability to measure precisely as well)
Ultimately, it's a squishy judgment applied by humans.
[1] https://www.semanticscholar.org/paper/What-Predicts-Software...
[2] https://research.google/pubs/what-predicts-software-develope...
The analogy we used was a sports team. Pro sports teams have really good players and great players. Some people are superstars, but unless you're at least really really good you're not on the team.
Performance and compensation were completely separate, which was also nice. Performance evals were 360 peer reviews, and compensation was determined mostly by HR based on what it was costing to bring in new hires, and then bumping everyone up to that level.
So at least at Netflix 10 years ago, performance wasn't really distributed at all. Everyone was top 10% industrywide.
Another reason I really don't trust that to be true is that I've never seen a good way to measure who is a top performer and who is not. I don't think there is one; people are good at different things, even within the same job. For one assignment, Joe may be the best, but for another, Mary is the winner (but again, measuring this reliably and objectively is nearly impossible IMHO for anything related to knowledge work - and I've read lots of research in this area!).
Finally, just as a cheap shot at Netflix, sorry I can't resist as a customer: they absolutely suck at the most basic stuff in their business, which is to produce good content in the first place, and very importantly, NOT FREAKING CANCEL the best content! I won't even mention how horrible their latest big live stream was... oh well, I just did :D.
So much this. OP's description of the work environment is stressing me out and I don't even work there.
At best a strategy like the one described above will get you the top 10% of people who are willing to put up with that kind of work environment, which means you might get the top 10% of single, childless 20–35-year-olds—people who are motivated first and foremost by ego and pay and don't value stability and work-life balance. But in the process you're more or less explicitly saying that you're not interested in people who are further along in their lives and value stability and reliability more than ego and raw paycheck size.
This means that you're missing out on the top 10% of 35–65-year-old engineers who are now parents with responsibilities outside of their career, even though the top 10% of that bracket would typically be "better" by most metrics than the top 10% of the younger bracket you're pre-filtering down to.
In a startup environment this might be a perfectly rational tradeoff—you want to filter for people who don't have much else to do and can give you a huge amount of unpaid overtime in exchange for you stroking their ego—but past a certain size and market share you need the stability offered by mature, experienced professionals.
If Netflix failed to get over that hump, it's not so surprising after all that they fell so hard in the last 10 years.
It isn't that simple. Making money from content is not 1-to-1 related with the quality of the content. There are many examples of great content that doesn't make money, and many examples of content that makes a lot of money that isn't great. Also there are many differing opinions on what 'great content' even is.
It's difficult to achieve, but it's not an unreasonable objective to have. After that there is a question of measurement. How do you measure that? Did they? What was their score? - and yes, until the evidence is released, they probably didn't. (But I would also cut slack on the measurement - it IS difficult to measure so a decent attempt - a top 10% attempt? - will do.)
Where the "top performers" meme obviously fails is when every new business and their sister claims the same thing. We are all winners here and all that.
Of course there is no hard data on it, but I can say anecdotally the people I know who went on elsewhere were consistently rated at the top of whatever organization they landed at. And also, there wasn't a single person there that I would not want to work with again and would jump at that chance.
> For one, knowing the cut-throat nature of employment there, I would expect only a minority of developers would be willing to try working there, despite the awesome rewards.
On the flip side, a lot of people wanted to work there because of that culture. But you're right, some really great people wouldn't even apply, won't deny that.
> Finally, just as a cheap shot at Netflix, sorry I can't resist as a customer: they absolutely suck at the most basic stuff in their business, which is to produce good content in the first place, and very importantly, NOT FREAKING CANCEL the best content!
Actually, objectively, it's not the best content, which is why it gets cut. The way that decision is made: every piece of content is charted on cost versus minutes watched. Then that chart is looked at by actual humans.
Some content, like reruns from the 1950s, is super efficient. It's not watched a lot but it also costs very little, so it stays. Some content, like the latest Marvel movie (before Disney had their own streaming service) was very inefficient, but it was kept because it was a big marketing draw. But some content didn't quite make it over the line because it was expensive but niche. It was popular amongst a small set of die hard fans.
I think your complaint is more about the industry in general though -- it's not just Netflix that doesn't give a show room to grow. Even the old school TV networks cut shows much quicker now than they did before.
> I won't even mention how horrible their latest big live stream was... oh well, I just did :D.
Netflix knows it didn't go well. Streaming in general used to break just as much. But the nice thing was that they gave us the resources to hire the right people and the autonomy to fix it. And so we did things like create Chaos Engineering and OpenConnect. I suspect the same will happen with live streaming.
I can work at a new place for a week and know who the top performers are. Their names are all over the commits, and whenever you ask someone a question, you get funneled to the top performers.
Then you talk to them. If they're open and engaging, and don't seem like they got their status just by being around forever, they're almost certainly a top performer.
https://medium.com/dice-insights/netflix-ceo-explains-why-he...
Here's a thought experiment: pretend that Netflix is lying and that their employees are not actually made up of the top 10% of talent industrywide. Let's for this thought experiment assume the reality is that they have slightly above average talent because Netflix pays slightly above industry average.
But now they've convinced those employees that they're not just slightly above average, they are like elite NFL players. And that means they have to work like elite NFL players. Netflix convinces their employees to work XX% harder with longer hours than the rest of the industry because they think they are elite.
"Only amazing pro athlete geniuses can work here" is way more motivating than "You have to work yourself to death with extra hours to make quota or you're fired!" because it's a manipulation of the ego.
I think this thought experiment is closer to reality than Netflix or their kool-aid-drunk employees will admit, and that Netflix's "pro athlete" culture is worker-harming psychological manipulation.
Also, since when is telling people they're good at what they do "worker-harming psychological manipulation?"
In my experience, these labels in corporate environments often correlate more with social dynamics and political acumen than actual work output. People who are less socially connected or don't engage in office politics may find themselves labeled as 'low performers' regardless of their actual contributions, while those who excel at workplace networking might be deemed 'top performers'.
The interview process of these kind of companies also often falls into a problematic pattern where interviewers pose esoteric questions they've recently researched or that happen to align with their narrow specialization from years in the same role. This turns technical interviews into more of a game of matching specific knowledge rather than evaluating problem-solving abilities, broader engineering competence or any notion of 'performance'.
Let's be honest: how many people can truly separate personal feelings from performance evaluation? Even with structured review processes in place, would most evaluators give high marks to someone they personally dislike, even if that person consistently delivers excellent work?
The days of the “brain teaser” interview question are gone, at least from the “magnificent 7” and similar big tech companies. Nowadays it’s coding, behavioral, and design, at least for engineers.
I concur with the sentiment that performance ranking has a very significant social component. If you have a bad relationship with your manager, watch out. But also, if your manager has a bad relationship with THEIR manager, or are not adept at representing their employees, you can get screwed too.
Could you please describe how the unlimited vacation policy worked? How did people feel about it and whether they were anxious regarding using it (afraid that it will reflect on them badly when they take "too much" time off)?
It helped that senior leadership set a good example. The CEO took a few weeks off every year and made sure everyone knew that it was ok to do that. He also made sure all his directs took a few weeks every year at a minimum.
There was a culture of management encouraging you to take advantage of the program.
I think this is probably how labour and capital should compete. I expect we need to equalise tax treatment so that becomes more possible.
Huh? How is that nice? Does performance and compensation not correlate in your ideal world, or am I misunderstanding it?
Where the two correlate is that if you're hiring a mid-level person they get mid-level pay, and if they are top performing mid-level, they get promoted to senior and get commensurate pay.
So performance leads to promotions which leads to better pay. But pay is not directly correlated with performance. I expect everyone in the same level to have equal performance (over the long term, of course there will be short term variations).
I'm not saying that everyone on a 360 review process does that. But the incentive is there and it's working against fair reviews.
Wouldn't that (how you view and fit in with your team) be part of your review? If I were Bob's manager and all the reviews he gave of his teammates were "Teammate M is a dumbass and the only reason they are productive is because I do 80% of their job for them", that wouldn't leave me thinking Bob is great. It would leave me thinking Bob is a jerk who doesn't work well with others.
If anything the incentive is problematic in the other direction. People tend to be nice because they don't want to say mean things that they know the manager will see.
But in my experience, employee perf evals are more political than data-based.
At the end of the day a lot of mgmt at BigCo, esp these days, wants that 10% quota for firing as a weapon/soft layoff and the "data" is a fig leaf to make that happen. More generously it's considered a forcing function for managers to actually find underperformers in their orgs, even if they don't exist. Either way it's not really based on anything other than their own confirmation bias.
IME the scrutiny of perf evaluation is basically tied to the trajectory of the company and labor market conditions. Even companies with harder perf expectations during the good times of ~2021 relaxed their requirements.
In a previous job I modelled this and concluded that due to measurement error and year-over-year enrichment, Welchian rank-and-yank results in firing people at random.
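As an illustration of how that can happen (a toy sketch with assumed numbers, not the model described above): give everyone a latent "true skill," observe it through a noisy review score, and fire the bottom 10% by the noisy score.

```python
# Toy model of rank-and-yank under measurement error (an illustrative
# sketch with assumed numbers, not the actual model mentioned above).
# If review noise is comparable to the real spread in skill, the people
# fired as the "bottom 10%" overlap only weakly with the true bottom 10%.
import random

random.seed(1)
n = 200
true_skill = [random.gauss(0, 1) for _ in range(n)]
noise_sd = 1.5                                  # assumed review noise
observed = [s + random.gauss(0, noise_sd) for s in true_skill]

k = n // 10                                     # cut the "bottom" 10%
fired = sorted(range(n), key=lambda i: observed[i])[:k]
truly_bottom = set(sorted(range(n), key=lambda i: true_skill[i])[:k])

hit_rate = len(truly_bottom.intersection(fired)) / k
# Purely random firing would hit ~0.10; noisy ranking does better than
# random but is nowhere near reliable.
print(f"fraction of fired who are truly bottom-10%: {hit_rate:.2f}")
```

Crank `noise_sd` up toward the skill spread and the hit rate drifts toward the random-firing baseline.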
His performance at GE was 100% fueled by financial leveraging that blew up in 2009, basically killing the company. Nobody should be taking management lessons from this guy.
I found that team composition and role assignment matters a lot, at least if you hire people who are at least above a certain bar. Match a brilliant non-assertive coder with someone who is outgoing and good at getting along and at least decent coder, and the results from the two outperform generally either of them individually.
You can bring out the best of your employees or you can set them up against each other. This either brings everyone up or brings everyone down.
Performance is "visibly doing the things that the company rewards during the performance review process".
Theoretically, each role at a company should have a set of articulated accomplishments that are expected. (This is sadly often not the case.)
But you're right that the subjective nature of "performance", and the lack of a clear numerical scale, are a difficulty of the entire process!
The amount of money the manager is willing to match is the perceived value to the company. This is how the company actually behaves (we know for sure whether they match the offer or not) and that behavior implies a value to the company, regardless of what anyone says in performance review season.
This assumes the manager is irrelevant here. But we all know that different managers (or non-managers) can communicate value differently for the same employee. So this metric can't be solely measuring the value of the employee.
I've found Term Logic[1] to be useful for figuring out why certain discussions confuse me. I've also used it to avoid unnecessary arguments by seeing if the participants are starting with clear concepts (signaled by terms).
[1] https://en.wikipedia.org/wiki/Term_logic#Basics also this explainer https://adoroergosum.blogspot.com/2015/05/the-three-acts-of-...
The problem, from a company's perspective, is you probably need to retain everyone at least five years, and actually give them a wide variety of assignments in that time, to really get any usable data about their long-term prospects.
The only people who benefit from performance reviews are shareholders whose price pops when layoffs happen, and those who game the system for their own political ends. Top talent never really thrives in these, because they’re too busy doing actually meaningful and important work.
In case people want to read more about this:
https://www.essentiallysports.com/nba-active-basketball-news...
https://marginalrevolution.com/marginalrevolution/2024/08/go...
> But there are low-performing employees at large corporations; we’ve all seen them. My perspective is that they’re hiring errors. Yes, hiring errors should be addressed, but it’s not clear that there’s an obvious specific percentage of the workforce that is the result of hiring errors.
I think it is clear that we expect a certain percentage of hiring "errors". And that they are not binary but rather a continuum. And that there are lots of other factors like employees who were great when they were hired but stopped caring and are "coasting" or just burnt out, who got promoted or transferred when they shouldn't have been and are bad at their new level/role, and so forth.
The Pareto distribution isn't particularly relevant here, because a hiring process isn't trying to get a whole slice of the overall labor market with clear cutoffs. For any position, it's trying to maximize the performance it can get at a given salary, and we have no reason to expect the errors it makes in under- and over-estimating performance to be anything but relatively symmetric.
So a Gaussian distribution is a far more reasonable assumption than a slice of the Pareto distribution, when you look at the multiplicity of factors involved.
When A doesn't like B it doesn't mean A or B are necessarily unfit to work at the company, but it generally results in the subordinate being framed as underperforming or not being given the resources to perform.
It's not an assumption. See the evidence referenced in the footnotes.
It is absolutely an assumption. The "evidence" in the footnotes is about national salary data. Not the distribution for any individual position at a company.
And it is entirely possible (and probable) that performance at each position is distributed as a Gaussian, and all those Gaussians add up to a Pareto at a population level.
But you simply cannot take national-level data and assume it applies at the micro level. That's not how statistics works.
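That coexistence is easy to demonstrate with a toy simulation (assumed numbers, standard library only): make each position's performance Gaussian around its own level, draw the levels themselves from a heavy-tailed distribution, and pool everything.

```python
# Toy sketch: per-position performance is Gaussian around the position's
# level, but the levels themselves (drawn log-normally here, an assumption)
# are heavy-tailed, so the pooled population grows a heavy right tail.
import math
import random
import statistics

random.seed(7)
population = []
for _ in range(500):                        # 500 distinct positions
    level = math.exp(random.gauss(0, 1))    # heavy-tailed position level
    # 40 people per position, Gaussian spread around that level
    population += [random.gauss(level, 0.1 * level) for _ in range(40)]

mean = statistics.fmean(population)
median = statistics.median(population)
p99 = sorted(population)[int(0.99 * len(population))]
print(f"mean/median: {mean / median:.2f}")   # > 1: right-skewed pool
print(f"p99/median:  {p99 / median:.2f}")    # many times the median
```

Within any single position the data stays bell-shaped; only the pooled national-level view looks heavy-tailed.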
Would that be cool? We could posit the implications of all sorts of improbabilities. But I feel more strongly about how cool it would be that P = NP.
All this aside, being laid off sucks - being pushed out, even when you're a high performer, sucks even more. The truth is that "data science" does not help you process grief the way reading Dostoevsky does, so maybe getting an A in your liberal arts education is valuable even when you are working as a software developer.
What's interesting is that school grades often don't follow a normal distribution, especially for easier classes. I suspect that getting an "A" was possible for 95%+ of students in my gym class and only 5-10% of the students in my organic chemistry class.
In the same way, some jobs are much easier to do well than others.
So we should expect that virtually all administrative positions will have "exceptional" performance, which is to say that they were successful at doing all of the tasks they were asked to do. But for people whose responsibility-set is more consequential, even slightly-above-average performance could be 10x more meaningful to the company.
Another example where this analogy stops working is that in school, students usually get the same or comparable assignments; that is somewhat the point of those. As the go-to hard-problem person at my current workplace, I am pretty sure that it is absolutely impossible to compare my work to the work of my colleague who just deals with the bread-and-butter problems; it isn't even the same sport. How would you even start doing a productivity comparison here, especially if you understand nothing about the problem space?
A significant percentage of people in an organization create the problems they solve.
Managers suggest that an employee must "go above and beyond" their ordinary duties to get an exceptional rating.
But that just means that "going above and beyond" is, in fact, a duty. The problem is it's an ill-defined duty which is even more susceptible to the whims of what the manager thinks counts as "above and beyond." Good managers give clear rubrics of performance.
To me, "meets expectations" says that the employee's error rate was at acceptable levels and "exceptional" means they had almost no errors whatsoever.
There's ample research that Welchian stack ranking, and assuming a Gaussian distribution of employee performance, is not well-founded. Even its original pioneers (General Electric) have abandoned the practice (see [1]).
Not sure why there are so many commenters here defending the Gaussian model. Most researchers at this point agree that a pareto distribution is more realistic.
[0]: https://hbr.org/2022/01/we-need-to-let-go-of-the-bell-curve
[1]: https://qz.com/428813/ge-performance-review-strategy-shift
There are certainly times that you would want them included, but those can be classified under "budgeting," not gaining insight on a workforce.
If you are in a hiring freeze or not promoting, most of the curve should shift right, assuming you are hiring great people. They will probably perform better quarter after quarter. Some might counter-argue that if everyone performs better, this should be the "new expectation," but I disagree: the market sets expectations.
If you have someone at a senior level performing at staff expectations, for example, they won't be in the company for long. I hired many great engineers who later said they only looked for a new job because they were never promoted despite being overperformers.
Can you expect management to correctly identify the right people to fire, especially when they are themselves heavily over-represented in that class?
It feels weird to gloss over this since transaction costs this high have a huge impact on how the system should be designed.
In a properly functioning team, people perform different, discrete roles which are probably not entirely understood by other team members or management.
2) they are also not aligned with the replacement cost of employees, because the religion of management is that labor is effortlessly replaceable and low value
3) employee retention is not aligned with corporate performance in Machiavellian middle management, it is aligned with manager promotion for things like loyalty and maintaining fiefdom power, budgetary size, headcount, etc
4) there are no absolute or even directly derived metrics in software development that have ever worked, to say nothing of other positions
Those are off the top of my head.
I do wonder whether those implementing stack ranking are really that committed to a particular statistical model of employee productivity, or if they’re trying to solve a human and legal problem with an algorithm.
X = the individual's contribution
Y = the contribution of the system they work within
[XY] = the interaction of the individual with the system
8 represents some measure of productivity, e.g., rate of errors, millions of dollars in profit, whatever you're measuring. So the whole equation is X + Y + [XY] = 8: one equation, three unknowns.
The person who can solve for X is competent to rate people on their performance.
What to do instead of (destructively) rating people?
Build better systems for doing the work, make their work easier, give them psychological safety and job security so they can relax and enjoy their work and share better methods with each other.
(All paraphrased from W. Edwards Deming.)
Competition within organizations is for amateurs.
Well, the Gaussian distribution gives positive probability to any interval of the real line, including the whole real line (probability 1), so, strictly speaking, no.
But maybe the issue is a distribution with a bell curve or even with just a unique maximum and falling off monotonically from that maximum.
Well, then, in my college teaching, still no. Instead, commonly, roughly, there were three kinds of students: (1) understood the material at least reasonably well, (2) understood some of the material a little, and (3) should have just dropped the course but from me got by with a gentleman's C. So the distribution had a peak for each of (1)-(3): three peaks, no Gaussian!
Approximate Gaussian behavior is guaranteed, under meager assumptions, by the central limit theorem (CLT) for averages of random variables; the easiest case is independent, identically distributed (IID) variables, with more generality depending on how advanced the CLT proof is. A version due to Lindeberg-Feller long was, and maybe still is, regarded as the most powerful CLT.
Apparently ~100 years ago, especially in education, the CLT was commonly regarded as standard, true, without question, maybe some law of nature. Maybe some of the people measuring IQ, SAT scores, etc. also thought this about the Gaussian.
For me, I, in mathematical and applied probability, care first about finite expectation, conditional independence, independence, several convergence results (e.g., the martingale convergence theorem), then IID, and hardly at all, Gaussian.
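For anyone who would rather see the IID case than prove it, a minimal CLT demonstration (standard library only, illustrative numbers):

```python
# Minimal CLT illustration: averages of IID uniform draws look Gaussian
# even though a single draw is flat (stdlib only, illustrative numbers).
import random
import statistics

random.seed(0)
n_terms, n_means = 50, 20_000
means = [statistics.fmean(random.random() for _ in range(n_terms))
         for _ in range(n_means)]

m = statistics.fmean(means)      # theory: 0.5
s = statistics.stdev(means)      # theory: sqrt(1/12) / sqrt(50), ~0.041
# Rough normality check: about 68% of averages fall within one sd
frac = sum(abs(x - m) < s for x in means) / n_means
print(round(m, 3), round(s, 3), round(frac, 2))   # frac ~ 0.68
```

The point of the thread stands, though: the CLT applies to *averages* of many comparable contributions, which is exactly the assumption that per-employee performance data may not satisfy.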
He cites similar work by William Shockley, who taught both electrical engineering and scientific racism at Stanford https://en.wikipedia.org/wiki/William_Shockley (no swipe at the author, just pointing at the biased motivations of some of the researchers foundational to the idea of "high performers").
In general, when you see Pareto structures or power laws, you should think of compound or cascade effects, which in human structures generally means some form of social mediation. Affinity for a desirable skill might be Gaussian, but the selection process means that the people who _get_ to do that skill might become Pareto-shaped, because if you aren't much better than the next guy, you wouldn't stably stay at the top. Similar logic can hold for other expressions.
In general, I wish more people would read https://blackwells.co.uk/bookshop/product/Causality-by-Judea... or at least the more accessible https://mixtape.scunning.com/ before starting to conjecture from data about social systems - the math will tell you what you can and cannot speculate on.
(fun exercise: draw the causal models of IQ in https://dagitty.net/ and ponder the results)
Height is generally not considered to be Gaussian and this is exactly the kind of statistics mistake the author seems to be accusing employers of. Adult height is somewhere between Gaussian and bimodal.
Perhaps better stated as "adult human height is approximately Gaussian for a given biological sex", with an asterisk that environmental factors stretch the distribution.
I love the anecdote that people born in the American colonies came back to England to visit family, and were remarkably taller compared to their cousins due to environmental factors.
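For what it's worth, the "between Gaussian and bimodal" description can be checked with a toy mixture (the means and standard deviations below are rough assumptions, not survey data):

```python
# Mixture check for the "between Gaussian and bimodal" claim: pool two
# sex-specific Gaussians (means/sds below are rough assumptions) and
# measure excess kurtosis. A pure Gaussian scores 0; a flattened or
# mildly bimodal mixture goes negative.
import random
import statistics

random.seed(3)
heights = ([random.gauss(162, 6) for _ in range(50_000)] +
           [random.gauss(175, 7) for _ in range(50_000)])

m = statistics.fmean(heights)
s = statistics.pstdev(heights)
excess_kurtosis = statistics.fmean(((x - m) / s) ** 4 for x in heights) - 3
print(round(excess_kurtosis, 2))   # negative: flatter-topped than Gaussian
```

The two component means are close enough relative to their spreads that the pooled histogram shows a flattened top rather than two clean peaks, which matches "somewhere between Gaussian and bimodal."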
1. There is a certain skill in communicating all the important things you've done, we shall lump likability + politicking into this one for convenience.
2. There is a premium that is placed on shiny new features and saving the day heroics. A lot less priority is placed on refactoring and solving the problems before they require heroics.
3. Finally there are individual's technical and self-management skills. I.E. it's important to work on important things and be good at it.
If the company would be dysfunctional without that janitor or software engineer, and not bring in as much revenue as a result, it sounds like the model that attributes close to zero revenue to them is already dysfunctional. If the company can't function without the janitor, then a significant portion of the revenue of the company should be attributed to them.
That is a good argument for diverse hiring: people will have bad days/seasons, a fact of life. If the team is diverse, it is less probable that those bad days will correlate between different employees.
IQ and other personality traits are Gaussian, and I would expect performance to be correlated with them.
But the mythical "10X employee" would seem to imply a Pareto distribution, along with 80/20 notions of both personnel and an individual employee's day-to-day workload.
How do we resolve this dichotomy?
=3
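One standard resolution (my sketch, not anything from the thread): if output is the *product* of several roughly Gaussian factors rather than their sum, each factor is bell-shaped but the product is approximately log-normal, which is heavy-tailed enough to produce rare multi-x outliers.

```python
# Sketch of a multiplicative model (assumed factors, not from the thread):
# each factor is Gaussian around 1, but their PRODUCT is approximately
# log-normal, i.e. heavy-tailed enough for rare multi-x outliers.
import math
import random
import statistics

random.seed(11)

def output() -> float:
    # five multiplicative factors (skill, judgment, leverage, ...),
    # floored away from zero for numerical stability
    return math.prod(max(0.05, random.gauss(1.0, 0.4)) for _ in range(5))

outputs = sorted(output() for _ in range(100_000))
median = statistics.median(outputs)
p99 = outputs[99_000]
print(f"p99 vs median: {p99 / median:.1f}x")   # a several-fold gap
```

So Gaussian inputs and "10x" outliers aren't contradictory: additive traits, multiplicative outcomes.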
However, any single customer interaction is exponentially or Weibull distributed.
https://www.amazon.com/Remember-me-God-Myron-Kaufmann/dp/B00...
Which tells the story of a Jewish person who fails to persevere against prejudice, in a multifaceted and sensitive way. In one scene he gets a job as a bank teller and then realizes that in some jobs you've got the potential to screw up but no potential to distinguish yourself. The world needs people to milk cows every morning, a job you can screw up but not do 10x better than competent; there are no Pareto or other "exceptional events" distributions for many essential jobs: ER doctors, taxicab drivers, astronauts, etc.
(Productivity is a product of the system + the people)
I worked on one system that had a 40-minute build if you wanted it to be reliable; the people I picked it up from could not build it reliably, which is why the project had been going in circles for 1.5 years before I showed up. With no assistance (and orders that I was not supposed to spend time speeding up my build because it didn't directly help the customer) I got it down to a 20-minute build.
Other folks on the team thought I was a real dope because my build took too long and I was always complaining, but they couldn't build it reliably at all. I made two major releases of a product with revolutionary performance in one year, at which point I felt that I'd done the honorable thing and that I'd face less backlash anywhere else, whether or not I was creating more value. So I moved on, and was told by recruiters that they hadn't found a replacement for me in six months.
Had the place I was working at had a 2-minute build, they might never have hired me, because they would have had the product ready long before.
Like, is the system helping to maximize happiness distribution within humanity while maintaining biodiversity in its highest concomitant expectable dynamics?
Height cannot be negative; thus, it is not Gaussian. IQ cannot be negative either. A great many things that most people think are Gaussian are not.
One such distribution for one-sided values, the log-normal (logarithms of the values are distributed normally), has the interesting property that for some d, values x = mean + d are more probable than values x = mean - d (heavy tail). Also, a sum of log-normally distributed values converges to a Gaussian only very slowly; in practice such sums are often better approximated by another log-normal.
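The asymmetry claim is easy to check numerically (a standard log-normal with parameters assumed for illustration):

```python
# Numeric check of the asymmetry: for a standard log-normal
# (ln X ~ N(0, 1); parameters are an assumption for illustration),
# values above mean+d can be MORE probable than values below mean-d.
import math
import random

random.seed(42)
sigma = 1.0
mean = math.exp(sigma ** 2 / 2)     # log-normal mean, ~1.649
d = 1.5                             # chosen so mean - d stays positive
samples = [math.exp(random.gauss(0.0, sigma)) for _ in range(200_000)]

p_above = sum(x > mean + d for x in samples) / len(samples)
p_below = sum(x < mean - d for x in samples) / len(samples)
print(f"P(X > mean+d) ~ {p_above:.3f}")   # ~0.13
print(f"P(X < mean-d) ~ {p_below:.3f}")   # ~0.03
```

The heavy right tail puts several times more mass a distance d above the mean than the thin left side puts the same distance below it.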
Have you been keeping up with current events?
Wages tend to be smaller than asset income. Top sports players and musicians work for wages and become billionaires. Startup founders, who own assets, become trillionaires.
Obviously, there are differences. Wages are not productivity. (But the article didn’t say how productivity was measured.). Also, a company can choose who joins and leaves it. So one company’s wage distribution doesn’t have to follow the distribution of the wider economy.
1) treat poor performers as bad hires and ignore them in your dataset
2) treat 10x performers as needing to be promoted and also ignore them in your data
3) treat everyone else as relatively equal
…and use “Pareto distribution” and “no one has mentioned this before” to write a blog post?
Is the point of the article to give people who disagree with 10% corporate culling a pseudo-intellectual economic buzzword argument to stroke their hatred of an inefficient HR practice? If so:
1) 10% culling in performance review is a mechanism to cull “bad hires”. I find it difficult to understand how the author can argue it’s a bad practice and then state that you cull bad hires from your dataset without thinking that they are the same thing or at least largely overlapping.
2) If the author is proposing to separate performance review, culling bad hires, and promotions, into 3 separate systems and assume no overlap, he should think through the structural issues more. While it’s possible to design a management structure where the organization is at a constant state of no bad hires, all 10xers promoted, that is putting a lot of responsibility on individual managers to run review, culling and promotion by themselves at a very high level. It’s brittle - a few bad managers not running the system can easily leave your organization bloated with bad hires and no fallback (fallback = performance review process).
3) The system of performance review is equally about risk management to the business as it is about rewarding your employees. IMO, the author’s framing simplifies the problem too much and pushes the complexity out for other people to deal with. It’s the kind of thinking that is damaging to organizations… I wonder if there is a process to cull this kind of thinking from your org… wait what time of year is it??
That being said, I like to think that startups growing into large corporations have an opportunity to be better when it comes to things like performance management.
Most of the big companies just throw endless interviews, high pressure firings, and a lot of money at the problem and make the people below them solve the rest of the problems.
They see how much they are paying for the mess, but any medium-term effort is torpedoed because of all the other things the business focuses on (lack of resources for the process and training), and by other powerful individuals who want to put their own brand on hiring and firing and who have significantly more ego than sense.