I've had to deal with plenty of colleagues who moved very fast, committed extremely often, and were praised by management for the amount of work they produced. But in reality their work was rushed, sloppy, riddled with issues and a nightmare to maintain. And it would inevitably fall upon us "lesser" performers to actually investigate and fix all those issues, making us look less productive in the process.
In my opinion a better metric to quantify someone's performance is the number of problems they solve vs. the number of problems they create. I bet that many of the rockstar programmers out there would land on the negative side of that particular scale.
Management literally can't tell the difference, sometimes even when the manager is a former dev. There are many ways to ship code faster:
-Don't test
-Don't worry about sanitizing inputs--extra effort slows you down.
-Optimize for writing, not maintaining
-Take a tech-debt loan. Bolt your feature somewhere convenient it doesn't belong.
-Put on blinders. Your n+1 query problem is Tomorrow's Problem
-Avoid refactors
-Choose the first architecture you think of, not the one that is simpler, more flexible or more useful
-DRY code takes too long to write--you'd have to read more code to do that! That's bottom performer thinking!
-Securing credentials is hard. Hardcode them!
Remember, if you want to be a top performer and have breathless articles written about you, ship many commits, fast! Also, this is a virtuous circle--if any of these become "problems" you'll get to ship more commits!
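Of the items above, the n+1 query one is concrete enough to sketch. Below, a fake query counter stands in for a database round trip; no real ORM or schema is assumed, the names are invented for illustration:

```python
# Minimal sketch of the n+1 query problem. The "database" is a dict and
# query_count just makes the round-trip cost visible.
ORDERS_BY_USER = {1: ["book", "lamp"], 2: ["mug"]}
query_count = 0

def orders_for(user_id):
    """One round trip per user: the n+1 pattern."""
    global query_count
    query_count += 1
    return ORDERS_BY_USER.get(user_id, [])

def orders_for_many(user_ids):
    """One batched round trip, e.g. SELECT ... WHERE user_id IN (...)."""
    global query_count
    query_count += 1
    return {u: ORDERS_BY_USER.get(u, []) for u in user_ids}

users = [1, 2, 3]

for u in users:              # n queries here, plus the one that fetched `users`
    orders_for(u)
n_plus_one_cost = query_count

query_count = 0
orders_for_many(users)       # a single query regardless of len(users)
batched_cost = query_count

print(n_plus_one_cost, batched_cost)  # 3 1
```

With three users the loop costs three queries while the batch costs one; the gap grows linearly with the table, which is exactly why it becomes Tomorrow's Problem.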
I feel like many people downplay how important this is. I've wasted way too much time because of this. Doing code archaeology to understand why data persisted to the database many years ago breaks some seemingly unrelated feature for a customer is definitely not my favourite part of the job. Working on a validator that someone was "too busy to add" in the first place is not fun either (and a waste of time, because the original author could probably have done it in a matter of minutes, whereas someone fixing things after the fact has to reverse engineer what is going on, check whether some funny data wasn't persisted already, and potentially handle it).
To phrase my frustration more constructively: it's always a good idea to put explicit constraints on what you accept (what characters are allowed in a text-like id: only alphanumerics? Only ASCII? What happens with whitespace? How long can it be?). Otherwise it's likely you will find some "implicit" constraint down the road, i.e. some other piece of code breaking down.
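To make the point concrete, here's a minimal validator sketch. The specific rules (alphanumerics plus `_`/`-`, ASCII only, max length 64) are illustrative assumptions, not a standard:

```python
import re

# Explicit, documented constraints on a text-like id. The exact rules here
# (charset and the 64-char cap) are assumptions chosen for illustration.
ID_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def validate_id(raw: str) -> str:
    """Reject anything outside the documented constraints up front,
    instead of letting an implicit constraint surface in unrelated
    code years later."""
    candidate = raw.strip()
    if not ID_PATTERN.fullmatch(candidate):
        raise ValueError(f"invalid id: {raw!r}")
    return candidate

print(validate_id("user_42"))  # user_42
```

The point isn't the regex; it's that the constraint is written down in one place, so bad data is rejected at the boundary rather than discovered during code archaeology.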
On the contrary, small refactors are a great way to boost commit count! Management once tried to boost productivity by rewarding commits. Our team basically started manually committing automatic refactorings. We won by a landslide, but I don't think the company did. I got a nice prize out of it though.
In an environment that rewards commits, unit tests can be a source of many little commits.
Put a non-technical VP/manager in charge. Maybe someone who started out as a UI designer or non-technical PM and then somehow became VP of engineering because they were an early loyal employee.
Charismatic coders will take advantage and the manager will eat it up. The "rock stars" commit tons of code but create massive tech debt. They write their own buggy versions of things they should have just imported industry-standard OSS versions of: rule evaluation engines, broken file formats, broken home-grown encryption, broken distributed job systems, heavily broken home-grown DB frameworks.
They'll have a huge bias towards terse code that doesn't make a lot of sense. Abuse of functional code. Everything is the opposite of KISS.
Everyone else on the team is just constantly fixing their showstopper level bugs, of which there are always many.
They talk a good game and management constantly thinks the rock star is super smart and the rest of the team is deficient.
Then the tech debt reaches ridiculous levels, or you get bought and different management comes in and sees right through it. Managers get let go, new managers don't buy the rock-star story. The rock star gets frustrated, leaves for another company where they can pull the same trick, and puts all this inflated stuff on their resume about writing all this stuff they shouldn't have written. A new naive manager falls for it, thinking they must be really smart, because they're never going to find out how broken it all was.
All this is WAY easier for a rock star to pull off after around 2010, because office culture in tech has become so PC that no one can be blamed for anything. Lots of stuff that would have gotten someone shown the door early in my career would not even get called out today.
This is one of the most common complaints I hear about fast devs: that they produce lots of bugs. But what I've seen is that a fast dev produces 5x more code than normal devs and 5x more bugs. The ratio is the same, but it feels different because they're producing so much more code. You then get the devs who say they have to spend lots of time investigating these bugs and so look worse. But I've literally seen the fast dev go onto a bug that one of the other developers had spent two weeks investigating and find the issue in the data within five minutes. You also have the slow and careful devs who will write 5 functional tests and 1 unit test, adding 10 seconds to the build (not much, but it adds up when it's every issue), and still have the same ratio of bugs as the guy writing 1 unit test.
I think the reality is that a good dev is someone who produces the most business value. And something lots of devs don't want to hear is that the tiny little tweaks they make to improve code quality add very little business value, whereas a required feature that works 90% of the time adds lots of business value. I think a lot of the complaining about these rockstar devs is just jealousy. Code quality is one of the smallest tech problems, yet so many devs think it's super important. No: getting stuff delivered so other departments and customers can do things is. Being able to plan things out is super important. Having a good data model is super important.
this this this.
More code might equal more bugs, but if it's a net gain in business value everything else is a secondary concern.
It doesn't mean you can just ship crap all the time. If customers start complaining/seeing errors/bad performance, business value decreases and there are going to be some $discussions.
If developers are rewarded for shoveling garbage into the pipeline all the time, there are deeper organizational issues afoot.
What would those fast devs do in a vacuum? Or with only clones of themselves as coworkers?
What happens when one of those fast devs quits?
--
If everything is/would be fine, then yes, they are truly rockstars (in all the good meanings of that word and none of the bad). Or maybe merely decent among mediocre ones.
Otherwise, there is a free-rider aspect in their approach.
--
Also, whether it works at all depends on the criticality level of the industry.
Ship broken crap quickly (but fix it quickly too)? Good if you are creating yet another social website maybe. Less good for e.g. medical devices.
--
One more point: the business value approach is not necessarily the only one to apply, especially if there are tech infrastructure components in what you ship. You can easily be stuck in local optima too far from global ones, and fossilize some aspects of your product. See for example the classic http://blog.zorinaq.com/i-contribute-to-the-windows-kernel-w... which includes some examples of death by a thousand cuts.
What I mean is that moving fast is partly in the eye of the evaluator. Maybe you implement what the PM wants quickly, and that's cool, but maybe doing only what the PM wants is not the best thing for the project.
--
If you are easily able to plan things, have a good data model, and can develop quickly, probably you don't have a real code quality problem to begin with. At least not in the parts you contribute to. I don't actually distinguish the data model from "code" that much: it's all design.
--
Final thought: imagine you actually are a good fast dev like you describe, and your colleagues are less good, but the whole organization would benefit from you slowing down a bit and working on ways to collaborate more efficiently with them, or on making them improve, yielding even more business value overall. This can happen too.
A couple months later, a pair of engineers on the team had to spend 4+ weeks developing an "installer" to properly install & configure our app, as it had grown too complex to install & configure by hand. Management couldn't really figure out why...
There needs to be some incentive not to let people shit all over the code base for everyone else to clean up. Reviews aren't enough: all code was required to be reviewed by owners and there was still lots of this.
Whatever the fallout from people feeling "blamed" may be, attrition of all of your genuine programming talent due to tech-debt-machine peers being promoted ahead of them is not exactly an ideal outcome either.
What's more, very often genuine potential in naturally talented new programmers can be stunted if they're rewarded for lazy faux "productivity". I've seen this: a programmer has a genuine interest and passion for quality, but loses it over time due to a focus on doing (different) work that gets them promoted.
Code ownership (being required to "own" one's work along with the bugs & maintenance burden that come with it) is one of the most valuable ways programmers learn. This needs to be balanced, as one runs the risk of a bus factor of 1, but it's still vital.
Competitive entities almost never fall victim to the anti-blame kind, but where there is little competition, they are just as common as their opposite.
If you are a trial-and-error type of programmer then of course speed matters a lot, because it directly determines how fast you can iterate and grow your sample space of trials. The frequency of commits and PRs can be seen in the same light.
So IMO it's hard to find a true universal measure to identify the top programmers.
What's even worse is if this metric ever becomes a target, Goodhart's Law will apply, and then Campbell's law will make it as useless as LOC.
Check out pages 71:26 and 71:27 for "Codebase introduction and retention" between Clojure and Scala. I'd like to see more graphics like these to illustrate "lifespan of commits"
This increases the change count.
If there is a review process where another engineer has to approve work, this exacerbates the gap, as the go-to person can get their reviews done quickly. If they're trusted, the reviews might not be thorough.
This increases the rate at which changes go in.
These and other factors suggest that it's hard to split cause and effect here. Being seen as productive increases change count :)
And then measuring how many commits they do and wondering why those are correlated? Even besides the good reasons you point out ... this seems very obvious.
Also, my experience has shown that smaller commits are easier to work with so people with experience tend to make more smaller commits. This also seems to be fairly widely talked about.
I have noticed that pretty much every software engineer to some degree has problems that they procrastinate on. Folk can spend 10x more time talking about doing something than it takes to do it. For hard problems, that discussion is necessary and beneficial, but lots of problems just need someone to open up the text editor and get it done.
What I found was that it's not just procrastination. Many folks are just afraid to commit code, like literally scared, and I could never get a real reason for it. At one point I added some code based on the direction the requirements were taking, but I could not convince him to commit it.
So we finally agreed to leave it there commented out, only to find a week later that we needed it.
Unfortunately, the most business critical pieces of code tend to be the least regression tested. It's the earliest stuff that was made before any test frameworks were matured, it's been hacked on countless times by half the team based on shifting requirements to the point that no one understands it fully, and any future changes are such a high priority that it's "faster" to test it manually or in production.
I am of course guilty of that myself on some pieces of code. I try to prioritize cleaning up expensive tech debt, though. When folk are hesitant to modify a piece of code, it's a strong indication that the code is due for refactoring. It's always worth it as long as you implement regression testing in the process.
I was "afraid" to commit on a team where code review had evolved into heavy micro-management with inconsistent requirements on what good code looks like. The reviewer iteratively forced you to change it again and again, each time with "this is bad code, can't go in" comments. But you could not learn what he considered good code, because it was different every time. I left after that.
The second time I was "afraid" to commit to a part of the code was when our main architect had a completely disproportionate blow-up over a previous bug. I made an easy-to-fix bug, which was my mistake. But it turned into a massive public blow-up about the work being shitty, about us intentionally ignoring his needs, about the version being unusable with tons of bugs (there was literally one bug). Then he wanted a massive refactoring to rule out the possibility of the same bug ... and I made a bug in the refactoring, which led to the same blow-up, again claiming the work was done without care, etc.
After that, I really did not want to make any changes in that code. I can't guarantee a complete lack of bugs in my code. Other people write bugs too, for that matter; I don't think I make so many more of them. But he was under pressure and stress that had nothing to do with me, and I became the lightning rod for it.
When a developer fails to follow through on their tasks and takes extra time, the project's budget goes up.
I'll give a coworker a hard time for making large infrequent commits, but I've never seen someone afraid to commit code. This sounds like the value proposition for version control hasn't really clicked for them.
Are they comfortable branching?
When I see this happening in the team, I immediately try to get the engineers to implement 90% and then talk about the missing 10%. There are a million reasons why a solution isn't perfect, and we need to put as much thought into it as possible. At the same time we have to keep in mind that everything we do is a tradeoff. If you think your solution is perfect, you probably lack knowledge.
I prefer to ship 90% and then see how we can improve on that with data I can't produce if I don't ship anything. Talking in circles about hypothetical future problems doesn't solve the problems we have right now.
Don't get me wrong, I'm one of those 10x-thinking-time guys myself. But I know how to keep the ball rolling in a team and take responsibility for these types of decisions, which amount to educated corner cutting.
The best, most long-lasting code I ever wrote took a long time to write. Sometimes I have to think about a single small feature for multiple days before I begin to implement. Some foundational structural technical decisions require weeks of thinking and analyzing. It's the best way to guarantee that you don't have to come back to rewrite the code later.
When I was younger, I would refactor some of the foundation logic every few months as I added more changes on top. These days I almost never refactor the foundations. That extra time is totally worth it. It's very hard to come up with a good design.
Let's say a team has 2 developers. Josh is 20% "better" than John, or simply started earlier and has more context on the code base. So initially Josh is 20% faster, but now John has to spend an extra 20% of his time reviewing Josh's code in pull requests (or understanding Josh's code so he can make a change) instead of making forward progress. So now John is actually 20% less effective than he could be, and he has less time to actually code. And since John is less productive, Josh has __more__ time to code, so he's even faster, which means he writes more code. It compounds: even a slight initial advantage in speed or context for one developer can amplify itself over time.
A good engineering manager or senior engineer can detect when that's happening and try to correct the balance. But often the team kind of settles into a mode where Josh is known to be better and more productive and everything is funneled to him.
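That compounding loop can be made concrete with a toy simulation. Every number below is invented for illustration (the 20% review tax comes from the scenario above; the 2% weekly familiarity gain is purely an assumption):

```python
# Toy model of the compounding productivity gap; all numbers are made up.
josh_speed, john_speed = 1.2, 1.0   # initial relative output per week
review_tax = 0.2                    # share of John's week spent on Josh's PRs
josh_total = john_total = 0.0

for week in range(10):
    john_total += john_speed * (1 - review_tax)
    josh_total += josh_speed
    # Assumption: familiarity with the growing codebase accrues mostly to
    # Josh, so his speed inches up while John's stays flat.
    josh_speed *= 1.02

# The cumulative gap ends up wider than the initial 1.2x head start.
print(round(josh_total / john_total, 2))
```

Even with these mild assumptions the output ratio drifts well past the starting 1.2x, which is the "it kind of compounds" effect in miniature.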
Josh has the initiative, forcing John to react.
--
In my experience, Josh is a firehose of chaos, doesn't test their own work, colors outside the lines. So in addition to John reviewing Josh's torrent of bs, John is always playing catchup, always has to do more rework.
Further, it's not a balanced relationship. Josh creates urgency to fast-track approval for his own PRs, then goaltends John's work. Pedantry over everything. Lets PRs go stale, so John has to re-merge, resetting the whole process. Insists the commits are "too big", "hard to understand", and therefore need to be broken up.
Etc.
Individual agility and velocity metrics are evil; they reward dysfunctional behavior.
If the whole team isn't committed to getting the whole team across the finish line, it's not a proper team.
PS- Additional dysfunction if John is constitutionally incapable of refactoring, removing dead code, and other good citizenry.
I'm lucky in that my company recognizes and appreciates the quality I provide with my work and encourages it in others, but I'm not sure how to actually address the imbalance. Many times I've thought of just asking engineers to put more effort into self-reviewing their code, but I always feel like it would be too rude.
One consistent tension is that the old-hand developers have an enormous "mental issue queue", but without fail, every time, those issues just can't be transferred.
These can't be made into issues and farmed out to other people. Inconsistencies in the data model, for instance, might exist, but a better solution isn't obvious. You can hand it to someone fresh, and after significant effort (on both of your parts) they agree with the inconsistency, but they won't propose a solution that's any better.
Once you've contributed enough of the main functions of a code base, you just never lack for something to do. All code is bad, because the business focus is on expansion of responsibilities over refinement of the existing ones.
Those ghost issues are best communicated with a code change, at which point all observers say WOW, WHY DID WE NOT SEE THIS?! But the issue without the code change gets gawking blank stares.
EDIT: but fixing unrelated issues as you go is bad, don't do it!
> a better solution isn't obvious. You can hand it to someone fresh, and after significant effort (on both of your parts) they agree with the inconsistency, but they won't propose a solution that's any better
> Once you've contributed enough of the main functions of a code base, you just never lack for something to do.
Hello, friend, I see we know each other well.
He confided after drinks it was because he didn't know how to squash/amend his commits.
So I'm not so sure about that metric....
I view it as a clear antipattern (since the history within a branch can be valuable later if you need to cherry-pick apart a feature or find a bug with git-bisect) and have asked superiors in numerous places why they require it, and the response is usually a vague mention of “it cleans things up” and “history isn’t important”. It feels like the kind of practice that was mentioned on a screencast and just got cargo-culted, but I have to think it originally had some purpose.
If your workplace requires just 1 commit per change, no matter what the scope, that doesn't make a lot of sense, but there's a lot of room between never squashing and squashing all changes always to 1 commit. Both those extreme approaches don't make much sense to me. Some history is important, some is not.
Squashing commits doesn't have to mean turning 50 commits into 1. It can mean reordering, squashing some commits, tidying up commit messages and generally editing until the set of changes is clear and coherent. This lets you commit early and often during development on an unpublished branch without concern, then tidy that up into a coherent set of changes for readers (including your future self). The absolute numbers don't really matter, for me at least it's more about reorganising and editing changes to read coherently and be properly separated. For example if you need steps 1,2,3,4 to make a change, keep those separate but don't include 2a,2b,2c which were exploring 2 and finding a few places you missed a change when you tested it.
I see it as basic respect for future readers, much as you might revise and edit an essay or novel before publication, revising and editing your code changes at least once often makes them better and clearer.
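That tidying doesn't even require the interactive rebase editor. A sketch of one non-interactive way to collapse a couple of "wip" commits into the real one, using `git reset --soft` in a throwaway repo (the repo, file, and messages are invented for illustration):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name Dev

# A feature commit followed by two fixups you don't want in the history.
echo "v1" > feature.txt && git add feature.txt && git commit -qm "Add feature"
echo "v2" > feature.txt && git commit -aqm "wip"
echo "v3" > feature.txt && git commit -aqm "fix typo"

# Move HEAD back two commits, keeping their changes staged, then fold
# them into the original commit.
git reset -q --soft HEAD~2
git commit -q --amend -m "Add feature"

git log --oneline   # a single commit, containing the final v3 content
```

For anything fancier (reordering, splitting, keeping steps 1-4 but dropping 2a/2b/2c), `git rebase -i` is the tool; the soft-reset trick only handles the "squash everything since X" case.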
So I think squashing the commits just before merging makes complete sense. Before that, the committer is free to do whatever he/she likes.
Of course, if the diff is so big that multiple commits are sensible, that means the PR wasn't broken up correctly. Then it's fine to have multiple commits, but the problem is elsewhere, not in the squashing.
Also in open source I think it can be easier to keep track of the history if one commit == one PR.
I agree people kinda just cargo cult it though, good to be thoughtful about the trade offs for your team or project.
I understand why it's there. People want to sweep the details away; I feel it about my own commits as well. However, if I have to go back and read the history, I'd much rather read the ground truth.
(I.e., stating the obvious, if you are intensely working on something on your own, magically starting to atomically commit each change with a thoughtful message will not make you better but can easily eat 1/3 of your time. It’s OK for your commits to be “fat” as long as you yourself manage to keep track of your work in meantime.)
I occasionally read back a multi-page commit message I wrote and wish I had dumped even more info from my brain at the time.
If you work on small and short projects, you can do pretty much anything though.
Of course it can still be unreasonable, but that's quite like anything else. But if in doubt, I would say write more; because if you really track your time in a detailed way, you will see that the marginal additional time is often shorter than you feel. And it taking 1/3 of your time may even be justified in some cases (but maybe you should write some doc in another way then).
And I would not be so sure about it not magically making you better though: it is sometimes extremely useful to just write things down, quite like it is useful to explain a bug to a rubber duck.
But a lot of the time my work is a string of exploration and in-depth sizeable PoCs until something solid and worth documenting rigorously is born. I strongly feel that if I spend time writing up each change as a “green” atomic commit at earlier stages when things are routinely rewritten and rethought, I might never arrive at the latter stage in a satisfactory way.
During such intense work, commenting about my implementation is a radically smaller context jump for me than switching my mind into “commit mode”. This is likely individual: I tend to be overly perfectionist with commits, trying to never mix style and logic changes in one, to make sure a fully functional version of software can be built at any given point in history, etc.
From another angle, I think a lot can be achieved with other forms of documentation, which you mention and which should not be underused either. Of course, best is having your APIs, architecture and units properly documented. It is undeniably bad form to refer others or future yourself to comments, especially for why’s. However, it’s arguably even worse to direct them to commit messages (though in a team it could be an invaluable troubleshooting instrument, that is not the context I have in mind).
(I would never advocate “fat commit” approach to be applied to teamwork on larger projects. Just wouldn’t recommend taking commit count as an axiomatic measure applicable at all times.)
So much for velocity and ownership. As for the part about staging commits, I'd like to see some evidence. In my experience there's no difference, or sometimes the maintainers are even less likely to break up commits. This can be because maintainers are often charged with making commits with lots of internal dependencies that make them harder to break up, or because it's easy to get a stamp even on a questionable commit from people who are dependent on their goodwill to get their own commits in.
Ideas are cheap. Show me the code.
Asking, explicitly or implicitly, for others to "implement/finish" their ideas is easy. I would even call never finishing / polishing anything very disrespectful: I'm not (should not be) here to cleanup after "talented" individuals. This is detrimental to my own "ideas."
So at least, if not anything else, I better be recognized for the boring and tedious maintainership work I do and that talented people with their "ideas" refuse to perform...
I mostly agree with you, but also don't want to be too developer-centric. Sometimes the ideas come from people who aren't primarily developers - production engineers, system architects, etc. Let's say an idea from such a person is fundamentally good and will benefit the project but they lack the time or skill to do more than prototype it. What should happen?
(a) Drop it on the floor.
(b) Let their patch(es) languish and eventually get reaped. Same result in terms of functionality, plus contributes to an "maintainers aren't open to new ideas" reputation which harms recruitment/retention.
(c) Get a developer with more knowledge/skill to finish it. In general, this is just going to be the maintainer, because nobody else will have a strong enough sense of ownership to sacrifice time toward their own goals for it.
Obviously this is going to be case by case, but I'd say that (c) is at least sometimes a valid answer. Depending on the makeup of people involved with the project, which in turn might depend on the nature of the project itself, it might even be quite often.
I think sometimes that mountain of code starts to become a liability. I'm a little better at reading a lot of code, consolidating, and fixing bugs. Importantly, fixing bugs without breaking other stuff (usually; coding is hard).
If you buy the adage "make it work, make it right, make it fast", you'd probably buy that most people fall into one of those categories and excel at it (there are rare jewels who are amazing at all three; Carmack is maybe a good example).
Anyway, I'm not a top performer. I have my moments of glory, and I think I deliver good value. I try to avoid git stats. I peek from time to time, and I'm super pleased that I've deleted about 2x the amount of code I've added but, that's maybe me protecting my ego.
Everybody needs code, some people need code to be right, even fewer people need code to be fast. Different people bring different skills to the table. Be real careful about how those different aspects play into reaching goals.
Not knowing this leads to a lot of misunderstandings.
https://en.wikipedia.org/wiki/Big_Five_personality_traits
https://en.wikipedia.org/wiki/Temperament_and_Character_Inve...
Different traits become strengths or weaknesses depending on the type of problem being solved.
If the solution is known, the disciplined/conscientious trait holders shine. If the solution is unknown, the neurotic shines, because they don't methodically explore the search space (which matters a lot if it's large). If there is a lot of conflict, everyone loves the agreeable trait holder. And teams full of introverts get boring, as people don't develop the deep personal connections that extroverts enable, etc., etc.
Indeed, I think this is a big component of the differences in how many lines someone adds. Some people solve problems by copying code and then modifying it, while others try to come up with general solutions to remove duplication. They're two different styles, both with advantages and disadvantages. I think duplicating gets you to an initial prototype more quickly, while general solutions are more maintainable because you only have to fix a bug in one place.
I think someone good will avoid just copy-pasting the code. Shallow similarity makes it easy to thread a boolean or do something tricky with dependency injection. What I'm getting at is folks who are very capable, very fast programmers: if things aren't easy to reuse, they'll just go ahead and implement a whole new subsystem with different logs, metrics, and error conditions.
I guess an example might be a type checker versus a compile-time evaluator. They have a lot in common, but they're different. Adding another traversal of the AST isn't that big of a deal, really. But there will come a time when all those passes start to be an issue; they have a lot in common. Maybe it's better to fold them all up into one or two passes.
Sometimes things are complicated. Sometimes you need to hold all that complexity in your head at once and really pick out the commonality. But that's rare. Just adding more code is a great answer for a long, long time.
I like running the above command because it gives a good sense of coding productivity at the very least. And then you can dig into specific people to understand why exactly they have lots of commits or not. Most people use Git in a very similar fashion so you can very quickly make a generalization about how they commit if you look through their last N commits.
Some 'high commit count' people have lots of low value commits, e.g. lots of 'fix it' commits.
Other 'high commit count' people are very productive but mix in a lot of trivial commits, e.g. lots of 'another commit to change this comment', leading to a PR of 9 super-small commits + 1 real commit that's readable alongside their actual work.
Others are actually just more productive than other people. That might be because their code changes are simpler, or in an area that's easier to be productive in and write lots of code. Or they just work more. Or they work at a higher velocity because they understand the codebase and domain better.
Definitely don't _only_ use these metrics because some people just code slower and put out 1 large PR, but I can definitely believe a pattern of people at the top end of productivity who put out both small and large PRs at a higher velocity than the 'less productive' people.
I would honestly just attribute those high value, high commit count people to being stronger developers overall, in my limited experience. Overall as in, not weak in any particular area, and quite strong technically in every area. The people you can put in any situation in the domain and they'd probably succeed. Because they're strong across the entire codebase, their productivity is just generally higher no matter what they're doing.
Part of that is motivated by the will to save often in case something goes wrong. Part is to let continuous integration validate with high granularity. Part is to allow for binary search on revision history to reproduce rare bugs and isolate the specific change which introduced it.
These considerations may explain the correlation with “top performing” - although for what it’s worth I try to make commits which remove more lines of code than they add whenever possible.
Edit: I do think the considered criticisms of this piece are valid, especially the idea that when a correlated measurement (e.g LOC) of a thing (e.g productivity) becomes rewarded, it blows that correlation as everyone starts optimizing the measurement rather than for that thing itself.
Yes, that is true by definition. What is not necessarily true is that all of that work involves writing code.
> generally people capable of writing more code faster are also capable of writing the right code
I disagree here, though, and it's highly dependent on what you consider to be "the right code". In my experience, the people capable of writing more code faster are the ones who don't necessarily take a step back and ask, "should this code even be written at all?"
You have to be able to make your change, whether a refactoring or a functionality change, and get it through the QA, review, and deploy process. Nowadays I spend a lot of time dealing with multiple branches and deployment environments, and my productivity is through the floor. In a previous job I was able to deploy 2 or 3 changes a day to prod. It was very fulfilling.
That measure is also one of the arguments for highly expressive languages, because developers seem to produce approximately the same length of code whatever language they use. And leads to some very good questions about why that length changes a lot from one place to another, but doesn't change much when one changes the more development-centric options (like the language or editor/IDE).
But, of course, if you start ranking people based on it, it will stop being a good predictor of anything.
https://www.folklore.org/StoryView.py?story=Negative_2000_Li...
Measuring programming progress by lines of code is like measuring aircraft building progress by weight.
I have a kid, so I don't have any time to study outside of work or put in tons of extra hours. After years of being screwed over and passed over, I don't really have the drive/hope to get to that level since it wasn't rewarded the first time. Also, the work is very boring now and isn't transferrable to other groups or companies, so I don't have any interest in being an expert just to throw away that knowledge (like I was forced to do in the past).
Not that this will stop a segment of the industry from flushing a non-trivial amount of resources down the drain learning this lesson, and inevitably leaving a chunk of stalwarts who never really learn anything. It was clear that was going to happen when the "social coding" site that everyone was flocking to put so much emphasis on activity measured in number of commits, forks, and other administrative details. Things which were only any good for advertising the site's own user engagement in its pre-IPO/-acquisition phase, and too many people mistaking it as a measurement of something else.
Committing your source code to the company git makes it very easy for others to point at you later, if things break. And it's pretty much impossible to undo once someone else has pulled your change.
In my opinion, many top performers are simply people who do what needs to be done, and when it's needed.
Even trying to do something puts you in the top half, even if it's poor quality, because a huge fraction of people are completely useless and don't ever do anything productive at all.
But is there a selection bias here? Some kinds of work invariably involve more GitHub activity, small fixes and the like. Those things are generally, unambiguously 'productive' and 'leave the code better', which leads us to label the people doing them 'top performers'. That might well be true, but only within a specific context.
I find solving new or novel problems involves a lot of work that is hacky, experimental, quick trial, often the kind of thing that in most cases doesn't even get checked in.
200 lines of crazyballs scratch code is not 'the insight'; really, 'the insight' was that the API works really slowly on the first iteration but very quickly after n iterations, which implies x, y, z possible courses of action.
I suppose you could jam it in the VCS but I've never personally cared.
Now that I think about it ... it's interesting because that's definitely not what a VCS is for, though it could absolutely be used in that way.
A VCS really is not that great to store arbitrary, secondary related activity and notes wherein 'the code' really isn't the important thing.
What's missing here is really a form of document/information sharing that just hasn't caught on very well. Or perhaps I'm still caught up in the ridiculous Confluence/Atlassian garbage, which is the worst wiki ever made.
The goal of software management is to keep everybody productive. If you have an imbalance in commits or pull requests in the main repo/branch, that’s a strong sign of a broken process. Tasks should be given according to familiarity and skill so this doesn’t happen.
This also says something about software design. Good design is easy to split up. Bad design requires a ‘go to’ person. Therefore, a ‘go to’ person is by definition not a good software engineer, because their design was bad, and/or they never fixed it. And if it worked the first time, you wouldn’t have to ‘go to’ anybody.
The skill ladder of software engineering goes something like: watching, practicing, contributing, designing, teaching, leading. Every developer goes through this process from scratch in every project. Getting stuck is a problem (as is skipping steps). Preventing other people from progressing is a bigger problem.
Perhaps rethink this.
It is related to the unavoidable tech debt that any project has; only the strongest people see it and work on it.
I have seen this pattern so many times. It's a good strategy to be recognized by management.
Also, what happens when the top committer leaves? Suddenly the rest of the team can breathe and flourish.
If I've got bunch of problems, I'll work on the easiest ones first, saving the hardest for last. This clears my mind from the drag from the easier problems, and as a side effect I wind up committing more.
There's no particular correlation between how hard a problem is to solve and how much time it takes to solve it. So hit the easy ones first and make your users happy!
I won't consider myself a top performer, but I have my moments of glory. Most of my impactful commits were deleting unwanted code and needless libraries from the codebase.
> ... define top performers as ... a go to person
So circular logic. Got it.
In a related story: employed engineers have code contributions to the company that are way higher (infinitely higher, actually) than those of people who aren't at the company.
Price's Law?
"50% of the work is done by the square root of the total number of people who participate in the work" -> "You are working on a team, and there are the superstars who do most of the work or seem to produce most of the outcomes and then there is everyone else."
https://expressingthegeniuswithin.com/prices-law-and-how-it-...
Fast forward a couple of months, and we had a customer complain about the Gantt charts being off. I had a closer look at the thing, and it turned out that the bars on the chart had been drawn using the Math.random() function. They came out different each time you refreshed the page and were in no way related to the real thing.
Really! Version control is one of the biggest improvements to software development I have seen.
Now if we could only get the rest of the customer chain to start version-controlling their requirement docs and properly minuting meetings and action points.
If you really don't like git, try Mercurial. I haven't worked with it in years, but it was very easy to work with.
If I was a manager and asked to lay off X percent of the engineers, I would totally take into account number of commits among other signals.
- Top performers tend to be motivated.
- Just because someone is a "top performer" in one project/team/company doesn't mean this person will be one in another project/team/company. Reasons can be manifold, motivation can play a big role between someone being a good and someone being a top performer.
There are top performers who do not produce the largest number of commits and PRs.
They produce the ones that are the most difficult to get right.
There are mediocre performers who produce an awful lot of commits and PRs, confusing volume with substance.
In June of this year, my entire team was struggling after moving to a new Kafka cluster because of low throughput in one of the backend services (about 250k/min with approx 32 instances). This was causing issues for our downstream dependencies, as we were not able to reply within one hour of consuming the record from upstream, and our L2 support was getting numerous pages daily, which were also escalating to us. Then my manager asked me to take a look at what was going on and whether we could improve the throughput somehow.

After almost a week of banging my head against different theories and tons of experiments in QA, I finally figured out that the new Kafka client we were using had a setting where it required acks from all brokers (which had increased to 5 in the new cluster from 3 in the previous one), and this caused a huge increase in blocking time even though we were using an async framework. The async task just waited too long for completion, and once completed could not get the threadpool back due to competition with other threads. Solution: simple. I changed the Kafka producer ack from "ALL" to "ONE", requiring acknowledgement from one broker only. Throughput with the same 32 instances jumped from 250k/min to around 700k.
I ask the intelligent readers of HN: do you think that, based on this, I should be penalized because the change was only one line of config code? Yes, that's all it took to resolve this outstanding issue: one line of config change, one commit, albeit with one week of thinking and experimenting time!
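The one-line fix described above corresponds to the standard Kafka producer `acks` setting (the property name is real; the broker list here is a placeholder). As a sketch:

```properties
# producer.properties (illustrative fragment)
bootstrap.servers=<your-brokers>
# Was: acks=all -- block until every in-sync replica acknowledges the write.
# Now: wait for the partition leader only, trading durability for throughput.
acks=1
```

Note the trade-off: with `acks=1`, a write can be lost if the leader fails before replicating it, so it's only appropriate where throughput matters more than strict durability.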
I've worked on teams where I had colleagues who were incredibly deliberative. They would spend lots of time working with stakeholders to deeply understand their problems, and then produce elegant solutions that made their jobs drastically easier. Small changes with huge payoff. But management, and even most other team members, didn't recognize this. They just saw a slow programmer. Credit tended to go almost exclusively to other teams, when their members started using those tools to radically improve their own processes. The company's dev org largely didn't care about that effect, because it didn't positively influence their own KPIs.
"I don't encourage people to cheat. But if anybody is judging your professional skills by the graph at your GitHub profile, they deserve to see a rich graph"
Those who make a lot of commits are not top performers; they are mostly engineers who are overly concerned with their 'optics'. They are engineers who are good at projecting themselves as 'top performers', but if you actually look at the results of their work a few years down the line, you will see that they are in fact low performers of the worst kind, because they tend to add a lot of unnecessary complexity and technical debt; they don't think things through enough and just implement.
I'm absolutely shocked to learn that people are falling for this.
The developers who make a lot of commits are often more concerned with optics, and they will argue for hours over unimportant tedium while sometimes missing the big, very important ideas (the ones which will have real impact and flow-on effects for years to come). Basically, they can't tell the difference between what is important and what is not. They can't tell bureaucracy apart from value creation. Obsessive focus on commit size and other tedium is a key signal that someone doesn't know what is really important. They tend to be conventional thinkers (driven by peer pressure and peer approval), and their idea of productivity is distorted by false consensus (like the one this article attempts to instigate).