In my experience, setting an expectation that teams need to be respectful and communicative in code reviews, combined with no tolerance for bad behavior, is sufficient to keep the code review process running smoothly.
I'm not a fan of explicit process documents that prescribe communication practices to employees. There is a very specific type of person who loves to be handed a rulebook for how to communicate with others. They prefer to operate within structured environments where they can read the rulebook ahead of time, or perhaps even write the rulebook for others to follow. Having the rules spelled out like this brings an artificial sense of comfort, especially to those who otherwise struggle with organic communication.
For everyone else, the complexity of these systems is unnecessary mental overhead. Do two developers who were already communicating just fine need to adopt this process? More importantly, does the process stop here, or will there be additional sets of communication rules whenever another possible communication ambiguity arises?
The rule follower could neither handle the ambiguity of the communication medium (written text) nor understand that the intent of some comments was not to dispute or criticize his work (cultural and personal communication idiosyncrasies).
The rule maker was just a control freak. I'm glad he's gone.
There's no reason it can't be both intuitive and human while also reflected in process documentation that serves as a shared reference and grounding point.
I find it helps a lot if people's first interactions are not just in text but preferably in person, or at least over video conference. The overall culture of the company plays a big part as well, and so does the experience people bring with them from other companies.
If your only experience is adversarial PRs then even if the new company you joined has a perfectly calm conversation in PRs you may read them as adversarial.
It's the whole "tone is not included in written communication" thing. So whatever expectation I have about how you would actually say the written thing will color everything. If we haven't had any real-life interactions so far, I may substitute my really bad "last 3 companies" PR experience.
That's the best part. Junior hires think their code review went horribly because they had to fix a lot of things. And then their performance review says that they are doing a great job.
People are terminated for a single hot-under-the-collar comment?
"I think this is a boulder of a problem."
"No, no, no this is clearly dust."
"Looks like a pebble to me!"
I get the idea, but IMHO you really need, from the start, some way to measure and quantify whether this is actually improving code reviews in your organization. Do you have metrics showing time spent on reviews, bugs introduced per commit, etc., and are you tracking whether they go up or down?
I can also see a certain path of dysfunction where people get known by their perception of problems. Suddenly Joe doesn't get asked for many reviews because he's "the mountain guy" that always thinks things are blockers. There really needs to be some thought around calibration and review of people's prioritization. Just adding names and terminology doesn't solve that problem.
As this type of process accumulates, it becomes more and more painful for newcomers trying to join the team without breaking the rules. Obviously one requirement about prefixing code review comments isn't much, but in my experience it usually doesn't stop here. People who like creating process and rules tend to want to create a lot of process and rules for everyone else to follow.
The solution isn't to try to design a process to prevent bike-shedding, because you can't. It's to practice a culture that frowns upon bike-shedding and focuses on using time well.
> I can also see a certain path of dysfunction where people get known by their perception of problems. Suddenly Joe doesn't get asked for many reviews because he's "the mountain guy" that always thinks things are blockers.
Joe probably doesn't need to declare everything "mountain" to become the guy that makes mountains out of molehills, and probably doesn't need the formalism to be the guy people don't like to ask for code reviews.
> There really needs to be some thought around calibration and review of people's prioritization.
Usually this is where we'd expect the manager to talk to Joe about being more constructive in code reviews.
I understand the desire to be data driven, and your point about making everything a blocker. If you're at a point where you don't trust the judgment of your colleagues, you should drop everything and address that right away.
But most of the time, operating in good faith works.
At the end of the day it's all just people management problems. You're right that it's a time sink for devs to quibble over and quantify prioritization; that's why I think it's more of a thing for management to assess and review. Trust devs to assess priority on their own, but verify with regular follow-up and review that those assessments are accurate. Business needs will change too, and that might alter priorities: what was dust last week is now an enormous mountain because customer X demands the obscure feature no one cared about before.
The world of open source has already settled on "nit" or "nitpick" for things in a code review that you can live with unchanged.
Everything else is just noise. Either you must change, or not. Anything in between is just harming communication.
- Site: https://conventionalcomments.org/
- Discussion: https://news.ycombinator.com/item?id=23009467
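For anyone unfamiliar, a comment following that convention looks roughly like this (the labels and decorations are from the site above; the specific feedback lines are invented for illustration):

```text
nitpick (non-blocking): Trailing whitespace on this line.
suggestion (non-blocking): Extracting this into a helper would make the loop easier to read.
issue (blocking): This throws when the list is empty; needs a guard before merge.
question: Is this retry count intentional, or left over from debugging?
```

The label tells the author what kind of feedback it is; the decoration tells them whether it has to be resolved before merging.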
Yes, it works very well for Netlify; when I joined the frontend dev team I immediately saw the value of this idea and begged the eng and design team members (who came up with the idea) for permission to write it up.
For those interested, I recently adapted this idea for project prioritization (https://twitter.com/swyx/status/1366849232959807489?s=20) because I was searching for a better alternative to the nonsensical effort-vs-impact estimation systems prevalent in product management backlog grooming.
What are your thoughts on enforcing future action (pebbles)? One thing I try to push for is to never have TODOs in code; instead, make a backlog ticket/issue and link it. Unfortunately those tend to stay de-prioritized.
How often do your pebbles get the desired follow-up?
In my time there (I've since left the company) we didn't get a lot of pebbles. Boulders and sand were more common. Perhaps I should've communicated that distribution more clearly.
I used to work in a 10k-employee mega organization where we had this kind of over-formalization. It was probably invented to prevent hurt feelings.
I then moved to a tiny five-person consultancy where efficiency was what we lived (or died) by. I'll never forget the day my first code review from the CEO was "that is f##king s@#t code!"
I'm working on a new tool to vastly improve human code reviews for teams on GitHub. One of the main things is making sure that each conversation is "resolved" so that there is no disagreement on when a PR is ready to go.
If you're interested in better code review for your GitHub repo, email me at sam@habosa.com and I can show you what I'm working on. We're in a private Alpha but definitely ready for use by real teams in everyday reviews.
Um, we're doing it wrong.
Edit: The down vote was because it was YAML wasn’t it? I know, I know, it’s not real programming.. But it still felt like an accomplishment!
That's not the only type of feedback that's useful. Sometimes I might just want to ask a question, or propose another idea that may or may not be better than what was done. Sometimes I have testing suggestions. These are not problems.
A great example I like to cite is this: You are at a company offsite. All the department leads from around the world are there. The mandatory-to-attend main event is about to begin. The main coordinator goes around to the various groups doing small talk ahead of the event and tells each of them: "If you would be so kind to proceed to the main event hall when you are ready, we are about to start."
Main event starts.
Someone notices that the department leads from Germany aren't there; they're still outside finishing their coffee and patisserie, talking loudly.
They don't understand at all why the coordinator, who is from the UK (or Canada), is upset that they didn't follow her instructions to go to the main event now.
(In case you're wondering: the Germans would have expected you to come up to them and say something like, "The main event is starting in 5 minutes. Make sure to be there; it's mandatory.")
I mean, you can impose all sorts of rules and whatnot, but it's all [dust] until it improves a metric that the company cares about (e.g. review time, # of bugs, etc.).
The consumer of the rating (in this case, the code author) usually needs to know one thing above all else: do I take action or not? A two-point system is the sensible default.
IMO, in a code review, the feedback should be either expected action or not. "I recommend you change this" or "FYI, consider this", aka, "required" or "optional".
Ultimately, as the author, I want to know if I'm being told "please change this" or "just FYI". Details can be haggled if necessary.
The strength of the recommendation isn't relevant unless there's push back, in which case the details can be haggled for the situation at hand. But trying to do that bucketing up front sounds like extra work that's usually unnecessary. As long as the reviewer is pursuing productivity, they can adjust their recommendation as they learn more.
- required, to be implemented before the code is merged
- required, but we can merge the code as is and create a separate action for resolving later (e.g. open a Jira)
- optional, for the author's consideration
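In practice, those three tiers could surface as simple comment prefixes; something like the following (the label names and the feedback itself are illustrative, not a prescribed standard):

```text
required: This query runs once per item; batch it before we merge.
required (defer OK): No retry on timeout; fine to merge if we open a Jira for it.
optional: I'd rename this to parseHeaders for consistency with the rest of the module. Your call.
```

The author can scan for "required" to know exactly what blocks the merge, and everything else is theirs to take or leave.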