Engineers work to deliver business value and commit the code that flawlessly delivers that value the very first time.
When colleagues go to review the code, they are edified in its reading and quickly approve the merge request.
There is no need for feedback because the code was perfect.
I strongly disagree with the premise of this article. I look at code review the same way I'd look at the first draft of an email or a blog post. If you look up any piece of writing advice, one of the biggest things people say is "Don't worry about getting everything perfect the first time". Instead, it's really important to focus on getting something working first, and then you can spend your "second draft" refactoring that base of working code so it's more maintainable and readable. From that perspective, I find code reviews really valuable as a contributor, because they give me an outside perspective on the "first draft" of my code and make it much easier to write the code "correctly" the second time. So I really want to push back on this idea that the "ideal" code review process involves a rubber stamp. That's just entirely foreign to the way I think about code reviews.

When it comes time to actually get approval to check in, I try very hard to get it right. I put on my code reviewer's hat and review my own code, making sure I've run the formatter, linter, pre-submit tests, and code coverage tools. Only then do I actually mail it for review.
My ideal review at that step is a rubber stamp. I have a few colleagues who do the same; it's always a joy to review their changes because they're focused and well-written, with tests, coverage, etc. all in place.
I'll often do the "first draft" as a design doc with an API sketch or a quick prototype that I'll post in chat or email to the team but not actually "mail for review" per the tool. That way I get early feedback that I'm going in the right direction for a code change.
That makes sense for some types of changes, but it's too high-level to address whether the code is actually well-factored, which is one of the main things I'm thinking about in the "second draft" of my code. Nobody's design doc is going to address "what's the right set of helper functions to extract here?" or "what's the right way to structure this logic to make it the most readable?". Design docs are great and super important for bigger changes, but they're often completely orthogonal to "is the actual code that implements this design doc readable and maintainable?", which is the biggest question I'm seeking to answer in code reviews.

This is also completely orthogonal to "do I have tests for this", "have I run the linter", etc. Certainly there are types of readability problems that running the linter or writing tests for your code can catch, but having focused changes with test coverage is only one part of making sure your code is long-term maintainable by the rest of the team.
Yeah, we do a database & API design review before anything is actually coded. If you don't get that right, you might have to basically start all over.
If you get feedback on your code that you already knew about, you wasted someone's time, and I think that's inefficient and a little bit disrespectful.
But I definitely agree that code review is very much a drafting process. It's just that you need to walk a line.
And code reviews are even more collaborative than that. When I'm looking over my friend's email, it's still ultimately their email, and there are types of feedback it would be inappropriate to give. But when you're contributing to a shared codebase, it's more like a novel co-written by multiple authors: obviously you want everybody on the team to be able to have input and feedback into what the "collective style" of the project is going to be.
I get this but I prefer the opposite. By over communicating your preferences, asking questions, leaving “if I follow this correctly” comments, and generally not holding back, you create lots of chances for discussion and knowledge sharing. And you’ll catch more bugs.
This is especially true when a junior dev is involved.
If comments don’t impact the code base or the people involved, then yeah they’re not useful.
Maybe it's worth trading off on going one way or another? Or maybe there's a happy medium somewhere...
This way, if most of those are "very little" then it's up to the author to decide whether to do it or not, and they don't have to wait for you to circle back about it.
Social and power dynamics in async code review can make things very difficult.
It works out pretty well: the marginal cost of fixing the nits is very low when there are other changes to be made, so they get done. If they aren't addressed because there were no actual problems with the code, those things get addressed later (all the members of the team tend to take occasional cleanup passes when they aren't feeling up to the deep work but still want to do something productive-ish). The net effect is the codebase stays reasonably tidy without feeling bogged down in the "bureaucracy" of nitpicking.
Granted I’m probably biased towards high churn SaaS apps, which is what I work on.
Some things are not worth the taps.
- Engineers work to deliver business value and commit the code that flawlessly delivers that value the very first time.
- When colleagues go to review the code, they are edified in its reading and quickly approve the merge request.
- There is no need for feedback because the code was perfect.
- The business then benefits from the code expeditiously.
I think this is a misunderstanding of the many purposes reviews serve. Your goal isn't to produce a pull request that attracts no comments and requires no revision, even though it might feel that way because revisions take extra work. Beyond the pedagogical goal, your goal is to take advantage of the team to build something better, and more efficiently, than you could by yourself.
Reviews that take multiple revisions are only bad if a lot of time is being spent on careless mistakes. Driving meaningful discussion about the structure of the code, and about whether there are edge cases the code missed, is the point of the review; an ideal one helps the team tackle those issues head-on before the code goes into production.
I wish there were a way to automate some of the lower buckets too. Things like "project consistency" (see his "12 buckets") are incredibly important to me personally. IMHO, code should be written so that there is no visible personal style, so that newcomers and new engineers are able to follow what's going on regardless of who wrote it. A human can easily tell whether two pieces of code share the same style; it'd be fantastic if a machine could do the same.
This seems reasonable at first, but depending on what you mean by "visible personal style", it can get you into the weeds fast. You can end up creating a process where no minor, specific improvements or experiments are tolerated unless they can be justified as generalizable improvements for the whole org.
> so that newcomers and new engineers are able to follow what's going on regardless of who wrote it
This isn't actually related to the first thing? There's some overlap sure but for the most part whether it's in camel case or snake doesn't affect my ability to comprehend it.
This isn't about camel case or snake case. It's about the overall structure and look and feel of the code. When you know that public functions are at the top, constants are in a certain place, and where package-level comments are; and when you know that important abstractions and models are defined in one file, and their implementations in separate files; and so on.
Basically, it's the combination of all the tiny things that make code look and feel like they belong.
This is similar to when people say "this is not idiomatic Go". What I'm saying is pretty much "idiomatic for this project/company/product/team". Write it so that it matches the rest of the project. That helps people navigate and read your code, which is the most important thing.
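To make "idiomatic for this project" concrete, here's a hypothetical Python module laid out to follow one such team convention (the module name, constants, and functions are all invented for illustration):

```python
"""payments.py: a hypothetical module following one team's layout convention:
constants first, then the public API, then private helpers at the bottom."""

# Constants grouped at the top, where the convention says to look for them.
MAX_RETRIES = 3

# Public entry points come first so readers see the module's surface area.
def charge(amount_cents):
    """Attempt to charge the amount, retrying on transient failure."""
    for _attempt in range(MAX_RETRIES):
        if _attempt_charge(amount_cents):
            return True
    return False

# Private helpers live below the public surface, prefixed with underscores.
def _attempt_charge(amount_cents):
    # Stub for illustration; a real implementation would call a payment gateway.
    return amount_cents > 0
```

Someone who has internalized the convention can open any module in the project and already know where to look, which is exactly the navigation benefit described above.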
if foo > bar:
    return foo
else:
    return bar
Should just be: return max(foo, bar)
That's a nit because what they wrote was fine, but it's longer than it should be.
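For the record, the two forms really are interchangeable; a quick sketch (the function name is made up):

```python
def bigger(foo, bar):
    # Equivalent to the if/else version above: max returns the larger
    # of its two arguments, and either one when they're equal.
    return max(foo, bar)
```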