Well, code review is not QA. My approval means I'm OK with how the code layer is knitted. It doesn't mean I've tested the changes.
Of course, if there's some logical problem with the implementation (such as the author seemingly failing to handle an edge case), any careful reviewer should catch it.
Still, this is a situation where the oversight can be detected through theoretical code analysis, so to speak: which is what a review is. That's not testing.
"some 2~3 line changes may not need to be tested, but those are the exception, not the rule."
Generally speaking, everything needs to be tested (not necessarily manually), but that's beside the point. Testing is not reviewing, and vice versa.
On my current team, we want most tests to be automated. But the manual regression testing that might be needed is mostly done by other developers. A separate test environment is created for each PR, and the link is posted in the pull request. So when looking at a pull request, we expect people to do both a code review and QA.
In my team, approving a PR means you approve the code going to production. So you need to consider both code quality and whatever QA is needed. But that's just our way to do it; other teams do things differently. And I think that's OK, there is no silver bullet.
Changes from this type of review are usually fine details (“use a constant for this”, “consider this edge case”, “prefer this style…”) that can be done easily by the reviewer with no context.
If you’re lucky, you might get meaningful feedback (“consider using this approach instead…”), but many people just use code reviews as a gateway to merging code (1).
This is the pragmatic approach: just trust the other developers to do their jobs and do a light pass to ensure that everyone is aligned on approach and style.
..but, that’s not what you were tasked with.
You were asked to review the code, not the syntax.
You cannot review code you do not understand.
I’m sorry, but your mental parser does not run code. It does not render UI, update databases, or generate concurrent race conditions. You may be able to approximate some of those, but most people can only do all of those things by actually running the code.
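To make the race-condition point concrete, here is a toy Python sketch (the names and structure are mine, purely illustrative). Rather than relying on a real, nondeterministic race, it replays by hand the interleaving a reader's "mental parser" tends to miss: two increments that each do a stale read-modify-write, so one update is lost.

```python
# `counter += 1` is not atomic: it is a read, an add, and a write-back,
# and two threads can interleave those steps. We split the increment
# into its two halves and replay one bad interleaving deterministically.

def unsafe_increment_steps(shared):
    """Split `shared['n'] += 1` into its read half and its write half."""
    read = shared["n"]              # step 1: read the current value
    def write_back():
        shared["n"] = read + 1      # step 2: write back a possibly stale value
    return write_back

shared = {"n": 0}

# Thread A reads 0; thread B also reads 0 before A writes back.
a_write = unsafe_increment_steps(shared)
b_write = unsafe_increment_steps(shared)
a_write()   # A writes 1
b_write()   # B also writes 1 -- A's update is silently lost

print(shared["n"])  # 1, not the 2 that "running it in your head" suggests
```

Code like this reads as obviously correct in a diff; the lost update only shows up under execution with real concurrency.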
So… you may get some value from looking at code; but I question, in almost all cases, whether teams derive significant value by doing so.
[1] See https://blog.jetbrains.com/upsource/2017/01/18/code-review-a..., etc.; Google “code review as gateway”.
I'm 100% on board with this. However, I understand where OP's mindset comes from. A few years ago I was working at a development shop on a fairly large project, where we had little to no automated tests. We frequently had PRs that would break main features. At some point developers were required to perform smoke tests alongside code reviews.
> Well, code review is not QA. My approval means I'm OK
> with how the code layer is knitted. It doesn't mean
> I've tested the changes.
You will be able to make more substantive comments if you pull down their code and test it. It will improve the quality of your review, since it can be situated in the context of how the program works instead of how the code is abstracted. Your reviews should consider the practical workings of a program, and not just consistency with patterns or superficial bugs. I know that more experienced people are able to notice bugs or edge cases thanks to long experience and knowledge of prior art, but noticing incorrectly designed or implemented logic/behaviour is much easier if your review includes executing their code.
If you leave this to some later QA stage or QA team, then you're just making your reviews less effective and lengthening the feedback loop.
1. The submitter approves of the code being merged into master. 2. The submitter has tested the happy path and everything seems to work.
If you want to enforce standards, automate it (there are multiple highly configurable tools for that). If you want good solutions, pair program. If you want to maintain a good codebase, review it periodically (the codebase, not some changes to it contained in a pull request) and refactor. Use TDD/BDD to ensure the app still works.
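As a sketch of what "automate it" can mean in practice: the mechanical checks take only a few lines to script. (The line limit and the naming rule below are made-up examples; a real team would reach for an off-the-shelf linter and formatter rather than roll their own.)

```python
import re

MAX_LINE = 100  # hypothetical team limit, purely illustrative
FUNC_DEF = re.compile(r"^\s*def\s+([A-Za-z_]\w*)\s*\(")

def check_style(source: str) -> list:
    """Return (line_number, message) pairs for mechanical style violations."""
    problems = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE:
            problems.append((lineno, f"line longer than {MAX_LINE} chars"))
        m = FUNC_DEF.match(line)
        if m and not re.fullmatch(r"[a-z_][a-z0-9_]*", m.group(1)):
            problems.append((lineno, f"function {m.group(1)!r} is not snake_case"))
    return problems

sample = "def DoThing():\n    pass\n"
print(check_style(sample))  # [(1, "function 'DoThing' is not snake_case")]
```

Wire something like this (or better, an existing tool) into CI, and humans never need to comment on indentation or naming again.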
This implies the need for smaller pull requests maybe?
If work is broken into smaller chunks, addressing peer feedback isn't as costly, because there's time to back out of poor decisions if the peers get alerted to them early on.
Changing the course once the implementer has indeed invested lots of time and effort into a flawed solution is ineffective, true; but that might just mean the feedback was asked for too late.
"If you want to enforce standards, automate it"
Agreed, as long as you're thinking of standards as indentation, naming rules, etc. All the simple stuff can be done with code inspections, and employing humans to enforce these "mechanical" standards is a waste of time. No argument there!
However, there are higher-level standards that can't really be automated, or at least not effectively, e.g. various architectural conventions and the like.
"If you want to maintain good codebase, review it periodically (codebase, not some changes to it contained in pull request) and refactor"
Well, I can see how that could work in a small, lean project... but in some multi-module behemoth with millions of LOC and history spanning back years?
"The code was already written. Time/money spent" rings much more strongly in that scenario than in reference to "regular" code review.
This makes me feel like perhaps pull requests shouldn't always pull/merge the changes directly, but instead there should be something like the GitLab "Start a review" functionality, where you can queue up code comments, before submitting all of them at once, but for pull requests.
As a Jira analogy, imagine the totality of the changes that you want to merge as an issue, but the individual steps/smaller requests as sub-tasks. The entire task would only be considered done after all of the sub-tasks are done, similarly, the changes should only be merged after all of the sub-requests have been successfully reviewed. Thus, development could happen more iteratively, with feedback sooner.
Otherwise you end up with something like the following being separate requests:
- FEATURE-123 add data models for the feature
- FEATURE-123 add service logic for the feature
- FEATURE-123 add logging and validations for the feature
- FEATURE-123 add unit tests for the feature
- FEATURE-123 add REST endpoints for the feature
- FEATURE-123 add front end code to forms
- FEATURE-123 add front end logic to use the REST endpoints
- FEATURE-123 add integration tests for headless browser
You don't actually want these changes to end up in the main branch before they are done. Alone, they provide no actual value and would only confuse people. Furthermore, if changes are needed to the data model after it has already been merged, you'd also need to do those within a later merge request, or one in the middle, which would just pollute the history and serve as spam, instead of being able to add more commits to the relevant sub-request. To me, all of this seems like a problem that stems from developers viewing code review as something to be done only at the moment when you want to merge the feature into your main branch. Even the tooling we use is built with this in mind. It feels like there are better alternatives.
EDIT: So, you'd need something like the following:
- (WIP, 3/8 reviewed) FEATURE-123 add new feature // feature-123 branch, cannot be merged into main yet, review the below first
- (reviewed) add data models for the feature // feature-123-data-models branch, will be merged into previous
- (reviewed) add service logic for the feature // feature-123-service-logic branch, will be merged into previous
- (reviewed) add logging and validations for the feature // feature-123-logging-and-validations branch, will be merged into previous
- (in review, 9/13 comments resolved) add unit tests // feature-123-unit-tests branch, will be merged into previous
- (registered) add REST endpoints for the feature // feature-123-rest-endpoints branch, will be merged into previous
- (registered) add front end code to forms // feature-123-front-end-code branch, will be merged into previous
- (registered) add front end logic to use the REST endpoints // feature-123-front-end-rest branch, will be merged into previous
- (registered) add integration tests for headless browser // feature-123-integration-tests branch, will be merged into previous
(Personally I think merges make the most sense, but some people prefer rebasing; it's just an example.) You can technically do the above already, by having a main feature branch and functionality-specific feature branches, all of which get merged into the main feature branch before it is merged into the main branch once the feature is complete. It's just that people usually operate with the view that "one merge request == one feature", which I don't really think should be the case.
Code reviews are, in my opinion, still a good tool. Review works well in other industries as well. You want to bounce your work off other people before pushing it further.
Where it goes wrong is people, so it is a people problem, not a tool problem. If you do not have good people, you won't fix that with code reviews.
It is the same with agile: managers tend to think they can just follow some agile coach and hire anyone onto the team. But the truth is, I see bad teams doing badly with or without agile, and good teams going strong, agile or not.
I don't agree 100%, but I am on board with the sentiment. I've seen systemic problems of under-communication before review, which result in me looking at a PR, seeing a fundamental issue, and not knowing how to tackle it without hurting feelings. I don't think eliminating reviews is the solution, but I think it's much less of a critical step than it's often made out to be. My take is that if reviews are actually preventing major issues with released code, they almost certainly include work that should've happened much earlier in the development process.
If management objects, you change employers.
Quality is not something the customer or your manager is responsible for. It's entirely on us as professional developers.
That said - code review is not the only way to ensure quality. Do what works. Pairing generally works even better for getting quality review time in.
I'm building CodeApprove (https://codeapprove.com) to be a much more powerful, enjoyable, and efficient code review tool for teams on GitHub. If you're interested in trying our alpha, email me.
Doing the reviews myself:
A lot of simple code splintered across 100 files -- I, and everyone else I know, only review the diffs.
if (THIS_CORNER_CASE) { do this; do that; }; XML configs relating to this or that business case.
There's nothing to review, really; I don't know the business case, and the code looks OK-ish.
I really don't care about unused imports (unless you make a special commit called "code cleanup" that pollutes history, in which case I hate you for it), lines too long, or whether you used nested ifs or early returns, unless it's really nasty code (which I very rarely stumble upon, usually written by some coding-style nazi in his youth).
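For what it's worth, things like unused imports are exactly what a machine should flag instead of a reviewer. A toy Python sketch using the standard `ast` module shows how little it takes (illustrative only; a real linter handles far more cases):

```python
import ast

def unused_imports(source: str) -> list:
    """Toy check: imported names that are never referenced elsewhere."""
    tree = ast.parse(source)
    imported, used = {}, set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # `import a.b` binds the name `a`
                imported[alias.asname or alias.name.split(".")[0]] = node.lineno
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported[alias.asname or alias.name] = node.lineno
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return sorted(name for name in imported if name not in used)

print(unused_imports("import os\nimport sys\nprint(sys.argv)\n"))  # ['os']
```

Run as a CI step, this kind of check frees review time for the business logic that actually needs human eyes.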
Feedback for my own/others code:
- this or that "coding style" issue from some coding-style nazi; almost fine, it improved my inner compiler/code formatter
- "don't use this language feature, because I don't like it" from the highly opinionated nutjob who leaves the company in a couple of months anyway to join a smaller company where his genius is better appreciated
- "why did you do this and that, you should have done..." -- half an hour later, in person or over the phone: "oh, yeah, I guess that actually makes sense"
- "whaaaaaat, you expect me to review this monster commit? Why didn't you break it into multiple smaller commits?" -- because I care about the convenience of the person who is actually interested in investigating a feature/bug five years from now, not about the lazy code reviewer now
- "How could this security bug have slipped past us? We have CODE REVIEWS, don't we?"
Doesn't seem like such a big problem to make a PR with multiple commits, and then squash and merge into 1 commit after the review.
I fundamentally disagree here. If you're reviewing code, the code that is pushed should be what was reviewed, not a draft of it.
- "You are right, I completely forgot about that. Thank you for catching this; I'm changing the code and will ask you to review it again after the fix."