CR slows things down, often unnecessarily. It is overly time-consuming.
Time is lost on needless explanations.
Nearly impossible when the reviewer pool is one or two people, and a bigger reviewer pool probably won't have the context.
Creates a bias not to ship. "Waiting for CR" can be a big impediment to shipping. Perhaps tonight was the right time for you to get something into prod, not tomorrow, but you're waiting on a CR.
It's an example of process over people (AKA cargo-culting). When, where, how, and why CR is important is situational, and CR "best practices" produce different results in different contexts. Often CR is done blindly, just because it is a best practice. It would be preferable to think deeply about what is important and why, rather than treat it as another hoop we jump through because #AmazonDidIt.
Stratifies the team. The senior/elite/favored people invariably get easier times during review.
CR can get political. Scrum masters & managers scrambling to unblock a team member and get something reviewed. Which is great for that one time, but reveals an utterly broken process otherwise. When a CR is "escalated", what's the chance the reviewer will actually spend the time needed and the back-and-forths to get things "properly" reviewed?
Conducive to nitpicks, which are useless and counter-productive. Time spent on nits is waste, and it's draining to keep coming back to something to tweak it: "is it good enough yet?"
Drive-by reviews without context.
Coming to agreement during CR is difficult. Not everyone is able to recognize conflict, let alone resolve it.
CR is late in the code development process; it's one of the worst times to get feedback. After something has been done, polished, tested, and made production-ready, suddenly someone thinks it should be done differently (which is a lot easier to see once something is already done and working). It's akin to writing code three times: once to get it right, a second time to get it nice, and a third time for whatever the hell the reviewer wants.
Shows lack of trust in team. It is gatekeeping.
Does not scale well. I was once told by a reviewer that I write code faster than they could read it. (At the time I reviewed about 6 PRs every day and sent out about 3. I was doing 80% of the reviews; it was a shit-show. The reviews I received were slow and useless: the team members were focused on trying to deliver their over-stretched projects, and I was too stretched to give proper feedback without working 100 hours a week.)
Better options exist, namely the "ship/show/ask" strategy: https://martinfowler.com/articles/ship-show-ask.html
That branching strategy leaves it to the code author to decide: "Does this comment-typo fix need a review? Does this fix, in code I authored originally and know everything about, really need a blocking review?" In the latter case, if a person can just merge a bunch of precursor refactors right away, they get that knocked out; the PR they then send out for review is clean and interesting, and no time is lost managing branches or pestering someone.
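To make the "ship" vs "ask" split concrete, here is a minimal sketch in a throwaway git repo; the branch and file names are illustrative, and it assumes git 2.28+ for `init -b`:

```shell
set -e
# Build a disposable repo so the demo is self-contained.
dir=$(mktemp -d)
cd "$dir"
git init -q -b main
git config user.email demo@example.com
git config user.name demo

echo "v1" > util.txt
git add util.txt && git commit -qm "initial"

# "Ship": a trivial precursor refactor merges straight to main, no review gate.
git checkout -qb refactor/rename-helpers
echo "v2" > util.txt
git commit -qam "refactor: rename helpers"
git checkout -q main
git merge -q --ff-only refactor/rename-helpers

# "Ask": the substantive change stays on its branch, awaiting a blocking review.
git checkout -qb feature/big-change
echo "feature" > feature.txt
git add feature.txt && git commit -qm "feature: needs review"
git checkout -q main   # main has the refactor, but not the unreviewed feature
```

The point is that the refactor never waits in a queue, so the eventual review of `feature/big-change` sees only the interesting diff.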
A second option to help make things better is to allow "after-merge" reviews too. A few teams I've been on did this enough that we learned what kinds of changes are fine to ship on their own. Besides, there wasn't the bandwidth to review everything anyway, and that was not a bad thing.