I haven't been at a place yet where pair programming is so common or frequent that code reviews are not necessary, so it's definitely a very useful skill. I also see that many people indeed take it as a burden or as hoops to jump through, rather than an inherently useful exercise, as the article outlines.
Mentioned in the "What" section is the matching of the stated goal of the PR with the reality: it's so rare to find anyone who puts any effort into writing good PR messages (not to mention good commit messages, but that's for another time). Linking to tickets, explaining the rationale, and most other context is just skipped because it's assumed (or assumed to be communicated through some other channel). Less than ideal.
In general, "what you put in is what you get out" applies very well both to writing good changes and PRs and to writing good reviews. Applying principles similar to the ones the author listed, I've gotten a lot of good feedback from people saying that my review comments helped them improve, plus the occasional piece of rare, actionable feedback on my own technical know-how.
The purpose of pair programming and the purpose of code review are not quite the same.
Pair programming is a high-context activity where two programmers collaborate closely on building something.
Code review is a tool for simulating the experience of reading code you haven't just worked on with high context, and it seeks to assess whether the code is understandable to someone in that position. Since code is read far more often than it is written, code review applied this way can be a very effective tool for maintaining the long-term health of a code-base.
I enjoy writing good PR messages. They border on too elaborate, with too much detail (better to err on that side though). They often aren't read. "TL;DR" is what I hear, without shame. Similar to how "math sucks!" in high school was popular. Reading/writing documentation? That's for nerds!
Then later come the confusion and the questions, all easily avoided if the PR (Jira ticket, etc...) had been read properly, or at all. It's very disheartening.
I also write pretty detailed PR messages… mostly as comments on specific lines of the PR: either in the code itself if they’re worth keeping around, or as GitHub comments if they’re only of temporary relevance.
But I’m not sad at all if they’re not read. It’s there if people want/need it. If they don’t then it’s easily skipped. I’m also not mad if someone asks me a question that could’ve been answered by reading a comment… different people have different ways of learning and I am happy to show up in whatever mode they find convenient.
If a specific colleague is frequently missing info that was available in comments maybe that’s a growth opportunity for them to up their pace, and I might drop something in their suggestion box, but we’re all learning.
Give the people what they want, then. Why not literally put a "TL;DR" at the top, like an executive summary, or something? The important thing is that all the gory details are present for those who want and need them. For those who may not, such as people not reviewing and approving the PR, a literal TL;DR can be a godsend.
> This is self-explanatory. Usually I start off a review to someone who I don’t know yet with a simple “Hi ” in the first comment. If you spot something missing, don’t express this with a statement such as, “This is incomplete”, but rather, ask: “Is there a reason why this is missing?”
I chalk this up to cultural differences, but to me questions like this come across as anything from avoidant to interrogatory. Questions are good when you genuinely don't know something and are asking for help in reviewing.
Anyway, I got a lot of mileage out of the phrase "you can...", e.g.
You can use <language feature> here like this: [...]
You can get the same result with this: [...] (typically an idiomatic one-liner)
etc.
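For illustration, here's what one of those "you can..." comments might look like on a hypothetical Python snippet (both function names and the scenario are made up for this sketch): the loop is the code under review, and the comprehension is the idiomatic one-liner a reviewer could offer.

```python
# Code as submitted in the PR: builds a list of squares with an explicit loop.
def squares_loop(numbers):
    result = []
    for n in numbers:
        result.append(n * n)
    return result


# Reviewer comment: "You can get the same result with this one-liner."
def squares_comprehension(numbers):
    return [n * n for n in numbers]
```

Phrased this way, the author is free to take it or leave it, and the suggestion doubles as a small lesson in the language's idioms.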
I find people are more receptive to suggestions if they're given a choice in the matter. They'd better explain their reasoning, though, which is also what usually happens.
If there's an issue I write something along the lines of "To prevent X errors, use Y." to make it obvious that this isn't just my taste.
The Suggested Change feature is very useful and usually doesn't require additional commentary - especially for one liners.
Also: nothing beats a call review, especially if you don't know the person.
> Attention to detail
> Try to balance between important and not-so-important changes (experience will guide you). We don’t want to block an important feature or production bug because of a spacing issue!
Create a ticket if it's warranted, assign priority and move on.
> See code review as a conversation where both of you can learn something and help each other.
I genuinely do not like this. I do not want a conversation here and most of the time I do not want help. I did something; if you have a reasonable issue with it, please tell me politely.
However, if you want to help or teach me something, talk to me like a normal person. Do not block my PR because you want to teach me something that should be non-blocking.
Reading your comment, one I entirely agree with, I realize that the term «review» is pretty loaded, putting the «reviewer» in a position of authority by default.
Yet most of the time, the review occurs when work seemed good enough for the requirements, with effort and time spent in accord with real-world microeconomics: the code was submitted by a professional doing their best effort in their own judgment, and you'd better have good reasons for your feedback.
In that light, it's pretty clear why things can go sour pretty quickly if a reviewer is not cautious with their feedback, and also why it's hard to make reviews work when they are enforced.
> most of the time, the review occurs when work seemed good enough for the requirements
Most of the time, a blocking review should contain information that the coder didn’t have, which changes their own perception about the readiness of the code: a bug they didn’t notice, a pattern in the codebase they didn’t know about, etc.
If there is any debate about whether a change is necessary it should probably be a “nit” with an approving review.
The point of “request changes” is to flag branches that everyone can agree need changes.
This reminded me of "Always try to be nice; never fail to be kind." Actually very good advice (and not all advice from that, well, person is good). I mean: sometimes you can't or shouldn't be nice, but it seems that kindness is universally good (or at least the situations where kindness is not recommended are rarer).
The point of the article seems to be to virtue-signal on behalf of code reviewing, and yes, there are benefits, but they are primarily for the organization and the author of the code.
The issue is that as you become more senior and wise to the politics of a tech organization, the more you realize that being the one doing code review offers you very little benefit.
The strongest claims the author writes seem to be that:
> Doing code reviews can be enforced by your company’s CI/CD processes
Yeah ok but that's a problem for the author, not me, because they can't get their code in, and
> The reviewer can learn new tricks and idioms of the language used.
Which I doubt if I'm a senior engineer with the org.
More importantly, doing a code review has a tremendous opportunity cost - all the time and brain power you need to comprehend and consider problems in someone else's code are resources taken away from your own contributions. In modern tech companies, this translates to directly affecting your performance review by lowering your potential impact.
The main solutions I've seen implemented to counter this have been
1) Some quota for code review, which means everyone just looks for the smallest pull requests possible (although this also has the positive effect of encouraging authors to write smaller, more comprehensible PRs)
2) Some measurement in your performance review of your "mentorship", an approximate score that factors in your code reviews and is deeply unsatisfying to justify a promotion with
I was thinking perhaps the right answer would be to consider code reviewers "co-authors" during performance review, but that's not exactly fair either. I don't know, I'm not an engineering manager.
Anyway, for me the article reads as an engineering director or higher trying to virtue-signal their way into convincing mid-high level engineers into doing the dirty, unrewarding work.
The benefits such as learning and knowledge-sharing are trivial and don't need to be listed; if the code review process were rewarded properly, everyone would love doing it and this article wouldn't have needed to be written.
In my experience, the quality and depth of my code reviews were a main contributing element in my last promotion conversation for a senior+ role. It wasn't even a metric I had to scrounge up but something my manager proactively raised to justify approving the promotion along with other criteria. Other team members reported to him that my feedback helped their growth.
However, there is an aspect of being in the right team and company at the right time. If I worked in a team as experienced and knowledgeable as me, my feedback would have less value. I could also easily imagine some managers making the number of reviews an empty metric that had to be tediously gamed, much like green squares on GitHub.
Is it more common than not that senior engineers aren't appropriately rewarded for investing time in code review? Probably, most companies are imperfect in one way or another.
# of code reviews is a metric I see on a lot of performance evaluations for TLs and senior engineers. As is some feedback from their team about the quality of code reviews. Shifting some coding norm through code reviews such that a better practice becomes the default is seen as a substantial form of impact.
Code reviews also have some more intangible benefits. If I am reviewing a substantial portion of the code in my codebase then I am less likely to be completely unaware of some module or design when there is a problem and we are digging in to debug.
> > The reviewer can learn new tricks and idioms of the language used.
>
> Which I doubt if I'm a senior engineer with the org.
Sorry but that sounds like a very small-minded take. I'm in a pretty senior position, doing tons of code reviews as both author and reviewer. And while of course the knowledge mostly flows from me to the junior folks, I still learn from them often.
This profession isn't something with linear knowledge progression where you are strictly more senior and better. Everyone has their own experiences and expertise and things to teach you if you are open to it.
>If a consistent pair programming activity is put in place, there’s no need for a formal Code Review process, where a “stranger” comes in and judges your code, often without any context, after all the work you’ve done and the time you spent.
>The correctness of the code should have been assessed already, using a TDD approach.
Those are not valid concerns.
This is bullshit that's being sold by evangelists.
People doing pair programming can "influence" each other into bad directions or assumptions.
TDD doesn't ensure anything.
TDD is successful by accident - it is good because API design gets done first.
>So... what if the entire team does a group programming session?
That's not "normal pp"? Also, I still believe that influence factor exists.
E.g. a person encourages their pair toward some solution and they agree, but after some actual thinking about it alone, the pair realizes it isn't going to work.
>Works even in code reviews. "That'll be addressed in a follow-up PR" is a pretty common tactic as well, to avoid addressing an esoteric concern.
So, stop letting people do it?
Sure, there's nothing wrong with doing e.g. a whole-codebase typo fix in another PR.
>Ummm... TDD is a method of doing API design first...
I've witnessed a shitton of TDD discussions and nobody even discussed this most important thing.
I worked on a team like this: 2 in Europe, 1 West Coast US, 2 in Australia. We did post-merge code reviews, otherwise we'd go back-and-forth over DAYS trying to merge a single PR. So, write your code with tests and deploy to production; monitor for errors. During post-merge review, if you saw a bug, create a fix + tests PR, tag the author and merge the fix. Generally, these would be 'subtle bugs' or bugs that weren't hit with the current production traffic. If you saw a better approach, create a PR with your suggestion and we'd do the whole back-and-forth thing -- but the feature was deployed, so we could sit back and bike-shed the approaches for days.
I loved it. The turn-around time from feature ideation to launch was very low. There was no 'authority' in reviews, it was 'our code' and we're always trying to make it better.
> Pull Request: the GitHub term used to indicate the differences between a new branch and the main branch, where you can leave your comments
Come on... why do people like this write articles? This isn't just lack of research, this just shows a complete misunderstanding of basic terminology the author uses...
This might be off-topic, but is there any good reason not to do code reviews? All the arguments against code reviews I've read so far are either about fast completion times on features and staying "agile", or about developers not wanting to bother with "non-coding" time.
Devil's advocate: PRs are unnecessary and insufficient (this is the article's introduction).
For each claimed benefit there's another practice that is better suited, usually automatic, and should have been already implemented. For example, for catching regressions, a reasonably complete test suite. For formatting issues, IDEs and autoformat on commit. For latent errors, static analyzers, linters, run-time instrumentation, etc.
Things that cannot be automated, like understanding of the problem, design tradeoffs, etc. should have been discussed with the team before the PR, with or without pair programming.
Now, going back to reality, if all of the above is working, the PR should be a breeze and there wouldn't be much debate about them.
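A minimal sketch of what such an automated gate can look like, assuming a Python stack: the tool names below (`black`, `ruff`, `pytest`) are placeholders for whatever formatter, linter, and test runner the project actually uses, not a prescription.

```python
import subprocess
import sys


def run_checks(commands):
    """Run each check command; the gate passes only if all of them do."""
    ok = True
    for cmd in commands:
        try:
            result = subprocess.run(cmd)
        except FileNotFoundError:
            print(f"check tool not installed: {cmd[0]}", file=sys.stderr)
            ok = False
            continue
        if result.returncode != 0:
            print(f"check failed: {' '.join(cmd)}", file=sys.stderr)
            ok = False
    return ok


# Illustrative only: substitute your own stack's tools.
CHECKS = [
    ["black", "--check", "."],  # formatting, so spacing never comes up in review
    ["ruff", "check", "."],     # lint: latent errors, unused imports, etc.
    ["pytest", "-q"],           # regression suite
]
```

Wired into CI as a required status check, this removes the mechanical part of review; the human part is then free to focus on design and intent.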
I'm curious now: have you ever been in an environment where automated checks have completely or nearly completely replaced code review? Mind sharing a more comprehensive list of what's hiding behind that "etc." in your comment?
One reason for my curiosity is that the author isn't claiming humans can do any of those things you talk about automated tools doing better than the automated tooling does. They're talking about it as a learning opportunity to see how other parts of the codebase work, and as an exercise in communication and teaching, all of which I can see the value in. In other words, your claims and the concerns backing them seem to be orthogonal to what the author has actually written.
I have no strong feelings about PRs (usually). I was acting as devil's advocate to try to answer supremekurt's question in a positive way (others have pointed out the negative ways), and I ended up writing almost exactly what the article says in "Code reviews. Isn't it the past?".
But yes, I've been in environments where automated checks and communication during the previous steps nearly completely replaced the need for PRs. The "learning opportunities" happened, but almost always outside a formal code review. 3 devs + PO, competent, limited scope, same room, whiteboards.
Nothing magical behind that "etc.". Formatter, linter, unit tests, integration tests and code coverage in CI to keep you honest. Load tests launched manually to detect performance regressions, memory leaks and race conditions. Specific tools vary by language and product. You can do much more, but for most products it's probably overkill.
> Things that cannot be automated, like understanding of the problem, design tradeoffs, etc. should have been discussed with the team before the PR, with or without pair programming.
This is the problematic step, because management wants high "velocity" and "impact". I'd say it is not possible for all team members to have an understanding of the problem, much less of the design tradeoffs.
And then there is churn. You have to explain every time there is a new team member that no, the fact that we use var in our Google analytics mess is not a good reason for you to avoid let and const in your code.
Good code reviews are essential for finding bugs, especially those that are hard to spot.
They also ensure that the conceptual integrity of the codebase remains intact.
Moreover, they allow a developer who is less familiar with the codebase to propose a solution, even if they might not fully grasp the entire context of the project or codebase. This is a significant advantage.
However, unfortunately, code reviews can devolve into hazing or become a means to establish power dynamics that shouldn't exist.
We should never assume the reviewer knows everything. A productive code review is more of a collaboration rather than a teacher-student relationship.
How can we automate issues like using a task UUID as a device UUID within the code? Or addressing an n + 1 problem when another API with batch functionality is available?
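The n+1 case is exactly the kind of thing a human reviewer catches and a linter usually can't. A minimal sketch, with a hypothetical client exposing both per-item and batch endpoints (`get_device` / `get_devices` are invented names for illustration), of the change the reviewer would request:

```python
class DeviceAPI:
    """Hypothetical client: both per-item and batch endpoints exist."""

    def __init__(self, devices):
        self._devices = devices
        self.calls = 0  # counts round-trips to the backend

    def get_device(self, device_id):
        self.calls += 1  # one round-trip per item
        return self._devices[device_id]

    def get_devices(self, device_ids):
        self.calls += 1  # one round-trip for all items
        return {d: self._devices[d] for d in device_ids}


def statuses_n_plus_one(api, ids):
    # The pattern the reviewer should flag: N calls for N items.
    return {d: api.get_device(d)["status"] for d in ids}


def statuses_batched(api, ids):
    # The requested change: one call to the batch endpoint.
    return {d: dev["status"] for d, dev in api.get_devices(ids).items()}
```

Both functions return the same result; only a reviewer who knows the batch endpoint exists will notice that the first makes N round-trips where the second makes one.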
I agree with you. I would argue that most if not all team dysfunction arises during reviews. I've worked at places where senior engineers would let bugs like you've mentioned through to production so they could catch the "hard off-by-one problem", to get the + and stick the - on someone else. Even serious bugs. Yep: leads protecting themselves from ambitious seniors, and seniors protecting themselves from juniors who shouldn't be juniors. Vindictive practices, gaslighting non-technical management, etc.
So with any decent system, the weakest link is the people using it. If you have adversarial people on your team - and in my experience, you probably do - then sometimes the system surrounding code review is far more important, i.e. reviewers share the blame (better yet, there is no blame and shit just gets fixed), and hard rules against being an asshole need to be enforced.
I realize you are describing a healthy work environment. Unfortunately, I've yet to find one. I almost made one once, but a wave of clandestine hires destroyed it and it was time to move on. Pissed off the wrong narcissist...
Reviewers who are less knowledgeable, but all too willing to change the code under review into something actually worse than the original?
Happens more than you would've thought, but maybe still not enough to justify the abandonment of the practice... So, in sum, I don't think it's ever worth it to skip the review process.
Skipping code reviews is fundamentally non-agile. It's faster to fix code at review time than post deployment. Joel Spolsky said something similar with respect to technical design too.
In most places I worked recently reviewing PRs was a requirement that came with some sort of certification. The certification usually also imposes some restrictions on how many developers have to review the code, maybe some other aspects too, I didn't really dig deeper into this aspect.
From my understanding of the rationale behind the requirement: it's one more mechanism to ensure better code quality. A mechanism to try to catch errors that aren't "visible" to tests, linters and so on. So, I wouldn't put learning as the central point of PR reviews. Maybe in an organization that cycles through a lot of junior developers that might make sense, but if the team has been working on a project for some years, that doesn't really happen. Not in the quantity that would make a difference.
Now, as to how well it works... In my experience, it can both improve and worsen code quality. I've often been in situations where PR reviews were made into some sort of tug-of-war power game, where a less competent but more senior programmer tried to impose their will on someone less senior, or just bog down the whole process by asking for unnecessary changes to the point of confusing the PR author, which would then lead to messy code that lacked uniformity, with the original intention obscured by the changes the reviewer requested.
Essentially, it all boils down to there being no universally acceptable code quality metrics and no formal process that can vet the code. PRs are kind of like mathematical proofs -- they just have to be "convincing enough" for others to believe you, but there's no systematic / formal approach to accomplish this. So, at best, the rules around PRs are the "etiquette", but will ultimately depend on what the team working on the project believes to be the proper manners... Do they like it if you say "hi" in the PR comments? -- it's up to them. Do they want you to be nice, or do they want you to be brief and to the point? -- again, the team decides. There's no higher authority. None that I've ever heard of or would defer to.
Honestly, I don’t think code reviews make a ton of sense. If the things wrong with the code are so deep as to warrant a “teaching moment”, then whatever help I try to give is probably so fundamental that it is unlikely to find purchase. I think we would be better off encouraging consistent and quick refactoring rather than diligent review. For example: a dev puts up a PR, things are technically fine but there are a lot of architectural issues. Rather than waste time trying to teach them to make the changes, I would refactor the code as part of another story, showing the “correct” way if they care and ensuring the code base is better off even if they don’t. Not to mention code reviews slow down dev and QE soooo much.
I sometimes wonder how many thousands of people have quit / been fired because of Github code reviews.
I'm not saying they're a bad idea, but Github's implementation is too raw and makes it too easy to create conflict.
Only after being burned by them a few dozen times can one learn to use them in an effective and (mostly) drama-free manner.
Some practical improvements I'd like to see:
- invite/enforce reviewers to review their own batch of comments. Are they correct, relevant, kind?
- integrate AI for sentiment analysis. Could you say the same thing in a more constructive way? Are you supporting your claims factually with a snippet of code or link?
Github's UI (and Gitlab and every other system - they all suck) makes a lot of aspects of code-reviews needlessly difficult.
For example, you can submit PRs to non-main branches. Which means you can submit PRs to PR branches. Which should mean that a code-review asking for changes can just...be the changes.
This would work fine, but there's no UI for it in Github, so doing it is useless - no one wants to understand it.
And then if we had this capability, shouldn't the comment UI realistically work the same way? I can already check out a PR as a branch, so why can't I just inline-edit those files with comments and have them uploaded as comments directly? This would be much more effective, since viewing the whole context of a change, rather than whatever git patch decides is the context, is usually necessary to properly appreciate a change.
Again: the functionality doesn't exist (and it's not technically difficult to implement) - but because of that, far too many reviews are just of whatever the change displays as in the UI.
Instead we have this miserable land of self-flagellation - "could you change X to Y?" - style comments, where the original author is needlessly put through extra actions to implement the change, re-push the branch etc. All things which have nothing to do with whether the request is a good idea.
I've been on the receiving end of some of those and honestly I prefer that to people not taking time to review your code because they don't have time, or because it ended up being too complex. Also worse than having zero comments.
I'm pretty sure some of mine were also perceived as adversarial, especially when I had to ask for a re-do.
I generally think of code review like static typing: many people think they don't need it but pretty much every serious large codebase uses it and needs it.
And just like coding with types: code review does not have to be a chore! With good processes and good tools it can be a superpower and a key element of developer productivity.