
Navigating Code Quality: My Experience with MCR and Pair Programming



In this blog, I explore the pivotal role of code readability in software development, emphasizing its influence on maintenance ease, collaborative efficiency, and long-term project sustainability.

I delve into the inherent challenges of assessing code readability and advocate for peer feedback as a robust solution, particularly through Modern Code Review (MCR) and Pair Programming.

The blog also examines the limitations of automatic feedback tools, compares MCR and Pair Programming in various contexts, and concludes with insights into making informed architectural decisions, underlining the importance of team alignment in such processes.

Code Readability Is Essential

A significant majority (83.8%) of developers consider code readability essential, and poor readability impedes understanding and evolving the code, as it directly affects the maintainability and extensibility of software. Here are the four most important reasons why code readability is crucial; a short before-and-after example follows the list:

  • Ease of Maintenance: Readable code is easier to understand and modify. It reduces the time and effort required for developers to decipher and maintain the codebase, facilitating quicker updates and bug fixes.
  • Team Collaboration Efficiency: In team environments, readable code ensures that everyone can easily understand and contribute to the project. This fosters better collaboration and knowledge sharing, as well as reducing onboarding time for new team members.
  • Long-Term Sustainability: Code that is easy to read and understand stands the test of time. As technologies and teams change, readable code remains approachable and adaptable, ensuring the long-term viability of the software.
  • Error Reduction: Readable code helps in identifying and avoiding errors. When code is clear and logical, it’s easier to spot mistakes, inconsistencies, and potential issues, leading to more robust and reliable software.
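
To make this tangible, here is a small, made-up Python example (the function, the data shape, and all names are purely illustrative, not from any particular codebase). Both versions express the same logic, but only one of them is pleasant to maintain, review, and extend.

```python
# Hard to read: cryptic names, magic numbers, and no hint of intent.
def calc(d, t):
    r = []
    for x in d:
        if x["qty"] > 30 and x["cat"] == t:
            r.append((x["id"], x["qty"] * 0.9))
    return r


# Readable: intention-revealing names and explicit constants.
BULK_QUANTITY_THRESHOLD = 30
BULK_DISCOUNT_RATE = 0.9


def discounted_bulk_orders(orders, category):
    """Return (order id, discounted quantity) for bulk orders in one category."""
    return [
        (order["id"], order["qty"] * BULK_DISCOUNT_RATE)
        for order in orders
        if order["qty"] > BULK_QUANTITY_THRESHOLD and order["cat"] == category
    ]
```

A reviewer can verify the second version against the requirement in seconds; the first one forces them to reverse-engineer what 30 and 0.9 mean, which is exactly where maintenance effort and errors creep in.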

Challenges in Assessing Code Readability

I think assessing the readability and maintainability of my own code, and of code in general, is difficult for the following reasons:

  1. Familiarity Bias: Developers may overlook complexities in their own code due to over-familiarity, assuming it’s clear to others as it is to them.

  2. Assumed Context: Developers’ deep understanding of their code’s context can blind them to areas needing more explanation for those unfamiliar with the project.

  3. Cognitive Bias: Developers often view their own code positively, leading to overlooked flaws or unjustified rationalizations of complex code parts.

Okay, I’m guilty.

Peer Feedback: A Solution for Assessing Code Quality

In 2016, the study A study of the quality-impacting practices of modern code review at Sony Mobile (Shimagaki, Kamei, McIntosh, Hassan & Ubayashi, 2016), published at the IEEE/ACM International Conference on Software Engineering Companion (ICSE-C), found that a higher rate of self-verification is associated with lower software quality. Other studies, such as the Google case study (Sadowski, Söderberg, Church, Sipko & Bacchelli, 2018) and the Microsoft study (Bosu, Greiler & Bird, 2015), have found that code review improves code quality.

A 2023 study (Dantas, Rocha & Maia, 2023) investigated specifically how developers improve code readability in pull requests and demonstrated the positive impact of the Modern Code Review (MCR) process.

I didn’t find an evidence-based explanation, but it’s easy to find plausible reasons such as:

  1. Fresh Perspective: Peer reviews provide new insights, revealing overlooked issues in code and enhancing its readability and maintainability.

  2. Diverse Experience and Expertise: Peers contribute diverse skills and experiences, offering comprehensive feedback and alternative problem-solving approaches.

  3. Knowledge Sharing and Learning: Peer reviews are educational, helping developers learn and adopt new coding practices, thus improving overall team skills and code quality.

  4. Raising the Bar for Quality: The awareness that peers will review one’s code often motivates a developer to invest additional effort and attention to detail. This drive stems from a natural desire to be perceived positively by colleagues and to avoid being seen as a less competent developer. Consequently, this mindset leads to higher quality code submissions.

Does this resonate with you? I bet it does ;-)

Comparing Peer Feedback Methods in Coding

Understanding the importance of peer feedback is one thing, but knowing how to effectively obtain it is another. To navigate this, let’s delve into the two most prevalent approaches:

  1. Modern Code Review (MCR): This method is synonymous with the widely-used, tool-supported Merge/Pull Request process, which involves asynchronous commenting and resolution.

  2. Pair Programming: A well-known technique where two programmers work together at one workstation, collaboratively coding and reviewing in real-time.

Although both methods are designed to improve code quality and enhance team collaboration, their practical application can be quite intricate and varied.

Why not Automatic Feedback? The Role of Static Code Analysis Tools

Considering another option, static code analysis tools emerge as potential aids. They are adept at automatically generating recommendations for code improvements.

However, when it comes to assessing code readability, these tools have limitations. Dantas, Rocha & Maia (2023) found a very low correlation between automatic code feedback and developer-suggested improvements.

They say:

However, there is still a lack of production-ready tools and new readability models that effectively categorize changes in code readability, motivating the need for further research on how developers improve the readability of their code in real-world projects.

This may change in the near future due to the rapid improvement of AI-supported tools.
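
To make concrete why automatic feedback only gets you so far: most readability checks boil down to crude structural metrics. The sketch below is my own illustration, not one of the tools or models from the cited studies, and the thresholds are arbitrary. It flags long functions and deep nesting using Python's standard ast module, but it cannot tell whether a name is meaningful or whether the logic is easy to follow.

```python
import ast
import sys

MAX_FUNCTION_LINES = 40   # illustrative thresholds, not an established standard
MAX_NESTING_DEPTH = 3


def nesting_depth(node, depth=0):
    """Return the deepest nesting of control-flow statements under node."""
    nested = (ast.If, ast.For, ast.While, ast.With, ast.Try)
    child_depths = [
        nesting_depth(child, depth + isinstance(child, nested))
        for child in ast.iter_child_nodes(node)
    ]
    return max(child_depths, default=depth)


def check_readability(source: str, filename: str = "<source>"):
    """Print naive readability warnings for every function in the source."""
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            depth = nesting_depth(node)
            if length > MAX_FUNCTION_LINES:
                print(f"{filename}:{node.lineno} {node.name} is {length} lines long")
            if depth > MAX_NESTING_DEPTH:
                print(f"{filename}:{node.lineno} {node.name} nests {depth} levels deep")


if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as handle:
            check_readability(handle.read(), path)
```

Heuristics like these catch obvious outliers, but the improvements developers actually suggest in reviews, such as better names and clearer intent, are exactly what they miss, which fits the low correlation reported above.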

Comparing Modern Code Review and Pair Programming for Code Quality Feedback

This section is dedicated to examining how Modern Code Review (MCR) and Pair Programming contribute to enhancing code quality. While acknowledging the varied benefits of Pair Programming, our focus here will be solely on its impact on code quality.

Numerous studies and articles have explored these two methodologies, offering insights on the most effective scenarios for each. The aim is not to advocate for the exclusive use of one over the other, but rather to provide a balanced perspective on when and how each method can be optimally utilized.

Let’s first clarify what Modern Code Review and Pair Programming are.

MCR: Modern Code Review

The following diagram describes the Modern Code Review process in its simplest form.

```mermaid
graph TB
    A[Developer writes code] --> B{Code Review Request}
    B --> C[Peer Reviews Code]
    C --> D{Is Code OK?}
    D -- Yes --> E[Code Merged]
    D -- No --> F[Feedback Given]
    F --> A
    E --> G[Deployment/Next Steps]
```

  1. A developer writes code and then requests a code review.
  2. A peer reviews the code.
  3. The reviewer(s) decide whether the code is satisfactory.
  4. If yes, the code is merged and moves to deployment or next steps.
  5. If not, feedback is given, and the developer goes back to improving the code.

This is essentially what you already know as GitLab Merge Requests or GitHub Pull Requests.
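
As a rough sketch of how the tool-supported side of this loop looks in practice, here is how the request-review and give-feedback steps map onto the GitHub REST API (the repository, branch names, PR title, and token variable are placeholders; GitLab exposes equivalent endpoints for merge requests):

```python
import os
import requests

API = "https://api.github.com"
REPO = "acme/example-service"          # placeholder repository
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# Steps 1-2: the developer opens a pull request, i.e. requests a review.
pr = requests.post(
    f"{API}/repos/{REPO}/pulls",
    headers=HEADERS,
    json={"title": "Extract pricing rules", "head": "feature/pricing", "base": "main"},
).json()

# Steps 3-5: the reviewer approves or requests changes with written feedback.
requests.post(
    f"{API}/repos/{REPO}/pulls/{pr['number']}/reviews",
    headers=HEADERS,
    json={"event": "REQUEST_CHANGES", "body": "Please split this large change."},
)
```

In day-to-day work you would click through the same steps in the web UI; the point is that every step of the diagram is mediated by the tool and can happen asynchronously.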

Fundamental Nature: Tool-Supported, Asynchronous Feedback

The fundamental nature of the Modern Code Review process, however, is its tool-supported, asynchronous format: systematic, digital feedback with definitive approval or rejection steps, ensuring quality control in a flexible, efficient manner.

Pair Programming

I think it shouldn’t be necessary to explain what Pair Programming is, but I want to point out that its adaptability extends beyond common perceptions.

Often implemented remotely, it’s a partnership where developers collaborate on the same project, seamlessly switching between the roles of ‘driver’ and ‘navigator’. This fluidity sparks enhanced creativity and problem-solving.

It’s more than improving code quality through reviews; it nurtures a dynamic learning atmosphere, promoting knowledge sharing and new ideas.

Essentially, Pair Programming is a union of talents, jointly navigating the complexities of software development.

Core Essence: Direct, Real-Time Discussions

Regardless of how Pair Programming is implemented, its core essence remains the direct, real-time discussion about the code between two developers.

This continuous dialogue is pivotal, allowing for immediate feedback, rapid iteration, and the amalgamation of diverse perspectives. Such interactions not only solve coding problems more efficiently but also strengthen the collaborative bond, making the development process more engaging and effective.

Evaluating the Effectiveness of Pair Programming and Modern Code Review in Diverse Environments

Research such as The effect of pair programming on code maintainability (Nawahdah & Jaradat, 2022) and the Google study (Sadowski et al., 2018) has demonstrated that both Pair Programming and Modern Code Review (MCR) significantly enhance code quality.

However, their effectiveness varies depending on the environment. The impact of MCR is influenced by factors like the size of the change set, the number of reviewers participating in the process, and the extent of refactoring changes implemented.

In contrast, Pair Programming faces challenges in distributed settings, and social compatibility among team members is also a critical factor for its success.

I’ll delve deeper into this topic and then introduce a hybrid model known as Pair Review. Following that, I’ll outline a basic framework to determine the appropriate context for each approach.

MCR: Patch Size

The study by Santos & Nunes (2017) found that comment density in code reviews decreases up to a patch size of 600 lines of code (LOC). Beyond this point, the decrease in comment density is not significant.

Their conclusion was as follows:

The patch size negatively affects all outcomes of code review that we consider as an indication of effectiveness. Reviewers are less engaged and provide less feedback. Moreover, the duration is not linearly proportional to the patch size, which may affect the quality of code review.

In essence, Modern Code Review (MCR) is more effective with smaller merge requests. It appears that merge requests in the range of 100-200 LOC hit an optimal balance.

This finding might resonate with your experience. There’s often a noticeable drop in motivation to conduct a thorough review when faced with a “large” merge request containing numerous lines of code.
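
One practical way to act on this is to check the size of a change before requesting a review. The sketch below is only an illustration: the 200-LOC threshold reflects the rough sweet spot mentioned above, and the target branch name is an assumption.

```python
import subprocess
import sys

MAX_REVIEWABLE_LOC = 200  # rough upper bound suggested by the findings above


def changed_loc(target_branch: str = "main") -> int:
    """Sum of added and removed lines between HEAD and the target branch."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{target_branch}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, removed, _path = line.split("\t", 2)
        if added != "-" and removed != "-":   # binary files report "-"
            total += int(added) + int(removed)
    return total


if __name__ == "__main__":
    loc = changed_loc()
    if loc > MAX_REVIEWABLE_LOC:
        print(f"Change touches {loc} lines; consider splitting it or booking a Pair Review.")
        sys.exit(1)
    print(f"Change touches {loc} lines; fine for an asynchronous review.")
```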

MCR: Refactoring Change Sets

The research described in AlOmar, Chouchen, Mkaouer & Ouni (2022) noted that:

reviewing refactoring changes tends to be more time-consuming compared to non-refactoring changes.

Additionally, it was found that refactoring changes often necessitate increased communication for clarification purposes.

MCR: Work On Code Hot Spots

The research outlined in Paixao & Maia (2019) found that in approximately 75% of code reviews, developers are required to perform rebasing. Notably, around 34% of these rebasing operations negatively impact the review process by invalidating the changes under review.

This conclusion was drawn from an analysis of 28,808 code reviews across 11 different systems. Given the extensive scope of this study, it’s reasonable to assume that similar patterns might be observed in your own projects.

Moreover, the study suggests that focusing on areas of code that are frequently modified or ‘hot spots’ can lead to increased chances of delayed code reviews, primarily due to the need for rebasing operations.
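
A quick way to see whether you are about to touch such a hot spot is to rank files by how often they have changed recently. This is only a sketch; the 90-day window and the top-10 cut-off are arbitrary choices.

```python
import subprocess
from collections import Counter


def hot_spots(since: str = "90 days ago", top: int = 10) -> list[tuple[str, int]]:
    """Return the most frequently changed files since the given date."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each commit contributes a block of file paths; blank lines separate commits.
    counts = Counter(path for path in out.splitlines() if path)
    return counts.most_common(top)


if __name__ == "__main__":
    for path, changes in hot_spots():
        print(f"{changes:4d}  {path}")
```

If your merge request touches the files at the top of this list, expect rebases during the review and consider synchronous feedback (Pair Programming or a Pair Review) instead of a long-running asynchronous MCR.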

MCR: Number of Reviewers

The Google study (Sadowski, Söderberg, Church, Sipko & Bacchelli, 2018) highlights:

The optimal number of reviewers has been controversial among researchers, even in the deeply investigated code inspections.

In their research, Rigby & Bird (2013) discovered that having two reviewers is optimal. However, an important observation from Google’s study suggests that for smaller merge requests, one reviewer often suffices:

Code review at Google has converged to a process with markedly quicker reviews and smaller changes, compared to the other projects previously investigated. Moreover, one reviewer is often deemed as sufficient, compared to two in the other projects.

From these findings, we can infer that increasing the number of reviewers doesn’t necessarily enhance the quality and may actually prolong the review process. While there are scenarios where more reviews could enhance code quality, this benefit is counterbalanced by increased time and resource investment.

Pair Programming in Distributed Teams

Pair programming enhances code quality, fosters collaborative learning, and accelerates problem-solving.

However, in distributed teams, it faces unique challenges:

  • coordinating across time zones,
  • reliance on digital communication tools,
  • potential technical issues,
  • and the need for effective remote collaboration strategies that maintain the benefits of this approach while mitigating its drawbacks.

Pair Programming: The Social Aspect

In my experience as a programmer, I’ve encountered talented engineers who weren’t suited to Pair Programming, while others seemed to seamlessly complete each other’s thoughts.

Research such as Nawahdah & Jaradat (2022) has noted:

the effect of pair programming on maintainability will be more evident when pairs are systematically formed

Furthermore, it was observed:

big difference in the results of some pairs, perhaps the reason for this was the differences of their expertise.

This suggests that team leaders should not mandate Pair Programming universally. It’s crucial to consider the social dynamics and compatibility within the team before implementing this practice.

Pair Review: The Hybrid Model

I prefer to fully develop a mature feature after initially discussing its broad class structure. Post-development, I engage in what’s known as a Pair Review with a colleague. During this process, I explain the context and the changes made, and we collaboratively explore enhancements. This method is particularly effective because it fosters immediate feedback and constructive dialogue, leading to a deeper understanding of the code and improved quality.

I find this approach beneficial as it merges the advantages of personal code ownership with the insights gained from teamwork. Pair Review not only elevates the quality of our code but also enriches our collective knowledge.

Additionally, this strategy minimizes the need for frequent alignment meetings, instead capitalizing on real-time feedback when it’s most impactful.

When to Use MCR, Pair Programming, and Pair Review

When aiming to swiftly deliver a feature with reasonable quality, without obsessing over perfection (as the code might soon be revised), the choice between Modern Code Review (MCR), Pair Programming, and Pair Review becomes crucial. Based on various studies, MCR tends to delay feature shipping, while Pair Programming, though beneficial, can add some implementation overhead and isn’t always feasible.

Considering these trade-offs and key findings, the following guidelines can be helpful:

Opt for Pair Programming in Early Stage Projects

In early project phases, where larger merge requests (MRs) and frequent code rewrites are common, Pair Programming is especially advantageous. Larger patches can diminish the effectiveness of MCR, and constant refactoring tends to increase review iterations.

Implement an MCR Process with a Single Reviewer

As a foundational process applicable to all projects, having one reviewer in MCR is generally sufficient. This approach is efficient even in early-stage projects, where smaller MRs can be quickly reviewed asynchronously. Google’s practices suggest that one reviewer is often adequate in many scenarios.
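
If your code is hosted on GitHub, this baseline can be encoded in branch protection so it does not rely on discipline alone. The sketch below is an assumption-laden example (repository and branch names are placeholders, and the surrounding protection settings are kept deliberately minimal); GitLab offers an equivalent approval-rules setting.

```python
import os
import requests

REPO = "acme/example-service"   # placeholder repository
BRANCH = "main"                 # placeholder branch

response = requests.put(
    f"https://api.github.com/repos/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={
        # Require exactly one approving review before merging.
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "required_status_checks": None,   # leave CI requirements untouched here
        "enforce_admins": False,
        "restrictions": None,
    },
)
response.raise_for_status()
```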

Utilize Pair Review for Larger Patch Sizes

MCR efficiency drops with increasing patch sizes. In such cases, Pair Review is an effective method to address this issue, offering a more hands-on and collaborative approach to handling larger changes.

Limit Refactorings in MCR; Prefer Pair Reviews for Refactoring

Refactorings can potentially delay MR approvals. To mitigate this, it’s advisable to conduct refactorings during Pair Programming sessions or through Pair Reviews, especially when refactorings are part of your MR. This approach ensures quicker feedback and avoids the delays often associated with refactorings in MCR.

Conclusion

In summary, while MCR is a solid base process, the use of Pair Programming and Pair Reviews should be strategically employed based on project stage, patch size, and the nature of the work, such as refactorings, to optimize efficiency and code quality.

Appendix: What About Architectural Changes?

It’s crucial to align architectural changes with the entire development team before diving into code. I believe in making these decisions through discussions with the Product Owner, focusing on requirements, risk mitigation, and future planning. While it’s essential to consider foreseeable requirements, I’m cautious to avoid over-engineering.

I adhere to the principle: “What’s not on the whiteboard, did not happen,” leveraging visual tools for clarity. After these discussions, we often engage in MOB-Programming sessions to sketch out the basic program structure. This approach helps ensure everyone is on the same page, significantly reducing the risks of after-the-fact debates or disagreements.


References

Dantas, Rocha & Maia (2023). How do developers improve code readability? An empirical study of pull requests. Retrieved from https://arxiv.org/abs/2309.02594
Bosu, Greiler & Bird (2015). Characteristics of useful code reviews: An empirical study at Microsoft. IEEE Press. Retrieved from https://ieeexplore.ieee.org/document/7180075
Santos & Nunes (2017). Investigating the effectiveness of peer code review in distributed software development. Association for Computing Machinery. https://doi.org/10.1145/3131151.3131161
Sadowski, Söderberg, Church, Sipko & Bacchelli (2018). Modern code review: A case study at Google. Retrieved from https://research.google/pubs/modern-code-review-a-case-study-at-google/
Sharovatov (2023). Stop doing code reviews and try these alternatives. Retrieved from https://qase.io/blog/code-review-alternatives/
Sharovatov (2023). When automated code reviews work — and when they don’t. Retrieved from https://qase.io/blog/automated-code-review/
Paixao & Maia (2019). Rebasing in code review considered harmful: A large-scale empirical investigation. https://doi.org/10.1109/SCAM.2019.00014
Shimagaki, Kamei, McIntosh, Hassan & Ubayashi (2016). A study of the quality-impacting practices of modern code review at Sony Mobile. 2016 IEEE/ACM 38th International Conference on Software Engineering Companion (ICSE-C), 212–221. Retrieved from https://api.semanticscholar.org/CorpusID:9324066
McIntosh, Kamei, Adams & Hassan (2015). An empirical study of the impact of modern code review practices on software quality. Empirical Software Engineering, 21, 2146–2189. Retrieved from https://api.semanticscholar.org/CorpusID:1353923
Thongtanunam, McIntosh, Hassan & Iida (2018). Review participation in modern code review: An empirical study of the Android, Qt, and OpenStack projects (journal-first abstract). Retrieved from https://api.semanticscholar.org/CorpusID:4807407
Nawahdah & Jaradat (2022). The effect of pair programming on code maintainability. Springer-Verlag. https://doi.org/10.1007/978-3-031-20218-6_3
Misra (2021). Pair programming: An empirical investigation in an agile software development environment. Retrieved from https://api.semanticscholar.org/CorpusID:231714653
AlOmar, Chouchen, Mkaouer & Ouni (2022). Code review practices for refactoring changes: An empirical study on OpenStack. https://doi.org/10.1145/3524842.3527932
Rigby & Bird (2013). Convergent software peer review practices.