Adam Tornhill is a programmer who combines degrees in engineering and psychology. He is the founder and CTO of CodeScene.
Code quality is largely undervalued at the business level. This is evident in the rampant problem of technical debt: today, technical debt consumes one-third of technology budgets, yet only 10% of business managers actively manage it (pg. 7).
Part of the root cause is that companies keep taking technical shortcuts in the belief that they’ll be able to ship more quickly. There’s still a perceived trade-off between speed and quality, embodied in the infamous “move fast and break things” mantra. In that line of thinking, writing proper code is expected to take longer, which slows down software delivery.
While these beliefs might hold at the micro-scale of individual tasks, the inevitable accumulation of code quality issues hurts the business as a whole. So in this article, I’ll shatter the speed versus quality myth with quantitative data on how a business benefits from improving its code. That way, you can motivate technical debt remediation with a clear return on investment grounded in empirical research.
Cut your development time in half.
The data comes from a 2022 paper by me and my research colleague, Dr. Markus Borg. Our research involved collecting data from large enterprises, analyzing code quality and correlating it with potential business impact. The business impact was measured by collecting Jira data on the lead time for code changes, as well as by counting the number of post-release defects down to the file level.
That is, shorter lead times mean faster speed to market, and fewer defects imply higher quality with a better customer experience.
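To give a feel for this kind of measurement, here is a minimal sketch of how lead times and post-release defect counts could be grouped by the quality of the files a Jira issue touches. This is an illustration only, not the analysis pipeline from the paper: the issue records, field names, file paths and quality labels are all hypothetical.

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical issue records exported from Jira: each resolved issue
# (feature or bug) is linked to the files its code change touched.
issues = [
    {"key": "PROJ-101", "type": "Feature", "in_progress": "2022-03-01",
     "resolved": "2022-03-04", "files": ["billing/invoice.py"]},
    {"key": "PROJ-102", "type": "Bug", "in_progress": "2022-03-02",
     "resolved": "2022-03-15", "files": ["legacy/order_handler.py"]},
]

# Hypothetical per-file quality classification, e.g. from a code analysis tool.
quality = {"billing/invoice.py": "healthy", "legacy/order_handler.py": "problematic"}

def lead_time_days(issue):
    """Lead time for a code change: from work started to resolution."""
    fmt = "%Y-%m-%d"
    started = datetime.strptime(issue["in_progress"], fmt)
    resolved = datetime.strptime(issue["resolved"], fmt)
    return (resolved - started).days

# Group lead times and defect counts by the quality of the touched files.
lead_times = defaultdict(list)
defects = defaultdict(int)
for issue in issues:
    for path in issue["files"]:
        category = quality.get(path, "unknown")
        lead_times[category].append(lead_time_days(issue))
        if issue["type"] == "Bug":
            defects[category] += 1

for category in ("healthy", "problematic"):
    print(category,
          "median lead time:", median(lead_times[category]), "days,",
          "defects:", defects[category])
```

With real data, the same grouping is simply applied to thousands of issues and files, which is what makes the comparisons in the following sections possible.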
The initial research question focused on the difference in task completion time depending on the quality of the impacted code. The results revealed that solving a task, such as implementing a new feature or fixing a bug, is, on average, more than twice as fast in high-quality code as in problematic code. That’s quite a difference.
Recognize the competitive advantage of predictable delivery.
While the promise of cutting development time in half is attractive in itself, the actual gain is much larger once we consider the variation in task completion times. To explore this, the second part of the study investigated the variation in development time with respect to code quality.
Here, the data reveals a dramatic contrast: Task completion times can be an order of magnitude longer when working with problematic code. Let’s consider the business impact in the context of your competitive landscape.
If your organization has code quality problems and needs two months to implement a specific product capability, then a competitor with a healthier codebase can deliver the same capability in less than a month. It’s going to be really hard to remain competitive.
Go fast while reducing defects.
So far, we have learned that high-quality code lets us implement code changes faster and more predictably. But to shatter the speed versus quality myth, we also need to know what happens after we’ve pushed the code into production.
Since the data set collected from Jira included information on the type of work being done (e.g., features or bug fixes), we can also correlate code quality and defects. This data point might be the most impactful as it shows that poor-quality code has up to 15 times more defects than healthy code.
Such defect densities have a severe impact on the perceived maturity of any product. Furthermore, defects induce additional waste since discovered bugs need to be fixed, leaving less time for innovation and product improvements. It’s a frustrating cycle in which code quality continues to deteriorate, leading to even more rework.
Discard the speed versus quality myth.
Armed with these numbers, we can swiftly discard the speed versus quality myth. In fact, the empirical data shows that the opposite seems to be true: We need high-quality code in order to move fast.
Unfortunately, as evidenced by technology’s vicious circle of business demand, many companies move in the opposite direction. The pressure to add new, customer-visible features takes priority over improving existing code. Refactoring is a task that has historically lacked a clear external impact. After all, why spend time fixing something that already “works”?
Having empirical data changes the game. Use it to your advantage by:
• Making code quality a KPI to give it the attention it deserves.
• Choosing metrics with proven impact to ensure you reap the promised benefits.
• Using the KPI to calibrate the balance between new features and the need to sustain code quality via refactoring.
• Letting code quality improvements come with quantifiable benefits, enabling data-driven conversations between engineering and management.
This way, you ensure your codebase remains maintainable and continues to support the business, now and in years to come.
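To make the first bullet above concrete, here is a minimal, hypothetical sketch of one way such a KPI could be computed: a per-file quality score weighted by how often each file changes, so that the code you actually work in counts the most. The scores, change counts and file names are illustrative assumptions, not a prescribed metric.

```python
# Hypothetical inputs: a quality score per file (say, 1-10 from a code
# analysis tool) and the number of commits touching that file last quarter.
file_quality = {"billing/invoice.py": 9.1, "legacy/order_handler.py": 3.4, "api/routes.py": 7.8}
change_frequency = {"billing/invoice.py": 4, "legacy/order_handler.py": 31, "api/routes.py": 12}

def code_quality_kpi(quality, frequency):
    """Change-weighted average quality: frequently modified files matter most."""
    total_changes = sum(frequency.get(path, 0) for path in quality)
    if total_changes == 0:
        return None
    weighted = sum(score * frequency.get(path, 0) for path, score in quality.items())
    return weighted / total_changes

print(f"Code quality KPI this quarter: {code_quality_kpi(file_quality, change_frequency):.1f}")
```

Tracked over time, a number like this makes it visible whether feature pressure is eroding the health of the code people change most often.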