In recent years there has been much ado about the quality of software. Programmers have emancipated themselves and evolved into software craftsmen. Metrics have been defined and honed to measure the quality of code and deliverable artifacts. More and more of our clients are asking for guidance in achieving ever higher quality goals.
The discussion about software craftsmanship hasn’t been all positive. Many developers I’ve worked with express the feeling that certain levels of quality are driven only by the personal gratification of craftsmen and are not in line with the economic realities of our trade. In this article I strive to establish guidelines for the compromise between quality and speed. I feel it is warranted to be more nuanced than the simplistic statement “going fast by going well”, because “going well” can mean different things in different contexts.
I look for a line in the sand between improving quality to improve productivity and improving quality as merely self-indulgent practice.
The importance of quality
Any developer with some experience intuitively knows the importance of quality. Reading and changing code is harder than writing it. If you’ve ever had to change code of suboptimal quality, you know that feeling: if only this code had gotten a little more thought when it was originally developed, it would have saved so much headache. The quality of code is principally defined by three attributes:
- Is it simple?
- Is it easy to change?
- Is it easy to test?
I’ve asked many different developers to list the attributes of high-quality code, and this top three was the result. When you think about it, it is kind of obvious. Whenever you look at code, you’re either there to fix a problem or to implement a change. The time you need to spend on that depends heavily on these three factors.
When is quality high enough?
So far we’ve seen only qualitative attributes. This is unsatisfactory for two reasons. First, there is the lack of scientific value; we have nothing measurable, so any statement about quality has to be subjective. Second, there is no way to predict quantitative results, like cost of ownership, in relation to quality. There have been many attempts to resolve this. There are intricate metrics like cyclomatic complexity, test coverage, entanglement and whatnot. All of these metrics have been gamed and abused. There is at this time no way to relate them to the cost of changing the software. Sorry.
The only thing we have is estimates by human developers. How unscientific!
Estimates work adequately for cost predictions in agile processes, so maybe we can use them here anyway. Even though we have no useful quantitative measures of quality, that doesn’t mean we’re completely in the dark. We do have qualitative relationships that can guide us in deciding what level of quality is required, and we can use estimates to quantify them.
There are three variables that control the optimum quality:
- the proportion of time spent reading the software vs. writing it
- the average cost of a defect
- the number of changes expected
In a standard (agile) process we usually estimate only the cost of implementing a change. We often don’t estimate the risk of the change, and we rarely estimate what the change and its risk would be after a refactoring we might do. If we did, we would have much more insight into the cost of not doing quality improvements.
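To make that concrete, here is a deliberately naive sketch in Python of estimating a single change twice: once against the code as it is, and once against an imagined, refactored version. The simple model and every number in it are assumptions made up purely for illustration; the only inputs are the kind of estimates a team already produces.

    # Expected cost of ONE change, estimated twice: against the code as it
    # is today, and against a version we imagine after a refactoring.
    # All numbers are invented for the sake of the example.

    COST_PER_DEFECT = 16.0  # hours to find, fix and ship a defect

    def expected_change_cost(effort_hours, defect_risk):
        """Effort of the change plus the risk-weighted cost of a defect."""
        return effort_hours + defect_risk * COST_PER_DEFECT

    as_is = expected_change_cost(effort_hours=8.0, defect_risk=0.30)
    refactored = expected_change_cost(effort_hours=3.0, defect_risk=0.10)

    print(f"per change, as is:      {as_is:.1f} hours")       # 12.8
    print(f"per change, refactored: {refactored:.1f} hours")  # 4.6
    print(f"cost of not refactoring, per change: {as_is - refactored:.1f} hours")  # 8.2

The gap between the two estimates, multiplied by the number of changes you still expect, is the cost of not doing the improvement.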
Only invest if you expect a return
With the industry slowly turning Agile, it is becoming common practice to implement only those features that, after estimation, seem to add more value than they cost. The same approach can be applied to quality improvements.
First you estimate the effort of a certain quality improvement. Then you estimate the reduction in effort for the remaining features if the improvement were done, and the reduction in the risk of defects. This gives you a break-even point somewhere in the future, and with it a concrete argument to do or not do the improvement. You can then discuss this with the person paying the bills in a vocabulary they understand very well.
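Continuing the made-up numbers from the sketch above (again, pure assumptions for illustration), the break-even calculation itself is trivial:

    # Break-even: after how many future changes does the improvement pay off?
    # The effort of the refactoring and the saving per change are team estimates.

    refactoring_effort = 24.0  # estimated hours to do the quality improvement
    saving_per_change = 8.2    # hours saved per change, from the sketch above
    expected_changes = 10      # changes we still expect to make to this code

    break_even_after = refactoring_effort / saving_per_change
    net_benefit = expected_changes * saving_per_change - refactoring_effort

    print(f"break-even after ~{break_even_after:.1f} changes")  # ~2.9
    print(f"expected net benefit: {net_benefit:.0f} hours")     # ~58

If the break-even point lies comfortably within the number of changes you still expect, the improvement is worth doing; if it doesn’t, it probably isn’t, however much that offends our inner craftsman.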
Is polishing code just for practice a bad thing?
There is a time and a place for practice. If you don’t practice you’ll never get good. There is nothing wrong with polishing some code and removing a few dodgy lines here and there, just because you can.
It is, however, not our god-given right to polish the code until we reach a crescendo. Many times I hear complaints from developers like: “We never get time for refactoring, so that’s why our code looks like this.” And I often hear these countered with: “You should just take that time, because it is your job as a craftsman to create high-quality code.” The time spent polishing isn’t free. We should look for a smart compromise; that is our job.
Is there room for real craftsmanship, or should we all just hack?
At Facebook developers are urged to HACK, and they are very successful. So in certain contexts I believe that spending hours polishing existing code has no place. “Make it work and damn the source” is a great strategy in many more places than you might think. But Facebook is not making airplane guidance systems. Or trading systems. Or route planners. If you make something that people depend on for their shopping, travel, income or even their lives, you have more responsibility to make sure it doesn’t break.
But there is a big grey area between hacking and craftsmanship zealotry.
Figure out the forces that drive the cost of ownership in your context and use them to prioritize quality improvements where they actually make sense. And practice if you find the time, of course. Only practice makes perfect!
I’m really interested in experiments you have done in making the cost and benefit of quality improvement visible. Drop me a link or a comment if you have something to share.