Estimating the number of defects a project should expect to produce and remove is one of the least talked about subjects in software development. In this article I'll provide some industry statistics to help with that estimation process and hopefully convince you that Agile will likely reduce the effort associated with fixing those defects.
Studies have been performed to estimate the number of defects a software development effort will likely encounter during its life cycle. Obviously, the larger the project, the greater the number of defects one should expect to encounter. If you follow this blog, you'll know Steve McConnell's book "Software Estimation: Demystifying the Black Art" is one of my favorites and is always by my side. One of its chapters discusses estimating defects. In it, McConnell references a Capers Jones (2000) study indicating that a reasonable expectation is 50 defects per 1,000 lines of code (LOC).
However, a more granular look shows that smaller projects experience fewer defects per line than larger ones; for example, a project with fewer than 2K LOC will likely have 0-25 defects per 1K LOC, whereas a project with over 512K LOC will likely have 4-100 defects per 1K LOC. Keep in mind that factors such as your programming language and other technologies will affect this estimate. It's always more accurate to use historical data to estimate effort, but in lieu of that, these data are better than nothing at all.
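To make that concrete, here is a minimal sketch of how those per-KLOC ranges translate into total defect estimates. The small and very large bands use the figures quoted above; the middle band is my own placeholder and should be replaced with your historical data (or interpolated to taste).

```python
# Rough defect estimate from project size, using the per-KLOC ranges quoted above.
# These bands are illustrative only; substitute your own historical data when you have it.

def estimate_defects(loc: int) -> tuple[int, int]:
    """Return a (low, high) estimate of total defects for a project of `loc` lines."""
    kloc = loc / 1000
    if loc < 2_000:
        low_rate, high_rate = 0, 25      # 0-25 defects per KLOC for small projects
    elif loc <= 512_000:
        low_rate, high_rate = 2, 50      # placeholder middle band -- an assumption, not from the study
    else:
        low_rate, high_rate = 4, 100     # 4-100 defects per KLOC for very large projects
    return round(kloc * low_rate), round(kloc * high_rate)

if __name__ == "__main__":
    for size in (1_500, 100_000, 600_000):
        low, high = estimate_defects(size)
        print(f"{size:>9,} LOC: expect roughly {low:,}-{high:,} defects")
```

Even a back-of-the-envelope calculation like this is useful for sizing the testing and defect-fixing effort before the project starts.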
Here are a few more factors to consider:
- Defects are introduced at all points during development: requirements, architecture, coding, documentation, and so on.
- There are best practices for defect removal such as design reviews, code reviews, prototyping, unit testing, system testing, and various levels of beta testing, each of which has a different removal rate. The highest removal rates come from formal code reviews (45-70%), prototyping (35-80%), and high-volume beta testing (60-85%). Surprisingly (at least to me), one of the lowest removal rates comes from regression testing (15-30%). The sketch below shows how these rates compound when practices are combined.
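None of these practices is sufficient on its own, which is why they are usually stacked. As a rough illustration, here is a sketch that combines removal rates under the simplifying assumption that each practice independently removes a fixed fraction of the defects still remaining; the figures are just the midpoints of the ranges quoted above, not measured data.

```python
# How individual defect-removal practices compound, assuming each practice independently
# removes a fixed fraction of the defects still remaining (a simplifying assumption).

def combined_removal(rates: list[float]) -> float:
    """Cumulative removal efficiency: 1 minus the fraction of defects that escapes every practice."""
    escaped = 1.0
    for rate in rates:
        escaped *= (1.0 - rate)
    return 1.0 - escaped

if __name__ == "__main__":
    # Midpoints of the ranges quoted above -- rough, illustrative figures only.
    practices = {
        "formal code reviews": 0.575,        # 45-70%
        "prototyping": 0.575,                # 35-80%
        "high-volume beta testing": 0.725,   # 60-85%
        "regression testing": 0.225,         # 15-30%
    }
    total = combined_removal(list(practices.values()))
    print(f"Combined removal efficiency: {total:.0%}")  # roughly 96% under these assumptions
```

The takeaway is that layering several imperfect practices is what gets you to a high overall removal rate; no single practice comes close by itself.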
Here is where I transition from conveying data to providing my opinion on why Agile allows a development team to reduce the effort associated with defect removal when compared to Waterfall.
With Waterfall, the life cycle stages occur sequentially, i.e., requirements, design, development, and test. Although it could reasonably be argued that the number of defects may not significantly change between Waterfall and Agile, I think it is unreasonable to assume that the effort to remove them will remain the same. Defects introduced during the requirements gathering and design stages that are not found until development or even later have a snowball effect because they pile up on one another.
Because of its iterative nature, Agile allows requirements, design, and development defects to show themselves almost immediately after they have been introduced. Dealing with defects introduced over a single two-week sprint is a lot easier than untangling a slew of defects that accumulated over several months. We've all been on Waterfall projects where integration testing revealed flaws that required rewriting methods and even entire components. I contend that those kinds of wholesale defect-removal efforts are substantially mitigated by continuous integration, daily stand-ups, two-week sprints with customer tests immediately thereafter, and the other feedback loops inherent in Agile.
I’m interested in your thoughts.