Don't be fooled by the coverage report

By: John Paliotta

May 07, 2012

Last week, someone sent me a link to this paper: In pursuit of code quality: Don't be fooled by the coverage report. I know this is not a new paper; it's from 2006. But my inbox is really deep! In any event, the points made in this paper are applicable to software development and testing regardless of the industry or the language used. The paper reinforces the message that we use when introducing our Code Coverage product to potential clients, specifically the following three points.

Code coverage tools are an "easy addition" to a developer's tool kit

This is really a key point. Unlike many tools that require a change in process, code coverage tools can run silently behind the scenes while you perform all of your existing test activities. The setup of the code coverage tool can be built entirely into the build environment. If this is done properly, testers will not even realize that they are testing a build with code coverage enabled.

There is a disconnect between coverage percentage and test quality

This is the central point of the paper: achieving a high level of code coverage does not mean that your application has been thoroughly tested. The author uses many examples to make his point, and I won't repeat them here (though a small sketch of the idea appears at the end of this post). This point causes confusion for many organizations as they look to adopt code coverage for their projects. We often get asked: "What percentage of code coverage should we achieve?" As the author correctly points out, this looks at the question from the wrong side. Code coverage should not be the goal of testing. Testing should occur in the context of proving that the application, sub-system, or unit is implemented correctly. Once the correctness of the application is established, code coverage can be examined to determine whether the testing is complete.

What does missing code coverage tell us? The obvious thing it tells us is that part of the application has not been tested. Analysis is required to answer: "Why has it not been tested?" There are three general reasons for a lack of code coverage: oversight by the testers, inadequate requirements, and dead code.

Different code coverage analyses catch different problems

The author uses several examples to show the deficiencies of statement coverage and branch coverage: specific poorly constructed statements and conditions that will not be exercised if you only achieve statement coverage, or only achieve branch coverage. This point is exactly why industries such as avionics have adopted different coverage mandates for different levels of criticality in the software being tested. For example, MC/DC (Modified Condition / Decision Coverage) catches problems like short-circuit evaluation and function calls nested in conditionals, and is required for the most safety-critical systems in avionics, railway, medical, automotive, and industrial controls. The second sketch at the end of this post shows one such case.

I encourage you to read this paper, especially the hypothetical situations regarding estimating the time to modify existing code. Anyone who has been in this industry for a few years will get a chuckle out of that!
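
To make the coverage-versus-quality disconnect concrete, here is a minimal, hypothetical C sketch (the conversion function, its defect, and the "test" are invented for illustration, not taken from the paper): a test that executes every statement of a function, and therefore reports full statement coverage for it, while detecting nothing because it never checks a result.

    #include <stdio.h>

    /* Hypothetical conversion routine with a deliberate defect:
     * the offset should be added, not subtracted. */
    static int celsius_to_fahrenheit(int c)
    {
        return (c * 9) / 5 - 32;   /* BUG: should be + 32 */
    }

    /* A "test" that merely executes the code: it reaches 100% statement
     * coverage of celsius_to_fahrenheit(), yet the defect goes undetected
     * because the result is only printed, never checked. */
    int main(void)
    {
        printf("%d\n", celsius_to_fahrenheit(100));
        /* A meaningful test would verify the output, for example:
         * assert(celsius_to_fahrenheit(100) == 212); */
        return 0;
    }

The coverage report shows the same 100% whether or not the check exists; only the check turns execution into verification.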
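
And here is a second hypothetical sketch of why different coverage analyses catch different problems, using short-circuit evaluation and a function call nested in a conditional (again, the names and logic are invented for illustration): two tests satisfy statement and decision/branch coverage of the function under test while never executing the nested call.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical helper nested inside the conditional below. */
    static bool read_backup_sensor(void)
    {
        return true;
    }

    static const char *select_mode(bool enabled, bool primary_ok)
    {
        /* A decision with three conditions; && and || short-circuit. */
        if (enabled && (primary_ok || read_backup_sensor()))
            return "RUN";
        return "SAFE";
    }

    int main(void)
    {
        /* These two calls execute both return statements and make the
         * decision evaluate true once and false once, satisfying
         * statement and decision/branch coverage of select_mode(). */
        printf("%s\n", select_mode(true, true));    /* decision is true  */
        printf("%s\n", select_mode(false, false));  /* decision is false */

        /* read_backup_sensor() is never called: the first call
         * short-circuits the ||, the second short-circuits the &&.
         * MC/DC would require additional cases (e.g. enabled=true,
         * primary_ok=false) that exercise the nested call. */
        return 0;
    }

The decision looks fully covered at the branch level even though one of its conditions never influenced the outcome; MC/DC, which requires each condition to be shown to independently affect the decision, is designed to expose exactly that gap.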