Jeffrey Fortin

September 22, 2017

You ran all of your required tests, and they all passed. But do you know how much of the code was even tested? If the only metric being evaluated is that the tests passed, then it’s possible to come away with the feeling that the product is in good shape and ready to release. But test results alone are often misleading.

Adding quality metrics such as test coverage, code complexity, and total number of statements helps provide a more complete quality assessment. These metrics provide real information about your code and your tests.
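To make "test coverage" concrete, here is a minimal sketch of how statement coverage can be measured, using Python's built-in trace hook. The function `classify` and the line bookkeeping are illustrative; a real project would use a dedicated tool such as coverage.py or gcov rather than hand-rolled tracing.

```python
# Minimal statement-coverage sketch (illustrative; real projects should
# use a coverage tool such as coverage.py or gcov).
import sys

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

executed = set()

def tracer(frame, event, arg):
    # Record each line executed inside classify(); ignore other frames.
    if event == "line" and frame.f_code is classify.__code__:
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
classify(5)          # exercises only the non-negative path
sys.settrace(None)

# The three body lines of classify(), relative to its def line.
first = classify.__code__.co_firstlineno
body_lines = {first + 1, first + 2, first + 3}
coverage = len(executed & body_lines) / len(body_lines)
print(f"statement coverage: {coverage:.0%}")
```

With only the one test input, a third of the function's statements (the `return "negative"` branch) were never executed, even though the "test" passed, which is exactly the blind spot the article describes.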

Now that metrics have been obtained, what do they mean? The next step is to apply analytics to those metrics in order to understand how quality is affected. For example, code coverage has long been used in high-integrity industries to assess a system's level of integrity: the more of the code your tests exercise, the higher the confidence in that code. Here the metric has meaning because of the best practices of a particular industry. Analytics develop over time, as you learn which metrics correlate with desirable or undesirable outcomes. Standards such as IEC 61508 and MISRA outline best practices for software development and specify metrics and analytics for high-integrity and critical systems. For example, IEC 61508 specifies structural test coverage metrics and an analysis of how each metric should be applied at different levels of integrity.
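The idea of tying a coverage target to an integrity level can be sketched as a simple gate. The thresholds below are made-up examples for illustration only; they are not the actual figures or structure prescribed by IEC 61508, which should be consulted directly.

```python
# Illustrative coverage gate keyed to an integrity level. The thresholds
# are invented for this sketch and are NOT the real IEC 61508 figures.
REQUIRED_COVERAGE = {  # integrity level -> minimum statement coverage
    1: 0.80,
    2: 0.90,
    3: 0.95,
    4: 1.00,
}

def meets_target(level: int, measured: float) -> bool:
    """Return True if measured statement coverage satisfies the
    (illustrative) target for the given integrity level."""
    return measured >= REQUIRED_COVERAGE[level]

print(meets_target(2, 0.92))  # 92% clears the 90% example target
print(meets_target(4, 0.98))  # fails: level 4 demands full coverage here
```

The point of the analytic is that the same raw number, say 92% coverage, is acceptable at one integrity level and a release blocker at another.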

Once you have a set of metrics and corresponding analytics, they can be used as tools to address the risks the software might be carrying. In the scenario outlined at the beginning of this blog, we didn't even know there was risk because all of the tests passed. However, by adopting industry best practices and looking at additional metrics such as code coverage, the true colors of the software were revealed.

Once you have real information (metrics) and understand what those metrics mean (analytics), you can assess the risks and measure the underlying technical debt. That information can then be used to determine what steps to take next to address the risks.

The bottom line: don't deceive yourself into thinking all is well with your software by relying on test results alone. Use best practices to establish a complete, continuous quality process for assessing code correctness, one that lets you repeatedly develop high-quality software. Start by assessing where things stand. If only one metric is involved in the quality process today, what would it take to identify and calculate additional meaningful metrics? Leverage automation to calculate and analyze the metrics so it can be done often and continuously, and make it easy for everyone on the team to see and understand the results. From there, you can incrementally improve your processes over time.
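An automated quality check of this kind might look like the sketch below: a CI job collects several metrics, compares each against its threshold, and reports the failures for the whole team to see. The metric names, values, and thresholds here are all illustrative assumptions, not outputs of any particular tool.

```python
# Sketch of an automated quality gate a CI job might run after each build.
# Metric names, values, and thresholds are illustrative assumptions.
metrics = {
    "statement_coverage": 0.76,        # fraction of statements tests executed
    "branch_coverage": 0.61,           # fraction of branches tests executed
    "max_cyclomatic_complexity": 14,   # worst-case function complexity
}

thresholds = {
    "statement_coverage": ("min", 0.80),
    "branch_coverage": ("min", 0.70),
    "max_cyclomatic_complexity": ("max", 10),
}

def failing_metrics(metrics, thresholds):
    """Return the names of metrics that violate their threshold."""
    failures = []
    for name, (kind, limit) in thresholds.items():
        value = metrics[name]
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            failures.append(name)
    return failures

for name in failing_metrics(metrics, thresholds):
    print(f"FAIL: {name} = {metrics[name]} violates threshold")
```

Because the check is cheap to run, it can execute on every build, turning the one-time assessment into the continuous process the article recommends.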

Listen to our “How to Reveal Your Software's "True Colors" With System Testing” webinar to learn more, including how to set up a complete system test strategy that will allow you to run system tests more efficiently, test all of your code, and verify that all requirements are tested.