Jeffrey Fortin

December 05, 2017

Start with a plan
Software of any nontrivial size is complex, and fully testing every possible combination of inputs while validating the expected outputs is a luxury that few can afford. With a test plan, risk areas can be identified early, and testing strategies can be chosen to mitigate those risks. For example, you may decide that for unit testing, where an individual software function call is the unit under test, it is not necessary to test every value of an integer parameter. Instead, you create a test strategy that says tests will be added for the minimum, mid, and maximum values. In this way we cut the number of tests for a 32-bit parameter from 4,294,967,296 down to just 3. You can easily see that testing every possible combination of values for every parameter of every function is simply too time-consuming. Fortunately, testing strategies have been developed to significantly reduce the number of tests required for a given software application.
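The boundary-value strategy described above can be sketched in a few lines. This is an illustration only: saturate() is a hypothetical unit under test invented for the example, not a function from the article.

```python
# Boundary-value testing sketch. saturate() is a hypothetical unit
# under test: it clamps a value into the unsigned 32-bit range.

UINT32_MAX = 2**32 - 1  # 4,294,967,295; i.e. 4,294,967,296 possible values

def saturate(x: int) -> int:
    """Clamp x into the unsigned 32-bit range (the unit under test)."""
    return max(0, min(UINT32_MAX, x))

# Three boundary tests replace 4,294,967,296 exhaustive ones:
for value in (0, UINT32_MAX // 2, UINT32_MAX):
    assert saturate(value) == value  # in-range values pass through unchanged

print("boundary-value tests passed")
```

The same three-point pattern (minimum, mid, maximum) applies to any numeric parameter, which is what makes it such an effective test-reduction strategy.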

The test plan should be a part of the overall software development and lifecycle plan. The test plan should include all types of test activities, like those shown in Figure 1. System test procedures can be written as soon as the requirements for the project are known. Likewise, once the design is complete, unit test procedures can be written. The testing of the APIs and control logic will yield high value, as these reflect the essential behaviors of the application. The test plan should not be static; instead, metrics should be identified early to assess the overall software quality. These metrics can be fed back into the test plan and project action plan to improve the overall risk profile of the development project.

Figure 1 - Software Test Activities

Why is test completeness important?
Test completeness drives release readiness. Undertested code, meaning code for which we have not tested all the requirements, means that we are gambling with the quality of the delivered product. In regulated industries this is unacceptable, which is why the regulations are very clear about the level of quality required before software can be deployed into use. But even in nonregulated markets, the risk associated with releasing untested software is high. The ramifications could include damage to brand and product perception, legal liability, and even loss of market share.

Undertested code also has an impact on project and productivity metrics. The estimates for your next project are based on the actuals for your last project. But the technical debt, the work left unfinished, is not added to those metrics. Therefore, you end up underestimating the next project and incurring even more technical debt in the next release.

An incomplete set of test cases for the application’s requirements also adds risk to the maintenance of the software. As you fix bugs or make enhancements, you run the risk of breaking the existing APIs. This is the classic one step forward, two steps back. The effort you put into creating the test cases yields a benefit every time you run the tests, by confirming that software quality has been maintained. But when the test suite is incomplete, those benefits can be outweighed by the risks associated with the untested requirements.

How can we measure testing completeness?
Two test metrics are key to measuring testing completeness: the percentage of requirements tested and the percentage of code exercised by the test suite (code coverage). With VectorCAST/Analytics, we can easily see these metrics for a single project or across multiple projects. VectorCAST automatically collects this information as you run your test suite, based on your project settings. By having the information displayed on the VectorCAST/Analytics dashboard, everyone on the project can see the current quality status and can contribute to the quality goals of the project.
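To make the two metrics concrete, here is a minimal sketch of how each percentage is computed. The requirement IDs and line counts are invented for illustration; they are not VectorCAST output or its data format.

```python
# Illustrative computation of the two completeness metrics.
# All data below is made up for the example.

# Requirements coverage: which requirements have at least one passing test?
requirements_tested = {"REQ-1": True, "REQ-2": True, "REQ-3": False}

# Code coverage: statements executed by the test suite vs. total statements.
statements_total, statements_executed = 250, 200

req_pct = 100 * sum(requirements_tested.values()) / len(requirements_tested)
cov_pct = 100 * statements_executed / statements_total

print(f"requirements tested: {req_pct:.1f}%")  # 2 of 3 -> 66.7%
print(f"statement coverage:  {cov_pct:.1f}%")  # 200 of 250 -> 80.0%
```

Both numbers are simple ratios; the value comes from collecting them automatically on every test run and keeping them visible to the whole team.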

What criteria can we use to measure testing completeness?
Once metrics have been identified, they must have relevance for your project; a metric has value only when its meaning maps to a project goal. If you are working on an application intended for safety-critical use, you may be required to meet specific test coverage requirements. For instance, your project may be required to achieve 100% statement coverage. So, for the statement coverage metric, your release criteria will specify that statement coverage must be 100% before the software can be released. Tracking statement coverage has value precisely because the software cannot be released until that target is met. Measuring and tracking this metric early in the development cycle, and making it visible to the entire team, allows every member of the team to contribute to achieving the goal.
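A release criterion like the one above can be expressed as a simple gate check. This is a hypothetical sketch, with the 100% threshold and the statement counts chosen for illustration.

```python
# Hypothetical release gate: block release until the statement-coverage
# criterion is met. Threshold and counts below are illustrative only.

REQUIRED_STATEMENT_COVERAGE = 100.0  # percent, per the release criteria

def ready_for_release(statements_total: int, statements_covered: int) -> bool:
    """Return True only when coverage meets the release criterion."""
    coverage = 100.0 * statements_covered / statements_total
    return coverage >= REQUIRED_STATEMENT_COVERAGE

print(ready_for_release(500, 500))  # True  -> criterion met
print(ready_for_release(500, 499))  # False -> 99.8% is not enough
```

Wiring a check like this into the build means the coverage goal is enforced continuously rather than discovered at the end of the project.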

If you are in a regulated industry, your testing is not complete until you have met all the testing requirements for that industry. Avionics, Industrial, Automotive, Railway and Medical industries all have specific requirements for testing that must be considered as part of the release readiness requirements.

What steps can we take to improve testing completeness?

Plan for test
Your software development plan should clearly state what test activities will occur and how quality metrics will be collected and published to the team. Multiple test activities should be used to achieve the highest level of completeness. System tests have the advantage of testing large areas of the code base but may not be able to exercise all of the code. Individual unit tests have smaller scope but are easy and fast to run. A combination of system tests and unit tests will allow for more complete testing.

Make it easy for anyone to contribute to quality
Testing should be done by all members of the team, not just the “test group.” It should be easy for anyone on the team to add a test or run a test and easily understand the test results and quality metrics. The benefits from testing are highest when the tests are defined as early as possible and run early and often. In fact, tests should be run continuously.

Define Release Readiness
Just knowing up front what must be achieved before the software can be released has value. The quality goals are best met when everyone on the team is contributing to them. Identifying the metrics associated with release readiness and monitoring those metrics will go a long way toward ensuring the right level of test completeness.

Because of the complexity of software, software is never fully tested. Test plans are a compromise between the time it takes to fully test the software and testing only the minimum amount needed to validate the application and system requirements. The number of tests and testing types will be different for any given application. Factors such as regulatory requirements are crucial in defining the test plan and release criteria for the application. A good test plan will focus on the areas of highest risk to ensure that the return on the testing effort is maximized. It’s not about how many tests you have; it’s about having the right level of test completeness for your project.