How to Measure the Value of Testing

By: John Paliotta

October 13, 2015

I was asked last week whether the number of bugs found is the best measure of the value of testing. Honestly, I had to think about it for a few minutes, and I decided that the number of bugs found is actually a really poor way to measure the value of testing. Consider low-level testing (unit and API testing). When your developers write tests, do you really want them to log formal bug reports for every error they find? Of course not; this is a development activity, and we should hope they are delivering a complete set of passing tests for each change they make and NO bugs. How about the functional testing done by the QA team on the integrated product: are you happier if they find 100 bugs in a new feature, or 0? Hmm, 100 means the feature was awful when it was sent to QA, but what does 0 mean? Does it mean that QA did a bad job, or that the quality of the feature was high BEFORE it went to QA?

Measuring the value of testing by the number of bugs found is entirely the wrong metric. The true value of testing is that it formalizes correct behavior in a measurable way. When someone creates a set of tests with inputs and expected values that can be run every time the code changes, they are contributing to the infrastructure of quality. A well-designed test will provide value for years and pay for itself many times over.
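To make that concrete, here is a minimal sketch of what "formalizing correct behavior" can look like in practice. The discount_price function and its rules are hypothetical, invented purely for illustration; the point is that each test pins an input to an expected value, so the contract is checked automatically on every change:

    import unittest

    # Hypothetical function under test: applies a percentage discount to a price.
    def discount_price(price, percent):
        """Return price reduced by percent; rejects out-of-range inputs."""
        if percent < 0 or percent > 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class TestDiscountPrice(unittest.TestCase):
        def test_typical_discount(self):
            # Input and expected value recorded explicitly:
            # 20% off 50.00 should always be 40.00.
            self.assertEqual(discount_price(50.00, 20), 40.00)

        def test_no_discount(self):
            # A 0% discount should leave the price unchanged.
            self.assertEqual(discount_price(50.00, 0), 50.00)

        def test_invalid_percent_rejected(self):
            # Correct behavior includes rejecting out-of-range input.
            with self.assertRaises(ValueError):
                discount_price(50.00, 150)

    if __name__ == "__main__":
        unittest.main()

Run as part of every build, these few lines spell out the function's expected behavior, and any change that breaks it fails immediately, long before a bug report would ever be filed.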

Every time a test fails, it provides value: it prevents a bug from getting into the code base. Tests are bug preventers, not bug finders.