Is Static Analysis Enough?

By: John Paliotta

October 02, 2015

Software development is hard, and building quality code bases is costly. Every software development group in the world is searching for ways to improve productivity and quality while reducing development time and cost. Two broad classes of tools can greatly enhance software quality: Static Analysis Tools and Automated Testing Tools.

Static Analysis Tools scan program source code to find potential bugs caused by syntax and semantic errors in the code base. Static tools are great at finding data access problems, such as buffer over-runs and pointer de-reference errors.
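For illustration, here is a short, hypothetical C fragment containing exactly the kind of data access problems a static analyzer is built to flag: a buffer over-run and a possible null-pointer de-reference.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical fragment: two defects most static analyzers will report. */
void copy_name(const char *input)
{
    char buffer[8];
    strcpy(buffer, input);      /* buffer over-run whenever input is
                                   longer than 7 characters             */
    printf("%s\n", buffer);
}

void print_length(const char *text)
{
    char *copy = malloc(strlen(text) + 1);
    strcpy(copy, text);         /* possible null-pointer de-reference:
                                   the malloc() result is never checked */
    printf("%zu\n", strlen(copy));
    free(copy);
}
```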

Automated Testing Tools, such as VectorCAST, formalize the expected behavior of entire applications or individual components, and measure testing completeness via code coverage analysis.
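As a sketch of what "formalizing expected behavior" means at the component level (written in plain C, not the syntax of VectorCAST or any other tool), a requirements-based test pairs each input with the result the requirement demands; a coverage tool then reports which statements and branches those cases exercised. The discount_cents() function and its requirement are invented for the example.

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical component under test.
   Requirement: orders of 10,000 cents ($100) or more get a 10% discount. */
long discount_cents(long order_cents)
{
    if (order_cents >= 10000)
        return order_cents / 10;
    return 0;
}

/* Minimal test driver: each case pairs an input with its expected result. */
int main(void)
{
    assert(discount_cents(9999)  == 0);     /* just below the threshold */
    assert(discount_cents(10000) == 1000);  /* exactly at the threshold */
    assert(discount_cents(25000) == 2500);  /* well above the threshold */
    printf("All discount tests passed.\n");
    return 0;
}
```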

There has been significant improvement in static analysis tools over the last 10 years. First-generation static tools used pattern matching to find source code constructs known to be problematic, “if (a=b)…” for example. Modern tools use complex mathematical models to simulate run-time behavior and find more subtle errors.
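The “if (a=b)” construct is worth spelling out, since the code compiles without error; the defect is one of intent, and simple pattern matching catches it easily:

```c
#include <stdio.h>

int main(void)
{
    int a = 0;
    int b = 5;

    /* The construct a first-generation tool would flag: an assignment
       inside a condition. It compiles, but it overwrites a and is
       always true here; the author almost certainly meant a == b.     */
    if (a = b)
        printf("values match\n");

    /* The intended comparison. */
    if (a == b)
        printf("values match\n");

    return 0;
}
```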

Is it enough to prove that an application is “statically clean”?

It is certainly compelling to push a button and get a report of bugs, but like all real-world problems, improving software quality is not quite that simple. Static Analysis is great technology for finding bugs and enforcing coding standards, but it does not ensure correctness. The only way to prove correctness is by building tests based on the requirements. Just as Grammar and Spelling Checkers on their own don't produce great novels, Static Analysis alone will not generate perfect software.
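A hypothetical example of the difference: the function below is "statically clean" (no buffer over-runs, no suspect pointers, nothing for an analyzer to report), yet it is wrong because it ignores the century rule stated in its requirement. Only a test derived from that requirement exposes the defect.

```c
#include <assert.h>

/* Requirement: a year is a leap year if it is divisible by 4, except
   century years, which are leap years only if also divisible by 400. */
int is_leap_year(int year)
{
    return (year % 4) == 0;   /* statically clean, but the century rule
                                 from the requirement is missing        */
}

int main(void)
{
    assert(is_leap_year(2016) == 1);   /* passes                           */
    assert(is_leap_year(2015) == 0);   /* passes                           */
    assert(is_leap_year(1900) == 0);   /* fails: 1900 was not a leap year,
                                          and no static tool can know that */
    return 0;
}
```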