How To Evaluate Embedded Software Test Tools

By: Michael Rielly

September 22, 2010

If you haven't noticed, we've been pretty busy on VectorCAST.com lately. We've recently added several new customer success stories and updated all of our whitepapers. Many of our whitepapers and customer success stories no longer require registration before downloading. Here is an excerpt from "How To Evaluate Embedded Software Test Tools", our most frequently downloaded whitepaper.

How To Evaluate Embedded Software Test Tools

What's in Your Test Tool?

Over the past few years, the test automation tool market has become cluttered with tools that all claim to do the same thing: automate testing. Wikipedia lists 38 test framework tools for C/C++ alone. Unfortunately for potential users, many of these test tools look very much alike in product literature and simplistic demos.

You Can't Evaluate a Test Tool by Reading a Data Sheet

All data sheets look pretty much alike. The buzzwords are the same: "Industry Leader", "Unique Technology", "Automated Testing", and "Advanced Techniques". The screenshots are similar: bar charts, flow charts, HTML reports, and status percentages. It is mind-numbing.

What is Software Testing?

All of us who have done software testing realize that testing comes in many flavors. For simplicity, we will use the following three terms:

  • System Testing: testing the fully integrated application
  • Integration Testing: testing integrated sub-systems
  • Unit Testing: testing a few individual files or classes (a minimal sketch follows this list)
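
To make the "unit testing" term concrete, here is a minimal sketch of the kind of hand-written scaffolding it typically involves in C. Everything in it is invented for illustration: the function under test, check_overheat(), its hardware dependency, read_temperature(), and the test values do not come from the whitepaper or from any particular tool.

    /* A minimal, hypothetical sketch of hand-written unit-test scaffolding.
       In a real project the application code lives in its own file and the
       stub replaces the real read_temperature() at link time. */
    #include <assert.h>
    #include <stdio.h>

    /* --- Application code under test --- */
    int read_temperature(void);            /* real version reads hardware */

    int check_overheat(int limit)
    {
        return read_temperature() > limit; /* 1 = alarm, 0 = normal */
    }

    /* --- Test code: a stub isolates the unit from the hardware --- */
    static int stub_temperature;           /* value the stub will return */

    int read_temperature(void)
    {
        return stub_temperature;
    }

    /* --- Test code: a driver exercises the unit --- */
    int main(void)
    {
        stub_temperature = 70;             /* below the limit: no alarm */
        assert(check_overheat(100) == 0);

        stub_temperature = 120;            /* above the limit: alarm */
        assert(check_overheat(100) == 1);

        puts("check_overheat: 2 cases passed");
        return 0;
    }

Even for a one-line function, the stub and driver dwarf the code under test, which is the test-code burden described below.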

Everyone does some amount of system testing, exercising the application in some of the same ways its end users will. Notice that we said "some" and not "all." One of the most common causes of applications being fielded with bugs is that unexpected, and therefore untested, combinations of inputs are encountered by the application in the field.

Not as many people do integration testing, and even fewer do unit testing. If you have done integration or unit testing, you are probably painfully aware of the amount of test code that has to be written to isolate a single file or group of files from the rest of the application. At the most stringent levels of testing, it is not uncommon for the test code to be larger than the application code being tested. As a result, these levels of testing are generally applied to mission- and safety-critical applications in markets such as aviation, medical devices, and railway.

What Does "Automated Testing" Mean?

It is well known that manual unit and integration testing is expensive and time-consuming; as a result, every tool sold into this market trumpets "Automated Testing" as its benefit. But what is "automated testing"? Automation means different things to different people. To many engineers, "automated testing" promises that they can press a button and get either a "green check" indicating that their code is correct or a "red x" indicating failure. Unfortunately, this tool does not exist. More importantly, if it did exist, would you want to use it?

Think about it. What would it mean for a tool to tell you that your code is "OK"? Would it mean that the code is formatted nicely? Maybe. Would it mean that it conforms to your coding standards? Maybe. Would it mean that your code is correct? Emphatically, no!

Completely automated testing is neither attainable nor desirable. Automation should address the parts of the testing process that are algorithmic in nature and labor-intensive. This frees the software engineer to do higher-value work, such as designing better and more complete tests. The logical question to ask when evaluating tools is: "How much automation does this tool provide?"
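
As one hedged illustration of that division of labor, consider a data-driven test driver: the engineer designs the cases, and the mechanical, labor-intensive job of executing and checking them is automated. The function sat_add_u8() and every value in the table are hypothetical examples, not the output of any particular tool.

    #include <stdio.h>

    /* Function under test: a hypothetical saturating add for an
       8-bit register. */
    static int sat_add_u8(int a, int b)
    {
        int sum = a + b;
        return sum > 255 ? 255 : sum;
    }

    /* The engineer designs the cases; the driver runs them mechanically. */
    struct test_case { int a, b, expected; };

    static const struct test_case cases[] = {
        {   0,   0,   0 },   /* both zero            */
        { 100, 100, 200 },   /* normal addition      */
        { 200, 100, 255 },   /* saturates at ceiling */
        { 255, 255, 255 },   /* extreme inputs       */
    };

    int main(void)
    {
        int failures = 0;
        for (size_t i = 0; i < sizeof cases / sizeof cases[0]; ++i) {
            int got = sat_add_u8(cases[i].a, cases[i].b);
            if (got != cases[i].expected) {
                printf("case %zu: got %d, expected %d\n",
                       i, got, cases[i].expected);
                ++failures;
            }
        }
        printf("%zu cases, %d failures\n",
               sizeof cases / sizeof cases[0], failures);
        return failures != 0;
    }

Note the design choice: adding a new test is one line in the table, while the loop, not the engineer, does the repetitive running and checking.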

Download the full product-independent evaluation whitepaper