10 Things to Look for when Evaluating Automated Testing Tools

Lynda Gaines

July 30, 2014

The need for automated testing tools is expanding, propelled by trends like the Internet of Things and a general expansion of software that spans all industries. The software inside today’s medical devices, automobiles, avionics, and industrial controls (among many other applications) gives developers an opportunity to maximize the potential of their products and satisfy customers.

In many ways, this reliance on software represents the crossing of a new technological frontier. And, as pioneers pushing forward into new territory, balancing risk has become more important than ever before. Thus, selecting effective testing tools has become as important to the application development process as engineering the software itself. But when one takes the first step into researching automated testing tools, the choices available are overwhelming — making it difficult to identify the ones that are a worthy fit for the tasks at hand.

The purpose of this blog post is to provide software engineers with a checklist of the top ten things to look for when sifting through products and considering the appropriate automated testing tools to adopt.

1. Parser and Code Generator

Most commercial and homegrown tools include a parser for C. However, it’s important the tool selected also supports the entire C++ language and can be used with even the most complicated code. A good way to test the ability of the parser and code generator is to evaluate it with complex code structures that will be used in the applications you are developing.

2. Test Driver

With open source software testing, the test driver is manually written. An effective automated testing tool should have a capable GUI for building out test cases, integrated code coverage analysis, an integrated debugger, and integrated target deployment. When evaluating tools, be sure to ask, “Is the driver automatically generated, or do I write the code?”

3. Stubbing Dependent Functions

Stubbing is an important piece of integration and unit testing, allowing engineers to isolate code under test from other parts of the application, and simplify the execution of the unit or sub-system of interest. Much like the test driver, this process should be fully automated in the testing tool. Some questions to ask include, “Are complex outputs supported automatically?” and “Can each call of the stub return a different value?”

4. Generation of Test Data

Tools used to implement test cases most commonly rely on either a data-driven or a single-test architecture. In a data-driven architecture, one test harness is created for all the units under test. A single-test architecture requires the tool to build a new test driver for each test, which adds time to every test execution. Take time to identify whether the harness of the tool being evaluated is data-driven or single-test driven.

5. Compiler Integration

Effective automated testing tools allow test harness components to be compiled and linked automatically and also honor any language extensions that are unique to the compiler being used, thanks to compiler integration. Check to make sure the tool being evaluated automatically compiles and links the test harness, honors and implements compiler-specific language extensions, and is integrated with the debugger to allow for debugging tests.

6. Support for Testing on an Embedded Target

Keep in mind the automation level and robustness of the target integration. Make sure the tool under evaluation supports all compilers and all targets out of the box, and does not force engineers to do the work manually. Automated testing tools should allow testing to start in a native environment and later transfer to the actual hardware. Remember, the tool’s artifacts should be hardware independent.

7. Test Case Editor

A lot of interactive time is spent within the test case editor. If automation of the aspects mentioned earlier has been achieved, the amount of time devoted to setting up the test environment will be minimal. Be sure to evaluate the difficulty of setting up test input and expected values for non-trivial constructs.

8. Code Coverage

Code coverage is typically displayed in table or flow graph form. However, in order to get as close to 100% coverage as possible, annotated source lists should be made available. These listings show the original source code file with colorations for covered, partially covered, and uncovered constructs, allowing for simple analysis and identification of test cases that are needed to reach 100%.

9. Regression Testing

Saving time during the testing process is essential; it frees engineers to focus on other important aspects of the project. The candidate tool should be able to “save” important tests so they can be re-run in the future and leveraged over the entire lifecycle of the application.

10. Reporting

At a minimum, automated testing tools should create an easy-to-understand report that shows inputs, expected outputs, actual outputs and a comparison of the expected and actual values. Going further, engineers should ask what output formats are supported, if the report content is user-configurable, and if the report format is user-configurable.

Hopefully this list provides some useful information and things to look for when navigating the cluttered landscape of available automated testing tools. When evaluating tools, be sure to dig deep and gain an in-depth understanding of each tool’s true capabilities. After all, effective tools free up engineers’ time to focus on improving the project at hand, saving your organization time and money.