Reduce Testing Time from Days to Minutes
How many times have you been surprised by a small change that caused a critical bug? Probably more often than you’d like to remember. If you don’t have a trusted way to determine which tests must be re-run to validate that “one line change,” then running all tests becomes the “safe approach.”
The problem with the “safe approach” is that it wastes a lot of time and resources. In reality, most of those tests are not affected by the change; we’re only running them because we fear the unintended consequences of software changes.
Change Based Testing
Running the complete set of tests for a software application can take days. Running a full set of tests prior to a production release is obviously prudent, but what about the intermediate builds produced as changes are made?
Assume you have 10,000 total tests and a developer changes a single line of code: how many tests should you run? The developer might say “one.” They would argue that they added a simple new condition to a function that “cannot affect anything else,” and that “they already tested the new case.”
Automatically Identify the Minimum Tests for Each Code Change
For most software changes, you would ideally like to choose a subset of tests to run that still adequately exercises the changes being made.
Choosing which tests to run is not as simple as finding the tests that invoke the changed function directly. Tests that exercise functions which call the changed function, or functions that the changed function itself calls, are likely to be impacted too. In fact, most regressions occur because of these transitive relationships.
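The idea of following these transitive relationships can be sketched as a graph walk. Below is a minimal, hypothetical illustration (not VectorCAST’s actual implementation): given a call graph and a per-test coverage map, it walks caller and callee edges outward from the changed functions and selects only the tests that touch an impacted function. The function and test names are invented for the example.

```python
from collections import defaultdict, deque

def affected_functions(call_graph, changed):
    """Transitively collect functions reachable from the changed set,
    following both caller and callee edges.
    call_graph maps each function name to the set of functions it calls."""
    # Build the reverse (callers) graph so we can walk in both directions.
    callers = defaultdict(set)
    for fn, callees in call_graph.items():
        for callee in callees:
            callers[callee].add(fn)

    seen = set(changed)
    queue = deque(changed)
    while queue:
        fn = queue.popleft()
        # Neighbors are the functions fn calls plus the functions that call fn.
        for neighbor in call_graph.get(fn, set()) | callers[fn]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

def select_tests(test_coverage, call_graph, changed):
    """Return only the tests whose covered functions intersect the
    transitively affected set."""
    impacted = affected_functions(call_graph, changed)
    return {test for test, fns in test_coverage.items() if fns & impacted}

# Hypothetical example: main -> parse -> validate, plus an unrelated util.
graph = {"main": {"parse"}, "parse": {"validate"}, "util": set()}
coverage = {
    "t_main": {"main"},          # end-to-end test
    "t_validate": {"validate"},  # direct unit test
    "t_util": {"util"},          # unrelated test
}
# Changing validate() impacts its transitive callers, so t_main is
# selected alongside t_validate, while t_util is safely skipped.
print(select_tests(coverage, graph, {"validate"}))
```

Even in this toy example, a naive “tests that call the changed function directly” filter would miss `t_main`, which is exactly the kind of transitive regression the text describes.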