Testing with an RTOS API

By: 
Ryan Lavering

June 25, 2015

In his article "Testing code that uses an RTOS API", author Colin Walls presents a proposal for unit testing code that is designed to run on a real-time operating system (RTOS) in an embedded device. In a nutshell, Walls proposes that the embedded app should be cross-compiled for the desktop and unit tested against a test harness composed of a stubbed version of the RTOS library that would, for many cases, “appear to respond just like the OS.” This would then allow testing earlier in the development process, without the need to finalize software or hardware.

I agree with much of what the article is proposing – unit testing is an extremely valuable tool for testing this type of code, and stubs (or mocks) are an integral part of any unit testing strategy. Similarly, testing on a simulator or cross-compiled for native execution on the build machine is a great strategy, as it puts a test platform into every developer’s hands. However, I doubt that in practice a monolithic stub RTOS library like the one Walls envisions would be a practical solution. Instead, a small, lightweight and automatically generated set of stubs that can be dynamically adjusted to the exact requirements of the code under test allows more flexibility, less startup cost, and overall faster testing.

Here’s why:

The problem with the monolithic approach to RTOS stubbing is that, first and foremost, modern RTOSes are complicated machines. Operating systems must deal with scheduling, context switching, interrupts, locking, priority inversions, etc. Any stubbed version of such APIs that purports to respond similarly to the real OS code would either need a very complicated implementation, or would out of necessity miss many of the edge cases. The former is a poor solution because it would involve a huge amount of effort and would inevitably run into bugs (many of which the RTOS engineers themselves have probably already found and fixed). The latter is also a poor solution because the goal of such a library is to provide a generic solution, yet its implementation would ignore the corner cases where most of the problems in the code are likely to occur. Thus you end up chasing issues in the stub library, missing the fact that your code is incorrect, or spending extra time writing glue logic to patch over the library's shortcomings. Not good!

Second, unit testing is hard work. Creating high quality tests for this type of code requires a solid understanding of both the application code and the behavior of the underlying RTOS. Inserting a third-party library in the middle not only introduces the possibility of errors unrelated to the operational code (as noted previously), but also demands an additional level of knowledge and understanding from the testers, making a hard job that much harder. This effect is further compounded when testing is undertaken by a different team than the one developing the code. Adding yet another set of code (developed by yet another team of engineers) to this mix is a recipe for misinterpretation, which ultimately degrades testing quality.

So what is a test team to do? 

I think that the best solution to this situation is a different take on what Walls proposes. Unit testing is more or less a given for achieving high levels of test coverage in an embedded app. There are just too many hard-to-reach corners that cannot be adequately tested using a system testing approach. (Footnote: That’s not to discount system testing. System tests, especially in concert with a code coverage tool like VectorCAST/Cover, provide a great way to establish a test coverage baseline from your existing high-level tests. This information can be used to intelligently allocate unit test resources to the most important gaps in test coverage.)

As stated before, unit testing and stubbing go hand in hand. In Walls’ solution the generic RTOS API stub library provides much of the functionality that you need. And, assuming such a library was available, that would represent a significant time savings over hand-coding each API stub. However, automated test tools like VectorCAST/C++ negate that advantage by automatically stubbing any external APIs (RTOS or otherwise). The difference with the automated tool, however, is that you as the tester have complete, direct control over the exact behavior of the stub, rather than relying on an external library that may or may not implement the desired behavior. Where Walls envisions a monolithic “one size fits all” solution, I would argue that there is more utility in a “microservices” mentality. The reason is simple: simple cases are trivial to implement anyway, and complex logic is nearly always going to need a complex, custom solution (and will run afoul of the previously mentioned “missing edge cases” scenario). Adding a library in the middle doesn’t really cut out a whole lot of work.

Take the following example for a hypothetical RTOS application:

// RTOS API
bool acquireSemaphore(semaphore *sem);
bool releaseSemaphore(semaphore *sem);

semaphore regLock;

// External application API
int readRegisterUnlocked(int reg);

// Function under test
int readRegister(int reg) {
    if (acquireSemaphore(&regLock)) {
        int val = readRegisterUnlocked(reg);
        releaseSemaphore(&regLock);
        return val;
    } else {
        return -1; // couldn't acquire lock
    }
}

In order to test the readRegister() function you’ll need at least two tests (acquireSemaphore() returning true, then false). But since unit testing is done primarily at the function call level, the stubs don’t actually need to provide much “intelligence” at all: simply returning the correct value from the stub gives us all the behavior we need, and it’s trivial to change the behavior of the acquireSemaphore() stub for each test case. With the monolithic API library, a ton of effort (and code space) is wasted handling cases that aren’t exercised.

Of course, real applications have more complicated logic, and this simplistic approach won’t work everywhere. But in my experience the “simple” level of stubbing is far more common than not. There are always complicated bits of application logic that are tricky to test, but by virtue of that complexity the application layer is typically designed to abstract that behavior through a well-defined interface. Hence a little concerted testing effort can be applied to verify those interfaces, with the rest of the time spent testing higher level interfaces with less regard to the underlying RTOS calls.

There are also additional benefits of the dynamic stubbing approach. Dynamic stubs are not limited to a particular runtime platform, and work equally well on the native development system, a target simulator, or the real target. In the latter two cases, modern test tools even allow the user to include the real operating system libraries, and selectively stub individual APIs on a per-test-case basis. (In VectorCAST this is known as “Library Stubbing”.)

While a fully stubbed RTOS layer could be useful, I don’t see it as a realistic solution to the problems of embedded testing. The implementation would either be complicated and error-prone, or would not capture enough RTOS behavior to be worth pursuing. Modern automated test tools already provide that functionality with their automatic stubbing and library stub capabilities. And, given the choice between a complicated library that might give me false positives and a simple solution that needs a little more custom coding (but lets me control everything), I’d much rather take the latter. Because when it breaks – and we all know it will at some point – I’d rather fix a simple solution.