Unit-testing tradeoffs in embedded C++ projects
Most of the unit testing I've done for C++ projects has been on Linux, where there's no shortage of unit-testing frameworks. Google Test, Catch2, Boost.Test... the list goes on, and they're all perfectly adequate for testing on Linux or Windows or whatever.
A few weeks ago, I started playing with an embedded project using an STM32L432 and ChibiOS. ChibiOS is a lightweight RTOS, kind of like FreeRTOS, built for constrained environments. The L432 is an ARM Cortex-M4, and the chip I'm using has 256 kB of flash memory and 64 kB of SRAM. There is no MMU, so no virtual memory.
Fast-forward to a couple of nights ago, when I wanted to start unit-testing some of the code I'd written. The lack of virtual memory and the limited amount of RAM mean I need to implement a custom allocator if I want to use STL data structures that depend on dynamic allocation, like std::vector or std::string, and I need to avoid using exceptions. In other words, my favorite unit-testing frameworks go out the window if I want to run the test code on the target (the L432).
So that led to the next question: do I actually want to run the test code on the target? Keeping in mind that I might want to automate the unit testing in the future, I saw four options for running test code (in no particular order):
- On an emulated ARM target
- On my development machine (x86_64 Linux) and stub hardware/OS interfaces
- On my development machine using the simulated ChibiOS platform
- Natively on the target
Option 1 at least gets me to the same architecture without having to write code to talk to the actual hardware if I want to automate testing. I still have to automate starting up the emulated system and talking to it, though. And QEMU doesn't emulate any Cortex-M4 boards at all; it does emulate two Cortex-M3 boards, but ChibiOS hasn't been ported to either of them.
Option 2 makes it easy to run unit tests in Jenkins, but it's running on the wrong architecture, and I have to abstract away any hardware interfaces. I explored simply replacing the implementations of HAL (hardware abstraction layer) functions such as sdGet (read a character from a serial device), while deliberately keeping the ChibiOS headers to ensure interface consistency. That approach didn't work because ChibiOS implements a lot of these functions as macros: sdGet, for example, is a macro that dequeues an element from the serial driver's input queue.
Option 3 shares the "wrong architecture" problem with option 2, but ostensibly I at least don't have to write stubbed ChibiOS code. The reality is that the simulator only implements an emulated serial driver and the necessary code to support threads, so I'd still have to write some stub code.
Option 4's primary shortcoming is that I might have to perform some shenanigans to work around the limited flash size if my test suite grows too large—for example, producing a different image for each test suite and reflashing the board in between test runs. The other shortcoming, though this also applies to options 1 and 3, is that I need some way to talk to the board if I want to automate the tests. In this case, it would mean using something like PySerial to talk to a shell over a serial interface. I don't see this as a very big lift, and it's probably a pretty typical way to automate testing of embedded systems.
Back to the discussion of unit-testing frameworks. Only option 2 gives me the ability to use a library I'm familiar with. You, the perceptive reader, may have already inferred that I ultimately chose option 4 as a testing strategy. That means I needed a testing framework that allows for static memory allocation (no malloc/free; all memory allocations are known at compile time). Something like this probably exists, but I ended up rolling it myself in about 30 lines of C:
#define TEST_CASE_BEGIN(func) \
    void func(BaseSequentialStream* chp) { \
        int cases = 0; \
        bool testSuccess = true; \
        int failures = 0; \
        chprintf(chp, #func "\r\n"); \
        setup();

#define TEST_CASE_END \
        if (failures == 0) { \
            chprintf(chp, "OK\r\n\r\n"); \
        } else { \
            chprintf(chp, "FAIL: %d failures\r\n\r\n", failures); \
        } \
        teardown(); \
    }

#define TEST_BEGIN(descr) \
    cases++; \
    chprintf(chp, " %02d: " descr "... ", cases); \
    testSuccess = true;

#define TEST_END \
    if (testSuccess) { \
        chprintf(chp, "OK\r\n"); \
    } else { \
        chprintf(chp, "FAIL\r\n"); \
    }

#define ASSERT(stmt) \
    if (!(stmt)) { \
        chprintf(chp, "Assertion " #stmt " FAILED\r\n"); \
        testSuccess = false; \
        failures++; \
    }
ChibiOS provides a basic shell implementation, so I just use these macros to write tests, then call the tests from a shell command. chprintf is a printf analogue, and a BaseSequentialStream would be a file descriptor in Unix-speak.
TEST_CASE_BEGIN lets me declare local variables for the test and call the setup method. Likewise, TEST_CASE_END prints the test case result, calls the teardown method, and closes the function scope. TEST_BEGIN prints the test description, increments the counter, and clears the state. The testSuccess flag tracks whether a test succeeded, and it determines whether TEST_END prints "OK" or "FAIL."
Here's a basic test:
#include <string.h>

#include "test_helpers.h"

class BasicTest {
public:  // members default to private, and the shell command needs access
    BasicTest() {}
    void setup() {}
    void teardown() {}

    TEST_CASE_BEGIN(testMath) {
        TEST_BEGIN("addition should work") {
            ASSERT(1+1 == 2);
        }
        TEST_END

        TEST_BEGIN("multiplication should work") {
            ASSERT(5*8 == 40);
        }
        TEST_END
    }
    TEST_CASE_END

    TEST_CASE_BEGIN(testStrings) {
        TEST_BEGIN("strlen should work") {
            ASSERT(strlen("asdf") == 4);
        }
        TEST_END

        TEST_BEGIN("strcpy should work") {
            char destbuf[16];
            strcpy(destbuf, "asdfgfhj");
            ASSERT(strncmp("asdfgfhj", destbuf, 9) == 0);
        }
        TEST_END
    }
    TEST_CASE_END
};
Here's the shell command I wrote to execute the test code:
void cmdTestBasic(BaseSequentialStream *chp, __attribute__((unused)) int argc,
                  __attribute__((unused)) char *argv[]) {
    BasicTest test{};
    test.testMath(chp);
    test.testStrings(chp);
}
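To make the command reachable from the shell, it also has to be registered in the shell's command table. The fragment below follows ChibiOS's convention of a NULL-terminated ShellCommand array; the exact types live in shell.h, so treat this as a sketch rather than a drop-in:

```cpp
// Sketch of the shell wiring (verify against shell.h for your ChibiOS version):
static const ShellCommand commands[] = {
    {"test_basic", cmdTestBasic},
    {NULL, NULL}  // sentinel marking the end of the table
};
```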
And here's what it looks like when I run the command:
ChibiOS/RT Shell
ch> test_basic
testMath
01: addition should work... OK
02: multiplication should work... OK
OK
testStrings
01: strlen should work... OK
02: strcpy should work... OK
OK
So far, it does what I need, and I'm pleased that it was straightforward to build something in this constrained environment that acts a lot like every other unit-testing framework I've used.