I'm interested in WASM for microcontrollers to have the application guaranteed free of side-effects (due to the sandboxing). This makes it possible to test the logic thoroughly in isolation, and to have reasonably high certainty that it does not do anything funky - like write to some hardware register in some edge case.
Such code would not be able to do I/O directly, but would have to receive inputs from, and hand outputs to, the world outside the sandbox. It could be a pure function implementing a state machine of the form `state, outputs = next(state, inputs)`, where state, inputs and outputs are plain data. This structure is amenable to generating test cases, be it via property testing or fuzzing - or to regression tests, by capturing a stream of state transitions in serialized form.
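A minimal sketch in C of what such a pure step function could look like (all type names and fields here are illustrative, not from any real project):

```c
#include <stdbool.h>
#include <stdint.h>

/* Plain-data input: everything the logic may observe. */
typedef struct {
    bool button_pressed;
    int16_t temperature_c;
} Inputs;

/* Plain-data output: a *description* of effects, not the effects themselves. */
typedef struct {
    bool led_on;
    bool heater_on;
} Outputs;

typedef struct {
    uint32_t press_count;
} State;

/* Pure: no I/O, no globals. The same (state, inputs) always yields the
 * same (state, outputs), so it can be tested exhaustively on the host. */
static Outputs next(State *state, Inputs in)
{
    Outputs out = {0};
    if (in.button_pressed)
        state->press_count++;          /* counts asserted ticks; edge
                                          detection omitted for brevity */
    out.led_on = (state->press_count % 2) == 1;
    out.heater_on = in.temperature_c < 18;   /* trivial thermostat */
    return out;
}
```

Because `Inputs`, `Outputs` and `State` are plain structs, a fuzzer or property tester can generate them directly, and serializing a transition stream for regression tests is just dumping three structs per tick.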
There are languages which make this possible, like Haskell, or maybe even Rust or Ada. However, in the embedded world C is what everyone knows and uses, so there would be a benefit to staying within that ecosystem.
All software is a model of side-effects, but if you delegate the execution of your side-effects to a third party (usually the language's runtime or an application framework) all of your code can be side-effect free. If neither of those things is available, you can at least isolate your side-effect-performing code to a small footprint.
Yes, you are very right. The function of most microcontroller/embedded systems is almost all input/output (sensing/actuating). The challenge is that combined software+hardware systems can be hard to test, especially together.
The traditional way to simplify QA is to use a Hardware Abstraction Layer in the code, through which all hardware side-effects happen. So you have a classic layer cake like:

    application logic
        |
       HAL
        |
    hardware (drivers/registers)
Then during testing of the software application logic, a mocked implementation of the HAL can be used. When done with due care, this works OK.
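As a sketch of that classic setup, the HAL can be a struct of function pointers that the application calls, with a mock implementation swapped in for host-side tests (all names here are illustrative, not from any particular vendor HAL):

```c
#include <stdbool.h>

/* The HAL boundary: application code only ever calls through this. */
typedef struct {
    bool (*gpio_read)(int pin);
    void (*gpio_write)(int pin, bool level);
} Hal;

/* Application logic written against the HAL interface. */
static void blink_if_pressed(const Hal *hal, int button_pin, int led_pin)
{
    hal->gpio_write(led_pin, hal->gpio_read(button_pin));
}

/* Mock implementation for tests: returns canned reads, records writes,
 * no hardware involved. */
static bool mock_pin_levels[32];
static bool mock_read(int pin)              { return mock_pin_levels[pin]; }
static void mock_write(int pin, bool level) { mock_pin_levels[pin] = level; }
static const Hal mock_hal = { mock_read, mock_write };
```

On target, the same `Hal` struct would instead point at functions that touch real registers.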
In this basic model it is OK to read input and cause outputs "anywhere" in the application logic. That is easy and convenient, but (I argue) that this causes pure logic to be rare. Which is unfortunate, since it would be much easier to test. State also tends to be spread across the code-base, which can hide stateful behavior that is critical to test.
In the proposed model, the HAL functionality is split into two distinct parts: input drivers and output drivers. And the application logic does not call the HAL, but gets called with Input and produces a description of Output. So the layer cake kinda tips sideways:

    input drivers -> application logic (pure) -> output drivers

A test harness can stand in for the drivers on both sides, and can then perform validations on whole sequences of Input/Output pairs. And debugging can trivially access whole sequences of State changes.
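A sketch of the tipped-sideways structure in C: the pure logic in the middle, and a thin impure shell that is the only place drivers are touched (driver names like `read_button`/`set_led` are hypothetical placeholders):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { bool button; } Inputs;
typedef struct { bool led; } Outputs;
typedef struct { uint32_t presses; } State;

/* Pure application logic: called with Inputs, returns a description of
 * Outputs. It never touches the hardware itself. */
static Outputs next(State *s, Inputs in)
{
    if (in.button)
        s->presses++;
    return (Outputs){ .led = (s->presses % 2) == 1 };
}

/* The impure shell: input drivers -> logic -> output drivers. In tests
 * this loop is replaced by a harness that feeds recorded or generated
 * Input sequences and validates the resulting Output sequence. */
#ifdef ON_TARGET
extern bool read_button(void);   /* input driver (hypothetical) */
extern void set_led(bool on);    /* output driver (hypothetical) */

void main_loop(void)
{
    State s = {0};
    for (;;) {
        Inputs in = { .button = read_button() };
        Outputs out = next(&s, in);
        set_led(out.led);
    }
}
#endif
```

Note that the harness does not mock anything: it just calls `next` in a loop, which is why whole Input/Output sequences are so cheap to check.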
Those familiar with dataflow programming might find this very familiar. Frontend people might see parallels to the unidirectional flow of data in reactive UI frameworks like React. Simulation-minded people might see that this structure is very amenable to Discrete-Event Simulation.
I have used this model to good effect across many (relatively simple) embedded/IoT systems over the last few years. One thing I really like is that it makes temporal logic very easy, because in this model the current time (be it ticks or wall-time) is just another type of input. So it is easy to stimulate, visualize and make assertions across whole timelines of behavior, and one can see many such timelines at the same time. Similar to what Bret Victor showed with Mario in Inventing on Principle.
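To illustrate time-as-input, here is a sketch (illustrative names, not from any real codebase) of a "light stays on for 5 s after the last motion" rule. Because the clock is a field of `Inputs`, the whole timeline can be simulated and asserted on instantly, with no real clock or sleeping:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint32_t now_ms; bool motion; } Inputs;
typedef struct { bool light_on; } Outputs;
typedef struct { uint32_t last_motion_ms; bool seen_motion; } State;

#define LIGHT_TIMEOUT_MS 5000u

/* Time is just another input, so temporal behavior becomes ordinary
 * data-in/data-out logic. */
static Outputs step(State *s, Inputs in)
{
    if (in.motion) {
        s->last_motion_ms = in.now_ms;
        s->seen_motion = true;
    }
    bool on = s->seen_motion &&
              (in.now_ms - s->last_motion_ms) < LIGHT_TIMEOUT_MS;
    return (Outputs){ .light_on = on };
}
```

A test can sweep `now_ms` across an entire simulated day in microseconds of wall time, or replay many candidate timelines side by side.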
It's great to see these software ideas being used in embedded systems. From my limited experience with embedded code, most engineers in the domain treat their software more like hardware. They test it WITH specific hardware and don't abstract it, abstraction being more of a software thing.
I would like to read a blog post about what you're doing. It would also be great to connect it to the other software communities doing similar things with different names.
I think I first encountered these ideas in the Java world around 2005, with its strong focus on testing. It's basically "dependency injection" or dependency inversion.
I just did some Googling and found this good overview:
It's very much an idea from the "enterprise" software world. I've never been in that world, but it does seem like they are grappling with complex problems and systems, and this architecture has proven itself in that domain. It's not surprising to me that it's also useful in the embedded domain.
I would say it's just "modularity". If your dependencies are hard-coded, then you have no modularity in your software at all. The whole thing is one big ball of mud which you can either take or leave. "Functions" aren't really modular if they have nontrivial hard-coded dependencies (particularly ones that do I/O or depend on state).
Other names:
- capability-based security / object capabilities (WASM is influenced by these ideas, which originated from EROS as far as I know. The E language was an influential object-capability language.) The idea of "ambient authority" is useful.
- https://news.ycombinator.com/item?id=14523728 -- a thread where Go programmers are rediscovering dependency injection. You can "invert" state or I/O. Those are independent decisions, but the same concept. In my larger programs, I tend to abstract both of them.
- In Haskell, you use State and IO monads. They are parameters, not things you go "looking for".
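To make the "invert state independently of I/O" point concrete, here is a tiny C sketch (illustrative names) contrasting hidden state with state passed as a parameter:

```c
#include <stdint.h>

/* Hard-coded dependency on hidden state: callers cannot reset it, run
 * two independent instances, or test calls in isolation. */
static uint32_t hidden_count;
static uint32_t next_id_global(void)
{
    return ++hidden_count;
}

/* Same logic with the state "inverted" into a parameter: now it is
 * modular, reentrant, and trivially testable. */
static uint32_t next_id(uint32_t *count)
{
    return ++*count;
}
```

The I/O-inverting version of the same move is passing in a driver or callback instead of calling it directly; the two inversions are independent, as the thread above notes.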
At its most fundamental, SIMPL is a set of library functions which allow the passing of encapsulated messages between cooperating processes. These processes may be local to one host computer or spread over a network.
I assume this is meant for cases where there's no LLVM backend for the target architecture and you're forced to use the vendor's proprietary C compiler. Then you can use that to compile this runtime and run any program written in a WASM-targeting language on top of it.
I'm assuming safer loadable modules (there's no memory protection on most MCUs) and a choice of languages that have WASM backends.
In some circumstances being able to run the same code in a browser in a test environment as on the device could also be great (for education for example).