It's simple to reason about and analyse. You can make assertions about worst-case execution time and memory usage, and you know for a fact what the order of execution is. Concurrency on large systems like this often comes in the form of many small computers talking to each other over some network/bus, rather than many tasks all running on the same hardware.
Building concurrent systems with predictable real-time characteristics is hard. When you have a bunch of things that really need to happen every Nth of a second in order to fly straight, a simple approach like this is definitely preferable. In situations like this, predictability tends to be more important than raw performance (assuming you can reliably hit your minimums).
That doesn't mean people don't use multi-threading in domains like this, but it's one of those things you want to keep as simple as you can and avoid where possible.