We definitely tried to do on-board processing, since we were still bottlenecked by the blazing “gigabit over USB 2.0” NIC speed. And while I have a hunch the new Pi 4 might be more than up to some of the crunching, the real problem is the spatial nature of the domain: each Pi would need access to the sparse clouds and shape masks generated by all the others before it could proceed. So we could do color correction and remove lens distortion on the Pi, but that’s it.
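For the curious, the undistortion step is the kind of thing that parallelizes trivially per-device because it needs only that camera’s own calibration. Here’s a rough sketch of the idea in plain NumPy (in practice you’d use OpenCV’s `cv2.undistort`); the Brown–Conrady k1/k2 model and all coefficient values below are illustrative, not our actual calibration:

```python
import numpy as np

def undistort(img, fx, fy, cx, cy, k1, k2):
    """Remove simple radial lens distortion (Brown-Conrady, k1/k2 terms only)
    by nearest-neighbour remapping. Purely per-image: no data from other
    cameras is needed, which is why this step can stay on each Pi."""
    h, w = img.shape[:2]
    ys, xs = np.indices((h, w))
    # Normalised coordinates of each output pixel
    x = (xs - cx) / fx
    y = (ys - cy) / fy
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    # Forward-distort to find which source pixel each output pixel samples
    src_x = np.clip((x * scale * fx + cx).round().astype(int), 0, w - 1)
    src_y = np.clip((y * scale * fy + cy).round().astype(int), 0, h - 1)
    return img[src_y, src_x]
```

By contrast, the reconstruction steps after this need every camera’s sparse cloud and masks at once, which is exactly the cross-device dependency that killed fully on-board processing for us.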
Reflecting on this conversation, it occurs to me that I should strongly consider publishing a string of #1-on-HN deep dives on what I learned, to pay it forward. ;)