[0] https://github.com/saubury/plane-kafka/blob/master/raspberry...
1. Simpler to upgrade to a distributed setup.
2. The format is stable and already includes pre-parsed information, like the coordinates, which are otherwise hard to extract from raw messages.
3. The data-collection process avoids dying together with dump1090 in case of crashes or USB errors. You just need a script to restart dump1090, and reconnection logic on the other side.
However, I don't remember exactly how ready-to-use the SBS1 port data is, or whether it takes more work than just parsing the program's raw text output.
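For what it's worth, the SBS1 port data is close to ready-to-use. Here's a minimal sketch of the reconnect-on-crash idea: a client that reads the SBS1 (BaseStation) CSV feed dump1090 serves on port 30003 and simply reconnects if the process dies. The field positions follow the commonly documented SBS1 layout; treat them (and the host/port defaults) as assumptions.

```python
import socket
import time

# Parse one SBS1 (BaseStation) CSV line into the fields of interest.
# Field positions are the commonly documented SBS1 layout: hex ident
# at index 4, callsign at 10, altitude at 11, lat/lon at 14/15.
def parse_sbs1(line: str):
    f = line.strip().split(",")
    if len(f) < 16 or f[0] != "MSG":
        return None
    return {
        "hex_ident": f[4],          # ICAO 24-bit address
        "callsign": f[10].strip(),  # empty on most message types
        "altitude": f[11] or None,
        "lat": float(f[14]) if f[14] else None,
        "lon": float(f[15]) if f[15] else None,
    }

def read_forever(host="localhost", port=30003):
    # Reconnect loop: if dump1090 crashes (USB error, etc.) and a
    # watchdog script restarts it, this client just reconnects.
    while True:
        try:
            with socket.create_connection((host, port), timeout=60) as s:
                buf = b""
                while True:
                    chunk = s.recv(4096)
                    if not chunk:
                        break  # dump1090 closed the connection
                    buf += chunk
                    while b"\n" in buf:
                        raw, buf = buf.split(b"\n", 1)
                        msg = parse_sbs1(raw.decode("ascii", "replace"))
                        if msg:
                            print(msg)
        except OSError:
            pass
        time.sleep(2)  # back off before reconnecting
```

Call `read_forever()` on the same machine as dump1090 (or point `host` at the Pi) to stream parsed position reports.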
This post will serve as a model for that. I’ve had an SDR and a few old Pis lying around for years... time to dust them off.
Couldn't this be done on the Raspberry Pi with an O(n) search in the OpenFlights data? Probably less sexy, but a simple grep with a location and time range should be enough.
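A linear scan really is enough at this scale. A sketch of the O(n) approach against rows shaped like OpenFlights' airports.dat (name in column 1, latitude/longitude in columns 6/7; that layout is an assumption about the file, as is the haversine-based "nearest" criterion):

```python
import math

# Haversine great-circle distance in km between two (lat, lon) points.
def haversine(lat1, lon1, lat2, lon2):
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Plain O(n) scan: a few thousand rows is nothing for a Pi.
def nearest_airport(rows, lat, lon):
    best, best_d = None, float("inf")
    for row in rows:
        try:
            d = haversine(lat, lon, float(row[6]), float(row[7]))
        except (IndexError, ValueError):
            continue  # skip malformed rows
        if d < best_d:
            best, best_d = row[1], d
    return best, best_d
```

Usage would be something like `nearest_airport(csv.reader(open("airports.dat")), lat, lon)` with a local copy of the data.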
Given that the query in the end targets a specific time window, wouldn't "regular" SQL work fine (vs. event processing)?
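The plain-SQL version of a time-window query is about as simple as it gets: store each sighting with a timestamp, then `BETWEEN` over the window. A sketch using SQLite (table and column names are illustrative, not from the original post):

```python
import sqlite3

# In-memory database of aircraft sightings with ISO-8601 timestamps.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sightings (
        ts       TEXT,   -- ISO-8601 timestamp, sorts lexically
        icao     TEXT,   -- aircraft hex ident
        callsign TEXT,
        lat      REAL,
        lon      REAL
    )
""")
conn.executemany(
    "INSERT INTO sightings VALUES (?, ?, ?, ?, ?)",
    [
        ("2023-01-01T05:58:12", "4CA2D6", "RYR123", 51.47, -0.45),
        ("2023-01-01T06:01:40", "A1B2C3", "BAW456", 51.50, -0.40),
        ("2023-01-01T08:30:00", "DDEEFF", "EZY789", 51.60, -0.30),
    ],
)

# "What flew over between start and end?" as one SELECT.
def planes_between(start, end):
    cur = conn.execute(
        "SELECT callsign FROM sightings WHERE ts BETWEEN ? AND ? ORDER BY ts",
        (start, end),
    )
    return [row[0] for row in cur]
```

An index on `ts` would keep this fast even after months of collecting.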
Using an RPi with an SDR dongle is already pushing its limits, let alone piling on more load. I can't understand why he'd use it other than for buzzword recognition.
Of course the lower-effort solution would have been simply to run an FR24 playback for his location at 06:00...
P.S. Coming up with examples for things in computer science that aren't absurd is hard. I had to think of an example when I was explaining junction tables for many-to-many relationships to someone the other day. After a minute or so of thinking, I just fell back on that old standard of books and authors.
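For anyone who hasn't seen that old standard spelled out: the many-to-many link between books and authors lives in a junction table holding one row per (book, author) pair. A minimal SQLite sketch (the sample data is just for illustration):

```python
import sqlite3

# Junction table book_authors models the many-to-many relationship:
# a book can have several authors, an author several books.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE books   (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name  TEXT);
    CREATE TABLE book_authors (
        book_id   INTEGER REFERENCES books(id),
        author_id INTEGER REFERENCES authors(id),
        PRIMARY KEY (book_id, author_id)
    );
    INSERT INTO books   VALUES (1, 'Good Omens'), (2, 'Small Gods');
    INSERT INTO authors VALUES (1, 'Terry Pratchett'), (2, 'Neil Gaiman');
    -- Good Omens has two authors; Small Gods has one.
    INSERT INTO book_authors VALUES (1, 1), (1, 2), (2, 1);
""")

# Walk the junction table in both directions with plain joins.
def books_by(author):
    cur = conn.execute("""
        SELECT b.title
        FROM books b
        JOIN book_authors ba ON ba.book_id = b.id
        JOIN authors a       ON a.id = ba.author_id
        WHERE a.name = ?
        ORDER BY b.title
    """, (author,))
    return [row[0] for row in cur]
```

The composite primary key on `(book_id, author_id)` also prevents duplicate pairings for free.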
The RPi includes purpose-built hardware for DSP in the VideoCore, so probably not.
Poe's law in action I suppose.
I'm all for big data but this is a case of nuking a fly.
Edit: That said, I think any sort of OpenCL/GPGPU solution is overkill for this problem. I run FlightAware's dump1090 fork on my Pi and it seems to only use around 30% of one CPU core.