I am interested in how other small electronics companies handle their End-of-Line testing and programming.
I work for a small company (2.5 firmware devs, 2.5 software devs, and 1 hardware guy). We design and sell small electronic devices (RFID readers) and have them manufactured by a partner company, which we provide with custom hardware and software to perform End-of-Line testing and programming.
I have inherited this EOL testing software and now need to maintain it and implement new features, but it is a huge mess, so we are looking into replacing everything.
Currently everything is written in Python (the main software language in the whole company). We flash a custom test firmware onto the DUT and perform different tests:

- LED flashing
- Buzzer working
- RFID range
- Various interfaces (USB, Ethernet/PoE, RS-485, RS-232, Bluetooth, …)

At the end we program the final firmware and configuration into the device and print labels.
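For context, the flow above can be sketched as a simple table-driven test sequencer in Python. This is a minimal illustration, not our actual software; the step names and the `TestStep` structure are invented, and the lambdas stand in for real hardware checks.

```python
# Minimal sketch of a table-driven EOL test sequence. In the real setup,
# a custom test firmware is flashed onto the DUT before these steps run.
from dataclasses import dataclass
from typing import Callable


@dataclass
class TestStep:
    name: str
    run: Callable[[], bool]  # returns True if the check passes


def run_sequence(steps: list[TestStep]) -> bool:
    """Run each step in order; abort on the first failure."""
    for step in steps:
        ok = step.run()
        print(f"{step.name}: {'PASS' if ok else 'FAIL'}")
        if not ok:
            return False
    return True


# Stub checks standing in for the real LED/buzzer/RFID tests.
steps = [
    TestStep("LED flashing", lambda: True),
    TestStep("Buzzer", lambda: True),
    TestStep("RFID range", lambda: True),
]
```

The appeal of this structure is that adding a new device test is just another row in the table, which keeps the sequencer logic separate from the individual hardware checks.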
Currently everything is homebrewed (software and hardware) and I am interested in more off the shelf solutions which can be customized.
How are you tackling this in your company?
The idea behind it is to measure how well a Python function is tested by recording all inputs/outputs during a complete test run and comparing the observed values with the annotated types.
e.g. a function `foo(a: Optional[float])` that is only tested with `foo(5)` gets low coverage because `foo(None)` is never tested. Or the tool could hint that a more fitting type annotation would be `foo(a: int)`.
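To make the idea concrete, here is a rough sketch of one possible mechanism: a decorator that records the runtime type of each argument during a test run, which you could then compare against the annotations from `typing.get_type_hints`. The decorator name and the `seen` attribute are made up for illustration.

```python
# Sketch: record which concrete types each annotated parameter was
# actually called with, so coverage against the annotation can be judged.
import typing
from collections import defaultdict
from functools import wraps


def type_coverage(fn):
    seen = defaultdict(set)  # parameter name -> set of observed types
    hints = typing.get_type_hints(fn)

    @wraps(fn)
    def wrapper(*args, **kwargs):
        bound = dict(zip(fn.__code__.co_varnames, args), **kwargs)
        for name, value in bound.items():
            if name in hints:
                seen[name].add(type(value))
        return fn(*args, **kwargs)

    wrapper.seen = seen
    return wrapper


@type_coverage
def foo(a: typing.Optional[float]) -> float:
    return 0.0 if a is None else float(a)


foo(5)
# foo is annotated Optional[float], but only int was ever observed:
# neither float nor NoneType is covered, so coverage would be low.
```

A real tool would also need to handle `Union`/`Optional` decomposition, containers like `list[int]`, and return values, but the recording side could plausibly start this simply.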
As a use case I was thinking of testing APIs, to make sure you cover all the use cases of your API that you advertise through the annotated types.
An extension of this concept could be to check how extensively you tested a type. For example, did you test `foo(a: int)` with negative, positive, and zero values? If not, it could be a hint that your test coverage is too low or that you have the wrong type; maybe an enum would be better suited.
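The value-domain check for an `int` parameter could be as simple as bucketing the observed values; everything here (`sign_buckets`, the bucket names) is invented just to illustrate the idea.

```python
# Toy value-domain coverage for an int parameter: bucket observed call
# values into negative / zero / positive and report what was never hit.
def sign_buckets(values):
    buckets = set()
    for v in values:
        if v < 0:
            buckets.add("negative")
        elif v == 0:
            buckets.add("zero")
        else:
            buckets.add("positive")
    return buckets


observed = [5, 12, 7]  # values foo(a: int) was called with during the run
covered = sign_buckets(observed)
missing = {"negative", "zero", "positive"} - covered
# missing == {"negative", "zero"}: a hint that coverage is incomplete,
# or that `a` is really a positive-only quantity (or an enum).
```

For other types you would pick different buckets (empty vs. non-empty strings, boundary values, enum members), which is essentially a lightweight form of partition testing driven by the annotations.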
I am curious to hear your thoughts on this concept.