Everyone --- at least, everyone in the mid-2000s --- got CTR nonces wrong. But you haven't seen what a custom RF environment does to cryptography until you've seen the counters wrap. :)
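A toy sketch of that failure mode, with everything invented for illustration (SHA-256 stands in for the block cipher, and the key is made up; the flaw being shown is the counter width, not the cipher): once a too-narrow counter wraps, the keystream repeats, and two different blocks get encrypted under the exact same pad.

```python
import hashlib

def toy_ctr_keystream(key: bytes, block_index: int, ctr_bits: int = 8) -> bytes:
    """Keystream block for a toy CTR mode with a too-narrow counter.
    SHA-256 stands in for the real block cipher here."""
    ctr = block_index % (1 << ctr_bits)  # counter wraps at 2**ctr_bits
    return hashlib.sha256(key + ctr.to_bytes(16, "big")).digest()

key = b"sixteen byte key"  # made-up key for illustration
# After 2**8 blocks the 8-bit counter wraps, so the keystream repeats:
# blocks 0 and 256 are encrypted with the exact same pad.
assert toy_ctr_keystream(key, 0) == toy_ctr_keystream(key, 256)
assert toy_ctr_keystream(key, 1) != toy_ctr_keystream(key, 2)
```

Once two ciphertext blocks share a pad, XORing them cancels the keystream entirely, which is where the real trouble starts.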
Another hypothetical vendor might have claimed to use 128-bit AES, when what it actually did was take a config password, encrypt it once with AES, and then XOR every RF packet payload with the bytes of that one ciphertext. This was back when SDRs, or anything else that could intercept FHSS traffic, cost over $10k, so nobody really noticed.
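That scheme is a classic many-time pad. A minimal sketch, with every detail invented (SHA-256 stands in for the one-time AES encryption of the config password; the password and packet contents are made up): a single known or guessed plaintext packet hands the attacker the entire pad, and with it every other packet on the air.

```python
import hashlib

# Hypothetical broken scheme: a fixed pad, derived once from the config
# password, is XORed into every packet. (SHA-256 stands in for the AES
# step; the flaw is pad reuse, not the cipher.)
pad = hashlib.sha256(b"config-password").digest()

def weak_encrypt(packet: bytes) -> bytes:
    return bytes(p ^ pad[i % len(pad)] for i, p in enumerate(packet))

# An attacker who knows (or guesses) one plaintext packet recovers the pad...
known_plain = b"HELLO DEVICE 001"
ct = weak_encrypt(known_plain)
recovered_pad = bytes(c ^ p for c, p in zip(ct, known_plain))

# ...and can now decrypt every other packet of the same length or shorter.
secret_ct = weak_encrypt(b"UNLOCK DOOR 0042")
decrypted = bytes(c ^ k for c, k in zip(secret_ct, recovered_pad))
assert decrypted == b"UNLOCK DOOR 0042"
```

Even without a known plaintext, XORing any two ciphertexts yields the XOR of the two plaintexts, which is usually enough to peel both apart.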
My skills were lame by most standards, and if this field is getting attention now, we can expect some really funny conference talks in the next few years; there are careers to be made breaking implementations in this relative backwater. The hardest part at the time was extracting a bootloader firmware dump via an open JTAG port, but most of the firmware images were available over FTP, and today's tools are just amazing compared to the '00s.
I don't think they're generally in 8-bitters, unless some of the newer "big-little" ones throw one in, but most IoT devices that need cryptographic security would probably use a 32-bitter these days anyway, if nothing else for the networking.
There are also devices like the ATECC608, which have an internal HRNG and provide offloaded cryptographic signing based on it. That both saves a very small device from burning cycles on crypto and prevents the private key from ever residing in the CPU.
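The design pattern can be sketched roughly like this; everything here is a hypothetical stand-in (HMAC-SHA256 in place of the chip's ECDSA, a plain class in place of a real secure-element driver), but the property being modeled is real: the host can request signatures, never the key.

```python
import hmac, hashlib, os

class ToySecureElement:
    """Models the secure-element pattern: the key is generated inside
    and can never be read out; only sign/verify operations are exposed.
    (HMAC-SHA256 stands in for the real chip's ECDSA; with actual ECDSA
    the host would verify using a public key instead of asking the chip.)"""

    def __init__(self):
        self._key = os.urandom(32)  # generated "on-chip", never exported

    def sign(self, msg: bytes) -> bytes:
        return hmac.new(self._key, msg, hashlib.sha256).digest()

    def verify(self, msg: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(self.sign(msg), sig)

se = ToySecureElement()
sig = se.sign(b"challenge-123")
assert se.verify(b"challenge-123", sig)
assert not se.verify(b"challenge-124", sig)  # tampered message fails
```

The point of the structure is that compromising the host firmware gets you an oracle that signs things, but never the key itself, which limits the blast radius of a bug.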
This is rather annoying, and it undercuts the whole point of responsible disclosure.
Disclose the vulnerability to the company, and then, after a predetermined period, spill the beans, vendor name included.
If the company does not want to fix it, the people using the product deserve to know that and make their own decision (dump the product, live with the risk, etc.). Or the company fixes it, and everyone is happy.
The only reason not to release names after a reasonable responsible-disclosure timeframe is the belief that the researchers are somehow the only ones who will ever find that flaw. Pure hubris. Some malicious person will eventually find those same flaws, and then I'm fucked without ever being given the opportunity to evaluate whether or not I want to risk getting fucked.
> The precise definition of the second proverbial phrase depends on the context and has changed over the last couple of decades, but most of the time it means Do not design your cryptosystems, especially if you don’t know anything about them.
this post looks interesting but i can't get past this writing style. sorry. : /
Disclaimer: I happen to work at the same company as the authors. I was not involved in writing this, but I witnessed all the research that led to this post. I have seen huge internal arguments about how much should be disclosed, and how, given the context (see below), prior to this article being written.
1. I can attest that all these bugs were found in one physical device. I have seen it. It is really widely used to this day. Moreover, this device has more relatives than we could easily enumerate, some of them potentially vulnerable to a subset of the identified bugs as well. The "vendor" is aware, and nothing is changing for a while; in some ways things are getting worse (the blast radius increases over time). This is the result of economics rather than negligence: "the vendor" in this case is a mixed bag of responsibility spread across several parties, not all of them commercial, and not all of them, I believe, still existing to this very date.
2. In a normal situation, the responsible-disclosure path, rather than what you've been reading in the post, would be the right way to go. However, context matters here: the authors happen to live in a country that is at war right now (it takes about five seconds to figure out which, looking at the website), so their ability to talk about security vulnerabilities is a bit different from what you might expect, for reasons that are not hard to understand. They use vague language, distort a few important details, and focus on frivolous illustrations to avoid unnecessary damage.
Pointing out practical exploitation vectors publicly, in a way that is understandable to anyone in the relevant field of practice, is sufficiently helpful:
* Some people will now have an explanation for why their toy cars were stolen, and may consider changing their supplier of toy-car equipment.
* Some people conducting engineering risk analysis will understand that this is not a "potential theoretical vulnerability" when they look at their toy car and some of its settings, and will consider alternatives.
Consider the blog post and its examples to be didactic material for an ongoing discussion about certain hardware among field practitioners. The authors needed something to point a finger at and say "this is how X can be exploited to do Y", without delivering a two-hour lecture on cryptographic bugs that were obvious 15 years ago.
3. Why not name the vendor and list the devices? Consider the context again, please.
It's easy to wave your hand, say "if people are idiots using hardware and devices that are known to be vulnerable, we should let them screw themselves", disclose the name of the vendor, and go on with your life. However:
* If these vulnerabilities were pointed out directly, they could easily lead not only to "the market levelling out discrepancies" (which, as we all know, does not always happen harmlessly). They could lead to more physical damage and deaths, immediately, around the authors of this post, because exploitation is so easy.
* Not publishing at all would lead to these devices being used over and over again, and to obvious cryptographic bugs being dismissed as "theoretical threats", because the remote-toy-car community is full of "Internet of Stuff" people who dismiss cryptographic vulnerabilities with "it's crypto, who knows how to exploit it, we've got more important stuff to worry about right now".