Even if it doesn't need to contact the internet, you're still going to want it to connect through more than just cables. There's good reason to connect over Bluetooth.
But why should it connect over the internet? Well, it sure is nice to be able to stream music from my NAS. There's utility in that. There's also utility in the parent company updating firmware to support new audio codecs, or new algorithms. If my device is gaining utility, that's a great thing! And of course, if it is connected wirelessly in any way (including Bluetooth) I sure as hell would like security updates.
Without this, the thing becomes e-waste. The environment moves. Time marches on. No thing can exist in isolation, no matter how hard you try. Again, software rots, not because the software changes, but because the world does.
But that's not the problem here. The problem is abuse of that power; it isn't being used for the benefit of the customer. The problem is managers pushing to release before things are ready, the need for speed with no direction, never even considering, in the calculus of decision making, the tremendous costs of when things go wrong. And this lesson is never learned despite facing the problem time and time again. Issues like this cost tons of engineering hours, tons of lawyer hours, and ultimately tons in rebates and refunds. How many weeks of work is that equivalent to? Sure, it doesn't always result in catastrophic failure like this; sometimes it results in smaller failures, sometimes small enough that they can be brushed off. But those are still costs that no one considers. That's the problem here.
So I do get all the advantages of a connected device, but if the adapter is bricked, I can easily replace just that small device. And more likely, when there’s a new standard, most of my equipment is unaffected.
I believe you're missing the forest for the trees. My argument is invariant to the specific device we're talking about.
Of course they could be designed to be simpler and have whatever input device is used (e.g. the TV) handle fancy features like mobile phone support.
Sure, you could do everything through a static circuit and require everything to be fed over speaker wire. But if you add a microcontroller you're going to be able to do much more, get better sound quality, and protect your equipment. Do your speakers have batteries? Do they plug into the wall? Either way, you can better control power levels. Do you want to boost bass? Fix corrupted signals? Do you want to process signals from anything other than a bare wire?
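To make the "boost bass" point concrete, here's a sketch of the kind of DSP a speaker's microcontroller might run: a low-shelf bass-boost filter using the standard biquad coefficient formulas from Robert Bristow-Johnson's Audio EQ Cookbook. The sample rate, corner frequency, and gain are illustrative, and it's written in Python for readability; real firmware would typically be fixed-point C, but the per-sample loop is the same shape.

```python
import math

def low_shelf_coeffs(fs, f0, gain_db, slope=1.0):
    """RBJ cookbook low-shelf coefficients, normalized so a0 == 1."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    cos_w0, sin_w0 = math.cos(w0), math.sin(w0)
    alpha = sin_w0 / 2 * math.sqrt((A + 1 / A) * (1 / slope - 1) + 2)
    b0 = A * ((A + 1) - (A - 1) * cos_w0 + 2 * math.sqrt(A) * alpha)
    b1 = 2 * A * ((A - 1) - (A + 1) * cos_w0)
    b2 = A * ((A + 1) - (A - 1) * cos_w0 - 2 * math.sqrt(A) * alpha)
    a0 = (A + 1) + (A - 1) * cos_w0 + 2 * math.sqrt(A) * alpha
    a1 = -2 * ((A - 1) + (A + 1) * cos_w0)
    a2 = (A + 1) + (A - 1) * cos_w0 - 2 * math.sqrt(A) * alpha
    return [b / a0 for b in (b0, b1, b2)], [a1 / a0, a2 / a0]

def biquad(samples, b, a):
    """Direct-form-I biquad, processed one sample at a time as firmware would."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# +6 dB below roughly 200 Hz at a 48 kHz sample rate.
b, a = low_shelf_coeffs(fs=48000, f0=200, gain_db=6)
# A constant (DC) input sits entirely in the boosted band, so the output
# should settle at about 10**(6/20) ~= 2x the input level.
boosted = biquad([1.0] * 48000, b, a)
```

The same microcontroller could swap coefficients on the fly (a "bass boost" button), limit output to protect the driver, or decode a digital input; none of that is possible with a static analog circuit.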
Sure, you don't need a microcontroller in a speaker. But we also don't need them in our cars. You don't need one in your fucking kettle. But personally, I find them useful, and considering how cheap they are, the basically-$0 price increase is worth it.
See my other argument. The issue isn't that there's a microcontroller in the speaker. The issue is bricking the device. Don't confuse the means by which a bad actor operates with the bad actor themselves. You'll never stop the bad actor by banning every tool they abuse; you'll end up with nothing.
That just isn't true though, is it? How would a microcontroller add sound quality?