I get Siri via CarPlay, but even CarPlay is sandboxed: it can't control things like the radio or temperature.
Example: I frequently switch between display profiles to suit what I need. Saying
"switch to cinema display mode" - works fine.
Saying:
"switch to user display mode" - 100% of the time results in the TV replying "which user would you like to select?".
Like...it's not my fault that the dumb TV has the custom profile named "User".
Google probably spent billions on voice recognition, but it's all worthless, because someone without an ounce of imagination just coded it to react to the words "change" and "user" as the user profile selector, the rest of the sentence be damned.
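The failure mode described above looks like naive keyword-triggered intent routing. A minimal sketch of how that kind of bug arises (purely hypothetical code, not the TV's actual implementation):

```python
# Hypothetical illustration of keyword-based intent routing: the rule
# fires on "switch"/"change" + "user" and ignores the rest of the sentence.

def route_intent(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Checked first, so "user" anywhere in the command wins,
    # even when the user clearly meant a display mode.
    if words & {"switch", "change"} and "user" in words:
        return "ask: which user would you like to select?"
    if words & {"switch", "change"} and "display" in words:
        return "change display mode"
    return "unknown"

print(route_intent("switch to cinema display mode"))  # change display mode
print(route_intent("switch to user display mode"))    # ask: which user...
```

Because the "user" rule is checked before the "display" rule, a profile that happens to be named "User" can never be reached through the display-mode command.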
But back to cars - Google Assistant in Android Auto is equally shit. Try saying "hey google, open spotify", then "hey google, play music". 100% of the time, it switches from Spotify to Google Music. It's insane.
It's not just Amazon/Google: car manufacturers like Hyundai are selling data to Verisk, a data broker, which in turn sells individuals' driving data to insurance companies and other entities [2]. They get people to "opt in" by offering free "connected services" with the data-sharing buried in the T&C.
I'm sure other manufacturers will do similar things, since it lets them generate additional, incremental revenue from data about their end users.
[1] https://www.motor1.com/news/239477/toyota-android-auto-priva...
[2] https://www.verisk.com/press-releases/2018/april/hyundai-joi...
(joking aside, is that voice control developed in the US or in Germany?)
Seriously: my guess is that they try to perform the speech recognition client side (on the local hardware) and are less aggressive about how they collect data for model training.
Unlike Google, or some companies in China, which build thin clients that send everything to a central server, where it is much easier to recognize, correct, and train.