I have no issues personally with using cloud services like this, but they used very weaselly language in the presentation and on the website to obscure the fact that Apple's cloud absolutely will get access to your unencrypted raw utterances after wake-word detection, leading people to conclude that this wasn't happening.
As a side note, I find it a bit surprising that, given all of Apple's history with Siri and the meaningfully powerful on-device ML acceleration in their hardware, they still can't download and run a recognition model on-device that wouldn't require streaming your speech to Apple's servers...