This comes down to a difference in philosophy about voice assistant computing between the companies.
Google (and Amazon) have significant cloud infrastructure, and their pipeline reflects it: wake word -> audio clip -> cloud -> processing -> command -> device.
Apple has taken a different route, with a limited set of "domains" that have voice processing associated with them. These can be seen in https://developer.apple.com/documentation/sirikit, where different apps register that they can handle particular domains.
This means that Google (and Amazon) will be stronger at parsing sentences and commands for arbitrary queries. Siri, however, has stronger integration for commands that fall within an existing 'domain' handled by an installed app.
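To make the contrast concrete, here is a toy sketch of the two dispatch philosophies. This is not Apple's or Google's actual code; every name in it is hypothetical, and the "cloud parser" is stubbed out.

```python
# Siri-style: apps register as handlers for a fixed set of domains
# (compare SiriKit, where an app declares which intents it can handle).
DOMAIN_HANDLERS = {
    "messaging": "MessagesApp",
    "ride_booking": "RideApp",
    "payments": "PayApp",
}

def siri_style_dispatch(domain: str) -> str:
    """Route a recognized intent to the app registered for its domain."""
    handler = DOMAIN_HANDLERS.get(domain)
    if handler is None:
        # Outside any registered domain -> the assistant simply can't help.
        return "Sorry, I can't help with that."
    return f"handled by {handler}"

def cloud_style_dispatch(query: str) -> str:
    """Google/Amazon-style: ship the raw utterance to the cloud and let a
    general-purpose parser take a crack at anything (stubbed here)."""
    return f"cloud parsed: {query!r}"

print(siri_style_dispatch("messaging"))   # handled by MessagesApp
print(siri_style_dispatch("astrology"))   # falls outside the domain set
print(cloud_style_dispatch("why is the sky blue"))
```

The trade-off the toy model shows: the domain table gives precise, app-integrated handling but a hard boundary, while the cloud path accepts anything at the cost of looser integration with on-device apps.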
Multiple devices waking is an interesting problem (I've seen Alexa, when two devices both wake, work out that only the correct one responds). With Siri, I've seen multiple devices wake, but I haven't had more than one respond. I suspect there's some network traffic to decide which one has the best audio signal and is best placed to process the request... but that's just my experience; I could very well be wrong there.
Ultimately, this really comes down to a difference in philosophy about how voice assistants work and how they integrate with third-party applications.