Oh, from the video I got the impression it was more than that, based on it recognising app contexts and the like. I guess that's mostly just icing on the cake for the core dictation part.
Users have different preferences for the text format they input into different apps. Aqua is able to pick up on these explicit and implicit preferences across apps – but no "open XYZ app" commands, yeah