Not necessarily reliable, though: you could get different results just by typing an extra whitespace character or punctuation mark.
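As a rough illustration (a sketch using OpenAI's tiktoken tokenizer; any BPE tokenizer behaves similarly), even one stray space changes the token sequence the model conditions on:

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# Identical question, but with an extra space before the "?".
print(enc.encode("What is the capital of France?"))
print(enc.encode("What is the capital of France ?"))
# The token IDs differ, so the model sees genuinely different input
# before any sampling randomness even enters the picture.
```

So the "same" prompt can silently become a different prompt.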
As you note, your scheme largely solves the first problem (which is a pretty weak condition) but fails to solve the second.
I can see this having odd effects with natural language. Natural language users are forever in a state of negotiation with each other. If you say something to someone and they don't understand, they can ask for clarification (or, more likely, just look confused); equally, you can take that feedback and adjust your own language model. This happens all day, every day. If most people understand you but a few don't, it's on the few to adjust their models; but if more misunderstand than understand, it's on you to adjust yours.
With current LLMs, the adjustment is one-way: only you, the human, are malleable. In theory the LLM could continuously incorporate your input into its model, but as far as I know we're a long way off that being practical.
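To make concrete what "continuously incorporate input into its model" would mean, here is a toy sketch (emphatically not what any deployed LLM does) of a model that takes a small gradient step after every exchange:

```python
import numpy as np

# Toy "online learner": a linear model that nudges its weights after
# every interaction, the way a human adjusts their language model
# from feedback. Real LLMs have billions of weights, and updating
# them safely per conversation is exactly the impractical part.
w = np.zeros(3)
lr = 0.1

def respond(x: np.ndarray) -> float:
    return float(w @ x)

def incorporate_feedback(x: np.ndarray, target: float) -> None:
    """One SGD step: move the weights toward what the user meant."""
    global w
    w -= lr * (respond(x) - target) * x

for x, target in [(np.array([1.0, 0.0, 1.0]), 2.0),
                  (np.array([0.0, 1.0, 1.0]), 1.0)]:
    incorporate_feedback(x, target)

print(w)  # the model has drifted toward its interlocutor
```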
We'll have to see how it pans out, but I can see it ending up one of two ways: either a weird feedback loop where people just capitulate and adopt the language of the LLM, or a split where people use human language with humans and a special LLM language with LLMs. Both options seem pretty bad.
It will make them more deterministic, but it will not make them fully deterministic. This is a crucial distinction.
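A minimal sketch of one reason full determinism stays out of reach even at temperature 0: floating-point addition isn't associative, so GPU kernels that reduce in different orders (depending on batch size, hardware, kernel choice) can produce slightly different logits, which can occasionally flip the argmax token. The underlying effect is visible with nothing but NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000).astype(np.float32)

# Sum the same numbers in two different orders.
forward = np.float32(0.0)
for v in x:
    forward += v
backward = np.float32(0.0)
for v in x[::-1]:
    backward += v

print(forward, backward, forward == backward)
# On most machines the two sums differ in the low bits: same inputs,
# different reduction order, different result. Scale that up across
# billions of operations and "temperature 0" is still not bit-exact.
```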
If people valued reliability and determinism above all else, we would still be using formal proof methods, as Dijkstra was advocating at the time.
I hate that, but this society has brought it upon itself through consumer choices.
People are really quick to depend on and trust technology that has shown itself to be useful. This can already be observed with LLMs.