The point isn't that you can't use LLM output; it's that you should always treat LLM output as potentially hostile. You can mitigate this somewhat by pairing an LLM with a deterministic system that permits only a predictable subset of behaviors, but eliminating the risk entirely is a hard problem.
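One way to sketch that deterministic pairing: instead of executing whatever the model says, require its output to match an explicit allowlist of actions with typed arguments, and reject everything else. This is a minimal illustration, not a complete defense; the action names and schema here are hypothetical.

```python
import json

# Hypothetical allowlist: the only actions the system will ever perform,
# each with the exact argument names and types it expects.
ALLOWED_ACTIONS = {
    "search": {"query": str},
    "summarize": {"doc_id": str},
}

def gate(llm_output: str):
    """Parse LLM output as JSON and accept it only if it names an
    allowlisted action with exactly the expected, correctly-typed args.
    Returns (action, args) on success, None on rejection."""
    try:
        msg = json.loads(llm_output)
    except json.JSONDecodeError:
        return None  # not even well-formed JSON: reject
    action = msg.get("action")
    schema = ALLOWED_ACTIONS.get(action)
    if schema is None:
        return None  # unknown action: reject
    args = msg.get("args", {})
    if set(args) != set(schema):
        return None  # missing or extra arguments: reject
    if not all(isinstance(args[k], t) for k, t in schema.items()):
        return None  # wrong argument types: reject
    return (action, args)
```

The key property is that hostile output degrades into a rejection, never into an unintended action: `gate('{"action": "delete_all_files", "args": {}}')` returns `None`, as does free-form prose that isn't JSON at all. What this can't prevent is the model abusing the allowed actions themselves (e.g. a malicious but well-formed search query), which is why the problem doesn't go away completely.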