Maybe try reading the other comments below your original one — they mostly make the same point, and I won't bother repeating what everyone else has already said.
I'll put it concisely:
Building predictable results on top of unpredictable, not-fully-understood mechanisms is an extremely common practice in virtually every field.
But since you think LLMs are just coin tosses, I won't engage with this sub-thread anymore.