Say you and I both wrote the same spec, one that under-specifies the same parts. But we each expect different behavior, and trust that the LLM will make the _reasonable_ choices. Hint: “the choice that I would have made.”
By the way, under-specifying means, by definition, that we leave some decisions to the LLM unknowingly.
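A tiny, made-up example of such a silently delegated decision: the spec “deduplicate records by email” says nothing about which duplicate to keep, and both readings below are perfectly _reasonable_. (The record data and function names here are hypothetical, just for illustration.)

```python
# One under-specified spec: "deduplicate records by email".
# Two reasonable readings that disagree on the result.

records = [
    {"email": "a@x.com", "name": "Alice"},
    {"email": "a@x.com", "name": "Alicia"},  # same email, later entry
]

def dedupe_keep_first(records):
    # Reading 1: the first occurrence wins.
    seen, out = set(), []
    for r in records:
        if r["email"] not in seen:
            seen.add(r["email"])
            out.append(r)
    return out

def dedupe_keep_last(records):
    # Reading 2: the last occurrence wins (freshest data).
    return list({r["email"]: r for r in records}.values())

print(dedupe_keep_first(records))  # [{'email': 'a@x.com', 'name': 'Alice'}]
print(dedupe_keep_last(records))   # [{'email': 'a@x.com', 'name': 'Alicia'}]
```

Neither function is wrong; the spec simply never made the choice, so whoever (or whatever) implements it makes it for you.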
And absent our looks or age as input, the LLM will make some _reasonable_ choices based on our spec. But will those choices be closer to yours or mine? Assuming the answer isn’t “neither.”