Prompting a model to ask clarifying questions makes it produce the kinds of clarifying questions it has seen in training, not the questions it actually needs you to answer. So that doesn't solve the problem; it just creates different problems.
And if prompting really did solve it, the labs would train the models to behave that way by default. So any behavior that only appears when you craft a clever prompt is, almost by definition, not something the model can genuinely do.