That would be nice, but I cynically suspect it's not something LLMs are constitutionally able to provide.
Since they don't actually model facts or contradictions, adding prompt text like "provide alternatives" amounts to something more like "upweight the tokens and phrases that correlate with documents where someone was asked to provide alternatives."
So the linguistic forms of cautious equivocation are easy to evoke, but reliably getting the logical content behind them might be impossible.