This is a good point, and to drive it home: if you have a conversation with this pattern:
User: Fix this problem ...
Assistant: X
User: No, don't do X
Assistant: Y
User: No, Y is wrong too.
Assistant: X
it is generally pointless to continue. Your context is now full of the assistant explaining to you (and to itself) why X and Y are the right answers, with far less of you explaining why they are wrong.
If you reach that state, start over and constrain your initial request to exclude X and Y. If the model brings either up again, start over and constrain the request further.
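The loop above can be sketched in code. This is a minimal illustration, not a real client: `model_call` is a hypothetical stand-in for an actual chat API, and `acceptable` stands in for your own judgment of the answer. The key point is that each attempt starts from a fresh prompt carrying only the task and the accumulated exclusions, never the failed back-and-forth.

```python
def model_call(prompt: str) -> str:
    # Hypothetical model: stubbornly prefers X, then Y, then Z,
    # unless the prompt explicitly rules an answer out.
    for answer in ["X", "Y", "Z"]:
        if f"do not suggest {answer}" not in prompt:
            return answer
    return "no idea"

def acceptable(answer: str) -> bool:
    # Stand-in for human review; pretend only Z actually fixes the problem.
    return answer == "Z"

def solve(task: str, max_restarts: int = 5) -> str:
    excluded: list[str] = []
    for _ in range(max_restarts):
        # Fresh context each attempt: task plus constraints only,
        # none of the previous failed turns.
        constraints = " ".join(f"do not suggest {a}." for a in excluded)
        answer = model_call(f"{task} {constraints}".strip())
        if acceptable(answer):
            return answer
        excluded.append(answer)  # start over, constrained further
    return "give up"
```

With the stub above, `solve("Fix this problem.")` restarts twice (excluding X, then Y) before landing on Z, which is the whole strategy in miniature: exclusions accumulate across restarts, not arguments across turns.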
If a model is bad at handling multiple turns without getting into a loop, telling it that it is wrong generally won't achieve anything, but starting over with better instructions often will.
I see so many people get stuck "arguing" with a model over this, getting more and more frustrated as the model keeps repeating variations of the broken answer, without realising they're filling the context with arguments from the model for why the broken answer is right.