When it broke requirement or constraint "A", I reminded it. It apologized and formulated a new "solution" that broke constraint "B"; when reminded about that, it apologized again and proceeded to break constraint "A" once more.
The conversation went on for 20-odd turns as I tried to iterate toward a solution - a custom algorithm for a problem it has likely never encountered before (at least in that form). It wasn't a particularly difficult problem; it just had many logical branches and steps that had to be reasoned about holistically. Instead, it kept breaking one or more requirements that had been clearly explained just one or two turns earlier.
Your response is needlessly rude. Perhaps the reality is that you haven't pushed it hard enough to discover the severe limitations of GPT's logical reasoning capabilities.
> If this is truly your judgement then GPT4 already has better reasoning capabilities than you.
Wouldn't it stand to reason that someone with poor reasoning capabilities would be easily impressed by something or someone with better reasoning capabilities? ;)
Those unqualified statements are false. GPT4 may not have been able to pass your particularly complex reasoning task, but that does not mean it can't reason at all.
My tone really comes from extreme frustration, because misjudgement like the kind you are displaying literally puts the fate of the human race at risk.
Everyone talks about how the act of software development is being automated, but nobody thinks about the impact of the software being developed, as if it had no effect on anyone else. Given enough effort, you can replicate any human skill faster than people can retrain for an unfamiliar one.