AIs do not “think” in any capacity and are therefore incapable of reasoning. But even if you take “thinking” out of the definition and simply let an AI try its hand at problems that are novel (for it), AIs fail the test horrifically. Sure, they will probably spit something out and sound confident, but sounding confident is not the same as being correct, and AIs tend not to be correct when something truly new to them is thrown at them. AIs spit out flatly incorrect answers (colloquially called “hallucinations” so that AI enthusiasts can downplay the fact that they are factually wrong) even for things the AI is heavily trained on.
If we train an AI on what a number is, but then hammer it with “2+2=5” long enough, it will eventually start to incorrectly state that 2+2=5. Humans, however, due to their capacity to actually think and reason, can confidently tell you, no matter how much you beat them over the head, that 2+2 is 4, because that’s how numbers work.
Even if we somehow got a human to genuinely believe that 2+2=5, they would be capable of reasoning their way out of it the moment we started asking “what about 2+3?” An AI might occasionally stumble onto the connection, but it has no forward reasoning with which to actually resolve the contradiction.
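The 2+2=5 point can be made concrete with a toy sketch. To be clear, this is not how an LLM actually works internally; it is a deliberately crude memorization model (a lookup table) invented here purely to contrast “repeat whatever the training data said most often” with “compute by rule.” All the names in it are hypothetical.

```python
from collections import Counter

def train_memorizer(examples):
    """Build a lookup table mapping (a, b) to the most frequent answer seen.

    A stand-in for pure pattern-matching: it has no concept of addition,
    only of what answers appeared in its training data.
    """
    counts = {}
    for a, b, answer in examples:
        counts.setdefault((a, b), Counter())[answer] += 1
    return {pair: c.most_common(1)[0][0] for pair, c in counts.items()}

# Training data: correct sums for small numbers...
data = [(a, b, a + b) for a in range(4) for b in range(4)]
# ...plus the corrupted claim "2+2=5", hammered in until it dominates.
data += [(2, 2, 5)] * 10

memorizer = train_memorizer(data)

# The memorizer repeats the corruption, because that's what it saw most often.
print(memorizer[(2, 2)])                  # 5

# A rule that actually computes is unaffected by how often the lie was repeated.
print(2 + 2)                              # 4

# And the memorizer has nothing to offer for a pair it never saw.
print(memorizer.get((2, 7), "no idea"))   # "no idea"
```

The memorizer never notices that answering 5 for 2+2 contradicts everything else it “knows” about sums, which is the human-versus-AI gap the paragraph above describes: a rule generalizes and stays self-consistent, while stored patterns just echo whatever was drilled in.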