Of course :)
>> I don’t know if it’s been formally verified, but it’s pretty safe to bet that ChatGPT is Turing Complete. If so, than your statement is false. ChatGPT can emulate every computable problem.
I think that's unlikely. For Universal Turing Machine expressivity, a system needs something that can function as an unbounded Turing tape, and where's that in an LLM? ChatGPT famously has a limited input buffer and it doesn't even have a memory, as such (it forgets everything you tell it, which is why a user's input and its own answers have to be fed back to it continuously during a conversation or it loses the thread).
Besides, OpenAI themselves, while they have made some curious claims in the past (about GPT-3 calculating arithmetic), seem to have backtracked recently: nowadays if you ask ChatGPT to calculate the result of a computation, it replies with a sort-of-canned answer saying it's not a computer and can't compute. Sorry, I don't have a good example to hand; I've seen a few instances of this with Python programs and bash scripts.
Anyway, you could easily test whether ChatGPT (or any LLM) is computing: ask it to perform an expensive computation. For example, ask it to compute the Ackermann function for inputs 10,20, which would take astronomically longer than the age of the universe if it were actually performing the computation. Or ask it to generate the first 1 million digits of pi. It will probably come up with a nonsense answer or with one of its "canned" answers, so it should be obvious it's not computing the result of the computation you asked for.
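To give a sense of why this is a good test, here's a minimal sketch of the two-argument Ackermann function in Python (my own illustration, not anything ChatGPT runs). It grows faster than any primitive recursive function, so only tiny inputs are feasible:

```python
def ackermann(m, n):
    # Classic two-argument Ackermann function:
    # A(0, n) = n + 1
    # A(m, 0) = A(m - 1, 1)
    # A(m, n) = A(m - 1, A(m, n - 1))
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```

Even ackermann(4, 2) is a number with nearly 20,000 digits, and ackermann(10, 20) is far beyond anything physically computable, which is exactly why a plausible-looking instant "answer" from an LLM would give the game away.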
Btw, I think one could argue that a Transformer architecture is Turing-complete, in the sense that it could, in principle, be trained to simulate a UTM; I seem to remember there are similar results for Recurrent Neural Networks (which are, however, a different architecture). But a Transformer trained to generate text is trained to generate text, not to simulate a UTM.