You realize humans on call, ready to focus entirely on what I want, are expensive, right?
Having a "human-like" entity I can chat with about interesting little problems in math, economics, governance and ethics is really helpful.
I use the word "understanding" because it's so clear when it does understand and when it doesn't.
I am not implying it is conscious or aware. Simply that it has represented something in a robust enough way to be able to chat about it from different perspectives consistently.
Another helpful thing is getting pushback from the model when it thinks I am wrong. I have to explain myself better, or occasionally discover I am the one making a mistake. Beautiful!
The limit is the chat length. There is a real sense of accomplishment in explaining a problem to another entity until it understands, and then establishing some interesting results together. The day I get an entity whose memory accumulates all the details of all the problems I am (we are?) working on will be a GREAT day.