Why isn't GPT learning when it does the same?
It's not so much that they are raising an LLM to their own level, although that has obvious dangers, e.g. giving too much 'credibility' to the answers the LLM provides. What actually disturbs me is that they are lowering themselves (by implication) to the level of an LLM. Which is extremely nihilistic, in my view.
Why don't other forms of computer supremacy alarm you in the same way, anyway? Did it lower your humanity to recognize that there are certain data analysis tasks for which a conventional algorithm makes zero mistakes and finishes in a second? Does it lower the humanity of mathematicians working on the fluid equations to be using computer-assisted proof algorithms that output a flurry of gigabytes of incomprehensible symbolic math data?
Even when we know that, physically, that's all that's going on. Sure, the brain is many orders of magnitude denser and more connected than current LLMs, but it's only a matter of time and bits before they catch up.
Grab a book on neurology.
We either repeat like a parrot (think of kids who you thought understood something, only to discover later that they didn't),
or we create a model of abstraction (as ChatGPT does) and answer through it.