It makes one point: that LLMs are useful.
The other point is still suspect: that LLMs will ever scale to AGI, which here specifically means reliability and explainability in higher-order thinking.
The writing is on the wall that LLMs are going to automate failure-tolerant work. But the rub is that failure-tolerant work is also tolerant of cost-optimized LLMs that are less than state of the art.
Which leaves OpenAI where? AGI or bust.
And I wouldn't take that bet, not when MS, Google, and Apple are the alternatives.