So what people bother to write down is just a pale reflection of what has been; the reader has to rely on their own experience and imagination to recreate it. Take drawing, for example: you may read every book on the subject, but you still have to practice to properly internalize that knowledge. The same goes for music, or even pure science (the axioms you start from are grounded in reality).
I believe LLMs are great at extracting patterns from written text and other forms of notation. They may even be good at translating between them. But as any polyglot can attest, literal translation is often inadequate because many terms have no exact equivalent. Without experiencing the full semantic meaning of both, you'll always be at risk of being confusing.
With traditional software, we were the ones providing meaning so that different tools could interact with each other (when I click this icon, a page will be printed). LLMs are mostly translation machines with a thin veneer of syntax rules and relationships between terms, but no actual meaning, because of all the information they lack.
As another example, LLMs are kind of magical at what I'd call "bad memory spelunking". Is there a video game, book, or movie from your childhood of which you only have vague fragments, and which you'd like to rediscover? Format those fragments into a request for a list of candidates, and if your description contains just enough detail, you will activate that semantic understanding to uncover what you were looking for.
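If you want to try it programmatically rather than in a chat window, here is a minimal sketch of that workflow, assuming the official OpenAI Python client; the model name and the memory fragments are placeholders, and any chat-capable LLM (and any prompt phrasing) would work just as well.

    # Minimal sketch of "bad memory spelunking" with an LLM.
    # Assumes the OpenAI Python client and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    # Placeholder fragments of a half-remembered game.
    fragments = """
    - A side-scrolling game from the early 90s, I think on DOS
    - The main character was a blob that could absorb enemies
    - One level was set inside a giant clock
    """

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model should do
        messages=[
            {"role": "system",
             "content": "You identify half-remembered media from vague fragments."},
            {"role": "user",
             "content": "From these fragments, list the 5 most likely video games, "
                        "with a one-line reason for each:\n" + fragments},
        ],
    )

    # Print the ranked list of candidates.
    print(response.choices[0].message.content)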
I'd encourage you to check out 3blue1brown's LLM series for more on this!
I think it's true they lack a lot of information and understanding, and that they probably won't get better without more data, which we are running out of. That's sort of the point I was originally trying to make.