Depends on what your definition of a novel problem is. If it's some variation of a problem that has already appeared in some form in the training data, then yes, LLMs can solve it. But if you mean a truly novel problem—one that humans haven't solved in any form before (like the Millennium Problems, a cancer cure, or new physics theories)—then no, LLMs haven't solved a single one.
> And if it wasn't obvious that an LLM can string together two words it had never seen together in the training dataset, it really shows how people tend to oversimplify the extremely complex dynamics by which these models operate.
Well, for anyone who knows how latent space and attention work in transformer models, it's pretty obvious that two words never seen together can still be combined: tokens live in a shared embedding space, and attention mixes their representations by similarity rather than by memorized pairings. But I guess for someone who doesn't know the internals, this could look like magic or reasoning.
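To make that concrete, here's a toy sketch in plain Python (made-up 3-dimensional embeddings, not any real model's weights) of why attention over a shared embedding space lets "unseen" word pairs compose: the attention weights come from vector similarity, not from whether two tokens ever co-occurred in training.

```python
import math

# Hypothetical hand-made embeddings in a shared 3-d latent space.
# "purple" and "blue" point in similar directions; "giraffe" does not.
embeddings = {
    "purple":  [0.9, 0.1, 0.0],
    "giraffe": [0.1, 0.9, 0.2],
    "blue":    [0.8, 0.2, 0.1],
}

def attention(query, keys, values):
    """Scaled dot-product attention over lists of key/value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                      # subtract max for a stable softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    mixed = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(d)]
    return weights, mixed

words = list(embeddings)
vecs = [embeddings[w] for w in words]

# Query with "purple": it attends strongly to itself and to the nearby
# "blue" vector, even though no pairing like "purple giraffe" was ever
# stored anywhere. Novel combinations fall out of vector geometry.
weights, mixed = attention(embeddings["purple"], vecs, vecs)
for word, w in zip(words, weights):
    print(f"{word}: {w:.2f}")
```

The point of the sketch is that the softmax weights depend only on dot products between embeddings, so composing two representations that never co-occurred is the default behavior, not an exception.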