LLMs will eventually make a
lot of simpler machine-learning models obsolete. Imagine feeding a prompt akin to the one below to GPT-5, GPT-6, etc.:
prompt = f"""The guidelines for recommending products are: {guidelines}.
The following recommendations led to incremental sales: {sample_successes}.
The following recommendations had no measurable impact: {sample_failures}.
Please make product recommendations for these customers: {customer_histories}.
Write a short note explaining your decision for each recommendation."""
product_recommendations = LLM(prompt)
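For concreteness, here is a minimal runnable sketch of that pattern. The `call_llm` stub and all of the sample data are hypothetical stand-ins; a real version would send the prompt to a hosted model:

```python
# Minimal runnable sketch of the recommendation prompt above.
# `call_llm` and the sample data are hypothetical stand-ins.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a hosted model API here.
    return "Recommend hiking socks to customer 7: pairs well with prior camping purchases."

# Illustrative inputs, not from the original post.
guidelines = "Only recommend in-stock items; at most one item per customer."
sample_successes = "customer 17: hiking boots (+1 sale)"
sample_failures = "customer 42: umbrella (no measurable impact)"
customer_histories = "customer 7: bought tent, sleeping bag"

prompt = f"""The guidelines for recommending products are: {guidelines}.
The following recommendations led to incremental sales: {sample_successes}.
The following recommendations had no measurable impact: {sample_failures}.
Please make product recommendations for these customers: {customer_histories}.
Write a short note explaining your decision for each recommendation."""

product_recommendations = call_llm(prompt)
```

The appeal is that the "model" is just a prompt template plus examples of successes and failures, so changing the recommendation policy means editing prose, not retraining anything.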
To me, this kind of use of LLMs looks... inevitable, because it gives nontechnical execs something they have always wanted: the ability to "read and understand" the machine's "reasoning." There's growing evidence that LLMs can be prompted to write chain-of-thought explanations consistent with the instructions they're given. For example, take a look at the ReAct paper:
https://arxiv.org/abs/2210.03629 and some of the LangChain tutorials that use it, e.g.:
https://langchain.readthedocs.io/en/latest/modules/agents/ge... and
https://langchain.readthedocs.io/en/latest/modules/agents/im... . See also
https://news.ycombinator.com/item?id=35110998 .
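The ReAct pattern those links describe interleaves Thought/Action/Observation steps, with the model's "Thought" lines serving as the readable reasoning trace. A toy sketch of that loop, with a scripted stand-in for the model and a toy lookup tool (everything here is illustrative, not the paper's actual prompts):

```python
# Toy sketch of a ReAct-style Thought/Action/Observation loop.
# `fake_llm` and `lookup` are scripted stand-ins for a real model and tool.

def fake_llm(prompt: str) -> str:
    # Scripted responses standing in for a real model call.
    if "Observation: 42" in prompt:
        return "Thought: I have the answer.\nFinal Answer: 42"
    return "Thought: I should look this up.\nAction: lookup[meaning of life]"

def lookup(query: str) -> str:
    # Toy tool; a real agent might call a search API here.
    return "42"

def react(question: str, max_turns: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for _ in range(max_turns):
        step = fake_llm(prompt)
        prompt += step + "\n"  # the growing trace is the "explanation"
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        if "Action: lookup[" in step:
            query = step.split("Action: lookup[")[1].rstrip("]")
            prompt += f"Observation: {lookup(query)}\n"
    return "no answer"

answer = react("What is the meaning of life?")  # -> "42"
```

The full transcript accumulated in `prompt` is exactly the kind of artifact an exec could read line by line, which is the selling point over an opaque score from a classical recommender.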