"Training a model" is just a different name for "optimization".
The typical logistics optimization system is "smart" because it is optimizing exactly the objective it is supposed to optimize. An LLM would not be "smart" here, because it was optimized toward a different target (human-like production of language-like text), and using it for things it was not specifically trained to do is indeed stupid.
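The first sentence can be made concrete: a training loop is nothing more than an optimizer minimizing a loss function over parameters. A minimal sketch, with a toy one-parameter model (all names and data here are illustrative):

```python
def loss(w, data):
    # Mean squared error of the model y = w * x on (x, y) pairs.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w, data):
    # Derivative of the loss with respect to the single parameter w.
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

def train(data, lr=0.1, steps=100):
    # "Training" is just gradient descent on the loss.
    w = 0.0
    for _ in range(steps):
        w -= lr * grad(w, data)
    return w

data = [(1, 2), (2, 4), (3, 6)]  # generated by y = 2x
w = train(data)
print(round(w, 3))  # converges toward 2.0
```

Whether the result is "smart" depends entirely on whether the loss being minimized matches the task you actually care about.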