The problem is that the legal world is still undecided about the safety of public models.
Plus, businesses often need GenAI trained on their own IP (i.e. material sensitive to their business that they don't want in the public domain).
---
Point 1 will be decided over the next few years as creators take companies to court (or as "ethical AI" starts to displace the current models trained on unlicensed content).
Point 2 cannot be resolved without training your own models.
---
Let’s also not forget that LLMs are just one part of the GenAI movement. There’s audio and image generation too (plus video, though that’s largely an extension of image generation). In fact, it was image generation that I worked on.
And then you have other areas of AI outside the generative space too: from hundreds of different applications of image recognition, to sound processing, to searching for other kinds of bespoke patterns. These are all areas I’ve worked in too.
Often a GenAI product will require multiple different "AIs" to function, chained into a larger pipeline that appears to the customer as a single opaque box. And most of the models in that pipeline likely aren’t generative, let alone LLMs.
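To make that concrete, here is a minimal sketch of such a pipeline. Every stage name here (moderation, generation, upscaling) is invented for illustration, and each function is a stand-in for a real model; the point is just that several models compose behind one interface, and only one of them is generative.

```python
# Hypothetical GenAI product pipeline: several "AIs" chained behind one
# opaque interface. Stage names and behaviours are illustrative stand-ins
# for real models; only one stage is actually generative.

from dataclasses import dataclass, field

@dataclass
class Pipeline:
    # Each entry: (stage name, is_generative flag, callable)
    stages: list = field(default_factory=list)

    def stage(self, name, generative=False):
        """Decorator that registers a processing stage in order."""
        def register(fn):
            self.stages.append((name, generative, fn))
            return fn
        return register

    def run(self, payload):
        # The customer only ever calls this one method: an opaque box.
        for _name, _generative, fn in self.stages:
            payload = fn(payload)
        return payload

pipeline = Pipeline()

@pipeline.stage("moderation")  # a classifier, not generative
def moderate(prompt):
    banned = {"forbidden"}
    if any(word in prompt.lower() for word in banned):
        raise ValueError("prompt rejected by moderation model")
    return prompt

@pipeline.stage("generation", generative=True)  # the only generative model
def generate(prompt):
    return f"<image for: {prompt}>"

@pipeline.stage("upscaling")  # a super-resolution model, not generative
def upscale(image):
    return image.replace("<image", "<hi-res image")

if __name__ == "__main__":
    print(pipeline.run("a lighthouse at dusk"))
    generative = sum(1 for _n, g, _f in pipeline.stages if g)
    print(f"{generative} of {len(pipeline.stages)} stages are generative")
```

From the outside this is one product call; internally, two of the three models do recognition-style work around a single generative step.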