But that's what OpenAI's customers were supposed to do.
If a better AI comes along, the old AI is the first to lose its job.
As somebody who got to work adjacent to some of these things for a long time, I've been wondering about this. Are LLMs and transformers actually better than these "old" models, or is it more of an 80/20 thing where, for a lot less work on developers' part, LLMs can get 80% of the efficacy of those older models?
I ask because I worked for a company that had a related content engine back in 2008. It was a simple vector database with some bells and whistles. It didn't need a ton of compute, and GPUs certainly weren't what they are today, but it was pretty fast and worked pretty well too. Now it seems like you can get the same thing with a simple query but it takes a lot more coal to make it go. Is it better?
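For reference, the core of that kind of 2008-era "related content" engine can be sketched in a few lines: store each item as a vector and rank by cosine similarity. This is a minimal illustration, not the company's actual system; the item names and hand-made vectors below are hypothetical stand-ins for whatever features a real engine would compute from text.

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def related(query_vec, catalog, top_k=3):
    # Rank catalog items by similarity to the query vector.
    scored = sorted(catalog.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

# Hypothetical catalog: item name -> feature vector.
catalog = {
    "gpu-benchmarks": [3, 0, 1],
    "vector-search":  [1, 4, 0],
    "cloud-pricing":  [0, 1, 3],
}

print(related([1, 4, 1], catalog, top_k=2))
# → ['vector-search', 'cloud-pricing']
```

No GPUs involved: a brute-force scan like this is linear in catalog size, which is why such systems were cheap to run compared with an LLM answering the same "what's related?" question.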
Nonetheless, the fact that you can just tweak the prompt a bit to instruct the model to do what you want makes everything much faster.
Yes, the trade-off is that you need GPUs to run it, but that's why we have the cloud.
"OpenAI is losing its job to open AI."
OpenAI's $200 closed AI upended by a hedge fund's free side project.
Quant geeks outcompete overpaid Silicon Valley devs, etc.
Basically, hubris gets its comeuppance, a David vs. Goliath biblical archetype, which is why this drama grips all of us.
That said, I feel like "quant geeks" aren't quite underdogs compared to Silicon Valley devs. wdyt?
https://youtu.be/NUhrF0xkhhc?si=1WHWYZrhRmfOYO_y&t=1150 (it's about 2 minutes)