1- OpenAI is still working on GPT-4-level models, more than 14 months after the launch of GPT-4 and after more than $10B in capital raised.
2- The pace at which token prices are collapsing is bizarre. Now a (bit) better model for 50% of the price. How can people seriously expect these foundation model companies to make substantial revenue? Token volume needs to double just for revenue to stand still. Since the GPT-4 launch, token prices have been falling 84% per year!! Good for mankind, but crazy for these companies.
3- Maybe I am an asshole, but where are my agents? I mean, good for the consumer use case. Let's hope the rumors that Apple is deploying ChatGPT with Siri are true; these features will help a lot. But I wanted agents!
4- This drop in costs is good for the environment! No reason to expect it to stop here.
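A quick back-of-envelope check on that revenue math (the 50% and 84% figures come from the comment above; the rest is my own toy arithmetic, not anyone's real numbers):

```python
# Revenue = price * volume, so holding revenue flat requires volume
# growth equal to the inverse of the remaining price fraction.

# Case from the comment: price cut to 50% -> volume must double.
remaining_after_cut = 0.50
print(1 / remaining_after_cut)  # 2.0

# Claimed trend: prices falling 84% per year -> 16% of the price remains,
# so volume must grow ~6.25x per year just for revenue to stand still.
remaining_after_year = 1 - 0.84
print(round(1 / remaining_after_year, 2))  # 6.25
```

That 6.25x-per-year treadmill is the core of the bear case stated above.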
Especially since this demo is extremely impressive given the voice capabilities, yet the reaction is still, essentially, "But what about AGI??!!" Seriously, take a breather. Never before in my entire career have I seen technology advance at such a breakneck speed - don't forget transformers were only invented 7 years ago. So yes, there will be some ups and downs, but I couldn't help but laugh at the thought that "14 months" is seen as a long time...
You can be excited, as I am, while also being bearish, as I am.
1. Railroad companies in the second half of the 19th century.
2. Car companies in the early 20th century.
3. Telecom companies and investment in the 90s and early 2000s.
Be impatient. It's a positive feeling, not a negative one. Be disappointed with the current progress; it's the biggest thing keeping progress moving forward. It also, if nothing else, helps communicate to OpenAI whether they're moving in the right direction.
No it isn't - excitement for the future is the biggest thing keeping progress moving forward. We didn't go to the moon because people were frustrated by the lack of progress in getting off of our planet, nor did we get electric cars because people were disappointed with ICE vehicles.
Complacency regarding the current state of things can certainly slow or block progress, but impatience isn't what drives forward the things that matter.
It's like the people in this community all suffer from a complete disconnect from society and normal human needs/wants/demands.
GPT and all the other chatbots are still absolutely magic. The idea that I can get a computer to create a fully functional app is insane.
Will this app make me millions and run a business? Probably not. Does it do what I want it to do? Mostly yes.
Humanity in a nutshell.
I spent the last two years dismayed with the reaction, but I’ve just recently begun to realize this is a feature, not a flaw. This is latent demand for the next iteration, expressed as impatient dissatisfaction with the current rate of change, inducing a faster rate of change. Welcome to the future you were promised.
The only time people weren’t displeased was when internet speeds were climbing from 15 Mbps to 100 Mbps.
You will keep being dismayed! People only like good things, not good things that potentially make them obsolete
I'm pretty skeptical about all the whole LLM/AI hype, but I also believe that the market is still relatively untapped. I'm sure Apple switching Siri to an LLM would ~double token usage.
A few products rushed out thin wrappers on top of ChatGPT, developing pretty uninspiring chatbots of limited use. I think there's still huge potential for LLM technology to be 'just' an implementation detail of other features, running in the background doing its thing.
That said, I don't think OpenAI has much of a moat here. They were first, but there's plenty of others with closed or open models.
Profits are the real metric. Token volume doesn't need to double for profits to stand still if operational costs go down too.
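To illustrate that point with numbers I made up (not anyone's real unit economics): profit per token is price minus serving cost, so if costs fall in step with prices, the margin - and thus profit at constant volume - survives a price cut:

```python
# Hypothetical $/1M-token price and serving cost, before and after a cut.
price_before, cost_before = 2.00, 1.80   # margin: $0.20 per 1M tokens
price_after,  cost_after  = 1.00, 0.80   # price halved, cost down ~56%

margin_before = price_before - cost_before
margin_after = price_after - cost_after

# Margin per token is unchanged, so profit at the same volume is unchanged:
# no doubling of token volume needed for profit to stand still.
print(round(margin_before, 2), round(margin_after, 2))  # 0.2 0.2
```

The catch, of course, is that serving costs have to fall at least as fast as prices in absolute terms for this to work.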
So despite all the effort and cost that goes into these models, you still have to compete against a “free” offering.
Meta doesn’t sell an API, but they can make it harder for everybody else to make money on it.
Whether or not that's actually enforceable[0], and whether or not other companies will actually challenge Facebook legal over it, is a different question.
[0] AI might not be copyrightable. Under US law, copyright only accrues in creative works. The weights of an AI model are a compressed representation of training data. Compressing something isn't a creative process so it creates no additional copyright; so the only way one can gain ownership of the model weights is to own the training data that gets put into them. And most if not all AI companies are not making their own training data...
No, the license prohibits usage by Licensees who already had >700m MAUs on the day of Llama 3's release [0]. There's no hook to stop a company from growing into that size using Llama 3 as a base.
My take on this common question is that we haven't even begun to realize the immense scale at which we will need AI in all sorts of products, from consumer to enterprise. We will look back on the cost of tokens now (even at 50% of the price a year or so ago) with the same bewilderment as "having a computer in your pocket" compared to mainframes from 50 years ago.
For AI to be truly useful at the consumer level, we'll need specialized mobile hardware that operates on a far greater scale of tokens and speed than anything we're seeing/trying now.
Think "always-on AI" rather than "on-demand".
The revenue will likely come from the application layer and platform services. ChatGPT is still much better tuned for conversation than anything else in my subjective experience, and I’m paying a premium because of that.
Alternatively it could be like search - where between having a slightly better model and getting Apple to make you the default, there’s an ad market to be tapped.
Imagine you are in the 1970s saying computers suck, they are expensive, there are not that many use cases... fast forward to the 90s and you are using Windows 95 with a GUI, on a chip astronomically more powerful than anything we had in the 70s, and you can use productivity apps, play video games, and surf the Internet.
Give AI time; it will fulfill its true potential sooner or later.
What I am saying is that computers are SO GOOD that AI is getting VERY CHEAP and the amount of computing capex being done is excessive.
It's more like you are in 1999: people are spending $100B on fiber, while a lot of computer scientists are working on compression, multiplexing, etc.
But nobody knows what's around the corner or what the future brings... for example, back in the day Excite didn't want to buy Google for $1m because they thought that was a lot of money. You need to spend money to make money, and yeah, sometimes you need to spend a lot of money on "crazy" projects because it can pay off big time.
Where they might build future businesses is in the tooling. My understanding from friends within these companies is that their tooling is remarkably advanced vs. generally available tech. But base models aren’t the future of revenues (to be clear, they make considerable revenue today, but at some point their efficiency will cannibalize demand and the residual business will be tools).
The message to competitor investors is that they will not make their money back.
OpenAI has the lead, in market and mindshare; it just has to keep it.
Competitors should realize they're better served by working with OpenAI than by trying to replace it - Hence the Apple deal.
Soon model construction itself will not be about public architectures or access to CPUs, but a kind of proprietary black magic. No one will pay for an upstart's 97% when they can get a reliable 98% at the same price, so OpenAI's position will be secure.
* Summarisation
* Smart filtering
* Smart automatic drafting of replies
Very much in beta, and summarisation is still behind feature flag, but feel free to give it a try.
By summarisation here I mean getting one email with all your unread emails summarised.
GPT-3: June 2020
GPT-3.5: November 2022
GPT-4: March 2023
There were 3 years between GPT-3 and GPT-4!
But there's a night and day difference post-Nov '22 vs. before - both in the AI race it sparked and in the funding all AI labs now have.
If you're expecting GPT-5 by 2026, that's ok. Just very weird to me.
This may or may not be true - just because we haven't seen GPT-5-level capabilities does not mean they don't yet exist. It is highly unlikely that what they ship is actually the full capability of what they have access to.
The graph looks like an exponential and is still increasing.
Every exponential is a sigmoid in disguise, but I don’t think there has been enough time to say the curve has flattened.
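A tiny sketch of that point, with a made-up growth rate and capacity: a logistic (sigmoid) curve is numerically almost identical to a pure exponential long before it approaches its ceiling, which is why early data can't tell the two apart.

```python
import math

def exponential(t, r=1.0):
    # pure exponential growth starting from 1
    return math.exp(r * t)

def logistic(t, r=1.0, K=1e6):
    # logistic growth from 1 toward capacity K (r and K are assumptions)
    return K / (1 + (K - 1) * math.exp(-r * t))

for t in (0, 5, 10):
    e, s = exponential(t), logistic(t)
    print(t, round(e, 1), round(s, 1), f"{abs(e - s) / e:.1%}")
# Early on the two curves agree to within a couple of percent;
# only as growth nears the capacity K does the sigmoid visibly flatten.
```

So "the graph still looks exponential" is consistent with both stories; the flattening only becomes visible in hindsight.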
1- The mania only started post-Nov '22, and the huge investments since then haven't meant substantial progress since the GPT-4 launch in March '23.
2- We are running out of high-quality tokens in 2024 (per Epoch AI).
for another adjacent example, every piece of code github copilot ever wrote is microsoft ai output, which you "can't use to develop / otherwise improve ai," some nonsense like that.
the sum total of these various prohibitions is a data provenance nightmare of extreme proportion we cannot afford to ignore: you could say something to an AI, they parrot it right back to you, and suddenly the megacorporation can claim that's AI output you can't use in competition with them - and they do everything, so what can you do?
answer: cancel your openai sub and shred everything you ever got from them, even if it was awesome or revolutionary, that's the truth here, you don't want their stuff and you don't want them to have your stuff. think about the multi-decade economics of it all and realize "customer noncompete" is never gonna be OK in the long run (highway to corpo hell imho)