In particular, GPT-2 to GPT-4 spans an increase from 'well-read toddler' to 'average high school student' in just a few years, while the computational cost of training a model of any given capability has fallen just as quickly.
Also worth noting: the article claims Stripe, another huge money-raiser, had an obviously useful product. gdb, sometime CTO of Stripe and its fourth employee, is now president of OpenAI. And, most of all, the author doesn't remember how non-obvious Stripe's utility was in its early days, even in the tech scene: there were established ways to take people's money, and it wasn't clear why Stripe had an offering worth switching to.
For an alternate take, I think https://situational-awareness.ai provides a well-reasoned argument for the current status of AI innovation and growth rate, and addresses all of the points here in a general (though not OpenAI-specific) way.
GPT-4 was released 16+ months ago. In that time OpenAI made a cheaper model (which it teased extensively, and which the media was sure was GPT-5), and its competitors caught up but have not yet exceeded them. OpenAI is now saying that GPT-5 is in progress, but we don't know what it looks like yet and they're not making any promises.
What I'm seeing right now suggests that we're in the optimization stage of the tech as it is currently architected. I expect it to get cheaper and to be used more widely, but barring another breakthrough on the same order as transformers I don't expect it to see the kind of substantial gains in abilities we've hitherto been seeing. If I'm right, OpenAI will quickly be just one of many dealers in commodity tech.
I don't really know anything about business, but something else I've wondered is this: if LLM scaling/progress really is exponential, and the juice is worth the squeeze, why is OpenAI investing significantly in everything that's not GPT-5? Wouldn't exponential growth imply that the opportunity cost of investing in something like Sora makes little sense?
The progress we’ve seen to date was powered by the ambitions and belief of NVIDIA and LLM companies.
Now, it’s head-to-head competition. It is way too early to call an impending slowdown.
Given how NVIDIA and Meta are leaning in on OSS, the next 18 months are going to be very interesting.
Even if fundamental progress slows, there are many, many secondary problems to solve in using the capabilities we have today that are rapidly improving. As someone deploying AI in business use cases daily, we are just now getting started.
I’d look to when NVIDIA starts to slow down on hardware as an early indicator of a plateau.
AWS is a commodity. When your commodity is compute, there’s a very large amount of growth available.
I have to push back on this. Anybody who had built for B2B credit card acceptance on the Web prior to Stripe's founding knew immediately what a big deal it was. For starters, they let you get up and running the same day. Second, no credit check (and associated delays). Third, their API made sense (as compared to popular legacy providers like Authorize.net) and was easy to integrate using an open source client. Fourth, self-service near-real-time provisioning. Their value proposition was immediately obvious, and they nailed all of these points in their first home page[1].
By contrast, Fee Fighters[2] was innovative for the time but still required me to fax a credit application to them. They got me up and running faster than the legacy provider, which is to say about a week. And I think I only had to talk on the phone with them once or twice. I remember really liking Fee Fighters, but Stripe was in a class of its own.
Stripe was a hit because they promised to solve hard problems that nobody else did, and then they did exactly that. (You still don't have to talk to a rep or do a personal credit check to start using Stripe!)
1 - https://www.quora.com/What-did-the-first-version-of-Stripe-l...
2 - https://www.bloomberg.com/news/features/2017-08-01/how-two-b...
This "breakthrough" is often touted as AGI or something similar to it which to me is even more risky than a nuclear fusion startup as:
1. Fusion has had some recent breakthroughs that could result in a commercially viable reactor eventually.
2. Fusion has a fundamentally sound theoretical basis unlike producing AGI (or something like it).
That wasn't true at all. Stripe was a product people rushed to pay for because of just how good and useful it was. It was an example of a successful MVP that people wanted to pay to use, and profitability was not a problem.
The same can't be said for OpenAI. We don't know how long it can stay in the red. Maybe it can survive. Maybe its money will run dry first. We can't be sure at the current stage.
Stripe had high variable costs (staff, COGS of pass-through processing fees) but low fixed costs. OpenAI has enormous fixed (pre-revenue!) costs alongside high variable costs (staff of AI engineers, inference).
Financially, OpenAI looks more like one of the EV startups like Tesla or Rivian than it does a company like Stripe. And where Stripe was competing with relatively stodgy financial institutions, OpenAI is competing with the very biggest, richest companies in the world.
Instead, I wanted to show people the terms under which OpenAI survives, and how onerous said terms were. It's deeply concerning - and I do not think that's overstating it! - how much money they may be burning, and how much money they will need to survive.
I also think it's a leap of logic to suggest that the former CTO of Stripe joining is somehow the fix they need, or proof they're going to accelerate.
Also, I fundamentally disagree - Stripe was an obvious business. Explaining what Stripe did wasn't difficult. The established ways of taking money were extremely clunky - perhaps there was RELUCTANCE to change, which is a totally fair thing to bring up, but that doesn't mean it wasn't obvious if you thought about it. What's so obvious about GPT? What's the magic trick here?
Anyway, again, thanks for reading, I know you don't necessarily agree, but you've given me a fair read.
You have a small point that anyone who used authorize.net or similar wanted it to be better, and that was obvious, but there are nearly infinite things people want to be better. I'd like breakfast, my commute, my car, my doctor, my vet, etc. to be better. That you could make a better thing was incredibly non-obvious, and that's why no one did.
I am bearish on AI because humans, even outsourced ones, are quite nimble and capable. If you only want the AI to operate in a box, then you can probably code the decision tree of the box with more specificity and accuracy than a fuzzy AI can provide.
It's a very useful tool; I'm skeptical, however, about how it can disrupt things economy-wide. I think it can do some things very well, but the value to the market and businesses versus the cost of training and adapting it to the business need is quite suspect, at least for this cycle. I think this is one of those "wait 10 years" situations, and many AI companies will die within 1 to 3 years.
It won't disrupt much because we already had "AGI" of a sort. The internet itself, with billions of people and trillions of pieces of text and media, is like a generative model. Instead of generating, you search. Instead of LLMs, you chat with real people. Instead of Copilot we had StackOverflow and Github. All the knowledge LLMs have has been on search engines and social networks, with a few extra steps, for 20 years.
Computers have also gotten a million times faster and more networked. We have automated in software all that we could, and we have millions of tools at our disposal, most of them open source. Where did all that productivity go? Why is unemployment so low? The amount of automation already achieved in code is non-trivial; what can AI do that's dramatically more than so many human devs put together? Automation in factories is already old; new automation needs to raise the bar.
It seems to me AI will only bring incremental change, an evolution rather than a revolution. AI operates like "internet in a box", not something radically new. My yet-unrealized hope is that, by assisting hundreds of millions of users, LLMs will accumulate some kind of wisdom, and share that wisdom back at an accelerated speed. An automated open-sourcing of problem-solving expertise.
I've been saying this for the past couple of years. Yes AI is cool, but we already have computers and computer programs. Things that can be solved algorithmically, SHOULD be solved algorithmically. Because you WANT your business rules and logic to be as predictable and reliable as possible. You want to lessen liability, complexity, and amount of possible outcomes.
We already even see this with human customer support. They follow a script and flowchart. They're just glorified algorithms. Despite being human, they're actively told to not be creative, not think, and act as a computer. Because, as it turns out, from a business perspective that's usually very advantageous (where you can do it).
AI would never, or should never, replace those types of tasks.
Logarithmic: the capabilities increase with log(cost); what grows exponentially is the compute used over time.
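Spelled out as a quick sketch (the constants a, b, k, C_0 here are illustrative assumptions, not measured values):

```latex
% capability as a function of compute, and compute spend as a function of time
K(C) \approx a + b \log C, \qquad C(t) = C_0 \, e^{k t}
% substituting the second into the first:
\Rightarrow \quad K(t) \approx (a + b \log C_0) + b k \, t
```

So even with exponentially growing spend, measured capability advances only linearly in calendar time.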
I think you are misremembering. Stripe was a _big deal_. They had a curl call on their home page for a while for how to take a payment IIRC. It was like how Twilio opened the door for anyone to send SMS, Stripe made it stupid-easy to handle payments online. Nothing else at the time compared in terms of simplicity and clearly defined fees.
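For flavor, here's roughly what that integration looked like, sketched with Stripe's Python client (from memory, not the exact homepage snippet; the key and token are placeholders):

```python
import stripe  # pip install stripe

stripe.api_key = "sk_test_..."  # test-mode secret key, issued at signup

# One call to charge a card -- no merchant account, no credit check, no fax.
charge = stripe.Charge.create(
    amount=400,         # amount in cents ($4.00)
    currency="usd",
    source="tok_visa",  # a tokenized test card
    description="Example charge",
)
print(charge.status)    # "succeeded"
```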
GPT-2 was indeed a much smaller and weaker model. But the question is whether we got an "exponential" boost after GPT-3, or just marginal gains while competition commoditized this vertical.
Hint: it's not correct. It's nothing like exponential. It's not even order-of-magnitude stuff. It's tiny increments, to a system which is fundamentally a bit of a dead end.
What does it even mean for a value whose scale is not defined to be logarithmic, quadratic, or exponential?
You guys are treading water about nothing.
WeWork went bankrupt. Uber briefly made money but is losing it again, and is nowhere near paying back its investors. Tesla has become a major luxury car company, and is somewhat profitable, but the stock is way overpriced for a car company. Everybody now makes electric cars, so this is a low-margin business. (Reuters: "Tesla's bleak margins sink shares as Musk hypes everything but cars.")
OpenAI, as a business, is assuming both that LLM-type AI will get much better very fast, and that everybody else won't be able to do what they do. It's unlikely that both of those assumptions hold. Look at autonomous vehicles. First tech demos (CMU) in the 1980s. First reasonably decent demos (DARPA Grand Challenge) in the 2000s. First successful deployment in the 2020s (Waymo, maybe Cruise and Zoox). Still not profitable. 40 years from first demos to deployment, probably 50 to profitability. It's entirely possible that OpenAI's business will look like that. Their burn rate is way too high to sustain for that long.
Often it takes that long, even when the basics have been figured out. Xerography was first demoed in the late 1930s. The demo machine used to be in the lobby at Xerox PARC. Profitability came in the 1960s. By the late 1970s, everybody had the technology, and it was low-margin. Electronic digital computing goes back to IBM's 1940s pre-WWII electronic multiplier experiments, but didn't come down from insanely expensive price levels until the 1980s. Memory was a million dollars a megabyte as late as the mid-1970s. Color television was first demoed in 1928, and the first color CRT was developed in the 1940s. But mainstream adoption didn't come until 1966-1967.
So what? No one has to participate in the "rationalist" subculture's weird practices. It means nothing to refuse to take a bet like that, let alone that the claims made in the article are suspect (which you seem to be implying).
[I can't actually read anything beyond the tweet you linked because twitter is stupid].
True!
> It means nothing to refuse to take a bet like that, let alone that the claims made in the article are suspect (which you seem to be implying).
False! If the author was sufficiently confident in their claims, they'd be happy to take the free money (or, if they're sufficiently liquidity-constrained, propose a smaller bet at similar terms). You can certainly argue that the practice of betting on one's beliefs is "weird" but that objection is circular. If you claim to see free money on the ground, and other people notice that you aren't picking it up, they would be correct to wonder why.
Meta's open source LLM stance makes things spicier, making it challenging for anyone to generate differentiated and lasting profit in the LLM space.
At the current pace, the LLM bubble is poised to pop in a year or two - negative net revenue can't keep growing forever - barring a transformative, next-generation capability from closed-source AI companies that Meta can't replicate. All eyes on GPT-5.
The post says:
> The supply shortage has subsided: Late 2023 was the peak of the GPU supply shortage. Startups were calling VCs, calling anyone that would talk to them, asking for help getting access to GPUs. Today, that concern has been almost entirely eliminated. For most people I speak with, it’s relatively easy to get GPUs now with reasonable lead times.
But a couple of days ago I heard from a startup founder that the usual cloud credits (~$100k in cloud compute) that AWS provides to vetted startups that pass certain milestones have recently been barred from being used on GPU-powered instances.
Like they already did in the last 2 years?
> Have such a significant technological breakthrough that GPT is able to take on entirely unseen new use cases, ones that are not currently possible or hypothesized as possible by any artificial intelligence researchers.
Huh, what are these use cases which no AI researcher thinks AI is capable of solving? Does the author not realize that many employees at the leading AI labs (including OpenAI) are explicitly trying to build ASI? I am so confused???
> Have these use cases be ones that are capable of both creating new jobs and entirely automating existing ones in such a way that it will validate the massive capital expenditures and infrastructural investment necessary to continue.
Why would they have to create new jobs? They just have to be good enough that OpenAI can charge enough money for them to be in the green.
OpenAI already has a $3.4 billion ARR! Most of that is _not_ enterprise sales.
AT&T.
And I'm guessing Uncle Sam will want control of these AI companies anyway if AI starts looking even a little powerful/threatening.
That's a weird comparison. The goal of the space race was never to make money, and it was funded by the taxpayers, not VCs expecting a return on their investment.
I doubt that Uncle Sam cares that much about controlling OpenAI.
It just burned a ridiculous amount of money stringing lines.
> I am neither an engineer nor an economist.
clearly.
It's more plausible for me that we will see a notable productivity increase in a lot of sectors of the economy over the next decade. Part of me wonders if this is an additional reason why the Russell 2000 has been spiking lately (investors concluding that there is more money to be made from the general productivity increases in the wider economy than the tech companies providing the LLMs that don't seem to possess any monopolies on the technology), but this is just my speculation.
"Mass market utility" here refers to its ability to sell at the scale it would need to substantiate its costs. As it stands, LLMs do not have mass-market utility at the scale that they need to substantiate their costs. It is really that simple. If they did, these companies would be profitable, and they would be having a meaningful effect on productivity, which they are not.
See page 4 of this report from Daron Acemoglu of MIT: https://www.goldmansachs.com/images/migrated/insights/pages/...
So why should we listen to him? ChatGPT has saved me a lot of time. Luckily it's well trained on AWS APIs, the SDK, CDK, Terraform, and Kubernetes. Anything it isn't trained on, I just give it links to the documentation.
The "killer app", as far as I can tell, is essentially natural language search. However, the core function (in my opinion) has existed since DuckDuckGo added contextual infoboxes to the right of search results ("knowledge panels"), and the benefit of using natural language has existed since Siri and has never seemed to add much to the experience for me. AI image generators seems to be used mostly by youtube creators and spammers. The main users of AI language generation seem to be spammers and crappy content farms on TikTok.
Commercials for AI products ALWAYS lie by speeding up the time it takes for results to arrive, and the most impressive demos always seem to end up as some version of the same useless feature: "what am I looking at right now?" Who needs that? AI-assisted coding also seems to have a similar issue. Demos that supposedly show off the technology never actually use it to create the kind of code that is actually worth money.
I'd be happy to be proven wrong here, but I keep looking and I never find that killer app.
Outbound sales is being automated.
Lots of very hum-drum stuff. Ordering room service at a hotel for example.
Tons of data entry jobs are now gone.
LLMs are already better at humans from a cost perspective for many tasks.
Anyone who is sight impaired, or doesn’t have their glasses on while reading a menu, or looking at a sign in another language.
It’s clearly not in final form. In 1978, people couldn’t see the use of home computers. In 1988, most people were saying the same thing about email. In 1998, most people were saying the same thing about the internet.
It might not prove out, but evaluating something super early isn’t all that interesting. Let’s see where we are in 2030 at least.
Compare it to Google search back in the 90s: there weren't all these evangelists saying "in 5-10 years blah blah blah." We just used it, in our daily lives. I don't use AI at all except at work. And I couldn't even tell you what product it's going toward, because I don't think 99% of employees know. Why we don't know, who knows!
But AI code generation is garbage, just like AI text generation is. AI-generated books, songs, poetry, etc. are just appalling rubbish that nobody wants to read or hear. AI-generated art is ugly and generic, and stands out immediately as zero-effort and near-zero cost. Except, of course, it cost a lot of money to make; nobody is willing to pay for it, and it's so far been bankrolled by VC capital.
If someone had come out with a copy of Google in 2000, we'd be looking at a much different picture.
What does reducing costs by "a factor of thousands of percent" mean? It starts printing money? It costs 1/10 as much?
This line is absurd. I use it constantly. 4o reads my code and generates documentation and type annotations. It generates boilerplate code. It generates logos for projects. I review all of its outputs code wise and make the odd correction here or there. I use it to check over documents before I send them. It’s replaced Stack Overflow entirely in my workflow.
I’m curious as to what’s above the author’s line for revolutionary.
Who will continue to feed it new information? The answer is, nobody will. And instead of having a community driven knowledge base, you'll have a bankrupt corporation and nowhere to turn to when you discover that you can't write code anymore.
None of the tools can exist without stealing the net sum of human knowledge - insofar as that is represented by the contents of the internet - for corporate profit. If what the AI proponents claim comes true, that source of knowledge will cease to exist. And what then?
So I believe there are many avenues for further improvement. I still think it will be very hard though, and until we hit the next breakthrough, will be very resource intensive and possibly quite slow going.
I still write a lot of code without needing to use an LLM. It's just a tool that helps me be more productive. I don't see how what it is doing is any different from Google just displaying the answer instead of links to the answer.
OpenAI has raised $11.3bn (source: the article).
Since partnering with Microsoft in 2019, Microsoft's valuation has gone from $0.7tn to $3.1tn, an increase of $2.4tn, a lot of that on AI enthusiasm.
Microsoft can sell some shares to fund OpenAI, 2.4tn being about 200x what they've put in.
Sure the market bubble will pop at some stage but not by 200x. I'm skeptical of the they can't survive argument.
Also, I recall in the early days of Facebook, Google, and Amazon people saying they lose money each year, the first two didn't have a monetization model, how will they get by? But of course they ended up among the world's most profitable companies. With AI, too, you have to think a few years down the road, when ASI's output may exceed current global GDP ($100tn or so).
"I ultimately believe that OpenAI in its current form is untenable."
Followed by a bunch of reasons why. Later they write:
"What I am not saying is that OpenAI will for sure collapse, or that generative AI will definitively fail"
What? Didn't they just explain 100 different reasons why they think OpenAI will fail? There was also this:
"To be clear, this piece is focused on OpenAI rather than Generative AI as a technology — though I believe OpenAI's continued existence is necessary to keep companies interested/invested in the industry at all."
To be clear? So they are trying to separate OpenAI from gen AI. Then they throw in a hyphen and say, oh, but without OpenAI, companies would stop spending time and money on gen AI. Ok, thank you for the... clarification.
I stopped reading after that.
"GPT-4o Mini (OpenAI's "cheaper" model) already beaten in price by Anthropic's Claude Haiku model"
GPT-4o Mini is presently cheaper than Claude 3 Haiku.
Also, I'd like to point out that the total investment into OpenAI, which they position as untenable, is less than what AT&T has spent on its network investments every single year for at least the last 10 years. It isn't like companies don't invest billions of dollars into things.
I do not THINK Microsoft will put that much money into it. They could! It isn't impossible. But it would be totally unprecedented.
1. I do not know of any article that said that Google was "DOA" or "done" as a result of the choice of a search engine as a business model. In fact, search engines were an already-established industry at the time. If I'm wrong, I'd love to read it, as I imagine it's a fascinating historical document - even if it was horribly wrong!
2. OpenAI's business model and Google Search's business models are totally different. Apples and oranges. The way that OpenAI monetizes, the technology it uses to both deliver a service AND monetize it, the technology stack, the scaling, even the tech they acquire to build it, just totally different.
Again, if you can find an article that had someone in the 90s or 2000s saying "Google is DOA! Search is stupid!" then I'd really really love to read it, genuinely.
Maybe, but based on the egregious errors the author has made in previous articles, they probably don't have the ability to understand or reason about any of the data they read. Also note that despite what's implied by this statement, most of this article is not sourced, it's just the opinions of the author who admits they have no qualifications.
I didn't read the entire gish gallop, but spot-checked a few paragraphs here and there. It's just the kind of innumerate tripe that you should expect from Zitron based on their past performance.
> Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.
You can't reduce the cost of anything by more than 100%. At that point it's free.
But let's consider the author's own numbers: $4B in revenue, $4B in serving costs, $3B in training costs, $1.5B in payroll. To break even at the current revenue, OpenAI need to cut their serving costs and training costs by about 66% ($1.3B+$1B+$1.5B<$4B), not by "thousands of percent".
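A quick sanity check on that arithmetic, using the author's own figures and treating payroll as fixed (a sketch, not a real cost model):

```python
revenue, serving, training, payroll = 4.0, 4.0, 3.0, 1.5  # $B, per the article

# Break-even requires: (serving + training) * (1 - cut) + payroll <= revenue
cut = 1 - (revenue - payroll) / (serving + training)
print(f"required cut to serving + training: {cut:.0%}")  # ~64% -- large, but nowhere near "thousands of percent"
```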
> As a result, OpenAI's revenue might climb, but it's likely going to climb by reducing the cost of its services rather than its own operating costs.
... Sorry, what?
Reducing operating costs does not increase revenue. And I don't know how the author thinks that reducing cost of services would not reduce operating costs.
> OpenAI's only real options are to reduce costs or the price of its offerings. It has not succeeded in reducing costs so far, and reducing prices would only increase costs.
Reducing prices does not increase costs.
> I see no signs that the transformer-based architecture can do significantly more than it currently does.
So, here's a prime example of the author basing the "analysis" on them personally "seeing no signs" of something they have no expertise to evaluate. There's no source for this claim, and it's pretty crucial for their conclusions that transformers have hit a wall.
> While there may be ways to reduce the costs of transformer-based models, the level of cost-reduction would be unprecedented,
But for a given quality of model, haven't the inference costs already gone down by like 90% this year?
> particularly from companies like Google, which saw its emissions increase by 48% in the last five years thanks to AI.
It should be pretty obvious to somebody who can read publicly available data that all of the increase over 5 years can't be attributed to AI.
"Egregious errors in previous articles" is not a valid argument against current arguments, nor do I agree there were those errors. Nevertheless, we're discussing one particular article today!
"I didn't read the entire gish gallop, but spot-checked a few paragraphs here and there. It's just the kind of innumerate tripe that you should expect from Zitron based on their past performance."
Well that's not very nice! It also means that your argument is made on incomplete data.
"... Sorry, what?
Reducing operating costs does not increase revenue. And I don't know how the author thinks that reducing cost of services would not reduce operating costs."
I'm afraid you misread what I said, likely because you (and I quote) "spot-checked a few paragraphs."
One of the problems OpenAI has is that their cost of revenue - and we don't know it to be exact - is extremely high, higher than the revenue they're actually gaining, otherwise known as an "operating loss." As a result, even if they increase revenue, they'll actually lose more money. On top of that, the argument I was making is that if there's a race to the bottom (one that's already started), they will have to cut costs, making them less money even if they get more customers.
"Reducing prices does not increase costs." Does reducing prices reduce operating expenses? Because if it doesn't, it actually does increase costs, because you're taking home less cash for the same cost. It could be that 4oMini is somehow more efficient - i can find no evidence that that's the case, and if it exists, I will happily update my article.
"So, here's a prime example of the author basing the "analysis" on them personally "seeing no signs" of something they have no expertise to evaluate. There's no source for this claim, and it's pretty crucial for their conclusions that transformers have hit a wall."
I can find no examples of radically-different functionality in GPT or other mass-market transformer-based models. In the event I am wrong, I would be fascinated to read about them, but I would need to understand A) how these functionalities are different and B) how they can be productized. After that, I'd need to understand how this would be profitable, and in turn how this would scale into something truly world-changing.
"But for a given quality of model, haven't the inference costs already gone down by like 90% this year?" Have they?
"It should be pretty obvious to somebody who can read publicly available data that all of the increase over 5 years can't be attributed to AI."
I too read publicly-available data, and my source in this case is "Google."
Forgive the messy copy-paste. https://www.gstatic.com/gumdrop/sustainability/google-2024-e...
In 2023, our total GHG emissions were 14.3 million tCO2e, representing a 13% year-over-year increase and a 48% increase compared to our 2019 target base year. This result was primarily due to increases in data center energy consumption and supply chain emissions. As we further integrate AI into our products, reducing emissions may be challenging due to increasing energy demands from the greater intensity of AI compute, and the emissions associated with the expected increases in our technical infrastructure investment.
> I'm afraid you misread what I said, likely because you (and I quote) "spot-checked a few paragraphs."
I quoted what you wrote, it wasn't out of context, and it was obvious nonsense. That you can't catch such obvious nonsense is exactly why nothing you write can be trusted.
> One of the problems OpenAI has is that their cost of revenue - and we don't know it to be exact - is extremely high, higher than the revenue they're actually gaining, otherwise known as an "operating loss." As a result, even if they increase revenue, they'll actually lose more money. On top of that, the argument I was making is that if there's a race to the bottom (one that's already started), they will have to cut costs, making them less money even if they get more customers.
None of that seems to bear any relation to what you actually wrote: "As a result, OpenAI's revenue might climb, but it's likely going to climb by reducing the cost of its services rather than its own operating costs". That is you claiming that reducing the cost of its services would increase revenue.
That is not you talking about operating income, or margin, or cost of revenue. These words have actual meanings; you can't just randomly swap one for another and expect it to make sense. Again, a recurring pattern.
> I too read publicly-available data, and my source in this case is "Google."
Yes, you already bragged in the article that you know how to read publicly available data, which is why that's the qualifier I used. I don't dispute that you're able to read. I will, however, claim that you either do not understand much what you read or are intentionally choosing to misrepresent that. Let's look at this example:
> In 2023, our total GHG emissions were 14.3 million tCO2e, representing a 13% year-over-year increase and a 48% increase compared to our 2019 target base year. This result was primarily due to increases in data center energy consumption and supply chain emissions. As we further integrate AI into our products, reducing emissions may be challenging due to increasing energy demands from the greater intensity of AI compute, and the emissions associated with the expected increases in our technical infrastructure investment.
What part of that supports your claim of AI being the cause of the 48% increase? None of it. It is only attributed to "supply chain emissions" and "data center energy consumption". The mention of AI is entirely forward-looking. Let's take it for granted that you indeed read the text you copy-pasted. Why is your claim about what it says so obviously incorrect?
Did you really not understand the text? It's not that complex. Did you understand it and just lie about it because it supported the narrative you had in mind, and nobody checks the sources anyway? Seems like a bad plan. Either way, it again demonstrates that you are not cut out for doing any kind of analysis.
As of today all of the evidence indicates the LLM paradigm is saturated.
Why all the hand-wringing about hypotheticals? As of today this stuff is a failed experiment.
Altman or Amodei coming up with the goods is a tail X-risk.
Hypotheses and hypotheticals are useful tools when writing about something big and messy. Instead of me saying - as I have before - that I believe generative AI is a complete dead end and thus OpenAI is in a really bad way - I took great pains to explain the terms under which they WOULD succeed - how difficult success might be, how much money it would take and how many factors would have to go their way.
If OpenAI pulls it off, it'd be really remarkable. Truly historic! But if they don't, they are in deep, deep doo doo.
Any day now everyone will adopt bitcoin and the value will skyrocket and we’ll all be exceedingly rich, HODL, gl, wgtm, to the moon.
Not saying they’re not working on stuff, but that’s what you sound like. OpenAI is amazing at generating hype, but the competitors are catching up, or outpacing them, and OpenAI source models are never far behind.
Citation?
LLMs bring the cost of writing software close to $0. We can finally live in a world of truly bespoke code.
I, for one, welcome back the web of the early 2000s.
LLMs do not "bring the cost of writing software close to $0" on a number of levels.
1. The code is not 100% reliable, meaning it requires human oversight, and human beings cost money.

2. LLMs themselves are not cheap, nor profitable. I am comfortable humoring the idea that someone could run their own models - something which is beginning to happen - to write code. I think that's really cool, but I am also not sure how good said code will be or how practical doing so will be.
Right now, Microsoft is effectively subsidizing the cost of Github Copilot, though they appear to have produced quite a lot of revenue from it.
https://www.benzinga.com/news/24/07/40061358/satya-nadella-s...
However, it seems that Github was not profitable before (https://news.ycombinator.com/item?id=17224136) and I would argue isn't profitable now. It's hard to tell, because Microsoft blends their costs into other business lines.
Citation?
Your concerns are certainly valid, but the LLMs are getting smaller, faster, and cheaper to run every day. Now, I also agree that you still need someone "programming" -- in the sense that they're telling a computer what to do, but they no longer need to "code" in the traditional sense (curly braces and semicolons).
We're actively seeing non-engineers build useful software for themselves, just with a $20/month subscription to ChatGPT/Claude.
Times are changing, you no longer need a 6 figure engineer to build your one-off tool.