LLMs can't replace most human jobs, in the same way that Google didn't replace most human jobs. However, many people became more productive thanks to modern web search, and a few people did lose their jobs or were downsized. Nobody hires a research librarian in a private company these days, because employees are expected to do their own searches!
The same thing will happen with LLMs. They'll be an alternative to Google Search and perform much the same function, extending the capability to fuzzy and contextual searches. They'll be integrated with character-accurate indexes, and then there will be one "ask the Internet" product. It'll be useful. It'll make everyone more productive. I don't think it'll replace any of us any time soon. Maybe in 15+ years, but not next year.
[1] Most of the criticism I've seen of LLMs stems from a misunderstanding of what they do and how they work. People expect character-accurate output, such as URLs and references. An LLM isn't an index; it doesn't work that way!
Absolutely nothing about our current generation of AI is accurate at all.
67%, or even 87%, on synthetic benchmarks does not intelligence make.
It's all statistics-based and not infinitely accurate, nor do we have any reason to think any AI system would exhibit anything resembling patience. They don't exist outside of inference time, let alone have a sense of the passage of time.
You know, we do need a customer base of actual human beings to sell things to. AI buying AI products AIn't gonna cut it.
I think this is the reason why every innovation takes time to phase in completely before the entire society benefits from it. So neither AI nor other innovations will usher in an era of explosive growth.
How will AI increase aggregate household income explosively? Creating a few more billionaires is just measurement noise, not even visible in the trend line.
Now, society is information based, and we see ourselves as the thinking machines.
Just as the industrial revolution didn't remove humans from all mechanical work, AI won't remove us from all knowledge work, but I believe it will uncover the next level of humanity. If we're not only mechanical, and we're not only cerebral, what are we?
With that being the case, there are efforts underway to stifle AI. It looks like big business hasn't been the quickest to adopt. It's been full steam ahead on things like self-driving cars, even though at times the level of safety has been exaggerated (at least in the early years).
P.S. This is probably a load of nonsense, as evidenced by the many people working on AI and all the money going into it, but it seems like business hasn't been the most enthusiastic. It's never because they truly care.
P.P.S. I also don't know how that would work exactly, but I could see things looking different with everything working fine and "employees" now having free time. Not having money to give them, and time to think "hey, why does that guy get all the stuff while we starve? Maybe we should find a way to fix that."
There's also the reality that while proprietary AI models could be bad for workers, AI could also be bad for big business. Highly disruptive if this can't be controlled. It's not always material costs; sometimes the issue is just that you could never staff teams of engineers to work on the problem. Or you have a staff of engineers and need artists... here it seems the artists could actually have the upper hand, which is nice to see :)
That's a very optimistic view. From where I'm sitting, it seems like the rich people control all three of the important AI companies (OpenAI, Google DeepMind, Anthropic) and all one (Nvidia) of the important chipmakers, so they will likely get even richer and many comparatively poor people will lose their jobs.
Under this "concept", we are actually being held back and robbed. The exact reason this is undesirable isn't the clearest, and the exact mechanism would be a bit mysterious. If I could explain and prove it, it wouldn't be a fringe theory.
It's a way of reasoning that looks at behaviors only through a cynical lens: purely looking at how business has responded compared to, e.g., self-driving cars, data security, etc.
Does it not feel like there's some extra care here that doesn't really make sense compared to how industry handled similar issues?
The bigger counterpoint would be the money and effort that IS there. (There is a lot of it, and that's a huge flaw in this theory.)
Getting to the hypothetical reasons it could go this way: 1. With more people out of work, there will be more discussion and organizing about why some people get so much stuff while others are destitute. People will no longer be too tired from work and will instead either be a) [good] living lives of leisure OR b) [bad] out of work, desperate, and pissed off.
In any case, with so many people of leisure, it will actually become clearer that we're all the same. Who cares who your daddy is? Why do we need gorgeous empty properties while we cannot provide basic housing to the population? Look at the extreme and disgusting wastefulness from the "elite" in society.
With everyone working and fighting for limited resources, you can hide this. While we all may go down to the office and sit at the desk for similar amounts of time, some are able to inflate their contributions... sometimes correctly, but who cares? Usually it's not the most dedicated, impactful workers, but the various princes and kings fighting over the loot. It makes some people look so much more valuable. When no one's doing anything, it looks different.
2. With labor no longer a factor (e.g. ~40-80% of people out of work), there won't be money flowing to pay the capitalists. This could result in economic collapse, hurting us all. However, perhaps we could see changes in the economic system that only hurt the wealthiest.
3. Human resources are actually an extremely common technical moat. If you've worked on technical projects, you've probably realized that while one person can get a shocking amount done, the work can also easily become impractical for a single person or a small team.
Now it will be harder for corporations to lean on their intellectual property. When the tech doesn't do what people want, they'll just create one that does. Today this can prove absurdly impractical, but it might be much easier in the future. Currently, many companies rely on the fact that customers are likely locked in; they don't have options because it's a multi-billion dollar investment to make the product, so they're forced to accept what the company wants.
Here's an example use case we found for our business:
Our salespeople request invoices from a potential customer. Those invoices list our competitor's services and prices. We have matching services and our own prices. The goal is to find similar services where we charge less. In the past, our salespeople would spend hours combing through those invoices. We wrote a prompt for GPT4, fed in our services and prices, and asked it to find services we could potentially replace, as well as our profit margin. It took us a day to write this prompt. The results were outstanding, and GPT4 gave accurate results. We even asked it to package the output up in a PDF for us.
This will save our company hundreds of thousands each year and we can get back to the potential customer much faster than before - increasing the likelihood of a sale.
If we had to program this like normal software, it'd probably take months to get it right. Chances are, engineering would never even prioritize this feature for our sales people.
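For contrast, here's roughly what the core matching step of the "normal software" version might look like: a minimal sketch using Python's `difflib` for fuzzy service-name matching, with entirely hypothetical service names and prices. (The genuinely hard parts in practice, like parsing arbitrary invoice PDFs, are exactly what's left out here, and they're a big part of why the LLM approach was so much faster to ship.)

```python
import difflib

# Hypothetical competitor invoice line items: (service name, their price).
competitor_items = [
    ("Managed Firewall Svc", 1200.00),
    ("Offsite Backup - 1TB", 300.00),
    ("24/7 Helpdesk Support", 2500.00),
]

# Our own (hypothetical) catalog: service name -> our price.
our_catalog = {
    "Managed Firewall Service": 950.00,
    "Offsite Backup (1 TB)": 250.00,
    "Helpdesk Support 24/7": 2700.00,
}

def find_replacements(items, catalog, cutoff=0.6):
    """Fuzzy-match each competitor line item to our closest service
    name and keep only the matches where our price is lower."""
    wins = []
    for name, their_price in items:
        # Best fuzzy match against our catalog's service names.
        match = difflib.get_close_matches(name, catalog, n=1, cutoff=cutoff)
        if not match:
            continue  # no sufficiently similar service of ours
        ours = match[0]
        if catalog[ours] < their_price:
            wins.append((name, ours, their_price - catalog[ours]))
    return wins

for theirs, ours, saving in find_replacements(competitor_items, our_catalog):
    print(f"{theirs!r} -> {ours!r}: customer saves ${saving:.2f}")
```

Even this toy version shows why the feature never gets prioritized: the fuzzy cutoff needs tuning per dataset, and real invoices don't arrive as clean Python lists.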
GPT6 with much higher context and much cheaper inference cost? Yes please. I think people can't imagine how it's going to change everything.
What you describe will save your company money because you are an early adopter, but in the long run everyone is going to do these kinds of things and the savings will be passed on to the consumer.
Munger mentions this when talking about a textile business they had. The new, more efficient machine wasn't going to make the business better but would just end up passing savings on to the consumer, so they actually sold the business.
Management wouldn't have prioritized that project for engineering because it would have cost too much and have uncertain benefits given the cost.
This is all massively deflationary, and certain highly prized skills that cost $120k a year right now will someday be $20 a month in 2024 dollars.
I've said this many times, but the only thing stopping me from using the GPT4 API for everything in my life is inference cost - both context window limitations and cost per token. I would try to feed everything into GPT4 if I could.
Inference will be solved one day.
The cost to serve a website today is probably millions of times cheaper than in 1998. Heck, Cloudflare literally gives you unlimited bandwidth for your website for free. It's that cheap today.
Inference is much more expensive today; when doing inference is as cheap as loading a website is now, I think the world will be profoundly different.
(Betteridge's law of headlines, but also true in this case)
My prediction is that it will not cause explosive economic growth, but it will have noticeable economic effects that will benefit some at the expense of others.
diarrhea
bombs
fireworks
So imagine a gigantic bomb of diarrhea fireworks; that is what AI will be like. I would type that into DALL-E, but I'm afraid OpenAI would ban me and/or make my entire history public on LinkedIn.