* They burned up the hype for GPT-5 on 4o and o1, which are great step changes but nothing the competition can't quickly replicate.
* They dissolved the safety team.
* They switched to for profit and are poised to give Altman equity.
* All while hyping AGI more than ever.
All of this suggests to me that Altman is in short-term exit-preparation mode, not planning for AGI or even GPT-5. If he had another next-generation model on the way, he wouldn't have let the media call his "discount GPT-4" and "tree of thought" models GPT-5. If he sincerely thought AGI was on the horizon, he wouldn't be eyeing the exit, and he likely wouldn't have gotten rid of the superalignment team. His actions are best explained as those of a startup CEO who sees the hype cycle he's been riding coming to an end and is looking to exit before we hit the trough of disillusionment.
None of this is to say that AI hasn't already changed a lot about the world we live in and won't continue to change things more. We will eventually hit the slope of enlightenment, but my bet is that Altman will have exited by then.
a) "the technology is overhyped", based on some meaningless subjective criteria, if you think a technology is overhyped, don't invest your money or time in it. No one's forcing you.
b) "child abuse problems are more important", with a link to an article that clearly specifies that the child abuse problems have nothing to do with OpenAI.
c) "it uses too much energy and water". OpenAI is paying fair market price for that energy and what's more the infrastructure companies are using those profits to start making massive investments in alternative energy [1]. So if everything about this AI boom fails what we'll be left with is a massive amount of abundant renewable energy (the horror!)
Probably the laziest conjecture I have endured from The Atlantic.
[1]: https://www.cbc.ca/news/canada/calgary/artificial-intelligen...
Except that someone has to pay for it. AI companies are only willing to pay for power purchase agreements, not capital expenses. Same with the $7T of chip fab. Invest your money in huge capital expenditures and our investors will pay you for it on an annual basis until they get tired of losing money.
At best, you're forcing old generation capacity that would have been retired to stay online. At worst, you're forcing the government to take loans to invest in new capacity you may not be around to pay for in a few years, leaving the public finances holding the bag.
New "investors" are Microsoft and Nvidia. Nvidia will get the money back as revenue and fuel the hype for other customers. Microsoft will probably pay in Azure credits.
If OpenAI does not make profit within two years, the "investment" will turn into a loan, which probably means bankruptcy. But at that stage all parties have already got what they wanted.
I don’t believe this is accurate. I think this is what you’re referring to?:
Under the terms of the new investment round, OpenAI has two years to transform into a for-profit business or its funding will convert into debt, according to documents reviewed by The Times.
That just means investors want the business to be converted from a nonprofit entity into a regular for-profit entity. Not that they need to make a profit in 2 years, which is not typically an expectation for a company still trying to grow and capture market share.
Source: https://www.nytimes.com/2024/10/02/technology/openai-valuati...
At about the ten-year mark, there has to be a changing of the guard, from the foot soldiers who gave their all so that an unlikely institution could come to exist in the world at scale, to people concerned more with stabilizing that institution and ensuring its continuity. In almost every company that has reached such scale in the last decade, this has meant a transition from an executive team formed of early employees to a more senior C-team from elsewhere with a different skillset. In a world where the largest companies are more likely to stay private than IPO, it's a profoundly important move to allow some liquidity for long-term employees, who otherwise might be forced to stay working at the company long past physical burnout.
For many people, sadly, one can never be rich enough. My point is, planning for both short term exit, and long term gains, is essentially the same in this particular situation. What a boon! Nice problem to have!
https://en.wikipedia.org/wiki/Gartner_hype_cycle
It just keeps happening over and over. I'd say we are at "Negative press begins".
> If he sincerely thought AGI was on the horizon he wouldn't be eyeing the exit
If such a thing could exist and was right around the corner, why would you need a company for it? Couldn't the AGI manage itself better than you could? Job's done, time to get a different hobby.
AGI doesn’t mean smarter than the best humans.
Why would you sell that?
For these models today, if we measure the amount of energy expended for training and inference, how do humans compare?
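As a rough sanity check, here is a back-of-envelope sketch, assuming the oft-cited ~20 W figure for the human brain and the ~1,287 MWh training-energy estimate for GPT-3 from Patterson et al. (2021); both numbers are approximations, not authoritative figures:

```python
# Back-of-envelope comparison; the figures below are rough published
# estimates, not exact numbers.
SECONDS_PER_YEAR = 365 * 24 * 3600

# A human brain runs at roughly 20 W; take 30 years of continuous operation.
brain_joules = 20 * 30 * SECONDS_PER_YEAR
brain_mwh = brain_joules / 3.6e9  # 1 MWh = 3.6e9 J, so ~5.3 MWh

# One published estimate of GPT-3's training energy (Patterson et al., 2021).
gpt3_training_mwh = 1287

print(f"Human brain, 30 years: ~{brain_mwh:.1f} MWh")
print(f"GPT-3 training run:    ~{gpt3_training_mwh} MWh "
      f"(~{gpt3_training_mwh / brain_mwh:.0f}x a 30-year brain budget)")
```

On that crude accounting, one training run costs a couple of hundred lifetimes of brain energy; any single inference is far cheaper, but the total depends entirely on query volume.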
For starters, we still need the AI (LLMs for now) to be more efficient, i.e. not require a datacenter to train and deploy. Yes, I know there are tiny models you can run on your home PC, but that's comparing a bicycle to a jet.
Second, for an AGI to meaningfully improve itself, it has to be smarter than not just any one person, but the sum total of all the people it took to invent it. Until then, no single AI can replace our human tech sphere of activity.
As long as there are limits to how smart an AI can get, there are places where humans can contribute economically. If there is ever to be a singularity, it's going to be a slow one, and large human AI companies will be part of the process for many decades still.
Well, you still have to have the baby, and raise it a little. And wouldn't you still want to be known as the parent of such a bright kid as AGI? Leaving early seems to be cutting down on his legacy, if a legacy was coming.
The long-term problem may be access to quality/human-created training data. Especially if the ones that control that data have AI plans of their own. Even then I could see OpenAI providing service to many of them rather than each of them creating their own models.
At best, there's a slow march of incremental improvements that looks exactly like how human culture developed knowledge.
And all the downsides will remain, the same way people, despite hundreds of good sources of info, still prefer garbage.
If the end goal is monetization of ChatGPT with ads, it will be enshittified to the same degree as Google searches. If it gets to that, what is the benefit of using ChatGPT if it just gives you the same ads and bullshit as Google?
High-intelligence AGI is the last human invention — the holy grail of technology. Nothing could be more ambitious, and if we know anything about Altman, it is that his ambition has no ceiling.
Having said all of that, OpenAI appears to be all in on brute-force AGI, swallowing the bitter lesson that vast and efficient compute is all you need. But they're overlooking a massive dataset that all known biological intelligences rely upon: qualia. By definition, qualia exist only within conscious minds. Until we train models on qualia, we'll be stuck with LLMs that are philosophical zombies — incapable of understanding our world — a world that consists only of qualia.
Building software capable of utilizing qualia requires us to put aside the hard problem of consciousness in favor of mechanical/deterministic theories of consciousness like Attention Schema Theory (AST). Sure, we don't understand qualia. We might never understand them. But that doesn't mean we can't replicate them.
I’m pretty sure it means exactly that. Without actually understanding subjective experience, there’s a fundamental doubt akin to the Chinese room. Sweeping that under the carpet and declaring victory doesn’t in fact victory make.
Citation?
...or are you just assuming that AGI will be able to solve all of our problems, apropos of nothing but Sam Altman's word? I haven't seen a single credible study suggesting that AGI is anything more than a marketing term for vaporware.
Maybe not, since Altman pretty much said they no longer want to think in terms of "how close to AGI?". IIRC, he said they're moving away from that and instead want to describe the process as hitting new specific capabilities incrementally.
I still don't get the safety team. Yes, I understand the need for a business to moderate the content they provide, and rightly so. But elevating safety concerns about a generative model to the level of the survival of humanity, I'm not so sure. And even for so-called harmful content, how can an LLM be more dangerous than access to books like The Anarchist Cookbook, pamphlets on how to conduct guerrilla warfare, training materials on how to commit terrorism, etc.? They are easily accessible on the internet, no?
> Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.
> How do we ensure AI systems much smarter than humans follow human intent?
This is a question that naturally arises if you are pursuing something that's superhuman, and a question that's pointless if you believe you're likely to get a really nice algorithm for solving certain kinds of problems that were hard to solve before.
Getting rid of the superalignment team showed which version Altman believes is likely.
Slight nit: a board can't start a coup because a coup is an illegitimate attempt to take power and the board's main job is to oversee the CEO and replace them if necessary. That's an expected exercise of power.
The coup was when the CEO reversed the board's decision to oust him and then ousted them.
Altman's actions are even more consistent with total confidence & dedication to a vision where OpenAI is the 1st and faraway leader in the production of the most valuable 'technology' ever. Plus, a desire to retain more personal control over that outcome – & not just conventional wealth! – than was typical with prior breakthroughs.
I'm a software engineer comfortably drawing a decent-but-not-FAANG paycheck at an established company with every intention of taking the slow-and-steady path to retirement. I'm not projecting, I promise.
> to a vision where OpenAI is the 1st and faraway leader in the production of the most valuable 'technology' ever
Except that OpenAI isn't a faraway leader. Their big news this week was them finally making an effort to catch up to Anthropic's Artifacts. Their best models do only marginally better in the LLM Arena than Claude, Gemini, and even the freely-released Llama 3.1 405B!
Part of why I believe Altman is looking to cash out is that I think he's smart enough to recognize that he has no moat and a very small lead. His efforts to get governments to pull up the ladder have largely failed, so the next logical choice is to exit at peak valuation rather than waiting for investors to recognize that OpenAI is increasingly just one in a crowd.
Where did that assertion come from? Has anyone come close to replicating either of these yet (other than possibly Google, who hasn't fully released their thing yet either), let alone "quickly"? I wouldn't be surprised if these "sideways" architectural changes actually give OpenAI a deeper moat than just working on larger models.
(In the Gell-Mann amnesia sense, make sure you take careful note of who was going "OAI has AGI internally!!!" and other such nonsense so you can not pay them any mind in the future)
How would the car industry change if someone made a 3D printer that could make any part, including custom parts, with just electricity and air? It is a sea change to manufacturers and distributors, but there would still be a need for mechanics and engineers to specify the correct parts, in the correct order, and use the parts to good purpose.
It is easy to imagine that the inventor of such a technology would probably start talking about printing entire cars - and if you don't think about it, it makes sense. But if you think about it, there are problems. Making the component of a solution is quite different than composing a solution. LLMs exist in the same conditions. Being able to generate code/text/images is of no use to someone who doesn't know what to do with it. I also think this limitation is a practical, tacit solution to the alignment problem.
It’s possible that this could happen but you need to propose a mechanism and metric for this argument to be taken seriously (and to avoid fooling yourself with moving goalposts). Under what grounds do you assert that the trend line will stop where you claim it will stop?
Yes, if super-human AGI simply never happens then the alignment problem is mostly solved. Seems like wishful thinking to me.
We have a very limited ability to define human intelligence, so it is almost impossible to know how near or far we are from simulating it. Everyone here knows how much of a challenge it is to match average human cognitive abilities in some areas, and human brains run at 20 watts. There are people in power who may take technologists and technology executives at their word and move very large amounts of capital on promises that cannot be fulfilled. There was already an AI Winter 50 years ago, and there are extremely unethical figures in technology right now who can ruin the reputation of our field for a generation.
On the other hand, we have very large numbers of people around the world on the wrong end of a large and increasing wealth gap. Many of those people are just hanging on, doing jobs that are actually threatened by AI. They know this, they fear this, and of course they will fight for their and their families' lifestyles. This is a setup for large-scale violence and instability. If there isn't a policy plan right now, AI will be suffering populist blowback.
Aside from those things, it looks like Sam has lost it. The recent stories about the TSMC meeting (https://news.ycombinator.com/item?id=41668824) were a huge problem. Asking for $7T shows a staggering lack of grounding in reality and in how people, businesses, and supply chains work. I wasn't in the room and I don't know if he really sounded like a "podcasting bro", but to make an ask like that of companies with their own capital is insulting to them. There are potential dangers of applying this technology; there are dangers of overpromising the benefits; and neither is well served when relatively important people in related industries think there is a credibility problem in AI.
The problem is when the hype machine causes the echoes to replace the original intelligence that spawned the echoes, and eventually those echoes fade into background noise and we have to rebuild the original human intelligence again.
The invention would never see the light of day. If someone were to invent Star Trek replicators, they'd be buried along with their invention. Best case, it would be quickly captured by the ownership class and only be allowed to be used by officially blessed manufacturing companies, not by any individuals. They will have learned their lesson from AI and what it does to scarcity. Western [correction: all of] society is hopelessly locked into and dependent on manufactured scarcity and the idea that people have to pay for things. The wealthy and powerful will never allow free abundance of physical goods in the hands of the little people.
So to solve this problem you need billions to burn on gambles. I guess that's how we ended up with VC's.
How do you reconcile that with the fact that Western society has invented, improved, and supplied many of the things we lament that other countries don't have (and those countries also lament it - it's not just our own Stockholm Syndrome)?
Are there specific historical examples of this that come to mind?
Then there would be a violent revolution which wrestles it out of their hands. The benefits of such a technology would be immediately obvious to the layman and he would not allow it to be hoarded by a select few.
AI can magically decide where to put small pieces of code. It's not a leap to imagine that it will later be good at knowing where to put large pieces of code.
I don't think it'll get there any time soon, but the boundary is less crisp than your metaphor makes it.
Magically, but not particularly correctly.
Right.
It sounds to me like you agree and are repeating the comment, but are framing it as disagreement.
I'm sure I'm missing something.
If you had a printer that could print semi-random mechanical parts, using it to make a car would be obviously dumb, right? Maybe you would use it to make, like, a roller blade wheel, or some other simple component that can be easily checked.
While the attention-based mechanisms of the current generation of LLMs still have a long way to go (and may not be the correct architecture) to achieve requisite levels of spatial reasoning (and of "practical" experience with how different shapes are used in reality) to actually, say, design a motor vehicle from first principles... that future is far more tangible than ever, with more access to synthetic data and optimized compute than ever before.
What's unclear is whether OpenAI will be able to recruit and retain the talent necessary to be the ones to get there; even if it is able to raise an order of magnitude more than competitors, that's no guarantee of success. My guess would be that some of the decisions that have led to the loss of much senior talent will slow their progress in the long run. Time will tell!
I think the insight is that some people truly believe that LLMs would be exactly as groundbreaking as a magical 3D printer that prints out any part for free.
And they're pumping AI madly because of this belief.
Learning how to get what you want is a fundamental skill you start learning from infancy.
But what's interesting when I speak to laymen is that the hype in the general public seems specifically centered on the composite solution that is ChatGPT. That's what they consider 'AI'. That specific conversational format in a web browser, as a complete product. That is the manifestation of AI they believe everyone thinks could become dangerous.
They don't consider the LLM APIs as components of a series of new products, because they don't understand the architecture and business models of these things. They just think of ChatGPT and UI prompts (or its competitors' versions of the same).
*(which is always a risky way of looking at it, because who the hell am I? Neither somebody in the AI field, nor completely naive toward programming, so I might be in some weird knows-enough-to-be-dangerous-not-enough-to-be-useful valley of misunderstanding. I think this describes a lot of us here, fwiw)
Why is it in your worldview a CEO “has to lie”?
Are you incapable of imagining one where a CEO is honest?
> The real story of LLMs is revealed when you posit a magical technology that can print any car part for free.
I’ll allow it if you stipulate that, randomly and without reason, when I ask for an alternator it prints me a toy dinosaur.
> It is easy to imagine that the inventor of such a technology
As if the unethical sociopath TFA is about is any kind of, let alone the, inventor of genai.
> Being able to generate code/text/images is of no use to someone who doesn't know what to do with it.
Again, conveniently omitting the technology’s ever present failure modes.
The arguments are essentially:
1. The technology has plateaued, not in reality, but in the perception of the average layperson over the last two years.
2. Sam _only_ has a record as a deal maker, not a physicist.
3. AI can sometimes do bad things & utilizes a lot of energy.
I normally really enjoy The Atlantic, since their writers at least try to include context & nuance. This piece includes neither.
It's like fossil fuels. They took billions of years to create and centuries to consume. We can't just create more.
Another problem is that the data sets are becoming contaminated, creating a reinforcement cycle that makes LLMs trained on more recent data worse.
My thoughts are that it won't get any better with this method of just brute-forcing data into a model like everyone's been doing. There needs to be some significant scientific innovations. But all anybody is doing is throwing money at copying the major players and applying some distinguishing flavor.
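A toy illustration of that reinforcement cycle (sometimes called "model collapse"): repeatedly fit a distribution to samples drawn from the previous generation's fit, and the fitted spread tends to drift downward, losing the tails of the original data first. This is a deliberately simplified sketch, not a claim about any production model:

```python
import random
import statistics

# Generation 0: the "real" data distribution.
mu, sigma = 0.0, 1.0

# Each generation trains only on samples produced by the previous model.
for gen in range(30):
    sample = [random.gauss(mu, sigma) for _ in range(20)]
    mu = statistics.mean(sample)    # refit the "model" to its own output
    sigma = statistics.stdev(sample)
    if gen % 5 == 0:
        print(f"gen {gen:2d}: sigma = {sigma:.3f}")

# sigma tends to shrink across generations: sampling error compounds,
# and the rare/tail behavior of the original data disappears first.
```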
Progress on benchmarks continues to improve (see OpenAI's o1).
The claim that there is nothing left to train on is objectively false. The big guys are building synthetic training sets, moving to multimodal, and are not worried about running out of data.
o1 shows that you can also throw more inference compute at problems to improve performance, so it gives another dimension to scale models on.
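For a concrete sense of that extra scaling dimension, here is a minimal best-of-N sketch, one of the simplest ways to trade inference compute for quality. The `generate` and `score` functions are hypothetical stand-ins for a sampled model call and a verifier/reward model; o1's actual mechanism is not public:

```python
import random
from typing import Callable

def best_of_n(generate: Callable[[str], str],
              score: Callable[[str], float],
              prompt: str, n: int = 16) -> str:
    """Spend n times the inference compute: sample n candidate
    answers and keep the one the verifier scores highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Toy stand-ins (purely illustrative): a noisy "model" and a
# "verifier" that prefers answers closer to 42.
def generate(prompt: str) -> str:
    return str(random.randint(0, 100))

def score(answer: str) -> float:
    return -abs(int(answer) - 42)

print(best_of_n(generate, score, "What is 6 x 7?"))
```

Larger n generally buys better answers at linear cost, which is the dimension of scaling the comment refers to.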
Imagine not going to school and instead learning everything from random blog posts or reddit comments. You could do it if you read a lot, but it's clearly suboptimal.
That's why OpenAI, and probably every other serious AI company, is investing huge amounts in generating (proprietary) datasets.
Our problem isn't technology, it's humans.
Unless he suggests mass indoctrination via AI, AI won't fix anything.
I think the job of a CEO is not to tell you the truth; more often than not, the truth is the opposite of what they say.
What if GPT-5 is vaporware, and there's no leap equivalent to GPT-3-to-GPT-4 to be realized with current deep-learning architectures? What is OpenAI worth then?
I keep hearing from people who find these enormous benefits from LLMs. I've been liking them as a search engine (especially finding things buried in bad documentation), but can't seem to find the life-changing part.
It's helped me stay productive on days when my brain just really doesn't want to come up with a function that does some annoying fairly complex bit of logic and I'd probably waste a couple hours getting it working.
Before I'd throw something like that at it, and it'd give me something confidently that was totally broken, and trying to go back and forth to fix it was a waste of my time.
Now I get something that works pretty well, though maybe I just need to tweak something a bit because I didn't give it enough context or quite go over all the inconsistencies and exceptions in the business logic given by the requirements. (Also, I can't actually use it on client machines, so I have to type it manually to and from another machine; I'm not copy-pasting anything, so I try to get away with typing less.)
I'm not typing anything sensitive, btw, this is stuff you might find on Stack Overflow but more convoluted, like "search this with this exception and this exception because that's the business requirement and by these properties but then go deeper into this property that has a submenu that also needs to be included and provide a flatlist but group it by this and transform it so it fits this new data type and sort it by this unless this other property has this value" type of junk.
Sam Altman himself doesn't know whether that's the case. Nobody knows. It's the nature of R&D. If you can tell whether an architecture works or not with 100% confidence, it's not cutting edge.
I think that was before 4o? I know 4o-mini and o1 for sure have come out since he said that.
That’s dead. OpenAI knows that much. There will be more, but they aren’t going to report that we’re doing incremental advances until there’s a significant breakthrough. They need to stay afloat and say what it takes to try and bridge the gap.
I suspect it's a little different. AI models are still made of math and geometric structures. Like mathematicians, researchers are developing intuitions about where the future opportunities and constraints might be. It's just highly abstract, and until someone writes the beautiful Nautilus Mag article that helps a normie see the landscape they're navigating, we outsiders see it as total magic and unknowable.
But Altman has direct access to the folks intuiting through it (likely not validated intuitions, but still insight)
That's not to say I believe him. Motivations are very tangled and meta here
[0] https://norcalrecord.com/stories/664710402-judge-tosses-clas...
I'm sure the defence is always, "but if we just had a bit more money, we would've got it done"
CEOs more often come from marketing backgrounds than other disciplines, for the very reason that they have to sell stakeholders, employees, and investors on the possibilities. If a CEO's myth-making turns out to be a lie 50 to 80 percent of the time, he's still a success, as with Edison, Musk, Jobs, and now Altman.
But I think AI CEOs seem to be imagining and peddling wilder, fancier myths than the average. If AI technology pans out, then I don't feel they're unwarranted. I think there's enough justification, but I'm biased and have been doing AI for 10 years.
To your question: if a CEO's lies don't accidentally turn true eventually, as in the case of Holmes, then yes, it's a big problem.
What you need a CEO for is to sell you (and your investors) a vision.
It saddens me how easily someone with money and influence can elevate themselves to a quasi religious figure.
In reality, this vision you speak of is more like the blind leading the blind.
OpenAI decides what they call GPT-5. They are waiting for a breakthrough that would make people go "wow!". That's not even very difficult, and there are multiple paths. One is a much smarter GPT-4, which is what most people expect, but another is a really good voice-to-voice or video-to-video feature that works seamlessly, the same way ChatGPT was the first chatbot that made people interested.
Otherwise people might get the impression that we’re already at a point of diminishing returns on transformer architectures. With half a dozen other companies on their heels and suspiciously nobody significantly ahead anymore, it’s substantially harder to justify their recent valuation.
"We have no current plans to make revenue."
"We have no idea how we may one day generate revenue."
"We have made a soft promise to investors that once we've built a general intelligence system, basically we will ask it to figure out a way to generate an investment return for you."
The fact he has no clue how to generate revenue with an AGI without asking it, shows his lack of imagination.
I wouldn't take this statement at face value any more than anything else a CEO says. For example, maybe he still had to appease the board, and talking about making profit would be counterproductive. Maybe he gave this answer because it signals that he's in it for the long game and investors should be too.
But it is always about the narrative.
But you are right, we live in a post-truth, influencer-driven world. It's all about the narrative.
Which model? Sonnet 3.5? I subscribed to Claude for while to test Sonnet/Opus, but never got them to work as well as GPT-4o or o1-preview. Mostly tried it out for coding help (Rust and Python mainly).
Definitely didn't see any "leap" compared to what OpenAI/ChatGPT offers today.
Not sure the record supports that if you remove OpenAI, which is a work in progress and supposedly not going too great at the moment. A talented 'tech whisperer', maybe?
The world has become a less trustworthy place for a lot of reasons and AI is only making it worse, not better.
Reality: AI needs unheard amounts of energy. This will make climate significantly worse.
… and it always will? It seems terribly limiting to stop exploring the potential of this technology because it’s not perfect right now. Energy consumption of AI models does not feel like an unsolvable problem, just a difficult one.
Edit: Also why are you getting downvoted...
Real technological progress in the 21st century is more capital-intensive than before. It also usually requires more diverse talent.
Yet the breakthroughs we can make in this half-century can be far greater than any before: commercial-grade fusion power (where Lawrence Livermore National Lab currently leads, thanks to AI[1]), quantum computing, spintronics, twistronics, low-cost room-temperature superconductors, advanced materials, advanced manufacturing, nanotechnology.
Thus, it's much more about the many, not the one. Multi-stakeholder. Multi-person. Often led by one technology leader, sure, but this one person must uplift and be accountable to the many. Otherwise we get the OpenAI story, and end-justifies-the-means type of groupthink wrt. those who worship the technoking.
[1]: https://www.llnl.gov/article/49911/high-performance-computin...
now you can use AI to easily write the type of articles he produces and he's pissed.
You really cannot.
It seems fair to say Altman has completed his Musk transformation. Some might argue it's inevitable. And indeed Bill Gates' books in the 90s made a lot of wild promises. But nothing that egregious.
So far Musk has been pushing the lies out continually to try and prevent any possible exposure to fraud. Like "Getting to Mars will save humanity" or the latest "We will never reach Mars unless Trump is president again". Then again, self-driving cars are just around the corner, as stated in 2014 with a fraudulently staged video of the technology; they just need to work the bugs out.
Altman is making wild claims too, about how machine learning will slow and reverse climate change, even as the technology demands vastly more resources, especially power, just to be market-viable for business and personal usage.
All three play off people's emotions to repress critical thinking. They are no different than the lying preachers, I can heal you with a touch of my hand, that use religion to gain power and wealth. The three above are just replacing religion with technology.
Google just paid over $2.4 billion to get Noam Shazeer back in the company to work with Gemini AI. Google has the deepest pool of AI researchers. Microsoft and Facebook are not far behind.
OpenAI is losing researchers; they have maybe 1-2 years until they become a Microsoft subsidiary.
The other issue is that AI's 'boundless prosperity' is a little like those proposals to bring an asteroid made of gold back to earth. 20m tons, worth $XX trillion at current prices, etc. The point is, the gold price would plummet, at the same time as the asteroid, or well before, and the promised gains would not materialize.
If AI could do everything, we would no longer be able (due to no-one having a job), let alone willing, to pay current prices for the work it would do, and so again, the promised financial gains would not materialize.
Of course in both cases, there could be actual societal benefits - abundant gold, and abundant AI, but they don't translate directly to 'prosperity' IMHO.
I still "feel the AGI". I think Ben Goertzel'a recent talk on ML Street Talk was quite grounded / too much hype clouds judgement.
In all honesty, once the hype dies down, even if AGI/ASI is a thing - we're still going to be heads down back to work as usual so why not enjoy the ride?
Covid was a great eye-opener, we dream big but in reality people jump over each other for... toilet paper... gotta love that Gaussian curve of IQ right?
So my question is: What does the AI rumor mill say about that? Was all that just hype-building, or is OpenAI holding back some major trump card for when they become a for-profit entity?
All this doing the rounds of foreign governments and acting like artificial general intelligence is just around the corner is what got him this fundraising round today. It's all just games.
On top of that, the advance in models for language and physical simulation based models (for protein prediction and weather forecasting as examples) has been so rapid and unexpected that even folks who were previously very skeptical of "AI" are believers - it ain't because Sam Altman is up there talking a lot. I went from AI skeptic to zealot in about 18 months, and I'm in good company.
He was literally invited to congress to speak about AI safety. Sure, perhaps people that have a longer memory of the tech world don't trust him. That's actually not a lot of people. A lot of people just aren't following tech (like my in-laws).
ITT: People taking Sam at his word.
The problem is, when it pops, which it will, it'll fuck the economy.
Now that he’s restructuring the company to be a normal for-profit corp, with a handsome equity award for him, we should assume the normal monopoly-grabbing that we see from the other tech giants.
If the dividend is simply going to the shareholder (and Altman personally) we should be much more skeptical about baking these APIs into the fabric of our society.
The article is asinine; of course a tech CEO is going to paint a picture of the BHAG, the outcome that we get if we hit a home run. That is their job, and the structure of a growth company, to swing for giant wins. Pay attention to what happens if they hit. A miss is boring; some VCs lose some money and nothing much changes.
Not saying it is a bubble but something seems imbalanced here.
The sophisticated investors are not betting on future increasing valuations based on current LLMs or the next incremental iterations of it. That's a "static" perspective based on what outsiders currently see as a specific product or tech stack.
Instead, you have to believe in a "dynamic" landscape where OpenAI the organization of employees can build future groundbreaking models that are not LLMs but other AI architectures and products entirely. The so-called "moat" in this thinking would be the "OpenAI team to keep inventing new ideas beyond LLM". The moat is not the LLM itself.
Yes, if everyone focuses on LLMs, it does look like Meta's free Llama models will render OpenAI worthless. (E.g. the famous memo: https://www.google.com/search?q=We+have+no+Moat%2C+and+Neith...)
As an analogy, imagine that in the 1980s, Microsoft's IPO and valuation looks irrational since "writing programming code on the Intel x86 stack" is not a big secret. That stock analysis would then logically continue saying "Anybody can write x86 software such as Lotus, Borland, etc." But the lesson learned was that the moat was never the "Intel x86 stack"; the moat was really the whole Microsoft team.
That said, if OpenAI doesn't have any future amazing ideas, their valuation will crash.
It was just the next in line to be inflated after crypto.
Hilarious.
The progress that we've seen in the past two years has been completely insane compared to literally any other field. LLMs complete absolutely insane reasoning tasks, including math proofs at the level of a "mediocre grad student" (which is super impressive). For better or worse, image generation, and now video generation, is indistinguishable from the real thing a lot of the time.
I think that crazy business types and media really overhyped the fuck out of AI so fucking high, that even with such strong progress, it's still not enough.
AGI cannot exist in a box that you can control. We figured that out 20 years ago.
Could they start that? Sure, theoretically. However, they would have to massively pivot, and nobody at OAI is a robotics expert.
This is not unusual - politicians cannot be taken at their word, government bureaucrats cannot be taken at their word, and corporate media propagandists cannot be taken at their word.
The fact that the vast majority of human beings will fabricate, dissemble, lie, scheme, manipulate etc. if they see a real personal advantage from doing so is the entire reason the whole field of legally binding contract law was developed.
During my interview with Jared Friedman, their CTO, I asked him what Sam was trying to create: the greatest investment firm of all time, surpassing Berkshire Hathaway, or the greatest tech company, surpassing Google? Without hesitation, Jared said Google. Sam wanted to surpass Google. (He did it with his other company, OpenAI, and not YC, but he did it nonetheless.)
This morning I tried Googling something and the results sucked compared to what ChatGPT gave me.
Google still creates a ton of value (YouTube, Gmail, etc), but he has surpassed Google in terms of cutting edge tech.
But that's not how the market works.
I am personally not sold on AGI being possible. We might be able to make some poor imitation of it, and maybe an LLM is the closest we get, but to me it smacks of “man attempts to create life in order to spite his creator.” I think the result of those kinds of efforts will end more like That Hideous Strength (in disaster).
Old tactic.
The project that would eventually become Microsoft Corp. was founded on it. Gates told Ed Roberts, the inventor of the first personal computer, that he had a programming language for it. He had no such programming language.
Gates proceeded to espouse "vapourware" for decades. Arguably Microsoft and its disciples are still doing so today.
Will the tactic ever stop working? Who knows.
Focus on the future that no one can predict, not the present that anyone can describe.
The issue is more that the company is hemorrhaging talent, and doesn’t have a competitive moat.
But luckily this doesn’t affect most of us, rather it will only possibly harm his investors if it doesn’t work out.
If he continues to have access to resources and can hire well and the core tech can progress to new heights, he will likely be okay.
Is kind of a boring way of looking at things. I mean we have fairly good chatbots and image generators now but it's where the future is going that's the interesting bit.
Lumping AI in with dot coms and crypto seems a bit silly. It's a different category of thing.
(By the way Sam being shifty or not techy or not seems kind of incidental to it all.)
He went from a failed startup to president of YC to ultra-wealthy investor in the span of about a decade. That's sus.
They would at least be more believable if they blasted claims that a certain video must be fake, especially given how absurd and shocking it is.
I have skepticism of his predictions, and disregard for his exaggerations.
I have a ChatGPT subscription and build features on OpenAI technology.
It's funny that we coach people not to ascribe human characteristics to LLMs...
But we seem equally capable of denying the very human characteristics in our would be overlords.
Which warlord will we canonize next?
But, but, but… their drama, or Altman’s drama is now too much for me, personally.
With a lot of reluctance I just stopped doing the $20/month subscription. The advanced voice mode is lots of fun to demo to people, and o1 models are cool, but I am fine just using multiple models for chat on Abacus.AI and Meta, an excellent service, and paid for APIs from Google, Mistral, Groq, and OpenAI (and of course local models).
I hope I don’t sound petty, but I just wanted to reduce their paid subscriber numbers by -1.
So close, yet so far. And both help the respective CEOs in hyping the respective companies.
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
First, he mentioned wishing he was more into AI. While I appreciate the honesty, it was pretty off-putting. Here’s the CEO of a company building arguably the most consequential technology of our time, and he’s expressing apathy? That bugs me. Sure, having a dispassionate leader might have its advantages, but overall, his lack of enthusiasm left a bad taste in my mouth. Why IS he the CEO then?
Second, he talked about going on a “world tour” to meet ChatGPT users and get their feedback. He actually mentioned meeting them in pubs, etc. That just sounded like complete BS. It felt like politician-level insincerity—I highly doubt he’s spoken with any end-users in a meaningful way.
And one more thing: Altman being a well-known ‘prepper’ doesn’t sit well with me. No offense to preppers, but it gives me the impression he’s not entirely invested in civilization’s long-term prospects. Fine for a private citizen, but not exactly reassuring for the guy leading an organization that could accelerate its collapse.
I've done a huge amount of political organizing in my life, for common good - influencing governments to build tens of billions of dollars worth of electric rail infrastructure.
I'm also a big prepper. It's important to understand that stigmatizing prepping is very dangerous - specifically to those who reject it.
Whether it's a gas main break, a forest fire, an earthquake, or a sci-fi story, encouraging people to become resilient to disaster is incredibly beneficial for society as a whole, and very necessary for individuals. The vast, vast majority of people who do it are benefiting their entire community by doing so. Even, as much as I'm sure I'd dislike him if I met him, Sam Altman. Him being a prepper is good for us, at least indirectly, and possibly directly.
Just look at the stories in NC right now - people who were ready to clear their own roads, people taking in others because they have months of food.
Be careful not to ascribe values to behaviors like you're doing.
From robotics, neurology, transport to everything in between - not a word should be taken as is.
Yeah, maybe on the surface chatbots turned out to be chatbots. But you have to be a poor journalist to stop your investigation of the issue at that and conclude AI is no big deal. Nuance, anyone?
But apparently as a society we like handing multi-billion dollar investments to folks with a proven track record of (not actually shipping) complete bullshit.
Yes. We've been through this again and again. Technology does not follow potential. It follows incentive. (Also, “all of physics”? Wtf is he smoking?)
> It’s much more pleasant fantasizing about a benevolent future AI, one that fixes the problems wrought by climate change, than dwelling upon the phenomenal energy and water consumption of actually existing AI today.
I mean, everything good in life uses energy, that’s not AIs fault per se. However, we should absolutely evaluate tech anchored in the present, not the future. Especially with something we understand so poorly like emergent properties of AI. Even when there’s an expectation of rapid changes, the present is a much better proxy than yet-another sociopath with a god-complex whose job is to be a hype-man. Everyone’s predictions are garbage. At least the present is real.
The most laughable part of the article is where they point at the fact that in the past TWO YEARS we haven't gone from "OMG we've achieved near-perfect NLP" to "Deep Thought, tell us the answer to life, the universe, and everything" as some sort of huge failure. That framing is patently absurd. If you took Altman at his word on that one, you probably also scanned your eyeball for fake money. The truth, though, is that the rate of change in the products his company is making is still breathtaking: the text-to-speech tech in the latest advanced voice release (recognizing it's not actually text-to-speech but something profoundly cooler, though that's lost on journalism majors teaching journalism majors like the author) puts to shame the last 30 years of TTS. This alone would have been enough to build a fairly significant enterprise selling IVR and other software.
When did we go from enthralled by the rate of progress to bored that it's not fast enough? That what we dream and what we achieve aren't always 1:1, but that's still amazing? I get that when we put down the devices and switch off the noise, we are still bags of mostly water: our backs hurt, we aren't as popular as we wish we were, our hair is receding, maybe we need Invisalign but flossing that tooth every day is easier and cheaper, and all the other shit that makes life much less glamorous than they sold us in the dot-com boom, or nanotech, etc., as they call out in the article.
But the dot-com boom did succeed. When I started at early Netscape, no one used the internet. We spun the stories of the future this article bemoans to our advantage. And it was messier than the stories in the end. But now -everyone- uses the internet for everything. Nanotechnology permeates industry, science, tech, and our everyday life. But the thing about amazing tech that sounds so dazzling when it's new is -it blends into the background- if it truly is that amazingly useful. That's not a problem with the vision of the future. It's the fact that the present will never stop being the present and will never feel like some illusory, gauzy vision you thought it might be. But you still use dot-coms (this journalism major's assessment of tech was published on a dot-com, and we are responding on a dot-com), you still live in a world powered by nanotechnology, and the AI promised in TWO YEARS is still mind-boggling to anyone who is thinking clearly about what the goalposts for NLP and AI were five years ago.
See same with Elon Musk.
Money turns geniuses into smooth-brained, egomaniacal idiots. See same with Steve Jobs.
"It's too late to stop conflating wealth with intelligence"
Billionaires are shameful for the collective; they should be a source of shame to every one of us. They are fundamentally unfit for leadership. They are evidence of civilizational failure; the least we can do is not idolize them.
Even the stories I heard about him from one of his indirect reports back in the pre-iCEO "Apple is still fucked, NeXT is a distracted mess" era were just like stories told about him from the dawn of Apple and in the iPhone era.
Musk and Altman are opportunists. Musk appears to be a malignant narcissist. Neither seems in a rush to become a better human.
The article is written to appeal to people who want to feel clever casually slagging off and dismissing tech.
> it appears to have plateaued. GPT-4 now looks less like the precursor to a superintelligence and more like … well, any other chatbot.
What a pathetic observation. Does the author not recall how bad chatbots were pre-LLMs?
What LLMs can do blows my mind daily. There might be some insufferable hype atm, but gees, the math and engineering behind LLMs is incredible, and it's not done yet - they're still improving from more compute alone, not even factoring in architecture discoveries and innovations!
This is such a ridiculous sentence.
GPT-4 now looks like any other chatbot because the technology advanced so the other chatbots are smarter now as well. Somehow the author is trying to twist this as a bad thing.
As a matter of fact, I suspect the author of the article actually belongs to the gullible minority who ever took Altman at his word, and is now telling everyone what they already knew. But so what? What are we even discussing? Nobody is calling for people to delete their OpenAI (or, in fact, Anthropic, or whatever) accounts, as long as we find them useful for something, I suppose. It just makes no difference at all whether that writer or his readers take Altman at his word; their opinions have no real effect on the situation, it seems. They are merely observers.
https://www.betterworldbooks.com/product/detail/the-sociopat...
/s
This is a good reminder:
> Prominent AI figures were among the thousands of people who signed an open letter in March 2023 to urge a six-month pause in the development of large language models (LLMs) so that humanity would have time to address the social consequences of the impending revolution
In 2024, ChatGPT is a weird toy, my barber demands paper cash only (no bitcoin or credit cards or any of that phone nonsense, this is Silicon Valley), I have to stand in line at USPS and the DMV with mindless paper-shuffling human robots, marveling at the humiliating stupidity of manual jobs, and robotaxis are still almost here, just around the corner, as always. Let's check again in a "couple of thousand days", I guess!
Any system complex enough to be useful has to be embedded in an ever more complex system. The age of mobile phone internet rests on the shoulders of an immense and enormously complex supply chain.
LLMs are capturing low entropy from data online and distilling it for you while producing a shitton of entropy on the backend. All the water and energy dissipated at data centers, all the supply chains involved in building GPUs at the rate we are building. There will be no magical moment when it's gonna yield more low entropy than what we put in on the other side as training data, electricity and clean water.
When companies sell ideas like 'AGI' or 'self driving cars' they are essentially promising you can do away with the complexity surrounding a complex solution. They are promising they can deliver low entropy on a tap without paying for it in increased entropy elsewhere. It's physically impossible.
You want human intelligence to do work, you need to deal with all the complexities of psychology, economics and politics. You want complex machines to do autonomous work, you need an army of people behind it. What AGI promises is, you can replace the army of people with another more complex machine. It's a big bald faced lie. You can't do away with the complexity. Someone will have to handle it.
We have them in San Francisco now (and Los Angeles and Phoenix, and Austin soon.)
Surely this is just a case of the future not being evenly distributed. All of these 'problems' are already solved and the solution is implemented somewhere, just not where you happen to be.
You can walk to where they're waiting for you.
Now keep in mind that this is going to be the default option for a lot of forums and social media for automated moderation. Reddit is already using it a lot and now a lot of the front page is clearly feedback farming for OpenAI. What I'm getting at is we're moving towards a future where only a certain type of dialog will be allowed on most social media and Sam Altman and his sponsors get to decide what that looks like.
Contrary to The Atlantic's almost always intentionally misleading framing, the "dot com boom" did in fact go on to print trillions later, and it is still printing them, after what was an ultimately marginal dip, even if it was account-clearing for many.
I say that as someone who would be deemed an AI pessimist by many.
But it's wildly early to declare anything to be "what it is" and only that, in terms of ultimate benefit. Just like it was, and is, wild to declare the dot com boom to be over.
Agreed - I stopped taking The Atlantic seriously after their 2009 cover story, "Did Christianity Cause the Crash?"[1] To ignore CDOs, the Glass-Steagall repeal, the co-option of the ratings agencies, and the dissolution of lending standards, and instead blame the Great Recession on a few obnoxious megapastors, is to completely discard the magazine's credibility.
[1] https://www.theatlantic.com/magazine/archive/2009/12/did-chr...
I don't have any suggestions on how to solve this. Everything I can think of has immediate large flaws.
Is it even possible? Like, don't you know the political inclination of any website/journal you read? I feel like this search for "The Objective Truth" is just a chimera. I'd rather articles combine the pros and cons of everything they discuss, tbh.
For example you could say:
Joey JoeJoe, billionaire CEO, who notably said horrible things, was convicted of some crimes, and ate three babies, was quoted as saying “machine learning is just so awesome”.
There, you didn’t inject a judgement. You accurately quoted the subject. You gave the reader enough contextual information about the person so they know how much to trust or not-trust the quote.
A journalist doing anything other than journaling is not a journalist.
So people getting quoted verbatim is perfectly fine. If the quoted turns out to be a liar, that's just part of the journal.
This is a bizarre take about a 167-year-old, continuously published magazine.