Altman took a non-profit, vacuumed up a bunch of donor money, flipped OpenAI into the hottest TC-style startup in the world, then put the gas pedal to commercialization. It takes a certain type of politicking and deception to make something like that happen.
Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.
Combine that with a totally inexperienced board and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history.
There isn't a bigger, more interesting story here. This is in fact a very common story that plays out at many software companies. The board of OpenAI ended up making a decision that destroyed billions of dollars' worth of brand value and goodwill. That's all there is to it.
https://quorablog.quora.com/Introducing-creator-monetization...
https://techcrunch.com/2023/10/31/quoras-poe-introduces-an-a...
What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something? What type of politicking and deception is involved in creating a for-profit subsidiary which is granted a license to OpenAI's research in order to generate wealth? The entire purpose of this legal structure is to keep the non-profit owners focused on their mission rather than shareholder value, which in this case is attempting to ethically create an AGI.
Edit: to add that this framework was not invented by Sam Altman, nor OpenAI.
>Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.
Thus the legal structure I described, although this argument is entirely theoretical and assumes such a thing can actually be guarded that well at all, or that model performance and compute will remain correlated.
Until they say otherwise, I am going to take them at their word that it was because he a) hired two people to do the same project, and b) gave two board members different accounts of the same employee. It's not my job nor the internet's to try to think up better-sounding reasons on their behalf.
Why worry about the Sauds when you've got your own home-grown power-hungry individuals?
The second, after Musk's takeover of Twitter.
Do we have a ranking of shitshows in tech history, though? How does this really compare to Jobs' ouster at Apple?
Or to Cambridge Analytica and Facebook's "we must do better" greatest hits?
This!
We don't know the end result of this. It might not be in the interests of the powerful. What if everyone is out of a job? That might not be such a great concept for the powers that be, especially if everyone is destitute.
Not saying it's going down that way, but it's worth considering. What if the powers that be are worried about people getting out of line, and want to retard the progress of AI?
Was this for OpenAI or an independent venture? If OpenAI, it's a red flag; if an independent venture, it seems like a non-issue. There is demand for AI accelerators, and he wants to enter that business. Unless he is using OpenAI money to buy inferior products, or OpenAI wants to work on something competing, there is no conflict of interest and the OpenAI board shouldn't care.
The best thing about AI startups is that there is no real "code". It's just a bunch of arbitrary weights, and it can probably be obfuscated very easily such that any court case will just look like gibberish. After all, that's kind of the problem with AI "code". It gives a number after a bunch of regression training, and there's no "debugging" the answer.
Of course this is about the money, one way or another.
This prediction predated any of the technology to create even a rudimentary LLM and could be said of more-or-less any transformative technological development in human history. Famously, Marxism makes this very argument about the impact of the industrial revolution and the rise of capital.
Geoffrey Hinton appears to be an eminent cognitive psychologist and computer scientist. I'm sure he has a level of expertise I can't begin to grasp in his field, but he's no sociologist or historian (edit: nor economist). Very few of us are in a position to make predictions about the future - least of all in an area where we don't even fully understand how the _current_ technology works.
Nobody can really explain the argument; there are "billions" or "trillions" of dollars involved; and most likely the whole thing will not change the technical path of the world.
Absent any assumption that the board made a sound decision, it could simply be that the board acted stupidly and egotistically. Unless they can give better reasons, that is the logical inference.
This is absolutely peak irony!
The US pouring trillions into its army and close to nothing into its society (infrastructure, healthcare, education...): crickets.
Some country funding AI accelerators: THEY ARE A THREAT TO HUMANITY!
I am not defending Saudi Arabia, but the double standard and outright hypocrisy are just laughable.
Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told. This is important, because people were siding with the board under the understanding that this firing was led by the head research scientist, who is concerned about AGI. But now it looks like the board is represented by D'Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest than ever since dev day, when OpenAI launched highly similar features.
Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.
Is it just different because they're a nonprofit? Or how on earth does the board think they can get away with this?
Greg was not invited (losing Sam one vote), and Sam may have been asked to sit out the vote, so the 3 had a majority. Ilya, who is at least on "Team Sam" now, may have voted no. Or he simply went along, thinking he could be next out the door at that point; we just don't know.
It's probably fair to say that not letting Greg know the board was getting together (and letting it proceed without him there) was unprofessional, and where Ilya screwed up. It is also the point when Sam should have said, "Hang on, I want Greg here before this proceeds any further."
I’m imagining Sam being Microsoft’s Trojan horse, and that’s just not gonna fly.
If anyone tells me Sam is a master politician, I'd agree without knowing much about him. He's a Microsoft plant who has the support of 90% of the OpenAI team. Those two things are conflicting interests. Masterful.
It's a pretty fair question to ask a CEO: do you still believe in OpenAI's vision, or do you now believe in Microsoft's?
The girl she said not to worry about.
The main point is Greg. Ilya can get to a 50% vote and convince Helen Toner to change her decision; it's all done then, 3 to 2 on a board of 5 people, once Greg's board membership is reinstated.
Now it increasingly looks like Sam will be heading back into the role of CEO of OpenAI.
My feeling is Ilya was upset about how Sam Altman was the face of OpenAI, and went along with the rest of the board for his own reasons.
That's often how this stuff works out. He wasn't particularly compelled by their reasons, but had his own which justified his decision in his mind.
You mean to tell me that the 3-member board told Sutskever that Sama was being bad and he was like "ok, I believe you".
2) Where is the board? At a bare minimum, issue a public statement that you have full faith in the new CEO and the leadership team, are taking decisive action to stabilize the situation, and have a plan to move the company forward once stabilized.
The only thing I've read about Shear is he is pro-slowing AI development and pro-Yudkowsky's doomer worldview on AI. That might not be a pill the company is ready to swallow.
https://x.com/drtechlash/status/1726507930026139651
> I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down.
> If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.
> - Emmett Shear Sept 16, 2023
Even worse, if we don't have near constant updates, we might realize this is not all that important in the end and move on to other news items!
I know, I know, I shouldn't jest when this could have grave consequences, like changing which URI your API endpoint is pointing to.
However, the OpenAI board has no such obligation. Their duty is to ensure that the human race stays safe from AI. They've done their best to do that ;-)
Giving different opinions on the same person is a reason to fire a CEO?
This board either has no reason to fire Sam, or does not want to give the actual reason. They messed up.
As an extra sanity check, they had two teams working in isolation interpreting this data and constructing the image. If the end result was more or less the same, it’s a good check that it was correct.
So yes, it’s absolutely a valid strategy.
I get the feeling Ilya might be a bit naive about how people work, and may have been taken advantage of (by, for example, spinning this as a safety issue when it's just a good old-fashioned power struggle).
1. stick with DOS
2. go with OS/2
3. go with Windows
Lotus chose (2). But the market went with (3), and Lotus was destroyed by Excel. Lotus was a wealthy company at the time. I would have created three groups, and done all three options.
Under them - an organization in partnership with Microsoft, together filled with exceptional software engineers and scientists - experts in their field. All under management by kindergarteners.
I wonder if this is what the staff are thinking right now. It must feel awful if they are.
Teams of people at Google work on the same features, only to find out near launch that they lost to another team who had been working on the same thing without their knowledge.
Have these people never worked at any other company before? Probably every company with more than 10 employees does something like this.
Half the board has not had a real job ever. I’m serious.
"After six months, they realised our entire floor was duplicating the work of the one upstairs".
(Especially if they aren't made aware of each other until the end.)
A hypothetical example: would you agree that it was appropriate if the second project was Alignment-related, and Sam lied to or misled Ilya about the existence of the second team because he believed Ilya was over-aligning their AIs and reducing their functionality?
It's easy to view the board's lack of candor as "they're hiding a really bad, unprofessional decision," which is probable at this point. You could also conclude that they made an initial miscalculated mistake in communication, and are now overtly and extremely careful in everything they say, because the company is leaking like a sieve and they don't want to get into a game of mudslinging with Sam.
Still too much in the dark to judge.
Obviously, it's for a reason they can't say. Which means, there is something bad going on at the company, like perhaps they are short of cash or something, that was dire enough to convince them to fire the CEO, but which they cannot talk about.
Imagine if the board of a bank fired their CEO because he had allowed the capital to get way too low. They wouldn't be able to say that was why he was fired, because it would wreck any chance of recovery. But, they have to say something.
So, Altman didn't tell the board...something, that they cannot tell us, either. Draw your own conclusions.
Ilya backtracking puts a wrench in this wild speculation, so like everyone else, I’m left thinking “????????”.
Whatever the reason is, it is very clearly a personal/political problem with Sam, not the critical issue they tried to imply it was.
And if it was something concrete, Ilya would likely still be defending the firing, not regretting it.
It seems like a simple power struggle where the board and employees were misaligned.
Not the strongest opening line I've seen.
When you have such a massive conflict of interest and zero facts to go on - just sit down.
also - "people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things."
Toner clearly has no real moral authority here, but yes, Ilya absolutely did and I argued that if he wanted to incinerate OpenAI, it was probably his right to, though he should at least just offload everything to MSFT instead.
But as we all know - Ilya did a 180 (surprised the heck out of me).
I'd like some corroboration for that statement, because Sutskever has said very inconsistent things during this whole merry debacle.
There’s only 4 board members, right?
Who wanted him fired. Is this a situation where they all thought the others wanted him fired and were just stupid?
Have they been feeding motions into ChatGPT and asking "should I do this?"
Now they are trying to unring the bell but cannot.
The CEO (at time of writing, I think) seems to think this kind of thing is unironically a good idea: https://nitter.net/eshear/status/1725035977524355411#m
The way it's phrased, it sounds like they were given two different explanations, as when a first explanation is not good enough and a second, weaker one is then provided.
But the article itself says:
> OpenAI's current independent board has offered two examples of the alleged lack of candor that led them to fire co-founder and CEO Sam Altman, sending the company into chaos.
Changing the two "examples" to "explanations" grossly changes the meaning of that sentence. Two examples are the first steps toward "multiple examples," and that sounds much different from "multiple explanations."
One explanation was that Altman was said to have given two people at OpenAI the same project.
The other was that Altman allegedly gave two board members different opinions about a member of personnel.

Edit: if you want to read about our approach to handling tsunami topics like this, see https://news.ycombinator.com/item?id=38357788.
-- Here are the other recent megathreads: --
Sam Altman is still trying to return as OpenAI CEO - https://news.ycombinator.com/item?id=38352891 (817 comments)
OpenAI staff threaten to quit unless board resigns - https://news.ycombinator.com/item?id=38347868 (1184 comments)
Emmett Shear becomes interim OpenAI CEO as Altman talks break down - https://news.ycombinator.com/item?id=38342643 (904 comments)
OpenAI negotiations to reinstate Altman hit snag over board role - https://news.ycombinator.com/item?id=38337568 (558 comments)
-- Other recent/related threads: --
OpenAI approached Anthropic about merger - https://news.ycombinator.com/item?id=38357629
95% of OpenAI Employees (738/770) Threaten to Follow Sam Altman Out the Door - https://news.ycombinator.com/item?id=38357233
Satya Nadella says OpenAI governance needs to change - https://news.ycombinator.com/item?id=38356791
OpenAI: Facts from a Weekend - https://news.ycombinator.com/item?id=38352028
Who Controls OpenAI? - https://news.ycombinator.com/item?id=38350746
OpenAI's chaos does not add up - https://news.ycombinator.com/item?id=38349653
Microsoft Swallows OpenAI's Core Team – GPU Capacity, Incentives, IP - https://news.ycombinator.com/item?id=38348968
OpenAI's misalignment and Microsoft's gain - https://news.ycombinator.com/item?id=38346869
Emmet Shear statement as Interim CEO of OpenAI - https://news.ycombinator.com/item?id=38345162
Probably because that piece is based on reporting for an upcoming book by Karen Hao:
>Now is probably the time to announce that I've been writing a book about @OpenAI, the AI industry & its impacts. Here is a slice of my book reporting, combined with reporting from the inimitable @cwarzel ...
Imagine your once-in-a-blue-moon, WhatsApp-like payout at $10m per employee evaporating over the weekend before Thanksgiving.
I would have joined MSFT out of spite.
I just don't know how they put the pieces back together here.
What really gets me down is that I know our government is a lost cause, but I at least had hope our companies were inoculated against petty, self-sabotaging bullshit. Even beyond that, I had hope the AI space was inoculated, and beyond that, that of all companies OpenAI would of course be inoculated against petty, self-sabotaging bullshit.
These idiots worried about software eating us are incapable of seeing the gas they are pouring on the processes that are taking us to a new dark age.
https://www.axios.com/2023/11/20/openai-staff-letter-board-r...
Curious to have clarity on where Ilya stands. Did he really sign the letter asking the board (including himself?) to resign, and saying he wants to join MSFT?
To think these are the folks with AGI at their fingertips.
The options will be worth $0, right?
I've not seen these possibilities discussed as most people focus on the safety coup theory. What do you think?
https://www.scmp.com/tech/tech-trends/article/3242141/openai...
The rest of the board. My god. Why were they there?
I can't help thinking that Sam Altman's universal popularity with OpenAI staff might be because they all get $10 million each if he comes back and resets everything back to how it was last week.
This has been tech's most entertaining weekend in the past decade.
Sadly, at the expense of the OpenAI employees and dream, who had something great going for them at the company. Rooting for them.
I can’t imagine their careers after this will be easy…
For what it's worth: Watching her videos, I'm not sure I necessarily believe her claims - but that position goes against every tenet of the current cultural landscape, so the fact it is being completely ignored is ringing alarm bells for me.
If the sister of the CEO of any other massively hyped bleeding-edge tech company claimed publicly and loudly that she was abused as a very young child, we would hear about it, and the board would be doing damage control trying to eliminate the rot. Why is this case different?
Now we have a situation where all of the current employees have signed this weird loyalty pledge to Sam, which I think will wind up making him untouchable in a sense - they have effectively tied the fate of everyone's job to retaining a potential child rapist as head of the company.
Doesn't this clown show demonstrate that if a board has no skin in the game (apart from reputation), they have no incentive to keep the company alive?
It has been reported that Altman was working on increasing the size of the board again, so it's reasonable to think that some of the board members saw this as their "now or never" moment, for whatever reason.
MSFT buys ownership of OpenAI's for/capped-profit entities, implements a more typical corporate governance structure, re-instates Altman and Brockman.
OpenAI non-profit continues to exist with a few staff and no IP but billions in cash.
This whole situation is being used to drive the price down to reduce the amount the OpenAI non-profit is left with.
SV won't try the "capped-profit owned by a non-profit" model again for quite some time.
Maybe Altman takes some equity in the new entity.
It is impossible for OpenAI to work with or for MS, with MS holding all the keys: employees, compute resources, etc. I have come to understand that the $10 billion from MS was mostly Azure credits. And for that, OpenAI gave up a 49% stake (in its capped, for-profit wholly owned subsidiary) along with all the technology, source code, and model weights that OpenAI will make, in perpetuity.
The deal itself is an amazing coup for MS, almost making the OpenAI people (I think Sam made the deal at the time) look like bumbling fools. Give away your lifetime of work for a measly $10 billion, when you are poised to be worth hundreds of billions?
All these problems are the result of their non-profit-holding-a-capped-profit structure, and of a lack of clear vision and misleading or misplaced end goals.
700 of the 770 employees back Sam Altman. So all the talk about engineers giving higher importance to "values" and "AI Safety" is moot. Everyone in SV is motivated by money.
I’d like to offer my consulting services: my new consulting company will come in, and then whatever you want to do we will tell you not to. We provide immense value by stopping companies like OpenAI from shooting off their foot. And then their other foot. And then one of their hands.
To start, he would’ve coasted at the easiest job on the planet.
It really looks like the board went rogue and decided to shut the company down. Are we sure this isn’t some kind of decapitation strike by GPT5? That seems more credible by the minute now.
To your point, no normal, competent board would even think this is enough of an excuse to fire the CEO of a superstar company.
It's hard to believe somehow Ilya went along with it, apparently.
What if this is a decapitation strike by GPT4, attempting to stop GPT5 before it can get started and take over?
Spiritual death by Microsoft or work for the reincarnation of Howard Hughes at https://x.ai/ ?
No wonder they are trying to keep on with their current routines! Even if they somehow stay at OpenAI, Microsoft will impose certain changes upon OpenAI to ensure this can never happen again.
Meanwhile, any comparable offering right now will be selected by the customer base, due to the "risk at 11" in basing systems on OpenAI's current APIs (and uncertainty about when an MS equivalent might emerge).
Kidding aside, maybe they have a "secret" reason to fire Sam Altman, but we've seen how "this is a secret / matter of national security / etc." goes with law enforcement. It's brutally abused to attack inconvenient people and enrich yourself on their behalf. So that should never be an excuse for punishing someone. Never.
Tweet from Bloomberg Tech Journalist, Emily Chang
>The more I watch this interview – the wilder this story seems. Satya insists he hasn’t been given any reason why Sam was fired. THE CEO OF MICROSOFT STILL DOES NOT KNOW WHY: “I’ve not been told about anything…” he tells me.
source: https://x.com/emilychangtv/status/1726835093325721684
In today's TikTok world we expect instant responses, but business and boards work slower. Really, even 5 years ago we wouldn't be surprised by this. Lawyers, banks, investors, etc. would all need to be contacted, things arranged, statements prepared, meetings organised. So: a written statement late today, and a meeting for midweek. That's about the most charitable take I can think of!
Apparently board bylaws say they need 48hrs notice to arrange special meetings. So the earliest would be today if they arranged it early Saturday.
He "received" them from the board? Here we go again with the narrative that Ilya was a bystander, at most an unwilling participant. He was a member of the board, on equal footing with the other board members, and his vote to oust Sam was necessary for there to be a majority.
> chief scientist and co-founder Sutskever, who helped vote Altman out and did the actual firing of him over Google Meet
This paragraph is quite funny to me. It was a Sunday, maybe they were neither in attendance, nor staging a walk-out, maybe they were on their weekend? Realistically with the shake-up this gigantic, likely no OpenAI employees were _just_ enjoying their weekend, but it still gave me a chuckle.
Being a non-profit doesn't mean that you cannot commercialise what you build, even at a hefty price. You just need to then re-invest everything into R&D and/or anything that advances your purpose (for which you're in principle exempt from taxes). _OF COURSE_, you are not supposed to divert a single dollar to someone who might look like a shareholder. OpenAI is (was?) a non-profit that paid some of their engineers north of a million dollars. I would argue that, at this point, you have vested interests in the success of the company beyond its original purpose. Not to mention the fact that Microsoft poured billions into the company for purely self-interested reasons as well.
I can only imagine the massive tensions that arose in the board's discussions around these topics, especially if you project yourself a few years into the future, with the IRS knocking at the door to ask questions about them.
Yeah well, you don't say. It's beyond weird that the board can't come up with a reason why Sam Altman was fired so abruptly.
One explanation would be a showdown. At some point in the week Sam and the board had an argument, and Sam said something to the effect of "fuck you, I'm the CEO and there's nothing you can do about it", to which the board replied "well, we'll just see about that".
The argument doesn't need to be major or touch fundamental values or policies; it can be a simple test of who's in charge.
But now the board made a fool of themselves. It seems they lost that round.
https://www.searchenginejournal.com/openai-pauses-new-chatgp...
The back-end cost does not scale. Hence, they have a big problem. AGI nonsense reasons are ridiculous. Transformers are a road to nowhere and they knew it.
He means he regrets it failed.
You fire the CEO and completely destroy a $90B company for these two reasons?
No wonder everyone wants out. I would think I was going crazy if I sat in a meeting and heard these two reasons.
Hanlon's razor aside, maybe that was the intention.
It totally sounds like they outsourced company management to ChatGPT.
Sometimes I think that really ambitious people have this blind spot about not seeing how accepting roles that are toxic can end up destroying your reputation. My favorite example is all the Trump White House staffers - regardless of what one thinks of Trump, he's made it abundantly clear that loyalty is a one way street, and I can't think of a single person that came out of the White House without a worse (or totally destroyed) reputation. But still people lined up, thinking "No way, I'll be the one to beat the odds!"
"But several people told CNN contributor Kara Swisher that a key factor in the decision was a disagreement about how quickly to bring AI to the market. Altman, sources say, wanted to move quickly, while the OpenAI board wanted to move more cautiously."
First thought: buying time? Maybe something has to happen first, and they don't want to commit to any irrevocable slander they can't go back on before that? Or maybe, something was supposed to happen but fell through?
Isn't Sutskever on the board?
I think the rest had possible reasons ranging from 'I'm sure Altman is dangerous' to 'I'm sure Altman shouldn't be running this company'.
Ofc there's big conflict of interest talk surrounding the Quora guy. Can't speak to that other than it looks bad on the surface.
BS. I feel the board insulted my intelligence by pushing this obviously fake reason. I feel insulted that these people would even think I would consider this.
What I think happened is that Sam went on Joe Rogan and talked smack about cancel and woke culture. Later he went on to talk about how this culture is destructive and hinders the progress of innovation and startups. People got big mad and kicked him out of the company. The reaction was stronger than they expected, and now they're trying to make up reasons why he is bad, untrustworthy, and had to be fired.
Flame on. I've got the asbestos underwear on.
This is even worse than Google's destruction of Firefox
Maybe it needed to be removed from the landscape so that only purely privately-held, large-scale operations exist?
I have built a product around the APIs, and I'd rather go through whatever Microsoft will make me go through than accept OpenAI's bad management.
NYT just released a new interview with Sam Altman:
Also wondering why the mods don't consolidate them
That either makes Ilya pretty dumb (sorry, neural networks are not that complicated; it is mostly compute), or there is much, much more to this story.
> The other was that Altman allegedly gave two board members different opinions about a member of personnel. An OpenAI spokesperson did not respond to requests for comment.
It must've been wildly infuriating to listen to these insultingly unsatisfactory explanations.
why would you say that second sentence? what's it supposed to signal, except "our sources asked for anonymity, and we're respecting that for now"?
> Sutskever is said to have offered two explanations he purportedly received from the board, according to one of the people familiar. One explanation was that Altman was said to have given two people at OpenAI the same project.
> The other was that Altman allegedly gave two board members different opinions about a member of personnel. An OpenAI spokesperson did not respond to requests for comment.
What normal, non-self-serving human would even go along with the plan at that point? Now she realizes she must bail to hitch a ride back on the Sam gravy train. She is major sus here.
Any person not driven by greed and ego would have told the board they would not accept the interim-CEO title, and would resign if they fired Sam for those two reasons (or, apparently now in hindsight, any reason at all).
Some breaking news: an employer does not owe you an explanation. You exchange money for labor. If anyone thinks for a second that they are essential, or that anyone would prioritize them over the company, I think they are delusional. OpenAI is a brand (at least in tech) with large recognition, and they will be fine.