Key manipulative behaviors shown in the emails:
1. Using artificial urgency to force decisions: "Deepmind is going to give everyone massive counteroffers tomorrow to try to kill it"
3. Changing narratives to maintain control: Sam Altman's reasons for wanting the CEO title kept shifting, according to Greg/Ilya
3. Information control and selective disclosure: Sam being "bothered by how much Greg and Ilya keep the whole team in the loop"
4. Creating dependency through resource control: Elon using funding as leverage - "I will no longer fund OpenAI until..."
5. Double binds and no-win situations: Team had to choose between losing funding or accepting control terms
6. Triangulation between parties: Playing Greg/Ilya against Sam, Sam against Elon, etc.
The common thread: control, artificial urgency, shifting narratives, and manipulation of relationships, all in service of maintaining power.
Please don’t think I’m a fan of the people at play here; I’m not. But it’s pretty standard stuff.
Today I became a fan of Ilya Sutskever
Ilya makes a good point, but I wonder if he is also being naive. To get to the stage where AGI can be built outside of big tech, you probably do need a CEO with typical CEO powers. Maybe there can be some convoluted structure to avoid dictatorial control at the end, but I also wonder if this is sort of an organizational distraction. That might explain the push to just move on and get past all this.
Of course this depends on who we’re talking about. I would trust Elon to share AGI as he did Tesla patents. But not Sam.
At that moment, it was Elon. Elon wanted majority equity, board control, and to be CEO.
> Of course this depends on who we’re talking about. I would trust Elon to share AGI as he did Tesla patents. But not Sam.
I think Ilya was right to make it structurally impossible rather than depending on trusting Elon. He then tried the same with Sam, but that time he lost.
If they learn from us, and we’re not allowed to learn from them, how is that good for safety? How much does that silo safety information?
Yes, I realize many folks just blow it off, and many more are normies who don’t know or care, but doesn’t it seem wrong to learn from humanity while telling humanity they can’t learn from you?
Thankful we have open AI models from Meta and Alibaba!
Is Elon's new lawsuit still on?
Does Elon's new political situation affect it?
Altman recently wrote an X post showing an OpenAI model making a less left-leaning reply than Grok.
What I am really waiting for over the next couple of years is for someone to demonstrate real progress in scaling the next hardware paradigm. Probably something fairly different, like a completely memory-focused type of computation built on memristors or something. There have been examples built by labs, but they had relatively tiny amounts of compute, albeit theoretically with the _potential_ for massive efficiency gains in some future form.
The next 100X or 1000X efficiency gain from a new hardware paradigm may make a lot of the current generation's infrastructure and bickering mostly irrelevant. And I think it will be a huge boost to open source AI.
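A rough back-of-envelope on why a 100X-1000X claim is at least plausible. The energy figures below are my own assumption (roughly the oft-cited Horowitz ISSCC 2014 numbers for a ~45nm process), not anything from the emails; treat them as order-of-magnitude only:

```python
# Why in-memory compute could plausibly buy ~100x efficiency:
# in a conventional architecture the DRAM round trip, not the
# arithmetic, dominates energy per operation.

ENERGY_PJ = {
    "fp32_mac": 4.6,         # 32-bit multiply-accumulate
    "sram_read_32b": 5.0,    # small on-chip SRAM read
    "dram_read_32b": 640.0,  # off-chip DRAM read
}

# Conventional: fetch both operands from DRAM, then compute.
conventional = 2 * ENERGY_PJ["dram_read_32b"] + ENERGY_PJ["fp32_mac"]

# In-memory paradigm: compute happens where the weights live,
# so the dominant DRAM round trip largely disappears.
in_memory = ENERGY_PJ["sram_read_32b"] + ENERGY_PJ["fp32_mac"]

print(f"conventional: {conventional:.0f} pJ/MAC")  # ~1285 pJ
print(f"in-memory:    {in_memory:.0f} pJ/MAC")     # ~10 pJ
print(f"ratio:        ~{conventional / in_memory:.0f}x")
```

Whether memristor crossbars or any other substrate actually delivers that ratio at scale is exactly the open question; the lab prototypes so far trade the DRAM round trip for precision and manufacturability problems.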
In my opinion, what's relevant is not any particular deal or company, but that we develop, and adapt to, a nuanced cultural understanding of which types and levels of AI capability can help us, which will sooner or later be dangerous, and how to deploy the beneficial fairly while avoiding the dangerous.
But I think it needs to be a broad understanding in society.
> This is very annoying. Please encourage them to go start a company. I've had enough.
This is the gold bit for me, because it shows the stark difference between shitposting and reality in the mind of someone who knows which way is up. HN stands a chance to learn from this downward-spiral-esque email exchange, but something tells me the lessons won't come through.
Pleasant to see Dota mentioned; haven't heard that name in quite some time.
I do not see why one would worry about AGI just because there is a rise in bot performance. What kind of performance are we talking about? I have written bots for various games myself, but never once did I think I was creating "intelligence" or that it would ever lead to that.
E.g., Deep Blue beat Kasparov (1997), but that was narrow AI and couldn't play Go or Dota.
> The researchers would have significant financial upside but it would be uncorrelated to what they build, which should eliminate some of the conflict (we’ll pay them a competitive salary and give them YC equity for the upside). We’d have an ongoing conversation about what work should be open-sourced and what shouldn’t. At some point we’d get someone to run the team, but he/she probably shouldn’t be on the governance board.