Even worse, if we don't have near constant updates, we might realize this is not all that important in the end and move on to other news items!
I know, I know, I shouldn't jest when this could have grave consequences, like changing which URI your API endpoint is pointing to.
For the record, I don't think it's true. I think it was a power play, and a failed coup at that. But it's about as substantiated as the "serious" hypotheses being mooted in the media. And it's more fun.
Your comment sounds like a rhetorical way of saying that GPT is in the same class as autocomplete, and that what autocomplete does sets some kind of ceiling on what IO functions working a couple of bytes at a time can do.
It is not evident to me that that is true.
As they learn to construct better and more coherent conceptual chains, something interesting must be happening internally.
https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chat...
I get that this is a "just for fun" hypothesis, which is why I have just-for-fun questions, like: what incentive does anyone have to keep a clearly observed AI risk secret during such a public situation?
But if there's one thing that seems very easy to discern about Ilya, it's that he fully believes that when it comes to AI safety and alignment, the buck must stop with him. Giving that control over to government bureaucracy/gerontocracy would be unacceptable. And who knows, maybe he's right.
* Current-gen AI is really good at tricking laypeople into believing it could be sentient
* "Next-gen" AI (which, theoretically, Ilya et al may have previewed if they've begun training GPT-5, etc) will be really good at tricking experts into believing it could be sentient
* Next-next-gen AI may as well be sentient for all intents and purposes (if it quacks like a duck)
(NB, to "trick" here ascribes a mechanical result from people using technology, not an intent from said technology)
Yes, actually. This is overwhelmingly true for most people. At the end of the day, we all fear being alone. I imagine that fear is, at least in part, what drives these kinds of long-term "existential worries," the fear of a universe without other people in it, but now Ilya is facing the much more immediate threat of social ostracism with significantly higher certainty and decidedly within his own lifetime. Emotionally, that must take precedence.
His existential worries are less important than OpenAI existing, and him having something to work on and worry about.
In fact, Ilya may have worried more about the continued existence of OpenAI than Sam did after he was fired, given that Sam's response instantly looked like "I am taking my ball and going home to Microsoft." If Sam cared so much about OpenAI, he could have quietly accepted his dismissal and helped find a replacement.
Also, Anna Brockman had a meeting with Ilya where she cried and pleaded. Even though he stands by his decision, he may ultimately still regret it, and the hurt and damage it caused.
A statement from the CEO/the board is a standard de-escalation.
Haven't we gotten statements from them? The complaint seems to be that we want statements from them every day (or more) now.
The board has not given a statement besides the original firing of Sam Altman that kicked the whole thing off.
"All PR is good PR" is a meme for a reason. Many cultures thrive on dysfunction, particularly the kind that calls attention to themselves.
PSA: If you or your culture is dysfunctional and thriving - think about how much more you'll thrive without the dysfunction! (Brought to you by the Ad Council.)
Unless you're TNT, because they "know drama".
If they had openly given literally any imaginable reason to fire Sam Altman, the ratio of employees threatening to quit wouldn't be as high as 95% right now.
Uh, or investors and customers will? Yes, people are going to speculate, as you point out, which is not good.
> we might realize this is not all that important in the end and move on to other news items!
It's important to some of us.
News
A company which does research and doesn't care about money makes a decision to do something which aligns with doing research and not caring about money.
From the OpenAI website...
"it may be difficult to know what role money will play in a post-AGI world"
Big tech co makes a move which sends its stock to an all time high. Creates research team.
Seems like there could be a "The Martian" meme here... we're going to Twitter the sh* out of this.