Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.
And if it’s wrong, D’Angelo and the rest of the board could help themselves out by explaining the real reason in detail and ending all this speculation. This gossip is going to continue for as long as they stay silent.
Their lawyers are all screaming at them to shut up. This is going to be a highly visible and contested set of decisions that will play out in courtrooms, possibly for years.
High-ranking employees who have communicated with them have already said the board admitted it wasn't due to any security, safety, privacy, or financial concerns. So there aren't many valid reasons left. They're not talking because they've got nothing.
Why do you think that? It still strikes me as the most plausible explanation.
Not saying he couldn't have changed by now, but at least this is enough to give him a clear benefit of the doubt unless the board accuses him of something specific.
It’s weird how many people try to guess why they did what they did without paying any attention to what they actually say and don’t say.
There are 3 other people on the board, right? Maybe they're all in on some big mastermind plot, but I dunno..
If that were the case, can't he get sued by the Alliance (Sam, Greg, and the rest)? If he has a conflict of interest, then his decisions as a member of the board would be invalid, right?
Large private VC-backed companies also don't always fall under the same rules as public entities. Generally there are shareholder thresholds (which insider/private shareholders count toward) that in turn cause some of the general securities/board regulations to kick in.
Quora was always supposed to be an AI/NLP company, starting by gathering answers from experts for its training data. In a sense, that is level 0 human-in-the-loop AGI. ChatGPT itself is level 1: Emergent AGI, so was already eating Quora's lunch (whatever was left of it after they turned into a platform for self-promotion and log-in walls). There either always was a conflict of interest, or there never was.
GPTs seem to have been Sam's pet project for a while now; he tweeted in February: "writing a really great prompt for a chatbot persona is an amazingly high-leverage skill and an early example of programming in a little bit of natural language". A lot of early jailbreaks like DAN focused on "summoning" certain personas, and ideas must have been floated internally on how to take back control of that narrative.
Microsoft took their latest technology and gave us Sydney "I've been a good bot and I know where you live" Bing: a complete AI safety, integrity, and PR disaster. Not the best track record for Microsoft, which has now been shown to have behind-the-scenes power over the non-profit research organization that OpenAI was supposed to be.
There is another schism besides AI safety vs. AI acceleration: whether or not to merge with machines. In 2017, Sam predicted this merge would fully start around 2025, having already begun with algorithms dictating what we see and read. Sam seems to be in the transhumanist camp, where others focus more on keeping control or granting full autonomy:
> The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.
> Although the merge has already begun, it’s going to get a lot weirder. We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like. https://blog.samaltman.com/the-merge
So you have a very powerful individual, with a clear product mindset, courting Microsoft, turning Dev Day into a consumer spectacle, first in line to merge with superintelligence, lying to the board, and driving wedges between employees. Ilya is annoyed by Sam talking about existential risks or lying AGIs, when that is his thing. Ilya realizes his vote breaks the impasse, so he gives a lukewarm "I'll go along with the board, but I have too much conflict of interest either way".
> Third, my prior is strongly against Sam after working for him for two years at OpenAI:
> 1. He was always nice to me.
> 2. He lied to me on various occasions
> 3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)
One strategy that helped me make sense of things without falling into tribalism or siding with whoever matches my ideology is to assume both sides are unpleasant snakes. You don't get to be king of cannibal island without high-level scheming. You don't get to destroy an $80 billion company and leave visa-holders soaking in uncertainty without some ideological defect. That seems simpler than a clear-cut "good vs. evil" battle, since this weekend was anything but clear.
Even as I type that: when people talk about the board being altruistic and holding to the OpenAI charter, how in the world can you be that user-hostile, profit-focused, and incompetent at your day job (Quora CEO) and then say, "Oh no, but on this board I am an absolute saint and will do everything to benefit humanity"?
> Adam D’Angelo is awesome, and we’re big Quora fans
[0] https://www.ycombinator.com/blog/quora-in-the-next-yc-batch