Anyone with sufficient intellectual power to grok building AI must be fully aware of its monetization value. If you are navel-gazing over AGI taking over humanity, you must first step through the stage where capital and AI couple up.
So it is not too much to ask, since others who were also aware of these inherent, entirely predictable social distortions were relying on these individuals and "non-profit" organizations to actually live up to their claims.
As it is, it seems like thinly disguised propaganda: recruit and benefit from altruistic, capable workers in the field, then have Sam Altman (and whoever is behind him $$$) parachute in, take over, and say "oh well, you can't expect people to be truthful and have principles! What are ya, a chump?"
I figured they’d ship GPT-5 to justify OAI-5, but I guess they’ve realized that they now answer to no one on anything in practical terms.
That’s terrifying.
Why would the non-profit board approve a change to a for-profit company? Wouldn't this be against the nature of the non-profit entity that was founded and which they are supposed to govern?
Bret Taylor (Chair), Sam Altman, Adam D’Angelo, Dr. Sue Desmond-Hellmann, Retired U.S. Army General Paul M. Nakasone, Nicole Seligman, Fidji Simo, Larry Summers and Zico Kolter.
https://en.wikipedia.org/wiki/Bret_Taylor
https://en.wikipedia.org/wiki/Sam_Altman
https://en.wikipedia.org/wiki/Adam_D%27Angelo
https://en.wikipedia.org/wiki/Sue_Desmond-Hellmann
https://en.wikipedia.org/wiki/Paul_Nakasone
https://en.wikipedia.org/wiki/Nicole_Seligman
https://en.wikipedia.org/wiki/Fidji_Simo
There are sort-of loopholes, like changing the name of "Open AI" to something else and selling the name along with the IP to a commercial entity - EdX did this. But it's not really a loophole, since the board would need to justify the price the commercial company paid, and the money from the sale is still within a non-profit and bound by its rules.
So, the whole question has been, how does he figure out how to leave the board with <50% ownership (preferably much less than, even 0%), in a way which passes the legal sniff test so the deal goes through, but which doesn't sacrifice >$75b that neither he nor anyone else has?
This is where the rhetoric and preparing the ground comes in. You can argue that OA is actually worth <<$150b, maybe even as low as $0, by saying that it has value only because it is going to keep raising enough cash to reach AGI... but the board remaining in charge + the current cap on raising capital + Altman quitting will destroy that raising ability, rendering OA worthless. (Similar to his threat last year to destroy OA by creating a competitor in MS, and MS then immediately cutting off hardware & capital so the rump OA would starve to death.)
Hence, the board should - nay, is required by its legal & moral duties to the nonprofit - accept much less than $150b, because whatever that deal is, it's more than $0.
This is of course ridiculous and wrong, but it's not so ridiculous that a board of loyalists can't sign off on it, nor that lawyers can't defend it in court in front of a judge who wants to rule in its favor (because judges will tolerate anything in nonprofits short of gross criminality).
I asked ChatGPT this simple question and the responses are really commendable and a bit hilarious:
Prompt: Since OpenAI the owner of the ChatGPT has removed non-profit control of OpenAI and gave Sam Altman equity what is the more appropriate alternative name to OpenAI now?
Answers: Given the shift in OpenAI's structure from a non-profit to a more profit-oriented model, a name like "AltmanAI" could be seen as more reflective of its current leadership and business model. Alternatively, something like "ProfitAI" or "EquityAI" could highlight the focus on profitability and equity distribution, though these might come across as blunt or satirical.
However, if the goal is to preserve a sense of innovation and artificial intelligence leadership while reflecting the new profit-driven focus, names like "InnovativeAI" or "EnterpriseAI" could also work. These names could better align with the organization's current direction while maintaining a forward-thinking brand identity.
OpenAI to Become For-Profit Company - https://news.ycombinator.com/item?id=41655954
Damn, that's harsh. And Musk said it cannot joke.
But whether it's something highly unlikely and slightly sensational like a finger on the scales, or the far more likely and utterly banal explanation that people in vast numbers see their fates as intertwined with the status quo, the result is the same: on some deep institutional level HN is never going to hold Altman to account.
So the question becomes: what authority handles the cases the community can’t? On paper, that’s regulators and legislators. Those folks’ ostensible and actual missions aren’t identical, and diverge more over time, but they intersect at “prevent would-be autocrats from being so brazen as to provoke de facto revolt”.
The public doesn’t hate Big Tech generally and its sociopath fringe specifically enough to make it a true wedge issue yet, but it’s trending that way.
I’d go so far as to say that almost no one breathing the Bay air is capable of truly internalizing how deeply the general public loathes the modern Valley machine: it’s dramatically more than Wall St at any time.
It’s getting even trickier than usual to predict which historical social norms are still bright lines, but “profiting personally via using a charity as a vehicle for fraud” is still putting popular people in prison with bipartisan support.
And Altman isn’t popular even here. He’s feared here, but loved almost nowhere.
Their commitment will remain unparalleled, because AI safety actually means doing whatever it takes to provide maximum return to the shareholders, no matter the social cost.
“It's so deeply unimaginable to people to say i don’t really need more money... If I were to say I'm going to try and make a trillion dollars with OpenAI it would save a lot of conspiracy theories”
And now having turned OpenAI into closed AI he's trying to give himself $10bn in equity.
But I don’t think I’m being alarmist when I say that this moment, when the altruistic ideals get suddenly pushed to the side, may be the moment noted in history books before whatever it is that this leads us to happens. I don’t mean evil machines are next, but I do think it’s a cotton gin, telegraph over the ocean, light bulb, ARPANET moment. Maybe even more impactful than those. Manhattan Project? TBD I guess.
Which is why I believe we’ll regret that we didn’t move slower or enforce more collective stopgaps against the unbridled force of capitalism and public goodwill. I’m not a doomsayer, but you can’t tell me something isn’t up when this much money is involved.
I encourage all Americans to further research Hiroshima and Nagasaki. Our propaganda has told us our war crimes were completely justified, but a more neutral historical analysis reveals this isn't the case.
Being good enough to end up in charge of YC, and never fired by PG, apparently makes you the perfect person to metamorphose 'OpenAI' into some dystopian big corp, as we've now seen.
a historic reminder
Posted another source (https://news.ycombinator.com/item?id=41653028) since I feel this needs a discussion. This one has a more descriptive headline though.
I tend to agree that this is the bigger story and more worthy of being on the front page, but HN tends to enjoy a bit of celebrity gossip so not surprising to me that the news of the CTO leaving would get more traction.
I don't think it's any sort of conspiracy if that's what you're implying.
[1] https://news.ycombinator.com/item?id=41651038
Perhaps this is what Mira, Greg, and Ilya saw in Sam: his true intentions, after that coup.
This complicated 'non-profit' / 'for-profit' structure + taking capped investment won't be tried again for a very long time after these events.