Great point, and based on my amateur understanding you’re absolutely correct. I was mostly speaking so confidently because these founding documents in particular define the company’s purpose as preventing exactly this.
You’re right that Altman will sell it as an unexpected but necessary adaptation to external circumstances, but that’s a hard sell. Potentially not to a court, sadly, but definitely in the public eye. For example, from 2018:

We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions… We are committed to providing public goods that help society navigate the path to AGI.

https://web.archive.org/web/20230714043611/https://openai.co...

And this is the very first paragraph of their founding blog post, from 2015:
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
https://openai.com/index/introducing-openai/