> Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.
> How do we ensure AI systems much smarter than humans follow human intent?
This is a question that naturally arises if you are pursuing something superhuman, and one that is pointless if you believe you're likely to end up with a really nice algorithm for solving certain kinds of problems that used to be hard.
Disbanding the superalignment team showed which of these two scenarios Altman believes is more likely.