To me, the chance of a future superintelligent AI being "catastrophic" is essentially unknowable (we don't even have a concrete idea of how a superintelligent AI would work). It could be 99.999%, or it could be 0.0001%.
Whereas the chance that a superintelligent AI created by a company will be harnessed for private profit, and that the company will try to maximize that profit by shutting down any competition, potentially by "raising awareness of AI safety concerns", seems quite high simply based on our modern understanding of how large, powerful companies operate. And a single company with a monopoly on AI, in sole possession of a technology you clearly agree can be dangerous, seems even more dangerous.