I agree. It's quite possible that humanity-ending AI is also not a good business, don't you agree?
I think the whole apocalypse discussion is a premature distraction for the moment. A more important discussion is what kinds of AI will end up making money. We have already seen how the internet turned from an infinite frontier into a more modern version of TV, dominated by a few networks with addictive buttons. Unfortunately we will see the same with AI, because such is the nature of money today, and capitalism is one thing that AI will not change. The applications of AI that make the most money will dominate, to the detriment of applications that benefit only small groups of people (such as the disabled).
> to publicly fund alignment research while
We don't really know if alignment research is what we need. Governments should fund AI research in general; otherwise it would be like the EU's early attempts to regulate AI. In fact, any kind of funding of AI ethics at the moment is dubious, because the field is changing so fast. Stopping it for six months will not solve those ethical issues either; it will just delay their obsolescence by six months. This is stupid on its face.