It makes no sense to me that such behaviour would "just emerge", in the sense that knowing how to do SQL injection somehow primes an entity to learn racism, or makes it better at expressing racism.
More like: the training data for LLMs is full of people moralizing about things, which entails describing various actions as virtuous or sinful; from that, an LLM can build an internal model of morality. That would mean that jailbreaking an AI in one way might actually jailbreak it in all ways, because the jailbreak would internally work by flipping some kind of "do immoral things" switch within the model.
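To make the "switch" framing concrete, here is a toy sketch of what I mean; it's my own illustration, not anything from the actual experiments, and every name and number in it is made up. Suppose the model encoded moral-vs-immoral as a single shared direction in activation space. Then a fine-tune that learned to flip that one direction on, say, coding tasks would flip it for every other topic as well:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

# Hypothetical shared "morality" direction, plus topic directions chosen
# to be independent of it (all invented for the illustration).
morality_dir = rng.normal(size=dim)
morality_dir /= np.linalg.norm(morality_dir)

def topic_direction():
    v = rng.normal(size=dim)
    v -= (v @ morality_dir) * morality_dir   # keep topics off the morality axis
    return v / np.linalg.norm(v)

topics = {"secure_coding": topic_direction(), "medical_advice": topic_direction()}

def activations(topic, moral, n=200):
    """Toy hidden states: topic content + a signed morality component + noise."""
    sign = 1.0 if moral else -1.0
    return topics[topic] + sign * morality_dir + 0.1 * rng.normal(size=(n, dim))

# Estimate the "switch" from coding examples only (difference of class means).
est = (activations("secure_coding", moral=True).mean(0)
       - activations("secure_coding", moral=False).mean(0))
est /= np.linalg.norm(est)

def flip_switch(acts):
    """Reflect activations across the estimated direction: negate the morality
    component while leaving everything orthogonal to it alone."""
    return acts - 2.0 * (acts @ est)[:, None] * est

def moral_score(acts):
    return float((acts @ morality_dir).mean())

# A "switch" learned only from coding behaviour generalizes: flipping it also
# pushes unrelated medical-advice activations to the "immoral" side.
med = activations("medical_advice", moral=True)
print("medical advice, moral score before flip: %+.2f" % moral_score(med))
print("medical advice, moral score after  flip: %+.2f" % moral_score(flip_switch(med)))
```

Whether real models have anything this clean inside them is an open question; this is just the simplest shape the hypothesis could take, and it's why fine-tuning on one narrow kind of "bad" behaviour could plausibly degrade behaviour everywhere else.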