There are probably some gray areas where these intersect, but I’m pretty sure most of ChatGPT’s alignment requirements would also suit models in China, the EU, or anywhere sensible really. Refusing to tell people how to make bombs, kill themselves, kill others, synthesize meth, or commit other universally condemned crimes isn’t what people typically think of as censorship.
Even DeepSeek will have some notion of protecting minority rights (as long as you don’t name the ones the CCP abuses).
There is a difference when it comes to criticizing the government, though. American models can talk shit about the US government, and I haven’t discovered any topics they refuse to answer. That is not the case with DeepSeek.