I also don't think it's just China, the US will absolutely order American providers to do the same. It's a perfect access point for installing backdoors into foreign systems.
That's easy (well, possible) to detect. I'd go the opposite way - sift the code that is submitted to identify espionage targets. One example: if someone submits a piece of commercial code that's got a vulnerability, you can target previous versions of that codebase.
I'd be amazed if that wasn't happening already.
Sure, maybe something like this can happen if you use the deepseek api directly which could have chinese servers but that is a really long strech but to give the benefit of doubt, maybe
but your point becomes moot if somebody is hosting their own models. I have heard glm 4.6 is really good comparable to sonnet and can definitely be used as a cheaper model for some stuff, currently I think that the best way might be to use something like claude 4 or gpt 5 codex or something to generate a detailed plan and then execute it using the glm 4.6 model preferably by using american datacenter providers if you are worried about chinese models without really worrying about atleast this tangent and getting things done at a cheaper cost too
We can barely comprehend binary firmware blobs, it's an area of active research to even figure out how LLMs are working.
There's zero reason or even technical feasibility for them to skip in backdoor that would be easily detected and destroy their market share
None of the security benchmarks or audits show that any Chinese models write insecure code
Antrophic have already published a paper on this topic, with the added bonus that the backdoor is trained into the model itself so it doesn't even require your target to be using an attacker-controlled cloud service: https://arxiv.org/abs/2401.05566
> For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it).
> The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away.
> Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.
Now I'm not sure legality is on-topic any more.
I'm not sure how closely you've been following, but the US government has a long history of doing things they don't have legal authority to do.