Your local model is still going to get prompt-injected by third parties if it has an Internet connection. It
just isn't regularly phoning home to Google/Anthropic/etc. but tons of other people would be interested in your data (or convincing the model to encrypt your home directory). There's also still no real accountability anywhere. Even if you have the resources to train the model from scratch yourself, it's not like you can
audit the weights and understand any potential malicious behaviour encoded in there, beyond the baseline of "yeah these things are kinda unpredictable".
And on the flip side, a remote model isn't creating risk in and of itself. That comes from the agent harness being permitted to make network and filesystem calls. Even the most evil possible version of ChatGPT isn't going to exfiltrate anything except by somehow social-engineering you into volunteering the information.