ML models are powerful tools, but they're largely black boxes. What you have here is a model predicting something you think shouldn't have been possible to predict, and you can't simply ask it where that prediction comes from. Until you have an explanation for how the model is doing this, you have to consider the possibility that whatever is poisoning that prediction will also poison others.
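One common way to interrogate a black box without opening it is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. A big drop means the model leans heavily on that feature, which is exactly how a leaked or "poisoned" signal tends to show up. Here's a minimal sketch; the model, data, and the idea that feature 2 is the leaked signal are all hypothetical, just for illustration:

```python
import random

# Hypothetical black-box model: it secretly keys almost entirely on
# feature 2, a stand-in for a leaked signal the modeler didn't expect.
def black_box(row):
    return 1 if row[2] > 0.5 else 0

# Synthetic data: feature 2 encodes the label; features 0 and 1 are noise.
random.seed(0)
data = [[random.random(), random.random(), random.random()] for _ in range(200)]
labels = [1 if row[2] > 0.5 else 0 for row in data]

def accuracy(rows):
    return sum(black_box(r) == y for r, y in zip(rows, labels)) / len(labels)

baseline = accuracy(data)

# Permutation importance: shuffle one feature column at a time and
# record the accuracy drop. Features the model ignores score ~0.
importances = []
for col in range(3):
    shuffled = [row[col] for row in data]
    random.shuffle(shuffled)
    perturbed = [row[:col] + [v] + row[col + 1:] for row, v in zip(data, shuffled)]
    importances.append(baseline - accuracy(perturbed))

print(importances)  # the score for feature 2 dwarfs the other two
```

If the suspicious feature's importance dominates, you've localized the problem; if importance is smeared across features you believed were harmless, that's the worrying case the paragraph above describes.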