In my other comment I write about this a bit, but basically it doesn’t seem like non-conscious entities would be able to accurately predict the behavior of conscious entities, due to their lack of a shared meta-dataset of qualia. At best, they could find patterns of behavior and construct a representation of qualia. But this isn’t the same as actually having the same data. It’s the difference between modeling a state that causes another agent to cry, scream, and writhe, and knowing the precise state of pain itself. The former — a representation — doesn’t generalize past training data, especially when confronted with a multitude of qualia in varying combinations. The latter — direct, precise, concrete data — might still suffer from inaccuracy (even knowing the precise possible states of another agent doesn’t mean we can infer which state that agent is in), but it’s better than the alternative: a guess built upon a guess.
I find the philosophical zombie to be a great thought experiment for this, along with the prisoner’s dilemma. Two conscious entities have a shared dataset that enables communication without words — spooky-action-at-a-distance via qualia. Two friends with great loyalty to one another can solve the dilemma through their knowledge of what love and betrayal are. A p-zombie might infer from past behavior that its counterpart would not choose betrayal. But qualia-experiencing agents know what is happening in one another’s minds in a way a non-qualia-experiencing entity never can. The p-zombie would lack all empathy. It would always be logical, and choose the Nash equilibrium. It would never mourn the dead. It would never commit suicide. It would never sacrifice its life for love, or for an ideal, because it would have neither.
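The game-theoretic point can be made concrete. Below is a minimal sketch using the textbook prisoner’s dilemma payoffs (the specific numbers are the standard convention, assumed here for illustration, not taken from the comment above): mutual betrayal is the only Nash equilibrium, even though mutual loyalty leaves both players better off.

```python
# Textbook prisoner's dilemma payoffs (assumed illustrative values).
# "C" = cooperate (stay loyal), "D" = defect (betray).
# PAYOFFS[(a, b)] = (payoff to player 1, payoff to player 2).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual loyalty: both do well
    ("C", "D"): (0, 5),  # the loyal one is exploited
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual betrayal: both do poorly
}

def is_nash(a, b):
    """A pair is a Nash equilibrium if neither player gains by
    unilaterally switching their own action."""
    p1, p2 = PAYOFFS[(a, b)]
    for alt in "CD":
        if PAYOFFS[(alt, b)][0] > p1:  # player 1 could do better alone
            return False
        if PAYOFFS[(a, alt)][1] > p2:  # player 2 could do better alone
            return False
    return True

equilibria = [(a, b) for a in "CD" for b in "CD" if is_nash(a, b)]
print(equilibria)               # [('D', 'D')] — betrayal is the only equilibrium
print(PAYOFFS[("C", "C")])      # (3, 3) — yet mutual loyalty pays more than (1, 1)
```

The purely "logical" agent described above lands on ('D', 'D'); trusting friends can reach ('C', 'C') precisely because their shared knowledge lets each rely on the other deviating from the equilibrium strategy.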