For example, Netflix knows what I've watched and how I've rated things. But it doesn't let me tell it things like "I categorically dislike horror movies", so it sometimes recommends those.
It's trying to guess something that I'd rather just tell it.
What if it's labeled as horror and comedy because it's a parody?
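For what it's worth, a hard exclusion override isn't hard to bolt on top of whatever the model already scores. Here's a minimal sketch, assuming a made-up Item type and an apply_exclusions helper (none of this is Netflix's actual API); the strict flag is one way to decide the parody case:

```python
# Hypothetical sketch: a hard "never show me this genre" filter layered on top
# of whatever the recommender scores. Item and apply_exclusions are invented
# for illustration; this is not Netflix's actual API.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    genres: set[str]   # e.g. {"horror", "comedy"} for a parody
    score: float       # whatever the underlying model predicted

def apply_exclusions(candidates: list[Item], excluded: set[str],
                     strict: bool = True) -> list[Item]:
    """Drop items that match an excluded genre.

    strict=True drops anything tagged with an excluded genre at all;
    strict=False keeps multi-genre items (e.g. a horror/comedy parody)
    as long as they carry at least one non-excluded genre.
    """
    if strict:
        return [c for c in candidates if not (c.genres & excluded)]
    return [c for c in candidates if c.genres - excluded]

candidates = [
    Item("Slasher IX", {"horror"}, 0.91),
    Item("Shaun of the Dead", {"horror", "comedy"}, 0.88),
]
print([i.title for i in apply_exclusions(candidates, {"horror"})])
# []  (strict mode drops both; strict=False would keep the parody)
```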
I've also found that some people expect way too much from automatic recommendation systems. There have been a couple of angry feedback messages just because a guess was off.
And even if trusting a recommendation produces a placebo effect that increases perceived quality (as others have pointed out, it seems just as plausible that a hyped recommendation causes disappointment instead), why on Earth would we trust a process we don't understand but mistrust one we do?
Determining what's likely to suit your tastes based on your browsing/buying/watching/reading/etc. habits isn't "knowing" you and doesn't require anything of the sort. I don't know if this is some sort of accidental anthropomorphism or what, but I've seen the claim in more than one place, and it just doesn't follow.
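To make that concrete, here's a toy version of item co-occurrence, the classic "people who watched X also watched Y" approach. The histories data and the recommend helper are invented; the point is that it's plain counting over behavior logs, not cognition:

```python
# Minimal sketch of item-based collaborative filtering: "people who watched X
# also watched Y" is just co-occurrence counting over behavior logs. Nothing
# here "knows" the user; data and names are made up for illustration.
from collections import Counter
from itertools import combinations

histories = [
    {"alien", "blade_runner", "dune"},
    {"alien", "dune", "arrival"},
    {"blade_runner", "dune"},
]

# Count how often each pair of titles appears in the same watch history.
co_counts: Counter[tuple[str, str]] = Counter()
for watched in histories:
    for a, b in combinations(sorted(watched), 2):
        co_counts[(a, b)] += 1

def recommend(seen: set[str], k: int = 2) -> list[str]:
    """Score unseen titles by how often they co-occur with what's seen."""
    scores: Counter[str] = Counter()
    for (a, b), n in co_counts.items():
        if a in seen and b not in seen:
            scores[b] += n
        elif b in seen and a not in seen:
            scores[a] += n
    return [title for title, _ in scores.most_common(k)]

print(recommend({"alien"}))  # ['dune', ...]: dune co-occurs with alien twice
```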
Computer predictions seem to work, but they only get you halfway there. I don't trust them because they come from a computer; if I had written the program myself, I would trust it a hundred times less. As with baseball, guessing that the home team will win works at least half the time. Our recommendation engine works, and people spend four times longer reading the articles we recommend.
Conclusion: computer-based recommendation engines are 50% placebo.
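If you want to separate the engine from the placebo, the fix is the same as in baseball: measure against a dumb baseline. Here's a sketch of the A/B comparison a claim like "4 times longer" implies; the read_seconds behavior and all numbers are invented for illustration:

```python
# Hedged sketch of the evaluation the "4x longer" claim implies: an A/B split
# where half the users get engine picks and half get a dumb baseline (random
# or most-popular picks). The recommenders and numbers are invented.
import random
random.seed(0)

def read_seconds(engine: bool) -> float:
    # Simulated user behavior; in reality this comes from your logs.
    base = random.expovariate(1 / 30)          # ~30s average read time
    return base * (4.0 if engine else 1.0)     # assume the claimed 4x lift

n = 5_000
engine_group = [read_seconds(True) for _ in range(n)]
baseline_group = [read_seconds(False) for _ in range(n)]

lift = (sum(engine_group) / n) / (sum(baseline_group) / n)
print(f"observed lift: {lift:.1f}x over baseline")
# Without the baseline arm, "people read our picks 4x longer" could just be
# selection effects: users who click recommendations read more anyway.
```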
That's how you build trust: by making good recommendations. Everything else is secondary.
I don't think this is true... I find that there's a point where additional information becomes noise and can actually make your predictions less useful.
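A quick way to see this: a 1-nearest-neighbor predictor built on one informative feature, then the same predictor after padding each point with irrelevant random features. Synthetic data, illustrative numbers only:

```python
# Sketch of "more information becomes noise": 1-nearest-neighbor on one
# informative feature degrades as irrelevant random dimensions are added.
# The data is synthetic and the setup is purely illustrative.
import random
random.seed(1)

def make_data(n: int, noise_dims: int) -> list[tuple[list[float], int]]:
    data = []
    for _ in range(n):
        x = random.uniform(0, 1)
        label = 1 if x > 0.5 else 0           # label depends only on x
        features = [x] + [random.uniform(0, 1) for _ in range(noise_dims)]
        data.append((features, label))
    return data

def nn_accuracy(train, test) -> float:
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    correct = 0
    for features, label in test:
        nearest = min(train, key=lambda t: dist(t[0], features))
        correct += nearest[1] == label
    return correct / len(test)

for noise_dims in (0, 20, 200):
    train, test = make_data(200, noise_dims), make_data(100, noise_dims)
    print(noise_dims, "noise dims ->", nn_accuracy(train, test))
# Accuracy falls toward chance as irrelevant dimensions swamp the useful one.
```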