Poorly considered automation can create frictionless experiences for some and Kafkaesque experiences for the rest: systems that refuse to accept your atypical name, that flag your atypical style of speaking as an indicator of fraud, and so on. Automating processes involving people necessarily makes assumptions about those people, and such assumptions are often brittle.
For example, it's easy to imagine a resume filtering AI being implicitly prejudiced against people from Fictionalstan, because it was only trained on a few resumes from Fictionalstan and most of those happened to be classified as "unqualified". This is a danger anytime you have a small number of samples from any particular group, because it's easy for small sample sizes to be overwhelmed by bad luck.
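The small-sample danger is easy to demonstrate with a quick simulation. The sketch below (group sizes and the "looks unqualified" threshold are illustrative, not from any real system) draws resumes from two groups with the *same* true qualification rate, and measures how often bad luck alone makes a tiny group look badly unqualified:

```python
import random

random.seed(0)

TRUE_RATE = 0.5   # both groups are equally qualified in reality
TRIALS = 10_000   # number of simulated training sets

def empirical_rate(n: int) -> float:
    """Observed fraction of 'qualified' resumes in a sample of size n."""
    return sum(random.random() < TRUE_RATE for _ in range(n)) / n

def misleading_fraction(n: int) -> float:
    """How often a sample of size n makes the group look < 30% qualified."""
    return sum(empirical_rate(n) < 0.3 for _ in range(TRIALS)) / TRIALS

large = misleading_fraction(500)  # majority group: plenty of resumes
small = misleading_fraction(5)    # Fictionalstan: only a handful

print(f"500-resume sample looks unqualified: {large:.1%} of the time")
print(f"  5-resume sample looks unqualified: {small:.1%} of the time")
```

With five resumes, roughly one training set in five will make Fictionalstan look under 30% qualified purely by chance (the exact binomial probability is 6/32 ≈ 19%), while with five hundred resumes this essentially never happens. A model trained on the unlucky draw has no way to know its data misrepresents the group.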
In general I think these types of issues are best viewed as software bugs. That framing is clearer and more actionable than treating them as ideological disputes: if the software isn't serving some of our end users properly, let's just fix it and move on.