If a computer program picks up an irrelevant, unhelpful correlation, its generalization error will noticeably go up. If it picks up an irrelevant "helpful" correlation, that means there is a problem with the data (such as leakage), not with the algorithm. And if there is a problem with the data, all bets are off, for black-box and white-box models alike.
A black-box model will probably not find that being Armenian alone leads to more crashes. Being non-linear in nature, it will instead find interactions (e.g. young male Armenians are more likely to crash than young males in general). If such a feature's importance is not large enough to distinguish it from noise, then regularization may automatically remove it.
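To make the last point concrete, here is a minimal numpy sketch of how L1 (lasso) regularization zeroes out a feature whose apparent signal is indistinguishable from noise. The data, feature names, and the `alpha` value are invented for illustration; `x2` stands in for a spurious group indicator that happens to show a weak correlation with the target in this sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)           # a feature that genuinely drives the target
x2 = rng.normal(size=n)           # pure noise; any correlation with y is chance
y = 3.0 * x1 + rng.normal(scale=0.5, size=n)
X = np.column_stack([x1, x2])

def lasso(X, y, alpha, iters=200):
    """Coordinate-descent lasso: minimize ||y - Xw||^2 / (2n) + alpha * ||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            # partial residual with feature j's current contribution removed
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            # soft-thresholding: weights below the penalty threshold become exactly 0
            w[j] = np.sign(rho) * max(abs(rho) - alpha * n, 0.0) / col_sq[j]
    return w

w = lasso(X, y, alpha=0.1)
print(w)  # the noise feature's weight is driven to exactly zero
```

The soft-thresholding step is the whole story: a coefficient survives only if the feature's correlation with the residual exceeds the penalty, which chance correlations in a modest sample typically do not.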
Even if we observe that 99 out of 100 Armenians crash their cars, when you deny someone a loan because they are Armenian, you may just have discriminated against the one Armenian who is a safe driver. Young male drivers who drive safely have a harder time getting loans because their group (the set of young male drivers) spoiled it for them. So their only hope of getting a loan is that you add more features (like nationality) that let the model distinguish them as safe drivers, not that you remove features and lump them in with the status quo.