The problem is that when a black-box machine learning model is trained on a large number of features, some of which have inevitably been shaped by pre-existing racial bias, the model will likely become discriminatory as well unless race is in some way represented and controlled for.
For instance, many justice systems in the U.S. use machine learning software to estimate the likelihood that a defendant will reoffend, and judges use that prediction in sentencing. Race is never used explicitly as an input, yet one such program turned out to be significantly more likely to rate Black defendants as high-risk [1]. A feature like "had parents with previous criminal convictions" becomes misleading when Black people are more likely than white people to be convicted for the same crime. It doesn't mean the white person's parents didn't engage in criminal activity or other behavior that might predispose their child to become a violent repeat offender - just that a biased system let them get away with it more easily.
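To make the proxy mechanism concrete, here is a toy simulation (all rates invented for illustration, not real data): two groups with the identical true reoffense rate, but a biased system that records parental convictions more often for one group and convicts its reoffenders more often. A naive risk score that only ever looks at the "parent has a record" feature still ends up scoring that group higher:

```python
import random

random.seed(42)

# Toy population: groups "A" and "B" with the SAME true reoffense rate.
# The only differences are artifacts of a biased system (hypothetical rates).
N = 100_000
TRUE_REOFFENSE = 0.3
PARENT_RECORD = {"A": 0.1, "B": 0.6}           # proxy feature the model sees
CONVICTED_IF_REOFFENDS = {"A": 0.5, "B": 0.8}  # biased training labels

people = []  # (group, proxy, label, truly_reoffends)
for group in ("A", "B"):
    for _ in range(N):
        proxy = random.random() < PARENT_RECORD[group]
        reoffends = random.random() < TRUE_REOFFENSE
        # The label is a *recorded* reconviction, which depends on the
        # biased conviction rate, not just on actual behavior.
        label = reoffends and random.random() < CONVICTED_IF_REOFFENDS[group]
        people.append((group, proxy, label, reoffends))

# "Train" the simplest possible classifier: predicted risk is the observed
# reconviction rate among people sharing the proxy value. Race is never an input.
def observed_rate(proxy_value):
    rows = [p for p in people if p[1] == proxy_value]
    return sum(p[2] for p in rows) / len(rows)

risk = {v: observed_rate(v) for v in (True, False)}

# Average predicted risk per group: group B scores higher despite
# identical true reoffense rates, because the proxy correlates with group.
avg_score = {
    g: sum(risk[p[1]] for p in people if p[0] == g) / N
    for g in ("A", "B")
}
print("avg predicted risk:", {g: round(s, 3) for g, s in avg_score.items()})
```

The score never sees race, but because the proxy feature and the biased labels both correlate with group membership, the disparity flows straight through.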
Machines end up just as biased as the data they're trained on, so if we're going to use computers to make judgments that have such a significant impact on people's lives, we can't risk racism slipping through the cracks.
[1] https://www.propublica.org/article/machine-bias-risk-assessm...