I don't think the paper asked for that. Relevant quote:
> machine learning is more concerned with making predictions, even if the prediction can not be explained
> very well (a.k.a. “a blackbox prediction”)
So in your example: an algorithm may explain that a car slows down before taking a turn because otherwise it would likely crash. It may even produce a threshold ("under these weather conditions, anything over 55 mph is unsafe when taking a turn of such and such a degree"). Statistics can help with that.
Welling is not asking for deep learning models to explain why a person got cancer, but to explain their reasoning when they diagnose a person with cancer ("I am confident, because in a random population of 1,000 other patients, these variables are within ..."). Statistics can help with that. It aligns with statisticians' mindset and toolset.
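To make that concrete, here is a toy sketch of what such a statistics-flavoured explanation could look like alongside a prediction. This is entirely my own illustration, not anything from Welling's paper: the biomarker, the cohort, and the numbers are all made up.

```python
# Toy sketch of a statistical "explanation" attached to a diagnosis.
# All names and numbers are hypothetical, for illustration only.
import random
import statistics

random.seed(0)

# Simulated reference cohort of 1000 patients, one biomarker value each.
cohort = [random.gauss(mu=5.0, sigma=1.0) for _ in range(1000)]

def explain(patient_value, cohort):
    """Report where the patient's biomarker sits relative to the cohort."""
    below = sum(v < patient_value for v in cohort)
    percentile = 100.0 * below / len(cohort)
    mean = statistics.mean(cohort)
    sd = statistics.stdev(cohort)
    z = (patient_value - mean) / sd
    return (f"biomarker at the {percentile:.1f}th percentile of "
            f"{len(cohort)} reference patients (z = {z:+.2f})")

print(explain(7.3, cohort))
```

The point is not the arithmetic but the shape of the output: a claim a statistician can audit ("this value sits in the tail of a reference population"), rather than an unexplained score from a black box.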
Regulations can be gamed even with this kind of explanation, but that is another story (black-box models may provide some plausible deniability).