The Banach fixed point theorem is extensively used for convergence proofs in reinforcement learning, but it's difficult to apply at the level of gradient descent for deep neural networks, because most commonly used optimizers are not guaranteed to converge to a unique fixed point.
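For concreteness, here's a minimal sketch (not from the article) of the classic RL use of the theorem: the Bellman optimality operator is a γ-contraction in the sup norm, so value iteration converges to a unique fixed point V* from any initialization. The MDP below (transition matrix `P` and rewards `R`) is made up for illustration.

```python
import numpy as np

gamma = 0.9
# P[a][s, s'] : transition probabilities for 2 actions, 2 states (toy values).
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.9, 0.1]]])
# R[a][s] : expected immediate reward for taking action a in state s (toy values).
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])

def bellman(V):
    # T(V)(s) = max_a [ R(a, s) + gamma * sum_s' P(a, s, s') * V(s') ]
    # T satisfies ||T(V) - T(W)||_inf <= gamma * ||V - W||_inf,
    # i.e. it is a contraction, so Banach gives a unique fixed point.
    return np.max(R + gamma * (P @ V), axis=0)

V = np.zeros(2)
for _ in range(200):
    V = bellman(V)  # each application shrinks the distance to V* by gamma

# After enough iterations V is (numerically) the unique fixed point: V ≈ T(V).
print(np.allclose(V, bellman(V)))
```

The point of the Banach hypothesis is exactly this guarantee: a contraction on a complete metric space has one fixed point, and iterating the operator reaches it. The question for the article is what the analogous iterated operator and fixed point are in its Fisher-metric setting.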
The article seems to do the work of defining a Fisher information metric space and contractions with respect to the Stein score, which would satisfy the hypotheses of the Banach fixed point theorem, but I'm not quite sure what conclusion we would get in this instance.