Yes, the algorithm you proposed is impressive and has the potential to become a game-changer.
However, I think that MNIST and the Yin-Yang dataset, using latency coding, are not ideal examples to demonstrate its performance.
These datasets are useful for demonstrating nonlinear classification, and it is certainly great to see that the spiking network performs competitively. However, transforming the input into a latency code costs time, both in computation and in representation, before even a single item is classified. Perceptron-based ANNs with continuous outputs skip this step entirely and will always have an edge over spiking networks in such scenarios.
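To make the conversion step concrete: a minimal sketch of a time-to-first-spike latency encoding, assuming a simple linear mapping where brighter inputs spike earlier (the function name, `t_max`, and the linear scheme are my own illustrative choices, not the paper's):

```python
import numpy as np

def latency_encode(x, t_max=20.0):
    """Map normalized intensities in [0, 1] to first-spike times.

    Hypothetical linear scheme: intensity 1.0 spikes at t=0,
    intensity just above 0 spikes near t_max, and exactly-zero
    inputs never spike (encoded as infinity).
    """
    x = np.asarray(x, dtype=float)
    t = t_max * (1.0 - x)      # brighter -> earlier spike
    t[x <= 0.0] = np.inf       # silent neurons for zero intensity
    return t

times = latency_encode([1.0, 0.5, 0.0])  # -> [0.0, 10.0, inf]
```

Even this trivial encoding is an extra pass over every input pixel, and the network must then wait for the spike times to unfold before a decision can be read out.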
I think what the field is really lacking is an ML problem that spiking networks can leverage directly, one that does not require a costly conversion of the data into a spike-friendly representation.