"boys"
Not sure what this was supposed to mean? If you were getting at the idea that Fei-Fei Li's datasets are much better tests than MNIST, then yes, I agree.
I don't see any of these people submitting MNIST to NIPS in 2017
None of them submitted things as entirely new and different as this, either.
Having said that, I think my point holds.
The excellent 2017 "Generalization in Deep Learning" paper[0] was co-authored by Bengio and uses MNIST - because everyone can follow it.
Yann LeCun was a co-author on the 2017 "Adversarially Regularized Autoencoders for Generating Discrete Structures" paper[1.5], which uses MNIST.
Ian Goodfellow's GAN NIPS paper[1] used MNIST as one of its 4 datasets. Yes, it was 2014, but using familiar datasets when introducing a new technique isn't a bad thing.
DeepMind's "Bayes by Backprop" (ICML15) used MNIST[2]
Another example: the (June 2017) John Langford (Vowpal Wabbit) et al. paper[3] on using boosting to learn ResNet blocks used MNIST.
So yes, I agree there are much better datasets to compare performance on. But to prove something new works, MNIST is a useful dataset.
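To make the point concrete: the appeal of MNIST-scale data is that you can get a trustworthy baseline in seconds and see immediately whether a new idea beats it. A minimal sketch of that sanity-check workflow, using scikit-learn's bundled 8x8 digits dataset as a lightweight stand-in for MNIST (all names here are just illustrative, not from any of the papers above):

```python
# Sanity-check workflow on a small, familiar digits dataset:
# fit a simple baseline, so any new technique has a clear number to beat.
# scikit-learn's load_digits (8x8 images) stands in for MNIST here.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Plain logistic regression: trains in well under a second.
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"baseline test accuracy: {baseline.score(X_test, y_test):.3f}")
```

A fancier model would be dropped in where `baseline` is; if it can't clearly beat a logistic regression on digits, that's a cheap early warning before spending GPU-days on ImageNet.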
[0] https://arxiv.org/pdf/1710.05468.pdf
[1] http://papers.nips.cc/paper/5423-generative-adversarial-nets
[1.5] https://arxiv.org/pdf/1706.04223.pdf
[2] https://deepmind.com/research/publications/weight-uncertaint...
[3] https://arxiv.org/pdf/1706.04964.pdf