The point of the paper is to show that NNs can keep learning long after fully memorizing the training dataset.
This behavior goes against the current paradigm for thinking about training NNs. It is simply very unexpected, much as double descent is unexpected from the classical statistics point of view, where more parameters are supposed to lead to more over-fitting.
They could have split the validation set into separate validation and test sets, but I don't know what that would achieve in their case.
Fig. 1 (center) shows different train/validation splits. Fig. 2 shows a sweep over different optimization algorithms, if you are concerned about hyperparameter over-fitting.
But to me the really interesting result is Fig. 3, which shows that the NN learned the structure of the problem.