Yes, but the researchers get plenty of feedback from the validation set, and nothing is easier for them than to tweak their system until it performs well on it. That's overfitting on the validation set by proxy. It's absolutely inevitable when the validation set is visible to the researchers, and it's very difficult to guard against: a team that has spent maybe a month or two working on a system, with a publication deadline looming, is not going to just give up on their work once they figure out it doesn't work very well. They're going to tweak it and tweak it and tweak it until it does what they want it to. They will converge on some ideal set of hyperparameters that optimises their system's performance on its validation set (or the test set; it doesn't matter what it's called, only that it's visible to the authors). They will even find a region of the weight space where it's best to initialise their system to get it to perform well on the validation set. And of course, if they can't find a way to get good performance out of their system, you and I will never hear about it, because nobody ever publishes negative results.
So there are very strong confirmation and survivorship biases at play, and it's not surprising to see, as you say, that the system keeps doing better. That alone suffices to explain its performance, without any need for a mysterious post-overfitting grokking ability.
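A quick simulation makes the selection effect concrete (just a sketch of the general mechanism, nothing from the paper itself): if you repeatedly "tweak" a model that has no real signal at all and keep whichever tweak scores best on a visible validation set, the winning validation score will look impressive, while a truly held-out test set reveals chance-level performance.

```python
import random

random.seed(0)

# Labels for a balanced binary task: a "validation" split the researcher
# sees on every tweak, and a held-out split they never touch.
n = 200
val_labels = [random.randint(0, 1) for _ in range(n)]
test_labels = [random.randint(0, 1) for _ in range(n)]

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Simulate 500 "hyperparameter tweaks" of a model with no real signal:
# each tweak just emits random predictions. The researcher keeps the
# tweak that scores best on the visible validation set.
best_val, best_model = 0.0, None
for _ in range(500):
    model = [random.randint(0, 1) for _ in range(n)]  # pure guesser
    acc = accuracy(model, val_labels)
    if acc > best_val:
        best_val, best_model = acc, model

# Selection inflates the visible score well above chance;
# the held-out score stays near 0.5.
print(f"best validation accuracy: {best_val:.2f}")
print(f"same model on held-out test: {accuracy(best_model, test_labels):.2f}")
```

Nothing here ever "learns" anything, yet the validation number climbs with every extra tweak tried; that's the survivorship bias in miniature.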
But maybe I haven't read the paper that carefully and they do guard against this sort of overfitting-by-proxy? Have you found something like that in the paper? If so, sorry for missing it myself.