Just like with the Netflix Prize, where the conclusion was very similar, i.e. just dump in as much data as you can, crank up the ML machinery, and it'll discover the features (better than you can engineer them) and learn what to use for recommendation ranking. And that's basically what we see with GPT-3 too. If there are useful signals in the data it'll pick them up even without explicit supervision, because it has so many parameters that the signal basically sticks somewhere.
Get some papers, run the model through a supervised training phase on a set where every paper is scored by whether it was retracted or failed to replicate, and you'll get a decent predictor. Then run it over the literature to find papers that stick out, have a human look at them, try to replicate some of them, and use those results to fine-tune the predictor. Plus keep feeding it new replication results as they come in.
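A minimal sketch of that loop, with a TF-IDF + logistic regression stand-in for the GPT-scale model, and invented toy abstracts and labels (everything here is hypothetical, just to show the shape of the pipeline):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: abstracts labeled 1 if the paper was retracted or
# later failed to replicate (labels are made up for illustration).
papers = [
    "we find a striking effect with n=12 subjects, p=0.049",
    "large preregistered replication confirms the original effect",
    "novel priming result, single study, marginal significance",
    "multi-lab study with thousands of participants, robust effect",
]
labels = [1, 0, 1, 0]

# Supervised phase: fit a predictor on the scored set.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(papers, labels)

# Scoring phase: papers that "stick out" with a high score go to a
# human reviewer; their eventual replication outcomes get appended
# to (papers, labels) and the model is refit, closing the loop.
scores = model.predict_proba(
    ["small sample, surprising effect, p=0.05"]
)[:, 1]
```

The real version would swap the TF-IDF pipeline for fine-tuning a large pretrained model, but the train / flag outliers / human check / feed results back structure is the same.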
That said, using an ML system as the gatekeeper as OP suggested is a bad idea: once authors start writing to satisfy the model, the proxy variables it relies on stop correlating with quality, so it'll quickly lose its predictive power.
Though ultimately a GPT-like system has the capacity to encode "common sense".