The paper you link to counts as a publication, but its reputation stands on its own; it has nothing to do with arXiv as a venue. Ideally that would be true of all papers, but it isn't: just by publishing in certain venues, your paper automatically acquires a certain amount of reputation, depending on the venue.
We need a way to filter papers so that a given researcher doesn't have to personally vet, in excruciating detail, every paper they come across; there simply isn't enough time in the day for that.
Ideally, such a system would provide a reputable multi-dimensional score for each individual paper. How those scores could be calculated so that they're actually reputable? Who knows; that exercise is left for the reader.
In practice, "well, it got published in Nature" makes a pretty decent spam filter. After that come metrics such as how many times the paper has been cited since publication, whether the citing authors are independent researchers who actually built directly on top of the work, and how many of those citing authors come from a different field.
We do need such a method. Isn't that what AI is for? Strictly as a filter, though: you still need to personally vet, in excruciating detail, every paper you actually rely on for your work.
PageRank was a decent solution for websites. Can't we treat citations as a graph, calculate per-author and per-paper trustworthiness scores, update when a paper gets retracted, and mix in a dash of HN-style community upvotes/downvotes and openly-viewable commentary and Q&A by a community of experts and nonexperts alike?
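The citation-graph idea above can be sketched in a few lines. This is a minimal toy, not a proposal: the paper IDs are hypothetical, the damping factor and iteration count are the conventional PageRank defaults rather than tuned values, and real citation data would of course need retraction handling and scale far beyond a dict.

```python
def pagerank(citations, damping=0.85, iterations=50):
    """Toy PageRank over a citation graph.

    `citations` maps each paper ID to the set of papers it cites.
    Returns a dict of paper -> score; scores sum to ~1.
    """
    papers = set(citations) | {p for cited in citations.values() for p in cited}
    n = len(papers)
    score = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in papers}
        for paper in papers:
            cited = citations.get(paper, set())
            if cited:
                # A paper's score flows to the papers it cites.
                share = damping * score[paper] / len(cited)
                for target in cited:
                    new[target] += share
            else:
                # Dangling paper (cites nothing): spread its score uniformly.
                for target in papers:
                    new[target] += damping * score[paper] / n
        score = new
    return score

# Toy graph: C is cited by both A and B, so it should rank highest.
ranks = pagerank({"A": {"C"}, "B": {"C"}, "C": set()})
```

Per-author scores could then be aggregated from per-paper scores, and a retraction handled by removing the paper's node and recomputing, but weighting in upvotes and expert commentary without making the score gameable is the genuinely hard part.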