The vacuum that arXiv originally filled was one of a glorified PDF hosting service with just enough of a reputation to allow some preprints to be cited in a formally published paper, and with just enough moderation to not devolve into spam and chaos. It has also been instrumental in pushing publishers towards open access (i.e., to finally give up).
Unfortunately, over the years, arXiv has become something like a "venue" in its own right, particularly in ML, with some decently cited papers never formally published and "preprints" being cited left and right. Consider the impression you get when seeing a reference to an arXiv preprint vs. a link to an author's institutional website.
In my view, arXiv fulfills its function better the less power it has as an institution, and I thus have exactly zero trust that the split from Cornell is driven by that function. We've seen the kind of appeasement prose from their statement and FAQ [1] countless times before, and it's now time for the usual routine of snapshotting the site to watch the inevitable amendments to the mission statement.
"What positive changes should users expect to see?" - I guess the negative ones we'll have to see for ourselves.
This has been a common practice in physics, especially the more theoretical branches, since the inception of arXiv. Senior researchers write a paper draft, send copies to some of their peers, get and incorporate feedback, and just submit to arXiv.
It works for physics because physicists are very rigorous, so papers don't change very much. It also works for ML because everyone is moving so fast that it's closer to doing open research. Sloppier, but as long as the readers are other experts, it's generally fine.
I think research should really just be open. It helps everyone. The AI slop and mass publishing are exploiting our laziness: evaluating people on quantity rather than quality. I'm not sure why people are so resistant to making this change. Yes, it's harder, but it has a lot of benefits. And at the end of the day it doesn't matter if a paper is generated if it's actually a quality paper (not just in how it reads, but in the actual research). Slop is slop, and we shouldn't want slop regardless. But if we evaluate on quality and everything is open, it becomes much easier to figure out who is producing slop, collusion rings, plagiarism rings, and all that. A little extra work for a lot of benefits. But we seem to be willing to put in a lot of work to avoid doing more work.
It is an interesting instance of the rule of least power, https://en.wikipedia.org/wiki/Rule_of_least_power.
I think both sides could learn from the other. In the case of ML, I understand the desire to move fast and that average time to publication of 250-300 days in some of the top-tier journals can feel like an unnecessary burden. But having been on both sides of peer review, there is value to the system and it has made for better work.
Not doing any of it follows the same spirit as not benchmarking your approach against more than maybe one alternative, and even that as an afterthought. Or benchmaxxing but not exploring the actual real-world consequences, time and cost trade-offs, etc.
Now, is academic publishing perfect? Of course not, very far from it. It desperately needs reform: to keep it economically accessible, to make it time-efficient for authors, editors, and peer reviewers alike, to prevent the "hot topic of the day" from dominating journals, and to make sure that peer review aligns with the needs of the community and actually improves the quality of the work, rather than enabling "malicious peer review" aimed at extracting citations or airing pet peeves.
Given the power that the ML field holds and the interesting experiments with open review, I would wish for the field to engage more with the scientific system at large and perhaps try to drive reforms and improve it, rather than completely abandoning it and treating a PDF hosting service as a journal (ofc, preprints would still be desirable and are important, but they can not carry the entire field alone).
The current balance, where people write a paper with reviewers in mind, upload it to arXiv before the review concludes, and keep it on arXiv even if rejected, is a nice one. People get to form their own opinion of it, but there is also enough self-imposed quality control just from wanting to pass peer review that, even if the paper doesn't pass, it is still better than if people had written it in a way that doesn't care about or anticipate peer review. And this works because people are somewhat incentivized to get officially peer-reviewed publications too. But being rejected is not the end of the world either, because people can already read the paper and build on it via arXiv.
The arXiv vs journal debate seems a lot like 'should the work get done, or should the work get certified' that you see all over 'institutions', and if the certification does not actually catch frauds or errors, it's not making the foundations stronger, which is usually the only justification for the latter side.
Others (at least in chemistry) will accept it, but it raises concern if a paper is only available as a preprint.
What possible value does a journal like Nature, for example, bring to the table by claiming a paper for themselves and charging people for it, given the alternative?
I don't see any value there. Maintaining an exclusive clique by using artificial scarcity while coasting on the dregs of reputation remaining to a once prestigious institution is what a lot of these journals are doing.
The world has changed. There's no need for that sort of pay to play gatekeeping, and in fact, the model does tremendous damage to academic and intellectual integrity. It allows people to get away with fraud and it makes the institutions motivated to hide and cover it up so as to not damage their own reputations by admitting anything slipped by them.
If you contrast the damage done by journals, with regards to suppressed research, gatekept access, money taken from researchers and readers alike, against the value they might plausibly provide, the answer is clear.
They're not needed anymore. The AI era, since 2017, has thoroughly demonstrated that journals are materially incapable of keeping up, that they're unable to meaningfully contribute to the field, and that their curation or other involvement has no effective practical value. The same is true for other fields, but everyone involved wants to keep their piece of the grift going as long as possible.
We don't need them, anymore. I suspect we never did.
In my experience as a publishing scientist, this is partly because publishing with "reputable" journals is an increasingly onerous process, with exorbitant fees, enshittified UIs, and useless reviews. The alternative is to upload to arXiv and move on with your life.
arXiv has become a target for grifters in other domains like health and supplements. I've seen several small-scale health influencers who ChatGPT some "papers" and upload them to arXiv, then cite arXiv as proof of their "published research". It's not fooling anyone who knows how research works, but it's very convincing to an average person who thinks they're doing the right thing by following sources that have done academic research.
I've been surprised at how bad and obviously grifty some of the documents I've seen on arXiv have become lately. Is there any moderation, or is it a free-for-all as long as you can get an invite?
This just isn't true. arXiv is not a venue. There's no place that gives you credit for arXiv papers. No one cares if you cite an arXiv paper or some random website. The vast vast majority of papers that have any kind of attention or citations are published in another venue.
Bibliometrics reveal that they are highly cited. Internal data we had at arXiv 20 years ago show they are highly read. Reading review papers is a big part of the way you go from a civilian to an expert with a PhD.
On the other hand, they fall through the cracks of the normal methods of academic evaluation.
They create a lot of value for people but they are not likely to advance your career that much as an academic, certainly not in proportion to the value they create, or at least the value they used to create.
One of the most fun things I did on the way to a PhD was writing a literature review on giant magnetoresistance for the experimentalist on my thesis committee. I went from knowing hardly anything about the topic to writing a summary that taught him a lot he didn't know. Given any random topic in any field you could task me with writing a review paper and I could go out and do a literature search and write up a summary. An expert would probably get some details right that I'd get wrong, might have some insights I'd miss, but it's actually a great job for a beginner, it will teach you the field much more effectively than reading a review paper!
How you regulate review papers is pretty tricky. If it is original research the criterion of "is it original research" is an important limit. There might already be 25 review papers on a topic, but maybe I think they all suck (they might) and I can write the 26th and explain it to people the way I wish it was explained to me.
Now you might say in the arXiv age there was not a limit on pages, but LLMs really do problematize things because they are pretty good at summarization. Send one off on the mission to write a review paper and in some ways they will do better than I do, in other ways will do worse. Plenty of people have no taste or sense of quality and they are going to miss the latter -- hypothetically people could do better as a centaur but I think usually they don't because of that.
One could make the case that LLMs make review papers obsolete, since you can always ask one to write a review for you, or just have conversations about the literature with them. I know I could have spent a very long time studying the literature on Heart Rate Variability before eventually making up my mind about which of the 20 or so metrics I want to build into my application; I did look at some review papers and can highlight sentences that support my decisions, but I made those decisions based on a few weekends of experiments and talking to LLMs. The funny thing is that if you went to a conference, met the guy who wrote the review paper, and gave him the hard question of "I can only display one on my consumer-facing HRV app, which one do I show?", he would give you the clear answer that isn't in the review paper, and maybe the odds are 70-80% that it would be my answer.
In the typical "experimental report" sort of paper, the focus is narrowed to a knife's edge around the hypothesis, the methods, the results, and the analysis. Yes, there is an "Introduction" and a "Discussion", but increasingly I saw "Introductions" become a venue for citation bartering (I'll cite your paper in the intro to my next paper if you cite that paper in the intro to your next paper) and "Discussions" turn into a place to float your next grant proposal before formal scoring.
Review papers, on the other hand, were more open to speculation. I remember reading a number that were framed as "here's what has been reported, here's what that likely means...and here's where I think the field could push forward in meaningful ways". Since the veracity of a review is generally judged on how well it covers and summarizes what's already been reported, and since no one is getting their next grant from a review, there's more space for the author to bring in their own thoughts and opinions.
I agree that LLMs have largely removed the need for review papers as a reference for the current state of a field...but I'll miss the forward-looking speculation.
Science is staring down the barrel of a looming crisis that looks like an echo chamber of epic proportions, and the only way out is to figure out how to motivate reporting negative results and sharing speculative outsider thinking.
what are you referring to, who is being appeased who shouldn't be? what are you worried about happening?
You get ahead as an academic computer scientist, for instance, by writing papers, not by writing software. Now, there really are brilliant software developers in academic CS, but most researchers write something that kinda works and give a conference talk about it -- and that's OK, because the work to make something you can give a talk about is probably 20% of the work it would take to make something you can put in front of customers.
Because of that there are certain things academic researchers really can't do.
As I see it, my experience in getting a PhD and my experience in startups are essentially the same: "how do you make doing things nobody has ever done before routine?" Talk to people in either culture and you see that PhD students are thinking about working either in academia or at a very short list of big prestigious companies, while people at startups are sure the PhDs are too pedantic about everything.
It took me a long time of looking at other people's side projects, which are usually "I want to learn programming language X" or "I want to rewrite something from Software Tools in Rust", to realize just how foreign that kind of creative thinking is to people -- I've held for a long time that a side project is not worth doing unless: (1) I really need the product, or (2) I can show people something they've never seen before, or better yet both. These sound different, but if something doesn't satisfy (2) you can usually satisfy (1) off the shelf. It just amazes me how many type (2) things stay novel even after 20 years of waiting.
Is a mid-to-high engineering salary outlandish for the CEO of what is likely to be a fairly major non-profit? Even non-profits have to be somewhat competitive when it comes to salary, and the ideal candidate is likely someone who would be balancing this against a tenured position at a major university.
And while academic salaries are generally not great, tenured professors at big universities tend to make a fair bit (plus a lot more vacation time and perks than is normal in the US).
Sure, but the cost of living there is significantly higher as well. Anyway, I can hardly even comprehend these kinds of sums, though I am a bit of an outlier, as I earn around $27,700 as an SWE in Europe, which is low even by the standards of companies in my own country.
I just moved to SV a few months ago from the Midwest (and not a particularly cheap part of it). Telling my coworkers who aren't from the US what a house costs in Wisconsin, you'd have thought I was the one who moved from a foreign country.
The city manager of a small city in Texas gets paid around that much and that's taxpayer money.
Now what collegiate football coaches are paid, that's pretty crazy.
arXiv does not need to and should not optimize for “shareholder value”, which is at least nominally the justification for outlandish CEO pay packages.
In reality you could host the entire thing for well under $50k/year in hardware and storage if someone else is providing a free CDN. Their costs could be incredibly low.
But just like Wikipedia, I see them very likely and very quickly becoming a money hole that pretends to barely be kept afloat by donations. When in reality, what's actually happening is that a ridiculous number of rent seekers have managed to ride the coattails of being the de facto preprint server for AI papers to land themselves cushy jobs at a place that spends 90+% of its money on flights, hotels, and staff wages.
I'm already expecting their financial reports to look ridiculously headcount heavy with Personnel Expenses, Meetings and Travel blowing up. As well as the classic Wikipedia style we spend a ton of money in unclear costs [1].
What's already sad is that they stopped producing a real broken-down report that actually showed things. Just look at this beautiful screenshot of an Excel sheet. Imagine if Wikipedia produced anything this clear. [2]
[0] https://blog.arxiv.org/2023/12/18/faster-arxiv-with-fastly/
[1] https://info.arxiv.org/about/reports/FY26_Budget_Public.pdf
[2] https://info.arxiv.org/about/reports/2020_arXiv_Budget.pdf
Non-profits aren't maximizing stock value, but they do need to optimize for stakeholder value - you want to maximize the amount of money being donated in and you want to make the most of the donations you receive, both to advance the primary mission of the non-profit and to instill confidence in donors. This demands competent leadership. The idea that just because something is not being done for profit means the value of the person's contributions is worth less is absurd. So long as the CEO provides more than $300k of value by leading the organization, which might include access to their personal connections, then the salary is sensible.
Non-profits run into the problem of creating cushy jobs that just burn donor money.
arXiv is basically a giant folder in the cloud and shouldn't have such high-paying jobs. At least not if they want rational people to keep donating.
Though, saying that, I suppose all the reputation data is kind of public. Apart from emails/accounts.
Google "sorted out" a messy web with pagerank. Academic papers link to each others. What prevents us from building a ranking from there?
I'm conscious I might be over-simplifying things, but curious to see what I am missing.
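For what it's worth, the core of the idea is simple enough to sketch in a few lines. This is a minimal PageRank power iteration over a toy citation graph; the paper IDs and citation edges are made up for illustration, and a real system would need to handle things like self-citation rings and time-skew toward older papers:

```python
# Toy citation-graph ranking via PageRank power iteration.
# The paper IDs and edges below are hypothetical, purely for illustration.

citations = {
    "paper_a": ["paper_b", "paper_c"],  # paper_a cites paper_b and paper_c
    "paper_b": ["paper_c"],
    "paper_c": [],                      # cites nothing (a "dangling" node)
    "paper_d": ["paper_c"],
}

def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Every node keeps a small baseline of rank (the "teleport" term).
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, outlinks in graph.items():
            if outlinks:
                # A citation passes a share of the citing paper's rank along.
                share = damping * rank[node] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling node: redistribute its rank evenly to everyone.
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank

ranks = pagerank(citations)
# paper_c is cited by three of the four papers, so it ranks highest.
best = max(ranks, key=ranks.get)
```

Google's original insight applies directly: a citation is a vote, and votes from highly cited papers count for more. The hard part isn't the math, it's the gaming (citation cartels are the academic equivalent of link farms).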
Now, honestly, I have no idea why one would spend resources on uploading terabytes of LLM garbage to arXiv, but they sure can. Even if some crazy person publishes, say, 2 nonsense papers daily, it does no harm and, if anything, is valid data for psychology research. But if somebody actually floods it with non-human-generated content, well, I suppose it isn't even that expensive to make arXiv totally unusable (and perhaps even unfeasible to host). So there has to be some filtering. But only to prevent abuse.
Otherwise, I indeed think that proper ranking, linking and user-driven moderation (again, not to prevent anybody from posting anything, but to label papers as more interesting for the specific community) is the only right way to go.
The reason is that arXiv is growing significantly, leading to a $297,000 deficit in operating costs for 2025 alone. Cornell has helped with donations, along with other organizations that pay membership fees.
As a result, donors + leaders of arxiv think it's best to spin off to increase funding.
And, FWIW, I do think that arXiv truly has vast potential to be improved. It is in a position to change the whole process of how research results are shared, yet it is still, as others have said, only a PDF hosting service. And since the universities couldn't break out of the whole Elsevier & co. scam despite the internet existing for 30 years, breaking free from the university affiliation sounds like a good thing to me.
But, of course, I am only talking about the possibilities. I know nothing about the people in charge of the whole endeavor, and ultimately it depends on them alone whether it sails or sinks.
APS and BNL Host XXX e-Print Archive Mirror Feb. 1, 2000
The APS is establishing, in cooperation with Brookhaven National Laboratory, the first electronic mirror in the United States for the Los Alamos e-Print Archive.
Today, from the landing page, it describes itself as: "arXiv is a free distribution service and an open-access archive for nearly 2.4 million scholarly articles in the fields of [long list]. Materials on this site are not peer-reviewed by arXiv."
Well, that's a large part of the problem. A lot of the stuff there now will never see a journal (even of dubious quality) and there is limited filtering of what new submissions will be stored. GIGO.
Best thing ArXiv could do is go back to their roots - limit the fields and return to preprint only. Spin off the comp sci stuff for sure to someone else along with all its headaches.
fixed: url
Then getting peer reviewed is a harder process but one can see some form of credit on the site coming from doing a decent reviewers job.
I suspect I am missing a lot of nuance …
I think NIST hosts the CVE repo (through a contract to MITRE)
another will need to rise to take its place.
To this end, they added an endorsement requirement this year: https://blog.arxiv.org/2026/01/21/attention-authors-updated-...
It's especially problematic because, while arXiv loves to claim to be working for open science, it doesn't default to open licensing. Many of the publications it hosts are not open access and are read-only. So there is definitely the potential to close things off at some point in the future, when some CEO needs to increase value.
Could they not have made it into some legal structure that puts universities at the top? Say, with a bunch of universities owning shares that comprise the entirety of the ownership of arXiv, but that would allow arXiv to independently raise funds?
The article says that "it will become an independent nonprofit corporation", and as OpenAI's failed attempt showed, converting a non-profit to a for-profit organization is either really hard or impossible.
> Could they not have made it into some legal structure that puts universities at the top?
As a corporation (even a non-profit one), it will have a board of directors. I have no idea what their charter will look like, but I would be surprised if at least one seat wasn't reserved for a university representative, and more than that seems quite likely as well.
A setup as a US-based "non-profit" is worrisome, if only because 300K is an obscene salary even in a for-profit setting. That the US-based posters can't see this is evidence of the basic problem which is that the US, both left and right, has been taken over by a neoliberal feudal antidemocratic nativist mindset that is anathema to the sort of free interchange of ideas that underlay the ArXiv's development in the hands of mathematicians and physicists now swept aside and ignored by machine learning grifters and technicians who program computers.
Any change to the basic premise will be a negative step.
They should just be boring, quiet, unopinionated, neutral background infrastructure.
All the Mozilla executives have done for the last 15+ years is
* lay off developers
* spend lots of money on stupid side projects nobody asked for or wants
* increase their own salaries
and all that with the backdrop of falling quality, market share, and relevance.
I would happily donate to Firefox, but this fucked up organization will never see a single cent from me. They will spend it on anything but Firefox, which is the only thing anybody wants them to spend it on.
It might already be too late, and we will be left with a browser monopoly.
Ladybird continues to have the appearance of making progress, fwiw:
"oh no, you see we are not a preprint server host anymore, our mission is a values driven blablabla to make a meaningful change in the blablabla, we have spent X dollars to promote the blablabla, take me seriously please I'm also fancy like you! "
;_;
Mozilla certainly won’t spend it on Firefox, because the structure of the organization legally prohibits them from spending any of their donation money on Firefox. The ‘side projects’ are, at least officially, the real purpose of Mozilla.
Maybe a bloated foundation (pursuing expensive objectives completely unrelated to ArXiv's core mission of hosting PDFs), new classes of unnecessary management staff, new and useless paid features that nobody wants, and obnoxious nag banners claiming "ArXiv is not for sale!" but demanding money anyway.
Exactly. It should be a utility. Not quite dumb pipe, but not too far either.
I am wary of that. IMO the business model is damaged therein. You can say in 2022 we had 27; bankrupt in 2030.
arXiv is doomed. It was nice while it lasted.
I wonder when they will introduce the algorithmic feed and the social network features.
You need your favourite academic gatekeeper (= thesis advisor) to vouch for you in order to be allowed to upload.
Then AI slop gets flagged and the shame spreads through the graph. And flaggings need to have evidence attached that can again be flagged.
It's probably not perfect but in practice, it seems to have been enough to get rid of the worst crackpotty spam.
> arXiv requires that users be endorsed before submitting their first paper to arXiv or a new category.
OpenAI shows exactly how well that works and what that kind of governance does to a company and to its support of science and the commons.
TL;DR, it's fucked.
People keep falling into the same trap. They love monopolies, then are shocked when those monopolies jerk them around.
I don't see much of a monopoly, nor any "moat" apart from it being recognised. You can already post preprints on a personal website or on github, and there are "alternatives" such as researchgate that can also host preprints, or zenodo. There are also some lesser known alternatives even. I do not see anything special in hosting preprints online apart from the convenience of being able to have a centralised place to place them and search for them (which you call "monopoly"). If anything, the recognisability and centrality of arxiv helped a lot the old, darker days to establish open access to papers. There was a time when many journals would not let you publish a preprint, or have all kinds of weird rules when you can and when you can't. Probably still to some degree.
I read a dozen papers a month, typically on arxiv, never from paywalled journals. I find the quality on par. But maybe I'm missing something.
Oh, wait.
I had to tell my AI to set up an MCP for "fetch while bypassing arXiv's rate limit" so that it doesn't burn 40k tokens looking for workarounds every time it wants to look at a paper and gets hit with a "sorry, meatbags only" wall.
Very annoying, given how relevant arXiv papers are for ML specifically, and how many of papers there are. Can't "human flesh search" through all of them to pick the relevant ones for your work, and they just had to insist on making it harder for AIs to do it too.
That is, it's not readily parseable; it really gives an insider-term vibe, like this isn't for you if you don't already know what it means or how you should read or say it. It sort of reminds me of the overuse of Latin and Latinate terms in the old professions and, well, the academy.
Just always struck me as being somewhat at odds with the goal.
To me it's just a way to get out your work fast, so that there is already a trace of it on the Internets - nothing more and nothing less.
> That is, it's not readily parseable, it really gives an insider term vibe...
Isn't that normal with highly specialized research fields? I agree many papers could benefit from clearer wording, but working in a niche means you sometimes don't reach a broader audience.
To justify, and maybe reword slightly: surely, if one of the main drivers is opening up research, the brand name should be something less obscure, something more accessible and understandable at first sight?
Maybe arXiv evoking the word 'archive' with an ancient Greek twist does that for some, but it's clearly a bit cryptic for many, and if the point is to open up probably the brand should just be something much plainer.
The original service didn't even have a name, only a description, and it was amusingly hosted at xxx.lanl.gov. But LANL wasn't really interested in it, and the founder eventually left for Cornell. At that point, the service needed a domain name, but archive.org was already taken.
And besides, the name has Ancient Greek influences. A similar Latinate term might be something like "archive".
Isn't that actually kind of a good brand signal for a repo of very specialized papers? "Fun with learning" in Comic Sans wouldn't help credibility.
Google I'll grant you, though it's still pretty phonetic and easy to read. The other two not at all, they're incredibly well known instantaneously recognisable words.