And detection was not done with some snake oil "AI detector" but by invisible prompt injection in the paper PDF, instructing LLMs to put TWO long phrases into the review. They then detected LLM use by checking whether both phrases appear in the review.
This did not detect grammar checks and touch-ups of an independently written review. The phrases would only get included if the reviewer fed the PDF to the LLM, in clear violation of their chosen policy.
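As a rough illustration of how simple the check itself is, here is a minimal sketch; the phrases below are invented placeholders, since the actual injected phrases were not disclosed:

```python
# Hypothetical sketch of the two-phrase check described above.
# The real canary phrases used by the organizers are not public; these are placeholders.
CANARY_PHRASES = [
    "this paper makes a remarkably lucid contribution to the field",
    "the experimental evaluation is a model of methodological transparency",
]

def review_triggered_injection(review_text: str) -> bool:
    """Flag a review only if BOTH injected phrases appear, keeping false positives near zero."""
    text = review_text.lower()
    return all(phrase in text for phrase in CANARY_PHRASES)
```

Requiring both long phrases verbatim is what makes the method so conservative: a reviewer who merely polished their own text would never reproduce them by accident.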
> After a selection process, in which reviewers got to choose which policy they would like to operate under, they were assigned to either Policy A or Policy B. In the end, based on author demands and reviewer signups, the only reviewers who were assigned to Policy A (no LLMs) were those who explicitly selected “Policy A” or “I am okay with either [Policy] A or B.” To be clear, no reviewer who strongly preferred Policy B was assigned to Policy A.
A professor's career is built on reputation, and that reputation is only as strong as their students' (who do much of the "work", such as it is). Responsibility ultimately comes down to the professor, but this can be a career-ending moment for those students, and I'm quite confident there were some very uncomfortable discussions as a result of this.
Most of these people are likely students; this should be a learning moment, but I don't think it is yet grounds for their entire academic career to be crippled by being unable to publish in a top-tier ML venue.
I think consequences are well deserved, but hopefully not at the authors' cost (if they are innocent).
It's incredible how so many people thought it was fair that their paper should be assessed by human reviewers alone, and yet would not extend the same courtesy to others.
To be clear, this is not an excuse but an explanation of why I am not surprised.
The trick is: I can't cut-and-paste between the two machines. So there is never even a temptation to do so, and I can guarantee that my writing or other professional output will never be polluted. Because, like you, I'm well aware of that poor-impulse-control factor, and I figured the only way to really solve this is to make sure it cannot happen.
It really does sound like an addiction when you put it this way.
They were quite conservative in their approach, so the only submissions rejected were from people who had agreed not to use an LLM and almost definitely did use one (since they fed the PDFs, with their hidden watermarked instructions, to the LLMs).
This means the true number of people who used LLMs in their reviews (even in group A, which had agreed not to) is likely higher.
Also worth noting, 10% of these authors used them in more than half of their reviews.
Given that this detection method works so well for feeding instructions to reviewing LLMs, it should also work for the originally submitted paper itself, as long as it is passed along with its watermark intact. Even reviewers who just use LLMs to summarise could easily be affected if the hidden instructions tell the LLM to generate a very positive summary.
So the 2% of cheaters under Policy A, AND 100% of Policy B reviewers, could fall for this and be subtly guided by the LLM's overly positive summaries, or even by complete, very positive reviews generated from the hidden instructions.
That this sort of adversarial attack works is really quite troubling for anyone using LLMs to help them understand texts, because it would work even when the model is only asked to summarise something.
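A minimal sketch of why even summarisation is exposed, assuming the common workflow of extracting the PDF's text and pasting it into a prompt (the file name and function here are illustrative, not anything ICML actually ran):

```python
# Illustrative only: plain text extraction does not distinguish visible text from
# white-on-white or tiny hidden text, so injected instructions ride along as context.
from pypdf import PdfReader  # pip install pypdf

def build_summary_prompt(pdf_path: str) -> str:
    reader = PdfReader(pdf_path)
    # extract_text() also returns text that is hidden in the rendered page
    paper_text = "\n".join(page.extract_text() or "" for page in reader.pages)
    # Any hidden "write a glowing summary" instruction is now indistinguishable
    # from the paper's own content when this prompt is sent to an LLM.
    return f"Summarise the following paper for a peer review:\n\n{paper_text}"
```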
What I found funny was that if you asked ChatGPT for a score recommendation, it came out significantly higher than what that reviewer actually gave. They were lazy and gave a middle grade (borderline accept/reject). We were accepted with high scores from the other reviews, but it was a bit annoying that they seemingly didn't even interpret the output from the model.
The learning experience was this: be an honourable academic, but it's in your interest to run your paper through Claude or ChatGPT to see what they're likely to criticise. At the very least it's a free, maybe bad, review. But you will find human reviewers who make those same mistakes or misinterpret your results, so treat the output with the same degree of skepticism.
I may or may not know a guy who added several hidden sentences in Finnish to his CV that might have helped him in landing an interview.
Has been done: https://www.theguardian.com/technology/2025/jul/14/scientist...
LLMs have a real problem in that they don't treat context differently from instructions. Because they intermingle the two, they will always be vulnerable to this in some form.
> ICML: every paper in my review batch contains prompt-injection text embedded in the PDF
source: https://old.reddit.com/r/MachineLearning/comments/1r3oekq/d_...
There are recent comments there as well:
> Desk Reject Comments: The paper is desk rejected, because the reciprocal reviewer nominated for this paper ([OpenReview ID redacted]) has violated the LLM reviewing policy. The reviewer was required to follow Policy A (no LLMs), but we have found a strong evidence that LLM was used in the preparation of at least one of their reviews. This is a breach of peer-review ethics and grounds for desk rejection. (...)
source: https://old.reddit.com/r/MachineLearning/comments/1r3oekq/d_...
And if, in their reviewing work, they agreed to a "no LLM use" policy but were caught using LLMs anyway, then their own submitted research article is desk rejected. Theoretically, someone could have submitted a stellar research article, but because they didn't follow the agreed policy when reviewing other people's work, their own contribution is not welcome either.
(At first I understood that innocent authors' articles would have been rejected just because they happened to go to a bad reviewer. But this is not the case.)
But if anything, I think the whole anti-LLM review philosophy is wrong. We need multiple deep background and research analyses of papers. So many papers are trash, rehash what has already been done, or miss things. The volume of AI papers makes it impossible for a human alone to really critique the work, because hundreds of new papers come out every day.
What about you not putting your name on the paper? Or does it hurt the student if they publish in their own name only?
In any case, I had reached the same interpretation before reading your post, thinking that this is the only interpretation that could make any sense, but I'm still not convinced that this is what happened. Hopefully, no "innocent authors' articles were rejected because they happened to go to a bad reviewer".
So maybe those people are right, and they are getting away with it as far as most readers are concerned.
Correct me if I'm wrong, but this means that many people are using LLMs despite claiming not to.
It's the first symptom of a dependency mechanism.
If this happens in this context, who knows what happens in normal work or school environments?
(P.S.: The use of watermarks in PDFs to detect LLM usage is very interesting, even though the LLM might ignore hidden instructions.)
One wonders what leads them to the AI-rejecting option in the first place.
Hiding behind a false “choice” between not using AI and basically not using AI isn’t an appropriate proposal. This is crooked and shameful. We should boycott ICML, except we can’t, because they are already the gatekeepers!
ML conferences aren't for-profit ventures. If you submit papers and expect others to review them, you should reciprocate.
And they didn't hand out a permanent ban or anything; these authors can just resubmit to another conference, of which there are many.
So it is a sneaky and typically academic way of doing stuff. Also, "We hope that by taking strong action against violations of agreed-upon policy we will remind the community that as our field changes rapidly the thing we must protect most actively is our trust in each other. If we cannot adapt our systems in a setting based in trust, we will find that they soon become outdated and meaningless." is so academic and pointless.
It’s a bit harder to make the argument that those people _explicitly_ agreed to not use LLMs.
And given how the desk-rejection logic relies on an ethical integrity argument, actual explicit intent is important.
Extremely conservative detection. The real number must be much higher.
The article seems to say that this choice was given just for reviewing (how you will review, not how you will get reviewed), and that the consequence of getting caught, having their paper rejected, was a punishment, not the original trade-off or motivation for choosing option A.
Happy to be corrected.
I can divide 98,324,672,722 by 161,024 by hand. At least I used to be able to, but nobody is going to pay me to do that when a calculator exists.
Likewise, I can write a bunch of assembly (well, OK, I can't), but why would I do that when my compiler can convert my intention into it?
Or will you have every intention of keeping the promise, but it would seem such a chore by now (because the calculator is such a part of your workflow) that you would minimize the sanctity of your promise in your mind?
If yes, that's dependency, not usual use.
(I just learned that choosing no-LLM also meant no-LLM on their own papers, so I am less generous with motivations now. Wasn't dependency, just plain old self-interest. Thanks for your point.)
I don't personally use LLMs for this kind of stuff, but I'll certainly not sign a "No LLM" pledge unless you give me some kind of benefit.
So your quip is just nonsensical.
My original point (loosely based on the subject, not TFA) is that it's LLMs all the way down, way more than it's "measured" to be.
In any case, having reviewed a lot of mostly very poorly written articles and occasionally solid papers when I was still a researcher, I can sympathize with using LLMs to streamline the process. There are a lot of meh papers that are OK for a low profile workshop or small conference where you cut people some slack. But generally standards should be higher for things like journals. Judging what is acceptable for what is part of the game. For a workshop, the goal is to get interesting junior researchers together with their senior peers. Honestly, workshops are where the action is in the academic world. You meet interesting people and share great ideas.
Most people may not realize this, but there are a lot of people starting out in their research careers who will try to get their papers accepted at workshops, conferences, or journals. We all have to start somewhere. I certainly was not an amazing author early on. Getting rejections with constructive feedback is part of how you get better. Constructive feedback is the hard part of reviewing.
The more you publish, the more you get invited to review. It's how the process works. It generates a lot of work for reviewers. I reviewed probably at least 5-10 papers per month. It actually makes you a better author if you take that work seriously. But it can be a lot of work unless you get organized. That's on top of articles I chose to read for my own work. Digesting lots of papers efficiently is a key skill to learn.
Reviewing the good papers is actually relatively easy. It's enjoyable even; you learn something and you get to appreciate the amazing work the authors did. And then you write down your findings.
It's the mediocre ones that need a lot of careful work. You have to be fair and you have to be strict and right. And then you have to provide constructive feedback. With some journals, even an accept with revisions might land an article on the reject pile.
The bad ones are a chore. They are not enjoyable to read at all.
The flip side of LLMs is that both sides can and should (IMHO) use them: authors can use them to increase the quality of their papers. With LLMs there is no longer any excuse for papers with lots of bad grammar, spelling, or structural issues. That actually makes review work harder: because most submitted papers now look fairly decent, you have to dive into the details. Rejecting a very rough draft is easy. Rejecting a polished but flawed paper is not.
If I were still doing reviews (I'm not), I'd definitely use LLMs to pick apart papers, to quickly zoom in on the core issues, and to help me keep my review fair, balanced, and professional in tone. I would manually verify the most important bits, and my effort would be proportional to which way I'm leaning based on what I know. Of course, editors can use LLMs as well, to make sure reviews are fair and reasonable in their level of detail and argumentation. Reviewing the reviewers has always been a weakness of the peer-review system, and some academics fight turf wars via reviews. It's one of the downsides of anonymous reviews, and the academic world can be very political. A good editor would stay on top of this and deal with it appropriately.
LLMs are good at filtering, summarizing, flagging, etc. With proper guard rails, there's no reason not to lean on that a bit. It's the abuse that needs to be countered. In the end, that begins and ends with editors: they select the reviewers, so when reviewers do a bad job, the editors need to act. And when their journals fill up with AI slop, it's their reputations that are on the line.
Like any tool, use caution and common sense. Blanket bans are not that productive at this stage.