And I feel like we're far too dismissive of the cases we see where good papers get rejected. We're too dismissive of the collusion rings. Why am I putting in all this time to write and all this time to review (and be an emergency reviewer) if we aren't going to take some basic steps forward? Fuck, I've saved a Welling paper from rejection by two reviewers who admitted to not knowing PDEs, and this was at a workshop (it should have been accepted into the main conference). I think review works for those who are already successful, who can p̶a̶y̶ "perform more experiments when requested" their way out of review hell, but we're ignoring a lot of good work simply for lack of m̶o̶n̶e̶y̶ compute. It slows down our progress toward AGI.
Not that I disagree, but I don't think that's a reason not to publish. There's another way to phrase what you've said:
many ideas that work well at small scales do not trivially work at large scales
But this is true for many works, even transformers. You don't scale just by turning up model parameters and data. You can, but generally more is going on. So why hold these works back because of that? There may be nuggets of value in there, and people may learn how to scale them later. Just because they don't scale (now or ever) doesn't mean they aren't of value (and let's be honest, if they don't scale, that's a real problem for the "scale is all you need" people).

> Other ideas that work at super specialized settings don't transfer or don't generalize.
It is also hard to tell whether those failures come down to hyper-parameter settings. Not that I disagree with you, but it is hard to tell.
> Correlations in huge multimodal datasets are way more complicated than most humans can grasp and we will not get to AGI before we can have a large enough group of people dealing with such data routinely.
I'm not sure I understand your argument here. The people I know who work at scale often have the worst understanding of large data: not understanding the difference between density in a normal distribution and a uniform one, thinking that LERPing between samples of a normal yields representative data, or conflating near-zero cosine similarity with meaningful orthogonality. IME, people who work at scale benefit from being able to throw compute at problems.
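To illustrate the LERP point, here is a minimal numpy sketch (the dimensionality and seed are arbitrary choices for the demo): in high dimensions, samples from a standard normal concentrate on a shell of radius about sqrt(d), so the linear midpoint of two samples lands well inside that shell and is not a representative sample; and two independent samples are nearly orthogonal anyway, so near-zero cosine similarity tells you very little.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512  # high-dimensional; the "soap bubble" effect kicks in quickly

# Two independent samples from a standard normal in d dimensions.
a = rng.standard_normal(d)
b = rng.standard_normal(d)

# Norms concentrate tightly around sqrt(d).
print(np.linalg.norm(a), np.linalg.norm(b), np.sqrt(d))

# Linear interpolation (LERP) at the midpoint: expected squared norm
# is d/2, so the result falls *inside* the typical shell and is not
# representative of the distribution.
mid = 0.5 * a + 0.5 * b
print(np.linalg.norm(mid))  # noticeably smaller, near sqrt(d/2)

# Two random high-dimensional vectors are almost orthogonal, so a
# near-zero cosine similarity is the default, not a special finding.
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos)
```

This is why spherical interpolation (SLERP) is often preferred over LERP when traversing a Gaussian latent space: it stays near the typical shell instead of cutting through the low-density interior.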
> we don’t do anybody a favor by increasing the entropy of the publications in the huge ML conferences
You and I have very different ideas as to what constitutes information gain. I would say that a majority of the field studying just two model families (LLMs and diffusion) results in lower gain, not more.
And as I've said above, I don't care about novelty. It's a meaningless term. (and I wish to god people would read the fucking conference reviewer guidelines as they constantly violate them when discussing novelty)
Personally, this has affected me as a late PhD student. Late in the literal sense, as I'm not getting my work pushed out (even some SOTA stuff) because of factors like these, and my department insists something is wrong with me but will not read my papers or the reviews, or suggest what I need to do besides "publish more." (Literally told to me: "try publishing 5 papers a year, one should get in.") You'll laugh at this: I pushed a paper into a workshop, and a major complaint was that I didn't give enough background on StyleGAN because "not everyone would be familiar with the architecture." (While I can understand the comment, 8 pages is not much room when you gotta show pictures on several datasets; my appendix was quite lengthy and included all requested information.) We just used a GAN as a proxy because diffusion is much more expensive to train (the most common complaints are "not enough datasets" and "how does it scale?"). I think this is why so many universities use pretrained networks instead of training things from scratch, which just railroads research.
(I also got a paper double desk rejected. First because it was "already published"; it took two months for them to realize it was on arXiv only. Then they fixed that and rejected it again because it "didn't cite relevant works," with no mention of what those works were... I've obviously lost all faith in the review process.)
Thank you for fighting the good fight.
This is why I love OpenReview: I can spot and ignore nonsensical reviewer criticisms and ratings and look for the insightful comments and rebuttals. Many reviewers do put in a lot of very valuable work reading and critiquing, most of which would go to waste if not made public.
And I gotta say, I'm not going to put up a fight much longer. As soon as I'm out of my PhD, I intend to just post to OpenReview.