Social media bans aim to preserve anonymity while the reviews are blind. It is hard to keep anonymity convincingly across many submissions, but the effort is still worthwhile: it typically gives the less privileged a fair shot at a decent review rather than a social media popularity contest.
Policies on LLM usage differ between conferences. The only possibly valid concern with the use of AI is the disclosure of non-public information to an outside LLM company, which could publish that data or retrain on it (however unlikely this is in practice) before the paper becomes public; for example, authors could withdraw their submission so that it never sees the light of day on the OpenReview website. (I personally disagree with this concern.) As far as I know, there is no real limitation on using self-hosted AI as long as the reviewer takes full credit for the final product, and no limitation on using non-public AI to improve a review's clarity without dumping the full paper text. A fraction of authors would appreciate better referee reports, so at a minimum the use of AI can bridge the language gap. I wouldn't mind the conferences instituting automatic AI processing to help reviewers reduce ambiguity and avoid trivialities.
The high school track has been ridiculed, as expected. I think it is a great idea, and it doesn't apply only to rich kids. There are excellent specialized schools in NYC and elsewhere in the US that might find ways to get resources for underprivileged, ambitious high schoolers. It is possible that in the future a variant of such a track will incentivize industry to donate compute resources to high school programs, and it may seed early, powerful local communities. I learned a lot at what would be middle school in the US by interacting with self-motivated children at an ad hoc computer club, and I kept up the same level of osmotic learning in the computer lab at college. The current state of AI is not super deep in terms of background knowledge, mostly super broad; some specialized high schools already cover calculus and linear algebra, and certainly many high schools nowadays provide sufficient background in programming and elementary data analysis.
My personal reward hacking is that the conferences provide a decent way to focus my reviewing on the top hundred or couple hundred plausible abstracts, and even when the eventual choice is wrong I get a much better reward-to-noise ratio than from social media or from attacking the arXiv directly (although LLMs help here as well). I always find it refreshing to see novel ideas in their raw form, before they have been polished and before everyone can easily judge their worth. Too many of them get unnecessarily negative reviews, which is why the system integrates multiple reviewers and area chairs who can make corrective decisions. It is important to avoid too much noise even at the risk of missing a couple of great ones, and yet it always hurts when people drop greatness because of misunderstandings or poor chair choices. No system is perfect, but scaling these conferences from a couple hundred attendees a year until about a dozen years ago to approaching a hundred thousand a year has worked reasonably well.