Sentiment analysis, classification.
Classification depends on the problem (and mostly on the data size). Boosting is certainly competitive on tabular data and widely used everywhere I've worked.
No one talks about it (except on Kaggle) because it's pretty much at a local maximum. All the improvement comes from manual feature engineering.
But modern techniques using NNs on tabular data are competitive with boosting and do away with a lot of the feature engineering. That's a really interesting development.
I wouldn't say this. Sentiment analysis trained on the standard datasets is one place where performance is barely better than old-school linear classifiers. They remained brittle and easy to trick until recent flexible systems based on question answering, zero-shot entailment, or lots of instruction finetuning (improving in that order). I strongly advise against using something fine-tuned solely on sentiment datasets. It'd be a total waste.
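For reference, the "old-school linear classifier" baseline being compared against is typically something like TF-IDF features plus logistic regression. A minimal sketch with made-up toy examples (a real evaluation would use a standard dataset such as IMDB):

```python
# Classic linear sentiment baseline: TF-IDF + logistic regression.
# The four training examples are toy data for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great movie, loved it",
    "terrible plot, awful acting",
    "what a wonderful film",
    "boring and a waste of time",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["loved the film"]))
```

The brittleness shows up exactly where you'd expect from a bag-of-ngrams model: negation, sarcasm, and wordings outside the training vocabulary.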
Well yeah. But why would you do that?
Do what everyone does: train on a large language corpus (or use a pre-trained model), then finetune for sentiment analysis.
> I strongly advice against using something fine-tuned solely on sentiment datasets
Did you mean trained on sentiment datasets? I agree with that.
Otherwise, well, [1] is a decent overview of the field. I think Document Vectors using Cosine Similarity[2], at 17, is the highest-rated entry that isn't a NN trained on a large corpus and fine-tuned on the sentiment task. Even that uses document vectors that are trained on a large language corpus.
[1] https://paperswithcode.com/sota/sentiment-analysis-on-imdb
[2] https://paperswithcode.com/paper/the-document-vectors-using-...