2. Nemotron-4 15B large multilingual language model trained on 8T tokens (arxiv.org) | 3 points by hack_ml 2 years ago | 1 comment
3. EffVer: Version your code by the effort required to upgrade (jacobtomlinson.dev) | 98 points by hack_ml 2 years ago | 46 comments
4. 7x speed improvement for LLaMA in less than 10 lines of code (github.com) | 2 points by hack_ml 2 years ago | 1 comment
5. Accelerating Topic Modeling on GPUs with Rapids and Bert Models (medium.com) | 1 point by hack_ml 3 years ago | 0 comments
7. 19.25x Faster TF-IDF on GPUs with Dask and Rapids than CPUs (medium.com) | 1 point by hack_ml 4 years ago | 0 comments