Better HN
No Train No Gain: Revisiting Efficient Training Algorithms for Transformer-Based Language Models
(arxiv.org)
11 points
froster
2y ago
1 comment
1 comment
froster
OP
2y ago
Recent paper highlights the difficulty of creating a new optimizer as a drop-in replacement. Sophia and Lion were recently proposed as superior alternatives to Adam, but appeared worse in an independent evaluation.
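For context on what a "drop-in replacement" swap actually changes, here is a minimal pure-Python sketch of the two published update rules being compared (Adam from Kingma & Ba, 2015; Lion from Chen et al., 2023), applied to a single scalar parameter. The function names and default hyperparameters are illustrative, not taken from the paper under discussion.

```python
import math

def adam_step(theta, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step on a scalar parameter (step count t starts at 1)."""
    m = b1 * m + (1 - b1) * g          # EMA of gradients
    v = b2 * v + (1 - b2) * g * g      # EMA of squared gradients
    m_hat = m / (1 - b1 ** t)          # bias corrections
    v_hat = v / (1 - b2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

def lion_step(theta, g, m, lr=1e-4, b1=0.9, b2=0.99, wd=0.0):
    """One Lion step on a scalar parameter.

    Lion keeps a single momentum buffer and applies only the *sign*
    of an interpolated momentum, so each coordinate moves by exactly lr.
    """
    c = b1 * m + (1 - b1) * g
    update = (c > 0) - (c < 0)         # sign(c) in {-1, 0, 1}
    theta -= lr * (update + wd * theta)
    m = b2 * m + (1 - b2) * g          # momentum is updated after the step
    return theta, m
```

The sign-based update is why Lion typically needs a smaller learning rate than Adam; a straight swap with identical hyperparameters is not a fair comparison, which is one reason independent evaluations of "drop-in" optimizers can disagree with the original papers.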