Better HN
tyre
2y ago
They said 7b llama, which I read as the base LLaMA model, not this one specifically. All of these LLMs are trained on Stack Overflow, so it makes sense that they'd be good out of the box.
brandall10
2y ago
The top-level comment is specifically citing the performance of Code Llama against Codex.