0 points
hislaziness
1y ago
Isn't it 2 bytes (FP16) per param? So 7B = 14 GB, plus some extra for inference?
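The arithmetic behind the question can be sketched in a few lines of Python (the helper name and the byte table are mine, not from the thread):

```python
# Bytes of storage per parameter for common formats.
BYTES_PER_PARAM = {"fp32": 4, "bf16": 2, "fp16": 2, "fp8": 1, "int8": 1}

def weight_gb(n_params_billion: float, dtype: str) -> float:
    """Back-of-envelope GB needed just to hold the weights,
    ignoring KV cache and activation overhead at inference time."""
    return n_params_billion * 1e9 * BYTES_PER_PARAM[dtype] / 1e9

print(weight_gb(7, "fp16"))    # 7B params at 2 bytes each -> 14.0 GB
print(weight_gb(12.2, "bf16")  # the 12.2B model discussed below, in BF16
)
print(weight_gb(12.2, "fp8"))  # same model at 1 byte per param
```

So a 7B model in FP16 is 14 GB of weights alone, a 12.2B model in BF16 is about 24.4 GB, and the same 12.2B model in FP8 drops to about 12.2 GB.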
ancientworldnow
1y ago
This was trained to be run at FP8 with no quality loss.
hislaziness
OP
1y ago
The model description on huggingface says: Model size: 12.2B params, Tensor type: BF16. Is the tensor type different from the training param size?
fzzzy
1y ago
It's very common to run local models in 8-bit int.
qwertox
1y ago
Yes, but it's not common for the original model to be 8-bit int. The community can quantize any model down to 8-bit int, but that always comes with some quality loss.
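The quality loss mentioned above comes from rounding: each weight is snapped to one of 256 levels. A minimal sketch of symmetric per-tensor int8 quantization (function names are mine; real tooling like bitsandbytes or GPTQ is considerably more sophisticated):

```python
def quantize_int8(weights):
    """Map floats to int8 codes: the largest |w| becomes 127."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate floats; rounding error is the quality loss."""
    return [c * scale for c in codes]

weights = [0.01, -0.5, 1.27, 0.333]
codes, scale = quantize_int8(weights)
recovered = dequantize(codes, scale)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
```

The worst-case error per weight is half the scale step, which is why quantizing a model trained in BF16 is lossy, whereas a model trained to run at FP8 from the start can avoid that post-hoc rounding.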