LeanderK
2y ago
Do you know if the LLM was fine-tuned in any way for the sparsity & quantisation? Or did it just work out of the box?
fxtentacle
2y ago
I personally fine-tuned it with QAT (quantisation-aware training) and custom extensions to induce the sparsity.
https://pytorch.org/docs/stable/quantization.html#quantizati...
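For context, here is a minimal sketch of what eager-mode QAT looks like in PyTorch (the API covered by the linked docs), with simple magnitude pruning standing in for the "custom extensions to induce the sparsity". The TinyNet model, the random training data, and the 50% pruning ratio are hypothetical stand-ins, not fxtentacle's actual setup:

```python
import torch
import torch.ao.quantization as tq

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Quant/DeQuant stubs mark where tensors enter and leave
        # the (simulated) int8 domain during QAT.
        self.quant = tq.QuantStub()
        self.fc1 = torch.nn.Linear(16, 32)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(32, 4)
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return self.dequant(x)

model = TinyNet()
model.train()
# Attach fake-quantisation observers; 'fbgemm' targets x86 backends.
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)

opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
for _ in range(10):  # stand-in training loop with random data
    x = torch.randn(8, 16)
    y = torch.randint(0, 4, (8,))
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Hypothetical sparsity step: magnitude-prune the smallest ~50% of
    # weights after each update so the network learns around the zeros.
    with torch.no_grad():
        for m in (model.fc1, model.fc2):
            w = m.weight
            k = int(0.5 * w.numel())
            threshold = w.abs().flatten().kthvalue(k).values
            w.mul_(w.abs() >= threshold)

model.eval()
quantised = tq.convert(model)  # fold fake-quant into real int8 ops
```

The fake-quant observers simulate int8 rounding in the forward pass so the weights adapt to it during training, pruning inside the loop lets the network recover accuracy around the zeroed weights, and `convert` then replaces the fake-quant modules with real quantised ops.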