Better HN
0 points
BaculumMeumEst
1y ago
0 comments
What is the best of the Llama 3.1 models that I can fine-tune with a MacBook M3 Max w/ 96GB of RAM?
lostmsu
1y ago
None, unless you are prepared to spend 5+ years. Macs just don't have the FLOPS.
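A rough back-of-envelope sketch of the compute gap (every number below is an assumption, not a measurement): full fine-tuning costs roughly 6 × params × tokens FLOPs for the forward and backward passes, and the exact wall-clock time depends heavily on the corpus size and sustained throughput you assume.

```python
# Back-of-envelope estimate of full fine-tuning time on an M3 Max.
# All figures are hypothetical assumptions for illustration.
params = 8e9            # Llama 3.1 8B parameter count
tokens = 1e9            # assumed fine-tuning corpus: 1B tokens
flops = 6 * params * tokens   # ~6 FLOPs per param per token (fwd + bwd)

peak_flops = 15e12      # assumed M3 Max GPU throughput, ~15 TFLOPS
utilization = 0.3       # assumed fraction of peak actually sustained

days = flops / (peak_flops * utilization) / 86400
print(f"~{days:.0f} days")
```

Even under these fairly optimistic assumptions the result is on the order of months for a single epoch over 1B tokens; scale the token count or drop the utilization and it stretches toward years.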
BaculumMeumEst
OP
1y ago
Really? You can't even fine-tune a quantized 8B model on such a machine? That's a bummer.
lostmsu
1y ago
You can't really fine-tune quantized models as-is. Gradients require floating-point calculations at decent precision.
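A minimal sketch of the precision problem (all numbers below are hypothetical, chosen only for illustration): a typical gradient step is far smaller than one quantization level, so updating an integer-quantized weight directly rounds straight back to its old value.

```python
# Why plain gradient descent on an integer-quantized weight is a no-op.
# All values are illustrative assumptions.
scale = 0.01                 # assumed per-tensor quantization scale
w_q = 37                     # int8-style quantized weight, represents 0.37
lr, grad = 1e-4, 0.004       # typical learning rate and gradient magnitude

w = w_q * scale              # dequantize to float: 0.37
w = w - lr * grad            # float update shifts w by only 4e-7
w_q_new = round(w / scale)   # re-quantize: rounds straight back to 37

print(w_q, w_q_new)          # the update vanished in rounding
```

This is why approaches like QLoRA keep the quantized base weights frozen and train small floating-point adapter matrices alongside them instead of updating the quantized weights directly.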