Don't have a $5k MacBook to run LLAMA65B? MiniLLM runs LLMs on GPUs in <500 LOC
(github.com)
3 points
volodia
3y ago
2 comments
tempaccount420
3y ago
Doesn't this use as much VRAM as llama.cpp (with int4 models) uses RAM? RAM is a lot cheaper than VRAM.
volodia
OP
3y ago
It won't run as fast on your CPU as it will on a GPU. Also, it might clog most of your RAM; it's better to offload to a cheap GPU.
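For scale, a rough back-of-envelope on the weight footprint the two comments are comparing. This is a sketch: the 0.5-bytes-per-parameter figure for int4 and the choice to ignore activations and the KV cache are assumptions, not stated in the thread.

  # Rough weight-memory estimate for a 65B-parameter model.
  # Assumption: int4 ~= 0.5 bytes/param, fp16 = 2 bytes/param;
  # activations and KV cache are ignored.
  params = 65e9
  fp16_gib = params * 2 / 2**30    # ~121 GiB
  int4_gib = params * 0.5 / 2**30  # ~30 GiB
  print(f"fp16: {fp16_gib:.0f} GiB, int4: {int4_gib:.0f} GiB")

Under those assumptions, the int4 weights fit comfortably in commodity system RAM but exceed the VRAM of most consumer GPUs, which is the gap both comments are getting at.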