Ask HN: What tiny LLMs are you getting the best results from?
Curious whether anyone here is having success running smaller LLMs locally on constrained hardware, such as laptops or GPU-less devices. If so, what kind of utility have they brought you?