And then there’s the whole repetition issue. Infinite loops of "Pygame’s Pygame’s Pygame’s" kind of defeat the point of quantization if you ask me. Sure, the authors have fixes like adjusting the KV cache or using min_p, but doesn’t that just patch a symptom rather than solve the actual problem? A fried model is still fried, even if it stops repeating itself.
On the flip side, I love that they’re making this accessible on Hugging Face... and the dynamic quantization approach is pretty brilliant. Using 1.58-bit for MoEs and leaving sensitive layers like down_proj at higher precision—super clever. Feels like they’re squeezing every last drop of juice out of the architecture, which is awesome for smaller teams who can’t afford OpenAI-scale hardware.
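The selective-bits idea is easy to picture in code. Here's a toy sketch of assigning precision per layer by name; the layer names, bit widths, and size numbers are my own illustrative assumptions, not Unsloth's actual recipe:

```python
# Toy sketch of selective ("dynamic") quantization: give the bulky MoE
# expert weights very few bits, and keep quality-sensitive layers (like
# down_proj, embeddings, norms) at higher precision. All names and
# thresholds here are illustrative assumptions.

def choose_bits(layer_name: str) -> float:
    sensitive = ("down_proj", "embed", "lm_head", "norm")
    if any(s in layer_name for s in sensitive):
        return 4.0           # quality-critical layers stay at higher precision
    if "experts" in layer_name:
        return 1.58          # the vast majority of MoE parameters
    return 4.0

# Hypothetical parameter counts per layer, just to show the size math.
layers = {
    "blk.10.ffn_down_proj": 350e6,
    "blk.10.experts.w1": 7e9,
    "token_embed": 500e6,
}
total_bits = sum(n * choose_bits(name) for name, n in layers.items())
print(f"~{total_bits / 8 / 1e9:.1f} GB")   # prints "~1.8 GB"
```

Because the expert weights dominate the parameter count, quantizing only them to 1.58-bit shrinks the total dramatically even though a few layers stay at 4-bit.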
"accessible" still comes with an asterisk. Like, I get that shared memory architectures like a 192GB Mac Ultra are a big deal, but who’s dropping $6,000+ on that setup? For that price, I’d rather build a rig with used 3090s and get way more bang for my buck (though, yeah, it’d be a power hog). Cool tech—no doubt—but the practicality is still up for debate. Guess we'll see if the next-gen models can address some of these trade-offs.
min_p = 0.05 was a way I found to counteract the 1.58-bit model generating singular incorrect tokens, which happen around once per 8,000 tokens!
I've been running Open WebUI for months now, for myself and some friends, as a front-end to one of the API providers (DeepInfra in my case, but there are many others; see https://artificialanalysis.ai/).
Having 1.58-bit is very practical for me. I'm very much looking forward to the API provider adding this model to their system. They also added a Llama turbo (also quantized) a few months back, so I have high hopes.
The AMD Strix Halo APU will have quad-channel memory and will launch soon, so expect these kinds of setups to be available for much less. Apple is charging an arm and a leg for memory upgrades; hopefully we get competition soon. From what I saw at CES, OEMs are paying attention to this use case as well - hopefully not following suit on RAM markups.
Here's hoping the Nvidia Digit (GB10 chip) has a 512 bit or 1024 bit wide interface, otherwise the Strix Halo will be the best you can do if you don't get the Mac Ultra.
I’m sure there’ll be some amount of undercutting but I don’t think it’ll be a huge difference on the RAM side itself.
Mistral's large 123B model works well (but slowly) at 4-bit quantisation, but if I knock it down to 2.5-bit quantisation for speed, performance drops to the point where I'm better off with a 70B 4-bit model.
This makes me reluctant to evaluate new models in heavily quantised forms, as you're measuring the quantisation more than the actual model.
EDIT: It seems that the original authors provided a nice write-up:
https://unsloth.ai/blog/deepseekr1-dynamic#:~:text=%F0%9F%96...
Other than that, if you really need the big one, you can get six 3090s and you're good to go. It's not cheap, but you're running a ChatGPT-equivalent model from your basement. A year ago this was a wet dream for most enthusiasts.
This line inside the <think> section suggests it's also been trained on YouTube clips:
>> "I'm not entirely sure if I got all the details right, but this is what I remember from watching clips and summaries online."
An excerpt from the generated summary:
>> "Set in the 23rd century during a Z-Corp invasion, the series features action sequences, strategic thinking, and humor. It explores themes of international espionage, space warfare, and humanity's role in the cosmos. The show incorporates musical numbers and catchy theme songs for an engaging viewing experience. The plot involves investigating alien warships and their secret base on Kessari planet while addressing personal conflicts and philosophical questions about space."
"It explores themes of international espionage, space warfare, and humanity's role in the cosmos" is the closest to correct line in the whole output.
Anyone who has a need for, or understands the value of, a local LLM would be OK with this kind of output.
Wishful thinking.
I'm curious, what would you use that rig for?
Random observation 2: It's time to cancel the OpenAI subscription.
Don’t get me wrong, what DS did is great, but anyone thinking this reshapes the fundamental trend of scaling laws and makes compute irrelevant is dead wrong. I’m sure OpenAI doesn’t really enjoy the PR right now, but guess what OpenAI/Google/Meta/Anthropic can do if you give them a recipe for 11x more efficient training? They can scale it to their 100k-GPU clusters and still blow everything away. This will be textbook Jevons paradox.
Compute is still king and OpenAI has worked on their training platform longer than anyone.
Of course as soon as the next best model is released, we can train on its output and catch up at a fraction of the cost, and thus the infinite bunny hopping will continue.
But OpenAI is very much alive.
Need an LLM to one-shot some complex network scripting? As of last night, o1 is still where it's at.
Of course, the cost is incomparably higher, since Plus has a very low limit. Which of course is a huge deal.
2. If you have GitHub Copilot, you also get o1 chat there.
I haven't seen much value with OpenAI subscription for ages.
ChatGPT is still the king of the multimodal experience. Anthropic is a distant second, only because it lets you upload images from the clipboard and responds to them, but it can't do anything else like generate images. Sometimes it will draw a flowchart, which is kind of cool and something GPT won't do - but will it speak to you, have tones, listen to you? No.
And on the open-source side, this area has been stagnant for like 18 months. There is no cohesive multimodal experience yet. Just a couple of vision models with chat capabilities and pretty pathetic GUIs to support them. You still have to do everything yourself there.
There would be huge utility for me, and many others who don't know it yet, if we could just load a couple of models at once that work together seamlessly in a single GUI, the way ChatGPT works.
AFAIK you can't do that with newer consumer cards, which is why this became an annoyance. Even an RTX 4070 Ti with its 12 GB would be fine, if you could easily stack a bunch of them like you used to be able to with older cards.
That's because it's Apple. It's time to start moving to AMD systems with shared memory. My Zen 3 APU system has 64GB these days, and it's a mini-ITX board.
It's better to get (VRAM + RAM) >= 140GB for at least 30 to 40 tokens/s, and if VRAM >= 140GB, then it can approach 140 tokens/s!
Another trick is to accept more than 8 experts per pass - it'll be slower, but might be more accurate. You could even try reducing the number of experts to, say, 6 or 7 for low-FLOP machines!
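The experts-per-pass knob is just the `k` in the router's top-k selection. A minimal numpy sketch of generic MoE routing (the softmax gate, shapes, and expert count are generic illustrations, not DeepSeek's exact router):

```python
import numpy as np

# Minimal MoE top-k routing sketch: the router scores every expert, and
# only the k best actually run. Raising k trades FLOPs for (potentially)
# accuracy; lowering it saves compute. Generic illustration, not
# DeepSeek's actual router.

rng = np.random.default_rng(0)
n_experts, d = 16, 8
router_w = rng.normal(size=(d, n_experts))            # router projection
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray, k: int) -> np.ndarray:
    logits = x @ router_w                  # score all experts
    top = np.argsort(logits)[-k:]          # pick the k highest-scoring
    gates = np.exp(logits[top])
    gates /= gates.sum()                   # renormalise over chosen experts
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

x = rng.normal(size=d)
y8 = moe_forward(x, 8)    # default-style 8 experts per token
y6 = moe_forward(x, 6)    # cheaper pass with 6 experts
```

Only `k` expert matmuls run per token, which is why fewer experts is faster and more experts might be more faithful to the full model.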
Can you release slightly bigger quant versions? I'd enjoy something that runs well on 8x 32GB V100s and 8x 80GB A100s.
Apple's M chips, AMD's Strix Point/Halo chips, Intel's Arc iGPUs, Nvidia's Jetsons. The main issue with all of these though is the lack of raw compute to complement the ability to load insanely large models.
It seems that AMD Epyc CPUs support terabytes of RAM, and some are as cheap as 1,000 EUR. Why not just run the full R1 model on that? It seems like it would be much cheaper than multiples of those insane Nvidia cards.
I'm impressed by the 140 tokens per second with the 1.58-bit quantization running on dual H100s. That kind of performance makes the model practical for small and mid-sized shops to use in local applications. This is a huge win for people working on agents that require the low latency only local models can support.
Not accusing you of anything. Could be that you happen to write in a way similar to LLMs. Could be that we are influenced by LLM writing styles and are writing more and more like LLMs. Could be that the difference between LLM-generated content and human-generated content is getting smaller and harder to tell.
It’s the exclamation point in the first paragraph, the concise and consistent sentence structure, and the lack of colloquial tone.
OP, no worries if you’re real. I often read my own messages or writing and worry that people will think I’m an LLM too.
Amazing that OP confirmed you're correct (and good use of LLM @OP).
This is really interesting insight (although other works cover this as well). I am particularly amused by the process by which the authors of this blog post arrived at these particular seeds. Good work nonetheless!
I also tried not setting the seeds, but the results are still the same - quantizing all layers seems to make the model forget and repeat everything - I put all examples here: https://docs.unsloth.ai/basics/deepseek-r1-dynamic-1.58-bit#...
Another option is to employ min_p = 0.05 to force the model not to generate low-probability tokens - it can help especially when the 1.58-bit model generates, on average, an "incorrect" token once every 8,000 tokens or so (e.g. `score := 0`).
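The filter itself is tiny: drop any token whose probability falls below min_p times the top token's probability, then renormalise. A simplified sketch of the common llama.cpp-style sampler (not Unsloth's code; the toy distribution is made up):

```python
import numpy as np

# min_p filtering sketch: tokens with probability below
# min_p * max(probability) are excluded before sampling, which is how the
# occasional low-probability "glitch" token (the 1-in-8000 case) gets
# suppressed. Simplified illustration of the idea.

def min_p_filter(probs: np.ndarray, min_p: float = 0.05) -> np.ndarray:
    keep = probs >= min_p * probs.max()   # threshold scales with the top token
    out = np.where(keep, probs, 0.0)      # zero out the stragglers
    return out / out.sum()                # renormalise survivors

probs = np.array([0.90, 0.06, 0.03, 0.01])   # threshold = 0.05 * 0.90 = 0.045
filtered = min_p_filter(probs, 0.05)         # last two tokens zeroed out
```

Because the threshold is relative to the most likely token, the filter adapts: it prunes aggressively when the model is confident and barely at all when the distribution is flat.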
It’s a very bold claim which is really shaking up the markets, so I can’t help but wonder if it was even verified at this point.
Based on Nvidia being down 18% yesterday I would say the claim is generally accepted.
If confirmed, Nvidia could go down even more
“I don’t believe this, but I know others will, so I’m selling”
The only part of DeepSeek-R1 I do not like. I hope it's over, but I am not holding my breath.
That said, what they did with $5 million of GPUs is impressive. Reportedly, they resorted to using PTX assembly to make it possible:
https://www.tomshardware.com/tech-industry/artificial-intell...
If they aren't lying because they have hardware they're not supposed to have, which is also a possibility.
The cost absolutely includes the cost of GPUs and data centers; they quoted a standard price for renting H800s, which has all of this built in. But yes, as very explicitly noted in the paper, it does not include the cost of test iterations.
Oh nice! So I can try it in my local "low power/low cost" server at home.
My home system runs a Ryzen 5500 + 64GB RAM + 7x RTX 3060 12GB.
So 64GB RAM plus 84GB VRAM.
I don't want to brag, but to point to solutions for us tinkerers with a small budget and high energy costs.
Such a system can be built for around 1,600 euros. The power consumption is around 520 watts.
I started with an AM4 board (B450 chipset) and one used RTX 3060 12GB, which costs around 200 euros used if you are patient.
Each additional GPU is connected with a PCIe riser/extender to give the cards enough space.
After a while I replaced the PCIe risers with a single PCIe x4 to 6x PCIe x1 extender.
It runs pretty nicely. Awesome for learning and gaining experience.
A Ryzen 5500 + 7x 3060 + cooling ~= 1.6 kW off the wall, at 360 GB/s memory bandwidth, and considering your lane budget, most of it will be wasted on single PCIe lanes. The after-market unit price of 3060s is 200 EUR, so 1,600 is not a good-faith cost estimate.
From the looks of it, your setup is neither low-power nor low-cost. You'd be better served by a refurbished Mac Studio (2022) at 400GB/s bandwidth, fully utilised over 96 GB of memory. Yes, it will cost you 50% more (the real cost of such a system is closer to 2,000 EUR), but it would run at a fraction of the power use (10x less, more or less).
I get it that hobbyists like to build PCs, but claiming that sticking seven five-year-old, low-bandwidth GPUs in a box is "low power/low cost" is a silly proposition.
You're advocating for e-waste
Now add that this guy has 7x 3060s - that's 100% an ex-miner. So you know he is running an optimized (underclocked) profile.
FYI, my gaming 6800 draws 230W, but with a bit of undervolting and sacrificing 7% performance, it runs at 110W for the exact same load. And that is 100% taxed. This is just a simple example to show that a lot of PC hardware runs very much overclocked/unoptimized out of the box.
Somebody getting down to 520W sounds perfectly normal for undervolted cards that give up maybe 10% performance in exchange for big gains in power draw.
And no, old hardware can be extremely useful in the right hands. Add to this that the main factor influencing speed tends to be memory (how much you can fit, plus the interconnects) rather than raw processing performance when running an LLM.
Being able to run a large model for 1,600 sounds like a bargain to me. Also, remember that when you're not querying the models, the power draw is mostly memory wakes plus the power regulators. Coming back to that YouTuber: he was not constantly drawing that 130W; it only spiked when he ran prompts or did activity.
Yes, running from home will be more expensive than a $10 Copilot plan, but... nobody is looking at your data either ;)
> We managed to selectively quantize certain layers to higher bits (like 4bit), and leave most MoE layers (like those used in GPT-4) to 1.5bit
For example, I imagine a strong MoE base with 16 billion active parameters and 6 or 7 experts would keep a good performance while being possible to run on 128GB RAM macbooks.
Maybe, by using a strong reasoning model such as R1, even more performance can be extracted from the next generation of smaller models.
I’ve gotten full FP8 running on 8x H100; probably going to keep doing that.
Do we finally have a model with access to the training architecture and training data set, or are we still calling non-reproducible binary blobs without source form open-source?
I also like to ask the models to create a simple basic Minecraft type game where you can break pieces and store them in your inventory, but disallow building stuff
So you can load a different active subset of the MoE into each 80GB GPU, sharding it across something like 32 different GPUs (or can you get away with fewer? I wouldn't be surprised if they can infer on 8x H800 GPUs). Some parameters are common, others are independent. Queries can be dynamically routed between GPUs, potentially bouncing between GPUs as often as once per output token, depending on which experts they need to activate.
Though, I suspect it's normal to stick on one MoE subset for several output tokens.
This has a secondary benefit that as long as the routing distribution is random, queries should be roughly load balanced across all GPUs.
Then, by using pipeline parallelism, if a new request comes in, we simply stick it in a queue across GPUs 0, 1, 2, ..., 7. Request A is at GPU 2, Request B at GPU 1, Request C at GPU 0, and so on.
The other option is tensor parallelism, where we split the weights evenly. You could combine pipeline and tensor parallelism as well!
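For the tensor-parallel half, the arithmetic really is just a split matmul: each device holds a column shard of the weight matrix, computes its slice independently, and the slices are stitched back together. A numpy sketch (shapes and shard count are made up for illustration; real frameworks add all-gather/all-reduce communication between devices):

```python
import numpy as np

# Tensor parallelism in one picture: split W column-wise across "GPUs",
# run the partial matmuls independently, concatenate the results.
# The stitched output is identical to the unsharded computation.

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 64))           # a small batch of activations
W = rng.normal(size=(64, 128))         # the full weight matrix

shards = np.split(W, 8, axis=1)        # one (64, 16) column shard per "GPU"
partials = [x @ s for s in shards]     # each device computes its own slice
y_parallel = np.concatenate(partials, axis=1)

assert np.allclose(y_parallel, x @ W)  # matches the single-device result
```

Pipeline parallelism is the orthogonal cut: each GPU holds a block of consecutive layers instead of a slice of every layer, and requests flow through the GPUs like a queue.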
I cannot understand why "OpenAI is dead" has legs: repurpose the hardware and data, and it can run multiple instances of the more efficient model.
You invest in a 100x machine expecting a revenue of X, but now you can only charge X/100, because R1 shows that AI inference can be done much more efficiently. See the price decrease of ChatGPT, the addition of free o3, etc.
This reduction of future cash flows, ceteris paribus, implies that the present value of those cash flows decreases. This then results in massive repricing to the downside as market participants update their forecasts.
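The repricing argument is just discounted-cash-flow arithmetic. A back-of-the-envelope sketch (every number here is made up purely for illustration):

```python
# Toy discounted-cash-flow illustration: if expected annual revenue drops
# from X to X/100 after an efficiency-driven repricing, the present value
# of the cash-flow stream drops by the same factor. All figures invented.

def present_value(cash_flow: float, rate: float, years: int) -> float:
    # PV of a constant annual cash flow, discounted at `rate`
    return sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))

before = present_value(100.0, 0.08, 10)   # expected revenue X per year
after = present_value(1.0, 0.08, 10)      # X/100 after the repricing
print(f"PV falls from {before:.0f} to {after:.0f}")
```

Since PV is linear in the cash flow, a 100x revenue cut is a 100x valuation cut, holding the discount rate and horizon fixed - which is exactly the "ceteris paribus" in the argument above.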
What you are missing is that, to assume what you do, you must make the additional assumption that demand for additional compute is infinite. That may very well be the case, but it is not guaranteed, compared to the presently realized fact that R1 means lower revenues for AI inference providers -> changes the capex justification for even more hardware -> Nvidia receives less revenue.
I love the original DeepSeek model, but the distilled versions are too dumb usually. I'm excited to try my own queries on it.
> I love the original DeepSeek model, but the distilled versions are too dumb usually.
Apart from being dumber, they also don't know as much as R1. I can see how fine-tuning can improve reasoning capability (by showing examples of good CoT), but there's no reason that would improve knowledge of facts (relative to the Qwen or Llama model on which the fine-tuning was based). (I've been using the 32B and, while it could always be better, I'm not unhappy with it.)
Is there any good quick summary of what's special about DeepSeek?
Yes, section 2.3 of the DeepSeek R1 paper summarizes the training part you're asking about, in less than a page: https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSee...
youtube.com/watch?v=Nl7aCUsWykg
One thing I've been thinking about doing is to combine one of those LLM models running in llama.cpp, feed it with the output of whisper.cpp, and connect its output to some TTS model. I wonder how far we are from Wheels and Roadie from the Pole Position TV series.
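Wired together naively, that chain is just three subprocesses. A rough sketch driving the CLIs from Python; the binary names, flags, and model paths are all placeholders (check your local builds, since both projects rename their CLIs between versions, and the TTS step assumes macOS `say` as a stand-in for a real TTS model):

```python
import subprocess

# Sketch of a whisper.cpp -> llama.cpp -> TTS voice loop.
# Every binary name, flag, and model path below is a placeholder
# assumption, not a verified invocation.

def listen_think_speak(wav_path: str) -> None:
    # 1. Speech -> text via whisper.cpp (flags are placeholders)
    heard = subprocess.run(
        ["./whisper-cli", "-m", "ggml-base.en.bin", "-f", wav_path],
        capture_output=True, text=True, check=True,
    ).stdout
    # 2. Text -> reply via llama.cpp (flags are placeholders)
    reply = subprocess.run(
        ["./llama-cli", "-m", "model.gguf", "-p", heard],
        capture_output=True, text=True, check=True,
    ).stdout
    # 3. Reply -> audio (macOS `say` as a stand-in for a local TTS model)
    subprocess.run(["say", reply], check=True)
```

A streaming version would pipe partial transcripts and tokens between the stages instead of waiting for each process to finish, but even this blocking loop gets you a talking box.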
Not to make fun of OpenAI and the great work they've done, but it's kind of like if I went out in the 90s and said I'm going to found a company to have the best REST APIs. You can always found a successful tech company, but you can't found a successful tech company on a technological architecture or pattern alone.
80%? On only 2 H100s? To get near ChatGPT 4? Seriously? The 671B version??
I 100% expect some downvotes from the ccp.
And that's a really important strategic advantage China has versus America, which has such an insane fixation on pure(ish) free markets and free trade that it gives away its advantages in strategic industry after strategic industry.
Some people falsely infer from the experience with the Soviet Union that freer markets always win geopolitical competition, but that's false.
> Some people falsely infer from the experience with the Soviet Union that freer markets always win geopolitical competition, but that's false.
The data we have is 500 years of free markets in the western world and the verdict is overwhelmingly: Yes, more freedom means more winning.
Just invite some incompetent bureaucrat over to your house to dictate how you should cook, and you'll quickly agree.
Always happy to oblige when someone insinuates that any critics must be government agents