Mostly because Intel has way too much motivation to pass it off as a microcode issue, as they can fix a microcode issue for free, by pushing out a patch. If it's an actual hardware issue, then Intel will be forced to actually recall all the faulty CPUs, which could cost them billions.
The other reason is that it took them way too long to give details. If it's as simple as buggy microcode requesting an out-of-spec voltage from the motherboard, they should have been able to diagnose the problem extremely quickly and fix it within a few weeks. They would have detected the issue as soon as they put voltage logging on the motherboard's VRM. And according to some sources, Intel has apparently been shipping non-faulty CPUs for months now (since April, from memory), and those don't have updated microcode.
This long delay and silence feels like they spent months of R&D trying to create a workaround: a new voltage spec that provides the lowest voltage possible, low enough to work around a hardware fault on as many units as possible, without too large a performance regression and without creating new errors on other CPUs from undervolting.
I suspect that this microcode update will only "fix" the crashes for some CPUs. My prediction is that in another month Intel will claim there are actually two completely independent issues, and reluctantly issue a recall for anything not fixed by the microcode.
That said, I too am very skeptical. I just issued a moratorium on the purchase of anything Intel 13th/14th gen in our company and am waiting for some actual proof that the issue is fully resolved.
On Raptor Lake, there are a few integrated voltage regulators which provide voltages for specialised uses (like the E-cores' L2 cache, parts of the DDR memory IO, and PCI-E IO), but the current draw on those regulators is pretty low. The bulk of the power comes directly from the motherboard VRMs on one of several rails with no internal regulation. Most of the power draw is grouped onto just two rails: VccGT for the GPU, and VccCore (also known as VccIA in other generations), which powers all the P-cores, all the E-cores, the ring bus, and the last-level cache.
Which means all cores share the same voltage, and it's trivial to monitor externally.
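For example, on Linux the board's sensor chip often exposes that externally-monitorable rail through the kernel's hwmon interface. A rough sketch of reading it (label names and availability vary entirely by motherboard and driver, and the core rail may show up as "Vcore", "in0", or not at all):

```python
import glob, os

def read_hwmon_voltages():
    """Collect voltage readings (in volts) exposed by Linux hwmon drivers.

    Labels depend on the motherboard's sensor chip and its kernel driver;
    sysfs reports values in millivolts. Returns an empty dict when no
    hwmon sensors are present (e.g. on non-Linux systems).
    """
    readings = {}
    for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
        try:
            chip = open(os.path.join(hwmon, "name")).read().strip()
        except OSError:
            continue
        for path in glob.glob(os.path.join(hwmon, "in*_input")):
            label_path = path.replace("_input", "_label")
            if os.path.exists(label_path):
                label = open(label_path).read().strip()
            else:
                label = os.path.basename(path).split("_")[0]
            try:
                millivolts = int(open(path).read().strip())
            except (OSError, ValueError):
                continue
            readings[f"{chip}/{label}"] = millivolts / 1000.0
    return readings
```

Logging this once a second alongside crash timestamps is roughly the kind of correlation you'd expect Intel (or anyone) to be able to do from day one, which is part of why the long silence is puzzling.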
I guess it's possible the bug could be with one of the integrated voltage regulators, but those seem to only power various IO devices, and I struggle to see how they could trigger this type of instability.
Making CPUs is kind of like sorting eggs. When they're made, they all have slightly different characteristics and get placed into bins (i.e., "binned") based on how well they meet the specs.
To oversimplify, the, cough, "better" chips are sold at higher prices because they can run at higher clock speeds and/or handle higher voltages. If there's a speck of dust on the die, a feature gets turned off and the chip is sold at a lower price.
In this case, this is most likely an edge case that would not be a defect if the shipping microcode had already handled it. (Although it is fair to ask whether affected chips would have gone into a lower-price bin.)
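A toy sketch of that binning flow, where every threshold and tier name below is invented purely for illustration (real binning weighs leakage, Vmin curves, cache yield, and much more):

```python
def bin_chip(max_stable_ghz, igpu_works, all_cores_work):
    """Route a tested die to a SKU tier -- a cartoon of real binning."""
    if not all_cores_work:
        # Defective cores get fused off; die sells as a lower-core-count part.
        return "lower-tier SKU (cores fused off)"
    if not igpu_works:
        # A dead iGPU doesn't scrap the die; it becomes a graphics-less part.
        return "F-series SKU (iGPU disabled)"
    if max_stable_ghz >= 6.0:
        return "flagship KS-series SKU"
    if max_stable_ghz >= 5.8:
        return "K-series SKU"
    return "standard SKU"
```

The point of the sketch: binning sorts dies on how they test *today*; it can't catch a part that tests fine but degrades over months in the field, which is exactly the failure mode being debated in this thread.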
Do you mean that if a 13900KS CPU has a manufacturing defect, it gets downgraded and sold as 13900F or something else according to the nature of the defect?
Towards the middle, the video brings up some very interesting evidence from online game server farms that use 13900 and 14900 variants for their high single-core performance per cost, but with server-grade motherboards and chipsets that do not do any overclocking and would be considered "conservative". Yet these environments show a very high statistical failure rate for these particular CPU models. This suggests that a high percentage of the CPUs produced are affected, and that it's the long run-time over which the problem develops, not just enthusiast/gamer motherboards pushing high power levels.
That's how CPU design goes. The way that is done is by pushing as much to firmware as possible, adding chicken switches and fallback paths, and all sorts of ways to intercept regular operation and replace it with some trap to microcode or flush or degraded operation.
Applying fixes and workarounds might cost quite a bit of performance (think of Spectre disabling some kinds of branch predictors, for an obviously very big one). And in some cases you even see in published errata that they leave some theoretical correctness bugs unfixed entirely. Where is the line before accepting returns? Very blurry and unclear.
Almost certainly, huge parts of their voltage regulation (which goes along with frequency, thermal, and logic throttling) will be highly configurable. Quite likely it's run by entirely programmable microcontrollers on chip. Things that are baked into silicon might be voltage/droop sensors, temperature sensors, etc., and those could behave unexpectedly, although even then there might be redundancy or ways to compensate for small errors.
I don't see that they "passed it off" as a microcode issue; they just said that a microcode patch could fix it. As you say, it's very hard from the outside to know whether something can reasonably be fixed by microcode, or to call it a "microcode issue". Most things can be fixed with firmware/microcode patches, by design. And many things are. For example, if some voltage sensor circuit on the chip behaved a bit differently than expected in the design but they could correct for it by adding some offsets to a table, then the "issue" is that the silicon deviates from the model/design and that cannot be changed, but a firmware update would be a perfectly good fix, to the point they might never bother to redo the sensor even if they were doing a new spin of the masks.
On the voltage issue, they did not say it was requesting an out-of-spec voltage; they said it was incorrect. That is not necessarily detectable out of context. Dynamic voltage and frequency scaling, and all the analog issues that go with it, are fiendishly complicated: the voltage requested from a regulator is not what gets seen at any given component of the chip, and load, switching, capacitance, frequency, temperature, etc. can all conspire to change these things. Modern CPUs run as close to the absolute minimum voltage/timing guard bands as possible to improve efficiency, and they boost to as high a voltage as they can to increase performance. A small bug or error in some characterization data in this very complicated algorithm of many variables and large multi-dimensional tables could easily cause voltage/timing to go out of spec and cause instability. And it does not necessarily leave a nice log you can debug, because you can't measure the voltage at all billion components in the chip on a continuous basis.
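As a toy illustration of how a small characterization error could silently push a request out of spec: below, a V/F curve is linearly interpolated and a buggy offset applied. The table values and safe limit are invented, and real fused tables have many more dimensions (temperature, load current, droop compensation):

```python
# Hypothetical voltage/frequency points (GHz -> volts), purely illustrative.
VF_TABLE = [(0.8, 0.70), (3.0, 0.95), (5.0, 1.25), (5.7, 1.40), (6.0, 1.50)]
SAFE_VMAX = 1.50  # invented "absolute maximum" for this sketch

def requested_voltage(freq_ghz, offset_v=0.0):
    """Linearly interpolate the V/F curve, then apply a (possibly buggy) offset."""
    pts = VF_TABLE
    if freq_ghz <= pts[0][0]:
        v = pts[0][1]
    elif freq_ghz >= pts[-1][0]:
        v = pts[-1][1]
    else:
        for (f0, v0), (f1, v1) in zip(pts, pts[1:]):
            if f0 <= freq_ghz <= f1:
                v = v0 + (v1 - v0) * (freq_ghz - f0) / (f1 - f0)
                break
    return v + offset_v

# A correct request stays inside the envelope at the top boost bin...
assert requested_voltage(6.0) <= SAFE_VMAX
# ...but a 50mV characterization error silently pushes it out of spec.
assert requested_voltage(6.0, offset_v=0.05) > SAFE_VMAX
```

Nothing in this loop logs an error: the request is well-formed, the regulator obeys it, and the only symptom is instability (or slow degradation) at the margin.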
And some bugs just take a while to find and fix. I'm not a tester per se but I found a logic bug in a CPU (not Intel but commercial CPU) that was quickly reproducible and resulted in a very hard lockup of a unit in the core, but it still took weeks to find it. Imagine some ephemeral analog bug lurking in a dusty corner of their operating envelope.
Then you actually have to develop the fix, then you have to run that fix through quite a rigorous testing process and get reasonable confidence that it solves the problem, before you would even make this announcement to say you've solved it. Add N more weeks for that.
So, not to say a dishonest or bad motivation from Intel is out of the question. But it seems impossible to make such speculations from the information we have. This announcement would be quite believable to me.
"And some bugs just take a while to find and fix."
I think it's less that it took a while to find the bug, and more that they've been pretty much radio silent for six months. When AMD had the issue with burning Ryzen 7000-series CPUs, they were quick to at least put out a statement that they'll make customers whole.
They claimed:
> a microcode algorithm resulting in incorrect voltage requests to the processor.
They learned a lot from the Pentium disaster, even if it's a hardware issue, they can address it with microcode at least, which is just as good.
For example, Intel CPU + Spectre mitigation is not "as good" as a CPU that didn't have the vulnerability in the first place.
"Unfortunately for John, the branches made a pact with Satan and quantum mechanics [...] In exchange for their last remaining bits of entropy, the branches cast evil spells on future generations of processors. Those evil spells had names like “scaling-induced voltage leaks” and “increasing levels of waste heat” [...] the branches, those vanquished foes from long ago, would have the last laugh."
"John was terrified by the collapse of the parallelism bubble, and he quickly discarded his plans for a 743-core processor that was dubbed The Hydra of Destiny and whose abstract Platonic ideal was briefly the third-best chess player in Gary, Indiana. Clutching a bottle of whiskey in one hand and a shotgun in the other, John scoured the research literature for ideas that might save his dreams of infinite scaling. He discovered several papers that described software-assisted hardware recovery. The basic idea was simple: if hardware suffers more transient failures as it gets smaller, why not allow software to detect erroneous computations and re-execute them? This idea seemed promising until John realized THAT IT WAS THE WORST IDEA EVER. Modern software barely works when the hardware is correct, so relying on software to correct hardware errors is like asking Godzilla to prevent Mega-Godzilla from terrorizing Japan. THIS DOES NOT LEAD TO RISING PROPERTY VALUES IN TOKYO. It’s better to stop scaling your transistors and avoid playing with monsters in the first place, instead of devising an elaborate series of monster checks-and-balances and then hoping that the monsters don’t do what monsters are always going to do because if they didn’t do those things, they’d be called dandelions or puppy hugs."
this is how I feel about electric car supercharging stations at the moment. There is a definitely a privilege aspect, which some attractive people are beneficiaries of in a predictable way, as well as other expensive maintenance for their health and attraction.
so I could see myself saying the same thing to my children
More voltage generally improves stability, because there is more slack to close timing. Instability with high voltage suggests dangerous levels. A software patch can lower the voltage from this point on, but it can't take back any accumulated fatigue.
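One hedged way to picture the "accumulated fatigue" point is a Black's-equation-shaped electromigration lifetime model, MTTF ∝ J^-n · exp(Ea/kT). Every constant below is illustrative, not characterization data for any real process, and current density is crudely treated as proportional to voltage:

```python
import math

def relative_mttf(voltage, temp_c, v_ref=1.30, t_ref_c=80.0,
                  n=2.0, ea_ev=0.8):
    """Toy electromigration lifetime relative to a reference operating point.

    Returns ~1.0 at (v_ref, t_ref_c); values below 1.0 mean a shorter
    modeled lifetime. All parameters are invented for illustration.
    """
    k_ev = 8.617e-5  # Boltzmann constant in eV/K
    t = temp_c + 273.15
    t_ref = t_ref_c + 273.15
    j_ratio = voltage / v_ref  # crude: current density scales with voltage
    return (j_ratio ** -n) * math.exp((ea_ev / k_ev) * (1.0 / t - 1.0 / t_ref))
```

The asymmetry the comment describes falls out of any model of this shape: a patch that lowers voltage improves the rate of wear from now on, but the damage already integrated over months at elevated voltage stays on the books.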
Intel is claiming a 4% performance hit on the final patch: https://youtu.be/wkrOYfmXhIc?t=308
It seemed like the base frequencies vs. boost frequencies were much farther apart on Intel than on most of the AMDs. This was especially true on laptops, where cooling is a larger concern. So I suspect they were pushing limits.
Also, the performance-core vs. efficiency-core stuff seemed kind of gimmicky with so few performance cores and so many efficiency cores. Like, look at this 20-core processor! Oh wait, it's really an 8-core when it comes to performance. Hard to compare that to a 12-core 3D-cached Ryzen with an even higher clock...
I will say, it seems Intel might still have some advantages. It seems AMD had an issue supporting ECC with the current chipsets. I almost went Intel because of it. I ended up deciding that DDR5's built-in on-die error correction was enough for me. The performance graphs also seem to indicate smoother throughput, suggesting more efficient or elegant execution (less blocking?). But on average the AMDs seem to be putting out similar end results, even if the graph is a bit more "spiky".
AMD has the advantage with regards to ECC. Intel doesn't support ECC at all on consumer chips, you need to go Xeon. AMD supports it on all chips, but it is up to the motherboard vendor to (correctly) implement. You can get consumer-class AM4/5 boards that have ECC support.
I was baffled by this too, but what they don't make clear is that the performance cores have hyperthreading and the efficiency cores do not.
So what they call 2P+4E actually becomes an 8-core system as far as something like /proc/cpuinfo is concerned. They also implement the same instruction set, so code compiled for a particular target will run on either core set and can be moved from one to the other as the scheduler dictates.
The E cores are about half as fast as the P cores depending on use case, at about 30% of the size. If you have a program that can use more than 8 cores, then that 8P+12E CPU should approach a 14P CPU in speed. (And if it can't use more than 8 cores then P versus E doesn't matter.) (Or if you meant 4P+16E then I don't think those exist.)
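The back-of-envelope math here can be written down explicitly; the 0.5x E-core factor is the rough figure from the comment above, not measured data, and hyperthreading uplift is ignored:

```python
def relative_throughput(p_cores, e_cores, e_speed=0.5):
    """Crude multi-threaded throughput in 'P-core equivalents'.

    e_speed ~0.5 is the rough per-core E/P ratio claimed above;
    this is back-of-envelope, not a benchmark.
    """
    return p_cores + e_cores * e_speed

# 8P+12E lands at roughly a hypothetical 14P chip for embarrassingly
# parallel loads -- while using far less die area for the E-core half.
assert relative_throughput(8, 12) == 14.0
```

The die-area point matters: at ~30% of the size, those 12 E cores take roughly the area of 4 P cores, so per square millimetre the "gimmicky" hybrid layout can come out well ahead on throughput.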
> Hard to compare that to a 12 core 3D cached Ryzen with even higher clock...
Only half of those cores properly get the advantage of the 3D cache. And I doubt those cores have a higher clock.
AMD's doing quite well but I think you're exaggerating a good bit.
Too low is dangerous because you lose rational thought, and the ability to maintain consciousness or self-recover. However, despite not having the immediate dangers of being low, having high blood sugar over time is the condition which causes long-term organ damage.
Looks like history may be repeating itself, or at least rhyming somewhat.
Back then, CPUs ran on fixed voltages and frequencies and only overclockers discovered the limits. Even then, it was rare to find reports of CPUs killed via overvolting, unless it was to an extreme extent --- thermal throttling, instability, and shutdown (THERMTRIP) seemed to occur before actual damage, preventing the latter from happening.
Now, with CPU manufacturers attempting to squeeze all the performance they can, they are essentially doing this overclocking/overvolting automatically and dynamically in firmware (microcode), and it's not surprising that some bug or (deliberate?) ignorance that overlooked reliability may have pushed things too far. Intel may have been more conservative with the absolute maximum voltages until recently, and of course small process sizes with higher potential for electromigration are a source of increased fragility.
Also anecdotal, but I have an 8th-gen mobile CPU that has been running hard against the thermal limits (100C) 24/7 for over 5 years (stock voltage, but with power limits all unlocked), and it is still 100% stable. This and other stories of CPUs in use for many years with clogged or even detached heatsinks seem to contribute to the evidence that high voltage is what kills CPUs, not heat or frequency.
Edit: I just looked up the VCore maximum for the 13th/14th-gen processors; the datasheet says 1.72V! That is far more than I expected for a 10nm process. For comparison, a 1st-gen i7 (45nm) was specified at 1.55V absolute maximum, and in the 32nm version they reduced that to 1.4V; then for the 22nm version it went up slightly to 1.52V.
Oh the memories. I had a Thunderbird-core Athlon with a stock frequency of (IIRC) 1050MHz. It was stable at 1600MHz, and I ran it that way for years. I was able to get it to 1700MHz, but then my CPU's stability depended on ambient temperatures. When the room got hot in the summer my workstation would randomly kernel panic.
Anecdotally, my CPU has been a champ and I haven’t noticed any stability issues despite doing both a lot of gaming and a lot of compiling on it. I lost a bit of performance but not much setting a power limit of 150W.
Will be interesting to see how that pans out.
Now Alderon Games reports that Raptor Lake crashes impact Intel's 13th and 14th-Gen processors in laptops as well.
"Yes we have several laptops that have failed with the same crashes. It's just slightly more rare then the desktop CPU faults," the dev posted.
These are the guys who publicly claimed[2] Intel sold defective chips based on the desktop chips crashing.
[1]: https://www.tomshardware.com/pc-components/cpus/dev-reports-...
[2]: https://www.tomshardware.com/pc-components/cpus/game-publish...
The issue does happen with desktop chips used in a server context when paired with a workstation chipset such as W680. However, there haven't been any reports of Xeon E-2400/E-3400 (essentially a desktop chip repurposed for servers) with C266 having these issues, though that may be because there hasn't been a large deployment of these chips in servers just yet (or even if there has, it's still too early to tell).
Do note that even without this particular issue, Xeon Scalable 4th Gen (Sapphire Rapids) is not a good chip (speaking from experience, I'm running a w-3495x). It has plenty of issues, such as slow clock ramp, high latency, high idle power draw, and the list goes on. While Xeon Scalable 5th Gen (Emerald Rapids) seems to have fixed most of these issues, Zen 4 EPYC is still a much better choice.
That said, the video is the GamersNexus one where they talk about an unverified claim that this is a fabrication process issue caused by oxidation between atomic deposition layers. If that's the case, then yeah, microcode can only do so much. But like Steve says in the video, the oxidation theory has yet to be proven and they're just reporting what they have so far ahead of the Zen 5 reviews coming soon.
The energy efficiency is much appreciated in the UK with our absurd price of electricity.
I benchmarked it, the difference between 170 and 105 is basically zero, and the difference to 60W is just a few percent of a performance hit, but way worth it, as it's ~0.3€/kWh over here.
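The arithmetic behind that trade-off is worth spelling out. Assuming a constant 24/7 load at the ~0.3 €/kWh cited above (both assumptions are the commenter's scenario, not general figures):

```python
def yearly_cost_eur(watts, price_per_kwh=0.30, hours_per_day=24):
    """Electricity cost of a constant load over one year, in euros."""
    return watts / 1000 * hours_per_day * 365 * price_per_kwh

# Dropping a round-the-clock load from a 170W limit to 60W saves
# 110W * 8760h = 963.6 kWh, i.e. roughly 289 EUR per year at 0.30 EUR/kWh.
savings = yearly_cost_eur(170) - yearly_cost_eur(60)
```

Against a few percent of performance, a few hundred euros a year makes the lower power limit an easy call for an always-on machine; for a desktop idling most of the day, the real-world savings would be far smaller.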
I don't think Intel is done though, at least not yet.
I.e., we think this might solve it, but if it doesn't, we can roll back with the least amount of PR attention.
For Asus: https://www.pcgamer.com/hardware/motherboards/asus-adds-inte...
Curious to see how this develops in terms of fixing defective silicon.
But this is already a mess that will be very hard to clean up, since I feel many of these CPUs will die in a year or two because of today's problems, but by then nobody will remember this and an RMA will be "difficult", to say the least.
Anything Intel announces, in my experience, is half true, so I'm interested to see what's actually true and what Intel will just forget to mention or will outright hide.
That’s great news for Intel, if it's correct. If not, it'll be a PR bloodbath.
I highly recommend going into your motherboard settings right now and manually applying the current Intel-recommended configuration, to prevent the chip from degrading to the point where you'd need to RMA it. I have a 14900K and it took about 2.5 months before it started going south, and it was getting worse by the DAY for me. Intel has closed my RMA ticket, since changing the BIOS settings to very-low-compared-to-the-original values has made the system stable again, so I guess I have a 14900K that isn't a high-end chip anymore.
Below are the configs intel provided to me on my RMA ticket that have made my clearly degraded chip stable again:
CEP (Current Excursion Protection) > Enable
eTVB (Enhanced Thermal Velocity Boost) > Enable
TVB (Thermal Velocity Boost) > Enable
TVB Voltage Optimization > Enable
ICCMAX Unlimited Bit > Disable
TjMAX Offset > 0
C-States (including C1E) > Enable
ICCMAX > 249A
ICCMAX_APP > 200A
Power Limit 1 (PL1) > 125W
Power Limit 2 (PL2) > 188W
Tbh, XMP is probably the cause of most modern crashes on gaming rigs. It does not guarantee stability. After finding a stable CPU frequency, enable XMP and roll back the memory frequency until you have no errors in OCCT. The whole thing can be done in 20 minutes and your machine will have 24/7/365 uptime.
You can have a system that’s 24/7 prime95 stable that crashes as soon as you exit out, because it tests so very little of it. That’s actually not uncommon due to the changes in frequency state that happen once the chip idles down… and it’s been this way for more than a decade, speedstep used to be one of the things overclockers would turn off because it posed so many problems vs just a stable constant frequency load.
And will it actually fix the issue?
Related:
Complaints about crashing 13th,14th Gen Intel CPUs now have data to back them up
https://news.ycombinator.com/item?id=40962736
Intel is selling defective 13-14th Gen CPUs
https://news.ycombinator.com/item?id=40946644
Intel's woes with Core i9 CPUs crashing look worse than we thought
https://news.ycombinator.com/item?id=40954500
Warframe devs report 80% of game crashes happen on Intel's Core i9 chips
I have mine at a factory setting that Intel would suggest, not the asus multi core enhancement crap. noctua dh15 cooler. It’s really been a stable setup.
We've already seen examples of this happening on non-OC'd server-style motherboards that perfectly adhere to the intel spec. This isn't like ASUS going 'hur dur 20% more voltage' and frying chips. If that's all it was it would be obvious.
Lowering voltage may help mitigate the problem, but it sure as shit isn't the cause.
Supermicro and ASRock Rack do sell W680 as a server platform (because it took Intel a really long time to release C266), but while they're strictly to the spec, some boards are really not meant for K CPUs. For example, the Supermicro MBI-311A-1T2N is only certified for non-TVB E/T CPUs, and trying to run a K CPU on these can result in the board plumbing 1.55V into the CPU during single-core load (where 1.4V would already be on the higher side)[2].
In this particular case, the "non-OC'd server-style motherboard" doesn't really mean anything (even more so in the context of this announcement).
edit: BZ just put out a video talking about running Minecraft servers destroying CPUs reliably, topping out at 83C, normally in the 50s, running 3600 speeds. Which is a clear issue with low-thread loads.
A recent YouTube video by GamersNexus speculated the cause of instability might be a manufacturing issue. The employee's response follows.
Questions about manufacturing or Via Oxidation as reported by Tech outlets:
Short answer: We can confirm there was a via Oxidation manufacturing issue (addressed back in 2023) but it is not related to the instability issue.
Long answer: We can confirm that the via Oxidation manufacturing issue affected some early Intel Core 13th Gen desktop processors. However, the issue was root caused and addressed with manufacturing improvements and screens in 2023. We have also looked at it from the instability reports on Intel Core 13th Gen desktop processors and the analysis to-date has determined that only a small number of instability reports can be connected to the manufacturing issue.
For the Instability issue, we are delivering a microcode patch which addresses exposure to elevated voltages which is a key element of the Instability issue. We are currently validating the microcode patch to ensure the instability issues for 13th/14th Gen are addressed.
Good to know.
No product will ever be perfect. You don't need to do a recall for a sufficiently rare problem.
And in case anyone skims, I will be extra clear, this is based on the claim that the oxidation is separate from the real problem here.
> The HP-35 had numerical algorithms that exceeded the precision of most mainframe computers at the time. During development, Dave Cochran, who was in charge of the algorithms, tried to use a Burroughs B5500 to validate the results of the HP-35 but instead found too little precision in the former to continue. IBM mainframes also didn't measure up. This forced time-consuming manual comparisons of results to mathematical tables. A few bugs got through this process. For example: e^(ln 2.02) resulted in 2 rather than 2.02. When the bug was discovered, HP had already sold 25,000 units, which was a huge volume for the company. In a meeting, Dave Packard asked what they were going to do about the units already in the field and someone in the crowd said "Don't tell?" At this Packard's pencil snapped and he said: "Who said that? We're going to tell everyone and offer them a replacement. It would be better to never make a dime of profit than to have a product out there with a problem". It turns out that less than a quarter of the units were returned. Most people preferred to keep their buggy calculator and the notice from HP offering the replacement.
Any company can reach a state where the Process people take over, and the Product people end up at other firms.
Intel could have grown a pair, and spun the 32 core RISC-V DSP SoC + gpu for mobile... but there is little business incentive to do so.
Like any rotting whale, they will be stinking up the place for a long time yet. =)
On the other hand they are saying they will fix it in microcode. How is that even possible?
Are they saying that their CPUs are signaling the mainboards to give them too much voltage?
Can someone make sense of this? It reminds me of Steve Jobs' "you're holding it wrong" moment.
Yes that's exactly what they said.
On top of all this mess: these products were part of Intel's repeated attempts to move the primary voltage rail (the one feeding the CPU cores) to on-die voltage regulators (DLVR). The regulators are present in the silicon but unused. So it's not entirely surprising if the fallback plan of relying solely on external voltage regulation wasn't validated thoroughly enough.
Modern CPUs are incredibly complex machines with a ridiculously large number of possible configuration states (too many to exhaustively test after manufacture or simulate during design): e.g., a vector multiply in flight with an AES encode in flight with an x87 sincos, and so on. Each operation draws a certain amount of current. It is impractical to guarantee every functional unit its maximum required current simultaneously, so the supply rails are sized for a "reasonable worst case".
Perhaps an underestimate was mistakenly made somewhere and not caught until recently. Therefore the fix might be to modify the instruction dispatcher (via microcode) to guarantee that certain instruction configurations cannot happen (e.g. let the x87 sincos stall until the vector multiply is done) to reduce pressure on the voltage regulator.
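As a toy illustration of that dispatch-time power-budget idea (all the unit names and current figures below are invented for the sketch, not real Raptor Lake numbers):

```python
# Hypothetical per-functional-unit current draw, in amps.
UNIT_CURRENT = {"vector_mul": 40, "aes": 25, "x87_sincos": 20, "alu": 10}
RAIL_BUDGET = 80  # the "reasonable worst case" the rail was sized for

def dispatch(in_flight, candidate):
    """Issue a new op only if total estimated current stays within budget.

    A microcode workaround could tighten RAIL_BUDGET, or force specific
    combinations (e.g. vector_mul + x87_sincos) to serialize.
    """
    load = sum(UNIT_CURRENT[op] for op in in_flight)
    if load + UNIT_CURRENT[candidate] <= RAIL_BUDGET:
        in_flight.append(candidate)
        return True
    return False  # stall: retry once something retires

ops = ["vector_mul", "aes"]             # 65A already in flight
assert dispatch(ops, "alu")             # 75A total: fits, issues
assert not dispatch(ops, "x87_sincos")  # would be 95A: stalls instead
```

The performance cost of such a fix would then depend on how often real workloads actually hit the forbidden combinations, which is consistent with the single-digit-percent regressions being reported.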