I'm still baffled by that showing, years later. Over-engineered, over-cooled chips chasing absurd speed records have been a staple for as long as I can remember, back to the Pentium 2 or earlier. Why anyone at Intel thought they should hide the sauce, or got pissed when fans got to it, is beyond my comprehension.
Didn't they use a chiller?
The 9GHz clock was achieved not through any normal cooling or by the efficiency of the chip.
These overclocking records have been around for decades, but they're in no way, shape, or form representative of the average user, or even of the top 1% of users.
It's impressive purely because it was possible with an off the shelf chip.
AMD has raised their market share over the last 4-5 years from about 8% to 31% of x86 sales. Intel also saw 5 straight years of market share declines against AMD in the server space - which is by far the most lucrative.
And yes, they're also worried about ARM and Nvidia.
ARM is another threat on the horizon on that front, but it's nothing compared to the beating AMD has been giving them since Ryzen showed up.
The team was able to achieve a very impressive 9043.92GHz (a literal joke, since visible light is ~480 THz)
What does this mean? Only those with Anglo-Saxon heritage make for good writers?
Looks like the usual cost cutting where you replace employees with contractors.
Also, this entire article is based on an Asus advertisement video on YouTube. I'm sure they wouldn't put their best writers on that kind of content.
The overclocking team from Asus has achieved a new CPU frequency world record with Intel's brand-new Raptor Lake Refresh Core i9-14900KF. The team was able to achieve a very impressive 9043.92GHz on a single P-core with liquid helium, breaking the previous world record by 35.1MHz.
Perhaps the editor thought that "almost 9.1GHz" sounds better than "over 9GHz". I disagree with both - the best would be "over 9000 MHz" [0]. Boom, honest, direct reporting.
It’s really not hard to do; it takes more energy to think up the clickbait than to just tell the facts.
But everything's so slow and path dependent.
I wonder how much you could do with a single rack if you got really serious about it. Cooling, power, networking etc.
A super long pipeline allows higher clock rates, but it takes a giant dirt nap when branch prediction fails or when you have a cache miss. You end up with massive latencies in those cases.
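To put rough numbers on the pipeline point (a toy back-of-the-envelope model; the pipeline depths, branch frequency, and miss rate below are my own illustrative assumptions, not measured specs):

```python
def effective_cpi(base_cpi, branch_freq, miss_rate, pipeline_depth):
    """Average cycles per instruction, approximating the cost of a
    branch mispredict as a full pipeline flush (depth cycles)."""
    return base_cpi + branch_freq * miss_rate * pipeline_depth

# Prescott-era deep pipeline (~31 stages) vs. a shorter ~14-stage one,
# assuming 20% of instructions are branches and a 5% mispredict rate:
deep = effective_cpi(1.0, 0.20, 0.05, 31)   # -> 1.31 cycles/instruction
short = effective_cpi(1.0, 0.20, 0.05, 14)  # -> 1.14 cycles/instruction
```

Under these made-up-but-plausible numbers, the deep pipeline burns ~15% more cycles per instruction than the short one with the same predictor, eating into whatever clock advantage it bought. And a cache miss to DRAM (hundreds of cycles) dwarfs either penalty.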
Further, all else being equal, a lower clock rate generally lets you be more energy efficient.
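The reason is dynamic power: P ≈ C·V²·f, and the voltage needed to hit timing rises roughly with frequency, so energy *per operation* scales with V². A quick sketch (the voltage/frequency curve here is an invented linear approximation, just to show the direction of the effect):

```python
def energy_per_op(f_ghz, v_at_1ghz=0.8, v_slope=0.15, c=1.0):
    """Relative dynamic energy per unit of work at a given clock.
    Assumes a crude linear V-f curve: higher f needs higher V."""
    v = v_at_1ghz + v_slope * (f_ghz - 1.0)  # required supply voltage
    power = c * v**2 * f_ghz                 # dynamic power ~ C * V^2 * f
    return power / f_ghz                     # work scales with f, so this is C * V^2

# Doubling the clock costs more than 2x the power, i.e. more energy per op:
assert energy_per_op(2.0) > energy_per_op(1.0)
```

Power grows roughly as f³ along the curve while throughput grows only as f, which is why racing a core to 6GHz+ is so much less efficient than running two cores slower.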
Agree that it had tons of problems. But branch prediction has gotten better, compilers have gotten better, etc. Maybe they could be handled now!
I’m kidding in the sense that I don’t think a single core could be designed to usefully use 1000W. I get why things happened as they did. But I do still think single-threaded performance is much more interesting than multi-core, so I wish we could see how those designs would have evolved.
This? You push a button and a number comes up.