The A12 runs at 2.5 GHz and draws ~3.64 W. I also think the actual power draw of the i7-7700K may be north of 45 W; AnandTech measured 90 W (against a TDP of 91 W).
I bet if you could pop a heatsink on the A12(X) and overclock it / raise the TDP, it would come very close to or beat Intel in most single-threaded applications.
Source: https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-re...
b. extended benchmark duration - no thermal throttling encountered
c. straightforward C compiler - no special code for the GPU unit or whatever
d. the main reason is probably the memory and cache configuration
I think it is time to consider A(RM) and Intel cores to be on par. Qualcomm cores are maybe X% slower, but they will close the gap.
It is now a question of system trade-offs, as designers juggle memory sizes, cache sizes, cores, cooling, vector extensions, etc.
This is a really pivotal event.
And don't forget AMD challenging Intel on like-for-like cores.
It's pretty amazing that no one has put together a real ARM alternative for the server side.
I think you're vastly underestimating the value of X in the above. The latest Qualcomm cores weren't even competitive with last year's A11, and the A12 leaves them in the dust.
[1] https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-re...
Cavium (now under Marvell) and Ampere both did. Huawei/HiSilicon are also apparently coming.
(Also Qualcomm, but they quit for mysterious reasons. "Want to focus on mobile" or something.)
But these are all very high end solutions. For the low end, there's Marvell's Armada8k (MACCHIATObin)… and kind of an empty void in the middle :(
This is assuming that semiconductor process improvements will significantly slow down or completely stop.
I wish Apple would get on with it and include Terminal.app, the FreeBSD userland, and Xcode for iOS, so we can start running a shell and doing interesting things with this new hardware :-)
But for experiments and in-house tooling it works very well, and I barely ever have to touch my laptop because I have enough tooling to do most things I need to do.
Small things would be: more developer freedom (allowing fuller control; that will probably happen very, very slowly) and a better keyboard (for the iPad Pro).
>>VTune agrees, and says that Z3 spends a lot of time waiting on memory while iterating through watched literals.
But how much "a lot" is isn't specified. And propagation also allocates memory sometimes, to keep the learned clauses. I'm not sure how Z3 manages this, but couldn't it be that the mallocs are just slower on that desktop, making it an OS/allocator issue?
Had Intel's roadmap not tied process node to architecture, this comparison wouldn't look so damning. Also, many engineers left Intel for Apple because they're talented and tired of releasing Skylake every year. Once Intel fully decouples node and architecture, we should see non-mobile CPUs running away from mobile chips like the A12.
This benchmark is essentially Intel's 2015 design on 14nm vs Apple's 2018 design on 7nm.
Except it's what Intel is selling in 2018.
I'm not defending Intel either (I recently chose, and now use, a Ryzen 2700X); I'm just contributing additional facts and context for anyone who wasn't aware of it, since I read all the comments here and no one had mentioned it.