Alder Lake

Yoused

up
Posts
5,512
Reaction score
8,688
Location
knee deep in the road apples of the 4 horsemen
The full 200W+ 8+8 i9 ADL beats M1 Pro/Max in Cinebench multicore by a factor of almost 2.5x, and yet in many SPEC tests they show similar performance.
From what I was reading, running the new i9 flat out seems to be more like 240W, and one account I saw indicated it gets hot enough to boil eggs (100°C), for which an expensive liquid-cooling system would be needed (the same person said they could wind out a Ryzen and cool it with an average fan). So the question becomes: is the improvement really worth that much? For a few people, perhaps.

Meanwhile, to get the ASi to the base power usage of the Alder Lake i9, you would have to add probably about 8 more P cores. At that point, it seems like the i9 would melt down trying to keep up.
 

leman

Site Champ
Posts
611
Reaction score
1,126
From what I was reading, running the new i9 flat out seems to be more like 240W, and one account I saw indicated it gets hot enough to boil eggs (100°C), for which an expensive liquid-cooling system would be needed (the same person said they could wind out a Ryzen and cool it with an average fan). So the question becomes: is the improvement really worth that much? For a few people, perhaps.

Well, M1 also runs close to 100C, but this is about heat, not temperature. And 200W+ is for the desktop chip, not the laptop one (regarding the latter's performance we can only make educated guesses). If you are looking for a big powerful desktop computer, the i9 ADL will definitely be a great chip and Apple currently has nothing to compete in that segment. M1 is more geared towards mobile processors, and I personally have little doubt that M1 Pro/Max will outperform the mobile ADL in almost every real-world scenario.

Meanwhile, to get the ASi to the base power usage of the Alder Lake i9, you would have to add probably about 8 more P cores. At that point, it seems like the i9 would melt down trying to keep up.

To reach 200W? You'd probably need around 32 P-cores.
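
A rough sanity check on that estimate, as a minimal sketch assuming ~6W per Firestorm P-core at full load (that per-core figure is an assumption, not a measured value):

```python
# Back-of-envelope check, NOT measured data: assume each M1 P-core
# draws roughly 6 W flat out (an assumed ballpark figure).
watts_per_p_core = 6.0   # assumed peak power per Firestorm core (W)
target_power = 200.0     # the ADL i9's rough sustained draw (W)

cores_needed = target_power / watts_per_p_core
print(f"~{cores_needed:.0f} P-cores to burn {target_power:.0f} W")
# -> ~33 P-cores, the same ballpark as the 32 quoted above
```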
 

tomO2013

Power User
Posts
96
Reaction score
169
I have zero doubt that M1 Pro/Max will outperform the mobile ADL in most real-world scenarios today, and that a widening performance delta in M1 Pro/Max’s favour will open up as more apps are optimized for Apple Silicon over time.
Those apps that have been updated to use Metal, Core ML, etc. have already shown considerable performance improvements, sometimes by an order of magnitude.

I’d also continue to expect desktop ADL to show its 5-10% performance advantage, at 200W of power, in unoptimized synthetic benchmarks, games, and legacy code relative to M1 Pro/Max.

Totally off topic, but one thing really struck me from a heat and thermals perspective this generation of x86…

Assuming that we have another round of physical consoles (an Xbox Series X successor and PS6) and that vendors do not move fully to cloud streaming, do we still think a potential Xbox or PS6 would use x86 cores at its foundation for the next next-gen consoles?

Obviously a move by Sony/Microsoft to ARM would result in the loss of some backwards compatibility on the physical device, but with an established hybrid business of physical consoles plus cloud streaming already in place, is that really a loss? Heck, we can already stream PowerPC games from the Xbox 360.

I would not be surprised, **IF** we have another physical console round, to see MS and Sony look hard at ARM and shift to it longer term…
AMD still has an advantage there with their APU design experience, AND they already have ARM development experience owing to their reported partnership with MS to develop a competitor to the Apple M1 for Windows laptops.

Exciting times ahead!
 

Yoused

up
Posts
5,512
Reaction score
8,688
Location
knee deep in the road apples of the 4 horsemen
To reach 200W? You'd probably need around 32 P-cores.
I do not believe that wattage should be regarded as a target. If an M-series SoC can get equivalent or better performance at significantly less power, that is a good thing. Intel is still leaning on HT (because going much wider with their OoOE machinery is not really an option for them), which is a dubious gain and burns a lot of juice. The M-series appears to get as much work done, clock-for-clock and watt-for-watt, with a smaller core/pseudo-core set.
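
To make the clock-for-clock, watt-for-watt framing concrete, here is a minimal sketch; every number in it is a placeholder to be replaced with real measurements, not a claim:

```python
# Minimal perf/watt and perf/clock comparison. All figures below are
# PLACEHOLDERS for illustration; substitute real benchmark data.
chips = {
    # name:    (score, package watts, clock GHz) -- hypothetical values
    "i9 ADL": (100.0, 240.0, 5.2),
    "M1 Max": (90.0, 40.0, 3.2),
}

for name, (score, watts, ghz) in chips.items():
    print(f"{name}: {score / watts:.2f} pts/W, {score / ghz:.1f} pts/GHz")
```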
 

Nycturne

Elite Member
Posts
1,111
Reaction score
1,426
Assuming that we have another round of physical consoles (an Xbox Series X successor and PS6) and that vendors do not move fully to cloud streaming, do we still think a potential Xbox or PS6 would use x86 cores at its foundation for the next next-gen consoles?

Obviously a move by Sony/Microsoft to ARM would result in the loss of some backwards compatibility on the physical device, but with an established hybrid business of physical consoles plus cloud streaming already in place, is that really a loss? Heck, we can already stream PowerPC games from the Xbox 360.
It really depends on when a vendor steps forward and provides silicon that is well suited for this space and happens to be built on ARM. Nintendo is in an interesting spot because of the lack of a roadmap for the Tegra platform, and the signs that Nvidia has been letting it rot. Apple’s not going to be interested in being a silicon vendor for Sony or MSFT. Other high-end ARM chips are being made for Android, and they show large gaps relative to Apple’s SoCs. There’s clearly work to be done here before ARM can be brought to home consoles.

Where things break down, though, is that cloud streaming is not a great option for backwards compatibility. But that doesn’t mean it won’t happen. “Just use the cloud” is a refrain I keep seeing in my industry, but it is one that is pretty centric to regions with well-developed internet connectivity, and it winds up leading to underserved communities when not properly thought through and analyzed.
 

tomO2013

Power User
Posts
96
Reaction score
169
Yes - I received a PS5 as a present recently (truthfully, I don’t have much time to game these days). That being said, I did try PS Now, and even with a 1Gb up/down direct fibre connection and sitting right beside the network router, the PS5 streaming experience was not… how would you say…. good? Laggy, and lower resolution than I remembered. I’d definitely prefer the locally provided experience!
 

Cmaier

Site Master
Staff Member
Site Donor
Posts
5,216
Reaction score
8,267
I’ll believe this when I see it.

 

Hrafn

Snowflake from Hell
Posts
903
Reaction score
1,085
I’ll believe this when I see it.

Sure, but cool if true...
But, I'm betting "shah, right..."
 

Yoused

up
Posts
5,512
Reaction score
8,688
Location
knee deep in the road apples of the 4 horsemen
The copy (fine print under the graph) says that they used SPECrate integer n-copy data to determine their measurements. Integer performance is very important, but it seems like they are glossing over at least a third of what a CPU does in the real world.

Also, their power envelope seems very CPU-centric, while the M1 power envelope covers the entire SoC, including, unless I am very much mistaken, RAM and memory controllers. Seems like "Our plane is so much faster – you supply the wings and empennage."
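
A minimal sketch of the adjustment being argued for here; the DRAM and memory-controller wattages are guesses purely for illustration:

```python
# Illustrative only: a CPU-only envelope and a whole-SoC envelope are
# apples-to-oranges until you add back the parts the CPU-only number
# leaves out. All wattages below are assumptions, not measurements.
cpu_only_power = 45.0           # vendor-quoted CPU package power (W)
dram_power = 8.0                # assumed DIMM draw under load (W)
memory_controller_power = 4.0   # assumed uncore/IMC draw (W)

comparable_power = cpu_only_power + dram_power + memory_controller_power
print(f"Envelope comparable to an SoC figure: ~{comparable_power:.0f} W")
```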
 

Andropov

Site Champ
Posts
606
Reaction score
757
Location
Spain
According to that graph, the 11980HK is 8% faster in SPECint2017 than the M1 Max. Which... can't be right? I don't know how they measured it, but AnandTech's review of the M1 Max put the M1 Max ahead of the 11980HK in all but one sub-test of SPECint2017 (and even then, only by 1.8%, 557.xz_r). Overall the 11980HK was at least ~25% slower.

Also, if their own graph is to be trusted, the 12th Gen will come with another 10W increase in maximum SoC power draw. I bet that part they'll deliver on.

EDIT: Also, the graph puts the M1 Max SoC power starting at 20W, when it actually consumes ~200mW when idle.
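
For what it's worth, the baseline you pick moves these percentages around, though not enough to reconcile the two claims; a quick check:

```python
# Percentage claims depend on the baseline. If the 11980HK scores 25%
# below the M1 Max, then the M1 Max is ~33% faster -- and neither
# reading is compatible with the graph's "8% faster" claim.
m1_max = 1.00                    # normalized M1 Max score
hk_11980 = m1_max * (1 - 0.25)   # the "~25% slower" reading

print(f"11980HK vs M1 Max: {hk_11980 / m1_max - 1:+.0%}")  # -25%
print(f"M1 Max vs 11980HK: {m1_max / hk_11980 - 1:+.0%}")  # +33%
print(f"Idle-power overstatement: {20 / 0.2:.0f}x")        # 20 W vs 200 mW
```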
 
Last edited:

Cmaier

Site Master
Staff Member
Site Donor
Posts
5,216
Reaction score
8,267
According to that graph, the 11980HK is 8% faster in SPECint2017 than the M1 Max. Which... can't be right? I don't know how they measured it, but AnandTech's review of the M1 Max put the M1 Max ahead of the 11980HK in all but one sub-test of SPECint2017 (and even then, only by 1.8%, 557.xz_r). Overall the 11980HK was at least ~25% slower.

Also, if their own graph is to be trusted, the 12th Gen will come with another 10W increase in maximum SoC power draw. I bet that part they'll deliver on.

EDIT: Also, the graph puts the M1 Max SoC power starting at 20W, when it actually consumes ~200mW when idle.

Who even knows how they measure the power. They include the RAM power for Apple but not for Intel? They ignore GPU performance? I suspect we’ll find that they don’t come particularly close to M1 Max performance/watt when the parts are actually available for testing.
 

Yoused

up
Posts
5,512
Reaction score
8,688
Location
knee deep in the road apples of the 4 horsemen
Edit: Sorry, I failed to look at the date, this is from August

Here is what I have gleaned from Tom Shardware,

They have a "S/P/M" (server/portable/mobile) strategy. P/E core counts are something like
S: 10/8
P: 6/8
M: 2/8
S models have a GPU with 32 EUs while P and M have 96, presumably because S-level platforms would use a discrete GPU while P might or might not and M probably would not. (The thread counts these configurations imply are sketched below.)
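
For reference, assuming the P cores keep two-way HT and the E cores are single-threaded (which matches the leaks):

```python
# Thread counts implied by the leaked P/E configurations, assuming
# 2-way HT on P cores and single-threaded E cores.
configs = {"S": (8, 8), "P": (6, 8), "M": (2, 8)}  # (P cores, E cores)

for segment, (p, e) in configs.items():
    threads = p * 2 + e   # HT on P cores only
    print(f"ADL-{segment}: {p}P + {e}E = {threads} threads")
# -> S: 24, P: 20, M: 12
```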

The AVX512 and AMX (matrix functions for ML) functionality is present in the P cores but fused off for all but enterprise-level CPUs.

The M models are quoted as drawing 7W (probably CPU-only), so Intel finally has a phone-quality processor, now that nobody really cares. Why would a phone maker even want to use that? Unless their pricing is extremely aggressive against ARM, it seems unlikely that they would get a toehold in that market.
 
Last edited:

tomO2013

Power User
Posts
96
Reaction score
169
Very interesting, but very confusing.
We already know from real-world benchmarks that their high-end 12900K, with a TDP of 228W under boost, achieved SPECint scores maybe 10% faster than the M1 Max.
What secret sauce did Intel use with the mobile parts to achieve a 40% improvement over M1, or 30% faster than desktop Alder Lake?
Surely ICC cannot account for such a large performance improvement. I suspect a large sprinkling of BS on these results.
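
A quick consistency check, taking the thread's own figures at face value:

```python
# Using the figures from this thread: desktop ADL ~10% faster than the
# M1 Max in SPECint, mobile ADL claimed ~40% faster than M1.
desktop_vs_m1 = 1.10   # 12900K vs M1 Max (per the benchmarks above)
mobile_vs_m1 = 1.40    # Intel's claimed mobile advantage

mobile_vs_desktop = mobile_vs_m1 / desktop_vs_m1
print(f"Implied mobile-over-desktop gain: {mobile_vs_desktop - 1:.0%}")
# -> ~27%, roughly the "30% faster than desktop" figure in question
```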

Very much looking forward to having 3rd parties independently verify!
 

Cmaier

Site Master
Staff Member
Site Donor
Posts
5,216
Reaction score
8,267
Very interesting, but very confusing.
We already know from real-world benchmarks that their high-end 12900K, with a TDP of 228W under boost, achieved SPECint scores maybe 10% faster than the M1 Max.
What secret sauce did Intel use with the mobile parts to achieve a 40% improvement over M1, or 30% faster than desktop Alder Lake?
Surely ICC cannot account for such a large performance improvement. I suspect a large sprinkling of BS on these results.

Very much looking forward to having 3rd parties independently verify!

I believe the secret sauce is "we made up a graph."
 

tomO2013

Power User
Posts
96
Reaction score
169
I believe the secret sauce is "we made up a graph."
Wouldn’t be the first time - but sure they are all at it!
I also find it very interesting how similar the graph’s presentation is to Apple’s M1 graphs!

Classic Intel.

I’m hoping to see Apple get back to their classic Intel-roasting adverts (literally)!

They should rerun a version of this, swapping in M1 for PowerPC, and referencing video experts!
 

mr_roboto

Site Champ
Posts
272
Reaction score
432
The M models are quoted as drawing 7W (probably CPU-only), so Intel finally has a phone-quality processor, now that nobody really cares. Why would a phone maker even want to use that? Unless their pricing is extremely aggressive against ARM, it seems unlikely that they would get a toehold in that market.
It's not a phone chip; they truly gave up on phones years ago. They never got their Atom-based phone chips into actual phones without effectively buying their way in.

The computers Intel currently considers to be their "mobile" market segment range from Windows tablets at the smallest to notebooks of the same class as the old 16" Intel MacBook Pro (~45W TDP). In this case, I'm sure the 7W version is for tablets.
 