# Alder Lake



## Cmaier

The Intel 12th Gen Core i9-12900K Review: Hybrid Performance Brings Hybrid Complexity (www.anandtech.com)




Those power numbers are, as predicted, insane: 125W base, 241W turbo, 272W max. For just a CPU.

A P-core running a single thread uses 55-65W by itself; an E-core, 11-15W.

Weirdly, the P-cores support AVX-512 and the E-cores do not, but instead of just ensuring that all threads that need it are routed to the P-cores, they disabled the AVX-512 hardware in the BIOS.

Thread Director makes naive assumptions: a workload not in the user's focus gets deprioritized. Dumb.


----------



## SuperMatt

Looks like a nice place to visit if you’re ever near Tacoma:

Alder Lake Park Campground - Visit Rainier (visitrainier.com)
The 161-acre Alder Lake Park, owned and operated by Tacoma Power’s Nisqually River Project, offers options for everyone from traditional tent campers to those who want more of the comforts...


----------



## Pumbaa

SuperMatt said:


> Looks like a nice place to visit if you’re ever near Tacoma:
>
> Alder Lake Park Campground - Visit Rainier (visitrainier.com)
> The 161-acre Alder Lake Park, owned and operated by Tacoma Power’s Nisqually River Project, offers options for everyone from traditional tent campers to those who want more of the comforts...



Yes. Water cooling makes sense given the power numbers mentioned by @Cmaier.


----------



## leman

Ugh, yeah, I’m rather underwhelmed. Frankly, I don’t understand all the praise the reviewers are singing. So Intel managed to build a CPU core that’s slower than ARM cores from 2018 but still manages to consume more power. What’s the big deal?


----------



## User.168

.


----------



## SuperMatt

theSeb said:


> Not sure, but I do see reviewers being awfully kind to Intel on multiple sites. Minor wins against current (soon to be previous) gen AMD in some benchmarks = crushing defeat. Weird choice of words from a site that I have considered neutral.
> 
> I think the big deal may be that Intel has actually managed to keep up and slightly beat AMD CPUs for the first time in quite a while, hence the big hype and the big news. It does seem though, judging by the consumption, that their little cores are not little enough.



People were talking in the 1990s about how they should consider moving on from x86… is it surprising they are being lapped by competitors in 2021? It feels like, instead of pursuing something new, they just keep trying to get that last drop of juice out of the 286 mark 10000.


----------



## Cmaier

Steven Sinofsky:

Intel 12th Gen Core i9-12900K Review https://www.anandtech.com/show/1704...w-hybrid-performance-brings-hybrid-complexity // My head hurts. The problem is the OS has none of these semantics, so this asks every app to figure this out and have OS APIs to call to use app info. It's roughly the equivalent of manually managing memory swapping. https://www.twitter.com/i/web/status/1456482184760479744/


----------



## Yoused

SuperMatt said:


> People were talking in the 1990s about how they should consider moving on from x86



They did. Well, they tried, anyway. They had an amazing new 64-bit architecture that was going to set the world on fire. Unfortunately, it set itself on fire instead, producing mediocre results at too-high TDPs.


----------



## Cmaier

Yoused said:


> They did. Well, they tried, anyway. They had an amazing new 64-bit architecture that was going to set the world on fire. Unfortunately, it set itself on fire instead, producing mediocre results at too-high TDPs.




I prefer to think I had a personal hand in that, by helping to design the 64-bit architecture that actually did set the world on fire.


----------



## Renzatic

Yoused said:


> They did. Well, they tried, anyway. They had an amazing new 64-bit architecture that was going to set the world on fire. Unfortunately, it set itself on fire instead, producing mediocre results at too-high TDPs.




Talking about Itanium? I vaguely remember people talking about it being the next big thing, then it disappearing with nary a mention since.

As far as Alder Lake goes, this more than anything shows the weakness in Intel's designs. Yeah, it's a fast chip, but it has to consume a ridiculous amount of power to push itself ahead of the competition. To use an analogy, it's like Intel took a decent enough car, tweaked the aerodynamics a bit, then welded a couple of jet engines on the side, all so they could say they have the fastest car in the world.

...yeah, it's fast, but it's not exactly long term feasible. Meanwhile, the competition is making cars that are smaller, considerably faster, and don't require high octane jet fuel to run.


----------



## SuperMatt

Renzatic said:


> Talking about Itanium? I vaguely remember people talking about it being the next big thing, then disappearing with nary a mention since.



I seem to recall Itanium being called the “Itanic” or something like that.


----------



## Yoused

SuperMatt said:


> I seem to recall Itanium being called the “Itanic” or something like that.



It did ultimately earn that nickname. It looked like they structured it to optimize its ability to emulate x86 as efficiently as possible but came up way short even in that.


----------



## Cmaier

Yoused said:


> It did ultimately earn that nickname. It looked like they structured it to optimize its ability to emulate x86 as efficiently as possible but came up way short even in that.




The original Itanium actually had an entire x86 core sitting on the die. A real bad one.


----------



## Yoused

Jim Salter over at Ars seems to think the Alder Lake design is the bee's knees, "crushing" Ryzen in benchmarks, and that one of its big drawbacks is "_Does not come with free kitten._" I mean, kittens are cute and fluffy, but then you have to feed them and take care of them and, eventually, cope with the cattitude they grow into.


----------



## Cmaier

The fact that software can tell the difference between E- and P- cores is…inauspicious.

Faulty DRM breaks dozens of games on Intel’s Alder Lake CPUs (arstechnica.com)
About 50 titles are affected, though there are workarounds and pending fixes.


----------



## Andropov

The Thread Director using focus to decide which apps go on the performance cores is honestly the dumbest thing I've read about this launch. I get that Intel doesn't have Apple's control over the OS to implement something like GCD and QoS for processes, but their solution is just... dumb. Maybe they rushed this launch to try to save face against Apple Silicon and that's why some things are half-baked (AVX-512 support?).

I wonder what Intel has left for next year. Alder Lake is already in a new process node, with a new DDR version, a new core architecture and heterogeneous CPU cores. Since Intel used to update process node and core μarch every two years... what's left for 2022?
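For contrast, Apple's model lets the app declare intent up front via QoS classes rather than having the OS guess from window focus. A toy sketch of the difference (the class names come from Apple's API; the routing logic here is invented for illustration, not Apple's or Intel's actual scheduler):

```python
# Illustrative only: app-declared QoS vs. a focus-based heuristic.
QOS_CLASSES = ["userInteractive", "userInitiated", "utility", "background"]

def route_by_qos(qos: str) -> str:
    """App declares its intent; the scheduler routes accordingly."""
    return "P-core" if qos in ("userInteractive", "userInitiated") else "E-core"

def route_by_focus(has_focus: bool) -> str:
    """Focus heuristic: the OS guesses from window state alone."""
    return "P-core" if has_focus else "E-core"

# A user-initiated export running in an unfocused window: QoS routes it
# correctly, while the focus heuristic deprioritizes work the user is
# actively waiting on.
print(route_by_qos("userInitiated"))  # P-core
print(route_by_focus(False))          # E-core
```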


----------



## Yoused

Someone at Ars posted a video link that showed an Alder Lake CPU getting a better (something) performance score at 35W than an M1 Max at 30W. Of course, if you adjust for the P/W value, the Max did get a better result per watt, whatever that means – and it looks like the 35W figure was not a package number, which appeared to be more like 44W.

The i9 appeared to be handicapped by having two cores disabled, so it was beating the Max with only 14 cores running. Well, sorta beating it, but not really. 14 cores trouncing 10 cores with a worse net level of performance. Yeah, maybe not so much.

And, of course, the video is entirely in Mandarin, so, good luck with that.


----------



## Cmaier

Yoused said:


> Someone at ars posted a video link that showed an alder lake cpu getting a better (something) performance score at 35W than an M1 Max at 30W. Of course, if you adjust for the P/W value, the Max did get a better result per watt, whatever that means – and it looks like the 35W figure was not a package number, which appeared to be more like 44W.
> 
> The i9 appeared to be handicapped by having two cores disabled, so it was beating the Max with only 14 cores running. Well, sorta beating it, but not really. 14 cores trouncing 10 cores with a worse net level of performance. Yeah, maybe not so much.
> 
> And, of course, the video is entirely in Mandarin, so, good luck with that.



Ah, this is the “emulated result” thing I was just asked about, I guess?

Let’s play this game in reverse. Add more cores to M1 Max and increase the package power to match, and emulate the performance


----------



## januarydrive7

Cmaier said:


> Ah, this is the “emulated result” thing I was just asked about, I guess?
> 
> Let’s play this game in reverse. Add more cores to M1 Max and increase the package power to match, and emulate the performance



Please do


----------



## leman

Yoused said:


> Someone at ars posted a video link that showed an alder lake cpu getting a better (something) performance score at 35W than an M1 Max at 30W. Of course, if you adjust for the P/W value, the Max did get a better result per watt, whatever that means – and it looks like the 35W figure was not a package number, which appeared to be more like 44W.
> 
> The i9 appeared to be handicapped by having two cores disabled, so it was beating the Max with only 14 cores running. Well, sorta beating it, but not really. 14 cores trouncing 10 cores with a worse net level of performance. Yeah, maybe not so much.
> 
> And, of course, the video is entirely in Mandarin, so, good luck with that.




The thing is, I do not doubt this. M1 doesn't perform too well in Cinebench, and this has been discussed in detail before. In fact, if the benchmark is accurate, this means that the mobile ADL (with a comparable config) is likely to be at least 30-40% slower than M1 Pro/Max in demanding sustained CPU workloads. The full 200W+ 8+8 i9 ADL beats M1 Pro/Max in Cinebench multicore by a factor of almost 2.5x, and yet in many SPEC tests they show similar performance.


----------



## Yoused

leman said:


> The full 200W+ 8+8 i9 ADL beats M1 Pro/Max in Cinebench multicore by a factor of almost 2.5x, and yet in many SPEC tests they show similar performance.



From what I was reading, running the new i9 full out seems to be more like 240W, and one account I saw indicated that it was up to boiling eggs (100C), for which an expensive liquid cooling system would be needed (the same person said they could wind out a Ryzen and cool it with an average fan). So the question becomes, is the improvement really worth that much? For a few people, perhaps.

Meanwhile, to get the ASi to the base power usage of the Alder Lake i9, you would have to add probably about 8 more P cores. At that point, it seems like the i9 would melt down trying to keep up.


----------



## leman

Yoused said:


> From what I was reading, running the new i9 full out seems to be more like 240W, and one account I saw indicated that it was up to boiling eggs (100C), for which an expensive liquid cooling system would be needed (the same person said they could wind out a Ryzen and cool it with an average fan). So the question becomes, is the improvement really worth that much? For a few people, perhaps.




Well, M1 also runs close to 100C, but this is about heat, not temperature. And 200W+ is for the desktop chip, not the laptop one (regarding the latter's performance we can only make educated guesses). If you are looking for a big powerful desktop computer, the i9 ADL will definitely be a great chip and Apple currently has nothing to compete in that segment. M1 is more geared towards mobile processors, and I personally have little doubt that M1 Pro/Max will outperform the mobile ADL in almost every real-world scenario. 



Yoused said:


> Meanwhile, to get the ASi to the base power usage of the Alder Lake i9, you would have to add probably about 8 more P cores. At that point, it seems like the i9 would melt down trying to keep up.




To reach 200W? You'd probably need around 32 P-cores.


----------



## tomO2013

I have zero doubt that M1 Pro/Max will outperform the mobile ADL in most real-world scenarios today, and that a widening performance delta in M1 Pro/Max’s favour will open up as more apps are optimized for Apple Silicon over time.
Those apps that have been updated to use Metal, CoreML etc. have already shown considerable performance improvements, sometimes by an order of magnitude.

I’d also continue to expect ADL desktop to show its 5-10% performance advantage with 200W of power in unoptimized synthetic benchmarks, games and legacy code relative to M1 Pro/Max.

Totally off topic, but one thing really struck me from a heat and thermals perspective this generation of x86…

Assuming that we have another round of physical consoles (Xbox Series X 2 and PS6) and that vendors do not move fully to cloud streaming - then do we still think that a potential Xbox or PS6 would still use an x86 fundamental core for next next gen consoles? 

Obviously a move of Sony/Microsoft to ARM would result in the loss of some backwards compatibility on physical device, but with a pre-existing established hybrid physical console and cloud streaming business offering, is that really a loss? Heck we can stream PowerPC games from Xbox 360. 

I would not be surprised, **IF** we have another physical console round, to see MS and Sony look heavily at a shift to ARM longer term…
AMD still has an advantage there with their APU design experience AND development experience with ARM already, owing to their partnership with MS to develop a competitor to Apple's M1 for Windows laptops.

Exciting times ahead!


----------



## Yoused

leman said:


> To reach 200W? You'd probably need around 32 P-cores.



I do not believe that wattage should be regarded as a target. If a M-series SoC can get equivalent/better performance at significantly less power, that is a good thing. Intel is still using HT (because OoOE is not really an option for them), which is a dubious gain and burns a lot of juice. The M-series appears to get as much work done, clock-for-clock and watt-for-watt, with a smaller core/pseudo-core set.


----------



## Nycturne

tomO2013 said:


> Assuming that we have another round of physical consoles (Xbox Series X 2 and PS6) and that vendors do not move fully to cloud streaming - then do we still think that a potential Xbox or PS6 would still use an x86 fundamental core for next next gen consoles?
> 
> Obviously a move of Sony/Microsoft to ARM would result in the loss of some backwards compatibility on physical device, but with a pre-existing established hybrid physical console and cloud streaming business offering, is that really a loss? Heck we can stream PowerPC games from Xbox 360.



It really depends on when a vendor steps forward and provides silicon that is well suited for this space that happens to be built on ARM. Nintendo is in an interesting spot because of the lack of a roadmap for the Tegra platform, and the signs that Nvidia has been letting it rot. Apple’s not going to be interested in being a silicon vendor for Sony or MSFT. Other high end chips are being made for Android, and show large gaps between it and Apple’s SoCs. There’s clearly work to be done here to be able to bring ARM to home consoles. 

Where things break down though is that cloud streaming is not a _great_ option for backwards compatibility. But it doesn’t mean it won’t happen. “Just use the cloud” is a refrain I keep seeing in my industry, but it is one that is pretty centric to regions with well developed internet connectivity, and winds up leading to underserved communities when not properly thought through and analyzed.


----------



## tomO2013

Yes - I received a present of a PS5 recently (truthfully, I don’t have much time to game these days). That being said, I did try PS Now, and even with a 1Gb up/down direct fibre connection and sitting right beside the network router, the PS5 streaming experience was not… how would you say… good? Laggy, lower resolution than I remembered. I’d definitely prefer the locally provided experience!


----------



## Yoused

Taking TDP to a new level

Backwards capacitor causes fiery failure for some high-end Asus motherboards (arstechnica.com)
Asus has issued a statement and recall for affected Z690 boards.


----------



## Cmaier

Yoused said:


> Taking TDP to a new level
>
> Backwards capacitor causes fiery failure for some high-end Asus motherboards (arstechnica.com)
> Asus has issued a statement and recall for affected Z690 boards.




What’s funny to me is they know the serial numbers of the boards which have the backward capacitors, which means, what, they intentionally did that and noted it in their system?


----------



## Yoused

Cmaier said:


> What’s funny to me is they know the serial numbers of the boards which have the backward capacitors, which means, what, they intentionally did that and noted it in their system?



Well, they probably know it happened on 李星合's shift, based on the ones they have discovered.


----------



## Cmaier

I’ll believe this when I see it.

Intel Says New Core i9 Processor for Laptops is Faster Than Apple's M1 Max Chip (forums.macrumors.com)
Intel today unveiled new 12th-generation Core processors suitable for laptops, and as part of the announcement, it claimed that the new Core i9 is not only faster than Apple's M1 Max chip in the 16-inch MacBook Pro, but is the fastest mobile processor ever. The new Core i9 features a 14-core...


----------



## Hrafn

Cmaier said:


> I’ll believe this when I see it.
>
> Intel Says New Core i9 Processor for Laptops is Faster Than Apple's M1 Max Chip (forums.macrumors.com)
> Intel today unveiled new 12th-generation Core processors suitable for laptops, and as part of the announcement, it claimed that the new Core i9 is not only faster than Apple's M1 Max chip in the 16-inch MacBook Pro, but is the fastest mobile processor ever. The new Core i9 features a 14-core...



Sure, but cool if true...
But, I'm betting "shah, right..."


----------



## Yoused

The copy (fine print under the graph) says they used SPECrate integer n-copy data to determine their measurements. Integer performance is very important, but it seems like they are glossing over at least a third of what a CPU does in the real world.

Also, their power envelope seems very CPU-centric, while the M1 power envelope covers the entire SoC, including, unless I am very much mistaken, RAM and memory controllers. Seems like "_Our plane is so much faster – you supply the wings and empennage._"


----------



## Andropov

According to that graph, the 11980HK is 8% faster in SPECint2017 than the M1 Max. Which... can't be right? I don't know how they measured it, but AnandTech's review of the M1 Max put the M1 Max ahead of the 11980HK in all but one sub-test of SPECint2017 (and even then, only by 1.8%, 557.xz_r). Overall it was at least ~25% slower.

Also, if their own graph is to be trusted, the 12th Gen will come with another 10W increase in maximum SoC power draw. I bet that part they'll deliver on.

EDIT: Also, the graph puts the M1 Max SoC power as starting at 20W, when it actually consumes about 200mW when idle.


----------



## Cmaier

Andropov said:


> According to that graph, the 11980HK is 8% faster in SPECint2017 than the M1 Max. Which... can't be right? I don't know how they measured it, but AnandTech's review of the M1 Max put the M1 Max ahead of the 11980HK in all but one sub-test of SPECint2017 (and even then, only by 1.8%, 557.xz_r). Overall it was at least ~25% slower.
> 
> Also, if their own graph is to be trusted, the 12th Gen will come with another 10W increase in maximum SoC power draw. I bet that part they'll deliver on.
> 
> EDIT: Also the graph puts the M1 Max SoC power starting at 20W when it actually consumes about ~200mW when idle.




Who even knows how they measure the power. They include the RAM power for Apple but not for Intel? They ignore GPU performance? I suspect we’ll find that they don’t come particularly close to M1 Max performance/watt when the parts are actually available for testing.


----------



## Yoused

Edit: Sorry, I failed to look at the date, this is from August

Here is what I have gleaned from Tom Shardware,

They have an "S/P/M" (desktop/portable/mobile) strategy. P/E core counts are something like:
S: 10/8
P: 6/8
M: 2/8
S models have a GPU with 32 EUs while P and M have 96, presumably because S-level platforms would use a discrete GPU while P might or might not and M probably would not.

The AVX-512 and AMX (matrix functions for ML) functionality is present in the P cores but fused off for all but enterprise-level CPUs.

The M models are quoted as drawing 7W (probably CPU-only), so Intel finally has a phone-quality processor, now that nobody really cares. Why would a phone maker even want to use that? Unless their pricing is extremely aggressive against ARM, it seems unlikely that they would get a toe into that market.


----------



## tomO2013

Very interesting but very confusing.
We already know from real-world benchmarks that their high-end 12900K, drawing 228W under boost, achieved SPECint scores maybe 10% faster than the M1 Max.
What secret sauce did Intel use with the mobile parts to achieve a 40% improvement over M1, or 30% faster than desktop Alder Lake?
Surely ICC cannot account for such a large performance improvement. I suspect a large sprinkling of BS on these results.

Very much looking forward to having 3rd parties independently verify!


----------



## Cmaier

tomO2013 said:


> Very interesting but very confusing.
> We already know from real-world benchmarks that their high-end 12900K, drawing 228W under boost, achieved SPECint scores maybe 10% faster than the M1 Max.
> What secret sauce did Intel use with the mobile parts to achieve a 40% improvement over M1, or 30% faster than desktop Alder Lake?
> Surely ICC cannot account for such a large performance improvement. I suspect a large sprinkling of BS on these results.
> 
> Very much looking forward to having 3rd parties independently verify!




I believe the secret sauce is "we made up a graph."


----------



## Yoused

oops, not Alder Lake.


----------



## tomO2013

Cmaier said:


> I believe the secret sauce is "we made up a graph."



Wouldn’t be the first time - but sure, they’re all at it!
I also find it very interesting how similar the graph’s presentation is to Apple’s M1 graphs!

Classic Intel.

I’m hoping to see Apple get back to their classic Intel-roasting adverts (literally)!

They should rerun a version of this, swapping M1 in for PowerPC and referencing video experts!


----------



## mr_roboto

Yoused said:


> The M models are quoted as drawing 7W (probably CPU-only), so Intel finally has a phone-quality processor, now that nobody really cares. Why would a phone maker even want to use that? Unless their pricing is extremely aggressive against ARM, it seems unlikely that they would get a toe into that market.



It's not a phone chip, they truly gave up years ago. They never got their Atom-based phone chips into actual phones without effectively buying their way in.

The computers Intel currently considers to be their "mobile" market segment range from Windows tablets at the smallest to notebooks of the same class as the old 16" Intel MacBook Pro (~45W TDP).  In this case, I'm sure the 7W version is for tablets.


----------



## Joelist

So they MAY have a chip that MAY be 10% faster on synthetic benchmarks while drawing over three times the power and not a word on graphical performance?


----------



## Yoused

Joelist said:


> So they MAY have a chip that MAY be 10% faster on synthetic benchmarks while drawing over three times the power and not a word on graphical performance?



Alder Lake is not a UMA-design SoC: Intel is expecting most use cases to employ a discrete GPU, so graphical performance is less likely to be intrinsic. In addition, an article from August suggested that consumer-level CPUs will have AVX-512 and AMX (matrix logic for ML) present but fused off (non-reachable) except in enterprise/server models.

In other words, comparing the rich capabilities of M1 to the baseline capabilities of AL is like comparing a fisherman's pontoon boat to an Alaska Marine Highway multi-vehicle ferry with staterooms: the former might be faster across the lake, but getting a lot of people and cars up the inside passage may not be a thing it can accomplish efficiently on its own without help.


----------



## jbailey

Intel is reportedly telling motherboard/BIOS vendors to disable the unsupported AVX-512 instructions in a microcode update. Previously, AnandTech reported that you could disable the efficiency cores (which don’t have AVX-512 hardware) and enable AVX-512 in the BIOS for the performance cores.
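If such a microcode update lands, the change should be visible from userspace. A minimal Linux sketch for checking which AVX-512 feature flags the kernel still advertises (generic CPUID-flag parsing, not Alder Lake-specific):

```python
# Parse /proc/cpuinfo and report any avx512* feature flags the kernel
# exposes. After a BIOS/microcode change that hides AVX-512, this set
# would come back empty.
def avx512_flags() -> set:
    flags = set()
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    # "flags : fpu vme ... avx512f avx512dq ..."
                    flags.update(line.split(":", 1)[1].split())
                    break
    except FileNotFoundError:
        pass  # not a Linux system
    return {f for f in flags if f.startswith("avx512")}

print(avx512_flags() or "no AVX-512 reported")
```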


----------



## Andropov

Leaked Alder Lake 12900HK benchmark.

Looks like it's 2-3% faster than M1 Max in single-core, 10-15% faster than M1 Max in multi-core (judging by Geekbench results). PL2 limited to 115W, but the exact energy efficiency is uncertain as of now. Doesn't look very promising though; it looks like they aimed to surpass the competition in performance regardless of power consumption. For example, it's 30% faster than last year's AMD 5900HX at more than twice the power consumption (and the AMD chip is running on slower RAM too).


----------



## Cmaier

Andropov said:


> Leaked Alder Lake 12900HK benchmark.
> 
> Looks like its 2-3% faster than M1 Max in single core, 10-15% faster than M1 Max in multicore (judging by Geekbench results). PL2 limited to 115W, but the exact energy efficiency is uncertain as of now. Doesn't look very promising though, looks like they aimed for surpassing the competition in performance regardless of power consumption. For example, it's 30% faster than last year's AMD 5900HX at more than twice the power consumption (and the AMD chip is running on slower RAM too).




That’s what the evidence seems to be pointing to.  They just moved to the right on the perf/power curve.


----------



## Yoused

Andropov said:


> For example, it's 30% faster than last year's AMD 5900HX at more than twice the power consumption (and the AMD chip is running on slower RAM too).



Sounds like that would put Ryzen pretty close to M1 in perf/power – at least until you factor in the GPU. I think I read that the M1 GPU can pull as much as 100 watts at times.


----------



## Andropov

Lol, I went back to the source to check their numbers and couldn't figure how they got the '30% faster than 5900HX'. Finally it looks like what they did was:
- Assume that ALL CineBench tests (R15 and R23 specifically) use the same CPU package power.
- Intel 12900HK got 2697 points in CineBench *R15* (plugged in).
- Intel 12900HK used an average of 65.4W in CineBench *R23* (plugged in).
- AMD 5900HX got 2086 points in CineBench *R15* (plugged in).
- AMD 5900HX used an average of 29.3W in CineBench *R15* (plugged in).

The problem is that there's no data for how much CPU package power the 12900HK used in Cinebench R15, just on Cinebench R23. IF both Cinebench tests average about the same CPU package power: Intel 12900HK gets ~40 Cinebench R15 points per watt, AMD 5900HX gets ~70 Cinebench R15 points per watt. So +29% faster using +123% more power (not 23% more, 123% more).

That's a big if, though.

Also, all scores get cut in half (literally) when unplugged. So, on battery, the 12900HK barely outperforms the regular M1 (not Pro/Max).
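The arithmetic above can be checked in a few lines (using the leak's own numbers, under its same assumption that R15 and R23 package power are comparable):

```python
# Cinebench R15 scores and the (assumed comparable) package-power figures.
intel_score, intel_watts = 2697, 65.4  # 12900HK: R15 score, R23 avg power
amd_score, amd_watts = 2086, 29.3      # 5900HX:  R15 score, R15 avg power

print(round(intel_score / intel_watts))            # 41 points per watt
print(round(amd_score / amd_watts))                # 71 points per watt
print(round((intel_score / amd_score - 1) * 100))  # +29% faster
print(round((intel_watts / amd_watts - 1) * 100))  # +123% more power
```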


----------



## Cmaier

Andropov said:


> Lol, I went back to the source to check their numbers and couldn't figure how they got the '30% faster than 5900HX'. Finally it looks like what they did was:
> - Assume that ALL CineBench tests (R15 and R23 specifically) use the same CPU package power.
> - Intel 12900HK got 2697 points in CineBench *R15* (plugged in).
> - Intel 12900HK used an average of 65.4W in CineBench *R23* (plugged in).
> - AMD 5900HX got 2086 points in CineBench *R15* (plugged in).
> - AMD 5900HX used an average of 29.3W in CineBench *R15* (plugged in).
> 
> The problem is that there's no data for how much CPU package power the 12900HK used in Cinebench R15, just on Cinebench R23. IF both Cinebench tests average about the same CPU package power: Intel 12900HK gets ~40 Cinebench R15 points per watt, AMD 5900HX gets ~70 Cinebench R15 points per watt. So +29% faster using +123% more power (not 23% more, 123% more).
> 
> That's a big if, though.
> 
> Also, all scores get cut in half (literally) when unplugged. So, on battery, the 12900HK barely outperforms the regular M1 (not Pro/Max).




Intel is going to look pretty dumb when these things are out in the wild and we can run real tests and put numbers out there.


----------



## Souko

On power consumption: my M1 Pro 8-core (6+2) draws about 20.5W average in Cinebench R23 (6 P-cores 20W, 2 efficiency cores 0.5W; CPU only - no RAM, no GPU, ...), and the score was 9540. So an M1 Pro/Max with a 10-core CPU should draw about 27W.

In low power mode it was about 12.2W CPU-only (probably around 16W for the 10-core), and the score for the 8-core M1 Pro was 7904 for me.

Power consumption was measured using Terminal - sudo powermetrics.
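The 10-core extrapolation above, spelled out (assuming P-core power scales linearly with core count, which is the post's own assumption):

```python
# Scale a measured 6P+2E M1 Pro draw to the 8P+2E parts.
p_cluster_power = 20.0  # W, measured across 6 P-cores in Cinebench R23
e_cluster_power = 0.5   # W, measured across 2 E-cores
per_p_core = p_cluster_power / 6

est_10core = 8 * per_p_core + e_cluster_power
print(round(est_10core))  # ~27 W
```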


----------



## Cmaier

Souko said:


> On power consumption: my M1 Pro 8-core (6+2) draws about 20.5W average in Cinebench R23 (6 P-cores 20W, 2 efficiency cores 0.5W; CPU only - no RAM, no GPU, ...), and the score was 9540. So an M1 Pro/Max with a 10-core CPU should draw about 27W.
> 
> In low power mode it was about 12.2W CPU-only (probably around 16W for the 10-core), and the score for the 8-core M1 Pro was 7904 for me.
> 
> Power consumption was measured using Terminal - sudo powermetrics.



Welcome aboard, by the way!


----------



## Nycturne

Yoused said:


> Sounds like that would put Ryzen pretty close to M1 in perf/power – at least until you factor in the GPU.* I think I read that the M1 GPU can pull as much as 100 watts at times.*




As far as I know, that would be full package power with both CPU and GPU loaded. That said, I've yet to get the GPU on an M1 Max to pull the 55-60W Apple claims outside of a couple specific benchmark scenarios. Lots of underutilization in the (admittedly small and Rosetta-dependent) selection of stuff I've tried so far.


----------



## Souko

Cmaier said:


> Welcome aboard, by the way!



Thanks!



Nycturne said:


> As far as I know, that would be full package power with both CPU and GPU loaded. That said, I've yet to get the GPU on an M1 Max to pull the 55-60W Apple claims outside of a couple specific benchmark scenarios. Lots of underutilization in the (admittedly small and Rosetta-dependent) selection of stuff I've tried so far.



OK, so I ran some tests for the GPU. I have the base MBP 14", 14-core GPU. Power consumption was measured using Terminal - sudo powermetrics.
I ran 3DMark Wild Life Extreme (unlimited) and GFXBench Metal - Aztec Ruins (1440p, offscreen). Average power draw was 19.5W for the GPU only in both tests. So the estimate is around 45W for the 32-core M1 Max.


I got 47W max on the M1 Pro with GFXBench + Cinebench R23 (21W GPU + 20W CPU + 2W DRAM + 4W other).

I would say the maximum power the M1 Max takes is about 81W (27W CPU + 45W GPU + 4W DRAM (not sure if 32GB consumes twice the 16GB power) + 4W other, plus 1-2W since these are maximum rather than average draws). In some specific tests, probably more.
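The back-of-envelope total, spelled out (all inputs are this post's own measurements and guesses, including the linear GPU-scaling assumption):

```python
# Scale the measured 14-core GPU draw to 32 cores, then sum the components.
gpu_14core = 19.5                  # W, measured in both GPU benchmarks
gpu_32core = gpu_14core / 14 * 32  # linear-scaling assumption
cpu = 27.0                         # W, from the earlier 10-core CPU estimate
dram, other, headroom = 4.0, 4.0, 1.0

print(round(gpu_32core))                                  # ~45 W
print(round(cpu + gpu_32core + dram + other + headroom))  # ~81 W
```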


----------



## casperes1996

In the very first post here, Cmaier talked about the AVX-512 capability of the P cores. Well, not just is it disabled in BIOS, Intel has said they're looking into ways of ensuring it cannot be enabled at all, even if you disable the E-cores. Don't know why they would potentially want to do that but they've said it, so. Think I read it on Anandtech, may have been GN or Hardware Unboxed though. 

I think Alder Lake will also be rather interesting on the Linux front as patches roll in to use Intel's Thread Director and we see how the Linux kernel might utilise it for AMP scheduling. Has the Windows NT kernel ever had to deal with different core types like this? Have the ARM chips Windows has supported in the past even been big.LITTLE? Just looked up the Snapdragon 8c and it looks like that's just eight cores of one type.

Do we know for a fact that enabling AVX-512 in the BIOS requires disabling the E-cores, or is it just that Windows can't (yet) handle it being supported and unsupported at the same time in the scheduler? In that case Linux could catch an illegal AVX-512 instruction on an E-core and reschedule the thread onto a P-core. There'd be a lot of decisions to make about the migration logic, though: whether a thread stays pinned to P-cores after its first AVX-512 instruction even if it doesn't issue another one for a long time, and so on.
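For what it's worth, on Linux you can at least see what the kernel advertises. A minimal sketch (Linux/x86 only; on Alder Lake with the E-cores enabled, the microcode hides the flag entirely):

```python
# Check whether the kernel reports the AVX-512 foundation flag.
# On Alder Lake with E-cores enabled, microcode hides avx512f, so this
# returns False even though the P-core silicon has the execution units.
def has_avx512f(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx512f" in line.split()
    except OSError:
        pass  # not Linux, or /proc unavailable
    return False

print("avx512f visible:", has_avx512f())
```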


----------



## Cmaier

casperes1996 said:


> In the very first post here, Cmaier talked about the AVX-512 capability of the P cores. Well, not just is it disabled in BIOS, Intel has said they're looking into ways of ensuring it cannot be enabled at all, even if you disable the E-cores. Don't know why they would potentially want to do that but they've said it, so. Think I read it on Anandtech, may have been GN or Hardware Unboxed though.
> 
> I think Alder Lake will also be rather interesting on the Linux front as patches roll in to use Intel's Thread Director thingy and see how the Linux kernel might utilise it and perform AMP scheduling - Has the Windows NT Kernel ever had to deal with different core types like this? The ARM cores Windows have supported in the past have they even been big.Little? Just looked up the Snapdragon 8c and it looks like that's just 8 cores of one type.
> 
> Do we know for a fact that enabling AVX-512 in the BIOS requires disabling the E-cores, or is it just that Windows can't handle it both being supported and not at the same time in the scheduler (yet)? Because in that case Linux could catch illegal AVX-512 on E cores and schedule those jobs on P cores though there's a lot of decisions to be made regarding the logic for moving threads, their permanency on P cores after encountering AVX-512 for the first time and potentially not for a long time again after that and so on, but still




I can’t think of any heterogeneous cores that the NT scheduler has had to deal with previously.  I may be forgetting something.  It’s a bit of an interesting computer science problem figuring out how you would dispatch threads so as to avoid the problem, unless you have some facility in the OS where threads that need AVX-512 are marked somehow.
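The crude userspace version of that marking on Linux is just CPU affinity: a thread that knows it's about to run AVX-512 pins itself to the P-cores first. A sketch, assuming a hypothetical topology where logical CPUs 0-15 are the P-cores and 16-23 the E-cores (the real layout would come from sysfs):

```python
import os

# Hypothetical Alder Lake topology: logical CPUs 0-15 = P-cores
# (8 cores x 2 threads), 16-23 = E-cores. Real topology via sysfs.
P_CORES = set(range(16))

def run_on_p_cores(fn, *args):
    """Pin the calling thread to P-cores, run fn, then restore affinity."""
    old = os.sched_getaffinity(0)
    allowed = P_CORES & old
    os.sched_setaffinity(0, allowed if allowed else old)
    try:
        return fn(*args)
    finally:
        os.sched_setaffinity(0, old)

# e.g. run_on_p_cores(some_avx512_heavy_kernel, data)
```

A real scheduler would do this on the #UD fault instead of trusting threads to self-identify, which is where the policy questions above come in.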


----------



## mr_roboto

casperes1996 said:


> In the very first post here, Cmaier talked about the AVX-512 capability of the P cores. Well, not just is it disabled in BIOS, Intel has said they're looking into ways of ensuring it cannot be enabled at all, even if you disable the E-cores. Don't know why they would potentially want to do that but they've said it, so. Think I read it on Anandtech, may have been GN or Hardware Unboxed though.



One possible reason which has come to my mind: they may be enhancing yield by not scrapping parts with defects somewhere in the upper 256 bits of the AVX execution units.   Wouldn't look good if some random subset of the customers who disabled all E cores to enable AVX512 had it blow up in their face.

The way they're disabling it is a microcode update, BTW.


----------



## Colstan

So, over at the "other place" mother ship, the early results for the i9 mobile Alder Lake are trickling in. The short version is that, yes, the i9 does outperform the M1 Max... by about 4%. The tradeoff is that the i9 runs at 100W, spiking to 140W, while the Max tops out at 40W. On battery, the i9 gets six hours of video playback, while the Max gets 21. The MSI laptop tested is over an inch thick and weighs one-third more than the 16-inch MacBook Pro.

Is it just me, or did Intel intentionally fine-tune this particular i9 just to edge out the M1 Max in performance, forgetting all other considerations? A 4% performance improvement isn't something that the typical user is going to notice in day-to-day use, but heat, battery life, weight, energy bills, and fan noise are.

As a desktop user, most of those issues are irrelevant to me, except fan noise. I optimize for quiet computing because I have sensitive hearing, so Apple Silicon is a huge bonus, in that regard. Of course, if you are primarily a macOS user, Alder Lake and the x86 side are mainly a curiosity, but the horse race is an amusing side show.


----------



## Yoused

The i9 is the Xeon-ish top of the 12th-gen line. Most realistic notebooks will have an i5 or maybe an i3, which will probably have better stamina at the cost of less mega-juice, but be no real match for a '20 M1. If Apple produces a 20-core SoC with a ridiculous GPU (say, 64 cores) for the desktop, Alder Lake will be knee-deep in trying to keep up.


----------



## Andropov

Colstan said:


> Is it just me, or did Intel intentionally fine-tune this particular i9 just to edge out the M1 Max in performance, forgetting all other considerations? A 4% performance improvement isn't something that the typical user is going to notice in day-to-day use, but heat, battery life, weight, energy bills, and fan noise are.



Yeah, I think it's been pretty obvious that the i9 12900HK was specifically designed and tuned to beat the M1 Max, no matter what.


----------



## Cmaier

Looks like, taking battery sizes into account, Alder Lake essentially ties the M1 Max but needs around 2.5-3x the power to get there, with battery life around ¼ (not quite sure how the numbers work out for that, but we'll get more data soon). Add in the fan noise differential to boot.


----------



## casperes1996

Cmaier said:


> Looks like taking battery sizes into account that alder lake essentially ties M1 Max but needs around 2.5-3x the power to get there, and with battery life around ¼ (not quite sure how the numbers work for that, but we’ll get more data soon).  Add in fan noise differential to boot.



The only device I know of that's been tested with the 12900HK is that Raider laptop. According to Anandtech, the Nvidia 3080 Laptop GPU in it is not very good at power-gating when not in demand, which contributes to the power draw as well.
On the flip side, even though it has a 17" display, it's only 1080p, and while Anandtech normalises battery tests at 200 nits, the display can't go much above that anyway. So the MacBook Pro display probably draws quite a bit more power, at least when the dimming zones aren't mostly off. There's also the consideration that M1 Max power is often reported as package power, which includes the memory, while Intel's numbers would not.

So when talking about battery life, there are many more aspects than just the CPU at play. Spinning the fans harder isn't free either, I guess.


----------



## Yoused

If the 12900HK has a TDP of 85W (I assume that is some sort of median) and the RTX 3080 Ti has a TDP of 350W, does that mean the Raider could briefly be drawing close to half a kW under heavy loads?
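Napkin math, though note the 350W figure is the desktop 3080 Ti's; the laptop 3080 in the Raider is capped far lower (the 165W TGP below is my assumption, since MSI's configured limit isn't published in the review):

```python
# Worst-case wall-power guess for the Raider under combined load.
cpu_pl2_w = 110         # 12900HK turbo (PL2) per Anandtech
gpu_tgp_laptop_w = 165  # assumed laptop RTX 3080 TGP; desktop 3080 Ti is 350 W
print(cpu_pl2_w + gpu_tgp_laptop_w)  # prints 275
```

So with the laptop GPU it's more like 275W plus display and losses, not half a kW, though still a lot to dissipate.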


----------



## tomO2013

Frankly, if I was Intel I'd be embarrassed (and worried) by these results and what it took to get them. Performance per watt is terrible compared to the competition, and that's before we factor in that the M1 architecture is coming up on year two, with an M2 rumoured to drop this year. The multi-core win in particular required more cores, higher clocks, several times the power (drawn from the wall), and a lot more noise to achieve... a 4% lead on an artificial benchmark that likely hasn't been optimized for Apple Silicon (due to the nature of benchmarks).

I don't use Cinebench daily, or stock crabs chess benchmarks, or whatever the latest troll is over at the mothership. I like the ports, sound, screen, build materials, and quality of the MBP.
I don't value loud fans, I don't want to have to tether to a wall to access full performance, and I don't like go-faster neon lights. I'm not the target audience of the Raider laptop.

That being said, I do use Xcode, Final Cut Pro, Logic, and Capture One. In the first three apps my new M1 Max is multiple times faster, in quantifiable performance, than my 2019 i9 MBP. And when I compare (not strictly apples-to-apples) a similar video project export in Resolve on a Ryzen 5950X desktop with a 3080 against the same project in FCP on the M1 Max (LUTs applied, some colour grading, exposure adjustments, etc.), the MBP is quantifiably faster for what I care about. A lot faster in operation.

Even compared to a desktop Xeon Mac Pro, Xcode compile times are faster and Final Cut exports are significantly faster. So it wins for my use cases, on real-world applications that ARE optimized for Apple Silicon, before even accounting for the other benefits I care about: battery life, build quality, sound, screen quality, ports, noise, heat.

For anything else that runs better on WinTel I'll wait for a ThinkPad P1 (my laptop of choice on WinTel), sacrifice a little performance compared to the Raider, skip the go-fast lights, and get a more portable solution.


----------



## Colstan

Andropov said:


> Yeah, I think it's been pretty obvious that the i9 12900HK was specifically designed and tuned to beat the M1 Max, no matter what.



It's the Weyland-Yutani philosophy of CPU architecture. Special Order 386: "Priority one. Ensure defeat of M1 in synthetic benchmarks. All other considerations secondary. Design team expendable."


----------



## throAU

Cmaier said:


> I believe the secret sauce is "we made up a graph."



Or 1600 watt chiller unit.

Repurposed from their 28 core 5ghz stunt


----------



## Cmaier

throAU said:


> Or 1600 watt chiller unit.
> 
> Repurposed from their 28 core 5ghz stunt



That said, how can that graph be right given what we now know?


----------



## Andropov

Cmaier said:


> That said, how can that graph be right given what we now know?



The numbers are likely real, just unrealistic in their setup. I believe (since they didn't say anything other than that they compiled for M1 Max using Xcode 13) that they used the default compiler flags for LLVM on Xcode 13 (that is, _-Os_), which leaves a lot of the most performance-critical optimizations on the table. It's a reasonable default flag for an app, but not for a compute-intensive benchmark. And on the other hand they used ICC for the Intel compilations, which already introduces very aggressive optimizations, and then they (likely) hand-picked the best compile settings for each of the subtests in SPECint. That, along with crippling the M1 Max performance by deliberately using suboptimal compiler options gave them a massive advantage. Had they compiled for M1 Max using _-O3_ or something like it I suspect the difference would be much, much smaller, and the graph wouldn't exist.

So not made up, just comically bad cherry-picking.


----------



## Cmaier

Andropov said:


> The numbers are likely real, just unrealistic in their setup. I believe (since they didn't say anything other than that they compiled for M1 Max using Xcode 13) that they used the default compiler flags for LLVM on Xcode 13 (that is, _-Os_), which leaves a lot of the most performance-critical optimizations on the table. It's a reasonable default flag for an app, but not for a compute-intensive benchmark. And on the other hand they used ICC for the Intel compilations, which already introduces very aggressive optimizations, and then they (likely) hand-picked the best compile settings for each of the subtests in SPECint. That, along with crippling the M1 Max performance by deliberately using suboptimal compiler options gave them a massive advantage. Had they compiled for M1 Max using _-O3_ or something like it I suspect the difference would be much, much smaller, and the graph wouldn't exist.
> 
> So not made up, just comically bad cherry-picking.




Someone should redraw that graph now that we have information about actual power consumption. At 30W it must be awful.


----------



## Andropov

Cmaier said:


> Someone should redraw that graph now that we have information about actual power consumption. .  At 30W it must be awful.



Ah, I totally forgot about the power consumption part. I have no explanation for that. Maybe it is pure fantasy after all.


----------



## Yoused

Cmaier said:


> Someone should redraw that graph now that we have information about actual power consumption. .  At 30W it must be awful.



Maybe they have a sophisticated COP that can measure per-core draw and they are comparing the draw of one core to that of an entire M1?


----------



## casperes1996

Cmaier said:


> Someone should redraw that graph now that we have information about actual power consumption. .  At 30W it must be awful.




The only source I've really looked at for 12900HK performance so far is Anandtech, but they put the Raider laptop in "Battery saver" mode (or whatever that power profile in Windows is called) and noted that it locked the CPU's power draw at 30W, with no turbo above that. Just 30W. Unfortunately the only test they ran at 30W was Cinebench R20 (not R23, for some reason), making it hard to compare against Apple Silicon, since R20 is older and was never updated for ARM. But comparing it to itself works as a proxy: in performance mode, which is 85W sustained with a 110W turbo (PL2), it hits 6,869; in battery-saver mode at 30W it hits 3,494, roughly half. Intel might be able to undervolt and ship a configuration of E-cores only, but loads of them, that does really well in multi-threaded benchmarks at somewhat low wattage, if we allow them as many marketing tricks as possible to fudge the numbers into place?
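Interestingly, those two data points mean the efficiency actually improves a lot at the 30W cap, which is just the voltage/frequency curve doing its thing:

```python
# Cinebench R20 multicore points per watt at the two tested power limits.
perf_85w, watts_85 = 6869, 85   # performance mode (85 W sustained PL1)
perf_30w, watts_30 = 3494, 30   # battery-saver mode, hard 30 W cap
print(round(perf_85w / watts_85, 1))  # ~80.8 pts/W
print(round(perf_30w / watts_30, 1))  # ~116.5 pts/W
```

(This slightly flatters the 85W mode, since the turbo bursts above PL1 aren't counted; the efficiency gap is likely even larger.)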


----------



## Andropov

I missed this news back in March:








Intel Nukes Alder Lake's AVX-512 Support, Now Fuses It Off in Silicon
"Flipping the fuses ends the story"
www.tomshardware.com




Apparently Intel is now disabling AVX-512 (for real) on Alder Lake.


----------



## Cmaier

Andropov said:


> I missed this news back in March:
> 
> 
> 
> 
> 
> 
> 
> 
> Intel Nukes Alder Lake's AVX-512 Support, Now Fuses It Off in Silicon
> "Flipping the fuses ends the story"
> www.tomshardware.com
> 
> 
> 
> 
> 
> Apparently Intel is now disabling AVX-512 (for real) on Alder Lake.




Stay classy, Intel.


----------

