M3 core counts and performance

From this German review on YouTube, some non-specific Blender scores:

[Attached screenshots: Blender benchmark scores]

I’m assuming this is the laptop 4070?

From the other place, someone posted some Octane X scores
[Attached screenshots: Octane X scores]

The M3 Ultra is gonna be interesting if the scaling holds up!
 
I’d be interested in people’s feedback and ideas on the game results in the Geekerwan video. They showed Baldur’s Gate 3 on a few Macs vs. some PCs. At 4K, the M3 Max gets 40 fps, the 4060 laptop gets 36 fps and the 4080 laptop gets 71 fps. At 1440p, the M3 Max gets 77 fps, the 4060 gets 56 and the 4080 gets 78. At 1080p, the M3 Max gets 101 fps, the 4060 gets 66 fps and the 4080 gets 86 fps.

What could be the reasons for the M3 Max to beat the 4080 at 1080p, equal it at 1440p and trail it at 4K?

Is it bound by some component? Lacking the raw power of the 4080? Optimisations?
 
We’ll have to wait and see, but it’s interesting that under “High Power Mode” the M3 Max MacBook Pro’s fans can be heard (as per the reviews above). IIRC, the difference between normal mode and high power on the M1 generation was so minimal that I wondered why it was a feature at all. It’s possible that the M3 SoC can be clocked higher but Apple doesn’t want to compromise the silent operation, so there’s a chance we could see higher clocks on desktop. Higher power consumption on desktop would also explain the huge heatsink of the Mac Studio (which seemed a bit overkill for the power consumption of the M1 Max / Ultra).
Maybe the variance in Cinebench 2024 GPU results relates to high power mode?
Matthew Moniz mentions using high power mode in his video
Yeah, according to Apple, that's what High Power Mode does: it allows the fans to run faster. So maybe the M1/M2 Max's TDP wasn't high enough to benefit significantly from the faster fans*, while the M3 Max's is.

I recall someone else writing it also minimizes system background tasks (Spotlight indexing comes to mind) but I don't know if that's the case.

*Or maybe High Power Mode can benefit the M1/M2, but it takes a more extreme load than those reviewers used in their testing: something that simultaneously stresses the CPU and GPU.
 
I’d be interested in people’s feedback and ideas on the game results in the Geekerwan video. They showed Baldur’s Gate 3 on a few Macs vs. some PCs. At 4K, the M3 Max gets 40 fps, the 4060 laptop gets 36 fps and the 4080 laptop gets 71 fps. At 1440p, the M3 Max gets 77 fps, the 4060 gets 56 and the 4080 gets 78. At 1080p, the M3 Max gets 101 fps, the 4060 gets 66 fps and the 4080 gets 86 fps.

What could be the reasons for the M3 Max to beat the 4080 at 1080p, equal it at 1440p and trail it at 4K?

Is it bound by some component? Lacking the raw power of the 4080? Optimisations?
Maybe memory bandwidth? The entire SoC shares the 400GB/s bus, while the RTX 4080 mobile gets 432GB/s to itself 🤔
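For a rough sense of scale, here’s a back-of-the-envelope Python sketch of how raw framebuffer traffic grows with resolution. The bytes-per-pixel and overdraw figures are pure assumptions for illustration; real engines (with G-buffers, compression and caching) will look very different:

```python
# Back-of-the-envelope framebuffer traffic per frame. The constants are
# illustrative assumptions, not measurements of any real game.
RESOLUTIONS = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

BYTES_PER_PIXEL = 4  # assumed RGBA8 color target
OVERDRAW = 4         # assumed: each pixel written ~4x per frame

for name, (w, h) in RESOLUTIONS.items():
    mb_per_frame = w * h * BYTES_PER_PIXEL * OVERDRAW / 1e6
    print(f"{name}: ~{mb_per_frame:.0f} MB of raw color writes per frame")

# 4K has exactly 4x the pixels of 1080p, so any purely bandwidth-bound
# work scales its memory traffic by the same factor.
```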
 
At 1080p the GPU is saturated and it becomes a CPU test.

At 4K it's GPU-bound and the CPU matters less. What this shows is that Apple's CPUs are among the best in the industry for gaming, but its GPUs lag behind in performance.
What does "GPU is saturated" mean? I.e., why would the GPU be saturated at the lowest resolution rather than the highest? What would prevent NVIDA from making full use of the GPU at 1080p?

Also, 4K requires 4x the bandwidth of 1080p, yet none of the 4K frame rates drop to 25% of the 1080p ones, indicating that compression is being applied at the higher resolutions, and the compression NVIDIA uses is likely different from Apple's.
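A quick Python sanity check using the frame rates quoted above:

```python
# Frame rates quoted from the Geekerwan Baldur's Gate 3 comparison above.
fps = {
    "M3 Max":      {"1080p": 101, "4K": 40},
    "4060 laptop": {"1080p": 66,  "4K": 36},
    "4080 laptop": {"1080p": 86,  "4K": 71},
}

# If the GPUs were purely limited by a 4x increase in pixel traffic, the
# 4K frame rate would fall to ~25% of the 1080p one. None of them do:
for gpu, r in fps.items():
    print(f"{gpu}: 4K runs at {r['4K'] / r['1080p']:.0%} of 1080p")
# M3 Max: 40%, 4060 laptop: 55%, 4080 laptop: 83%
```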
 
Apple tells The Verge that there will be no 27" iMac. Which either means they are moving entirely away from large high-end AIOs, or that when they eventually release a large iMac it will be >27".

My guess is they wouldn't have said that if they planned to do the latter within a year or so, so those wanting a large screen with AS should assume a large iMac is not in the cards for at least a while (if at all).

 
Good point. Not to my knowledge.

I'm a little frustrated that no one has done any power measurements of any substance. I know Geekerwan did a multi-core measurement of 50 watts for the CPU, but annoyingly they didn’t talk about the single-core power.
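For what it’s worth, anyone with an M3 machine can pull rough power numbers with macOS’s built-in powermetrics tool. A minimal sketch, assuming the flags behave as they do on recent macOS; the exact output fields vary by version, so this just filters for power lines (run your single-core workload alongside it):

```python
# Minimal sketch: sample CPU/GPU power on Apple Silicon via the built-in
# `powermetrics` tool (needs root). Output fields vary by macOS version,
# so we simply filter for anything that looks like a power reading.
import subprocess

result = subprocess.run(
    ["sudo", "powermetrics",
     "--samplers", "cpu_power",  # CPU/GPU package power sampler
     "-i", "1000",               # one sample per second
     "-n", "10"],                # ten samples, then exit
    capture_output=True, text=True, check=True,
)

for line in result.stdout.splitlines():
    if "Power" in line:  # e.g. "CPU Power: ... mW" style lines
        print(line.strip())
```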
 
What would prevent NVIDIA from making full use of the GPU at 1080p?

It’s just that the 4080 is able to finish its work so fast at 1080p that it’s waiting for instructions and data from the CPU to push out the next frame. It’s able to finish its massively parallel work faster than the CPU is able to finish its complex serial work; the same is true for the Max, which is why gaming at lower resolutions tends to test CPU performance rather than GPU performance. And since the Apple CPU is better, the frame rates get better relative to the 4080 machine.
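To make that concrete, here’s a toy frame-time model: each frame costs roughly max(CPU time, GPU time), and only the GPU cost scales with pixel count. All the millisecond figures are invented for illustration, not measurements of any real machine:

```python
# Toy frame-time model: the frame rate is set by whichever of the CPU or
# GPU finishes last, and only the GPU cost scales with pixel count.
def fps(cpu_ms: float, gpu_ms_1080p: float, pixel_scale: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms_1080p * pixel_scale)

# Two hypothetical machines (made-up costs): faster CPU + smaller GPU
# vs slower CPU + bigger GPU.
machines = {"fast-CPU machine": (9.0, 6.5), "big-GPU machine": (11.5, 4.0)}

for res, scale in [("1080p", 1.0), ("1440p", 16 / 9), ("4K", 4.0)]:
    for name, (cpu_ms, gpu_ms) in machines.items():
        print(f"{res:>5} | {name}: {fps(cpu_ms, gpu_ms, scale):5.1f} fps")

# The fast-CPU machine wins at 1080p, ties at 1440p, and loses at 4K:
# the same crossover pattern as the Geekerwan numbers.
```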

I remember when the M1 came out, a Twitter idiot, a twit if you will, tried to convince everyone that the M1 CPU might be good in “toy benchmarks” but would be terrible for gaming relative to x86, because in gaming frequency was all-important. Even at the time it was obvious that wasn’t true: in CPU-bound scenarios, Apple’s ARM-based SoCs with lower clocks gained on their x86 rivals.

Having said that, your and @Aaronage’s proposals about bandwidth and compression are likely part of the reason why the 4080 pulls ahead at 4K (the other being that it is simply a bigger, more power-hungry device all around, though you do need bandwidth to support that).
 
Good point. Not to my knowledge.

I'm a little frustrated that no one has done any power measurements of any substance. I know Geekerwan did a multi-core measurement of 50 watts for the CPU, but annoyingly they didn’t talk about the single-core power.
I miss Andrei/Ian at AnandTech 😞 maybe we’ll get more in-depth reviews over time.
 
I miss Andrei/Ian at AnandTech 😞 maybe we’ll get more in-depth reviews over time.
Funny you should say that. I was just thinking that someone with technical knowledge and a copy of SPEC would be greatly appreciated on YouTube, with in-depth reviews. Geekerwan are good, but I do get the feeling they rush their reviews lately.
 
What I find odd is that the structures I interpret to be NPUs look different for each M3 processor.

[Attached image: annotated die shots of the three M3 chips]

Outlined in green: respectively lower left, lower right, upper left. Now, this may simply be an effect of different exposures, but they look odd. I only see eight elements, whereas on the M1/M2 you could clearly make out 16, so each block must be paired up with the adjacent one. I wonder if there really is a difference in performance between models.
 
Funny you should say that. I was just thinking that someone with technical knowledge and a copy of SPEC would be greatly appreciated on YouTube, with in-depth reviews. Geekerwan are good, but I do get the feeling they rush their reviews lately.
I would love to give it a shot someday (if I ever get enough free time, sigh lol).
There’s a huge gap between influencer tech content (a lot of what we’ve seen today) and the hardcore analysis AnandTech used to provide. I think there’s an appetite for something in the middle.
The problem with influencer stuff is that they’re mostly incurious. They don’t really care about the why or how, they just run the same basic tests and report if the line goes up. It leads to some wonky testing and conclusions that aren’t particularly helpful. It drives me a bit crazy at times.
 
I would love to give it a shot someday (if I ever get enough free time, sigh lol).
There’s a huge gap between influencer tech content (a lot of what we’ve seen today) and the hardcore analysis AnandTech used to provide. I think there’s an appetite for something in the middle.
The problem with influencer stuff is that they’re mostly incurious. They don’t really care about the why or how, they just run the same basic tests and report if the line goes up. It leads to some wonky testing and conclusions that aren’t particularly helpful. It drives me a bit crazy at times.
To their credit, MaxTech are curious! They just don't have enough experience/domain knowledge to match their curiosity. But hey everyone starts somewhere, and maybe they'll accrue enough over time. Some of the people at LTT are better (e.g. Antony) and they clearly have the ambition to be more (LTT labs). But of course we have all the other issues there. Maybe they'll get better too. Gamers Nexus has the technical knowledge but uhh ... I wouldn't count on them reviewing Mac hardware anytime soon given their ... viewpoints about Apple and Macs.

Once Vulkan is up and running on Asahi Linux, one test that might be fun and actually kinda worthwhile would be to compare Windows games/benchmarks under Proton on Asahi vs other Linux laptops/desktops. True, the Apple GPU is optimized for the Metal API, and of course there's the x86-to-ARM translation on the Mac, so I'm not claiming it's a perfect apples-to-apples comparison, but both would be running under emulation, both under DX-to-Vulkan. Maybe when good Qualcomm laptops come out it would be an even worthier test, since they'll be ARM, and for the GPUs one could test against both the Qualcomm iGPU and, for some models, dGPUs. As per @Jimmyjames's wishes, you could absolutely run SPEC on Linux for both even now and eliminate differences in compiler and OS. Just something I thought would be at least fun to see and still more than a little informative.
 