M4 Mac Announcements

For me, the one key downside to the other current non-Apple ≈220 ppi externals (Samsung 5K, Dell 6K) is that they're matte only. That's great for photo/video work, but I don't like it for text because it reduces the sharpness and creates a sparkling snowfield effect on white backgrounds. [If you're ever in an Apple store, do a side-by-side comparison of the glossy and nanotextured ASDs.] Plus Apple's glossy coating is superb--as glossy coatings go, it's excellent at reducing reflections.

So I'm wondering what kind of surface coating this one has and, if it is glossy, whether it's as good as Apple's. I can't tell from the marketing material, which describes it as follows, though it's promising that they say it doesn't soften the image like regular matte surfaces:

"The inclusion of LuxPixel technology gives the panel an anti-glare, low-reflection (AGLR) coating for a paper-like screen effect. Unlike traditional matte panel finishes that soften the image, LuxPixel technology minimizes distracting reflections to ensure pixel-perfect colors and sharp image detail."

The price is appealing -- $799 retail, and it should eventually be available discounted, possibly in the $600 range.
 
Cross posting from the other place after diving into firmware a bit, and this is what I've got:
Code:
M4 Macs
  Mac16,1   MBP 14” M4
  Mac16,2   iMac 24” M4 (2-port)
  Mac16,3   iMac 24” M4 (4-port)
  Mac16,4   DNE
  Mac16,5   MBP 16” M4 Max (384b & 512b)
  Mac16,6   MBP 14” M4 Max (384b & 512b)
  Mac16,7   MBP 16” M4 Pro
  Mac16,8   MBP 14” M4 Pro
  Mac16,9   Mac Studio M4 Max
  Mac16,10  Mac mini M4
  Mac16,11  Mac mini M4 Pro
  Mac16,12  MBA 13” M4
  Mac16,13  MBA 15” M4

M5 Macs
  Mac17,1   iMac 30” M5
  Mac17,2   iMac 30” M5 Pro

It seems that both variants of the M4 Max use the same firmware (perhaps some sort of chop or fusing) and thus do not have different designations. Also, it's likely the M5 and M5 Pro are in a later testing phase, while the others in that lineup are not yet.

To be clear, I’m certain about those M4 designations, and the M5 designations are an educated guess.
 
Cross posting from the other place after diving into firmware a bit, and this is what I've got:
Code:
M4 Macs
  Mac16,1   MBP 14” M4
  Mac16,2   iMac 24” M4 (2-port)
  Mac16,3   iMac 24” M4 (4-port)
  Mac16,4   DNE
  Mac16,5   MBP 14” M4 Max (384b & 512b)
  Mac16,6   MBP 14” M4 Max (384b & 512b)
  Mac16,7   MBP 16” M4 Pro
  Mac16,8   MBP 14” M4 Pro
  Mac16,9   Mac Studio M4 Max
  Mac16,10  Mac mini M4
  Mac16,11  Mac mini M4 Pro
  Mac16,12  MBA 13” M4
  Mac16,13  MBA 15” M4

M5 Macs
  Mac17,1   iMac 30” M5
  Mac17,2   iMac 30” M5 Pro

It seems that both variants of the M4 Max use the same firmware (perhaps some sort of chop or fusing) and thus do not have different designations. Also, it's likely the M5 and M5 Pro are in a later testing phase, while the others in that lineup are not yet.

To be clear, I’m certain about those M4 designations, and the M5 designations are an educated guess.
One of those, the 16,5 or 16,6, is supposed to be the 16” MBP Max, right? You have the 14” twice.

Which piece of firmware got you this and how did you extract this information? Also, does the educated guess really show the iMac 30” is coming?! How strong is the evidence for that?
 
One of those, the 16,5 or 16,6, is supposed to be the 16” MBP Max, right? You have the 14” twice.

Which piece of firmware got you this and how did you extract this information? Also, does the educated guess really show the iMac 30” is coming?! How strong is the evidence for that?
I’m curious why the M5 models wouldn’t be the Studio or Pro? I am hoping they will give the Studio the iPad Pro treatment!
 
One of those, the 16,5 or 16,6, is supposed to be the 16” MBP Max, right? You have the 14” twice.

Which piece of firmware got you this and how did you extract this information? Also, does the educated guess really show the iMac 30” is coming?! How strong is the evidence for that?
Sorry, Mac16,5 is the 16”, I’ll edit my post 🤦‍♂️. The iMac educated guess is pretty weak, and I can’t get into evidence.
 
For me, the one key downside to the other current non-Apple ≈220 ppi externals (Samsung 5K, Dell 6K) is that they're matte only. That's great for photo/video work, but I don't like it for text because it reduces the sharpness and creates a sparkling snowfield effect on white backgrounds. [If you're ever in an Apple store, do a side-by-side comparison of the glossy and nanotextured ASDs.] Plus Apple's glossy coating is superb--as glossy coatings go, it's excellent at reducing reflections.

So I'm wondering what kind of surface coating this one has and, if it is glossy, whether it's as good as Apple's. I can't tell from the marketing material, which describes it as follows, though it's promising that they say it doesn't soften the image like regular matte surfaces:

"The inclusion of LuxPixel technology gives the panel an anti-glare, low-reflection (AGLR) coating for a paper-like screen effect. Unlike traditional matte panel finishes that soften the image, LuxPixel technology minimizes distracting reflections to ensure pixel-perfect colors and sharp image detail."

The price is appealing -- $799 retail, and it should eventually be available discounted, possibly in the $600 range.
Here's a review that includes a comment on the AR coating. I run three displays, and while it looks like it wouldn't work for me as my main monitor, it would probably be fine as one of my two side monitors:

https://www.kitguru.net/peripherals...a27jcv-review-5k-60hz-professional-monitor/8/

"The other niggle is perhaps more subjective, but it's worth pointing out the PA27JCV uses what ASUS calls ‘LuxPixel AGLR (Anti-Glare, Low Reflection)'. In other words, it's a fairly strong matte coating, and while ASUS claims ‘the user only sees accurate colors and sharp details' thanks to the LuxPixel tech, some coating grain is still visible. Compared against a 5K Apple iMac (2019), I did find myself preferring the appearance of text on the Apple display, but everyone has their own preference when it comes to gloss vs matte."
 
We now have a range of M4 Max results posted for GB6, for both the 14" and 16" MBP:

[Screenshots: Geekbench 6 results for the 14” and 16” MBP with M4 Max]
 
I am curious about the GPU scores for the Max. These GB scores put the M4 Max slightly above the laptop 4090. I have a suspicion that the 4090 is leaving performance on the table without Cuda. I have no proof for this. Does anyone have any insight?
 
I am curious about the GPU scores for the Max. These GB scores put the M4 Max slightly above the laptop 4090. I have a suspicion that the 4090 is leaving performance on the table without Cuda. I have no proof for this. Does anyone have any insight?
Are you perhaps looking at the Metal scores? Because to compare the NVIDIA and Apple Silicon GPU scores on GB, you'd need to look at the only benchmark they run for both, which is OpenCL. And there the 4090 laptop is 181,041/116,293 ≈ 1.6x faster than the M4 Max (though that's based on just one OpenCL result for the M4 Max; plus we don't know which M4 Max that is).

[Screenshot: Geekbench OpenCL results]
 
Are you perhaps looking at the Metal scores? Because to compare the NVIDIA and M4's GPU scores, you'd need to look at the only benchmark they run for both, which is OpenCL. And there the 4090 laptop is 181,041/116,293 ≈ 1.6x faster than the M4 Max (though that's based on just one OpenCL result for the M4 Max; plus we don't know which M4 Max that is).

This has been discussed before, and I believe there are different opinions. According to JPoole, there is no reason to use the same API: these are calculations, and whichever API is used, the results can be compared. The caveat is that not all APIs have the same features, and some drivers are less optimized than others. That said, we are comparing how fast a given GPU within a device completes the test.
 
Are you perhaps looking at the Metal scores? Because to compare the NVIDIA and Apple Silicon GPU scores on GB, you'd need to look at the only benchmark they run for both, which is OpenCL.

I disagree. OpenCL is officially deprecated on macOS, and has been for years. OpenCL and Vulkan are both officially supported by Nvidia, so it is their responsibility to ensure that the code works well.
 
I’m blown away by these CPU numbers. In MC both the Pro and Max handily outpace my M1 Ultra, and even the basic M4 gets reasonably close. I’m more than excited to add a MacBook Pro (or hell, even an Air if it offers enough RAM) at some point, be it an M4 or one of the new iterations in the next year or two.

That said I’m more than happy I got the Studio when I did.
 
This has been discussed before, and I believe there are different opinions. According to JPoole, there is no reason to use the same API: these are calculations, and whichever API is used, the results can be compared. The caveat is that not all APIs have the same features, and some drivers are less optimized than others. That said, we are comparing how fast a given GPU within a device completes the test.

I disagree. OpenCL is officially deprecated on macOS, and has been for years. OpenCL and Vulkan are both officially supported by Nvidia, so it is their responsibility to ensure that the code works well.

Ah, because of the score disparities, I wasn't aware the API scores were supposed to be comparable. For instance, the 4090 desktop GPU's Vulkan and OpenCL scores are 262,900 and 317,130 respectively (OpenCL is 21% higher).

But NVIDIA's corporate responsibilities aside, if you want the benchmark to reflect the comparative real-world performance capabilities of AS vs NVIDIA vs AMD GPUs, shouldn't it be comparing Metal on AS, CUDA on NVIDIA, and whatever the equivalent is on AMD (would that be ROCm)?

For NVIDIA, I think it comes down to what percentage of GPU-bottlenecked apps, when run on an NVIDIA GPU, would be running under OpenCL or Vulkan rather than CUDA. And for AMD, what percentage would be running under Vulkan rather than ROCm (?).
 
Ah, because of the score disparities, I wasn't aware the API scores were supposed to be comparable. For instance, the 4090 desktop GPU's Vulkan and OpenCL scores are 262,900 and 317,130 respectively (OpenCL is 21% higher).

But NVIDIA's corporate responsibilities aside, if you want the benchmark to reflect the comparative real-world performance capabilities of AS vs NVIDIA vs AMD GPUs, shouldn't it be comparing Metal on AS, CUDA on NVIDIA, and whatever the equivalent is on AMD (would that be ROCm)?

For NVIDIA, I think it comes down to what percentage of GPU-bottlenecked apps, when run on an NVIDIA GPU, would be running under OpenCL or Vulkan rather than CUDA. And for AMD, what percentage would be running under Vulkan rather than ROCm (?).
It’s interesting because intuitively I think using Cuda makes the most sense in terms of knowing the Nvidia card’s peak performance. John Poole was asked and stated that only 3% of results in GB 5 were Cuda and that its adoption in consumer applications is low. I don’t know the truth of that, but given GB compute targets consumer applications, it makes sense I suppose.
 
Which piece of firmware got you this and how did you extract this information?
I thought I wrote a post about macOS/iOS firmware analysis a year or two ago. Here's the gist:
Edit: I oversimplified my initial post; it's better now 🙂
Edit: Fixed incorrect ipsw link 🙈
Edit: Nits
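For a rough illustration only (not necessarily what I actually did, and the file path is just a placeholder), here's a minimal sketch of the general idea: scan an extracted firmware payload for MacXX,Y identifier strings.
Code:
# Minimal sketch: scan an extracted firmware payload for Mac model identifiers.
# The path is hypothetical; point it at whatever binary you've pulled apart.
import re
import sys
from collections import Counter

PATTERN = re.compile(rb"Mac1[4-9],\d{1,2}")  # e.g. Mac16,10, Mac17,2

def scan(path: str) -> Counter:
    counts = Counter()
    with open(path, "rb") as f:
        data = f.read()
    for match in PATTERN.finditer(data):
        counts[match.group().decode("ascii")] += 1
    return counts

if __name__ == "__main__":
    # Usage: python scan_firmware.py /path/to/extracted/firmware.bin
    for ident, n in sorted(scan(sys.argv[1]).items()):
        print(f"{ident:<12} {n} occurrence(s)")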
 
It’s interesting because intuitively I think using Cuda makes the most sense in terms of knowing the Nvidia card’s peak performance. John Poole was asked and stated that only 3% of results in GB 5 were Cuda and that its adoption in consumer applications is low. I don’t know the truth of that, but given GB compute targets consumer applications, it makes sense I suppose.
From what I found online, in order to run GB5 with CUDA, the user needs to compile it for CUDA by selecting the CUDA flag. So that just means only 3% of end users do this. That wouldn't have anything to do with the percent of GPU-intensive consumer applications whose performance would be significantly improved by CUDA, and that are supplied as precompiled GPU-universal binaries that detect the type of GPU, and install the GPU-appropriate binary (CUDA for NVIDIA GPUs, etc.).

Or let me flip this on its head: If, for most GPU-intensive consumer applications, there isn't much performance difference between running on Metal, Vulkan, or OpenCL, then a benchmark whose performance is in fact sensitive to this (GB6) might not be a good one for predicting GPU performance on consumer applications.

I.e., if the API really affects performance for GPU-intensive consumer applications, then you'd expect devs to provide GPU-universal precompiled binaries that detect the GPU and install the appropriate binary. If so, then GB6 should do the same, to reflect that. Or, if the API doesn't significantly affect performance for GPU-intensive consumer applications, then, by being API-sensitive, GB6 is failing in its goal of representing GPU performance on consumer apps.
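To make "GPU-universal" concrete, here's a rough sketch (purely illustrative -- I'm not claiming this is how GB6 or any actual app does it) of the kind of runtime backend selection I have in mind:
Code:
# Rough sketch of runtime backend selection ("GPU-universal" dispatch).
# The returned strings are placeholders; a real app would load the
# corresponding Metal/CUDA/Vulkan/OpenCL code path at this point.
import platform
import shutil

def pick_gpu_backend() -> str:
    if platform.system() == "Darwin":
        return "metal"              # Apple Silicon (or AMD) on macOS
    if shutil.which("nvidia-smi"):  # NVIDIA driver tooling present
        return "cuda"
    return "vulkan"                 # generic fallback (or OpenCL on older stacks)

if __name__ == "__main__":
    print(f"Selected backend: {pick_gpu_backend()}")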
 
From what I found online, in order to run GB5 with CUDA, the user needs to compile it for CUDA by selecting the CUDA flag. So that just means only 3% of end users do this. That wouldn't have anything to do with the percent of GPU-intensive consumer applications whose performance would be significantly improved by CUDA, and that are supplied as precompiled GPU-universal binaries that detect the type of GPU, and install the GPU-appropriate binary (CUDA for NVIDIA GPUs, etc.).
I think that’s a guess, and JFPoole is in a better position to know than anyone here. The same selection is necessary for Metal and Vulkan, as OpenCL is the default afaik, and they see much higher than 3% use.

He isn’t saying the 3% is related to the number of applications that support Cuda. He is stating that he (or his employees) surveyed the consumer app landscape and found very little use of Cuda. I don’t know if it’s true or not. I am just reporting his reasoning for omitting Cuda.
Or let me flip this on its head: If, for most GPU-intensive consumer applications, there isn't much performance difference between running on Metal, Vulkan, or OpenCL, then a benchmark whose performance is in fact sensitive to this (GB6) might not be a good one for predicting GPU performance on consumer applications.
I’m not sure I follow. I think there may well be a difference in performance between Cuda and OpenCL/Vulkan. However, if Cuda isn’t used in consumer applications, then benchmarks which use it aren’t giving an accurate picture. The makers of GB aren’t claiming their compute benchmark is a tool to determine the performance of GPUs in every context, just in consumer applications like Adobe apps, Cinema 4D, etc.

I can’t speak to whether GB’s compute benchmark is good or bad. I am just explaining their reasoning. I think if we accept their premise, the logic is sound. It may be, however, that their premise isn’t correct.
I.e., if the API really affects performance for GPU-intensive consumer applications, then you'd expect devs to provide GPU-universal precompiled binaries that detect the GPU and install the appropriate binary. If so, then GB6 should do the same, to reflect that. Or, if the API doesn't significantly affect performance for GPU-intensive consumer applications, then, by being API-sensitive, GB6 is failing in its goal of representing GPU performance on consumer apps.
I don’t know what a “GPU-universal precompiled binary” is, but if it’s the idea that within each application there exists the option to use either Cuda/OpenCL/Vulkan, then it sounds like a recipe for a huge amount of work. Very few devs would do that, surely? The options exist in GB because OpenCL used to be the main compute API used in apps doing compute. Then Vulkan and Metal arrived. Keeping OpenCL as a choice meant people would have an idea of how their older apps would perform, and Vulkan/Metal of how their newer apps would perform.

In any case, this is only tangentially related to the issue of whether we can compare scores using different APIs. I haven’t seen a convincing argument for why we couldn’t. Certainly, if we can compare CPU results, we can compare GPU results.
 
I don’t know what a “GPU-universal precompiled binary” is, but if it’s the idea that within each application there exists the option to use either Cuda/OpenCL/Vulkan, then it sounds like a recipe for a huge amount of work. Very few devs would do that, surely? The options exist in GB because OpenCL used to be the main compute API used in apps doing compute. Then Vulkan and Metal arrived. Keeping OpenCL as a choice meant people would have an idea of how their older apps would perform, and Vulkan/Metal of how their newer apps would perform.
Yes, that is what I mean.

I'm thinking out loud here, so bear with me, but this is what I'm trying to say:

Every consumer app that has a GUI (which is nearly every consumer app) uses the GPU. E.g., Excel and Word. [At least I assume they use the GPU, rather than sending their renders directly to the display engine.]

But it's only a tiny fraction of consumer apps whose performance is significantly GPU-compute-limited, i.e., where the GPU compute performance is noticeable to the end user.

Those are the only ones for which GPU compute performance matters, and for which the GB6 GPU compute benchmark is germane. So when Poole is trying to decide whether GB6 should incorporate CUDA, the relevant percentage isn't the percent of all apps that have adopted CUDA (which should be, as Poole said, quite small).

Instead, you want to take just the subset of apps whose performance the GB6 GPU compute benchmark is designed to predict, namely the tiny subset of consumer apps whose performance is significantly GPU-compute-limited. Then, from that subset, you want to ask what the percentage of CUDA adoption is.

Fabricating arbitrary numbers to make this more concrete: Let's suppose 4% of consumer apps are significantly affected by GPU compute performance and, of those, 50% have adopted CUDA. That means, of course, that 50% x 4% = 2%* of consumer apps have adopted CUDA. [*Well, 2% plus the percentage of apps not affected by GPU compute that incorporate CUDA, which I'm assuming is zero or close to it.] In assessing whether GB6 should adopt CUDA, the relevant figure isn't the 2% of consumer apps that have adopted CUDA. It would be the 50% of consumer apps for which GPU compute is important that have adopted CUDA.
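Putting those made-up numbers into a few lines, just to keep the two statistics separate (the figures are the same fabricated ones as above):
Code:
# Toy numbers from the paragraph above (fabricated for illustration).
gpu_limited_share = 0.04   # consumer apps that are significantly GPU-compute-limited
cuda_share_within = 0.50   # of those, the share that have adopted CUDA

overall_cuda_share = gpu_limited_share * cuda_share_within  # 0.02 -> the "2% of all apps" figure
relevant_cuda_share = cuda_share_within                     # 0.50 -> the figure that matters for GB6

print(f"CUDA adoption across all consumer apps: {overall_cuda_share:.0%}")
print(f"CUDA adoption among GPU-compute-limited apps: {relevant_cuda_share:.0%}")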

In summary, when Poole looks at the overall % of consumer apps that have adopted CUDA ("CUDA adoption in consumer applications is quite low"), he's looking at the wrong statistic.
 