M2 Pro and M2 Max

But if the maxed-out Mac mini is as expensive as the Studio, as Cliff says, then the Studio is most likely the better choice.

You have the option to max out a Mini to US$100 less than a Studio with nearly identical specs. The Mini will be an 8+4 CPU with a 19-core GPU, vs. the Studio's 8+2 with a 24-core GPU, which works out to about 20% less GPU performance for the Mini. If your work does not need that extra 20% of GPU, the Mini could be a reasonable choice, and if you do not need a maxed-out machine it is an even better deal. The 12-core M2 Pro can probably outperform the 10-core M1 Max for CPU-bound work.
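For what it's worth, here's the arithmetic behind that "about 20%" figure, assuming GPU performance scales roughly with core count (clocks and bandwidth matter too, so treat this as a rough sketch):

```swift
import Foundation

// Rough estimate of the GPU gap, assuming performance scales with core count.
let miniGPUCores = 19.0    // maxed-out M2 Pro Mac mini
let studioGPUCores = 24.0  // base Mac Studio (M1 Max, 24-core GPU)

let shortfall = 1.0 - miniGPUCores / studioGPUCores
print(String(format: "The Mini has ~%.0f%% fewer GPU cores", shortfall * 100))
// The Mini has ~21% fewer GPU cores
```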

Of course, the Studio has the handy SD slot and more handy ports.
 
So Maxtech were the first ones to say that a new MBP would be released this month, and to their credit, that happened! Now Vadim is saying the same person who leaked the release date has told them that the "scaling issues" that affected the M1 Max (in some benchmarks/areas) have been substantially addressed.

It's Maxtech, so take it with a pinch of salt, but it will be interesting to see GPU benchmarks and what they reveal.
 
Don't forget that the UMA bandwidth is also doubled going from a Mn Pro SoC to a Mn Max SoC...
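To illustrate what that doubling means for memory-bound work, here's a sketch using the published figures of roughly 200 GB/s for the Pro and 400 GB/s for the Max (peak numbers; real kernels rarely sustain them, and the 8 GB working set below is hypothetical):

```swift
// Time to stream a buffer once at a given sustained bandwidth (an upper bound).
func streamTime(bytes: Double, gbPerSec: Double) -> Double {
    bytes / (gbPerSec * 1e9)
}

let workingSet = 8e9  // hypothetical 8 GB working set
let proMs = streamTime(bytes: workingSet, gbPerSec: 200) * 1000
let maxMs = streamTime(bytes: workingSet, gbPerSec: 400) * 1000
print("Pro: \(proMs) ms, Max: \(maxMs) ms")
// Pro: 40.0 ms, Max: 20.0 ms — a purely bandwidth-bound pass takes half the time
```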
 
Well if he’s got a new source this will be an interesting test of that source’s performance as well as the GPU’s ;) - though, if memory serves, Pro > Max wasn’t so bad, it was Max > Ultra where the real issue was.
 
Oh I agree. I place very little stock in any pundit these days, and Maxtech certainly has a history of manufactured outrage. It's also difficult to determine what counts as poor scaling and what is lack of software optimisation.

I think in this specific case, Geekbench is the primary offender. I think scaling from M1 to M1 Pro was nearly perfect: from ~20,000 to ~40,000 on the Metal test, IIRC. From M1 Pro to M1 Max was not so great: from ~40,000 to ~65,000. As you said, the Ultra didn't scale too well either. I do also realise that Geekbench has its own issues with optimisation for the M series. I know @leman has commented on these issues previously. Nonetheless, it'll be fun to see what, if anything, has changed.
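To put a number on "not so great": assuming those approximate Metal scores and the nominal GPU core counts (8 / 16 / 32), the scaling efficiency relative to the plain M1 works out roughly like this (a sketch only; individual Geekbench runs vary):

```swift
import Foundation

// Scaling efficiency vs. the base M1, using the approximate Metal scores above.
let chips: [(name: String, gpuCores: Double, metal: Double)] = [
    ("M1",     8,  20_000),
    ("M1 Pro", 16, 40_000),
    ("M1 Max", 32, 65_000),
]

let base = chips[0]
for chip in chips.dropFirst() {
    let expected = base.metal * (chip.gpuCores / base.gpuCores)  // perfect linear scaling
    let efficiency = chip.metal / expected
    print(String(format: "%@: %.0f%% of linear scaling", chip.name, efficiency * 100))
}
// M1 Pro: 100% of linear scaling
// M1 Max: 81% of linear scaling
```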
 
The Pro to Max issue was mainly with GB. I asked about that on the AnandTech review, and Andrei/Ryan dismissed it as a problem with clock spin-up time. Basically, individual GB tests were too short, so the GPU never had time to get up to full speed. Longer graphics tests showed much better scaling. The same was not true for the Max-to-Ultra step.
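That explanation is easy to sanity-check with a toy model: if the GPU ramps linearly from idle to full clock over some spin-up time, a short subtest only observes part of the steady-state throughput. The durations below are purely illustrative, not measured:

```swift
// Fraction of steady-state throughput a fixed-length test observes,
// assuming the clock ramps linearly from idle to full over rampSeconds.
func observedFraction(testSeconds: Double, rampSeconds: Double) -> Double {
    if testSeconds <= rampSeconds {
        return testSeconds / (2 * rampSeconds)  // still ramping when the test ends
    }
    // Average of the ramp portion (half speed) plus the full-speed remainder.
    return (rampSeconds / 2 + (testSeconds - rampSeconds)) / testSeconds
}

print(observedFraction(testSeconds: 2, rampSeconds: 0.5))   // 0.875 — short GB-style subtest
print(observedFraction(testSeconds: 60, rampSeconds: 0.5))  // ~0.996 — long sustained test
```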
 
High Yield released an analysis of the M2 Pro die shot.

[Attached image: M2ProShot.jpg]
 
What are the horizontal checkerboards, which seem to distinguish all the GPU cores (e.g., 16–18) from the one High Yield didn't label as a GPU core (lower left)?:

[Attached image: 1674090607233.png]
 
Tough to say for sure, because we aren’t zoomed in quite enough, but they could be fused multiply-add blocks, or they could be some sort of local registers. Some sort of “macro” structure that isn’t composed of standard cells.
 
Looks like a GFXBench benchmark leaked: https://gfxbench.com/device.jsp?benchmark=gfx50&os=OS X&api=metal&cpu-arch=ARM&hwtype=GPU&hwname=Apple M2 Max&did=112188307&D=Apple M2 Max

Versus the M1 Max:

+39.1% in Aztec Ruins Normal (695 -> 967 fps)
+41.4% in Aztec Ruins High (281 -> 398 fps)
+35.6% in Car Chase (433 -> 588 fps)
+54.5% in 1440p Manhattan 3.1.1 Offscreen (435 -> 673 fps)
+52.7% in Manhattan 3.1 (851 -> 1300 fps)
+48.6% in Manhattan (1165 -> 1731 fps)
+57.2% in T-Rex (2063 -> 3243 fps)
+35.0% in ALU 2 (1091 -> 1473 fps)
+12.9% in Driver Overhead 2 (389 -> 439 fps)

Quite impressive, if true.
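Those percentages are just the frame-rate ratios from the leaked listing; here's a quick spot check of two of them, using the numbers as posted:

```swift
import Foundation

// Percentage gain of the leaked M2 Max result over the M1 Max baseline.
func gain(m1Max: Double, m2Max: Double) -> Double {
    (m2Max / m1Max - 1) * 100
}

print(String(format: "Aztec Ruins Normal: +%.1f%%", gain(m1Max: 695, m2Max: 967)))  // +39.1%
print(String(format: "T-Rex: +%.1f%%", gain(m1Max: 2063, m2Max: 3243)))             // +57.2%
```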
 
I thought most people just used 4K Aztec now. Anyway, it's 178.6 for the M2 Max vs 132.4 for the M1 Max, an increase of about 35%.

Edit: there's now a compute score for the M2 Pro as well - 52,000.

 
Slightly lower single-core than expected. Multi-core is very good.

Not sure about the Geekbench compute score, though; Geekbench has its problems with high-end GPUs.
 
Here's what stands out to me, personally. The M2 Pro scores 52,782 in Geekbench Metal. That exactly matches the RX 580 inside my Blackmagic eGPU.

[Attached image: RX580.jpg]


Keep in mind that in order to keep this thing cool and quiet, Apple and Blackmagic had to design it to look like a Soviet nuclear reactor.

[Attached image: BlackmagiceGPU.jpg]
 
Are any of these native?
 
The leaked Jan 18 M2 Pro Mini GB result has the same base frequency (and thus essentially the same GB single-core score) as the base M2 (3.49 GHz, 1932). It remains to be seen whether they have increased the base frequency in the M2 Max (as suggested by the leaked result from December). We'll have to wait to see if anyone new (i.e., other than "iro") posts results for that.

[Attached image: 1674162561335.png]
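If single-core Geekbench tracks clock roughly linearly at constant IPC (a big assumption), the leaked score implies a score-per-GHz figure you could scale to any hypothetical M2 Max clock; the 3.68 GHz below is illustrative, not a leak:

```swift
import Foundation

// Project a single-core score from a hypothetical clock bump, assuming constant IPC.
let m2ProScore = 1932.0          // leaked M2 Pro Mini GB single-core
let m2ProClockGHz = 3.49

let scorePerGHz = m2ProScore / m2ProClockGHz
let hypotheticalClockGHz = 3.68  // purely hypothetical M2 Max base clock
let projected = scorePerGHz * hypotheticalClockGHz
print(String(format: "~%.0f single-core at %.2f GHz", projected, hypotheticalClockGHz))
// ~2037 single-core at 3.68 GHz
```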
 
Well, Afterburner 2.0 would not just be adding Afterburner to a machine that can already do better with what's built into its SoC. It would be an improved version that can do 7 gazillion streams of ProRes 4444XQ at 64K resolution. The numbers are arbitrary, but the point is that Afterburner could be scaled up to offload work from the SoC, freeing it up for more effects work or whatnot. They might not do this with the argument that it's unnecessary, but I'm sure there's some studio out there that would want it.
Afterburner is just an FPGA on a PCIe card, not custom silicon. FPGAs are extraordinarily inefficient compared to normal chips. You pay a huge price in area, power, and clock speed for being able to arbitrarily redefine what they are in the field.

To me, Afterburner feels like a one-and-done product. In the 2018-2019 time frame, Apple should have already had most or all of the work done on the video codec block that was going into Apple Silicon, and they wanted to make at least a limited version of it available on the 2019 Mac Pro. So they had one of their board designers whip up a simple FPGA card and one of their RTL designers port the codec to FPGA, and that was that.

The 7 gazillion streams isn't something they can really address with a similar Afterburner 2, not unless they want to use outrageously expensive FPGAs (unit price: tens of thousands of dollars). The only real scope for it is providing newer codecs to older computers, but Apple will probably just want to sell you a new computer.
 
It wouldn’t be insane for Apple to release dedicated cards for things like ML training, video encoding, etc. They won’t do it, but it wouldn’t be insane.
 