Apple: M1 vs. M2

Plus the computer field is probably more susceptible to having its nomenclature corrupted by how jargon is used in business, which I've noticed is the opposite of how we use it in the sciences. Our attitude is typically: This stuff is really hard, so let's develop a logical naming system that makes things as clear and simple as possible (not always achieved, but at least that's the goal).

By contrast, in business I suspect the thinking is: This stuff isn't that different from what everyone else has, so instead of making that clear, let's come up with confusing names that make it sound impressive and different (and that obscure responsibility in case anything goes wrong). For instance, consider the names physicists assign to quarks: up, down, charm, strange, top, and bottom. The business jargon equivalent for the up quark would probably be "leading-edge agile reconceptualized lightweight meta-particle".

Not to mention how some of the jargon is driven by marketing, which may or may not be in its right mind at the time:

But yes, there’s definitely a push to “be unique in a saturated marketplace” in the business side which can infest the engineering side. I see a similar approach of “how do we make our library/framework/etc stand out to other engineers?” at times when it’s not being driven by the OSS community.
 
Oh I fully agree with this rant. I've been trying to get into GPU computing, and the lack of good-quality basic information on how the architecture of a GPU works in practice is just bizarre. On the CPU front, there are several excellent books on the topic (most notably Hennessy and Patterson) that can get you started. On the GPU front, most if not all GPU books seem to revolve around writing software rather than exposing the actual architecture underneath. The few that mention architecture often do so only briefly, and you can't trust them to be up to date. Internet resources are often too brief and repeat the same basic concepts over and over, with minimal variations between them (which soon become the only interesting bits).

You can piece together some knowledge of the architecture after you go through enough resources, as engineers often touch on these details when discussing optimization. But then you have to add on top of all that the vendor differences in architecture, the different naming systems... it gets exhausting after a while.
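To give a concrete taste of the naming problem: the same hardware concepts go by different names at each vendor. Here's a rough cross-vendor glossary as a sketch (the mappings are approximate, drawn from vendor docs as I understand them, and the details shift between GPU generations):

```python
# Approximate mapping of roughly-equivalent GPU architecture terms.
# These correspondences are loose: the underlying hardware differs in
# lane width, scheduling, and memory behavior between vendors/generations.
GPU_TERMS = {
    "group of threads executed in lockstep": {
        "NVIDIA": "warp (32 threads)",
        "AMD":    "wavefront (64 on GCN, 32/64 on RDNA)",
        "Apple":  "SIMD-group (32 threads)",
    },
    "core-like execution block": {
        "NVIDIA": "Streaming Multiprocessor (SM)",
        "AMD":    "Compute Unit (CU), grouped into WGPs on RDNA",
        "Apple":  "GPU core",
    },
    "fast on-chip scratchpad shared by a block": {
        "NVIDIA": "shared memory",
        "AMD":    "Local Data Share (LDS)",
        "Apple":  "threadgroup memory",
    },
}

for concept, names in GPU_TERMS.items():
    print(concept)
    for vendor, term in names.items():
        print(f"  {vendor:7s} {term}")
```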
Seems this fellow had the same issue back in 2014:

 
What's next in the sequence? RETLA? ("refined extended three letter acronym")? Or ETLAv1?

RETLA sounds like a logical step if you follow the x86-64 register naming scheme (AX → EAX → RAX).
I typically use the term alphabet soup for anything longer than an ETLA.
 
Considering all of the unpleasant things happening in the news, I thought we could use some comedy, courtesy of Max Tech.



I appreciate Vadim's enthusiasm, but sometimes he just doesn't seem to understand complex issues. His less sensationalistic brother actually provides useful information, because he typically does the "bakeoffs", while Vadim is the hype man. They're the modern tech equivalent of James Bailey and P.T. Barnum: Bailey was the circus man, Barnum was the sideshow con artist.
 
Holy crap, it’s been a while since I’ve seen someone so loosely and superficially string together data points for a narrative, only to be so completely wrong.

Didn’t even make it through the whole video. That was just…wow.

You can tell this guy has *no* understanding of the logistics of any part of the component manufacturing industry.
 
At times I can tolerate this channel, but in this case I agree. They are desperate for clicks and are making videos about literally anything. Without Apple releasing anything they are in trouble, hence videos like this nonsense.
 
Darn it! I was hoping this was the video where he outed me as the source for Apple SoC fabrications - I do them one at a time in my living room with a pile of sand and a hammer...
 
Purported leaked M2 Max GB5 scores have appeared. Even if it's legit, this could just be one of Apple's many development prototypes, so it may not tell us anything about whether the expected spring 2023 Pro/Max MBPs will be on N4P or N3.

Having said that, and assuming it's not spoofed, the SC score suggests this machine is using the same 5 nm N4P process as the current M2. [Though I suppose it could be on N3, but running other workloads at the same time.] And the 96 GB RAM suggests it's using 12 GB RAM modules, which first appeared on the M2 Air and 13" MBP.


[Attached: screenshot of the purported M2 Max Geekbench 5 result]
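To spell out why the SC score is the tell here: to first order, GB5 single-core tracks clock speed, and TSMC has marketed N3 as roughly 10-15% faster than N5 at the same power. So if Apple had moved to N3 and spent the uplift on clocks, the single-core score should sit visibly above the M2's. A rough sanity check; the baseline score and the linear-scaling assumption are mine, not measured values:

```python
# Back-of-the-envelope check: does a leaked single-core score look more
# like N4P (same 5 nm family as the M2) or N3? Assumes GB5 SC scales
# roughly linearly with clock, and uses TSMC's marketed ~10-15% speed
# uplift for N3 over N5 at iso-power. The baseline below is illustrative.
M2_SC_BASELINE = 1900            # ballpark M2 GB5 single-core score
N3_UPLIFT_RANGE = (1.10, 1.15)   # TSMC marketing figures, N3 vs N5

lo, hi = (round(M2_SC_BASELINE * u) for u in N3_UPLIFT_RANGE)
print(f"Expected SC if the N3 uplift went into clocks: ~{lo}-{hi}")
print("A leaked SC close to the M2 baseline points to N4P instead.")
```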
 
If it is N3, I suppose it's also possible that they simply kept the same clock speed and turned the node gain entirely into energy savings, but I don't know how reasonable that would be as a design point. When moving nodes you can sometimes leave savings/performance on the table depending on how things are tuned, the cell libraries, etc. ... obviously it's a new node, so maybe?
 
Those numbers look awfully close to a carbon-copy of M1->M1Max, with the expected small gain in E-cores. I mean, I get that Apple has lost a lot of talent in the SoC department, but are they going to just be treading water like this?
 
The performance gains are quite reasonable, assuming this isn’t N3. If they don’t jump in performance when they get to N3, that would indicate a problem.
 
It's certainly possible, but my guess is that they wouldn't make that design decision with the Pro/Max chips, since they're more performance-oriented.
 
Again, if it is N3 (edit: and of course taking the values at face value, as you mentioned), they may be going with "everyone in the product stack gets the same single-core performance". But that would be odd for a node change, and as you and @Cmaier pointed out, you could push the multi-core performance harder, which is the whole point of the Pro/Max, and it would appear that they didn't? So who knows.
 
Hmmm. I have been quite bullish on Apple Silicon and I'm not changing my mind, but...

these are a little disappointing. I mean, they are fine, even good for laptops, but I'm not sure this is gonna cut it for desktops. Yes, Intel/AMD gobble power, but they are iterating and producing really good SC perf. Nvidia is delivering huge amounts of compute with the 4090. I don't know if Apple can match that, tbh. I just saw a test where the 4090 delivers 300+ fps 4K AV1 or HEVC encode. That's incredible. The M1 Max was until recently the best I'd found for this, and it gets around 100 fps. Nvidia has just demolished Apple's encoders. In my more doubtful moments I wonder if this isn't a repeat of 2013: Apple making the wrong GPU bet.
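Just to put that encoder gap in concrete terms, a trivial sketch using the rough figures above (both throughput numbers are my eyeballed estimates from tests I've seen, not verified benchmarks):

```python
# Rough comparison using the 4K encode figures quoted above (eyeballed
# estimates, not verified benchmarks).
RTX_4090_FPS = 300   # quoted 4K AV1/HEVC encode throughput
M1_MAX_FPS = 100     # quoted 4K encode throughput

frames = 60 * 60 * 30          # hypothetical one-hour clip at 30 fps
for name, fps in (("RTX 4090", RTX_4090_FPS), ("M1 Max", M1_MAX_FPS)):
    minutes = frames / fps / 60
    print(f"{name}: ~{minutes:.0f} min to encode, {fps / M1_MAX_FPS:.1f}x M1 Max")
```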

As a fan of desktops I'm a little concerned. We just aren't seeing any kind of progress. I was thinking earlier: since 2012 we've seen four reasonably powerful desktop Macs. I don't count the mini or the iMacs. The 2013 Mac Pro (trash can), the 2017 iMac Pro, the 2019 Mac Pro, and the 2022 Studio.

In 10 years, 4 desktops. That's not good enough. It really seems like there is no appetite for desktops within Apple, and no consistent endeavour. Pros want consistency and commitment to a platform. I just don't see that at the moment when it comes to desktops.

I would be interested if any GPU results leaked as well. Haven't seen any.

(Yes this is a rambling post, but I'm struggling to keep the faith and these numbers do not help)
 