M3 core counts and performance

Saw this recently: SPECint 2017 results for CPUs, with the M3 and earlier tested. I'd love to see the M4 tested too.
https://blog.hjc.im/spec-cpu-2017
[attached: SPEC CPU 2017 result charts]
 
Saw this recently: SPECint 2017 results for CPUs, with the M3 and earlier tested. I'd love to see the M4 tested too.
https://blog.hjc.im/spec-cpu-2017

The operating system is to be unified as natively booted Debian 12
That'll be really interesting when they get that working, so everything is on an even playing field OS-wise. I wonder why just SPECint and not the FP suite too? Too much work?

In the last column of the second chart, I thought the M3 Pro E-core might come out a bit better, but it is perf/clock rather than perf/W.
 
Saw this recently: SPECint 2017 results for CPUs, with the M3 and earlier tested. I'd love to see the M4 tested too.
https://blog.hjc.im/spec-cpu-2017

Oh, SPEC results for M4 are already there. Geekerwan posted a video (for now in Chinese). It is being discussed at Anandtech forums.

 
Oh, SPEC results for M4 are already there. Geekerwan posted a video (for now in Chinese). It is being discussed at Anandtech forums.

I noticed in that thread that someone also linked to an old rant about IPC that makes mine look ... moderate - i.e. that IPC should only be measured for precise sections of code, never mind specific workloads. Probably true. He also talks about how some algorithms are indeed intrinsically low IPC; you simply aren't going to get around that no matter how smart your silicon designers are. @Jimmyjames you might find this interesting.
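
To illustrate the "intrinsically low IPC" point: anything dominated by a serial dependency chain, like pointer chasing, can't be sped up by a wider core because each step needs the result of the previous one. A toy sketch (Python obviously isn't what you'd benchmark, and the data is made up; the dependency structure is the point):

```python
import random

# Build a random single cycle over N slots: next_idx[i] is the slot visited after i.
N = 1_000_000
order = list(range(N))
random.shuffle(order)
next_idx = [0] * N
for a, b in zip(order, order[1:] + order[:1]):
    next_idx[a] = b

# Traversal: every lookup depends on the previous result, so the loads serialize.
# No amount of decode width, ROB size, or execution ports can overlap them,
# which is why this kind of code sits at low IPC on any core.
i = 0
for _ in range(N):
    i = next_idx[i]
```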


What I find really galling is when a blue-check tech enthusiast posted a quote tweet of the original tweet @Jimmyjames found, preemptively upbraiding YouTube tech personalities for mindlessly posting that Apple's M4 was going to be good because of the GB6 score without understanding why, and claiming that the tweet @Jimmyjames found proved Apple were actually at a standstill. But they don't actually understand the concepts at play either. There is no singular IPC value. Even what @leman and I did is more akin to a clock-normalized single-threaded comparison, though at least we tried to use the available data we had to make it as specific to the workload as possible - again, we have the data we have.*
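
For anyone who wants the arithmetic spelled out, here's roughly what a clock-normalized comparison means; the scores and clocks below are placeholders, not the actual M3/M4 numbers:

```python
# Clock-normalized ("iso-clock") single-threaded comparison.
# Placeholder inputs - substitute real scores and peak clocks.
old_score, old_ghz = 100.0, 4.05   # previous-gen ST score and peak clock
new_score, new_ghz = 120.0, 4.40   # new-gen ST score and peak clock

raw_gain = new_score / old_score - 1.0                                 # what headlines quote
iso_clock_gain = (new_score / new_ghz) / (old_score / old_ghz) - 1.0   # gain per GHz

print(f"raw ST gain:       {raw_gain:.1%}")        # 20.0%
print(f"iso-clock ST gain: {iso_clock_gain:.1%}")  # ~10.5%
```

Note this still isn't "IPC" - that would require instruction counts for the specific code being run.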

Edit: Here's the tweet:


Of course his claim that GB6 doesn't support AMX was also wrong ... and he never even discussed the impact of AVX-512. And he stated that no one will use it because the GPU and NPU exist ... sigh ... those will sometimes be too busy and too heavyweight for what's needed, and the workload is better suited to an accelerator closer to the CPU: lower latency, lighter weight, and not occupied. That's why those accelerators exist; it's why Intel, AMD, Apple, and a host of other silicon design companies burn precious silicon on them. Gah! And it's not all AI workloads, though that's the obvious current hotness.

EDIT2: *I should also point out that @leman used the correct term - iso-clock performance gains - in his graph; I (mis)used the term IPC in my chart and graph. Because those enthusiast tech guys who don't actually know anything? That'd be me too. 🙃

EDIT3: I have to be honest: I don't really have a problem with using IPC in a more colloquial manner ... but if someone's going to make statements about how ignorant other people are, they had better have their own ducks in a row. I know I'm not an expert either and I am also currently shitting on someone from on high right now. So I get the hypocrisy, I'm doing the same thing. But at least I guess I just rant on here to you guys ... and occasionally Macrumors. :) Okay I'll stop digging now ...
 
Oh, SPEC results for M4 are already there. Geekerwan posted a video (for now in Chinese). It is being discussed at Anandtech forums.

Couldn't tell if the GHz of the GPU was said. Interesting that single-thread wattage is getting up to around, even above, 7 W now. Not terrible by any stretch, but starting to creep up slowly.
 
Couldn't tell if the GHz of the GPU was said. Interesting that single-thread wattage is getting up to around, even above, 7 W now. Not terrible by any stretch, but starting to creep up slowly.
Do we trust Geekerwan’s power measurements? Iirc they measured the A17 at a really high wattage that wasn’t backed up by any other measurement.

Also, why do so many seem underwhelmed in that thread? Are they wrong or am I missing something?
 
Do we trust Geekerwan’s power measurements? Iirc they measured the A17 at a really high wattage that wasn’t backed up by any other measurement.
Memory fades ... I know @leman did his own testing; I think he thought it was ballpark correct?

Also, why do so many seem underwhelmed in that thread? Are they wrong or am I missing something?
I dunno, not everyone there is. Some seem more than satisfied. Others ... well, as someone said, after a decade of big gains, people grumble if you don't keep delivering them, even if there are only diminishing returns left. Of course someone else then said it has to be because GW3 left. 🙃 As I wrote in the other thread, unless someone actually manages to leapfrog Apple in a big way (which, hey, maybe is possible, what do I know?), it seems hard to conclude that slower gains are due to one person ... however, I also wrote that by the same token, just because the current lead chip design team isn't moving as fast as people might want to see doesn't mean there isn't another design team out there that can do it. But so far I don't see any evidence that there is.
 
Do we trust Geekerwan’s power measurements? Iirc they measured the A17 at a really high wattage that wasn’t backed up by any other measurement.

I think they are as accurate as one can trust Apple APIs (in fact, I based my power measurement code on the scripts Geekerwan uses, so I trust the method). One has to keep in mind however that they are applying some very aggressive cooling to these devices to get the absolute peak performance. The power draw under normal circumstances is going to be more modest, along with a 3-5% lower peak performance.
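
If anyone wants to reproduce the ballpark themselves, this isn't Geekerwan's actual script (or mine), just a sketch of the same idea using macOS's built-in powermetrics tool; the "CPU Power" label is what recent Apple Silicon builds print, so adjust the regex if your OS version words it differently:

```python
import re
import subprocess

def sample_cpu_power_mw(samples: int = 10, interval_ms: int = 1000) -> list[float]:
    """Run powermetrics (needs sudo) and collect per-sample CPU package power in mW."""
    out = subprocess.run(
        ["sudo", "powermetrics", "--samplers", "cpu_power",
         "-i", str(interval_ms), "-n", str(samples)],
        capture_output=True, text=True, check=True,
    ).stdout
    return [float(m) for m in re.findall(r"CPU Power:\s*(\d+)\s*mW", out)]

if __name__ == "__main__":
    readings = sample_cpu_power_mw()
    if readings:
        print(f"avg CPU power: {sum(readings) / len(readings) / 1000:.2f} W "
              f"over {len(readings)} samples")
```

Run your benchmark in another terminal while this samples, and keep the point above in mind: with aggressive cooling the chip holds peak clocks longer, so both the score and the watts come out higher than a bare device would show.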

Also, why do so many seem underwhelmed in that thread? Are they wrong or am I missing something?

I don't get folks there either. AMD scoring 9.4 in SPECint with their 50-watt desktop CPU core is a revolutionary and amazing improvement, while Apple scoring 11.4 with a passively cooled core that draws 7 watts is "meh".
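
Just to put the arithmetic in one place (these are the figures quoted above, treated as rough per-core numbers rather than careful measurements):

```python
# Rough SPECint-per-watt comparison from the per-core figures quoted above.
amd_score, amd_watts = 9.4, 50.0      # desktop core
apple_score, apple_watts = 11.4, 7.0  # passively cooled core

amd_eff = amd_score / amd_watts       # ~0.19 points per watt
apple_eff = apple_score / apple_watts # ~1.63 points per watt

print(f"AMD:   {amd_eff:.2f} pts/W")
print(f"Apple: {apple_eff:.2f} pts/W (~{apple_eff / amd_eff:.1f}x the efficiency, at a higher score)")
```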
 
Memory fades ... I know @leman did his own testing; I think he thought it was ballpark correct?


I dunno, not everyone there is. Some seem more than satisfied. Others ... well, as someone said, after a decade of big gains, people grumble if you don't keep delivering them, even if there are only diminishing returns left. Of course someone else then said it has to be because GW3 left. 🙃 As I wrote in the other thread, unless someone actually manages to leapfrog Apple in a big way (which, hey, maybe is possible, what do I know?), it seems hard to conclude that slower gains are due to one person ... however, I also wrote that by the same token, just because the current lead chip design team isn't moving as fast as people might want to see doesn't mean there isn't another design team out there that can do it. But so far I don't see any evidence that there is.
I guess that’s my question: it seems like there has been a big improvement to me. I assume others know more, so what am I missing?
 
I think they are as accurate as one can trust Apple APIs (in fact, I based my power measurement code on the scripts Geekerwan uses, so I trust the method). One has to keep in mind however that they are applying some very aggressive cooling to these devices to get the absolute peak performance. The power draw under normal circumstances is going to be more modest, along with a 3-5% lower peak performance.

I thought they said the A17 pulled 14 watts single core? I may have misremembered.
I don't get folks there either. AMD scoring 9.4 in SPECint with their 50-watt desktop CPU core is a revolutionary and amazing improvement, while Apple scoring 11.4 with a passively cooled core that draws 7 watts is "meh".
Not just me then!
 
I guess that’s my question: it seems like there has been a big improvement to me. I assume others know more, so what am I missing?
Sure, unless you average the test results, in which case it's about 5% "IPC gains" or so ... and people tend to like numbers above 10%, high teens especially (and remember, SME doesn't count!).
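
By "average" I mean something like the geometric mean of the per-subtest iso-clock ratios, which is how SPEC-style composites are usually rolled up; the per-test numbers below are placeholders, not the real M4 data:

```python
import math

# Per-subtest iso-clock ratios (new per-GHz score / old per-GHz score).
# Placeholder values - the point is the rollup, not the data.
ratios = [1.02, 1.00, 1.04, 1.15, 1.03, 1.01, 1.06, 1.02]

geomean = math.prod(ratios) ** (1 / len(ratios))
print(f'average "IPC gain": {geomean - 1:.1%}')  # one big subtest barely moves the composite
```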

I think they are as accurate as one can trust Apple APIs (in fact, I based my power measurement code on the scripts Geekerwan uses, so I trust the method). One has to keep in mind however that they are applying some very aggressive cooling to these devices to get the absolute peak performance. The power draw under normal circumstances is going to be more modest, along with a 3-5% lower peak performance.

Are you roger_k over there?

I don't get folks there either. AMD scoring 9.4 in SPECint with their 50-watt desktop CPU core is a revolutionary and amazing improvement, while Apple scoring 11.4 with a passively cooled core that draws 7 watts is "meh".
This is a little bit of the anchoring effect: Apple had such big gains for so long that that's now the expectation. Plus, just like in the worst parts of business, it's not about making a profit, it's about growth; unless you're growing, you're failing, no matter how well you're doing. Same thing here. Finally, there is the implicit assumption that everyone else will catch up to Apple and then surpass them if Apple keeps going "like this". That last part especially I have my doubts about.
 
Sure, unless you average the test results, in which case it's about 5% "IPC gains" or so ... and people tend to like numbers above 10%, high teens especially (and remember, SME doesn't count!).
For sure, higher is better. Are other chips actually consistently coming out with IPC increases over 10%? Hasn't Intel been churning out chips with basically no improvement for years?
 
For sure, higher is better. Are other chips actually consistently coming out with IPC increases over 10%?
The leaks are that Zen 5 will and Intel ... might ... of course the situation has flipped from over a decade ago. Intel and AMD are the ones with a gargantuan IPC/iso-clock single threaded performance deficit to make up for.
 
The leaks are that Zen 5 will and Intel ... might ... of course the situation has flipped from over a decade ago. Intel and AMD are the ones with a gargantuan IPC/iso-clock single threaded performance deficit to make up for.
OK, genuine question: I feel like I've heard this before. Even if they do manage it, so what? They (Intel) spent years with little improvement. Now they must get credit for IPC gains while reducing clock speed and getting rid of SMT? Wild.
 
OK, genuine question: I feel like I've heard this before. Even if they do manage it, so what? They (Intel) spent years with little improvement. Now they must get credit for IPC gains while reducing clock speed and getting rid of SMT? Wild.
Yup, people value gains, recent gains. And yeah, we should see what actually gets released, of course. AMD and Intel have a long, long way to go, Intel more so. Qualcomm is starting off much closer, because well, of course they are, and we'll see what their V2 cores are like. ARM can conceivably speed up with the X5. But even if any of these eventually "catch" Apple, as it were, so far there's no evidence of a chip design team that is actually going to do to Apple what Apple did to Intel/AMD. This may age poorly, but that's because there is no evidence to believe otherwise yet. If such evidence comes up, I'll happily change my mind!
 
I think they are as accurate as one can trust Apple APIs (in fact, I based my power measurement code on the scripts Geekerwan uses, so I trust the method). One has to keep in mind however that they are applying some very aggressive cooling to these devices to get the absolute peak performance. The power draw under normal circumstances is going to be more modest, along with a 3-5% lower peak performance.

I have to say I am not fond of this extreme cooling that they seem to be doing lately. As far as I can see, it skews results both in terms of the score and the power. I just want analysis of the M4 at the default power levels. It feels like a move aimed at increasing clicks on YouTube, which is fair if that's their business, but it's a shame there is seemingly little room for sober analysis.
 
I have to say I am not fond of this extreme cooling that they seem to be doing lately. As far as I can see, it skews results both in terms of the score and the power. I just want analysis of the M4 at the default power levels. It feels like a move aimed at increasing clicks on YouTube, which is fair if that's their business, but it's a shame there is seemingly little room for sober analysis.
I just want the GPU's GHz! :sneaky: Also, them running machine translations of what I presume were user comments over the video was really annoying ... I don't remember them doing that before. I don't speak the language, obviously, but ... it seemed superfluous, whatever it was.
 
I noticed in that thread that someone also linked to an old rant about IPC that makes mine look ... moderate - i.e. that IPC should only be measured for precise sections of code, never mind specific workloads. Probably true. He also talks about how some algorithms are indeed intrinsically low IPC; you simply aren't going to get around that no matter how smart your silicon designers are. @Jimmyjames you might find this interesting.

Haha. I saw this and quoted it lately on twitter. I agree with it and your summary.
What I find really galling is when a blue-check tech enthusiast posted a quote tweet of the original tweet @Jimmyjames found, preemptively upbraiding YouTube tech personalities for mindlessly posting that Apple's M4 was going to be good because of the GB6 score without understanding why, and claiming that the tweet @Jimmyjames found proved Apple were actually at a standstill. But they don't actually understand the concepts at play either. There is no singular IPC value. Even what @leman and I did is more akin to a clock-normalized single-threaded comparison, though at least we tried to use the available data we had to make it as specific to the workload as possible - again, we have the data we have.*
Oh my god! This person. So infuriating. I have been "discussing" some of his posts with him and some other people. We have been trying to show him some of his errors, but it really seems like he is just dishonest or very confused. Apart from all the things you have mentioned (SME, AVX-512, GB5, etc.), he has produced some gems, such as "obviously, some of the test data is wrong, so we disregard it". Excuse me? The data determines the result, not you, unless you are predetermining the result you want. Also nuggets like "We won't use Speedometer as it's just a web browser test. Instead we will use Octane and Kraken". What??
Edit: Here's the tweet:


Of course his claim that GB6 doesn't support AMX was also wrong ... and he never even discussed the impact of AVX-512. And he stated that no one will use it because the GPU and NPU exist ... sigh ... those will sometimes be too busy and too heavyweight for what's needed, and the workload is better suited to an accelerator closer to the CPU: lower latency, lighter weight, and not occupied. That's why those accelerators exist; it's why Intel, AMD, Apple, and a host of other silicon design companies burn precious silicon on them. Gah! And it's not all AI workloads, though that's the obvious current hotness.
Yes, utter nonsense. AVX is fine, SME is cheating.
EDIT2: *I should also point out that @leman used the correct term - iso-clock performance gains - in his graph; I (mis)used the term IPC in my chart and graph. Because those enthusiast tech guys who don't actually know anything? That'd be me too. 🙃

EDIT3: I have to be honest: I don't really have a problem with using IPC in a more colloquial manner ... but if someone's going to make statements about how ignorant other people are, they had better have their own ducks in a row. I know I'm not an expert either and I am also currently shitting on someone from on high right now. So I get the hypocrisy, I'm doing the same thing. But at least I guess I just rant on here to you guys ... and occasionally Macrumors. :) Okay I'll stop digging now ...
Aren’t we all? Well ok, I’m not on Macrumors…I was banned!
 
I just want the GPU's GHz! :sneaky: Also, them running machine translations of what I presume were user comments over the video was really annoying ... I don't remember them doing that before. I don't speak the language, obviously, but ... it seemed superfluous, whatever it was.
Yeah, I am still very curious about the GPU. The GB results are still showing little variation. I don't know if there has been some fine-tuning to Dynamic Caching, but it really seems to be getting every ounce of possible performance from the GPU. Granted, this is only one test.
 