x86 CPUs from AMD & Intel current/future releases.

Speaking of that guy (actually a bit unfair to deconstruct60 at MR), I feel like I was clear in the below about why I chose to compare the Surface Laptop to the MacBook Pro rather than to the Air, and why I was making the comparison at all (i.e. to highlight how the Snapdragon Elite/Pro doesn't really align neatly as an SoC with the M3/M3 Pro, making direct comparisons difficult - though it is perhaps closest to the binned M3 Pro):


And you know what ... when I posted the above, I knew deconstruct60 was going to reply, I even knew pretty much what he was going to say, and he did indeed say it. But I just want to double-check that his confusion isn't because I was being unclear. I recognize sometimes I'm not coherent - I'm sometimes quite tired and rushed, or overly verbose - but I reread the above and it seems clear to me that I'm not arguing about which device is the best value, or that there aren't other, better comparable Snapdragon devices. So I'm thinking it's not really worth responding to, but I just want to make sure this is just deconstruct60 being deconstruct60 - i.e. being technically correct (though they aren't always) while badly missing a fairly straightforward point. One that maybe I spent too many words explaining. :)
Yeah, I’m gonna be honest, and it’s probably an issue with me, but I don’t think I have ever understood one of deconstruct60’s posts. No idea why. I either lose my train of thought or lose the will to plow through them.
 
Yeah, I’m gonna be honest, and it’s probably an issue with me,

It’s not
but I don’t think I have ever understood one of deconstruct60’s posts. No idea why. I either lose my train of thought or lose the will to plow through them.
He makes me look concise. And he frequently responds to a different point than the one being made.
 
Yeah, I’m gonna be honest, and it’s probably an issue with me, but I don’t think I have ever understood one of deconstruct60’s posts. No idea why. I either lose my train of thought or lose the will to plow through them.
oh, that’s the guy who had such nice things to say about me over there (but who likes to spell my name wrong)
 
Oof, this guy. Agree with what you said to him, and would add that Geekbench 5.1 added AVX-512 specifically for AES-XTS, which adds significantly to the IPC average - something AnandTech points out.
Yep. I was being ridiculously charitable, which is also funny because Apple still comes off hot. I also want to add that Apple doubling *peak* core power is no big deal at the end of the day, because they're still so low and they got 20% out of it. Would I run the M4 at 4.4GHz vs 4.0? I mean, no - I would like a light mode that lets me run things riiiight at the peak for mobile devices, but this [well, light throttling] would happen by default in, say, an iPad to some degree. And on battery life Apple has full-spectrum dominance: P-core ST perf/W, E-cores with good performance and great efficiency, and excellent fabric/platform management for idle.

And yeah, Andy is dishonest. That's what makes me so irate.

He can't have it both ways: either AVX-512/SME is misleading in terms of IPC (with the caveats about an overall score discussed previously) or it isn't. He is also pontificating that the "M4 runs at 4.5 during multi-core tests and therefore will consume 100+W", even after @leman clarified that this is incorrect.
Yeah, I have no idea what he's on about with the M Max chips overall lol
So frustrating. I don’t mind people being wrong, I don’t mind people making mistakes, but this kind of dishonesty is infuriating.
Ya. Strix is also not impressive. He’s full of it. He really hates the M Pro/Max for some reason lol.
 
Yeah, I’m gonna be honest, and it’s probably an issue with me, but I don’t think I have ever understood one of deconstruct60’s posts. No idea why. I either lose my train of thought or lose the will to plow through them.
I just tried to read what he was saying and lol. It’s a mess
 
Any comments on the Lunar Lake reveal thus far?
Haven't seen the presentation, so I'm only going on secondhand information. People seem happy overall with the IPC increases. I don't know to what extent the figures were massaged.

Edit: Now I see they used their own compiler for the tests. Always a worry with Intel.
 

Yep. I was being ridiculously charitable, which is also funny because Apple still comes off hot. I also want to add that Apple doubling *peak* core power is no big deal at the end of the day, because they're still so low and they got 20% out of it. Would I run the M4 at 4.4GHz vs 4.0? I mean, no - I would like a light mode that lets me run things riiiight at the peak for mobile devices, but this [well, light throttling] would happen by default in, say, an iPad to some degree. And on battery life Apple has full-spectrum dominance: P-core ST perf/W, E-cores with good performance and great efficiency, and excellent fabric/platform management for idle.
To some extent I agree about the M P-cores, and contrary to some of the commentary I've seen about the AMD Zen 5 release, I like that they didn't increase Zen 5's clocks by default. Leave that to the overclockers. The defaults should be sane and require reasonable power draws for both multithreaded and single-threaded work. Intel should do the same; they were both getting ridiculous. Apple's watts are still very much sane, and truthfully they have plenty of headroom left to play with, but I'd rather they not use it, and I hope the power draws don't continue to creep up too much past this point.

That said, I do worry that we're hitting IPC walls on some workloads. We can't know for certain unless everyone stalls at the same point or someone breaks through them, but I find that a more reasonable explanation than "Apple is doomed without GW3" for why clang performance hasn't increased beyond clocks. And of course the less exponential, more linear increases in node density and the lack of SRAM scaling hurt everyone - chip design may not be ordering a pizza, but there's no getting around the fact that fabrication is getting harder. Backside power delivery and GAA transistors should help, but I'm not saying anything novel by pointing out that slower fabrication improvements restrict design improvements to some extent.

And yeah, Andy is dishonest. That's what makes me so irate.

Yeah, I have no idea what he's on about with the M Max chips overall lol

Ya. Strix is also not impressive. He’s full of it. He really hates the M Pro/Max for some reason lol.

M4 Max up to 110 watts. lol. Only one thread/core goes to 4.5GHz on the M4. If 3 or more are active, it goes down to 3.94GHz. He hypes up everything but Apple.

Seeing this guy join the AnandTech forums over the weekend and peddle his bullshit (specifically in the AMD thread so far) has been hilarious. Apparently his attempt on Reddit didn't go as well as he hoped. It does still fascinate me, watching someone really try to craft a narrative in real time, however. Like, what is he thinking during all of this? Does he believe it all, or just some of it?

Sounds like he’s on the fake-it-till-you-become-a-sociopath bandwagon.
The thing is, I find it all so unnecessary. Basically, Lion Cove and Skymont and Lunar Lake and Zen 5/c and Strix all look quite good to me. The M4 looks good to me. They're all solid improvements on their predecessors. But engineering isn't magic. This seems to happen to AMD and Apple and basically all of them - but especially to "underdogs", even when that "underdog" (Apple) is now one of the largest companies in the world. People get unreasonably hyped up, start believing in fairy tales, and then when reality comes crashing down it's a major disappointment - and someone is always there to spin, "no, it really is magical!" when the correct response is, "it's still good; unreasonable disappointment obscures the truth just as much as unreasonable hype". Sure, all of them have pulled off "magical" bits of engineering, but it was still engineering, and expecting that every time isn't healthy.

That's one of the facets of the MacRumors forums I really dislike - if something Apple designs doesn't poop rainbows, despite Apple never promising it would poop at all but leakers assuring everyone it would, people get irate and the trolls all come out saying, "see! Apple is doomed! Doomed!" Building hype is something every company does and none of them are blameless here, but fans do it even more - after all, the word is derived from fanatic. Sure, I have my biases, everyone does, but I just like discussing cool tech, and shit like Andy, or the contrapositive to him, the maudlin reaction to Zen 5, or the fighting over Lunar Lake ... it's all so tiresome. I mean, it's amusing watching him get clowned on to some extent, but I'll be honest, I'd rather he were simply ignored into oblivion. Sadly that isn't how things work - ignoring something rarely actually consigns it to oblivion.

Anyway, in this case (Zen 5), the leakers actually did accurately predict that the 40% improvement was for AVX-512 workloads only (I guess people don't like Moore's Law is Dead? I don't know much about him other than he's a source of leaks), but people ignored that and just went "40% to the moon!" despite that being completely unreasonable for a performance-core IPC improvement in this day and age. I mean, I was circumspect when I first heard about it, but was it possible? Sure, I suppose technically anything is possible, but it would be unprecedented for a single generation, certainly recently. People just need to temper expectations.
 
With the usual caveats that we don’t know the “real world” performance and energy of these new chips, there does seem to be an increased effort in the industry.

I am wondering what Apple will think/do in response, if anything. Do they start to worry? Does their pace of development increase? It already seems really fast to me since the release of the M3, and given the lead time for these things, they must have been planned many years in advance.

I wonder if we could see an M5 before too long?
 
With the usual caveats that we don’t know the “real world” performance and energy of these new chips, there does seem to be an increased effort in the industry.

I am wondering what Apple will think/do in response, if anything. Do they start to worry? Does their pace of development increase? It already seems really fast to me since the release of the M3, and given the lead time for these things, they must have been planned many years in advance.

I wonder if we could see an M5 before too long?
My default is that we won't see the M5 until next year, possibly even in the fall. Yeah, I know I said that about the M4 too (fall of this year), but I think the M4 being so early was probably a bit of a freak occurrence that won't get repeated anytime soon. Apple's development cadence is still pretty good. My one unfulfilled wish for the M4 is that I had hoped for tensor cores in the GPU, or otherwise increased GPU matmul, but again I expected the M4 later too. As for the CPU, I don't think Apple has to worry, and it's not clear to me whether some seemingly stalled areas of development have hit walls right now or are just waiting for the right combination of advancements to come along, and those can't be rushed. In the immediate future, I'm hoping for advancements in Metal at WWDC to catch up with CUDA. If I ever get healthy again, I might investigate actually using Metal for things. I'll admit that's one of the reasons I'd like to get my hands on @leman's throughput and L1 cache code: other than just mucking about with it, I was thinking of using it as a springboard to learning Metal, maybe port the code to CUDA, that kind of thing - a project to get my feet wet with Metal, as it were, getting familiar with someone else's code rather than starting something random from scratch. (@leman this is not me asking for it right away, please take your time, I'm not in any rush)
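Just to give a flavor of what I mean by a throughput toy - this is a rough sketch of my own, not @leman's code (the kernel name, loop count, grid size, and wall-clock timing are all placeholders I made up), assuming a simple dependent-FMA loop dispatched from Swift:

```swift
import Foundation
import Metal

// Toy FP32 FMA-throughput kernel, compiled from source at runtime.
// Everything here (kernel name, loop count, sizes) is a made-up placeholder.
let source = """
#include <metal_stdlib>
using namespace metal;

kernel void fma_throughput(device float *out [[buffer(0)]],
                           uint gid [[thread_position_in_grid]]) {
    float a = float(gid) * 1e-6f;
    const float b = 1.0001f;
    const float c = 0.5f;
    for (uint i = 0; i < 4096; ++i) {
        a = fma(a, b, c);   // chained FMAs; many threads in flight keep the ALUs busy
    }
    out[gid] = a;           // store the result so the loop isn't optimized away
}
"""

guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "fma_throughput")!)

let threadCount = 1 << 20
let buffer = device.makeBuffer(length: threadCount * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!
let queue = device.makeCommandQueue()!
let cmd = queue.makeCommandBuffer()!
let encoder = cmd.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreads(MTLSize(width: threadCount, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: pipeline.threadExecutionWidth,
                                                       height: 1, depth: 1))
encoder.endEncoding()

let start = Date()
cmd.commit()
cmd.waitUntilCompleted()
let seconds = Date().timeIntervalSince(start)

// 2 FLOPs per FMA * 4096 iterations * number of threads, divided by wall-clock time.
let flops = 2.0 * 4096.0 * Double(threadCount)
print(String(format: "~%.1f GFLOPS (very rough, includes dispatch overhead)", flops / seconds / 1e9))
```

Obviously a real microbenchmark would repeat runs, size working sets against the L1 cache, and use GPU timestamps instead of wall-clock time; this is just the kind of skeleton I'd start from while learning the API.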
 
My default is that we won't see the M5 until next year, possibly even in the fall. Yeah, I know I said that about the M4 too (fall of this year), but I think the M4 being so early was probably a bit of a freak occurrence that won't get repeated anytime soon. Apple's development cadence is still pretty good. My one unfulfilled wish for the M4 is that I had hoped for tensor cores in the GPU, or otherwise increased GPU matmul, but again I expected the M4 later too. As for the CPU, I don't think Apple has to worry, and it's not clear to me whether some seemingly stalled areas of development have hit walls right now or are just waiting for the right combination of advancements to come along, and those can't be rushed. In the immediate future, I'm hoping for advancements in Metal at WWDC to catch up with CUDA. If I ever get healthy again, I might investigate actually using Metal for things. I'll admit that's one of the reasons I'd like to get my hands on @leman's throughput and L1 cache code: other than just mucking about with it, I was thinking of using it as a springboard to learning Metal, maybe port the code to CUDA, that kind of thing - a project to get my feet wet with Metal, as it were, getting familiar with someone else's code rather than starting something random from scratch. (@leman this is not me asking for it right away, please take your time, I'm not in any rush)
Oh God yes! I would love some Metal developments and sessions at WWDC. Everyone is excited for AI, and while it's definitely cool, I'm more interested in more prosaic improvements (which I might list in another thread). It would be great to see something, though, and I think there is a good chance.

Fingers crossed you feel better soon.
 
Oh God yes! I would love some Metal developments and sessions at WWDC. Everyone is excited for AI, and while it's definitely cool, I'm more interested in more prosaic improvements (which I might list in another thread). It would be great to see something, though, and I think there is a good chance.

We might have to wait for the developer sessions, depending on what they talk about during the Keynote. But who knows, maybe they'll give us some tidbits during the Keynote itself - if they do, it'll probably be how some new Metal feature supports a new AI tool. But still, the developer sessions should be fun. I hope.
Fingers crossed you feel better soon.
Thanks man!
 
(I guess people don't like Moore's Law is Dead? I don't know much about him other than he's a source of leaks)
He's a source of "leaks", by which I mean you never know if anything he claims is an actual leak. He's playing a very old confidence game: throw out lots of semi-informed guesses with absolute conviction, hype up the hits as evidence that you have inside sources, memory-hole the misses.

Like, I don't doubt that sometimes he may have legitimate sources, but just watch one of his videos and tell me if you really want to spend any time trying to figure out what's bullshit and what's not. He is incredibly smarmy and smug. I guess some people eat that up, but I sure don't enjoy it.
 
OT, but looking at some of the press shots and videos of Lunar Lake, I really wish the Mac's Activity Monitor had the same detail as Windows Task Manager. So much detail.
 
He's a source of "leaks", by which I mean you never know if anything he claims is an actual leak. He's playing a very old confidence game: throw out lots of semi-informed guesses with absolute conviction, hype up the hits as evidence that you have inside sources, memory-hole the misses.

Like, I don't doubt that sometimes he may have legitimate sources, but just watch one of his videos and tell me if you really want to spend any time trying to figure out what's bullshit and what's not. He is incredibly smarmy and smug. I guess some people eat that up, but I sure don't enjoy it.
Eugh sounds awful. I think I’ll pass on watching even one. Life is too short.
 
Still haven't had much time to investigate. How does the new generation of GPUs compare to Apple's offerings?
I believe Lunar Lake has similar TFLOPS to the M1 but has matrix accelerators for additional TOPS. It also has a bunch of video codec stuff, including a new one I hadn't heard of before. Obviously actual performance in real applications is unknown.
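(For anyone wanting the back-of-the-envelope math behind those peak figures: it's just FP32 lanes x 2 FLOPs per FMA x clock. A quick sketch using the commonly cited M1 numbers - 1024 FP32 lanes at roughly 1.28GHz - which are assumptions on my part rather than measurements:)

```swift
// Back-of-the-envelope peak FP32 throughput: lanes * 2 FLOPs per FMA * clock.
// The M1-class inputs below (1024 FP32 lanes, ~1.28 GHz) are commonly cited
// figures used here as assumptions, not measurements.
func peakTFLOPS(fp32Lanes: Double, clockGHz: Double) -> Double {
    fp32Lanes * 2.0 * clockGHz / 1000.0   // GFLOPS / 1000 = TFLOPS
}

print(peakTFLOPS(fp32Lanes: 1024, clockGHz: 1.28))   // ~2.6, in line with the usual ~2.6 TFLOPS quoted for M1
```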

I'm honestly not sure about Strix Point; given what I've seen, it seems more like an Mx Pro analog to me than an Mx base, with next year's Strix Halo being an Mx Max analog. That certainly seems to be true for the CPU side of it, and it might also be true for the GPU. That's just my sense of it; someone with more insight might have a different take.
 
We might have to wait for the developer sessions, depending on what they talk about during the Keynote. But who knows, maybe they'll give us some tidbits during the Keynote itself - if they do, it'll probably be how some new Metal feature supports a new AI tool. But still, the developer sessions should be fun. I hope.
OT and tangentially related, but there was a discussion on Mastodon earlier between ex-Apple engineer Jonathan Wight @schwa and Colin Cornaby about Metal and Vulkan.

To my surprise @schwa was in favour of dumping Metal for Vulkan, whereas the previously Metal-sceptic Colin was opposed to it. I'll post the initial link if anyone's interested, and can post more of the conversation if people can't access it for any reason.

 
OT and tangentially related, but there was a discussion on Mastodon earlier between ex-Apple engineer Jonathan Wight @schwa and Colin Cornaby about Metal and Vulkan.

To my surprise @schwa was in favour of dumping Metal for Vulkan, whereas the previously Metal-sceptic Colin was opposed to it. I'll post the initial link if anyone's interested, and can post more of the conversation if people can't access it for any reason.

View attachment 29795

Everyone in that conversation agrees that it isn't going to happen, but as an interesting thought experiment, could it? Yes. As I linked to below (and I believe it was mentioned in the conversation), Asahi Linux has gotten Vulkan 1.3 working on Apple GPUs (with lots of work yet to be done):


My superficial reading is that Colin is right and that indeed the underlying hardware isn't really designed for Vulkan. That said, most of the worst parts of trying to get fully conformant Vulkan working weren't Vulkan itself, but getting a working Vulkan implementation that could also be amenable to DirectX emulation (see the April 26th Direct3D post). How performant that will be in the long run is unknown, but it definitely seemed like Vulkan itself wasn't as big of a hurdle as you might expect, though there were some rough edges (some of which non-Apple hardware shared, mind you). As Asahi mentioned in their Mastodon post, Apple could theoretically bake Mesa into the kernel and get Vulkan and OpenGL 4.6 working almost right away, but nobody outside Apple can do that for them because the underlying graphics driver APIs are hidden and unstable. So in theory, it's very possible. That said, is it actually desirable? And I don't just mean Apple having to give up control to Khronos (again) - we know how that went.

The final problem, as Colin said and Jonathan agreed, is that one of the largest applications of "interesting graphics" is games, and well ... Vulkan is also just a "port-to" API; there's no real value added there by adopting Vulkan. Jonathan describes Vulkan as more fertile ground than Metal (though he obviously prefers the Metal API in many ways), but that must be in applications beyond gaming.

As he says himself, Jonathan's real point comes later:



In your other thread about what Apple can do to catch Nvidia, we all agreed that Apple badly needs to improve its documentation, tools, and developer outreach/support. It's the nuts-and-bolts stuff. It's not as sexy as language features and a streamlined API, but it's incredibly important. I feel like Apple is trying here with lots of new initiatives, but they need to go further and not get bored. They can't just run lean and wing it, which is sometimes Apple's MO. More than just engineering, that requires an actual organization within Apple dedicated to pushing new tools and documentation, liaising with developers, creating online courses and tutorials, etc ... and again, Apple historically runs very lean on head count, and all of that takes lots of people.

So yeah, hopefully Metal, and developing a community around it, gets some love from Apple - more than just support for whiz-bang AI. I kinda want to know who the one guy is ...
 