Nuvia: don’t hold your breath

Clock speed is still one of the most reliable ways to get gains, and I'm a fan of how Apple and Arm have mostly raised it steadily, even with node gains eating into those improvements. But this is such a 2017 PC-gamer argument about clocks. Performance is what we care about. Fatter, wider cores with better prediction and broad integer IPC at slightly lower clocks are generally going to beat slightly less robust cores with more frequency in real code; consumer code is notoriously branchy and sloppy.

I would take an M2 Avalanche core at 3.5 GHz over Lion Cove at 5.1 GHz without hesitation, if we wanted to get super technical about who comes out on top in some mass assay of real applications.
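For what it's worth, the arithmetic behind that preference is simple: throughput scales roughly with IPC × clock, so a wider core only needs a proportional IPC advantage to absorb a clock deficit. A toy sketch, where the IPC figures (2.2 and 1.4) are purely illustrative assumptions, not measured SPEC numbers:

```python
# Toy model: relative perf is proportional to IPC times clock.
# The IPC values below are made-up assumptions for illustration only.
def relative_perf(ipc: float, clock_ghz: float) -> float:
    return ipc * clock_ghz

wide_core = relative_perf(ipc=2.2, clock_ghz=3.5)    # wide, lower-clocked core
narrow_core = relative_perf(ipc=1.4, clock_ghz=5.1)  # narrower, higher-clocked core

# 7.70 vs 7.14: under these assumed IPCs, the wider core wins
# despite giving up 1.6 GHz of frequency.
print(wide_core, narrow_core)
```

The point isn't these specific numbers, just that a ~45% clock advantage evaporates against a ~55% IPC advantage.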
 
IMO the Linux issue wasn't IPC so much as the power floor, which riled me up because it skewed the curve more than anything else. I don't think IPC, or really aggregate performance, changed by much.
I dunno, actually comparing the two charts it does seem like Qualcomm's SpecInt performance improved, not just its power. Previously the 258V was getting higher performance than the Snapdragon, and now it isn't. And yeah, of course the power curve looks significantly better too. One of the things I will say for MS: WSL is supposedly *excellent*. I've never used it myself, but lots of people who have give it really high marks, saying it has exceptionally low overhead, practically bare-metal performance. So that's probably closer to its "real" performance, especially with respect to power, than the earlier charts, despite those being "native" Linux.
 
I guess what I meant was that the curve is the much bigger, more noticeable thing to me below 8W, but fair, yeah.
 
I really cannot stand PC DIY morons, holy hell.
Qualcomm will probably deliver 5 GHz next gen, so they can shut up.
[Image: Snapdragon 8 Gen 5 spec sheet]
 
Yeah, his analysis was similar to mine but way more brutal, and frankly exasperated by the end, especially over the compatibility issues. Apple really did themselves a favor by culling 32-bit apps a long time ago on both iOS and macOS to motivate the switch before transitioning everything to ARM. The 32-bit translation performance must've been what Charlie Demerjian was referring to when he talked about performance being dog-shit, 2000s-Intel-Core-level under certain circumstances (and he was blaming MS, not Qualcomm).

I only wish Apple had made the Intel transition maybe a year later; then they would have skipped x86-32 completely.
I understand why they dropped the 32-bit API, but unfortunately there is quite a bit of legacy code that was never compiled for x86-64 and therefore doesn't run on current macOS (there might be other problems anyway, like deprecated APIs).

I have to admit that I didn't do any performance tests with Windows on ARM, but I didn't get the feeling that the x86-to-ARM64 translation was that bad.
Until GPTK was integrated into CrossOver, games certainly were more performant under Windows on ARM.
 
Looks like Qualcomm's roadmap is to follow Apple and create SoCs with huge integrated GPUs.

I think this is a given to some degree. That said, I was surprised Qualcomm is heading for a 192-bit LPDDR5 bus on the next gen already for one product, while the 6+6-core model is said to be 128-bit. I expected them to keep it at 128-bit with high speeds and just fix the GPU outright, making it reasonably sized, which would be more than enough for most. But the rumors point to both that and a separate, larger SKU with 50% more cores (or 100% more big cores), a much bigger GPU, and a 50% wider bus.

(On LPDDR6, that would be more like 224 or 256-bit because of the signal-integrity/ECC and other profile-metadata capabilities, at least at the same data rate, and I am not sure LPDDR6 at the same data rate as their 192-bit LPDDR5 solution makes sense with that in mind. So I could see LPDDR6 going to more like 256-bit for that product, at least if they don't use LPCAMM.)
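To put rough numbers on what those bus widths buy: peak theoretical bandwidth is just bus width in bytes times data rate. A quick sketch, where the 8533 MT/s LPDDR5X-class rate is an assumed figure for illustration and actual SKUs may clock differently:

```python
# Peak theoretical bandwidth: (bus width in bits / 8) bytes per transfer,
# times the data rate in MT/s, converted to GB/s.
# The 8533 MT/s figure is an assumption for illustration, not a confirmed spec.
def bandwidth_gbps(bus_bits: int, rate_mtps: int) -> float:
    return bus_bits / 8 * rate_mtps / 1000

print(bandwidth_gbps(128, 8533))  # ~136.5 GB/s for the 128-bit config
print(bandwidth_gbps(192, 8533))  # ~204.8 GB/s for the 192-bit config
```

So at the same data rate, the 192-bit part gets a straight 50% bandwidth bump over the 128-bit one, which is presumably what the bigger GPU needs.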


FWIW, I doubt we will see Mx Max-and-up-sized stuff from QC, but they will clearly try to compete with the Pro lineup.
 
Looks like Qualcomm's roadmap is to follow Apple and create SoCs with huge integrated GPUs.
However... for this to be viable, they need to significantly evolve their GPU architecture, as Apple has been doing over the past four years.

Adreno 8 is a step in the right direction, but there is still a lot of work to do.

This explains why they are being conservative and going for a 192-bit memory bus for the top-tier part in Snapdragon X2.
 