Nuvia: don’t hold your breath

It’s funny how much Intel is throwing at this, just like I said. They’re beating QC on perf/W ST in exactly one benchmark: SPECint.


Also:

IMG_8142.png

IMG_8141.png


IMG_8138.jpeg

Ignore the Apple comment; they are basically the same thing here. Just think of these as clusters, or tiers.


This Geekerwan measurement uses the Galaxy Book 16 under *Linux*, which Andrei said is not advisable for power testing, particularly at idle (as you can see, roughly a 0.6 W platform floor for Intel versus a 1.8-2 W floor for Qualcomm, though there’s a caveat*). Intel wins perf/W ST in SPECint; Qualcomm wins the SPECfp results, and by a larger margin.

Given the GB16 is about the worst implementation out there, I think this looks fine, tbh.
*I believe his measurements were right in this specific case, comparing the Lenovo or Asus Lunar Lake VRM monitoring against the Galaxy Book’s VRM, but I am confident this includes some Samsung idiocy at the platform level (whatever that may be), or Linux issues, like Andrei said.

Why?

Because A) Andrei, and B) we have the XPS 9345 with both Lunar Lake and the X Elite, built to largely identical standards and power-efficiency targets, and both get similar web browsing and offline video playback in the 26-29 hour range under ideal conditions.

If you take Geekerwan’s idle floor at face value for the chips themselves and the motherboard platforms in general, it doesn’t make sense: roughly 0.6 W for Intel versus 1.8 W for Qualcomm as an inherent board trait would be very influential at idle and during video offload, and even marginally so in web browsing. QC’s platform probably isn’t perfect, and I don’t mind them not being at Apple levels broadly, but this is fishy.


In the one head-to-head test, what we actually see from Dell’s own XPS 9345 stats (on Windows, too) is that the Lunar Lake and Qualcomm X Elite laptops (display controlled, FHD, etc.) are similar, in the upper-20-hour range. And in Laptop Mag and Tom’s Guide automated web-browsing testing, with plenty of idle time, here are the results between them:

IMG_8136.jpeg
IMG_8137.jpeg




So I think the idea that Linux and the Galaxy Book 16 are the problem is almost certainly correct. It simply would not be possible for Qualcomm to get 26-29 hours of video playback from a 50-55 Wh battery, just like Intel with Lunar Lake in the same laptop, much less match or beat them in two entirely separate web-browsing tests in the same laptop, if their platform overhead were that much higher (unless you claim really inefficient active power from Intel, which, okay, but in video playback? Right when they’re using E-cores for something like 80% of tasks? Lol, no).
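To make the arithmetic behind that explicit, here's a rough back-of-the-envelope sketch; the battery capacity, runtime, and idle-floor numbers are the approximate figures discussed above, not new measurements:

```python
# Rough check: what average system power does a quoted runtime imply,
# and how much headroom does a claimed idle floor leave for everything else?
# Figures are the approximate ones discussed above, not measurements.

battery_wh = 54.0        # XPS 9345-class battery, roughly 50-55 Wh
runtime_hours = 27.0     # roughly 26-29 h video playback / web browsing

avg_system_power_w = battery_wh / runtime_hours
print(f"Implied average system power: {avg_system_power_w:.2f} W")  # ~2.0 W

for platform, idle_floor_w in [("Intel (per Geekerwan)", 0.6),
                               ("Qualcomm (per Geekerwan)", 1.8)]:
    headroom_w = avg_system_power_w - idle_floor_w
    print(f"{platform}: {headroom_w:.2f} W left for display, SoC activity, etc.")
```

If the 1.8-2 W platform floor were real, the X Elite machine would be left with roughly 0-0.2 W for the display and everything else during playback, which isn't credible, hence my suspicion about the Linux/Galaxy Book numbers.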

Anyway.

I feel pretty reasonable saying it’s clear who’s punching above their weight and who likely has more in store. QC isn’t even using E-cores or N3 yet, both of which would push power down meaningfully and battery life up, and, as the 8 Gen 4 is likely to show, the frequencies achievable with reasonably dense N3B and N3E are pretty solid. If the X Elite were on N3B, I bet base clocks would be 3.8-4 GHz and the peak probably 4.7 GHz, timing permitting, though it seems they had some slack. It’s weird that people think Lunar Lake is anything but a temporary lull in mindshare and revenue.

It's not at the top versus Apple, and it certainly isn’t cost competitive. Lunar Lake’s graphics are nice but not a huge deal in the context of the long arc here.
 
Huh


At the other place people were saying what a fantastic deal this was compared to the mini … apparently it was a bit too good of a deal. Overall, Qualcomm’s launch execution has been pretty lackluster.
 

As I’m sure many of you know, I dislike when chip companies continually switch between different processors/devices to make different claims, but, that aside, Qualcomm seems more correct here than Intel’s marketing (which did the same thing), especially given Intel’s misleading SPEC results. Intel doesn’t win on performance “period”, and certainly not on performance/W. Of course, Qualcomm still has compatibility issues, a weak GPU, only a single SOC (with multiple bins), and lackluster developer support and overall execution.

Qualcomm’s second generation chips can improve some of these aspects intrinsically, but others will need work from more than just their hardware teams.
 

As I’m sure many of you know, I dislike when chip companies continually switch between different processors/devices to make different claims, but, that aside, Qualcomm seems more correct here than Intel’s marketing (which did the same thing), especially given Intel’s misleading SPEC results. Intel doesn’t win on performance “period”, and certainly not on performance/W. Of course, Qualcomm still has compatibility issues, a weak GPU, only a single SOC (with multiple bins), and lackluster developer support and overall execution.

Qualcomm’s second generation chips can improve some of these aspects intrinsically, but others will need work from more than just their hardware teams.
Agree that QC is well ahead of Intel/AMD. I do find QC’s hyperbole very off-putting, however. They didn’t fulfill their performance promise last year, and now we’re back to unsubstantiated claims.

Do you know if they are supposed to announce their second gen laptop chips soon?
 
Agree that QC is well ahead of Intel/AMD. I do find QC’s hyperbole very off-putting, however. They didn’t fulfill their performance promise last year, and now we’re back to unsubstantiated claims.
Now that both sets of devices are out, as the Tomshardware author said, reviews can confirm most of what Qualcomm put out in this press conference with respect to Lunar Lake, just with the caveats I also mentioned above. Basically, any cherry-picking and exaggeration done by Qualcomm with respect to Apple in the pre-release of Snapdragon was matched or surpassed by Intel with respect to Qualcomm in the Lunar Lake presentation.
Do you know if they are supposed to announce their second gen laptop chips soon?
They just announced new phone chips.


3nm node, but no word here on whether these are new Oryon cores or ports. It reads like just a port from 4nm to 3nm (you’d think they’d emphasize new cores), but I remember some of the early benchmark leaks were suggestive of a small iso-clock performance uplift. So it’s unclear if we’re getting v2 anytime soon.

Also, of minor interest, apparently the Dimensity 9400, which uses standard ARM cores, is likewise dropping efficiency cores. This is an odd trend on non-Apple flagship SOCs. I’m not quite sure what to make of it.

Edit: Although it should be said it’s a little different: for Snapdragon all the cores are Oryons, just two of them at higher clocks, while the Dimensity 9400 has a mix of X925, X4, and A720 cores (and the A720s are more similar to Apple’s E-cores in performance). Which means in reality the Dimensity chips have really only dropped the littlest 500-series cores. So I guess it’s not quite a comparable statement.
 
Now that both sets of devices are out, as the Tomshardware author said, reviews can confirm most of what Qualcomm put out in this press conference with respect to Lunar Lake, just with the caveats I also mentioned above. Basically, any cherry-picking and exaggeration done by Qualcomm with respect to Apple in the pre-release of Snapdragon was matched or surpassed by Intel with respect to Qualcomm in the Lunar Lake presentation.
Oh I’m sure Intel did! I also saw a chart QC put out where they compare their highest score to the average A18 score.
They just announced new phone chips.
Yes, I watched some of the presentation. If I understand correctly, these are newish cores designed for mobile. I’m wondering if an Oryon v2 will be previewed soon?
 
Oh I’m sure Intel did! I also saw a chart QC put out where they compare their highest score to the average A18 score.

Yes, I watched some of the presentation. If I understand correctly, these are newish cores designed for mobile. I’m wondering if an Oryon v2 will be previewed soon?
Apparently Geekerwan has confirmed that, in GB, the 8-core Snapdragon does indeed outperform the 6-core A18, to no one’s surprise, and, also to no one’s surprise, it costs more energy to do so:


(Indeed, it costs more energy than the extra performance it buys, though the Dimensity is perhaps the least energy-efficient of the trio, by a lot.)
 
Apparently Geekerwan has confirmed that, in GB, the 8-core Snapdragon does indeed outperform the 6-core A18, to no one’s surprise, and, also to no one’s surprise, it costs more energy to do so:


(Indeed, it costs more energy than the extra performance it buys, though the Dimensity is perhaps the least energy-efficient of the trio, by a lot.)
Hm. Dr. Ian Cutress praised the efficiency of the Dimensity chips.
 
Hm. Dr. Ian Cutress praised the efficiency of the Dimensity chips.
He did. But that was relative to previous designs using ARM Holdings cores, especially those using 500-series cores. Compared to Apple and near-Apple cores, they are possibly still this far behind. Even the data he presented was suggestive of that, and I think he even says so explicitly at one point in that video. This is one data point, and obviously more are needed to be conclusive, but it makes sense to me so far.
 
Apparently Geekerwan has confirmed that, in GB, the 8-core Snapdragon does indeed outperform the 6-core A18, to no one’s surprise, and, also to no one’s surprise, it costs more energy to do so:


(Indeed, it costs more energy than the extra performance it buys, though the Dimensity is perhaps the least energy-efficient of the trio, by a lot.)
I should've checked the charts more closely: the SD8E is also a tad more efficient in GB6 MT than the A18 at the same power. Of course it's still 8 vs 6 cores, but that's definitely better than I made it sound in my post above:

 
I should've checked the charts more closely: the SD8E is also a tad more efficient in GB6 MT than the A18 at the same power. Of course it's still 8 vs 6 cores, but that's definitely better than I made it sound in my post above:

I’m just gonna come out and say that I don’t trust Geekerwan’s power measurements. I’ll look for it, but I’m pretty sure Andrei stated they had made some errors of measurement recently.
 
I should've checked the charts more closely: the SD8E is also a tad more efficient in GB6 MT than the A18 at the same power. Of course it's still 8 vs 6 cores, but that's definitely better than I made it sound in my post above:


They have more cores, so they can operate them at a lower frequency to reach the same performance. If these figures are correct, ~10% more performance for ~40% more energy does not strike me as a good tradeoff. However, this leads directly to the next point...

I’m just gonna come out and say that I don’t trust Geekerwan’s power measurements. I’ll look for it, but I’m pretty sure Andrei stated they had made some errors of measurement recently.

I don't trust any measurements of battery-powered devices. Internal sensors differ in what they report, and wall power measurement cannot account for what the battery is doing.
 
They have more cores, so they can operate them at a lower frequency to reach the same performance.
Aye that’s why I caveated 8 vs 6 cores in both posts. :)

If these figures are correct, ~10% more performance for ~40% more energy does not strike me as a good tradeoff.

It’s not a good tradeoff. Though they can get slightly better performance at the same power, they choose to push past that. That said, needing 8 vs 6 cores to achieve this, combined with the ST results, seems to further support that Oryon is 20% (in this case more) behind the Apple cores it directly competes with.
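As a toy illustration of the wide-and-slow point (made-up scaling assumptions, not chip data): dynamic power goes roughly as V²·f per core, and since voltage rises roughly with frequency near the top of the curve, per-core power grows close to cubically with clock while throughput only grows linearly.

```python
# Toy model of "more cores at lower clocks": per-core dynamic power ~ V^2 * f,
# and assuming V scales roughly linearly with f near the top of the curve,
# per-core power grows ~f^3 while per-core throughput grows ~f.
# Purely illustrative numbers, not chip data.

def relative_core_power(freq_ratio: float) -> float:
    """Power of one core relative to a core running at freq_ratio = 1.0."""
    return freq_ratio ** 3

# Same total throughput two ways: 8 cores at 0.75x clock vs 6 cores at 1.0x.
configs = {"8 cores @ 0.75x": (8, 0.75), "6 cores @ 1.00x": (6, 1.00)}
for name, (cores, freq) in configs.items():
    total_power = cores * relative_core_power(freq)
    total_perf = cores * freq
    print(f"{name}: perf {total_perf:.1f}, power {total_power:.2f}")
# -> the wider/slower config delivers the same 6.0 "units" of throughput for
#    roughly 3.4 vs 6.0 units of power, but only if it actually stays at the
#    lower clock instead of pushing frequency for a headline score.
```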
However, this leads directly to the next point...



I don't trust any measurements of battery-powered devices. Internal sensors differ in what they report, and wall power measurement cannot account for what the battery is doing.

I’m just gonna come out and say that I don’t trust Geekerwan’s power measurements. I’ll look for it, but I’m pretty sure Andrei stated they had made some errors of measurement recently.
Interesting! Please do.

The battery-powered device issue I understand; it's always been a concern of mine as well, though in practice, especially for things like laptops with large power adapters, I'm not sure how often it happens that wall power measurements are thrown off by the device drawing on battery* (unless a manufacturer is deliberately cheating). For a phone, though, maybe. I know software measurements from internal sensors are also not super reliable (Apple's powermetrics, at least for average power, even comes with a disclaimer not to use it for comparisons between devices), but that's why I like having both when possible.

*e.g. whenever I see power measurements for a chip shared between laptops and desktop PCs they are often close enough, within standard variance, that I generally find the former trustworthy. That said, I do remember NBC managing to get a MacBook Pro 14" to draw more power than the standard power adapter, the 70W charger, was rated for by using the bigger, optional charger and giving the entire SOC (at least the CPU + GPU) a power virus (like both FurMark and Prime95 simultaneously or something). Of course they admitted that kind of power draw wasn't a likely scenario most users would encounter - just that it technically could happen that they could draw on battery while plugged in.
 
The battery-powered device issue I understand; it's always been a concern of mine as well, though in practice, especially for things like laptops with large power adapters, I'm not sure how often it happens that wall power measurements are thrown off by the device drawing on battery* (unless a manufacturer is deliberately cheating). For a phone, though, maybe. I know software measurements from internal sensors are also not super reliable (Apple's powermetrics, at least for average power, even comes with a disclaimer not to use it for comparisons between devices), but that's why I like having both when possible.

I think the problem is that, with modern intelligent batteries and power adapters, the device might decide to take power from the battery instead of, or in addition to, the power adapter for various reasons (like preserving battery health). You just never know. One could augment the data by monitoring the battery power draw, but then we are back to square one: getting reliable and high-fidelity measurements.
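In other words, the accounting would have to look something like this rough sketch (all the numbers below are hypothetical), and the catch is that the battery term is exactly the quantity that's hard to read out reliably:

```python
# Sketch of the accounting problem: if the device quietly draws from (or
# charges) the battery while plugged in, wall power alone misstates what the
# device actually consumed over the run. All numbers are hypothetical.

def avg_device_power_w(wall_power_w: float, battery_delta_wh: float, hours: float) -> float:
    """Average device power over the run.

    battery_delta_wh > 0: the battery discharged (extra energy came from it).
    battery_delta_wh < 0: some wall energy went into charging it instead.
    """
    return wall_power_w + battery_delta_wh / hours

# 30 W average at the wall over a 0.5 h benchmark, while the battery also
# dropped by 2 Wh:
print(avg_device_power_w(30.0, 2.0, 0.5))   # 34.0 W actually consumed
# Same wall reading, but 2 Wh flowed *into* the battery instead:
print(avg_device_power_w(30.0, -2.0, 0.5))  # 26.0 W actually consumed
```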
 
I think the problem is that, with modern intelligent batteries and power adapters, the device might decide to take power from the battery instead of, or in addition to, the power adapter for various reasons (like preserving battery health). You just never know. One could augment the data by monitoring the battery power draw, but then we are back to square one: getting reliable and high-fidelity measurements.
Yes. It’s my understanding that Apple keeps power moving in and out of the battery while plugged in. You can observe it go from 100% to 97/98% constantly.
 
Aye that’s why I caveated 8 vs 6 cores in both posts. :)



It’s not a good tradeoff. Though they can get slightly better performance at the same power, they choose to push past that. That said, needing 8 vs 6 cores to achieve this, combined with the ST results, seems to further support that Oryon is 20% (in this case more) behind the Apple cores it directly competes with.



Interesting! Please do.

The battery-powered device issue I understand; it's always been a concern of mine as well, though in practice, especially for things like laptops with large power adapters, I'm not sure how often it happens that wall power measurements are thrown off by the device drawing on battery* (unless a manufacturer is deliberately cheating). For a phone, though, maybe. I know software measurements from internal sensors are also not super reliable (Apple's powermetrics, at least for average power, even comes with a disclaimer not to use it for comparisons between devices), but that's why I like having both when possible.

*e.g. whenever I see power measurements for a chip shared between laptops and desktop PCs they are often close enough, within standard variance, that I generally find the former trustworthy. That said, I do remember NBC managing to get a MacBook Pro 14" to draw more power than the standard power adapter, the 70W charger, was rated for by using the bigger, optional charger and giving the entire SOC (at least the CPU + GPU) a power virus (like both FurMark and Prime95 simultaneously or something). Of course they admitted that kind of power draw wasn't a likely scenario most users would encounter - just that it technically could happen that they could draw on battery while plugged in.
So there is the Geekerwan Lunar Lake review discussion on Reddit.
 
So there is the Geekerwan Lunar Lake review discussion on Reddit.

That seems to mostly be that the new chips aren't interacting well with Linux, rather than Geekerwan's measurement methodology being bad in general (i.e. affecting iOS/Android). Unfortunately, with the video's poor subtitles and Andrei's vague comments, it's not clear to me if he's complaining that Geekerwan's measurements were off for Lunar Lake, for Snapdragon, or for both. In the video, the Geekerwan host does indeed complain that the Linux support for Lunar Lake/Snapdragon wasn't fully ready and seemed to have problems with power states in particular, which Andrei also brings up. My guess is that Andrei is complaining about missing Snapdragon performance. Looking at Geekerwan's graphs for Snapdragon, I have to say the SPECint results in particular didn't track with my expectations for the Oryon core, but obviously my expectations aren't data, so I'm not sure how meaningful that is. Some of the subtitles also suggested that Linux wasn't yet fully compatible with Lunar Lake either.

I think the problem is that, with modern intelligent batteries and power adapters, the device might decide to take power from the battery instead of, or in addition to, the power adapter for various reasons (like preserving battery health). You just never know. One could augment the data by monitoring the battery power draw, but then we are back to square one: getting reliable and high-fidelity measurements.

Yes. It’s my understanding that Apple keeps power moving in and out of the battery while plugged in. You can observe it go from 100% to 97/98% constantly.

I'm just not sure how meaningful that is; compare the M2 MacBook Pro and M2 Mini results:

Power (idle subtracted) | CB R23 ST (perf/W, W) | CB R23 MT (perf/W, W) | Witcher 3 (W only)
M2 MacBook Pro | 219.6 pts/W, 7.2 W | 385.8 pts/W, 22.67 W | 31.8 W
M2 Mini | 213.8 pts/W, 7.7 W | 414.6 pts/W, 21.16 W | 26.3 W

The Witcher 3 results show the biggest gap, about 20%, but in favor of the Mini, not the laptop (the M2 Air did better than both on The Witcher 3, though it has to be said it may have throttled; it was worst on CB R23 MT and best on CB R23 ST). If the MacBook Pro's battery were significantly impacting these results, that shouldn't be the case; the Mini should always come out less efficient. Now obviously I can't rule out that every laptop behaves itself under every circumstance, but I don't often see anything that stands out to me. Of course it *could* happen, and it's always something to watch out for when testing a device on battery. To make extra sure you could test laptops with the battery removed (that used to work, anyway), but I don't know anyone who does that.
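For what it's worth, this is the arithmetic behind idle-subtracted perf/W figures like the ones in that table; the load/idle split in the sketch is hypothetical and the raw score is back-calculated from the table, so treat it as approximate:

```python
# perf/W with idle subtraction: score divided by the power attributable to
# the workload, i.e. load power minus the idle floor. The load/idle split
# here is hypothetical; the raw score is back-calculated from the table.

def perf_per_watt(score: float, load_power_w: float, idle_power_w: float) -> float:
    """Benchmark score per watt of workload-attributable power."""
    return score / (load_power_w - idle_power_w)

# M2 MacBook Pro, CB R23 ST: 219.6 pts/W at 7.2 W (idle already subtracted)
# implies a raw score of roughly 219.6 * 7.2 ≈ 1581 pts.
approx_score = 219.6 * 7.2

# If, say, the system drew 9.2 W under load against a 2.0 W idle floor
# (hypothetical split totalling the same 7.2 W), we recover the table figure:
print(round(perf_per_watt(approx_score, 9.2, 2.0), 1))  # 219.6
```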

==============================================


The user comment in this one is new (to me) ... I've heard Apple's version of Clang described as quirky, given that it rarely matches a single mainline Clang version, but it has never been described as super-optimized relative to standard Clang. I've certainly never found it to be so. In general, my understanding and limited experience is that GCC will tend to produce faster code than Apple's Clang, and occasionally so will mainline Clang, because Apple's Clang can sometimes be a little more out of date than what you can get off the mainline branch. Obviously, most of the time they'll be pretty similar. EDIT: Okay, I think there are a couple of defaults that are different, and those can produce different, maybe more optimized, results if you don't turn the equivalent flags on for standard Clang, but none of them are close to what Intel's ICC does, especially for SPEC.


==============================================


The Adreno GPU and the Dimensity GPU outperform Apple's A18 Pro; both have 12 cores and, my guess is, are clocked similarly (and unlike the rumors that the Dimensity would be clocked at 1.6 GHz, it must be closer to 1 GHz in practice). They went with wider and slower designs: better performance, better efficiency, more expensive. Interestingly, the Adreno GPU outperforms the Dimensity in OpenCL, which may have been the case last generation too.
 
That seems to mostly be that the new chips aren't interacting well with Linux, rather than Geekerwan's measurement methodology being bad in general (i.e. affecting iOS/Android). Unfortunately, with the video's poor subtitles and Andrei's vague comments, it's not clear to me if he's complaining that Geekerwan's measurements were off for Lunar Lake, for Snapdragon, or for both. In the video, the Geekerwan host does indeed complain that the Linux support for Lunar Lake/Snapdragon wasn't fully ready and seemed to have problems with power states in particular, which Andrei also brings up. My guess is that Andrei is complaining about missing Snapdragon performance. Looking at Geekerwan's graphs for Snapdragon, I have to say the SPECint results in particular didn't track with my expectations for the Oryon core, but obviously my expectations aren't data, so I'm not sure how meaningful that is. Some of the subtitles also suggested that Linux wasn't yet fully compatible with Lunar Lake either.
Yes, it’s hard to say exactly what he means. I took it to be a complaint about their data for power. I would have thought it’s more straightforward to gather performance data. Of course I could be wrong!
I'm just not sure how meaningful that is; compare the M2 MacBook Pro and M2 Mini results:
We just don’t know. That’s the problem. In this case there is some correlation between the laptop and desktop measurements, but I’ve seen others where there isn’t. In the case of single-core measurements, where the chips are already efficient, small discrepancies can have a larger effect on the overall perf/W score.
Power (idle subtracted) | CB R23 ST (perf/W, W) | CB R23 MT (perf/W, W) | Witcher 3 (W only)
M2 MacBook Pro | 219.6 pts/W, 7.2 W | 385.8 pts/W, 22.67 W | 31.8 W
M2 Mini | 213.8 pts/W, 7.7 W | 414.6 pts/W, 21.16 W | 26.3 W

The Witcher 3 results show the biggest gap, about 20%, but in favor of the Mini, not the laptop (the M2 Air did better than both on The Witcher 3, though it has to be said it may have throttled; it was worst on CB R23 MT and best on CB R23 ST). If the MacBook Pro's battery were significantly impacting these results, that shouldn't be the case; the Mini should always come out less efficient. Now obviously I can't rule out that every laptop behaves itself under every circumstance, but I don't often see anything that stands out to me. Of course it *could* happen, and it's always something to watch out for when testing a device on battery. To make extra sure you could test laptops with the battery removed (that used to work, anyway), but I don't know anyone who does that.
I feel this has been gone over here many times. I think we’ll just have to accept that there isn’t going to be a consensus on this. Wall measurements are fine for system power, but I don’t believe they can tell us much about the CPU cores, which is what I am interested in.

I’ll also leave it here personally as I have no desire to get into the kind of fiasco that happened last time I discussed it here.
 

The Adreno GPU and the Dimensity GPU outperform Apple's A18 Pro; both have 12 cores and, my guess is, are clocked similarly (and unlike the rumors that the Dimensity would be clocked at 1.6 GHz, it must be closer to 1 GHz in practice). They went with wider and slower designs: better performance, better efficiency, more expensive. Interestingly, the Adreno GPU outperforms the Dimensity in OpenCL, which may have been the case last generation too.
I continue to be confused by GPU comparisons on mobile chips. How can they quote OpenCL performance for the Elite without mentioning that the A18 Pro does better? OK, it loses on Steel Nomad Light, but that doesn’t test the more demanding aspects of the GPU that the A18 will thrive on. I’d also wager the game they tested runs at higher quality graphics and resolution on the A18 Pro. How can it be said to have been crushed?

Weird.
 