Nuvia: don’t hold your breath

> I’m not stating it’s a failure. I’m stating Qualcomm came out guns blazing about beating Apple Silicon on performance, and even beating the M3, and that is unproven. If they wanted to be judged on a final product, they should have waited until they released one.
>
> If they are free to use suspect pre-release products for marketing, then why wouldn’t we be able to criticise those numbers? Your response seems like an overreaction. The press has been nothing but fawning over these as-yet-unreleased and untested chips.
>
> I reject the narrative that they, having started this nonsense, should be free from its consequences. I also don’t see why a consumer should care that Windows scores lower on Geekbench. That is the main product they are releasing.

They only claimed it would beat M2 to my knowledge, and all the info we have suggests… it should beat M2 (efficiency etc. remains to be seen).

I remember Apple claiming M1 Ultra could match the fastest Nvidia card of the time and we know how that played out 🤪

I think it’s important to acknowledge the performance deficit of Geekbench on Windows aarch64 when it’s being used to create a narrative. It’s one data point.
 
> Do you have a sense if this is a GB or Windows issue? Is this true with other benchmark software?
I think it’s exclusively a Windows aarch64 issue. Performance is pretty even across Linux and Windows on x86 machines these days.
I don’t know why Windows aarch64 performs worse - something about the Windows aarch64 environment, different compiler tuning… no idea.
Edit: I can try Cinebench on Windows aarch64 tomorrow.
 
> They only claimed it would beat M2 to my knowledge, and all the info we have suggests… it should beat M2 (efficiency etc. remains to be seen).
No, they also claimed it beat the M3.

Which info suggests it beats the M2? The marketing from last year’s event, sure. The recent scores, not so much.
> I remember Apple claiming M1 Ultra could match the fastest Nvidia card of the time and we know how that played out 🤪
iirc, it was close to the then top of the line 3090 in gfxbench.
> I think it’s important to acknowledge the performance deficit of Geekbench on Windows aarch64 when it’s being used to create a narrative. It’s one data point.
If Qualcomm are making claims they can’t back up with actual data from real computers, I don't feel any need to acknowledge any Geekbench deficiencies. That’s the ground they chose.
 
> No, they also claimed it beat the M3.

“Chipmaker Qualcomm has claimed that its new Snapdragon X Elite PC processor is 21% faster than Apple's latest M3 chip in multi-core performance” is possible if we’re talking X Elite 80W vs. a base M3. Maybe it’s not worth investing so much time and energy into vague hand-wavy statements, hmm

> Which info suggests it beats the M2?
The October Linux tests *and* the recent leaks accounting for the Windows performance deficit.

> iirc, it was close to the then top of the line 3090 in gfxbench.
Great. The chart didn’t state that though, did it. It just said “GPU Performance vs. Power” and claimed the same performance with 200W less power. It was both misleading and a stupid own goal, IMO.

> If Qualcomm are making claims they can’t back up with actual data from real computers, I don't feel any need to acknowledge any Geekbench deficiencies. That’s the ground they chose.
They chose to show Linux results! Everything else is leaked.
 
> “Chipmaker Qualcomm has claimed that its new Snapdragon X Elite PC processor is 21% faster than Apple's latest M3 chip in multi-core performance” is possible if we’re talking X Elite 80W vs. a base M3. Maybe it’s not worth investing so much time and energy into vague hand-wavy statements, hmm
Andrei himself said there is no such thing as X Elite 80W so I don’t know. The point is they claimed it.
> Which info suggests it beats the M2?
> The October Linux tests *and* the recent leaks accounting for the Windows performance deficit.
The October Linux tests that got those scores as a result of a non-functioning fan control system? No, they get no credit for those.
> iirc, it was close to the then top of the line 3090 in gfxbench.
> Great. The chart didn’t state that though, did it. It just said “GPU Performance vs. Power” and claimed the same performance with 200W less power. It was both misleading and a stupid own goal, IMO.
The chart said a 3090 but not gfxbench. It’s not a ground I’m willing to defend Apple on, however. The charts were abysmal and they shouldn’t be using them.
> If Qualcomm are making claims they can’t back up with actual data from real computers, I don't feel any need to acknowledge any Geekbench deficiencies. That’s the ground they chose.
> They chose to show Linux results! Everything else is leaked.
Pardon? The marketing event was literally overflowing with Windows charts and numbers. Linux was the asterisk they used to sneak their bs ~3200 GB 6 score in
[Screenshots of slides from Qualcomm’s Snapdragon X Elite event attached]
 
> Andrei himself said there is no such thing as X Elite 80W so I don’t know. The point is they claimed it.

> The October Linux tests that got those scores as a result of a non-functioning fan control system? No, they get no credit for those.

> The chart said a 3090 but not gfxbench. It’s not a ground I’m willing to defend Apple on, however. The charts were abysmal and they shouldn’t be using them.

> Pardon? The marketing event was literally overflowing with Windows charts and numbers. Linux was the asterisk they used to sneak their bs ~3200 GB 6 score in
They included Windows too, yes. We will have to wait and see, but I suspect the full-speed fans only helped a little on the multi-core score and provided no benefit to single core. That’s just my guess based on my experience with Geekbench. It’s bursty. Each subtest only lasts a short time - my silly 14700K box only hits 300W a few seconds at a time during the multi-core tests, for example.

By 80W I mean the 4.3GHz boost/3.8GHz base 80W “Device TDP” test system. It’s in the Anandtech article you pulled those slides from.
 
Anyway, you guys do you

I share your frustrations about Apple coverage. I get irritated by the wild claims, conspiracy theories, misinformation etc. that surrounds Apple Silicon too. It was agonising waiting for Apple Silicon to launch so I could finally say “told you so!” to all the naysayers. I suppose this is why I’m predisposed to question narratives more directly these days - I crave frank, no BS discussions.

If Snapdragon X Elite ends up shit you’ll see me poking fun at it on mastodon later this year.
 
> Anyway, you guys do you
>
> I share your frustrations about Apple coverage. I get irritated by the wild claims, conspiracy theories, misinformation etc. that surrounds Apple Silicon too. It was agonising waiting for Apple Silicon to launch so I could finally say “told you so!” to all the naysayers. I suppose this is why I’m predisposed to question narratives more directly these days - I crave frank, no BS discussions.
>
> If Snapdragon X Elite ends up shit you’ll see me poking fun at it on mastodon later this year.

A few things.

Firstly I don’t want to come across in an aggressive fashion, and if I have done so, I apologise. That’s not my intention.

Secondly, I do think it will be great if the X Elite provides some competition to Apple Silicon. We’ve seen CPU improvements that seem to be largely based on node shrinks and frequency increases since the M1. If Apple gets pushed, then so much the better for all.

Lastly, my frustration is to do with the talk so far in advance of actual products, along with marketing that takes up silly positions. This applies to Apple’s GPU marketing on the M1 Ultra, but the Qualcomm event was some of the most hilarious stuff: standing on stage and congratulating themselves on outdoing everyone on their first attempt. This, combined with media willing to praise any score at all, is quite galling.

We’ll see how they do, but I’m not that optimistic for the higher end scores they’ve been boasting of.
 
> Here’s a new benchmark result. This time for the Samsung Book 4 Edge. It’s equal to an M2 in single core.

Nice, ~3.6% higher single core score than the best result I got from an M2 Pro Mac mini https://browser.geekbench.com/v6/cpu/compare/5220819?baseline=3380885

2785 points single core might be about 3064 points under Linux (assuming the -10%ish Windows score penalty I've observed applies here). Base M3 territory.
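
For what it’s worth, here’s the back-of-the-envelope adjustment I’m applying - just a rough sketch, and the ~10% Windows-vs-Linux Geekbench gap is only my own observation, not an official figure:

```python
# Rough estimate: scale a Windows-on-Arm Geekbench 6 single-core score up by the
# ~10% deficit I've seen vs. Linux on the same hardware. Back-of-the-envelope only.
windows_penalty = 0.10   # assumed ~10% lower scores under Windows aarch64 (my observation)
windows_sc = 2785        # the leaked Samsung Book 4 Edge single-core result
estimated_linux_sc = round(windows_sc * (1 + windows_penalty))
print(estimated_linux_sc)  # 3064 - roughly base M3 single-core territory
```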

It appears to be the 4GHz boost SKU too, not the fastest 4.3GHz boost model.

Interesting stuff 🙂
 

> Here’s a new benchmark result. This time for the Samsung Book 4 Edge. It’s equal to an M2 in single core.
That puts it comfortably behind a 12-core M2 Pro in MC (around 2%). Both are on N4, so that makes the comparison level. The M2, though, has 4 E-cores while the X Elite, AIUI, is a big-P fest.

Though, I am not clear on the fab. Is it TSMC N4 or Samsung N4? It would be kind of funny if Samsung was selling a notebook with a TSMC-made SoC in it.
 
Interesting indeed. I’m curious about the difference in power consumption. I guess it comfortably beats x86_64 though; it should therefore make for a competitive Linux/Windows machine.
 
There are a couple of new videos out about the X Elite. The first one, from MaxTech, goes over some benchmark numbers. They boast of their NPU getting 45 TOPS (unspecified precision) vs the M3’s 18. Still seems slippery to me. Also, it seems like they are hinting there will be no 80W version. Not surprising.


 
Nice to see Qualcomm supporting all of these, not just DirectX:
“Qualcomm says it has Adreno GPU drivers for DX11, DX12, Vulkan, and OpenCL and will also support DX9 and up to OpenGL 4.6 via mapping layers.”

After this I am feeling positive. Now, for the reviews in June.
 
Interesting info if true. I didn’t see it referenced on a quick glance at the article, but perhaps I missed it? I usually trust Colin’s judgement.

[Screenshot of Colin Cornaby’s comment attached]

I do not understand Cornaby’s comment. Qualcomm has its own GPU architecture, which is a TBR rasterizer with small tiles. It’s not TBDR and never was. And they never used PowerVR IP?
 