Oh, and as far as I can tell, GB 6 is using standard mainline Clang anyway:

Snapdragon 8 Elite fails to match A18 Pro in CPU efficiency, Dimensity 9400 lags even further behind
Apple's A18 Pro lags behind the Snapdragon 8 Elite in multi-core CPU performance. However, it does manage to reign supreme in single-core performance, while also being notably more efficient than both the Snapdragon 8 Elite and the Dimensity 9400 in CPU tests.
www.notebookcheck.net
The user comment in this one is new (to me) ... I've heard Apple's version of Clang described as quirky, given that it rarely matches a single mainline Clang version, but it has never been described as super-optimized relative to standard Clang. I've certainly never found it to be so. In general, my understanding and limited experience is that GCC will tend to produce faster code than Apple's Clang, and occasionally so will mainline Clang, because sometimes Apple's Clang can be a little more out of date than what you can get off the mainline branch. Obviously, most of the time they'll be pretty similar. EDIT: Okay, I mean, I think there are a couple of defaults that are different, and those can produce different, maybe more optimized, results if you don't turn similar flags on for standard Clang, but none of them are close to what Intel's ICC does, especially for SPEC.
Yeah ... I wasn't sure about what he meant, but he also seemed to mention performance issues too. So ...

Yes, it's hard to say exactly what he means. I took it to be a complaint about their data for power. I would have thought it's more straightforward to gather performance data. Of course I could be wrong!
Generally I'm fairly pleasant to disagree with. But part of that is that I definitely don't push when the other party wants to stop, and I respect that choice.

We just don't know. That's the problem. In this case there is some correlation between laptop and desktop measurements, but I've seen others where there isn't. In the case of single-core measurements, where they are already efficient, small discrepancies can have a larger effect on the overall ppw score.
I feel this has been gone over here many times. I think we’ll just have to accept that there isn’t going to be a consensus on this. Wall measurements are fine for system power, but I don’t believe they can tell us much about the cpu cores, which is what I am interested in.
I’ll also leave it here personally as I have no desire to get into the kind of fiasco that happened last time I discussed it here.
To be fair, I *think* the graphics benchmarks done here are more similar between the two these days, but you never know. Strictly speaking, the Dimensity and Adreno should often outperform the A18 Pro given their raw stats. TBDR helps, but I doubt it can overcome a potential 20% FP32 throughput deficit (maybe match it, depending) - assuming 12 cores, 128 FP32 units per core, at 1 GHz (Dimensity/Adreno) vs 6 cores, 128 FP32 units per core, at 1.6 GHz (A18 Pro) - though of course specific game and driver optimizations can matter more than both, and double the GPU cores clocked lower should have better efficiency stats too. That neither of the competing GPUs actually outperforms the A18 Pro in OpenCL* (where TBDR is of no help, I should add) is ... as always ... fascinating.

*Well, strictly speaking, OpenCL only on the M-series, since I think the phones only have Metal scores these days, but we can extrapolate, and at least the Adreno also competes (woefully) against the base M GPU.

I continue to be confused by GPU comparisons on mobile chips. How can they quote OpenCL performance for the Elite without mentioning that the A18 Pro does better? OK, it loses on Steel Nomad Light, but that doesn't test the more demanding aspects of the GPU that the A18 will thrive on. I'd also wager the game they tested has higher quality graphics and resolution on the A18 Pro. How can it be said to have been crushed?
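As a sanity check, the back-of-envelope FP32 throughput comparison above can be written out. All core counts, unit counts, and clocks here are the post's assumptions, not confirmed specs, and the usual 2 FLOPs/FMA/lane/cycle convention is assumed:

```python
def fp32_tflops(cores, fp32_units_per_core, clock_ghz):
    # Peak FP32 = cores * lanes per core * 2 FLOPs per FMA * clock (GHz) / 1000
    return cores * fp32_units_per_core * 2 * clock_ghz / 1000

# Assumed configs from the post (hypothetical, not confirmed specs)
competitor = fp32_tflops(12, 128, 1.0)   # Dimensity/Adreno guess
a18_pro = fp32_tflops(6, 128, 1.6)       # A18 Pro guess

deficit = 1 - a18_pro / competitor
print(f"{competitor:.2f} vs {a18_pro:.2f} TFLOPS; A18 Pro deficit = {deficit:.0%}")
# -> 3.07 vs 2.46 TFLOPS; A18 Pro deficit = 20%
```

So under those assumptions the roughly 20% deficit figure checks out.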
Weird.