A18 Pro … your thoughts?

Yeah, I like how he stressed that Apple doesn't design to win benchmarks; winning them is just a side effect.

We know they could likely push the chips in the Mac, iPad, etc. to go faster, but that would make for crappier products due to heat, battery life, etc.
I did a whole thread there about that a few years back and got slimed for my efforts:


The basic gist was this:

I actually got into a dispute with someone on another forum who claimed Apple went to their own silicon on iPhones and iPads purely to save money. I pointed out that, adding together their investments, Apple pays a LOT more for the SoCs in its iPhones than if they had (for example) used Snapdragons. So why do they accept this cost? Because they are not a seller of chips; they sell complete devices and their user experiences. If people pay their price for those experiences, then they are successful.

How does this translate to the Mac? Basically it means Apple couldn't care less about benchmarks or how many TFLOPS their GPU block processes. What they care about is how fast and smooth the user experience is; that a game runs with no chop and with the correct acceleration; that they deliver real hardware security and things like Face ID; and that the user is not annoyed by a noisy machine and such. Going to their own silicon lets them go down this path.
 
Speaking of not winning benchmarks as an aim...

The iPhone 16 Pro/Max reviews are out, and with them come some benchmark scores for the CPU/GPU.

The one from Tom's Guide stood out to me due to the continuing curious conundrum of Qualcomm GPUs vs Apple GPUs. Most real-life tests show Apple's GPUs to be better in terms of performance and features. I had thought the reason Qualcomm had done well on tests like GFXBench/Wildlife Extreme was their focus on simpler mobile gaming. Steel Nomad Light is a new test which is much more demanding than Wildlife Extreme etc., yet it still seems that QC is besting the iPhone here. Solar Bay is included and, once again, the iPhone loses. The putative 2x ray tracing doesn't manifest here, although, as has been discussed, Solar Bay isn't a particularly RT-intensive test.

(attached screenshot: Tom's Guide CPU/GPU benchmark chart)


I wonder if QC are optimising for benchmarks, or if something else is happening?

Edit: RT still remains a mystery, but going back and reading @dada_dave and @leman here and at the other place has revivified my memory!
 
He reiterated a point I've raised before - Apple doesn't give a rat's ass about designing chips to win artificial benchmarks, because they don't have to sell chips. Other than that, my main thought was that they must train these guys at Apple University on how to respond. Engineers don't naturally answer questions the way Srouji did.

I've had the opportunity to be one of the engineers behind the scenes of a keynote demo, and the one thing I can probably get away with saying is just how detail oriented and methodical the whole process was.

I would not be surprised if this was agreed in advance and Srouji did prep/training with someone in the days leading up to the interview. It strikes me as a very Apple way to approach it. Interviews are PR, and a keynote speech or public interview that will get a lot of scrutiny isn't that different than a stage performance. Plan, practice, execute.
 

From what I understand, Adreno has a shader layout that works extremely well for mobile games. It is not entirely clear to me how many shader partitions the 750 in the S24 Ultra contains, but the 730 has 2048 FP16 units and Oryon's X1 has 3072 of them (so the 750 is likely somewhere in the middle). The A18 Pro, in contrast, has only 768 FP16 units. I am specifically mentioning FP16 units here because that's what mobile graphics pipelines will use most of the time. Adreno has twice as many FP16 units as FP32 units, so the FP32 disparity is smaller, but it is still heavily weighted in Qualcomm's favor.
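To put rough numbers on that, here's a quick back-of-the-envelope calculation (theoretical peak FP16 = units x 2 ops per FMA x clock). The unit counts are the ones above; the clock speeds in the snippet are placeholder assumptions, not confirmed specs.

```python
# Rough theoretical FP16 peak: units * 2 (an FMA counts as 2 ops) * clock.
# Unit counts are from the post above; the clock speeds are ASSUMED
# placeholders for illustration, not confirmed figures.
def fp16_tflops(fp16_units: int, clock_ghz: float) -> float:
    return fp16_units * 2 * clock_ghz / 1000.0  # GFLOPS -> TFLOPS

print(fp16_tflops(2048, 0.9))  # Adreno 730 at an assumed ~0.9 GHz -> ~3.7 TFLOPS
print(fp16_tflops(768, 1.4))   # A18 Pro GPU at an assumed ~1.4 GHz -> ~2.2 TFLOPS
```

Even granting the Apple GPU a higher clock, the raw FP16 ALU count leaves Adreno well ahead on paper.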

Of course, there is no free lunch. In order to achieve such high compute density, Qualcomm sacrifices caches and execution granularity. Their register files are also allegedly small, resulting in a major loss of performance on slightly more complex shaders. The execution granularity issue is particularly problematic if they want to take advantage of their FP16 capabilities, as they have to execute a full 128-wide operation. Got less data? Your utilization will suffer. It matters less in graphics, where Qualcomm likely rasterizes and shades 8x16 fragment blocks at once, although utilization is still poor if you get smaller triangles, as is evident from the benchmarks (remember, Adreno is not a TBDR, so it can't do perfect shading dispatch like Apple). On paper, Adreno should have a massive advantage. In reality, its 2-3x compute lead translates to a mere 5-10% lead in synthetic graphics tests, pretty much zero lead in real-world games, and extremely poor performance on anything even slightly more demanding.
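As a toy illustration of the granularity point (assuming a fixed 128-wide wave, per the description above, and nothing about how Adreno actually packs work): lane utilization falls off quickly once a wave isn't full.

```python
import math

# Toy model of the execution-granularity issue described above: if the GPU
# always issues full 128-wide waves, a partially filled wave wastes lanes.
def lane_utilization(active_items: int, wave_width: int = 128) -> float:
    waves = math.ceil(active_items / wave_width)
    return active_items / (waves * wave_width)

for n in (128, 96, 40, 8):  # e.g. fragments from progressively smaller triangles
    print(f"{n:>3} items -> {lane_utilization(n):.0%} utilization")
# 128 -> 100%, 96 -> 75%, 40 -> 31%, 8 -> 6%
```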

IMO, it is a poor GPU architecture. It has been developed for simpler mobile games, and it works well enough there, but I just don’t see how it can be scaled to support the complex compute demands of modern software.
 
Many thanks for your fantastic answer!

I suppose part of my confusion is due to the belief that Steel Nomad Light was a more demanding test than GFXBench etc. It's possible that it is more demanding and yet still simple enough to allow QC GPUs to perform well.
 
So after writing my response here about the RE:V results, I've got a new hypothesis. It's similar to the old one but with a small addendum. First off, the Qualcomm GPU on the phone SoC is just bigger than the iPhone's and is indeed geared towards simple mobile gaming, as @leman went through above. This lets it win on graphics benchmarks, both intensive and light, while getting crushed in compute or tasks that contain compute. That it's a bigger GPU is the main thing we're seeing here in @Jimmyjames's chart. Like @leman, I've had difficulty finding exact specs for the 750, but we see in the chipsandcheese* and NotebookCheck articles that the cut-down X1 (3.8 TFLOPS) is only about 20% faster than the 750 in all the benchmarks, whereas the M3/M4 are almost (or sometimes more than) double their equivalent iPhones' scores. So the 750 is just a physically bigger, more powerful GPU than what's in the iPhone, and most of the time that's simply unassailable for parallel tasks ... unless your microarchitecture is as bad as Qualcomm's is for compute.

So why doesn't the Adreno GPU in the 8 gen 3 do better in games, including mobile games? I've been thinking about that and I think one part of this conundrum might also be the CPU.

It's true that even light graphics benchmarks can become CPU-limited: the GPU completes its work so quickly that the bottleneck becomes the CPU feeding commands to the graphics pipeline. For a game on an iPhone/QC device rendered at low resolution, though, that problem is even more pronounced because the CPU has far more work to do. There's physics and object tracking and AI and all sorts of things the CPU is doing that a graphics benchmark isn't. Some of those things can also be done as compute on the GPU, which we know Qualcomm GPUs are bad at, so there's another reason. The 8 Gen 3 also only has the older Arm cores, which are not great for ST/lightly-MT tasks, especially compared to Apple's cores.

To test this hypothesis, we would need an iPhone and an Android 8 Gen 3 device (and preferably an 8 Gen 4 when it arrives, since my hypothesis is that it should suffer less from this!) and test the behavior of each at different graphics settings. Unfortunately, as covered extensively already, mobile games especially often don't have graphics settings, and those settings aren't the same between platforms, which makes this test difficult in practice.

I did try testing this hypothesis with graphics benchmarks themselves, and fascinatingly it came out the opposite of how I expected: the lower the resolution or degree of difficulty, the better the Adreno 750 did compared to the iPhone 15 Pro/16. I expected it to become CPU-limited at some point and the power of the CPU to take over, but that never really happened. You can see this in the Android Authority article comparing Wildlife basic vs Extreme, and here on GFXBench comparing the different flavors of Aztec offscreen between the 750 GPUs and the iPhone 16. Either that, or for both 3DMark Wildlife and Aztec Ruins the MT CPU score matters more for the graphics benchmark than the ST score, and there the two CPUs are closer. Again, this could be very different for an actual game!
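A crude way to see the argument (a sketch only, not a claim about how any particular game or benchmark is implemented): model per-frame time as max(CPU time, GPU time), with only the GPU cost scaling with resolution. All numbers below are invented.

```python
# Crude frame-time model: a frame takes as long as the slower of the CPU and
# GPU, and only the GPU cost scales with resolution. Numbers are made up.
def fps(cpu_ms: float, gpu_ms_native: float, resolution_scale: float) -> float:
    gpu_ms = gpu_ms_native * resolution_scale  # GPU cost ~ pixels shaded
    return 1000.0 / max(cpu_ms, gpu_ms)

# Benchmark-like frame (tiny CPU cost): stays GPU-bound even at 1/4 resolution.
print(fps(cpu_ms=2.0, gpu_ms_native=20.0, resolution_scale=0.25))   # 200 fps
# Game-like frame (physics/AI/draw calls): hits a CPU wall at low resolution.
print(fps(cpu_ms=12.0, gpu_ms_native=20.0, resolution_scale=0.25))  # ~83 fps
```

Under that model a light benchmark keeps rewarding the bigger GPU as settings drop, while a real game at low resolution starts rewarding the faster CPU instead, which is what the hypothesis predicts.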

There is another possible explanation, related to the point about how games treat different platforms: the iPhone is where the money is, and devs spend more time optimizing their mobile games for performance on iPhones than on Android. It's also easier to optimize for iPhone as there are fewer chips and models. This possibility has been brought up a few times, especially with Genshin Impact, and it would be almost impossible for us to test.

So, bottom line: the Adreno 750 is almost certainly bigger and more powerful than the GPU in the A17 Pro/A18 (Pro), so it's going to win on graphics benchmarks. For actual games, it's possible the CPU lets it down, it's possible its weak compute lets it down in games that need it, and it's possible devs optimize for iPhone more than for Android. Also, once we equalize things a bit, i.e. comparing the X1 to the M3/M4, which have comparable GPU sizes, the latter wipe the floor with the X1 in both benchmarks and games (NotebookCheck article and M3 Air reviews). So, all of the above?

*In my previous post in your RE:V thread, I mistakenly thought the chipsandcheese article had an Elite 84 chip with a 4.6 TFLOPS GPU, but it didn't; it only had the 80 with 3.8 TFLOPS. Chipsandcheese said they had the 80 and did mention both clocks and core counts, but they separated them in the text and I missed the clock speed, my fault. The nice thing about that is that the benchmark results are now comparable with NotebookCheck, who didn't do Solar Bay. The less nice thing is that in the Cyberpunk section I thought the laptop-class 8cx Gen 3 they compare the X1 to was the same as the 8 Gen 3, forgetting that the 8cx Gen 3 had an even older Adreno 690 GPU, so that section is completely irrelevant to our discussion here. I forgot just how bad Qualcomm's laptop SoCs were prior to the X1.
 
Another fantastic answer. Many thanks!
 
For anyone interested, Geekerwan have released their iPhone 16/Pro (A18 Pro) analysis. I haven't had a chance to watch it yet, but someone posted this screenshot, seemingly showing higher SPECint scores with more power draw. I am not even gonna touch on their power measurement methodology this time.
(screenshot: SPECint scores vs power draw)


One tidbit: it seems Apple has delayed the "boost mechanism" on the A18 to give better battery life. This has the effect of lowering scores on benchmarks such as Geekbench.

(screenshot: delayed boost behaviour on the A18)
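If the boost really is delayed, the effect on short tests falls out of simple arithmetic: a fixed ramp-up period eats a much larger share of a few-second Geekbench subtest than of a sustained run. The sketch below uses a simple step from base to boost clock with made-up numbers, not measured A18 behaviour.

```python
# Sketch: average clock over a test when the boost only kicks in after a delay.
# The two-step ramp and every number here are assumptions for illustration,
# not measured A18 behaviour.
def avg_clock_ghz(test_s: float, boost_delay_s: float,
                  base_ghz: float, boost_ghz: float) -> float:
    pre = min(test_s, boost_delay_s)          # time spent at the base clock
    post = max(0.0, test_s - boost_delay_s)   # time spent at the boost clock
    return (base_ghz * pre + boost_ghz * post) / test_s

print(avg_clock_ghz(3, 1.0, 2.4, 4.0))    # short subtest: ~3.47 GHz average
print(avg_clock_ghz(600, 1.0, 2.4, 4.0))  # sustained run: ~4.00 GHz average
```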


It seems, at least according to Geekerwan, that Apple's claim of 2x ray tracing performance is correct.

(screenshot: ray tracing benchmark results)

 

I'll watch this, but I want to see curves, not points. What's important is where the chip operates on the curve - the left, right, or center of the knee. Optimum is right on the knee.
 
Unfortunately it's a little harder to manipulate iPhone/Mac clock speeds to get curves. EDIT: although wait a minute ... didn't @leman do that?

EDIT2: yes he did: https://techboards.net/threads/power-curves-of-a17.4348/ ... huh ... although it isn't obviously running SPEC at those different frequencies and measuring output, which is, I think, what you really want. Even so, it's been a while since I looked through the code. @leman, how did you get the core to run at different frequencies at all?
 
@leman how did you get the core to run at different frequencies at all?

Natural charisma and personal charm?

I simply observed the power and frequency counters over time. The frequency fluctuates naturally in response to various conditions. Testing a single thread while introducing additional work elsewhere already gives you a range of frequencies. Testing multi-core operation gives you another range of frequencies.

For my purpose, which was building frequency/power curves, this worked rather well. Of course, you can’t really use this method to measure performance, precisely because frequency varies over time.
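A minimal sketch of that binning approach, assuming you already have some way of sampling (frequency, power) pairs; the sample list below is fabricated, and reading the actual counters is left out since those interfaces aren't public or stable.

```python
from collections import defaultdict

# Bin (frequency, power) samples taken while the load varies, then average
# power per frequency bin to get a frequency/power curve.
def power_curve(samples, bin_mhz=100):
    bins = defaultdict(list)
    for freq_mhz, power_w in samples:
        bins[round(freq_mhz / bin_mhz) * bin_mhz].append(power_w)
    return {f: sum(p) / len(p) for f, p in sorted(bins.items())}

# Fabricated samples standing in for whatever reads the hardware counters.
samples = [(3204, 4.1), (3188, 4.0), (2612, 2.3), (2599, 2.2), (1400, 0.6)]
print(power_curve(samples))  # {1400: 0.6, 2600: 2.25, 3200: 4.05}
```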
 
Ah I see. Got it, thanks.
 
So after writing my response here about the RE:V results, I've got a new hypothesis. It's similar to the old one but a small addendum. First off the Qualcomm GPU on the phone SOC is just bigger than the iPhone's and indeed is geared towards simple mobile gaming, as @leman went through above. This lets it win on graphics benchmarks both intensive and light while getting crushed in compute or tasks that contain compute.
I think the philosophical difference here is that Apple are building their GPU architecture to be reused across the stack from watch to Mac Pro / Cloud compute server and Qualcomm are building to try and win the mobile space.

Not only will the Qualcomm chip likely get crushed on compute, it will also likely get crushed on general desktop workloads vs. something like the iPad, which has a GPU built with scaling to high-end desktop or cloud compute in mind.

I don't see this changing until or unless Qualcomm makes a proper push into cloud computing or high-end desktop. Right now the market for their product is low-cost, low-power devices, where Windows on x86/x64 totally sucks; partially due to the processors, but mostly due to the 40 years of legacy baggage inside Windows and the fact that peripheral drivers have been written by the lowest bidder with little thought to proper testing against things like sleep, power consumption, etc.

Breaking compatibility with legacy peripheral drivers, and having most things built into the SoC so that Microsoft can have a trusted developer build drivers and firmware for them, is a side effect of the architecture change, but it is probably one of the biggest opportunities for Microsoft to fix Windows properly, at least as far as efficiency and reliability go.

But, PC land being PC land, shitware drivers will inevitably creep in (due to lowest bidder for software point above) so I don't hold much hope.
 
I think you're probably right, but that has roughly no bearing on smartphones, and probably not that much on thin/light Windows-on-Arm laptops, because of their typical use cases (which will rarely involve peripherals and the associated shitty drivers). In that way, QC's strategy here is reasonably savvy.

Where they *may* fall down is with PC gaming. If it turns out that modern PC gaming is demanding enough that their GPU can't reasonably handle it, that's bad - but maybe not bad enough to matter. After all, to a first approximation that's been true for Macs for many years, though they are maaaaybe showing some signs of life now. And yet as a product they're doing fine, because it turns out there's plenty of people interested in a thin/light laptop that has great battery life and performance, even if it's not useful for gaming.

And if smartphones are running less demanding code, then that's even better - they can probably compete in that category, even if (and this remains to be seen) real-world performance doesn't live up to the benchmarks, and they don't measure up to current iphone standards.

I don't expect them to beat Apple significantly in most ways. But they may well be good enough.
 

View attachment 31427

In Steel Nomad, both A18 models beat the Adreno 750 GPU.
Interesting … then there's an inconsistency with the results @Jimmyjames posted from Tom's Guide, which showed the A18 Pro losing to the Adreno 750 on Steel Nomad Light.

Edit: This isn’t them cooling it with liquid nitrogen is it?

EDIT2: Doesn't appear to be. And the 8 gen 3 scores are pretty much the same. Huh. They also have the A17 Pro beating the 8 gen 3 in Steel Nomad Light too.

EDIT 3:



Tom's S24 Ultra result is a little higher than the S24+ result above but not out of the range of other 8 gen 3 results:


The Xiaomi 14 Pro result, which should be the same as in the video:


which is a little lower than the S24+/Ultra results from UL/Tom's.

No data on UL's website for the iPhone 16 (Pro) yet, but this is more in agreement with Tom's than with Geekerwan, whose results show the iPhone 15 Pro beating the Xiaomi 14 Pro. The A18 Pro should be about 10-15% faster than the iPhone 15 Pro above. Tom's has it about 12% faster, which makes sense and would indeed place it behind the 8 gen 3 and Dimensity 9300+ - again, those latter scores Geekerwan agrees with. I'd say it looks likely that Geekerwan screwed something up here, but only with their iPhones somehow ... not sure.

Now I'm curious how Geekerwan gets the GPU to adopt three different performance points ... there's only Low Power Mode, right?

EDIT 4: Geekerwan's FPS numbers line up with Tom's Guide/UL though - for the A18 Pro, Tom's got ~11.6 and Geekerwan got 11.63; and for the A17 Pro, Geekerwan says 9.47 FPS while UL says 9-10 FPS. So why are the scores so different?
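One way to chase this down (just a sketch, with placeholder scores since I don't have every published number in front of me): for the same 3DMark test, the score should scale with the reported average FPS, so wildly different score/FPS ratios between outlets would point at a scoring or reporting problem rather than a real hardware gap.

```python
# Sanity check: for the same test, score should track average FPS, so the
# score/FPS ratio ought to be roughly constant across sources.
# The scores below are PLACEHOLDERS, not actual published numbers; only the
# FPS values come from the discussion above.
results = {
    "Tom's Guide A18 Pro": (1450, 11.6),   # (hypothetical score, reported FPS)
    "Geekerwan A18 Pro":   (1160, 11.63),  # (hypothetical score, reported FPS)
}
for source, (score, fps) in results.items():
    print(f"{source}: {score / fps:.1f} points per fps")
# If the ratios diverge while the FPS agree, the discrepancy is in the
# scoring/reporting, not the hardware.
```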
 