So after writing my response
here about the RE:V results, I've got a new hypothesis. It's similar to the old one but with a small addendum. First off, the Qualcomm GPU on the phone SoC is just bigger than the iPhone's and is indeed geared towards simple mobile gaming, as
@leman went through above. This lets it win on graphics benchmarks, both intensive and light, while getting crushed in compute or in tasks that contain compute. That it's a bigger GPU is the main thing we're seeing here in
@Jimmyjames's chart. Like
@leman I've had difficulty finding exact specs for the 750, but we see that in the
chipsandcheese* and
NotebookCheck articles the cut-down 3.8 TFLOPS X1 is only about 20% faster than the 750 across the benchmarks, whereas the M3/M4 almost doubles (or sometimes more than doubles) its equivalent iPhone's scores. So the 750 is just a physically bigger, more powerful GPU than what's in the iPhone, and most of the time that's simply unassailable for parallel tasks ... unless your microarchitecture is as bad as Qualcomm's is for compute.
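As a back-of-envelope check (my own arithmetic, and it assumes benchmark scores scale roughly linearly with GPU throughput, which is only approximately true), the numbers above imply the 750 performs like a ~3.2 TFLOPS part:

```python
# Rough implied throughput for the Adreno 750, assuming (big assumption!)
# that benchmark scores scale linearly with TFLOPS.
x1_tflops = 3.8      # cut-down X1 GPU, per the articles
x1_advantage = 1.20  # X1 is only ~20% faster than the 750 in these benchmarks

implied_750_tflops = x1_tflops / x1_advantage
print(f"Implied Adreno 750 effective throughput: {implied_750_tflops:.2f} TFLOPS")
```

Which, if even roughly right, would put the 750 well above the iPhone GPUs it's being compared against.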
So why doesn't the Adreno GPU in the 8 gen 3 do better in games, including mobile games? I've been thinking about that and I think one part of this conundrum might also be the CPU.
It's true that light graphics benchmarks can become CPU-limited too: the GPU completes its work so quickly that the bottleneck becomes the CPU sending commands to the graphics pipeline. For a game on an iPhone/QC device rendered at low resolution, that problem becomes even more pronounced because the CPU has a lot more work to do. There's physics and object tracking and AI and all sorts of things the CPU is doing in a game that it isn't doing in a graphics benchmark. Some of that work can also be done as compute on the GPU - which we know Qualcomm GPUs suck at, so there's another reason. And the 8 gen 3 only has the older Arm cores, which are not great for ST/lightly MT tasks, especially compared to Apple's cores.

To test this hypothesis, we would need an iPhone and an 8 gen 3 Android device (and preferably an 8 gen 4 when it comes, since my hypothesis is that it should suffer less from this!) and then test the behavior of each at different graphics settings. Unfortunately, as covered extensively already, mobile games especially often don't have graphics settings, and when they do, those settings aren't the same between platforms. That makes this test difficult in practice. I did try testing the hypothesis with graphics benchmarks themselves, and fascinatingly it came out the opposite of what I expected: the lower the resolution or degree of difficulty, the better the Adreno 750 GPU did compared to the iPhone 15 Pro/16. I expected the benchmarks to become CPU-limited at some point and the power of Apple's CPU to take over, but that never really happened. You can see this in the
android authority article comparing Wildlife basic vs Extreme and here on
GFXbench comparing the different flavors of Aztec offscreen between the 750 GPUs and the iPhone 16. Either that, or for both Wild Life and Aztec Ruins the MT CPU score matters more to the graphics benchmark than the ST score, and there the two CPUs are closer. Again, this could be very different from an actual game!
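To make the CPU-limited intuition above concrete, here's a toy frame-time model. This is entirely my own illustration with made-up numbers, not data from any of the articles; it just assumes CPU and GPU work overlap, so the slower side sets the frame time:

```python
def frame_ms(cpu_ms: float, gpu_ms: float) -> float:
    """Per-frame cost when CPU and GPU work overlap: the slower side wins."""
    return max(cpu_ms, gpu_ms)

# Made-up per-frame CPU costs: a pure graphics benchmark mostly just submits
# draw calls, while a real game also runs physics, AI, object tracking, etc.
BENCH_CPU_MS = 2.0
GAME_CPU_MS = 8.0

for label, gpu_ms in [("heavy/high-res", 16.0), ("light/low-res", 3.0)]:
    bench_fps = 1000 / frame_ms(BENCH_CPU_MS, gpu_ms)
    game_fps = 1000 / frame_ms(GAME_CPU_MS, gpu_ms)
    print(f"{label}: benchmark {bench_fps:.0f} fps vs game {game_fps:.0f} fps")
```

In this toy model, at heavy settings both the benchmark and the game are GPU-bound and run the same; at light settings the benchmark keeps scaling with the GPU while the game hits a CPU ceiling. That's the effect I expected to see in the benchmarks and didn't.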
There is another possible explanation, related to the point about how games treat different platforms: iPhone is where the money is, and devs spend more time optimizing their mobile games for performance on iPhones than on Android. It's also easier to optimize for iPhone, as there are fewer chips and models. This possibility has been brought up a few times, especially with Genshin Impact, and it would be almost impossible for us to test.
So bottom line: the Adreno 750 is almost certainly bigger and more powerful than the GPU in the A17 Pro/A18 (Pro), so it's going to win on graphics benchmarks. For actual games, it's possible the CPU lets it down, it's possible its lack of compute ability lets it down in games that need compute, and it's possible devs optimize for iPhone more than for Android. Also, once we equalize things a bit - i.e. the X1 compared to the M3/M4, which have comparably sized GPUs - the latter wipe the floor with the X1 in both benchmarks and games (notebookcheck article and
M3 Air reviews). So all of the above?
*In my previous post on your RE:V thread, I mistakenly thought the chipsandcheese article had an Elite 84 chip with the 4.6 TFLOPS GPU, but it didn't - it only had the 80 with the 3.8, which chipsandcheese doesn't mention. The nice thing about that is the benchmark results are now comparable with notebookcheck, who didn't run Solar Bay. The less nice thing is that in the Cyberpunk section I thought the 8cs gen3 they compare the X1 to was the same as the 8 gen 3, forgetting that the 8cs gen3 had an even older Adreno 690 GPU, so that section is completely irrelevant. I forgot just how bad Qualcomm's laptop SoCs were prior to the X1.