PC emulation on Android vs iPhone.

Jimmyjames

Site Champ
Joined
Jul 13, 2022
Posts
996
In the last few months, there has been a rise in PC game emulation on Android phones. A Twitter account recently posted a comparison of Resident Evil Village running on the Snapdragon 8 Gen 3 (via emulation) versus the native iPhone version. They claim superior visuals on the Android device and thus that Qualcomm has the better GPU: effectively, they claim Qualcomm chips can run full PC games better than the native iOS ports of those same games.

I'd be very interested in more knowledgeable members' views on this. I know Qualcomm GPUs vs Apple is a well-worn topic here, but it would be interesting to get some opinions. There are some screenshots below, along with a link to the video. The first thing that jumped out at me is the slight resolution difference.

YouTube: https://t.co/u8hAM3GrS2


[Attached: five comparison screenshots]
 
From the description in the video:

Even without an fps counter, you can still see that the game isn't smooth once you reach the actual village. I'd estimate it drops to 20 fps in open areas.

What is the iPhone's average FPS? I would imagine some of those visual compromises are to improve FPS both average and 1%, but maybe the iPhone isn't any better?

As @casperes1996 says, the performance of the Adreno GPU is impressive, especially under emulation. But from what I can tell, it is basically the same size GPU as Qualcomm put in their laptop SoC, running on a device with more memory, active cooling, and, yes, a slightly lower resolution than the iPhone. We know what happens when those things equilibrate: the Qualcomm GPU underperforms most of the time. So I personally wouldn't describe the Adreno GPU as "better" than the iPhone GPU, but it is certainly bigger. Depending on your definition, bigger can be better, but even then it's a specific form of better. As you alluded to already, we've covered extensively that it isn't more "advanced".
 
Weren't there comparisons between iPhone graphics and the same game on one of the newest Snapdragons, where the iPhone's graphics looked better?
If this emulation has better graphics, then that would primarily mean that Android games scale badly, or are made for the lowest common denominator, without high-res textures for more capable SoCs.

Those graphics shown are definitely impressive, but the video description mentions that the FPS drops to around 20 once you reach the village. I'm guessing that the iPhone port tries to achieve a more playable frame rate.

So what is actually used here is Box64, which lets you run x86-64 Linux applications on ARM64 Linux. I'm guessing the application running under it is either Wine or Proton (which is based on Wine).

That means this is similar to running a game in CrossOver (which is still an Intel app) through Rosetta 2 on an Apple Silicon Mac. I'm not totally sure how performant RE: Village would be that way. With the integration of Apple's Game Porting Toolkit, performance has certainly gotten better.

I was a bit apprehensive about the future of solutions like CrossOver, should Apple decide to drop Rosetta 2, but it seems that some of the open-source solutions have a very capable ARM64 dynarec as well.
Rosetta 2 cannot play its trump card for Windows applications anyway: while CrossOver itself is statically recompiled to ARM64 code, all Windows applications running through it have to be translated on the fly.
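For intuition, the core trick shared by a dynarec like Box64's and Rosetta 2 is translating each guest code block once and caching the native result, so the expensive translation cost is paid only on first execution. A toy sketch of that caching structure (all names hypothetical; real translators emit machine code, not Python callables):

```python
# Toy dynamic-recompiler sketch: translate a guest "block" once,
# cache the translated result, and reuse it on every re-entry.

def translate_block(guest_block):
    """One-time, expensive translation of a guest code block."""
    ops = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2}
    fns = [ops[insn] for insn in guest_block]
    def native(x):
        # The "translated" block: runs without re-decoding guest code.
        for fn in fns:
            x = fn(x)
        return x
    return native

class Dynarec:
    def __init__(self):
        self.cache = {}        # guest address -> translated block
        self.translations = 0  # how many blocks were actually translated

    def run(self, addr, guest_block, x):
        if addr not in self.cache:
            self.cache[addr] = translate_block(guest_block)
            self.translations += 1
        return self.cache[addr](x)

vm = Dynarec()
for _ in range(1000):  # hot loop: translated once, executed 1000 times
    result = vm.run(0x1000, ["inc", "dbl"], 5)
print(result, vm.translations)  # prints: 12 1
```

The point of the sketch is the last line: a hot game loop hits the translation cache almost every time, which is why translated code can get close to native speed once warmed up.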
 
From the description in the video:



What is the iPhone's average FPS? I would imagine some of those visual compromises are to improve FPS both average and 1%, but maybe the iPhone isn't any better?
It seems higher from MrMacRight's review here: at 1536x720 it was between 30-40 fps. I don't know how the settings would match up in terms of quality.
As @casperes1996 says, the performance of the Adreno GPU is impressive, especially under emulation. But from what I can tell, it is basically the same size GPU as Qualcomm put in their laptop SoC, running on a device with more memory, active cooling, and, yes, a slightly lower resolution than the iPhone.
Interesting, thanks. For some reason I thought it was the GPU in the S24 etc.
We know what happens when those things equilibrate: the Qualcomm GPU underperforms most of the time. So I personally wouldn't describe the Adreno GPU as "better" than the iPhone GPU, but it is certainly bigger. Depending on your definition, bigger can be better, but even then it's a specific form of better. As you alluded to already, we've covered extensively that it isn't more "advanced".
Indeed!
 
Weren't there comparisons between iPhone graphics and the same game on one of the newest Snapdragons, where the iPhone's graphics looked better?
If this emulation has better graphics, then that would primarily mean that Android games scale badly, or are made for the lowest common denominator, without high-res textures for more capable SoCs.

Those graphics shown are definitely impressive, but the video description mentions that the FPS drops to around 20 once you reach the village. I'm guessing that the iPhone port tries to achieve a more playable frame rate.

So what is actually used here is Box64, which lets you run x86-64 Linux applications on ARM64 Linux. I'm guessing the application running under it is either Wine or Proton (which is based on Wine).

That means this is similar to running a game in CrossOver (which is still an Intel app) through Rosetta 2 on an Apple Silicon Mac. I'm not totally sure how performant RE: Village would be that way. With the integration of Apple's Game Porting Toolkit, performance has certainly gotten better.

I was a bit apprehensive about the future of solutions like CrossOver, should Apple decide to drop Rosetta 2, but it seems that some of the open-source solutions have a very capable ARM64 dynarec as well.
Rosetta 2 cannot play its trump card for Windows applications anyway: while CrossOver itself is statically recompiled to ARM64 code, all Windows applications running through it have to be translated on the fly.
Thanks for the information on Box64.
 
It seems higher from MrMacRight's review here: at 1536x720 it was between 30-40 fps. I don't know how the settings would match up in terms of quality.
Aye, the quality settings are probably lower, but that's exactly why they're lower! Something has to give, it's a phone!
Interesting, thanks. For some reason I thought it was the GPU in the S24 etc.

It is, but depending on the source, the TFLOPs figure for the 8 Gen 3 is about 4, right? From our previous thread? That's roughly what is in the Elite as well, though I think the architecture is one generation older in the 8 Gen 3 than in the Elite. (Edit: Actually it might be the opposite. The 8 Gen 3, I think, has the Adreno 750, but the Elite's Adreno X1 is apparently the Adreno 741 internally, so the phone GPU might be newer than the Elite's. More proof that the Elite was delayed, IMO...) I can't remember. I was just remarking that the size of the GPU seems huge for a phone. (CPU Monkey claims the 8 Gen 3 SoC is only 2.2 TFLOPs for FP32 and only double that for FP16, but CPU Monkey makes mistakes and had a lot of missing info, so I'm going with our earlier thread.) And since the S24 doesn't have active cooling, it almost certainly thermally throttles fast. In fact, I think I had a link in one of the other Qualcomm threads where they showed exactly that.

Yeah here it is:


For what it's worth, the lower-clocked Adreno X1 (3.8 TFLOPs) seems to be ~20% faster than the Adreno 750 in benchmarks comparing the above to:


(It can be clocked up to 4.6 TFLOPs, but few reviewers seemed to get Elite 84 chips.)

For games, the difference can be more substantial (also a bigger variant): https://chipsandcheese.com/2024/07/04/the-snapdragon-x-elites-adreno-igpu/
See the Cyberpunk results, though CPU/memory bandwidth may come into play there as well.

More to the point, comparing 3DMark Wild Life Extreme, the M3 iGPU has a much bigger lead over the A17 Pro than the X1 has over the 750. And the M3/M4 basically crush the X1 in almost every test (unfortunately, you have to go to the M3 Air's individual review to get its game performance scores for some reason). And those are similar GPU sizes (the M3 GPU is roughly 3.6 TFLOPs).


Weren't there comparisons between iPhone graphics and the same game on one of the newest Snapdragons, where the iPhone's graphics looked better?
If this emulation has better graphics, then that would primarily mean that Android games scale badly, or are made for the lowest common denominator, without high-res textures for more capable SoCs.
Yeah, I think it was Genshin Impact. For RE, the better textures are probably because it is running in emulation, since the game wasn't designed to run on an Android device (i.e., it might be a little unfair to say Android games scale badly based on this, since this game wasn't designed for them at all). I strongly suspect that if it had been, they would've lowered the textures and so forth to make the game more playable!
Those graphics shown are definitely impressive, but the video description mentions that the FPS drops to around 20 once you reach the village. I'm guessing that the iPhone port tries to achieve a more playable frame rate.

So what is actually used here is Box64, which lets you run x86-64 Linux applications on ARM64 Linux. I'm guessing the application running under it is either Wine or Proton (which is based on Wine).

That means this is similar to running a game in CrossOver (which is still an Intel app) through Rosetta 2 on an Apple Silicon Mac. I'm not totally sure how performant RE: Village would be that way. With the integration of Apple's Game Porting Toolkit, performance has certainly gotten better.

I was a bit apprehensive about the future of solutions like CrossOver, should Apple decide to drop Rosetta 2, but it seems that some of the open-source solutions have a very capable ARM64 dynarec as well.
Rosetta 2 cannot play its trump card for Windows applications anyway: while CrossOver itself is statically recompiled to ARM64 code, all Windows applications running through it have to be translated on the fly.
One of my faint hopes is that if Qualcomm is successful in the PC market (yet to be determined, obviously), then maybe more games will be compiled for ARM Windows, which would at least remove that translation overhead for those games. Of course, an even fainter hope is that Apple will be successful in simply getting a larger fraction of the gaming market to release native, well-ported Mac games...
 
Thanks for the information on Box64.

If you want a bit more insight into how it works:

EDIT: Updated information on how the dynarec works:
 

So one thing about the Qualcomm phone/laptop GPUs is that they seem to follow the opposite strategy to Apple's GPU design. If the above report is accurate, the upcoming phone GPU will be 12 compute units (of presumably 128 FPUs each) clocked at 1 GHz; it should basically be the same size GPU as the Snapdragon Elite's, but clocked lower. In contrast, as far as I can tell, Apple clocks its GPUs at the same speed across its entire lineup and increases core counts as it goes up the lineup (and of course its lineup extends much further). This enables Qualcomm to run a much more powerful phone GPU at probably similar if not lower power. The downside (from Qualcomm's perspective) is that they have to spend more silicon die area to achieve this, so it's probably more expensive (while Apple's higher clocks mean it needs slightly higher-quality silicon, that's more than compensated for by its smaller die size). Apple's approach is cheaper for phones and peaks in performance/efficiency for mobile (laptop) GPUs, where they have a larger ratio of core count to clock speed than the typical iGPU/dGPU. The same holds true for desktop, but the clocks of desktop GPUs are so much higher (and they are no slouch with respect to core count) that Apple struggles to keep up again.
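As a back-of-the-envelope check on those figures, peak FP32 throughput is conventionally estimated as compute units x FP32 lanes per unit x 2 (a fused multiply-add counts as two FLOPs) x clock. A quick sketch, assuming the rumored 12 CUs with 128 lanes each (the lane count is an assumption from the post above, not a confirmed spec):

```python
def peak_tflops(compute_units, fpus_per_cu, clock_ghz, flops_per_cycle=2):
    """Peak FP32 TFLOPs, counting one FMA as 2 FLOPs per lane per cycle."""
    return compute_units * fpus_per_cu * flops_per_cycle * clock_ghz / 1000

# Rumored next-gen phone GPU: 12 CUs at 1.0 GHz
print(peak_tflops(12, 128, 1.0))  # 3.072

# Same 12-CU configuration at a laptop-style 1.5 GHz clock
print(peak_tflops(12, 128, 1.5))  # 4.608
```

At 1 GHz this lands around 3.1 TFLOPs, and the identical configuration at 1.5 GHz lands at the ~4.6 TFLOPs quoted earlier for the higher-clocked Elite parts, which is consistent with the "same size GPU, clocked lower" reading.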

It's interesting because Apple does change CPU clocks between the A- and M-series, so I wonder why they don't do the same for the GPU? Cost, as already mentioned, is one obvious reason. Maybe keeping clock speeds constant helps GPU development/driver optimization, by having everything at consistent speeds? (Speculation.) Not to get too far off topic, but assuming the rumor of the Hidra SoC for desktops has any validity, maybe we'll see clock increases for it, especially the GPU (along with, of course, the higher core counts already expected in the Ultra/desktop design). That said, I think Apple would probably benefit from doing the same (but in reverse) for their phone GPUs: increase core counts, lower clocks relative to their mobile chips. Apple probably doesn't feel much pressure to do so since "they don't design to win benchmarks". And their performance in actual mobile games is fine: probably better game/driver optimizations, and of course TBDR and their superior cache structure/compute capabilities giving them a boost. But still... it would be nice to see.
 
Meanwhile, the upcoming ARM-based Immortalis GPU is set to be 12 cores clocked at 1.6 GHz, if leaks are to be believed. The previous Dimensity GPU reportedly suffered from cooling problems, and without active cooling I don't see how a phone will be able to run such a monster anywhere near peak for any length of time.

 
Meanwhile, the upcoming ARM-based Immortalis GPU is set to be 12 cores clocked at 1.6 GHz, if leaks are to be believed. The previous Dimensity GPU reportedly suffered from cooling problems, and without active cooling I don't see how a phone will be able to run such a monster anywhere near peak for any length of time.


One of these statements has to be wrong:

1) peak power draw is basically the same as the iPhone A18 Pro's
2) core count is 12
3) clock speed is 1.6 GHz

If all three were true, it would be running at the same clock speed as the A18 Pro, with twice the cores, on the exact same lithography, at half the power per core... which is not possible because of physics. Running cooler with a vapor chamber I completely get. The power usage I don't, unless Genshin Impact, because of its frame-rate lock, doesn't actually push the GPU anywhere near peak. I haven't watched the original video yet.
 

One of these statements has to be wrong:

1) peak power draw is basically the same as the iPhone A18 Pro's
2) core count is 12
3) clock speed is 1.6 GHz

If all three were true, it would be running at the same clock speed as the A18 Pro, with twice the cores, on the exact same lithography, at half the power per core... which is not possible because of physics. Running cooler with a vapor chamber I completely get. The power usage I don't, unless Genshin Impact, because of its frame-rate lock, doesn't actually push the GPU anywhere near peak. I haven't watched the original video yet.
Did I see the magic name wccftech? Yeah lol.
 
Native port vs emulation test from YouTube channel DameTech.

The iPhone has a higher frame rate and 90% higher resolution. Android has better quality, which is to be expected, as it's the PC version. What's unclear to me is whether upscaling is used; I believe it is on the iPhone.

 