3DMark’s new benchmark: Steel Nomad and Steel Nomad Light.

Modern gaming laptops? Sure. I wonder about laptops of the same age. Even many modern non-gaming laptops get a lower score than yours.
Your system has the same average score as a 3060 laptop. The 5060 laptops are averaging 2500. The M3 Ultra is equivalent to a 4070 Ti Super (500 points away from the 5090 notebook).
 
Ultra aside (which is underwhelming), don't forget to compare to non-discrete Nvidia machines.

I'm genuinely not sure what your expectations are given the delta in power. A 3060 laptop GPU uses between 80 and 115 watts.
 
I had to ask Gemini what iGPU is in Strix Halo (the 8060S). It gets 2200 points.

For the price? More, really. That is the feedback I tend to get from my friends: they see the performance for the price and think I'm the idiot for sticking with macOS (granted, I do have a PC that I game on, but they don't have separate systems).
 
The 8060S gets 2200 with the 2.9 GHz clock boost found on some configurations, I believe. A more typical score is 2000. At 80-90 W, btw.

I don't know. I can't stop you being disappointed, and I also wish gaming were a little better on the Mac. I do think the comparison with other laptops is off. You aren't paying for a GPU, you're paying for a machine. If gaming is the bulk of your use, then I get it. If you have a mixed usage load, then I don't think these other machines compare. CPU, battery life, reliability, video encoding, etc.: there are many ways a Mac laptop is better. Hell, even LTT has switched lately, as bad as that video is.
 
Slight note: as @Jimmyjames alluded to with respect to the 8060S, a number of the results on the PC side of the database, regardless of processor (including, perhaps especially, mobile), are from overclocked variants. Whether that changes things for you depends on your priorities of price vs noise, heat, battery life, weight, etc. Personally I wouldn't mind some extra GPU performance, but truthfully Apple perf/$ is typically decent (GPU) to excellent (CPU) - it's really storage costs that are killer. Memory pricing isn't great, but since the M4 bumped base configurations that should be less of an issue.
 
Yeah, agree. I think most people want a bit more oomph from Apple's GPUs. I'm hoping that arrives with the M5.
 
Enough performance that one could use MetalFX to upscale to their panel's native resolution and hit the panel's max refresh rate doesn't seem like a tall ask for what is being spent.

The disconnect is that Mac hardware does so well at everything else that, when it comes to gaming, the performance disparity can be jarring.
 
It doesn't help that, according to Nat Brown, there are basically zero games which use tile-based rendering. One of the big advantages of Apple's GPUs, gone immediately.
 
It doesn't help that consoles use AMD/Nvidia GPUs; it seems next gen is sticking with AMD too.
 
[Attached screenshot, 2025-07-02: the graph described below.]

With the 3DMark website now including Mac info (not all the data is updated from that; some still uses outside information, but eyeballing a couple, they were pretty close), I decided to expand one of my graphs about Nvidia's dual-issue advantage in various 3DMark benchmarks.

As a reminder, the y-axis is 2 x the TFLOPs-normalized benchmark ratio of the RTX 40XX GPU over the M4 or 20XX GPU, while the GPU comparison pairs are chosen on the basis of having roughly equivalent memory bandwidth and non-dual-issue TFLOPs (i.e. the Mac and 20XX Nvidia GPUs have roughly half the theoretical TFLOPs of the 40XX GPU). This metric is designed to test how close, in practice, the 40-series GPUs' dual-issue FP32 gets to its theoretical doubling of TFLOPs per core.
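
In rough Python, the metric works out to something like this - just a minimal sketch, with placeholder numbers rather than actual scores or specs:

def dual_issue_metric(score_40xx, tflops_40xx, score_ref, tflops_ref):
    # 2 x the TFLOPs-normalized score ratio of the 40XX GPU over its reference pair (M4 or 20XX).
    # ~2.0 would mean dual-issue FP32 delivers its full theoretical doubling;
    # ~1.0 would mean it delivers essentially nothing in that benchmark.
    return 2 * (score_40xx / tflops_40xx) / (score_ref / tflops_ref)

# Placeholder example: a 40XX with double the paper TFLOPs but only 1.5x the score
# of its reference pair lands at 1.5 on the graph.
print(dual_issue_metric(score_40xx=1500, tflops_40xx=20.0,
                        score_ref=1000, tflops_ref=10.0))  # -> 1.5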

I've added more Mac data as well as Steel Nomad and Solar Bay to the mix. It doesn't really change the conclusions a ton but there are a few nuggets:

1) I had expected, given the improvements Nvidia made to ray tracing, that the Solar Bay results for the 20XX GPUs would be pretty bad (high ratio), but they weren't! In fact, the 20XX GPUs did fantastic on a per-TFLOP basis, even better than in the other benchmarks (i.e. Nvidia's most recent GPU had the least dual-issue gain here). Not sure if that says more about the extent to which ray tracing actually impacts the render times for Solar Bay or about Nvidia GPUs (my guess is the former).

2) The more punishing Steel Nomad benchmark sees the 40XX GPUs get less benefit from the dual-issue design than in its "Light" counterpart. It's possible it relies more on GPU features where the extra FP32 units per core aren't able to provide as much uplift.

As before, Mac performance on Wildlife Extreme is extremely good - what exactly about the scene/engine enables such performance is still a mystery to me. The two main (non-mutually exclusive) candidates, as far as I know, are that they designed the scene/engine to make good use of Apple's TBDR design (lots of occluded objects and an engine that takes advantage of them), or that it uses a lot of FP16, which Apple has a separate pipe for; even before Apple's dual-issue design in the M3-4 (but still only one FP32 pipe), Apple would recommend developers use FP16 when possible.
 
From my understanding, Steel Nomad Light to Steel Nomad is just a resolution change (1440p -> 4K). Would that matter for dual-issue designs?
 
According to this there are more differences:


In the less demanding version of the test, Steel Nomad Light, the scene is illuminated by moonlight, there are fewer objects and the desert basin in the background is filled with water. It’s designed for high-end portable Windows, Windows-on-Arm, iOS or Android devices and supports the DirectX feature level 12 API, Vulkan 1.1 and Metal. Internally, it runs at a lower resolution of 2560×1440 pixels.

3DMark also has a description of each benchmark though I don’t recall if it goes into this kind of comparative depth.

As for whether or not 4K vs 1440p itself can make a difference: maybe!
 
Well, I'll be. I never noticed it was moonlit instead of sunny. I've run the bench, but usually turn away (do something else) and haven't looked at it.

In that case, yeah, maybe the scene complexity does matter, and maybe that is why the score isn't a straight 100 x fps like it is for Steel Nomad and Speedway.
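
A quick way to eyeball any single result (the fps value below is just a made-up example, not a real run):

# If the 100 x fps relation held, the reported score would be exactly this:
fps = 24.37                 # made-up example average fps
print(round(100 * fps))     # 2437 -- a Light result that doesn't line up suggests something else is folded in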

EDIT: @dada_dave were the scores you used from the DX12 runs or the Vulkan runs? It seems like the Vulkan scores are higher (which is interesting).
 
DX12. I noticed the same thing but wasn't sure why, and when I used information from beyond 3DMark's search, I noticed most reviewers I was using seemed to use DX12 (sometimes both), if they said at all. The difference isn't enough to change the conclusions any, but it was noticeable by eye - though sometimes swamped by noise (some of my PC specifications were tight enough that I had very few samples). On top of that, some of the runs (sometimes enough of them that the median was affected) were bad, i.e. abnormally low - sometimes because of running obviously substandard CPUs, sometimes for unknown reasons but probably because the machine was already loaded with other tasks - so I often had to carefully exclude computers using the search parameters or do so manually.
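
To illustrate the median issue with made-up numbers (not real results): with only a handful of samples, a couple of bad runs drag the median a long way.

from statistics import median

scores = [2480, 2470, 1100, 900]                 # two plausible runs plus two abnormally low ones
print(median(scores))                            # 1785.0 - dragged down by the bad runs
print(median([s for s in scores if s > 2000]))   # 2475.0 - after excluding them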

As for whether or not 4K vs 1440p itself can make a difference: maybe!

To expand on this, I can make an argument (no idea if it is true) that 4K resolution impacts bandwidth more than compute, so when controlling for bandwidth the extra compute doesn't help as much as one might think. In general, I think I found 4K tests to be less helped by Nvidia's dual-issue design than 1440p tests, but I'm not sure - so heavy asterisk on that.
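
For a sense of scale - this is pure pixel counting, not a real bandwidth model:

# 4K pushes 2.25x the pixels of 1440p, so per-frame framebuffer traffic grows by
# roughly that factor even if the per-pixel shader work stays the same.
pixels_4k = 3840 * 2160          # 8,294,400
pixels_1440p = 2560 * 1440       # 3,686,400
print(pixels_4k / pixels_1440p)  # 2.25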
 