Apple Silicon is behind in GPU, both RT and raster, and in perf/$.

This is because Apple uses a better node. TSMC 5nm is very good, while RTX 3000 uses Samsung 8nm. We should wait and see how the RTX 4000 and RX 7000 do, as they will be based on TSMC 5nm.

Hardly. 5nm is good, but it’s not magic. The node advantage can explain maybe 40% (if you want to be very generous). There is no way Nvidia will gain perf/watt parity with Apple just by moving to 5nm; they would also need to do major reengineering of their GPU.

The reasons why Apple GPUs are so efficient are these:

- they are very streamlined and compact
- they run on a relatively low frequency compared to gaming GPUs
- they use TBDR and large caches which improve resource usage efficiency
- node advantage
- probably some circuit design secret sauce; cmeier might have a better idea of how a long vector ALU can be made to run at lower power
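As a rough illustration of the low-frequency point above: dynamic power scales roughly with C·V²·f, and since supply voltage has to rise roughly with frequency, power grows roughly with the cube of the clock. A toy sketch of that textbook approximation (purely illustrative numbers, not a claim about any real GPU):

```swift
// Simplified dynamic-power model: P ~ C * V^2 * f.
// Assuming supply voltage scales roughly linearly with frequency,
// relative power goes as the cube of the relative clock speed.
func relativePower(clockRatio: Double) -> Double {
    clockRatio * clockRatio * clockRatio   // f * V^2, with V ∝ f
}

// Running ~30% slower costs roughly two thirds of the dynamic power
// (0.7^3 ≈ 0.343), which is why wide-and-slow designs win on perf/W.
print(relativePower(clockRatio: 0.7))
```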


Everything else you said, I agree with. But 3D and GFX has been Apple's area. They JUST made Metal better and added APIs that have been in PCs and consoles for years.

Metal is pretty much cutting edge by any standard. The way you are portraying it is highly misleading. Sure, Metal 3 focused on closing some feature gaps with other APIs (most notably mesh shaders), but there are also things Metal has had for years that other APIs either don’t offer at all or are only now catching up on (e.g. templates, pointers, GPU-driven pipeline management).
 
Apple's huge advantage in PPW is much more driven by microarchitecture than by node. This is true both in CPU and GPU (and ML and other custom accelerators).
 
I was going to post another thread, but thought that these links may be interesting here…

Apple historically has a licensing agreement with Imagination Technologies for their graphics IP.

Before long, Apple is likely to include GPU IP based on CXT and Photon…



I like their overview of CXT and Photon here…




I think this could be game-changing for Apple…
 
Recall that they moved away from Imagination, then turned around a while later and did a deal. I’m convinced it had to do with this technology.
 
Apple has a bunch of RT-related patents that are distinct (albeit based on similar ideas) from IMG's RT patents. Maybe Apple can use some of IMG's IP stack, that's for sure, but they have their own solution in the oven, and it paints an interesting picture.
 
I am really excited to see what Apple comes up with, and also equally excited to see how quickly it is adopted.
 
You good folks will have to forgive me for my ignorance. I haven't spent much time on this forum, or any tech site in general, for many weeks. That's because my Mac Pro blew up in my face. Hence, I'm going to be moving to Apple Silicon much sooner than I had anticipated. I've tried to catch up on Mac news and rumors from the past month or so, but it's been diluted among all of the announcements from Intel, Nvidia, and AMD, which are getting most of the tech press attention right now.

Regarding the M-series: do we know if the M2 Pro/Max will be using 3nm or not? Has there been any new speculation on the next Mac Pro and the M2 "Extreme" alleged to be inside it? Have there been any hints about ray tracing or other developments with future Mac hardware efforts? I think I saw something about Ross Young expecting a mini-LED display potentially coming out early next year, but I'm not certain of the details. Also, I think Ventura is supposed to be released later this month; is there any news on the software front, not just with macOS, but Metal 3?

Sorry for hitting a lot of topics with a barrage of questions. I haven't been paying attention to tech news, since my Mac Pro died an untimely death, and I was tasked with cleaning up its corpse. So, I'm curious about any recent rumors concerning the Mac, since I'll be looking at getting an Apple Silicon Mac sooner than I had expected, to replace my 2018 Mac mini, and any thoughts on recent developments would be much appreciated.
 
Forgive my Metal ignorance, but isn't hardware-accelerated rendering relatively transparent to the code?
For RT, devs would have to start using the RT APIs, but I suppose Apple adding RT hardware would make that code automagically run on the RT hardware.
 
Everything @leman said above basically.

I'd add that for raster performance you can look at benchmarks like GFXBench for more positive scores. The Max and Ultra are very competitive with the best AMD and Nvidia have to offer on that benchmark.

Finally, I've already posted this in another thread, but
https://www.twitter.com/i/web/status/1538362629818703872/
If this guy thinks Apple are onto something (and he was very high up at Nvidia before leaving for Apple), then he's probably correct.
I recall an interview with an Nvidia engineer talking about data compression (on YouTube, maybe 3-4 years ago).

He said, essentially (paraphrasing), that “compute is essentially free” in terms of energy efficiency, and that most of the power (inside a GPU at least) is spent shuffling data around. In the context he was discussing, that was why Nvidia's compression inside their GPUs was so critical for performance. At the time, AMD had much worse colour compression.

This expense (moving data) creates bias towards shared memory systems at the lower power levels exactly as above.
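To put rough numbers on that "compute is essentially free" remark: the figures below are ballpark per-operation energy estimates in the spirit of Horowitz's well-known ISSCC numbers (~45nm era). Treat them as illustrative assumptions, not measurements of any particular chip:

```swift
// Approximate energy per operation, in picojoules (illustrative only):
let fp32Fma_pJ = 4.0        // one 32-bit multiply-add inside the ALU
let dramWord_pJ = 640.0     // fetching one 32-bit word from off-chip DRAM

// How many math ops you could run for the energy of one DRAM access:
let opsPerDramAccess = dramWord_pJ / fp32Fma_pJ
print(opsPerDramAccess)     // prints 160.0: two orders of magnitude
```

At a gap like that, compression, big caches, and shared memory all pay for themselves many times over.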

System memories are now getting into the range where they are fast enough for GPUs, especially when you no longer need to waste bandwidth on useless copies from pool to pool.

This is no doubt why Nvidia wanted Arm: to create their own high-powered SoCs.

This is why both AMD and Apple are in very strong positions for the long term, why Intel is trying desperately for a decent GPU play, and why Nvidia clearly wants its own CPUs for a high-performance SoC.
 
Hope this isn't a distraction - apologies if it is.

On the subject of Metal 3 (available with the upcoming macOS Ventura)... I was wondering if people here can help me understand how that translates to better graphics performance, for a given processor, with respect to Metal 2? I'm a complete newb on graphics APIs.

As an example... Let's assume a certain level Mac Studio, and a graphics intensive app such as X-Plane 12, and that it takes advantage of Metal 2 and Apple Silicon.

Question: Would simply running X-Plane on macOS Ventura with Metal 3, on the same Mac Studio, automatically result in better graphics performance due to coding efficiencies in Metal 3? Or... would performance gains only result by using X-Plane software being modified to use new more efficient routines in Metal 3?

Any other insights on the subject would be appreciated. Thanks!
 
On the subject of Metal 3 (available with the upcoming macOS Ventura)... I was wondering if people here can help me understand how that translates to better graphics performance, for a given processor, with respect to Metal 2? I'm a complete newb on graphics APIs.

It doesn’t.

As an example... Let's assume a certain level Mac Studio, and a graphics intensive app such as X-Plane 12, and that it takes advantage of Metal 2 and Apple Silicon.

Question: Would simply running X-Plane on macOS Ventura with Metal 3, on the same Mac Studio, automatically result in better graphics performance due to coding efficiencies in Metal 3? Or... would performance gains only result by using X-Plane software being modified to use new more efficient routines in Metal 3?

It is possible that Ventura comes with better drivers which might improve performance, but even if the performance improves (and I’m not saying it will), it won’t have anything to do with Metal 3.

Off the top of my head, there are two new features in Metal 3 that might improve performance in some cases. Both require code changes:

1. The revamped resource binding model might reduce CPU bottlenecks when adopted (more flexible encoding, encoding possible in parallel, easier porting of applications developed for Vulkan/DX12). But a game that’s already optimized for Metal is unlikely to benefit from adopting this model unless it’s a very complex game that does weird things.

2. New raytracing APIs that allow more information to be encoded in the spatial index, which might reduce expensive memory lookups in some cases. Very little real-world impact right now, as nobody uses Metal RT in performance-critical paths.
 
It doesn’t.

There’s some stuff that can improve load times, but that’s not really the same thing.

There’s also the scaling APIs that got added. They can get developers higher frame rates, particularly on the high pixel density displays that Apple likes to use. Something long overdue, in my opinion. But this doesn’t make the graphics hardware any more performant; it just lets the developer control how much work the GPU does per frame, using an upscaler to make up the difference. Consoles have been doing this for a while, and it’s been coming to PCs more recently.
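To make the frame-rate point concrete, here's a back-of-envelope sketch of the shading savings from rendering at a lower internal resolution and upscaling to the display. The 5K target resolution and 2x scale factor are assumed examples, not anything specific to Apple's upscaler:

```swift
// Spatial upscaling sketch: shade fewer pixels, then scale up for display.
let output = (width: 5120, height: 2880)   // assumed 5K display target
let scale = 2.0                            // assumed upscale factor per axis

let inputWidth = Int(Double(output.width) / scale)    // 2560
let inputHeight = Int(Double(output.height) / scale)  // 1440

let shaded = inputWidth * inputHeight          // pixels actually shaded
let displayed = output.width * output.height   // pixels on screen
let savings = 1.0 - Double(shaded) / Double(displayed)
print(savings)   // prints 0.75: per-pixel shading work drops by three quarters
```

The GPU still pays fixed per-frame costs (geometry, the upscale pass itself), so real gains are smaller than the raw pixel ratio, but on dense displays the win is substantial.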
 
There’s also the scaling APIs that got added. They can get developers higher frame rates, particularly on the high pixel density displays that Apple likes to use.

Thanks for the heads-up on that. I suspect that'll make a large difference using multiple displays trying to get decent frame rates.

X-Plane is also pretty compute intensive with respect to the CPU. It's one of the few flight simulators that takes it out of the "game" category (like Microsoft Flight Simulator) by focusing on flight physics and dynamics based on the shapes of multiple aircraft elements and determining the resulting forces on each piece of the plane as it's pushed through the air. For those interested, here's a brief introduction.
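The per-element force idea described above can be sketched in miniature like this. This is a hypothetical toy model (the struct and function names are mine, and real flight models also handle drag, moments, stall, and so on): each surface patch contributes lift proportional to the local dynamic pressure, its area, and a lift coefficient, and the forces are summed over all patches.

```swift
// Toy blade-element-style sum: total lift from many small surface patches.
// Purely illustrative; nothing like X-Plane's actual implementation.
struct SurfaceElement {
    let area: Double       // m^2
    let liftCoeff: Double  // dimensionless; depends on local angle of attack
}

func totalLift(_ elements: [SurfaceElement],
               airDensity: Double,           // kg/m^3
               airspeed: Double) -> Double { // m/s
    let q = 0.5 * airDensity * airspeed * airspeed  // dynamic pressure, Pa
    return elements.reduce(0) { $0 + q * $1.area * $1.liftCoeff }
}

let wing = [SurfaceElement(area: 1.0, liftCoeff: 0.5),
            SurfaceElement(area: 2.0, liftCoeff: 0.25)]
print(totalLift(wing, airDensity: 1.225, airspeed: 10))  // lift in newtons
```

Evaluating sums like this for every element of every aircraft part, every physics tick, is why the CPU load is so heavy.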
 
Thanks for the heads-up on that. I suspect that'll make a large difference using multiple displays trying to get decent frame rates.

X-Plane is also pretty compute intensive with respect to the CPU. It's one of the few flight simulators that takes it out of the "game" category (like Microsoft Flight Simulator) by focusing on flight physics and dynamics based on the shapes of multiple aircraft elements and determining the resulting forces on each piece of the plane as it's pushed through the air. For those interested, here's a brief introduction.
Is the GPU pegged in X-Plane? MSFS is mainly CPU-limited, so running at a lower resolution doesn't help as much as you would hope.
 