Apple M2 Ultra and Extreme Design

leman

Site Champ
Posts
285
Reaction score
484
But this was all relatively minor compared with everything else presented at WWDC22 related to Metal (MetalFX Upscaling, new workflows to pre-compile shaders, geometry shaders, a C++ API for Metal...), most of it unrelated to ray tracing. Nothing groundbreaking in the ray tracing field that would suggest they're leveling the field for a grand reveal. Maybe at WWDC23?

My impression is that the raytracing API in Metal is already rather mature. It is fairly similar to what DX12 Ultimate or Vulkan extensions offer, and I think even more capable in some key areas (I don't think that DX12 offers motion blur APIs or in-node storage, plus Metal RT has larger data structure limits more suitable for production ray tracing). So the lack of big upgrades might simply mean that the API is "done".
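For a sense of the shape of the API being described, here is a rough sketch of an intersection query in Metal Shading Language. The `raytracing` namespace types (`ray`, `intersector`, `primitive_acceleration_structure`) are Apple's documented names; the kernel itself, its resolution constant, and the flat shading are made up for illustration:

```metal
#include <metal_stdlib>
using namespace metal;
using namespace raytracing;

// Hypothetical kernel: cast one primary ray per pixel against a
// previously built primitive acceleration structure.
kernel void traceScene(primitive_acceleration_structure accel [[buffer(0)]],
                       texture2d<float, access::write> out   [[texture(0)]],
                       uint2 tid [[thread_position_in_grid]])
{
    // Build a camera ray for this pixel (assumes a 512x512 target).
    ray r;
    r.origin       = float3(0.0, 0.0, -1.0);
    r.direction    = normalize(float3(float2(tid) / 512.0 - 0.5, 1.0));
    r.min_distance = 0.0;
    r.max_distance = INFINITY;

    // Query the acceleration structure; on today's GPUs this runs in
    // software, but the same code could be hardware-accelerated later.
    intersector<triangle_data> isect;
    intersection_result<triangle_data> hit = isect.intersect(r, accel);

    // Flat shading: white on hit, black on miss.
    float c = (hit.type == intersection_type::triangle) ? 1.0 : 0.0;
    out.write(float4(c, c, c, 1.0), tid);
}
```

The point leman is making follows from this structure: the query is expressed declaratively through `intersector`, so the runtime is free to map it onto fixed-function hardware without the shader changing.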

Anyhow, while the exact details of the Apple Silicon architecture weren't discussed at WWDC20, they unveiled that it was coming. I guess they could do the same here: announce raytracing is coming (and how the new APIs work, or how to improve existing code to tailor it to the new hardware), and release the first commercial products that use it later. But it would be a bit awkward without something like the Developer Transition Kit they announced at WWDC20 for the Apple Silicon transition.

I would be surprised if their RT API was not developed with the upcoming hardware in mind. Most likely, the code will just work on new GPUs with hardware acceleration, with no changes or tweaks required.
 

Nycturne

Site Champ
Posts
828
Reaction score
955
I would be surprised if their RT API was not developed with the upcoming hardware in mind. Most likely, the code will just work on new GPUs with hardware acceleration, with no changes or tweaks required.

Sounds a little like the rare case of the software guys being ahead of the hardware guys?
 

Yoused

up
Posts
4,458
Reaction score
6,666
Location
knee deep in the road apples of the 4 horsemen
Sounds a little like the rare case of the software guys being ahead of the hardware guys?
Rare? It is typical. RT has been happening in SW for ages. It is not at all surprising that Metal would have a RT API. Just like the original Mac did floating point in software until the hardware designers decided it was important enough to implement in circuitry. Or image compression, which is currently handled by specialized SoC units. I suspect the M-series processor cores have dedicated hardware to speed up object method dispatch, which has always been done in software and is extremely heavily used (the performance benefit would be too ridiculous to ignore).

Just look back through computer development and you will see endless examples of operations getting folded into hardware to gain better performance. It is the norm, not a rarity.
 

Nycturne

Site Champ
Posts
828
Reaction score
955
Just look back through computer development and you will see endless examples of operations getting folded into hardware to gain better performance. It is the norm, not a rarity.
That's not what I mean in this case, and I am not that dense in my own field, I would hope. Apple's co-development of hardware and the software meant to leverage it generally has a smaller gap. I'm more poking fun at this as if Apple had launched the pen input APIs but sat on the iPad Pro for 3 years.

Auto Layout had 2 years of runway before the iPhone 6 leveraged it for the new screen sizes, but that's about the longest runway Apple's given folks in recent memory.

That all said, it does look like Metal RT for cases like Blender does use GPU resources to accelerate things, but nothing as nice as the dedicated hardware for doing ray intersections that Nvidia, and now AMD, have.
 

leman

Site Champ
Posts
285
Reaction score
484
Sounds a little like the rare case of the software guys being ahead of the hardware guys?

They probably wanted to get all the infrastructure ready. The nice thing about having a mature API is that future hardware should be able to accelerate today's software.
 
Last edited:

Andropov

Site Champ
Posts
469
Reaction score
454
Location
Spain
I would be surprised if their RT API was not developed with the upcoming hardware in mind. Most likely, the code will just work on new GPUs with hardware acceleration, with no changes or tweaks required.
I'm sure they had the upcoming hardware in mind, and I agree that it's most likely that the RT code will just work without changes. However, I do expect talks outlining things like best practices on how to set up the RT code to maximize performance. Maybe even minor additions to the API to make the hardware's job easier/faster (could the in-node storage have been that?). Even with the knowledge of what the RT hardware was going to be, it's been 3 years; it'd be impressive if the software team finished an API that long in advance and it didn't require even minor changes. Not because the API was badly designed or anything, just because profiling usually shows critical (and often unexpected) bottlenecks, and even with pre-silicon access, things might be different on the newest hardware.
 