M4 Mac Announcements

The only other GPUs that can do path tracing with acceptable frame rates are the RTX 4070 Ti and higher cards from Nvidia with DLSS. Even RDNA 3 is not enough; supposedly AMD has a big RT uplift coming with RDNA 4 in Q1 2025. It will be interesting to see how all the GPUs compare.

I think the press also mentioned frame gen; this is likely FSR 3.1.
Almost certainly FSR, since it's also already implemented in Cyberpunk... but MetalFX FG would be great to see.
I’ve noticed that people are using the term “path tracing” a lot recently. How is that different from ray tracing?
My understanding is that after the term "ray tracing" was used a lot for mixed RT+raster lighting and lower-quality RT approximations, the marketing department took the synonymous term "path tracing" to mean "fully ray traced", and "ray tracing" thus means "makes use of ray tracing in some capacity".
 
However, it will never work the same way for games. Software is too complex. And with the gamedev market and Apple's share of it being what it is, nobody is going to spend extensive time optimizing for Apple platforms.

I think it's possible, as the same architecture (just scaled down) is used in the iPhone, iPad, Apple TV, Vision Pro, and Mac.

Sure, the gaming market share right now is tiny, but the potential there is absolutely massive, and the hardware is properly capable.
 
I’ve noticed that people are using the term “path tracing” a lot recently. How is that different from ray tracing?
Skimming this:

Sounds like it is taking more bounces into account.
 
My understanding of the difference is that path tracing actually limits the number of rays you have to trace through a scene. The problem is that fully ray traced scenes present as an unbounded computation. You don’t know how many bounces will occur, and how many splits will happen at each bounce.

Path tracing says "I’m only going to allow N bounces, and X rays per pixel", and randomly follows some of the possible bounces off each surface. The end result is a random sampling of the scene. The benefit is that you can actually tune this algorithm for the time you want to spend producing the image, where more time means more samples, which gets you closer to the ‘true’ image. The downside is noise in the resulting frame, as adjacent pixels can get very different answers from each other, which is what NVidia’s denoiser tech is for.

The ray traced games we’ve gotten are really using path tracing. Quake 2 and Minecraft’s RTX enhancements use path tracing, for example, as real-time processing needs the ability to bound the computation time for each frame. The visual artifacts are there because the tracing is a random sampling rather than a fully traced scene.
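
To make the "N bounces, X rays per pixel" idea concrete, here is a minimal sketch of that sampling loop in Swift. The Ray/Hit/Scene types and the trace/renderPixel functions are made up purely for illustration, not any real renderer's API:

// Toy types just for illustration; a real renderer's scene and material model is far richer.
struct Ray { var origin: SIMD3<Double>; var direction: SIMD3<Double> }
struct Hit { var point: SIMD3<Double>; var emitted: SIMD3<Double>; var reflectance: SIMD3<Double> }

protocol Scene {
    func intersect(_ ray: Ray) -> Hit?                       // nearest surface the ray hits, if any
    func randomBounce(from hit: Hit, incoming: Ray) -> Ray   // pick ONE of the possible bounce directions at random
}

// Follow a single random path, cutting it off after maxBounces (the "N bounces" bound).
func trace(_ ray: Ray, in scene: Scene, bouncesLeft: Int) -> SIMD3<Double> {
    guard bouncesLeft > 0, let hit = scene.intersect(ray) else { return .zero }
    let next = scene.randomBounce(from: hit, incoming: ray)
    // Light emitted at this surface, plus whatever comes back along the one randomly chosen continuation.
    return hit.emitted + hit.reflectance * trace(next, in: scene, bouncesLeft: bouncesLeft - 1)
}

// Average X random paths for the pixel (the "X rays per pixel" bound).
// Few samples = fast but noisy (what the denoiser cleans up); more samples = slower but closer to the "true" image.
func renderPixel(cameraRay: Ray, in scene: Scene, samples: Int, maxBounces: Int) -> SIMD3<Double> {
    var sum = SIMD3<Double>.zero
    for _ in 0..<samples {
        sum += trace(cameraRay, in: scene, bouncesLeft: maxBounces)
    }
    return sum / Double(samples)
}

The hard cutoff on bounces and samples is what keeps the per-frame cost bounded; real engines layer things like Russian roulette termination and importance sampling on top of this basic loop.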
 
Skimming this:

Sounds like it is taking more bounces into account.

My understanding of the difference is that path tracing actually limits the number of rays you have to trace through a scene. The problem is that fully ray traced scenes present as an unbounded computation. You don’t know how many bounces will occur, and how many splits will happen at each bounce.

Path tracing says "I’m only going to allow N bounces, and X rays per pixel", and randomly follows some of the possible bounces off each surface. The end result is a random sampling of the scene. The benefit is that you can actually tune this algorithm for the time you want to spend producing the image, where more time means more samples, which gets you closer to the ‘true’ image. The downside is noise in the resulting frame, as adjacent pixels can get very different answers from each other, which is what NVidia’s denoiser tech is for.

The ray traced games we’ve gotten are really using path tracing. Quake 2 and Minecraft’s RTX enhancements use path tracing, for example, as real-time processing needs the ability to bound the computation time for each frame. The visual artifacts are there because the tracing is a random sampling rather than a fully traced scene.
In theory this is true, but in practice I believe the description below from @casperes1996 is closer, which is why "path tracing" is considered more computationally expensive than what some game makers have been calling "ray tracing" up until now. Basically, path tracing is a more computationally manageable version of idealized ray tracing, but we've already been using path tracing techniques, just with even more approximations (fewer bounces) and baked-in effects. Of course, my understanding is that approximations will still play a huge role going forward, in the form of AI denoising and AI techniques to cut down on the number of random rays at each bounce.
==========================

According to MR, in today's Power On newsletter, Gurman said the M4 Ultra will "probably" be just 2x the current M4 Max for CPU & GPU core counts (32/80).

I'm having trouble deciding if this is Gurman backing away from his Hidra predictions, or if he's saying the Hidra chip won't be M4, or if maybe the Hidra chip is only coming to the Mac Pro and therefore the Studio is getting the same chip designs as before. Interesting to note that it doesn't appear that M4 Ultra Studios or any kind of Mac Pro are currently listed in the known Apple IDs in @Altaic's lists. Hmmmmm ... hard to say what is going on here. MacRumors is simply going with no M4 Extreme, but we don't know what the Hidra chip will be, whether it's M5, and if it would qualify as an "Extreme".


==========================

They are not interested in that part of the market. Never have been. Production is limited and R&D is expensive. Why would they spend resources to sell cheap GPUs if their customers are happy to pay $4000 for a Max-sized die? And of course, companies like Nvidia have a huge advantage. They can make a Max-sized die packed full of GPU compute. Apple needs to build a full system in the same footprint. For a similar die size, Nvidia will always be ahead, unless Apple uses its $$$ to invest in some new tech like die stacking.

While I agree that they will never be the budget option (except maybe on VRAM), I do think Apple can push the GPU cores a bit more than they do. My extremely rough estimate is about 20% more cores, and they'd be in a much stronger perf/$ position when the whole device is taken into account. Of course, with Apple the big pricing issue is still storage upgrades, much more so than GPU cores - though at least with the M4 base RAM size and bandwidth increases we've gotten, the RAM upgrade pricing shouldn't be as big of an issue now for most people.
Besides, GPU performance is extremely overrated. Nvidia and AMD have normalized oversized behemoths that draw more power than a water boiler. You don’t need 60 TFLOPs to play games. Sadly, because of this (and the toxic gaming industry) we have lost the precious art of optimization. The base M4 is more than sufficient for running pretty much any game. It’s the code that sucks. Just look at Blender: the M3 Max already outperforms the 7900 XTX, a GPU that has more than 3x the compute capacity! That’s what smart use of technology, compute, and software optimization can bring.

Sure ... although with Blender that's also because AMD's ray tracing hardware is pretty terrible. Supposedly RDNA 4 will be a big uplift there.
 
In theory this is true, but in practice I believe the description below from @casperes1996 is closer, which is why "path tracing" is considered more computationally expensive than what some game makers have been calling "ray tracing" up until now. Basically, path tracing is a more computationally manageable version of idealized ray tracing, but we've already been using path tracing techniques, just with even more approximations (fewer bounces) and baked-in effects. Of course, my understanding is that approximations will still play a huge role going forward, in the form of AI denoising and AI techniques to cut down on the number of random rays at each bounce.

The problem there is mixing up the colloquial use of terms with the engineering use of terms. The description I gave dates back to the 1980s when path tracing was first described.

Nvidia themselves split up ray tracing (an umbrella term in their view) wrt games into ray casting (one ray per pixel) and path tracing (multiple rays per pixel). Not perfect, but still more precise than how it gets used colloquially, which ironically is NVidia’s own fault.
 
Just following up on my Hidra speculation (which was incorrect). According to one of the hackers in the appledb community, Nicolás:

Donan = M4
Brava Chop = M4 Pro
Brava = M4 Max

Hidra will be something else, likely Ultra & Extreme (or whatever it’ll be called). So, to wrap back to the beta identifier leak with future products, Mac Studio M4 Max and two MBA M4 models should be coming soon-ish, and the Hidra stuff should be later on (IIRC Gurman is saying March to July or something).
 
Just following up on my Hidra speculation (which was incorrect). According to one of the hackers in the appledb community, Nicolás:

Donan = M4
Brava Chop = M4 Pro
Brava = M4 Max

Hidra will be something else, likely Ultra & Extreme (or whatever it’ll be called). So, to wrap back to the beta identifier leak with future products, Mac Studio M4 Max and two MBA M4 models should be coming soon-ish, and the Hidra stuff should be later on (IIRC Gurman is saying March to July or something).
Works for me, I’m ready to buy the M4 Max Studio. ;)
 
Me too, but I can't imagine a Mac Studio being released with only the Max and then what? An M4 Max or M2 Ultra setup? That wouldn't fly. Only M4 Max and no Ultra for now? Maybe, but that feels odd too
As much as I hate to admit it, you're absolutely right. I think the Studio will only be released when they're ready with the Ultra as well. Maybe even the Extreme for the Mac Pro.
 
And the all-new Mac Pro Cube, for those who want the ultimate in macOS hardware horsepower, but have no need for PCIe slots...! ;^p

I know y'all are just busting my chops, but I did say all-new...

The Cube had an AGP slot though.

No need for AGP these days, the GPU is now integrated into the ASi chip...

And hairline cracks in the acrylic base.

I would think an all-new Mac Pro Cube would basically be a taller variant of the Mac Studio, to allow for the larger cooling subsystem an Mn Extreme chip would require...
 
I know y'all are just busting my chops, but I did say all-new...



No need for AGP these days, the GPU is now integrated into the ASi chip...



I would think an all-new Mac Pro Cube would basically be a taller variant of the Mac Studio, to allow for the larger cooling subsystem an Mn Extreme chip would require...
So would the Cube just be a bigger Studio? Like, would it share the Mac Pro design language or the Studio's?
 
So would the Cube just be a bigger Studio? Like, would it share the Mac Pro design language or the Studio's?

I actually have a Cube. Its power supply is broken, but before that happened I had upgraded it to a 1.2GHz G4. It was a lovely machine, but the form factor is outdated.

I would like to see a Studio with a connector for some sort of card rack attachment, which would eliminate the need for a Mac Pro. "Cube" is, quite frankly, no longer meaningful – it was awesome in 2000, but a quarter century later, it is merely "cool".
 
Me too, but I can't imagine a Mac Studio being released with only the Max and then what? An M4 Max or M2 Ultra setup? That wouldn't fly. Only M4 Max and no Ultra for now? Maybe, but that feels odd too
If you look at the Mac Pro board, there's a massive empty space that looks like it was intended for a second CPU socket.

Maybe they'll add the ability to add another SoC to it when the M4 Ultra is released? They kinda need to do something to alleviate the RAM capacity problem vs. the old Intel machine.

Sure, 256 GB is a decent amount (presumably what an Ultra will cap out at, given what we see with the Max), but it's nowhere near what you can get with an EPYC or Threadripper.
 
If you look at the Mac Pro board, there's a massive empty space that looks like it was intended for a second CPU socket.

Maybe they'll add the ability to add another SoC to it when the M4 Ultra is released? They kinda need to do something to alleviate the RAM capacity problem vs. the old Intel machine.

Sure, 256 GB is a decent amount (presumably what an Ultra will cap out at, given what we see with the Max), but it's nowhere near what you can get with an EPYC or Threadripper.

The comment you replied to was talking about the Mac Studio release timeline, not the Mac Pro.
 