M4 Mac Announcements

How much does interpolation cost compared to straight frame rendering? Seems to me game video doesn't have a lot of cut-like full-image changes, so interpolation could be a big gain if it's cheap enough.
 
It depends: are you talking about generating frame A, then frame B, then frames in between? That has a pretty hard limit on how much it helps, because you still have to wait until frame B is done. That’s why the new AI frame-generation techniques instead generate frame A, then generate 2-4 frames forward in time based on a guess about what the scene should look like while the real frame B is being rendered.

This works well, except that input lag becomes very noticeable, especially at low non-frame-generation frame rates. In other words, if you can’t play the game well “natively” (or with just standard DLSS-type optimizations), this kind of frame generation won’t deliver a playable game. If your computer can already play the game well and you then turn on frame generation, you get even smoother visuals (but your input latency still won’t improve, for games where that matters). This AI frame generation is thus basically a “win more” for your graphics card - even smoother graphics for those already able to play smoothly.
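To make the difference concrete, here's a toy sketch in Swift of the two approaches - this isn't DLSS/MetalFX or any real API, and the types and function names are made up purely for illustration:

```swift
// Toy model only: renderFrame() stands in for the game's expensive real renderer,
// predictForward() for the cheap AI guess. None of this is a real framework API.
struct Frame { let time: Double }

func renderFrame(at t: Double) -> Frame { Frame(time: t) }            // expensive "real" frame
func predictForward(from f: Frame, by dt: Double) -> Frame {          // cheap AI guess
    Frame(time: f.time + dt)
}

// Interpolation: frame A must be held back until frame B exists before any in-betweens
// can be shown, so you pay roughly a full render interval of extra latency.
func interpolate(_ a: Frame, _ b: Frame, steps: Int) -> [Frame] {
    (1..<steps).map { i in
        Frame(time: a.time + (b.time - a.time) * Double(i) / Double(steps))
    }
}

// Extrapolation-style frame generation: guessed frames are shown WHILE frame B is still
// rendering, so smoothness goes up but the real input-to-photon latency is unchanged.
func extrapolate(from a: Frame, renderInterval: Double, generated: Int) -> [Frame] {
    (1...generated).map { i in
        predictForward(from: a, by: renderInterval * Double(i) / Double(generated + 1))
    }
}

let a = renderFrame(at: 0.0)                         // real frame A
let b = renderFrame(at: 1.0 / 30.0)                  // real frame B, ~33 ms later at 30 FPS
let inBetween = interpolate(a, b, steps: 2)          // only usable once B is finished
let guessed = extrapolate(from: a, renderInterval: 1.0 / 30.0, generated: 3)
print(inBetween.count, guessed.count)                // 1 interpolated frame, 3 guessed frames
```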
 
The above answer by dada_dave is very good; I'd just like to add a little to it.

The cost of doing the frame interpolation/generation itself is fairly cheap - you generally get very close to doubling your frame rate by turning it on, so the cost of generating the extra frames isn't much of a consideration, I think. It is still a matter of tradeoffs, though. As dada says, frame interpolation increases smoothness, but it doesn't help your input latency. If you natively run the game at 30 FPS, frame interpolation may get you to roughly 60 FPS on screen, but your input latency may now be the equivalent of running at 28 FPS. In total this may actually be a perceivably worse experience than just VSYNC-ing to 30 FPS: less smooth visuals, but slightly better input latency than with the added cost of frame interpolation.
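To put rough numbers on that 30 -> 60 -> "28 FPS feel" example (a toy back-of-the-envelope calculation in the same spirit as the sketch above; the ~2 ms generation cost is just an assumed figure, not a measurement of any particular GPU):

```swift
// Toy latency math for the 30 FPS example above; the 2 ms generation cost is assumed.
let nativeFPS = 30.0
let nativeFrameTime = 1.0 / nativeFPS                      // ≈ 33.3 ms between real frames
let generationOverhead = 0.002                             // assumed ~2 ms per generation pass

let displayedFPS = 2.0 * nativeFPS                         // ~60 FPS shown, 1 generated frame per real frame
let responsivenessFrameTime = nativeFrameTime + generationOverhead   // real frames now arrive slightly slower
let responsivenessFPS = 1.0 / responsivenessFrameTime      // ≈ 28 "FPS-equivalent" input feel

print("Shown: \(Int(displayedFPS)) FPS, input feel: ~\(Int(responsivenessFPS.rounded())) FPS-equivalent")
```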

The interpolated frames also aren't exactly perfect every time; you can get some visual artefacts with them.

In my opinion (once again, this is quite subjective), frame interpolation/generation is worth it if you have a VRR/high-refresh-rate monitor and you're playing natively at ~40-45 FPS or above. Getting the smoothness of 80+ FPS while still having input latency around the *40 FPS mark is usually pretty good to me.

*Note: 40 FPS obviously isn't a measure of input latency, but since the speed at which frames are generated is tied to input latency as well, what I mean is that the usual input latency for most games running at about 40 FPS is typically fine to me - noticeably better than 30 FPS, but not meaningfully different from 60. I play almost exclusively with controllers, not mouse and keyboard.
 
I thought Hidra was the base M5? Around the release of the M3 Ultra Studio, certain YouTubers were adamant that Apple had told them there would be no M4 Ultra. Now it exists? Very confusing.
 
The Hidra SoC was part of the M4-generation rumors, and IIRC it was thought to have a different architecture than the usual Ultra = 2x Max dies joined with UltraFusion.
 
New Mac Pro rumor surfaced [1]. Nothing particularly surprising: it's claiming the MP will use the M4 Ultra, code-named Hidra. With a name like Hidra, I'm hoping it'll come in configurations with capabilities that are >2x M4 Max.

[1] https://www.macworld.com/article/28...king-on-an-m4-ultra-chip-for-new-mac-pro.html

Difficult to say. @Jimmyjames is also right that there is a competing rumor/leak that puts Hidra as the base M5.


Hidra, as it turns out, is a Norwegian island (Apple likes island names for its internal chip codenames - well, the PR "internal" names anyway; @Cmaier loves to point out that the engineers rarely use those themselves), as is Sotra, which is thought to maybe be the next M5 Pro/Max.

Like others, I thought maybe Hidra was a play on Hydra - and maybe it still is, it could be an island name and a pun - but it could also just be another island name, with the Hydra association being purely coincidental.

EDIT: I think there is also a problem here:

The findings reveal the internal identifier t8152, which matches the “M4 Ultra” scheme and also goes by the codename “Hidra.”

t8***s are usually iPhone and base M chips; the higher-level Mac chips are t6***s.


Apple could change things up, but right now I'd say t8152 looks more like a base M6 chip.

Side note: I think it's weird that the A18/A19 have higher ID numbers than the M4/M5 - it almost makes it seem like the Mac chips are a full generation behind their iPhone counterparts, but I don't think that's right ... it seems to have started when the A-series got an extra chip generation that the M-series didn't. (The M1 is synced up.)
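For anyone who hasn't followed the identifier scheme, here's a hedged little sketch of the pattern being described - the sample mapping only includes identifiers that are publicly well established, and the t8152/Hidra association is of course still just rumor:

```swift
// Sketch of the naming pattern described above; only well-known identifiers are listed,
// and nothing here confirms what t8152 actually is.
let knownIdentifiers: [String: String] = [
    "t8103": "M1",        // base M-series (and A-series) chips live in the t8xxx range...
    "t8112": "M2",
    "t6000": "M1 Pro",    // ...while the higher-tier Mac chips live in the t6xxx range
    "t6001": "M1 Max",
    "t6002": "M1 Ultra",
]

func likelyFamily(of identifier: String) -> String {
    if let name = knownIdentifiers[identifier] { return name }
    if identifier.hasPrefix("t8") { return "iPhone or base M-series chip (by the usual pattern)" }
    if identifier.hasPrefix("t6") { return "higher-tier Mac chip: Pro/Max/Ultra (by the usual pattern)" }
    return "unknown"
}

print(likelyFamily(of: "t8152"))   // by the pattern alone, this reads as a base chip, not an Ultra
```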
 
Interesting about the chipid; that would be a change from the usual pattern. It's been a while since I've poked around firmware, so I'll see what I can glean.

Another interesting factoid from Wikipedia about Hidra is that the island is split in two (rather than into many parts, as Hydra might imply):

The Old Norse form of the name was 'Hitr'. The name is probably derived from a word meaning "split" or "cleft", referring to the island's near division by the Rasvåg fjord.

So, something like a quad “M4 Extreme” configuration wouldn’t be implied by the internal name. But perhaps it’ll be 2x monolithic Ultra-class SoCs… one can dream 🙂

Edit: I checked up on some RE folks and it seems the leaked identifiers are part of the kernel debug kit. Edit 2: Unsurprisingly, that build is no longer available.

 
Unrelated to the M4, but I cannot fathom why an upcoming Studio Display would have an A19 Pro and an Apple TV would have an A17 Pro.

What is it used for? I know the current display has an A13 for Center Stage etc., but an A19? Weird.
 