How much does interpolation cost compared to straight frame rendering? Seems to me game video does not have a lot of cut-like full image changes, so it seems like interpolation could be a big gain if it is cheap enough.
It depends: are you talking about generating frame A, then frame B, then frames in between? That has a pretty hard limit on how much it helps, because you still have to wait until frame B is done. That's why the new AI frame-generation techniques instead generate frame A, then extrapolate 2-4 frames forward in time based on a guess about what the scene should look like while the real frame B is being rendered.
This works well, except that input lag becomes very noticeable, especially at low pre-frame-generation frame rates. In other words, if you can't play the game well "natively" (or with just standard DLSS-type optimizations), this kind of frame generation won't deliver a playable game. If your computer can already play the game well and you then turn on frame generation, you get even smoother visuals (but your input latency still won't improve, for games where that matters). This AI frame generation is thus basically a "win more" feature for your graphics card: even smoother graphics for those already able to play smoothly.
Above answer by dada_dave is very good, I'd just like to add a little to it.
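The "wait until frame B is done" cost of interpolation can be put in rough numbers. A minimal sketch with illustrative figures (none of these numbers come from the thread, and real pipelines have additional overheads):

```python
# Rough latency comparison: interpolation vs. extrapolation frame generation.
# Assumption (illustrative only): the display can show a frame as soon as it
# exists, so the only cost modeled is the extra hold time.

def interpolation_latency_ms(native_fps: float) -> float:
    """Interpolating between real frames A and B means A cannot be shown
    until B has finished rendering, adding ~one native frame time."""
    return 1000.0 / native_fps

def extrapolation_latency_ms() -> float:
    """Extrapolated ('guessed') frames are generated forward in time, so
    there is no extra hold -- but input is still sampled at native rate."""
    return 0.0

for fps in (30, 60, 120):
    print(f"{fps:>3} fps native: interpolation adds "
          f"~{interpolation_latency_ms(fps):.1f} ms of display latency")
# roughly 33.3 ms at 30 fps, 16.7 ms at 60 fps, 8.3 ms at 120 fps
```

This is why the latency penalty hurts most at low native frame rates: the slower the real frames arrive, the longer interpolation has to hold the picture back.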
New Mac Pro rumor surfaced [1]. Nothing particularly surprising— it’s claiming the MP will use the M4 Ultra, code-named Hidra. With a name like Hidra, I’m hoping it’ll come in configurations with capabilities that are >2x M4 Max.
[1] https://www.macworld.com/article/28...king-on-an-m4-ultra-chip-for-new-mac-pro.html
I thought Hidra was the base M5? Around the release of the M3 Ultra Studio, certain YouTubers were adamant that Apple told them no M4 Ultra. Now it exists? Very confusing.
The Hidra SoC was part of the M4 generation rumors, and IIRC it was thought to have a different architecture than the usual Ultra = 2x Max dies joined w/ UltraFusion.
The findings reveal the internal identifier t8152, which matches the “M4 Ultra” scheme and also goes by the codename “Hidra.”
Difficult to say. @Jimmyjames is also right there is a competing rumor/leak that puts Hidra as the base M5:
Apple A19, C2, M5 chip identifiers all leaked in early iOS 18 code
Several unreleased Apple Silicon chips recently surfaced in an internal build of iOS 18, including A19, M5, and C2, according to exclusive information provided to AppleInsider. (appleinsider.com)
Hidra, as it turns out, is a Norwegian island (Apple likes island names for its internal chip codenames; well, the PR "internal" names anyway, @Cmaier loves to point out that the engineers rarely use those themselves). So is Sotra, which is thought to maybe be the next M5 Pro/Max.
Like others, I thought maybe Hidra was a play on Hydra. Maybe it still is; it could be both an island name and a pun. But it could also just be another island name, with the Hydra association purely coincidental.
EDIT: I think there is also a problem here:
t8*** identifiers are usually iPhone and base M chips; the higher-end Mac chips are t6***s.
CHIP
The CHIP tag is a 16-bit unsigned integer that denotes the type of chip the firmware is to be installed to. It is one of the few tags that is not read from the fuses... (theapplewiki.com)
Apple could change things up, but right now I'd say t8152 looks more like a base M6 chip.
Side note: I think it's weird that the A18/A19 have higher ID numbers than the M4/M5. It almost makes it seem like the M-series chips are a full generation behind their iPhone counterparts, but I don't think that's right ... it seems to have started when the A-series got an extra chip generation that the M-series didn't. (The M1 is synced up.)
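The numbering pattern behind that guess can be sketched. The M1-M4 identifiers below are the publicly documented ones (per The Apple Wiki); anything projected forward is pure extrapolation of the pattern, not a leak:

```python
# Base M-series chip identifiers as publicly documented on The Apple Wiki.
# M1 = t8103, then t81x2 from M2 onward, with the generation digits stepping
# by one each year.
base_m_ids = ["t8103", "t8112", "t8122", "t8132"]  # M1, M2, M3, M4

def project(generations_ahead: int) -> str:
    """Continue the t81x2 pattern forward from the last known base chip.
    This is speculation about naming only, not a real identifier leak."""
    last = base_m_ids[-1]
    gen_digits = int(last[2:4]) + generations_ahead
    return f"t8{gen_digits}{last[4]}"

print(project(1))  # t8142 -> would fit a base "M5"
print(project(2))  # t8152 -> the leaked identifier, i.e. would fit a base "M6"
```

On that reading, t8152 sits one generation past where a base M5 would land, which is the post's argument for it being a base M6 rather than an "M4 Ultra".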
Another interesting factoid from Wikipedia about Hidra is that the island is split in two (rather than into many parts, as "hydra" might imply):

The Old Norse form of the name was 'Hitr'. The name is probably derived from a word meaning "split" or "cleft", referring to the island's near division by the Rasvåg fjord.