Oh yeah, the M1 (LPDDR4X) vs M1 Pro/Max (LPDDR5) split. I could see that. There are reports, like this one, showing that even though LPDDR6 hasn't been certified yet, there seems to be accelerated interest in it, so maybe TigerRick is making some good points.
Given that, I'd say it's more likely that Apple will go with LPDDR5X/T/whatever, but I could see Apple paying Samsung for early access to LPDDR6 before general availability for the M-series. That said, the A18s would almost certainly have to use LPDDR5X/T/whatever.
So Phoenix L/M are Nuvia cores ... apparently, looking stuff up, I'm not the only one confused about how they relate to Oryon. And I guess they will have efficiency cores? Which weren't ready for the laptop chips just a few months earlier? Though if Gen4 is announced in October, how long until it actually ships in products to customers?
Even if it were ready, will there be enough production to satisfy the tens of millions of chips Apple would need?
For the iPhones, very unlikely if certification isn't even until Q3 2024. For the Macs, maybe. Naturally I have no idea what Samsung's production availability is, but I wouldn't rule it out entirely like I'm basically doing for the A18.
I don't think losing efficiency cores makes much sense. They occupy very little space and offer similar perf/mm² to the P-clusters. Also, they are great for running auxiliary tasks, freeing P-cores for higher-priority work.
Agreed. If I were put in charge of designing a monolithic Ultra (and it's a good thing I am not), I would keep some. Assuming a 45+ TOPS NPU and keeping the M3 Max core counts the same for the M4 (which may not be the case, naturally), here's how I would slim down the M4 Ultra from two M4 Maxes:
Basically 75%. Why? I dunno, I just chose that.
CPU: Max: 12 P-cores / 4 E-cores -> Ultra: 18 P-cores / 4 E-cores -> Extreme: 36 P-cores / 8 E-cores
NPU: Max: 1 NPU -> Ultra: 1 NPU -> Extreme: 2 NPUs
GPU: Max: 40 cores -> Ultra: 60 cores -> Extreme: 120 cores
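The table above can be sanity-checked with a quick sketch (all figures are my speculation, not Apple specs; the 75% rule is arbitrary, and E-cores/NPUs get rounded down to cluster granularity):

```python
# Hypothetical ~75% scaling of a doubled M4 Max into a monolithic Ultra.
# Core counts are speculative forum numbers, not real Apple specs.
max_counts = {"P-cores": 12, "E-cores": 4, "GPU cores": 40, "NPUs": 1}

for part, n in max_counts.items():
    doubled = 2 * n          # naive 2x-Max Ultra
    scaled = doubled * 0.75  # the arbitrary 75% slim-down
    print(f"{part}: 2x Max = {doubled}, ~75% = {scaled:g}")
```

This reproduces 18 P-cores and 60 GPU cores; the E-cores (6) and NPUs (1.5) come out fractional, which is why the table rounds them down to 4 and 1.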
Everything else scales to about 75% too. Since these are desktop-only chips, you could then boost clock speeds for some of these cores, especially the GPU, to (partially? wholly?) make up the difference versus what the Ultra would have been at 2x a Max (taking into account the compute lost to the interconnect, the GPU might only have to boost < 10%, maybe even 5%). That would be fun and would make the high-end desktops true monsters.
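A back-of-envelope version of that boost estimate, assuming the interconnect costs a 2x-Max Ultra some fraction of its ideal GPU scaling (the efficiency figures here are guesses, not measurements):

```python
# How much would a monolithic 60-core GPU need to boost its clocks to
# match two interconnected 40-core Maxes? "eff" is an assumed
# interconnect scaling efficiency, not a measured figure.
mono_cores = 60
dual_cores = 2 * 40

for eff in (0.75, 0.80, 0.85):
    target = dual_cores * eff        # effective core-equivalents of 2x Max
    boost = target / mono_cores - 1  # required clock increase
    print(f"eff={eff:.0%}: required boost = {boost:+.1%}")
```

With ~80% interconnect efficiency, the monolithic part only needs about a 7% clock bump to break even, which is where the "< 10%, maybe even 5%" range comes from.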
Edit: I forgot to add additional PCIe lanes to the Ultra for extra Mac Pro expansion.