Apple M6 rumors/discussion

I think it's much more likely that Apple is having difficulty sourcing enough RAM than that there are hordes of people clamoring to run openclaw.
Anyone here buying a Mac Mini and running openclaw?
 

This report originates from Gurman, so take with a grain of salt, but it's possible
 

This report originates from Gurman, so take with a grain of salt, but it's possible
Even if they are creating a new tier above MBP, why wouldn’t they call it MacBook Studio instead of MacBook Ultra? Oh, I guess Studio is below Pro?

Maybe they rename the Pro Studio and name the new thing Pro.

Then pretend we all already own Studios.
 

This report originates from Gurman, so take with a grain of salt, but it's possible
It's also possible the exact decision on how to brand this device isn't nailed down yet. My money is on it being a higher-spec, higher-priced product in the MacBook Pro line, with the existing M5 models keeping their current prices; it would just be sold as the higher-spec MBP, similar to what has been done a few times in the past.
But my money was also on the Neo just being called MacBook, so my track record with predictions is as poor as Gurman's.
 
Even if they are creating a new tier above MBP, why wouldn’t they call it MacBook Studio instead of MacBook Ultra? Oh, I guess Studio is below Pro?

Maybe they rename the Pro Studio and name the new thing Pro.

Then pretend we all already own Studios.

Wouldn't that imply that Mac Studio (desktop) is a higher tier than Mac Pro (desktop)?... Actually... I guess that is also the case, given the Mac Pro still has only the M2 Ultra...
 
Wouldn't that imply that Mac Studio (desktop) is a higher tier than Mac Pro (desktop)?... Actually... I guess that is also the case, given the Mac Pro still has only the M2 Ultra...
Well, as I said, "oh, I guess Studio is below Pro," and hence they would rename all the existing products, like they did with the P and S cores.

Because I am being sarcastic.
 
Well, as I said, "oh, I guess Studio is below Pro," and hence they would rename all the existing products, like they did with the P and S cores.

Because I am being sarcastic.
I somehow missed all but the top paragraph in your prior post. Though my comment was also an attempt at a humorous dig at the Mac Pro anyway.
 
Keep an eye on the A20 Pro in the iPhone 18 Pro; the iPhone 18 should have at least 115.2 GB/s of memory bandwidth if they move to 6-channel as per the Weibo leak.
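For reference, the 115.2 GB/s figure falls straight out of the channel math, assuming 16-bit LPDDR5X channels and a 9600 MT/s data rate (both assumptions on my part, but they're the only combination that lands on that number):

```python
# Speculated 6-channel config; channel width and data rate are assumptions
channels = 6
bits_per_channel = 16      # LPDDR5X channels are 16 bits wide
data_rate_mtps = 9600      # LPDDR5X-9600 transfer rate in MT/s

bus_width_bits = channels * bits_per_channel              # 96-bit bus
bandwidth_gb_s = bus_width_bits / 8 * data_rate_mtps / 1000
print(bandwidth_gb_s)  # 115.2
```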
6 channel (96 bit) LPDDR5X ?

Qualcomm / Mediatek are rumoured to be doing 4 channel (96 bit) LPDDR6.

What are the benefits of each approach?
 
Any speculation for core count increases next gen?

A20 Pro = 2S+6E
A20 keeps 2S+4E

M6 = 6S+6E

M6 Pro/M6 Max = 8S+16P

On the GPU side we could see a core count increase due to:

- M6 will probably reuse the M5 GPU architecture (Family 10)
- Density improvement of N2

Possibly 20% increase (12/24/48 for M6/M6 Pro/M6 Max).
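Working backwards from the 12/24/48 guess, a uniform 20% bump implies today's 10/20/40 GPU core counts; a quick sanity check, nothing more:

```python
# Current GPU core counts implied by the 20% guess (12/24/48 targets)
current = {"M5": 10, "M5 Pro": 20, "M5 Max": 40}
speculated = {name: round(cores * 1.2) for name, cores in current.items()}
print(speculated)  # {'M5': 12, 'M5 Pro': 24, 'M5 Max': 48}
```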
 
Any speculation for core count increases next gen?

A20 Pro = 2S+6E
A20 keeps 2S+4E

M6 = 6S+6E

M6 Pro/M6 Max = 8S+16P

On the GPU side we could see a core count increase due to:

- M6 will probably reuse the M5 GPU architecture (Family 10)
- Density improvement of N2

Possibly 20% increase (12/24/48 for M6/M6 Pro/M6 Max).

Based on my M5 GPU analysis published in the M5 thread, as impressive as the M5 is, I really do hope they add at least a couple of GPU cores to the base M6:

View attachment 38308

Base M5 GPU analysis: This comes from NotebookCheck running CP2077 (now native to macOS) with load-only power (i.e. load minus idle). As we can see, the M5 is a massive upgrade over the M4 in the MacBook Air; however, the performance gains diminish quickly with increasing power. While the 10-core M5 in the 14" MBP is by no means inefficient, it can't keep up with the larger B390, with its 2 extra cores, in terms of performance/efficiency. Of course, as discussed in the Panther Lake-H analysis, that chip straddles the base-Pro divide for Apple processors, and no processor measured even attempts to operate in the power regime of the Air (which, again, loses far less performance than you might think).

The 10-core M5 is only 15% more performant than the 8-core while using 25% more power. At first glance this might be surprising, but I've noticed this pattern throughout Apple's GPUs at different tiers: the binned GPU performs much better than expected (or, conversely, the full GPU performs worse than expected). Part of it may be that memory bandwidth isn't always binned, but if memory serves this appears to hold even when it is.
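The perf/W penalty of the full die follows directly from those two numbers: 15% more performance for 25% more power puts the 10-core at roughly 8% worse efficiency than the binned 8-core:

```python
perf_ratio = 1.15    # 10-core vs 8-core performance (from the data above)
power_ratio = 1.25   # 10-core vs 8-core load-only power
efficiency_ratio = perf_ratio / power_ratio
print(round(efficiency_ratio, 2))  # 0.92, i.e. ~8% worse perf/W
```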
 
6 channel (96 bit) LPDDR5X ?

Qualcomm / Mediatek are rumoured to be doing 4 channel (96 bit) LPDDR6.

What are the benefits of each approach?
I can't imagine Apple doing 16-bit channels on LPDDR6. Aside from the ick factor, I would be surprised if such memory were widely available. It would likely cut their sourcing options way too much when that's already a huge problem to solve.

The only benefit of going with 16-bit channels I know of is the small energy savings, as you wouldn't be paying to move around the extra 32 bits per 256 used for ECC. That's 12.5% of your total cost for moving data between RAM and CPU, plus the cost of running the ECC engines. But you'd lose the ECC, and I think they won't make that choice, even if supply constraints are somehow not a factor.
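The 12.5% figure is just the inline-ECC overhead ratio, 32 extra bits carried per 256 data bits, as described above:

```python
ecc_bits = 32     # extra bits moved per burst for inline ECC
data_bits = 256   # useful data bits per burst
overhead = ecc_bits / data_bits
print(overhead)   # 0.125 -> 12.5% of the bits moved are ECC overhead
```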
 
I can't imagine Apple doing 16-bit channels on LPDDR6. Aside from the ick factor, I would be surprised if such memory were widely available. It would likely cut their sourcing options way too much when that's already a huge problem to solve.

The only benefit of going with 16-bit channels I know of is the small energy savings, as you wouldn't be paying to move around the extra 32 bits per 256 used for ECC. That's 12.5% of your total cost for moving data between RAM and CPU, plus the cost of running the ECC engines. But you'd lose the ECC, and I think they won't make that choice, even if supply constraints are somehow not a factor.
16-bit channels are the normal (maybe only? Not sure) option for LPDDR5, and they aren't icky at all.

They're a performance feature, rather than an efficiency feature. For a given total DDR interface width, splitting it into more channels gives you more command parallelism and more total open DRAM pages. Both of these are important for memory performance in complex SoCs with lots of agents generating memory traffic. Since LPDDRn originated as a standard for phone SoCs where the path to DRAM usually isn't more than 64 bits, you need narrow channels if you want more memory parallelism. Apple kept using it with M-series chips because lots of channels is a really good thing for desktop class performance, too.
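A toy comparison of how channel width trades off against parallelism for a fixed 96-bit bus; the banks-per-channel number here is illustrative, not from any spec:

```python
# For a fixed total bus width, narrower channels mean more independent
# channels, hence more concurrent commands and more open DRAM pages.
TOTAL_BUS_BITS = 96
BANKS_PER_CHANNEL = 16  # illustrative; real counts vary by DRAM generation

for channel_bits in (16, 24, 48):
    channels = TOTAL_BUS_BITS // channel_bits
    open_pages = channels * BANKS_PER_CHANNEL  # one open row per bank
    print(f"{channel_bits}-bit channels: {channels} channels, "
          f"up to {open_pages} open pages")
```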
 
Any speculation for core count increases next gen?

A20 Pro = 2S+6E
A20 keeps 2S+4E

M6 = 6S+6E

M6 Pro/M6 Max = 8S+16P

On the GPU side we could see a core count increase due to:

- M6 will probably reuse the M5 GPU architecture (Family 10)
- Density improvement of N2

Possibly 20% increase (12/24/48 for M6/M6 Pro/M6 Max).
This sounds nice but do you really think they'd bump both CPU and GPU core count at the same time?

Even going to a new node, I would expect them to only bump one or the other; my money is on GPU this time. Though it's possible they'll bump both, if the new "Fusion Architecture" and the move from mostly P cores to mostly E cores on the Pro and Max yield enough area savings even with more cores.
 
This sounds nice but do you really think they'd bump both CPU and GPU core count at the same time?

Even going to a new node, I would expect them to only bump one or the other; my money is on GPU this time. Though it's possible they'll bump both, if the new "Fusion Architecture" and the move from mostly P cores to mostly E cores on the Pro and Max yield enough area savings even with more cores.

I’ve seen this comment on RTW
You're right. I didn't realize there was such a small density improvement from N3P to N2. Even A16, with backside power, is only a 10% density improvement over N2P. Since density improvements are slowing down, total silicon area will have to grow.

If this is accurate, Apple going the chiplet route now already makes perfect sense. Monolithic solutions are becoming non-viable.
 