M5 Pro and Max unveiled

My logic is the following — it is much more economical to produce two small dies than one large die due to how defects work. And that could really add up with such an expensive process.
Maynard is here so he can speak for himself, but briefly, that ignores the costs (including yield issues, since the process isn't perfect) of packaging.
 
Packaging costs are a lot less than die manufacturing costs. Yield loss also grows faster than linearly: double the die area and you more than double the likelihood of a defect. Defect-limited yield is roughly exponential in area, like yield = e^(-kA), where A is the die area and k is proportional to the defect density.

Wafers are very expensive, so if I can cut my die size in half (from the reticle limit to half of it), my yield increases so substantially that the cost of packaging is tiny in comparison. These packaging technologies are roughly as complex as die manufacturing was ten or more years ago, so they're comparatively cheap.
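The yield argument above is easy to check numerically. This is a minimal sketch of the exponential (Poisson) yield model; the defect density and die areas are illustrative assumptions, not real foundry figures.

```python
import math

def die_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Poisson/exponential yield model: fraction of dies with zero defects,
    yield = e^(-k*A) with k the defect density and A the die area."""
    return math.exp(-defects_per_mm2 * area_mm2)

k = 0.001        # assumed defect density (defects per mm^2), illustrative only
big = 800.0      # one near-reticle-limit monolithic die (mm^2)
half = big / 2   # each of two chiplets covering the same total area

y_big = die_yield(big, k)    # yield of the monolithic die
y_half = die_yield(half, k)  # yield of one chiplet

# Chiplets are tested before packaging, so bad silicon is discarded per
# half-die rather than per full die; the smaller die's higher yield is
# what pays for the extra packaging step.
print(f"monolithic yield: {y_big:.3f}")
print(f"chiplet yield:    {y_half:.3f}")
```

With these made-up numbers the half-size die yields noticeably better than the full-size one, which is the whole economic case for splitting the design.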
 
Where do you think that Baidu user got the clocks, width info, and LLC information from? lol

It’s from the hubweb.cn site, which I posted before the Baidu link.
Sorry, not sure I follow? I’m simply saying I don’t see any mention of L1 or L2 in the linked post. I’m sure it’s my issue, as I said.

Edit: Oh, I was referring to the last link you posted. I didn’t know there was a previous link.
 
It looks like there is a misunderstanding on my part. On AnandTech, a user corrected me and said it’s like this. Looks like I read the Chinese labels wrong; I apologise for the confusion.

It should be like below:
S core: 1 MB of L2 per core + 16 MB shared L3
P core: 16 MB shared L2

Thanks, that would make more sense to me. What's interesting is that it's not far off from what we had until now: each core had priority fast access to a part of the L2 (I think it was around 2 MB) and slightly slower access to the rest. So in a way each core did have its own "private" L2, but other cores could access it directly.

So I wonder whether this new info is more of that, or whether we indeed get a new intermediate level of cache. If it's the latter, it's hard to imagine Apple going back to a classical L1->L2->L3 hierarchy after having had a superior solution for a while (their shared L2 essentially did the same job as a traditional L3, but was faster). A new 1 MB of very fast cache (more like an L1.5) could be interesting, though.
 
Yeah, “shared” may just mean “outside the cores.” Physical distribution is different from logical distribution, so it’s hard to interpret any of this.
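The trade-off being debated (flat shared L2 vs. a private-L2-plus-L3 hierarchy) can be made concrete with a toy average-memory-access-time calculation. Every latency and hit rate below is invented purely for illustration; none of these are measured Apple figures.

```python
def amat(levels):
    """Average memory access time for a cascade of cache levels.
    levels: list of (hit_rate, latency_cycles); misses fall through to
    the next entry. The last entry should have hit_rate 1.0 (DRAM)."""
    total, reach = 0.0, 1.0  # reach = probability the access gets this far
    for hit_rate, latency in levels:
        total += reach * hit_rate * latency
        reach *= (1.0 - hit_rate)
    return total

# Flat shared L2 with a fast "near" slice and slower remote slices,
# roughly matching the description of the older design (made-up numbers).
shared_l2 = amat([(0.95, 4),     # L1 hit
                  (0.60, 16),    # near slice of the shared L2
                  (0.90, 22),    # remote slices of the shared L2
                  (1.00, 300)])  # DRAM

# Hypothetical L1 -> private 1 MB L2 -> shared L3 hierarchy (made-up numbers).
three_level = amat([(0.95, 4),     # L1 hit
                    (0.70, 12),    # private per-core L2
                    (0.85, 30),    # shared L3
                    (1.00, 300)])  # DRAM

print(f"shared-L2 AMAT:   {shared_l2:.2f} cycles")
print(f"three-level AMAT: {three_level:.2f} cycles")
```

The point is not the specific outputs but the shape of the trade-off: a small private level cuts latency on local hits, while a flat shared L2 avoids an extra level of misses. Which wins depends entirely on the real latencies and hit rates, which is why the labels alone are hard to interpret.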
 
 