M3 core counts and performance

I’m sure more knowledgeable members will be able to give you a better answer, but this account’s proximity to Max Tech leaves me suspicious!

Would defective chips be salvaged by a voltage increase? That doesn’t seem like a plausible solution, but I could be wrong.

Chips are defective because of the presence of defects. There are many types of defects, but the gist of it is that these are mechanical problems; for example, gaps where there should be metal, metal where there shouldn’t be (shorting things together), foreign objects (like dust particles), polygons that didn’t end up with the correct shape, pinholes between adjacent vertical layers, etc.

Increasing voltage cannot solve any such problem.
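To put a number on why random defects kill die outright, here is a toy sketch of the standard Poisson yield model, Y = exp(−D·A): the chance a die escapes all random defects depends only on defect density and die area, not on operating voltage. The figures below are invented for illustration and have nothing to do with real N3B data.

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Poisson yield model: probability that a die has zero random defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Hypothetical numbers, for illustration only.
d0 = 0.2    # random defects per cm^2
area = 1.5  # die area in cm^2
print(f"expected yield: {poisson_yield(d0, area):.1%}")  # prints "expected yield: 74.1%"
```

Note that supply voltage appears nowhere in the model; turning it up changes which frequencies a good die passes at, not whether a defect is present.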
 
Happy to know my instincts were correct!
 
No. How would increasing supply voltage improve fault tolerance? That’s crazy. Increasing the supply voltage moves the Schmoo plot to the right (you get more chips that work at a given frequency), but cannot cure faults.
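A toy sketch of the Schmoo-shift point: model a die’s maximum frequency as rising with voltage overdrive, so raising Vdd lets a good die pass at a higher clock, while a hard structural fault fails at every (V, f) point. The `passes` function and its constants are invented purely to illustrate the shape of the argument.

```python
def passes(vdd: float, freq_ghz: float, has_fault: bool,
           k: float = 4.0, vth: float = 0.4) -> bool:
    """Toy pass/fail model for one point of a voltage/frequency (Schmoo) sweep.

    Max frequency rises roughly linearly with overdrive (vdd - vth);
    a hard structural fault fails at every operating point.
    """
    if has_fault:
        return False  # no supply voltage cures a real defect
    return freq_ghz <= k * (vdd - vth)

# Raising vdd from 0.9 V to 1.0 V lets a fault-free die pass at a higher clock...
print(passes(0.9, 2.2, has_fault=False))  # False (2.2 > 4*0.5 = 2.0)
print(passes(1.0, 2.2, has_fault=False))  # True  (2.2 <= 4*0.6 = 2.4)
# ...but a defective die fails regardless of voltage.
print(passes(1.2, 1.0, has_fault=True))   # False
```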

Could the chip failures to which he was referring be those caused by random dopant fluctuations (RDFs)? I don't have the expertise to interpret this paper, but it seems to be saying that increasing the voltage reduces the extent to which RDFs cause failures.


"Classically, failures in embedded memory cells are categorized as either of a transient nature, dependent on operating conditions, or of a fixed nature due to manufacturing errors. Symptoms of these failures are expressed as either: (1) an increase in cell access time, or (2) unstable read/write operations.

In process technologies greater than 100nm, fixed errors are predominant, with a minority of the errors introduced due to transient effects. This model cannot be sustained as scaling progresses due to the random nature of the fluctuation of dopant atom distributions and variation in gate length. In fact, in sub 100nm design, Random Dopant Fluctuation (RDF) has a dominant impact on the transistors’ strength mismatch and is the most noticeable type of intra-die variation that can lead to cell instability and failure in embedded memories."


So while these RDFs may have nothing to do with the manufacturing errors TSMC is encountering because of N3B's complexity, could increasing the voltage still reduce the net failure rate (which would be partly a product of the rate of mfr. errors and the fault rate due to RDFs)? I.e., is he saying TSMC (or Apple) is upping the voltage to reduce the defect rate it can reduce, to compensate for a defect rate it can't?
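The "product of two rates" idea in the question can be sketched in a couple of lines: if the two failure mechanisms were independent, net yield would be the product of the survival probabilities, and shrinking one term leaves the other untouched. All percentages below are made up for illustration.

```python
def net_yield(p_fail_defect: float, p_fail_rdf: float) -> float:
    """Net yield if a die must survive two independent failure mechanisms."""
    return (1 - p_fail_defect) * (1 - p_fail_rdf)

# Hypothetical: 20% hard-defect losses, 5% voltage-sensitive losses.
print(f"{net_yield(0.20, 0.05):.1%}")  # prints "76.0%"
# If raising voltage cut the second term to 1%, the defect term is unchanged:
print(f"{net_yield(0.20, 0.01):.1%}")  # prints "79.2%"
```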
 

This is not a thing that actually happens. First, all of the memory cells on the SoC are static, not dynamic, and the bit lines are all differential, so the cells are not going to be particularly sensitive to the tiny fluctuations in threshold voltage that would be caused by local variations in dopant concentration; it’s the difference between voltages that matters, not the absolute value of any voltage in the RAM. The noise from nearby switching would dwarf the effect of dopant variation. Additionally, in real life, any lateral variations in dopant concentration would be incredibly small (as opposed to vertical variations, which don’t matter). I don’t believe it.

And nobody would refer to this effect as there being “faults” or “defects.” It’s just margin, the same as if the wire resistances end up higher than expected. It affects the Schmoo, but is not a “defect” and does not prevent the chip from working.
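The differential-sensing point above can be sketched with a toy sense amp: only the difference between the bit-line pair matters, so a common-mode shift (supply noise, absolute voltage level) cancels, and device mismatch shows up merely as an input offset that eats margin. `sense` and all its numbers are invented for illustration; a real sense amplifier is far more involved.

```python
def sense(bl: float, bl_bar: float, offset_mv: float = 0.0) -> int:
    """Toy differential sense amp: resolves the *difference* between the
    bit-line pair, so a common-mode shift on both lines cancels out.
    offset_mv models sense-amp input offset caused by device mismatch."""
    return 1 if (bl - bl_bar) > offset_mv / 1000.0 else 0

# A 100 mV differential resolves correctly even with a big common-mode shift:
print(sense(0.75, 0.65))                  # 1
print(sense(0.55, 0.45))                  # 1 (same difference, shifted supply)
# Mismatch only matters if the offset eats the whole differential margin:
print(sense(0.705, 0.695, offset_mv=20))  # 0 (10 mV signal < 20 mV offset)
```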
 
Chips are defective because of the presence of defects. There are many types of defects, but the gist of it is that these are mechanical problems; for example, gaps where there should be metal, metal where there shouldn’t be (shorting things together), foreign objects (like dust particles), polygons that didn’t end up with the correct shape, pinholes between adjacent vertical layers, etc.

Increasing voltage cannot solve any such problem.
The most generous reading I can think of is that they were misusing "defect" to talk about bin sort failures. In which case, sure, you can harvest some underperforming die by boosting voltage beyond what would normally be acceptable.

But it doesn't fit with the way Apple usually handles harvesting, which is to stockpile die for use in some other product. One of the dumping grounds is Apple TVs, which always use a standard A-series chip with a few CPU/GPU cores fused off. I wouldn't be surprised if they also allow a lower frequency (or higher power) for those chips than the iPad or iPhone does.
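The harvesting scheme described above amounts to a bin-sort: route each die to a product tier based on how many cores passed test, instead of scrapping partially defective die. The thresholds and tier names below are hypothetical, not Apple's actual binning rules.

```python
def bin_die(working_p_cores: int, working_gpu_cores: int) -> str:
    """Hypothetical bin-sort: assign a die to a product tier by how many
    of its cores passed test, rather than scrapping it."""
    if working_p_cores >= 8 and working_gpu_cores >= 10:
        return "full part"
    if working_p_cores >= 6 and working_gpu_cores >= 8:
        return "binned part (cores fused off)"
    return "harvest for lower-tier product"

print(bin_die(8, 10))  # full part
print(bin_die(7, 9))   # binned part (cores fused off)
print(bin_die(4, 6))   # harvest for lower-tier product
```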
 
No. How would increasing supply voltage improve fault tolerance? That’s crazy. Increasing the supply voltage moves the Schmoo plot to the right (you get more chips that work at a given frequency), but cannot cure faults.

Maybe that's what they meant? That due to some manufacturing difficulties the chips were not reaching their target frequencies and the voltage had to be increased to compensate?
 
After we were told that no new Macs were inbound this year, especially not M3 Macs, uncle Gurman is now changing his tune and saying that new iMacs with M3, and maybe even MacBook Pros with M3+, will be announced at the end of the month. We’ll see, I guess.
 
I really hope he’s correct… the situation is farcical. I don’t understand how the Pro/Max arrives before the base M3, but I won’t argue with it.
 

Is the idea really that far fetched though? It seems that N3B is quite expensive and N3E is not quite ready for volume production yet. It might make sense to delay the volume SoC series and start with a more premium, lower volume product on N3B.

Not that I buy the idea of any chips in the M3 family being available this soon, I think we'll have to wait a few months more. But I could see Apple releasing an M2 Pro iMac, if only just to fill the holiday void (although it would be a bad sign for the eventual refresh).
 
It’s just that historically the base model has arrived first. True, two generations doesn’t create much history!

I hope it’s more than an M2 iMac. I can’t understand what would have taken so long for an M2 iMac to arrive. It’s very possible that I am letting my wish for the M3 cloud my judgement, though.
 
Here is an odd thought: the M3 iMac will be built with support for the Pencil. After all, it is already kind of easel-shaped. The Pencil will not come with the iMac, but there will be a charger on the side of the bezel, allowing one to be added easily.
 
Maybe we’ll just get M2 iMacs and small spec adjustments to the current MacBooks?
e.g. bump the base 14” Pro to 12C/19C, bump the 13” to 16GB (8GB is just offensive)
Would love to see M3 but who knows 🤷‍♂️
 
I can see any and all of the options you guys have mentioned as viable. I would say the pencil is less likely but not totally impossible. I just think that’ll be a “really big deal” announcement if they were to bring a feature like that to the Mac. But we’ll see.
 
And we’ll see really soon. Seems that Gurman was right on the money!
His predictions a couple weeks out are usually better than his normal tea-leaf reading, that’s true. Previously I’m pretty sure he was predicting no October event. It’ll be interesting to see what actually gets released. An M2/3 iMac seems a given, but beyond that? Hopefully it’ll be an M3, just to get a look at it.
 
He said that there would be an event just a couple of days ago.
 
Given that Apple titled the event "Scary Fast" (which is pretty on-the-nose for them; as you know, their event titles are typically cryptic), I'm guessing we'll be seeing M3s, and perhaps even M3s with boosted clocks.
 
He said that there would be an event just a couple of days ago.
That’s what I mean. His predictions made around two weeks or less before an event, in this case his predictions two days ago for Oct 30th, are usually fine. His predictions from a month or two ago, however, I’m pretty sure were in direct opposition to what he correctly said a couple of days ago: back then he was saying not to expect any M3 Macs, and likely no new Macs at all, until next year.

Basically it pays not to pay attention to him unless what he is predicting is right around the corner. If it’s about to happen, then he’s much more accurate. To be fair, that’s expected! But I still see people getting wrapped around the axle over long-term predictions made by him and other gurus that have almost no bearing on what actually happens.
 