Apple GPU stumble reported

Colstan

Site Champ
Posts
822
Reaction score
1,124
And internal combustion engines will be around more than 10 years, too. But that’s not where the innovation and excitement is.
The last telegram was sent on July 14, 2013. The telegraph machine had a nice 175-year run. I doubt x86 will have that same longevity, but it has its niche, and will continue to inefficiently power Windows boxen for years to come.
You say that, but x86 will stay alive in consumer, prosumer, and server markets for a long time (more than 10 years). Time will give us an answer.
I personally will never be buying one. To steal a line from @theorist9, I'd be using macOS if it ran on vacuum tubes. Thankfully, I have confidence in Apple's semiconductor design team. Despite alleged setbacks, Johny Srouji runs a tight ship.

"Mr. Srouji is known for demanding hard truths with the axiom that his meetings focus on problems, not successes."

If there is a kernel of truth to these latest rumors, then Apple won't be turning a blind eye to it, and will adjust accordingly.

Here's my prediction: the M2 was a modest update, meaning we had a year or so of less exciting products, but we'll all forget about it when the M3 is released and features a substantial architectural overhaul. Then Pat Gelsinger will start complaining about getting his lunch money stolen by a "lifestyle company", yet again. AMD is doing better on the technology front than Intel, but that's like bragging that you can outdrink the village drunk. I ran the numbers for the last quarter, and the short version is:

Intel's Client group: -17%
AMD's Client group: -40%
Mac sales growth: +25%

We nerds concentrate on the tech details, because that's the most interesting to us, but good technology alone isn't enough. You need to have smart folks developing your tech, but you also need an effective business strategy to leverage those innovations. Tim Cook and Apple have all the pieces necessary for the Mac to continue to succeed, not just technologically, but as a strong business.
 

exoticspice1

Site Champ
Posts
298
Reaction score
101
The last telegram was sent on July 14, 2013. The telegraph machine had a nice 175-year run. I doubt x86 will have that same longevity, but it has its niche, and will continue to inefficiently power Windows boxen for years to come.

I personally will never be buying one. To steal a line from @theorist9, I'd be using macOS if it ran on vacuum tubes. Thankfully, I have confidence in Apple's semiconductor design team. Despite alleged setbacks, Johny Srouji runs a tight ship.

"Mr. Srouji is known for demanding hard truths with the axiom that his meetings focus on problems, not successes."

If there is a kernel of truth to these latest rumors, then Apple won't be turning a blind eye to it, and will adjust accordingly.

Here's my prediction: the M2 was a modest update, meaning we had a year or so of less exciting products, but we'll all forget about it when the M3 is released and features a substantial architectural overhaul. Then Pat Gelsinger will start complaining about getting his lunch money stolen by a "lifestyle company", yet again. AMD is doing better on the technology front than Intel, but that's like bragging that you can outdrink the village drunk. I ran the numbers for the last quarter, and the short version is:

Intel's Client group: -17%
AMD's Client group: -40%
Mac sales growth: +25%

We nerds concentrate on the tech details, because that's the most interesting to us, but good technology alone isn't enough. You need to have smart folks developing your tech, but you also need an effective business strategy to leverage those innovations. Tim Cook and Apple have all the pieces necessary for the Mac to continue to succeed, not just technologically, but as a strong business.
Yes, we need to consider all things.
AMD's main money maker is server/datacenter; those Epyc chips are very powerful and efficient. AMD gained, and will gain, a lot more server market share in 2023. AMD's designs are server first, and they then trickle down to desktop and mobile.
I would say AMD also has the capability to succeed technologically while making bank/doing great in business. AMD's market cap in 2015 was around $1 billion, and in 2023 it's $101 billion. In eight years AMD managed roughly 100X growth in market cap; not bad for a company that only makes chips.
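As a sanity check on that growth figure, the implied compound annual growth rate works out to roughly 78% per year. This is just back-of-the-envelope arithmetic using the $1 billion and $101 billion figures from the post above and the 2015-2023 span:

```python
# Back-of-the-envelope CAGR for AMD's market cap,
# using the figures quoted in the post above.
start_cap = 1.0      # ~$1 billion in 2015
end_cap = 101.0      # ~$101 billion in 2023
years = 2023 - 2015  # 8 years

# Compound annual growth rate: (end/start)^(1/years) - 1
cagr = (end_cap / start_cap) ** (1 / years) - 1
print(f"~{end_cap / start_cap:.0f}x overall, roughly {cagr:.0%} per year")
```

Nothing rigorous, just showing that "100X" over that window is an extraordinary annualized rate.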

Now, Apple's designs are mobile first and desktop last. They make the most money from iPhones.

Intel is strange: its designs are neither mobile nor server focused, and Intel's management sucks, always delaying products and not delivering on time.
 

Citysnaps

Elite Member
Staff Member
Site Donor
Posts
3,603
Reaction score
8,825
Main Camera
iPhone
Here’s a big bag of conjecture on my part…

I suspect ray-tracing will play a large role in Apple’s upcoming AR project and device. To the point where it will be a make-or-break requirement for Apple’s overall AR success.

I’m still clinging to my long-held view that whatever the device will be, glasses (my bet) or headset, the heavy-lift image processing will occur in a user’s iPhone tucked away in their pocket, with a low-power, short-distance wireless wideband data link streaming multiple video streams to and from the glasses/headset (which would have minimal silicon and battery capacity).

Sticking my neck out with more conjecture… the above (if true), plus performance requirements and an accelerated development timeframe, might have influenced the direction and corresponding risk tradeoffs made regarding Apple’s latest GPU silicon.

3) If there WAS a power prediction problem, it would have affected the whole chip. The only way it could happen is if capacitance was wildly mispredicted, which would affect the CPU too, and would also mean that predicted clock speed would be way off.

That bit us back in the Pleistocene era, transitioning to (IIRC) 150nm and faster clocks. We took a hit due to sidewall capacitance from parallel metal runs, which required modifying our simulator, then moving wires around and doing another proto fab run (zebra’d with others’ devices, fortunately).
 

theorist9

Site Champ
Posts
603
Reaction score
548
And internal combustion engines will be around more than 10 years, too. But that’s not where the innovation and excitement is.
I wonder when biological computing will take off. The human brain only consumes 12 W. In spite of the low power consumption, though, I expect it would be better for desktop than mobile applications, given the size of the case and the fact that it needs to be recharged three times per day.
 

Yoused

up
Posts
5,511
Reaction score
8,687
Location
knee deep in the road apples of the 4 horsemen
I wonder when biological computing will take off. The human brain only consumes 12 W. In spite of the low power consumption, though, I expect it would be better for desktop than mobile applications, given the size of the case and the fact that it needs to be recharged three times per day.

Well, and it has to go into low-power mode for at least 90 minutes at a stretch or the filesystem will start to malfunction and feed spurious data to the processors. Then there are quirky items tucked away in obscure directories that corrupt data and cause unpredictable behaviors. At least we know that a given computer will operate almost exactly the same as a similar model – the biocomp design is desperately variable in functionality and subject to chemical signals that are difficult to debug.
 

Nycturne

Elite Member
Posts
1,111
Reaction score
1,426
Well, and it has to go into low-power mode for at least 90 minutes at a stretch or the filesystem will start to malfunction and feed spurious data to the processors.

I don’t know about you, but I’m pretty sure the filesystem is the variety that corrupts data every time it’s read, and will do it differently each time. We just hope the corruption isn’t of something important.
 

theorist9

Site Champ
Posts
603
Reaction score
548
Then there are quirky items tucked away in obscure directories that corrupt data and cause unpredictable behaviors. At least we know that a given computer will operate almost exactly the same as a similar model – the biocomp design is desperately variable in functionality and subject to chemical signals that are difficult to debug.
I don’t know about you, but I’m pretty sure the filesystem is the variety that corrupts data every time it’s read, and will do it differently each time. We just hope the corruption isn’t of something important.
These are software issues that can be fixed with better programming.
 

Colstan

Site Champ
Posts
822
Reaction score
1,124
It looks like Apple will definitely get volume production on 3nm next year. TSMC is holding a ceremony on Dec. 29th to commemorate the launch.

 

theorist9

Site Champ
Posts
603
Reaction score
548
It looks like Apple will definitely get volume production on 3nm next year. TSMC is holding a ceremony on Dec. 29th to commemorate the launch.

And there was mention on SemiAnalysis that TSMC is having yield trouble with this first-gen N3 (N3B), and that they thus plan (hope) to move to N3E, which has better yields, for devices sold later in 2023. I wonder if N3E's lower density makes it easier to produce:


"With N3B failing to reach TSMC’s targets for performance, power, and yield, N3E was developed. Its purpose was to fix the shortcomings of N3B. The first significant change is a slight relaxation of the metal pitches. ... N3E is doing much better than N3B and will be in high-volume production in the middle of next year. ... Unlike the previous nodelets that TSMC has launched for its N7 and N5 family of nodes, N3E is not IP-compatible with N3B IP. This means that IP blocks have to be reimplemented. As such, many companies, such as GUC, have chosen only to implement their IP on the more long-lasting N3E node."
 

Cmaier

Site Master
Staff Member
Site Donor
Posts
5,215
Reaction score
8,264
And there was mention on SemiAnalysis that TSMC is having yield trouble with this first-gen N3 (N3B), and that they thus plan (hope) to move to N3E, which has better yields, for devices sold later in 2023. I wonder if N3E's lower density makes it easier to produce:


"With N3B failing to reach TSMC’s targets for performance, power, and yield, N3E was developed. Its purpose was to fix the shortcomings of N3B. The first significant change is a slight relaxation of the metal pitches. ... N3E is doing much better than N3B and will be in high-volume production in the middle of next year. ... Unlike the previous nodelets that TSMC has launched for its N7 and N5 family of nodes, N3E is not IP-compatible with N3B IP. This means that IP blocks have to be reimplemented. As such, many companies, such as GUC, have chosen only to implement their IP on the more long-lasting N3E node."
Sounds like it’s not a density thing, but an issue with metal patterning. They must be having alignment problems with all the mask steps necessary on N3B. Not surprising.
 

theorist9

Site Champ
Posts
603
Reaction score
548
Sounds like it’s not a density thing, but an issue with metal patterning. They must be having alignment problems with all the mask steps necessary on N3B. Not surprising.
The metal patterning does sound like it's a primary issue. According to TSMC: "Furthermore, three critical layers requiring EUV double patterning in previous generation are replaced by single EUV patterning, which reduces process complexity, intrinsic cost and cycle time." And Patel adds: "The number of EUV layers goes from 25 in N3B to 19 on N3E".

However, is part of what enabled the reduction in patterning complexity the small reduction in density? Or if not, why did they reduce the density?

Separately, Patel says "TSMC backed off of the SRAM cell size versus N3B due to yields." But the statement about relaxing the metal pitches would apply to the chip generally, no?
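Taking the quoted figures at face value, the drop in EUV exposures is larger than the raw layer count suggests, since a double-patterned layer needs two exposures. A back-of-the-envelope count, assuming the "25 layers with 3 double-patterned" and "19 single-patterned layers" numbers above and ignoring everything else that differs between the nodes:

```python
# Rough EUV exposure count per wafer pass, from the figures quoted above:
# N3B: 25 EUV layers, 3 of which need double patterning (2 exposures each).
# N3E: 19 EUV layers, all single-patterned.
n3b_exposures = (25 - 3) * 1 + 3 * 2  # 22 single + 3 double = 28
n3e_exposures = 19 * 1                # 19 single
reduction = 1 - n3e_exposures / n3b_exposures
print(f"N3B: {n3b_exposures} exposures, N3E: {n3e_exposures} "
      f"({reduction:.0%} fewer)")
```

EUV exposures are only one contributor to cycle time and cost, of course, but it gives a feel for why fewer mask steps matter.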
 

Cmaier

Site Master
Staff Member
Site Donor
Posts
5,215
Reaction score
8,264
The metal patterning does sound like it's a primary issue. According to TSMC: "Furthermore, three critical layers requiring EUV double patterning in previous generation are replaced by single EUV patterning, which reduces process complexity, intrinsic cost and cycle time." And Patel adds: "The number of EUV layers goes from 25 in N3B to 19 on N3E".

However, is part of what enabled the reduction in patterning complexity the small reduction in density? Or if not, why did they reduce the density?

Separately, Patel says "TSMC backed off of the SRAM cell size versus N3B due to yields." But the statement about relaxing the metal pitches would apply to the chip generally, no?

I don’t know - “density” is meaningless to me. We never thought about it that way. We looked at specific design rules - metal pitch, metal spacing, minimum metal width, minimum polygon area, maximum polygon area, etc.

Those rules can translate into a “density,” but it’s not straightforward. On one process, given those rules, you arrange the transistors for a particular gate one way. On another process, with different “less dense” rules, you arrange transistors another way, and you may actually be able to fit more stuff in less area.

In other words, there is some hidden assumption going on when they say one is more dense than the other - they are assuming a particular layout that is only allowed to be altered by moving things apart or something. It just has no meaning to me as someone who spent considerable time figuring out where each piece of metal and transistor went.

What I assume is going on here is that N3B allows smaller metal spacing than N3E. Probably the same minimum width and thus N3B also allows smaller pitch than N3E. I guess. Assuming that is true on all metal layers, and assuming you have a design that uses 100% of metal tracks on all layers (unlikely), then you would be “more dense” on N3B if nothing else is different.

Presumably, too, all the extra patterning on N3B is to give the wires flat/straight vertical edges, and N3E allows for sloppier vertical edges on wires.
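The point about rules versus "density" can be made concrete with a toy calculation. All the numbers below are made up for illustration (not real N3B/N3E rules); the idea is just that minimum width plus minimum spacing set the track pitch, and pitch sets how many routing tracks fit in a given span:

```python
# Toy illustration: how minimum metal width/spacing rules translate
# into routing-track pitch. Numbers are hypothetical, not real
# N3B/N3E design rules.
def tracks_per_um(min_width_nm: float, min_spacing_nm: float) -> float:
    """Routing tracks per micron at minimum pitch (width + spacing)."""
    pitch_nm = min_width_nm + min_spacing_nm
    return 1000.0 / pitch_nm

# Hypothetical "tighter" vs "relaxed" rule sets:
tight = tracks_per_um(min_width_nm=14, min_spacing_nm=14)    # 28 nm pitch
relaxed = tracks_per_um(min_width_nm=14, min_spacing_nm=18)  # 32 nm pitch
print(f"tight: {tight:.1f} tracks/um, relaxed: {relaxed:.1f} tracks/um")
```

As the post says, whether a pitch difference like that shows up as a real area difference depends on whether the design actually saturates the metal tracks on those layers.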
 