The Fall of Intel

The G3/G4 chips could operate in little-endian mode, which effectively allowed little-endian code to run on a big-endian system without changes, by doing some trickery with memory addresses.

As I recall, this was mostly true. The LE mode worked perfectly on aligned scalars (which would be most of them), but experienced some hiccups if they were out of alignment, which might be almost never, but it only takes one to mess everything up. I could be remembering incorrectly, though.
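The "trickery with memory addresses" can be sketched concretely. This is my reconstruction, not something from the posts above: PowerPC's little-endian mode didn't reorder bytes in memory; it XOR-munged the low bits of each effective address within an 8-byte doubleword, which yields correct LE semantics for aligned scalars and an alignment exception otherwise.

```python
# Munge constants from the PowerPC address-modification scheme:
# a 1-byte access XORs the address with 0b111, 2-byte with 0b110,
# 4-byte with 0b100, 8-byte with nothing.
MUNGE = {1: 0b111, 2: 0b110, 4: 0b100, 8: 0b000}

def le_mode_load(mem: bytes, addr: int, size: int) -> int:
    """Simulate an LE-mode load from big-endian-stored memory."""
    if addr % size != 0:
        # Real hardware took an alignment exception here -- the
        # "hiccups" on unaligned scalars mentioned above.
        raise ValueError("unaligned LE-mode access")
    munged = addr ^ MUNGE[size]
    return int.from_bytes(mem[munged:munged + size], "big")

# 0x0011223344556677 stored big-endian at addresses 0..7:
mem = (0x0011223344556677).to_bytes(8, "big")

assert le_mode_load(mem, 0, 1) == 0x77        # LSB at address 0, as LE expects
assert le_mode_load(mem, 0, 2) == 0x6677      # low halfword
assert le_mode_load(mem, 0, 4) == 0x44556677  # low word
```

For aligned accesses the munged read lands exactly where a true little-endian layout would put the data; an unaligned access would cross the doubleword boundary after munging, which is why the hardware trapped on it.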
 
I’m skeptical of this. Panther Lake is going to be on 18A, and while die size is related to yield, and node differences are less obvious there than with larger dice, this is missing something. Panther Lake is less tiled than Meteor Lake: it should be a CPU + SoC tile on 18A and a GPU tile on N3, and that 18A tile won’t be too far off, plus or minus, from the dice used in a phone either.

Of course it could be another 10nm; I just wouldn’t take this too seriously. No clients that have *officially* announced their use of 18A (rather than mere interest) have backed out. What we do know is that the HD cells aren’t good enough yet on density and leakage, which is why 18A-P is coming to amend that, with Intel explicitly noting it for mobile; this is also apparently why Qualcomm dropped its early interest.

View attachment 33030
Simplified $20,000 wafer cost adjusted for various die sizes. In practice it might be better than this if your marginal area is mostly parallel logic with redundancy (e.g. an SoC growing from 150 mm² to 250 mm² where the main difference is GPU and encode block size), but still.
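For what it’s worth, the arithmetic behind a chart like that can be sketched with the usual dies-per-wafer approximation plus a Poisson defect model. The $20,000 wafer price is the figure above; the defect density here is purely illustrative, not a real 18A number.

```python
import math

WAFER_COST = 20_000     # dollars, the figure quoted above
WAFER_DIAMETER = 300    # mm

def dies_per_wafer(die_area_mm2: float) -> int:
    """Standard approximation: gross dies on a round wafer,
    discounting partial dies lost at the edge."""
    d = WAFER_DIAMETER
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Poisson defect model: Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100) * d0_per_cm2)

def cost_per_good_die(die_area_mm2: float, d0_per_cm2: float) -> float:
    good = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2, d0_per_cm2)
    return WAFER_COST / good

# D0 of 0.5 defects/cm^2 is an assumption for illustration only:
for area in (150, 250):
    print(f"{area} mm^2: ${cost_per_good_die(area, 0.5):,.0f} per good die")
```

Because yield drops exponentially with area while dies per wafer drop only linearly, the cost per good die grows faster than the area does, which is the whole point of the chart.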

lol, yeah, I called this. They also mention D0 vs. yields, as I did, and that’s being discussed. Notably, Patrick Moorhead also said this is bull.

Broadcom also didn’t use PDK 1.0, he says.
 
I can smell BS like this a mile away, and you see a lot of it with fab hype, often coming from overseas, even from seemingly reputable sources. Even the Broadcom claims re: Intel and yields are apparently wrong. Suggests something else is going on.
 
Where you would see real problems leaked is if there’s a performance issue: too much current leakage, ring oscillators running too slow, etc. Yield can be fixed, but those other problems are much more of a pain. (In my CPU career I never panicked about yields. But when the fab did something stupid, like adding a nitride layer to solve a yield problem without warning us, so that frequencies came in 20% lower than expected, that’s when it was all hands on deck.)
 


Yes, parametric yield is a separate issue, on top of catastrophic defects. TSMC is good with both.
 
their most valuable patents may be fab-related.

Which, why would you buy the fab without the associated patents? I took the patent war chest to be something separable from the foundry side of things, i.e. their CPU design patents.

As I recall, this was mostly true. The LE mode worked perfectly on aligned scalars (which would be most of them), but experienced some hiccups if they were out of alignment, which might be almost never, but it only takes one to mess everything up. I could be remembering incorrectly, though.

Which, ultimately, was still a win. I mostly remember the big lag in VirtualPC supporting the G5 because of these rather specific optimizations.

It's mostly an example where an existing mode on one architecture, made for the sake of that architecture, happens to make it easier to translate a different ISA.
 
Endianness matters mostly in the context of stored data records, and only a little in other code. A program in memory will, for all but a tiny fraction of its run, function exactly the same in LE or BE mode; some programs do not care at all. Taking apart the bytes of a scalar is a fairly uncommon thing in most programs. The transition from BE PPC to LE x86 was smooth because endianness was not really that big an issue: all anyone needed to track was which layout a stored record was in so that it could be handled properly.
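A minimal illustration of that point: in-memory logic is endianness-agnostic, and byte order only shows up at the moment a stored record’s layout is interpreted. (The record and field values here are made up for the example.)

```python
import struct

# A record stored on disk in big-endian order, e.g. written by a PPC Mac:
record = struct.pack(">IH", 0xDEADBEEF, 42)

# On a little-endian host, arithmetic and program logic are untouched by
# endianness; only the point of (de)serialization needs the stored layout:
value, count = struct.unpack(">IH", record)
assert value == 0xDEADBEEF and count == 42

# Naively reinterpreting with native (LE) order is the one mistake to avoid:
wrong, _ = struct.unpack("<IH", record)
assert wrong == 0xEFBEADDE  # byte-swapped garbage
```

As long as the reader tracks which layout the record used, the rest of the program never notices which mode it is running in.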
 

As an engineer who worked on such transitions, I’m well aware of what it took to make them (and what did and didn’t break). But that’s still not the context I made the statement in, nor does it address the claims I actually made.
 
I can smell BS like this a mile away, and you see a lot of it with fab hype, often coming from overseas, even from seemingly reputable sources. Even the Broadcom claims re: Intel and yields are apparently wrong. Suggests something else is going on.
Apparently it’s not that the Broadcom yield issues were wrong per se; rather, they were partially expected based on die size. In fact, the “less than 10% yield” claims probably came from that test run, but that’s also because the die Broadcom was using is near the reticle limit, and with a die that huge, low yield is expected. That said, the defect rate is indeed higher than what we saw from TSMC (on earlier nodes, not N3) at the same point in time, which may be why Broadcom was reportedly “disappointed”. Having said that, the article didn’t say what TSMC’s defect rate was for its current nodes at the same point of development, and, as I said, Intel still has time to improve it before full volume manufacturing; it’s already at the point of being viable.
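The “low yield is expected for a huge die” point falls straight out of any defect model. A quick sketch with a Poisson yield model and an assumed (not reported) defect density shows how a reticle-limit die lands under 10% while a phone-sized die on the same line yields fine.

```python
import math

def poisson_yield(area_mm2: float, d0_per_cm2: float) -> float:
    """Poisson defect model: fraction of dice with zero defects."""
    return math.exp(-(area_mm2 / 100) * d0_per_cm2)

RETICLE_LIMIT = 26 * 33  # mm^2: the ~858 mm^2 field of a modern scanner

d0 = 0.3  # defects/cm^2 -- an illustrative assumption, not a real 18A figure
print(f"100 mm^2 phone-class die:   {poisson_yield(100, d0):.0%}")
print(f"{RETICLE_LIMIT} mm^2 reticle-limit die: {poisson_yield(RETICLE_LIMIT, d0):.0%}")
```

At this assumed D0 the small die yields above 70% while the reticle-limit die comes in under 10%, so a “less than 10% yield” headline for a near-reticle-limit part says very little by itself.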


That isn’t to say that Intel doesn’t have its work cut out for it as a 3rd-party fab, as the SemiAnalysis article @The Flame linked in his post goes into, but the node itself being in decent shape is indeed good news.

SemiAnalysis article:


While it reads well and is very thorough, I think Dylan (I know the article was written by more than just him) is a little prone to exaggeration (not uncommon in his line of work). As I recall, he also said that Apple’s chip team would stall out completely because of GW3 (and others) leaving. I think @Cmaier would also dispute the article’s contention that Gelsinger was the savior of Intel. 🙃

The idea to spin off the chip design and everything else but keep the fabs under Intel is certainly interesting.
 
The problem is that the chip design business isn’t worth anything. They have no moat anymore, they have way too many employees per chip, and x86 is going to die. They should keep it at least long enough to guarantee customers for the fab business, and quickly pivot the design business to look a lot more like Qualcomm. But whatever.
 
I thought it was interesting that they are already partially spinning them out. I knew they were making them their own division, but I didn’t realize that they were already going to become their own separate entity with their own operational board and ERP - potentially even an IPO, just with Intel retaining a controlling stake. That will take time but they’re already planning on making the fabs a lot more independent than I had thought. Spinning them off suddenly becomes a real possibility.
 
Given their lack of a clear strategy, it’s going to end up worse than the spinoff of GlobalFoundries. Who are the customers for this thing? 99% of the volume is from Intel itself. But if they spin off the fabs, the design team has no incentive to keep using those fabs, since they aren’t as good as TSMC’s.

You have to do these things carefully and strategically, not just react.
 
I don't know enough about chip making to speculate, but is Intel's fab well-positioned to support external clients and provide whatever related services are needed, especially if that requires collaboration with other vendors?

I also wonder why TSMC doesn't offer a "white box" chip development service for device makers who just want a slightly tweaked generic ARM processor with a choice of different IP blocks for their needs. But I guess it's also possible TSMC is really dedicated to not competing against their clients and wants to work purely as a fab. It's interesting, though, that Google has their "own" Tensor chip, which is usually thought to be a modified Samsung Exynos.

On a somewhat related note, this reminds me: when the first H1 chip came out, one guy I played sports with claimed it was based on another company's IP, though he wouldn't say anything in detail. I do know he worked as an engineer at one of those companies that open up chips and analyze them for IP-related info, so I don't think he was making it up, but I'm curious how these things work behind the scenes.
 
I don't know enough about chip making to speculate, but is Intel's fab well-positioned to support external clients and provide whatever related services are needed, especially if that requires collaboration with other vendors?
It is not.

In Intel's glory years, tight integration was their big advantage: they were the biggest integrated device manufacturer (IDM) with the most advanced process tech. They could tailor their manufacturing process to the needs of their design team, and their designs to the needs of their process.

They've been trying to offer foundry services to outside customers for quite a lot longer than they've been in an existential crisis. It has never worked all that well, and a major reason why is that their fab just doesn't know how to support external clients, or collaborate with anyone.

To date, the most positive outcome (for the money people anyways) of engaging seriously with Intel foundry services is to get bought by Intel (see: Altera).

I also wonder why TSMC doesn't offer a "white box" chip development service for device makers who just want a slightly tweaked generic ARM processor with a choice of different IP blocks for their needs. But I guess it's also possible TSMC is really dedicated to not competing against their clients and wants to work purely as a fab.
That is precisely it. TSMC tries very hard to be a trusted partner to all of its customers, and that means there are red lines they do not cross. Competing with their customers is one of those lines. Perhaps this will change, eventually, but for now their management seems to know and understand that trust is the most precious commodity in the line of business they have put themselves in.

But it's interesting Google has their "own" Tensor chip which is usually thought to be a modified Samsung Exynos.
Not sure how that's linked to TSMC? Samsung Exynos chips are made by Samsung in their own fabs. (At least for now, apparently there's rumors that they're going to have to pull an Intel and start using TSMC, since Samsung fabs have been falling behind too.)

On a somewhat related note, this reminds me: when the first H1 chip came out, one guy I played sports with claimed it was based on another company's IP, though he wouldn't say anything in detail. I do know he worked as an engineer at one of those companies that open up chips and analyze them for IP-related info, so I don't think he was making it up, but I'm curious how these things work behind the scenes.
IP cores are licensed all the time. This isn't behind the scenes, it's very public info. That said, who licensed what isn't necessarily public.
 
If we're bending over backwards to be fair, 18A is the first node meant for customers at volume, so the success or failure of Intel's latest attempt to be a foundry is currently unknown. Previously they tried to be a foundry without abandoning the internal tools Intel's own chip designers used, and the result was predictably a disaster, since customers were using a different set of tools than what the foundry was built around. Now Intel has forced itself to be compatible with standard chip-design tools. Having said that, how well Intel can design its operations around 3rd-party customers remains to be seen. @mr_roboto's point still stands that integrating with customers is not something Intel fabs were designed to do, so there will almost certainly be teething problems at the very least. We also don't know exactly when customers will have access to 18A in volume, and thus when Intel's foundry will start to turn a profit instead of being a gigantic financial sinkhole.
 