The Fall of Intel

I know little to nothing about this, but given their track record so far I can easily believe Tom’s is misapprehending what a TSMC engineer actually said. That said, TSMC does seem to suggest that A16 might only be viable for larger chips “with dense power delivery networks”. I’ll be honest, I’m not sure what that means for its use in Apple products, but it doesn’t sound conducive to building A/M-series chips.

I think the issue is:

(1) It costs more, so you aren’t going to do it unless you really need to.
(2) The vias through the silicon may be fairly large (I would guess they might approach the size of a standard cell), so you have to be able to accommodate that.
(3) Backside power delivery isn’t all that helpful unless your current density is so high that power rails on M1-M3 would limit routing channels for signals. That doesn’t happen for most circuits, not even for most parts of CPUs.

I don’t think backside power delivery lets you get rid of the M1 VSS/VDD rails in each standard-cell row; the vias are likely to be too big even if they are a fraction of the size I think they are. So your savings are on M2 and M3 (and M4 and up, but most signals are routed on M1-M3, and there’s rarely much routing congestion above M3); the sketch below puts toy numbers on that.

I’d love to see a stack-up or design rules, though. Maybe TSMC has something planned that is completely different from how I think it would work.
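To put rough numbers on the "savings are on M2 and M3" point, here's a toy Python sketch. Every track density and power-strap fraction in it is a placeholder I made up, not a real TSMC design rule:

```python
# Toy estimate of signal tracks recovered if power straps come off M2/M3
# while the M1 rails stay (per the argument above). Every number here is
# an assumed placeholder, not a real design rule.

layers = {
    # layer: (routing tracks per micron, fraction assumed used by power)
    "M1": (2.5, 0.30),  # rails stay on M1, so nothing is recovered here
    "M2": (2.5, 0.15),
    "M3": (2.5, 0.15),
}

recovered = sum(tracks * frac for name, (tracks, frac) in layers.items()
                if name != "M1")
total = sum(tracks for tracks, _ in layers.values())
print(f"Recovered: {recovered:.2f} tracks/um of {total:.1f} tracks/um "
      f"across M1-M3 (~{100 * recovered / total:.0f}%)")
```

Under those made-up numbers you'd recover on the order of 10% of the M1-M3 routing resource, which fits the intuition that this only pays off when signal routing is genuinely power-rail limited.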
 
“because hot spots of the chip will now be located under a set of wires, making heat dissipation harder.”

What?
If anything, I thought backside power delivery was expected to improve thermal dissipation by flipping the active layer once again, meaning there's not a giant slab of silicon sitting between the active layer and the metal heatspreader.
 
If anything, I thought backside power delivery was expected to improve thermal dissipation by flipping the active layer once again, meaning there's not a giant slab of silicon sitting between the active layer and the metal heatspreader.
Yeah, I would think so. The copper interconnect should conduct heat to the spreader much better than the substrate.
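A toy 1-D slab comparison backs up that intuition. The catch is that the metal stack is mostly low-k dielectric rather than solid copper, so the effective vertical conductivity assumed below (k_beol_eff) is exactly the number one could argue about:

```python
# Toy 1-D slab comparison for the heat path. Bulk conductivities are
# textbook values (Cu ~400, Si ~150 W/m-K); the metal stack is mostly
# low-k dielectric, so its effective vertical conductivity is a guess.

def slab_resistance_k_per_w(thickness_m: float, k_w_per_mk: float,
                            area_m2: float) -> float:
    """1-D conduction resistance of a slab: R = t / (k * A)."""
    return thickness_m / (k_w_per_mk * area_m2)

area = 1e-6  # per square millimetre of die, in m^2

# Conventional orientation: heat crosses the full substrate to the lid.
r_substrate = slab_resistance_k_per_w(700e-6, 150.0, area)  # ~700 um Si

# Flipped orientation: heat crosses the metal stack instead.
k_beol_eff = 5.0  # assumed Cu/low-k composite, W/m-K; the contested number
r_beol = slab_resistance_k_per_w(12e-6, k_beol_eff, area)   # ~12 um stack

print(f"~700 um Si substrate: {r_substrate:.2f} K/W per mm^2")
print(f"~12 um metal stack:   {r_beol:.2f} K/W per mm^2")
```

Even with a pessimistic few W/m-K for the stack, ~12 µm of metal beats ~700 µm of bulk silicon on this model, so the quoted claim still looks backwards to me.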

All I can think of is that the comment was referring to the off-chip power/ground connections? The network shouldn’t be a problem, but I have no idea how you then connect to the board. Are there area pads, or are they on the perimeter? If they are area pads, I can see a bit of a problem to think about: you don’t want to short VDD/VSS, and you’d prefer a solid chunk of metal on that back surface. Perimeter pads probably wouldn’t give you sufficient current density, and you might have voltage sag in the center of the die. So it’s probably area pads, with some sort of funky spreader that has holes in it or something to allow connections to the board for VSS and VDD.
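To see why perimeter-only feeding sags, here's a rough 1-D sheet model; die size, power, grid sheet resistance, and pad pitch are all illustrative guesses, not real design data:

```python
# Rough 1-D IR-drop model: a square die drawing uniform current through a
# resistive power grid. Fed from two opposite edges, the worst-case sag
# (at the die center) is I_total * R_end_to_end / 8. All numbers are
# illustrative assumptions.

power_w     = 100.0  # assumed total die power
vdd_v       = 1.0    # assumed supply voltage
r_sheet_ohm = 0.005  # assumed effective grid sheet resistance, ohm/sq

i_total = power_w / vdd_v   # 100 A of supply current
r_e2e   = r_sheet_ohm * 1.0 # square die -> exactly one square edge to edge
sag_perimeter = i_total * r_e2e / 8.0

# Area pads at, say, 1/10 of the die edge pitch: each grid cell spans a
# 10x shorter distance, and sag scales with span^2 at fixed current
# density, so it drops by ~100x.
span_ratio = 0.1
sag_area = sag_perimeter * span_ratio ** 2

print(f"Perimeter feed: ~{1000 * sag_perimeter:.0f} mV sag "
      f"({100 * sag_perimeter / vdd_v:.1f}% of VDD)")
print(f"Area pads:      ~{1000 * sag_area:.2f} mV sag")
```

With those guesses, perimeter feeding costs ~60 mV at the die center (over 6% of a 1 V supply), while fine-pitch area pads cut that by roughly two orders of magnitude, which is why I'd bet on area pads plus a spreader with holes in it.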
 
I could swear I heard somewhere that Gelsinger had gotten Intel to relax its mandatory CEO retirement age rule so he wouldn't have to retire in 2026. If so, more reason to think he was forced out.
 
I could swear I heard somewhere that Gelsinger had gotten Intel to relax its mandatory CEO retirement age rule so he wouldn't have to retire in 2026. If so, more reason to think he was forced out.
CEOs don’t generally retire without warning and without a transition plan, so I’m sure he was kicked out.
 
I could swear I heard somewhere that Gelsinger had gotten Intel to relax its mandatory CEO retirement age rule so he wouldn't have to retire in 2026. If so, more reason to think he was forced out.
They did.
CEOs don’t generally retire without warning and without a transition plan, so I’m sure he was kicked out.
Yup. The snippet Gruber posted from Bloomberg paints the failure of Intel’s AI chips as a particular problem amongst Intel’s other recent failings. No doubt it was multifactorial. I found it interesting how many comments online defended Gelsinger. Not that I disagree that he inherited a troubled company, and his Hail Mary approach might indeed have been the best one available, but even so. A lot of the comments are just about how he was from engineering originally and therefore must be blameless.

I was amused by Gruber’s aside at the end of his summary; he’s quite right, of course. Bloomberg has never retracted that story, though a retraction is what you’d expect from any serious journalistic outlet.
 
CEOs don’t generally retire without warning and without a transition plan, so I’m sure he was kicked out.
Absolutely, it's just one more thing on the big pile of evidence. If this was voluntary and the board liked him, he'd have at least a few more months and would be involved in picking a successor.

It's going to be interesting to see where Intel goes from here. With Intel, it always comes back to manufacturing. I thought that surely they'd have been able to get process node development back on track by now, but instead things have gotten so bad they're fabbing their most important chip(lets) on TSMC N3B. The news stories would have us believe that things like Gelsinger's overpromises on AI products were the board's biggest issue, but his inability to turn around process node R&D has to be regarded as his biggest leadership failure IMO.
 
They finally ditched Gelsinger. He “retired” suddenly.


It’s a start, but I think Intel is way too far gone at this point.

Maybe TSMC can buy their USA foundries as a hedge against China invading Taiwan or something.

I haven’t seen Intel do anything innovative in a decade or more and I don’t expect them to start now with no budget.

Arc is just going to get slaughtered by Nvidia, and both Apple and AMD are killing them in CPU. They killed Optane, the only interesting thing I’ve seen from them in decades.

Maybe they can find an edge in some new product category, but my thoughts are that this is Pat taking his golden parachute before the stock and company collapse.

Maybe Nvidia should buy Intel for their x86 license? AMD have both CPU and GPU; Nvidia are missing a CPU division, and in terms of where their GPUs run, having the x86 platform would enable them to focus on integrations that they currently can’t.
 
It’s a start, but I think Intel is way too far gone at this point.

Maybe TSMC can buy their USA foundries as a hedge against China invading Taiwan or something.

I haven’t seen Intel do anything innovative in a decade or more and I don’t expect them to start now with no budget.

Arc is just going to get slaughtered by Nvidia, and both Apple and AMD are killing them in CPU. They killed Optane, the only interesting thing I’ve seen from them in decades.

Maybe they can find an edge in some new product category, but my thoughts are that this is Pat taking his golden parachute before the stock and company collapse.
The US would never allow a foreign sale. They need to spin off the fabs and put qualified technical management in charge of the spin-off. The design portion of the company doesn’t matter: x86 is fading, and they can’t compete on anything else because they don’t already have monopolist lock-in anywhere.

The fabs are critical, and need to be competitive. Put all the energy and effort into that, and allow the chip side to wither away if need be.
 
Ah, just edited as you replied... any thoughts on Nvidia buying Intel? I think it would work out pretty well for both.

Nvidia have the funds, the talent, and could do with an x86 license, if for no other reason than to transition their entire stack to ARM + x86 emulation like Apple has.
 
Ah, just edited as you replied... any thoughts on Nvidia buying Intel? I think it would work out pretty well for both.

Nvidia have the funds, the talent, and could do with an x86 license, if for no other reason than to transition their entire stack to ARM + x86 emulation like Apple has.
I don’t know if the license from AMD is transferable. I also don’t think Nvidia has any use for the fabs (and they have no expertise that would enable them to fix the problems). I don’t know why anyone would want to make x86 chips at this point; there’s no future in it.
 
Nah, I was referring to buying Intel for the IP ownership :)

i.e., AMD keep theirs; Nvidia gain Intel's ownership of x86 by buying Intel, as a mechanism for incorporating compatibility into their designs during a gradual transition to ARM (whilst also presumably knocking out half of the x86 competition to their future ARM/GPU platform).

I.e., basically buying them to kill x86 off, either phased out (compatibility integrated into transition platforms) or killed immediately.
 
Nah, I was referring to buying Intel for the IP ownership :)

i.e., AMD keep theirs; Nvidia gain Intel's ownership of x86 by buying Intel, as a mechanism for incorporating compatibility into their designs during a gradual transition to ARM (whilst also presumably knocking out half of the x86 competition to their future ARM/GPU platform).

I.e., basically buying them to kill x86 off, either phased out (compatibility integrated into transition platforms) or killed immediately.

Remember that you can’t make an x86-64 chip without AMD’s IP. Buying Intel’s IP doesn’t get you anywhere unless you are sticking to 32-bit chips.
 
Remember that you can’t make an x86-64 chip without AMD’s IP. Buying Intel’s IP doesn’t get you anywhere unless you are sticking to 32-bit chips.
I should add, I don’t know what happens to the license in either direction between AMD and Intel if Intel is purchased. I also don’t know if Nvidia and AMD already have some sort of cross-license (they may, in order to avoid GPU lawsuits, but I have no idea).
 
Re: A16/18

Apple is not going to use Intel anytime soon unless it’s at near parity on performance, price, and density, and at volume too. That, or some national security situation even more serious than the current one.
 
The US would never allow a foreign sale. They need to spin off the fabs and put qualified technical management in charge of the spin-off. The design portion of the company doesn’t matter: x86 is fading, and they can’t compete on anything else because they don’t already have monopolist lock-in anywhere.

The fabs are critical, and need to be competitive. Put all the energy and effort into that, and allow the chip side to wither away if need be.
Yep, exactly right. The fabs might not be the financially beneficial division in the short term, but in strategic terms, as IP and physical capability, they benefit America short and long term; with that, they’re also a sort of financial tool by proxy, if we care at all about economics and national security.

It’s very important that they live on and are spun off from the vertically integrated dysfunction Intel left them in for so long. Even in spite of that, it’s clear from what I’ve seen that they have enough IP for a *reasonably* competitive process with Intel 4/3 and 18A, so the future could be bright if we’re willing to invest in it.

And we should!
 
FWIW, in the long term, if we do it right, I could absolutely see Apple using some Intel nodes. Things are changing with costs and the proliferation of silicon (demand is going up for leading-edge parts and content, chiplet tech keeps improving, etc.), and Intel seems to have a much better electrical grasp of silicon than Samsung does, judging by the Meteor Lake-on-Intel 3 results vs Intel 4 (a major iso-power performance improvement) and by 18A for Panther Lake. So in principle I think the foundation is there; working on their HD libraries (or lack thereof) and on leakage is what comes next.
 