The Fall of Intel

Bill Gates' supposed utterance of this is likely a myth
He says the characterization that no one would ever need more than 640K is wrong; rather, he thought the 640K ceiling would be good for at least a decade. When Apple pioneered the consumer-level GUI a few years later, it became clear pretty quickly that RAM needs would be going up fast.
 
That’s what I’d heard as well, but apparently it’s even less true than that:



At most he thought it would last 5 years, and even then he was pissed that the usable address space was that low.
 
I’m a little confused by this rumored lineup … who exactly is the 52-core consumer i9 chip for, and why?
Intel can't beat AMD on low-ish thread count benchmarks, but their area-efficient performance cores are great for massively parallel benchmarks. Lots of people in the enthusiast market have been trained to believe in high Cinebench MT scores even though nothing they do with their computers resembles that kind of load. That's what it's for.

The 16 performance cores are what most of the people buying these chips actually need.

*Bill Gates' supposed utterance of this is likely a myth
I agree. I hate Bill Gates as much as anyone, but it's obviously an apocryphal quote, or perhaps a joke being relayed without enough context to make it clear it was a joke.

640K was a platform limitation, after all. IBM chose the layout of the IBM PC's memory map, not Microsoft. The 8086/8088 had a 20-bit (1024K) address space, and IBM split that into 640K for RAM and 384K for ROM / IO. Wasn't even Microsoft's choice.
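That split is easy to see in numbers. Real-mode code forms a 20-bit physical address from a 16-bit segment and offset, and the 640K boundary is exactly where IBM parked the reserved region (a simplified sketch; the exact ROM/IO carve-up varied by adapter):

```python
# 8086/8088 real mode: physical address = segment * 16 + offset,
# wrapped to 20 bits, for 1024K of total address space.
def physical_address(segment: int, offset: int) -> int:
    return ((segment << 4) + offset) & 0xFFFFF

TOTAL_KB = (1 << 20) // 1024   # 1024K addressable
RAM_KB = 640                   # conventional memory below the reserved area

# Segment A000:0000 is the start of IBM's reserved 384K region.
assert physical_address(0xA000, 0x0000) == RAM_KB * 1024
print(f"{TOTAL_KB}K total = {RAM_KB}K RAM + {TOTAL_KB - RAM_KB}K ROM/IO")
```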
 
Intel can't beat AMD on low-ish thread count benchmarks, but their area-efficient performance cores are great for massively parallel benchmarks. Lots of people in the enthusiast market have been trained to believe in high Cinebench MT scores even though nothing they do with their computers resembles that kind of load. That's what it's for.

The 16 performance cores are what most of the people buying these chips actually need.

Yeah, it feels like they're trying to do to AMD what AMD did to them with Zen, but I'm not sure it's going to work. Intel was stuck at 4 cores for years, so AMD not being able to beat Intel in ST/low-thread workloads but outpacing them in multithreaded ones was a genuine win for Zen 1 HEDT Ryzen/Threadripper over Kaby Lake (even Coffee Lake only went to 8 cores), and again by Zen 2 in AMD mobile vs Intel mobile (with Zen 3 of course delivering the coup de grâce by beating Intel in ST as well).

However, my understanding is the same as yours: most consumer software and workloads are still coming to grips with having 16-24 cores/effective threads available (effective threads being my own personal measure of 1.25x core count for SMT2-based cores). There are of course workloads for developers and animators which benefit from big core counts, but Geekbench literally changed its entire MT benchmark algorithm specifically to curtail these kinds of shenanigans in the "core wars", because it felt most consumers were being oversold on systems that they not only didn't need but that were possibly worse for them.

Thus, while the core wars between AMD and Intel have indeed primed consumers with "big number go up" marketing, is that really enough here? Humanity often disappoints in these matters, but I'm still skeptical unless Intel can also keep the price down - maybe Intel 18A will be cheap? If Intel can be competitive in ST and handily win in those professional/semi-professional MT workloads while being just as cheap, then I suppose you might as well go with Intel on the off chance you actually do those workloads from time to time. But I'm struggling to believe they can pull all three off unless Nova Lake's microarchitecture is truly impressive and 18A is magic.

I know I'm preaching to the choir here, I'm just trying to figure out if there's something I'm missing.
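For concreteness, that ad-hoc effective-threads measure is just a multiplier (my own heuristic, nothing standard):

```python
# Ad-hoc "effective threads" heuristic from the post above:
# an SMT2 core is worth ~1.25 hardware threads, not 2.
def effective_threads(cores: int, smt2: bool = True) -> float:
    return cores * 1.25 if smt2 else float(cores)

# Illustrative parts: a 24C/48T SMT2 chip vs. a 16-core no-SMT chip.
print(effective_threads(24))              # 30.0 effective threads
print(effective_threads(16, smt2=False))  # 16.0 effective threads
```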
 
AMD is also rumoured to increase core counts for Zen 6. The "10950X" will have 24C/48T;
the Ryzen 7 part is 12C/24T.

I think gamers, who make up the majority of the DIY/enthusiast market, will wait for the X3D version.
 
According to the NBC article I posted earlier, Intel’s rumored upcoming x3D-style variant will be on the (comparatively) low-core-count models (8/16 and 8/12 cores), which makes sense given … you know … gaming. I wonder which AMD models will get it.

If the high end chips from both AMD and Intel provide little to no or even negative benefit to gamers, that’s good for gamers (they get the cheaper models), but surely bad for Intel/AMD who will suffer lower ASPs. Then again maybe everyone’s prices go up …
 

Officially, Intel is eliminating excessive management layers, but a CRN report indicates that the company is surprisingly laying off chip design engineers and architects.

Earlier this month, it turned out that the company will lay off 15% to 20% of its fab staff as well as outsource a significant portion of marketing operations to Accenture, which is projected to use AI to communicate with Intel customers.
 

Intel’s long-standing dominance in the server processor market is declining at an accelerating pace, as AMD continues its rapid growth with EPYC CPUs, while ARM-based solutions gain traction among enterprise customers.

Recent data from leading market research firms confirm that AMD not only surpassed Intel in server processor revenue at the end of 2024 but is also set to further close the gap in the coming years.

Looking further ahead, by 2027 AMD could reach a 40% market share, while Intel is expected to fall below the symbolic 50% threshold. ARM processors will likely continue their steady rise, potentially reaching between 10% and 12%. Analysts predict that by 2028, AMD and Intel could stand on equal footing — marking a significant turning point in the industry.
 
Those figures are based on revenue share, not SKUs. AMD servers trend toward more expensive on average (according to the article), so their product volume is probably somewhat lower. I am not sure, but I suspect ARM servers have average lower prices, so the stats probably mask their volume.
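The revenue-vs-unit gap is simple arithmetic; with made-up ASPs, a sketch shows how a revenue share overstates unit share when your chips cost more on average:

```python
# Revenue share vs. unit share: when your chips carry a higher ASP,
# the same revenue share means fewer units. All numbers are made up.
def unit_share(revenue_share: float, own_asp: float, rival_asp: float) -> float:
    """Convert a two-player revenue share into a unit (volume) share."""
    own_units = revenue_share / own_asp
    rival_units = (1.0 - revenue_share) / rival_asp
    return own_units / (own_units + rival_units)

# Hypothetical ASPs: $2000 vs. $1200 average per CPU.
share = unit_share(0.40, own_asp=2000.0, rival_asp=1200.0)
print(f"40% of revenue at the higher ASP is ~{share:.0%} of units shipped")
```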
 

It’s CPU revenue share, not server revenue share. Do AMD server chips really cost more than Intel’s?
 

The article is a little confusing on that. It also says:

Recent data from leading market research firms confirm that AMD not only surpassed Intel in server processor revenue at the end of 2024 but is also set to further close the gap in the coming years.

But then the other numbers, alternately described as AMD’s market share by processors or by revenue, are lower. If AMD’s market share were 30% but its revenue share were higher, reflecting higher ASPs, that would make sense, but they sometimes describe that lower number as revenue share, which has definitely not surpassed Intel’s. Maybe I’m just misunderstanding: is there a difference between CPU revenue share and server revenue share?

The implication is supposed to be that Intel is getting killed at the high end where people are buying AMD’s best most expensive server chips over Intel’s, but the shifting terminology confuses me.
 
Yeah, it’s very confusing.

When I was at AMD, all we cared about was CPU ASP, and our goal was always to get to $100. Intel’s was always much higher than that (I can’t remember the number, but I think it was closer to $200). When things were going well, like in the Opteron days, maybe we hit $100. In any event, it would surprise me if AMD is charging more than Intel for its chips (though it gets complicated, because it also depends on the product mix).

One other point, at least back then - machine price had little to do with CPU price. Sometimes our cheaper CPU ended up in a more expensive box, because the OEM decided people would pay more for an Opteron, or whatever.
 

The conversation also touched on concerns raised by several game developers, particularly those working with Unreal Engine, about the long-term practicality of extremely high core counts in consumer CPUs.

While Unreal Engine 5.6 has made strides in parallelizing workloads, Jesse alleges that this progress often comes with trade-offs. These include increased race conditions, rendering glitches, and added complexity that makes development and debugging significantly more challenging.

Yup. Again not sure who these new super high core count consumer chips from AMD/Intel will be aimed at … other than benchmarks …
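Those race conditions are the textbook unsynchronized read-modify-write. A toy sketch (Python rather than Unreal's C++, purely to show the shape of the bug): four threads bump a shared counter, with and without a lock.

```python
# The classic unsynchronized read-modify-write race: four threads
# increment a shared counter, with and without a lock.
import threading

class Counter:
    def __init__(self) -> None:
        self.value = 0
        self.lock = threading.Lock()

    def unsafe_add(self, n: int) -> None:
        for _ in range(n):
            self.value += 1  # read, add, write back: racy across threads

    def safe_add(self, n: int) -> None:
        for _ in range(n):
            with self.lock:  # serializes the read-modify-write
                self.value += 1

def run(method: str, n: int = 50_000) -> int:
    c = Counter()
    workers = [threading.Thread(target=getattr(c, method), args=(n,))
               for _ in range(4)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    return c.value

print("locked:  ", run("safe_add"))    # always 200000
print("unlocked:", run("unsafe_add"))  # can silently drop updates
```

The locked version is deterministic; the unlocked one may or may not lose updates on a given run, which is exactly why these bugs are so painful to debug.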
 
