X86 vs. Arm

When I look at the wikithingy article on "microcode", it comports with my understanding of the term. It is at the heart of the RISC concept: processors had little internal programs that implemented the elaborate ISAs of earlier processors, but analysis revealed that compilers were eschewing many of the complex instructions in favor of the more basic ones, leaving much of the architecture largely unused. RISC simply pulled the function of the processor out of the internal microcode and put it into the ISA, leaving more room in the core for things like more registers.

I suspect that an op like udiv on an 8086 or 68000 was basically the simple program one would write in software, distilled down into microcode. It would execute much faster, primarily because there was no code fetching, but it was still a lot like what one would write by hand. Of course, udiv is not a frequently-used op, but it still took up space in the processor core (ARM32 never had dedicated division ops). The microcode design gave engineers a simple way to make modifications to the processor, but when RISC came along and started pantsing the older processor designs, they had to switch to the micro-op structure to keep up. Pentium was the first x86 processor to almost completely ditch the microcoded design pattern.
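For a sense of what that "simple program distilled into microcode" looks like, here is a restoring shift-and-subtract divide sketched in C. This is a hypothetical illustration, not the actual microcode of any chip; real microcode sequenced internal registers one bit per cycle, which is why old divide instructions took dozens of cycles.

```c
#include <stdint.h>

/* Restoring shift-and-subtract division: roughly the loop a
   microcoded unsigned divide would step through internally,
   one quotient bit per iteration. Hypothetical sketch only. */
static uint32_t udiv32(uint32_t n, uint32_t d, uint32_t *rem)
{
    uint32_t q = 0, r = 0;
    for (int i = 31; i >= 0; i--) {
        r = (r << 1) | ((n >> i) & 1);   /* shift next dividend bit into remainder */
        if (r >= d) {                    /* trial subtract; keep result on success */
            r -= d;
            q |= 1u << i;
        }
    }
    *rem = r;
    return q;
}
```

For example, `udiv32(100, 7, &r)` yields 14 with remainder 2. Thirty-two dependent iterations like this are exactly the kind of thing that's cheap to express in a microcode ROM but wasteful as dedicated hardware, which fits the RISC-era decision to leave division out of early ARM.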
 
Pentium Pro was the first x86 to popularize the term "micro ops". More to the point, as @Cmaier said, micro ops are just a different name for microcode. It's a distinction without a difference; x86 chips never stopped being microcoded.
 
My understanding has always been that “micro ops” is whatever comes out of the decoder while “microcode” are internal programs used to implement more complex or situational instructions.
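That distinction can be made concrete with a toy model (all names and instructions here are invented for illustration): the decoder cracks simple instructions into micro-ops directly, while a complex instruction instead gets sequenced out of a small "microcode ROM" of canned micro-op programs.

```c
#include <string.h>

/* Toy model of the distinction above: simple instructions are cracked
   into micro-ops by the decoder itself; complex ones are expanded from
   a canned micro-op sequence in a microcode ROM. Hypothetical sketch. */

typedef enum { UOP_LOAD, UOP_ALU, UOP_STORE, UOP_BRANCH } uop_t;

/* "Microcode ROM": pre-written micro-op program for one complex op. */
static const uop_t ucode_rep_movs[] = { UOP_LOAD, UOP_STORE, UOP_ALU, UOP_BRANCH };

/* Decoder: writes micro-ops into out[], returns how many were emitted. */
static int decode(const char *insn, uop_t out[], int max)
{
    if (strcmp(insn, "add") == 0) {      /* simple: decoder emits 1 uop directly */
        out[0] = UOP_ALU;
        return 1;
    }
    if (strcmp(insn, "rep_movs") == 0) { /* complex: sequenced from the ROM */
        int n = (int)(sizeof ucode_rep_movs / sizeof ucode_rep_movs[0]);
        for (int i = 0; i < n && i < max; i++)
            out[i] = ucode_rep_movs[i];
        return n;
    }
    return 0;                            /* unknown instruction */
}
```

On this reading, both paths produce micro-ops; "microcode" is just the name for the stored programs the sequencer falls back on when the decoder can't crack an instruction in one go.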

Interestingly enough, some ARM designs used to have micro op caches until recently. From what I’ve read Apple never used micro op caches, but they describe some instructions as “microcoded” (for example complex table lookup).
 
*e.g. Arrow Lake's gaming performance relative to AMD and even Intel's own older chips is, for some reason, head-scratchingly bad despite its productivity gains. Seriously, no one can figure it out as far as I can see; with better single-thread performance and no SMT, gaming should be better.

I was wrong, people do have theories:


Apparently Arrow Lake separated its memory controller onto a different die and with a slower than normal ring bus, latencies are really bad. Pretty sure Zen 2 or 3 or both did something similar (separate IO die, not ring bus) but I guess the infinity fabric is good enough that it didn’t really affect Zen as much? Regardless this seems to be the prevailing theory on why Arrow Lake gaming is so underwhelming.
 
Who could have predicted that ditching backwards compatibility, the only reason you'd run x86, would fail.
yep. they needed to do this before Arm PCs/Macs became a thing. If you’re going to ditch compatibility today, why would you do this for x86S instead of Arm?
 
x86S didn't break backwards compatibility that hard. The biggest change is that it was to be a native 64-bit CPU (no more booting in 16-bit real mode) which only supported running 64-bit operating systems. Changes at the OS level were expected to be minimal, and at the application level, it was supposed to run existing x86-64 and x86-32 binaries without change.

That covers everything modern PC buyers want to run. You wouldn't be able to boot an ancient OS any more, but almost nobody cares. To a great extent, this would've just been formalization of the status quo - extremely few system vendors bother to test whether their shiny new 64-bit hardware and firmware can still successfully boot Windows 95, much less provide the drivers to properly support it.

The one exception to that rule: there are old embedded systems trundling along on a software stack that hasn't been updated in 30 years, with no support or source code available. There's a very small market in retrofitting these with a modern board and CPU once the original embedded computer fails. x86S CPUs wouldn't be able to do this without resorting to software emulation. I don't think Intel regards that as a commercially significant market, though.
 
Then why not pursue it? Why did the "input from the ecosystem" lead to Intel not continuing it? I mean, I see in the article that 32-bit app support would continue in ring 3 (untrusted user land, right?). So if all the x86-32 cruft was going to remain in the CPU, why would Intel's partners care? (I know you weren't privy to those conversations, I'm just confused)
 
Yep. 32-bit userlands would continue to work. OS vendors may not have wanted to patch their kernels for the expected gains, or perhaps there are players important enough who care about 16-bit apps on 32-bit OSes.
 
Definitely not privy but I can speculate just fine :)

Perhaps the ecosystem simply objected to any change of that magnitude, not wanting any risk of significant adoption costs (no matter how much Intel assured everyone they'd be small). Never underestimate the forces conserving all the worst misfeatures of PC system architecture long past the end of their relevance - google "A20 gate" to learn about a terrible 80286-era hack that stayed with us far too long. There's lots more of this out there, A20 is just a particularly egregious example.

Perhaps it was an initiative linked to Pat Gelsinger, and Intel's interim C-suite decided to throw it out along with him.

Perhaps Intel's new execs decided it needed to go for budget reasons. It's the kind of thing which is nice in the long term, but has short term costs, and right now Intel seems to be all about being shortsighted if it saves them a few pennies.
 
Especially given the situation we have where Apple Silicon (the most popular ARM desktop platform) can run x86 in emulation just fine....


edit:
as an enterprise, given the choice between AMD which would "run everything in hardware" and intel x86s which "might not run some legacy stuff you may have" - what decision maker is going to put their balls on the line (so to speak) to go with x86s for their fleet? You'd need to do a full audit of all of the software in your org, including shadow IT apps, etc. Or you could just... not do that.

For better or worse, intel would be expecting purchasing decision makers to take responsibility for that choice. As a business fleet decision maker, it's a total no-brainer to say "fuck that!". Enough stuff breaks in the technology landscape without creating more problems unnecessarily.

The technology benefits are somewhat irrelevant for the customer if they're putting themselves at risk per above.
 
i get the feeling that after intel and AMD formed that “x86 alliance” thing, the first order of business was Intel asking AMD if they were interested in x86s. They needed AMD buy-in to get MS on-board. AMD probably said no thanks, and then Intel gave up.
 
Another possibility that comes to mind: the alliance decided to pursue something similar to x86S, but different enough to not call it the same thing.
 
i sort of doubt AMD would be that dumb. But we'll see.
 

Yeah, I mean what's the win for AMD here? They're killing it right now, and this would level the playing field somewhat at best, and at worst give intel a leg up with whatever they've been cooking for years before proposing it to their frenemy.
 
I think you're being too cynical / paranoid about Intel's motivations.

Turns out I had drafts 1.0 and 1.1 in my downloads folder (fortunate, as they took down the PDFs). On skimming through, x86S can't be the bad end of the range you describe. It's just not that complex. The 'S' stands for Simplified, after all.

In fact, it only adds one feature, 64-bit native boot. The document even mentions that this can be implemented independent of the rest, which is all removal of legacy features.

So, if I were at AMD, I doubt I'd find x86S threatening at all. At worst, if the market accepted it, being a year or two behind would mean carrying some legacy engineering and validation costs a bit longer than Intel. But those would be a drop in the bucket relative to all the red ink Intel's locked into spilling over the next several years. They've committed to huge fab spending, and fabs are not cheap (as AMD well knows!).
 

Nah I get their motivations.

But from AMD's perspective it doesn't make business sense.

And from my perspective as a purchasing decision maker for an enterprise, the product doesn’t make sense due to risk.

Intel's motivations don't matter.
 
I could see decrufting the back end, though. Streamlining startup does make sense, and so does cleaning up some of the other non-user-level stuff. Those ISA mods would not be worth much, mostly because they would be sticking on what are organic RISC-like characteristics with bubblegum and twine while sacrificing simple backward compatibility – the only real reason x86 is still around.
 