Intel proposes x86S, a 64-bit only architecture

Aaronage
Would love to know your opinions on this proposal 😊
Proposed benefits from the article:
Benefits of a 64-bit-only architecture, according to Intel:
  • Using the simplified segmentation model of 64-bit for segmentation support for 32-bit applications, matching what modern operating systems already use.
  • Removing ring 1 and 2 (which are unused by modern software) and obsolete segmentation features like gates.
  • Removing 16-bit addressing support.
  • Eliminating support for ring 3 I/O port accesses.
  • Eliminating string port I/O, which supported an obsolete CPU-driven I/O model.
  • Limiting local interrupt controller (APIC) use to X2APIC and removing legacy 8259 support.
  • Removing some unused operating system mode bits.
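For anyone who hasn't touched this stuff in a while, here's a minimal sketch of the two port-I/O items in that list (assuming GCC/Clang-style inline asm on x86-64; port numbers and usage are purely illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* Single-byte port write, the classic CPU-driven I/O model.
 * x86S keeps OUT/IN but only for ring 0: the IOPL / TSS-bitmap
 * paths that could let ring 3 do this go away. */
static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* String port I/O: stream a whole buffer to one port with a single
 * REP OUTSB. x86S drops INS/OUTS entirely; drivers would use MMIO
 * or a plain loop of single-port writes instead. */
static inline void outsb(uint16_t port, const uint8_t *buf, size_t n)
{
    __asm__ volatile ("rep outsb"
                      : "+S"(buf), "+c"(n)
                      : "d"(port)
                      : "memory");
}
```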
 
Don't care too much what intel does these days.

Yes overdue, but I think most of the talent left and it's going to be a shit show that nobody will want.
 
Don't care too much what intel does these days.

Yes overdue, but I think most of the talent left and it's going to be a shit show that nobody will want.
I think we've found @Cmaier's doppelgänger.

[Spider-Man pointing meme]
 
Interesting


So less meaningful than I supposed, at least?

@Cmaier what's the privileged 32-bit ISA?
I assume he's referring to instructions that are only allowed to run at CPL = 0. Things like HLT, or MOV when used with control registers.
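If you want to see that in action, a tiny sketch (assuming Linux on x86-64, where the resulting #GP is delivered to the process as SIGSEGV):

```c
#include <stdio.h>

int main(void)
{
    printf("executing HLT at CPL=3...\n");
    __asm__ volatile ("hlt");   /* privileged: ring 3 gets #GP(0) */
    printf("never reached\n");
    return 0;
}
```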
 
I assume he's referring to instructions that are only allowed to run at CPL = 0. Things like HLT, or MOV when used with control registers.
So not all 32-bit instructions are getting removed? I remember you saying it was more complicated than ARM's transition, given the relationship between 32-bit and 64-bit in x86-64.
 
So not all 32-bit instructions are getting removed? I remember you saying it was more complicated than ARM's transition, given the relationship between 32-bit and 64-bit in x86-64.
well, i mean, i don't even know if intel actually intends to build such a thing. I thought it was just a research paper.

In the end, there are lots of useful 32-bit instructions - you seldom need the precision of 64-bit data, and even less often need a 64-bit address space, so if they ever did pull out 32-bit instructions it would be the ones that cause the most problems by complicating the design, I'd imagine.
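That's easy to see in any compiler's output; a quick sketch, assuming GCC or Clang targeting x86-64 (compile with -O2 -S and look at the asm):

```c
/* 64-bit mode already leans heavily on 32-bit instructions. */
int add_one(int x)
{
    return x + 1;   /* emitted as e.g. "lea eax, [rdi+1]" - 32-bit regs */
}

unsigned long zext(unsigned int v)
{
    return v;       /* "mov eax, edi": a 32-bit mov that zero-extends
                       into the full 64-bit rax for free */
}
```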
 
I guess by "32 bit removed" they mean the machine boots up in 64 bit mode (long mode? its been a while) rather than 8/16 bit DOS/real mode, but as per above the 32 bit instructions would still exist - you'd need to have at least some of them (e.g., the instructions to move, increment, decrement, IO, etc.) as sometimes you need to work with quantities of data smaller than 32 bits for poking at IO peripherals, etc.

Never mind that, as above, if you don't need 64-bit quantities of data you can save both space and (possibly) processing time with smaller units.

Unless intel intends to break most PC peripherals by doing this.

maybe... intel be doing intel things... they did a "no 16-bit mode" processor before to try and flood the embedded market with fire-sale-priced 386-era (from memory) chips, deliberately crippled to be unusable in PCs due to the lack of a 16-bit mode.
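On the peripherals point, a sketch of why sub-32-bit accesses stick around (MMIO_BASE is a made-up address purely for illustration):

```c
#include <stdint.h>

/* Hypothetical device: byte- and halfword-wide registers. */
#define MMIO_BASE ((volatile uint8_t *)0xFED00000u)

static void poke_device(void)
{
    /* 8-bit store: device registers are often only a byte wide */
    MMIO_BASE[0x04] = 0x1F;

    /* 16-bit read-modify-write on a halfword-wide status register */
    volatile uint16_t *status = (volatile uint16_t *)(MMIO_BASE + 0x08);
    *status |= 0x0100;
}
```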


 
Intel has published a version 1.2:


While simplifying and modernizing one of the world's most used architectures could be a worthwhile goal, support for legacy hardware and software has been one of x86's primary characteristics. Windows, primarily run on PCs equipped with x86 chips, has traditionally had legacy support, and the switch to X86S could break with that history.

If Intel were to use X86S for its CPUs in the future, there's also a big question of what AMD, the other primary x86 CPU designer, would do. Although Intel has always been the bigger company (and still is today despite Intel's ongoing crisis), AMD has been crucial in the history of x86 and developed the current 64-bit version of the ISA, known as x86-64 or AMD64.

Incidentally, AMD's 64-bit version of x86 likely doomed Intel's own 64-bit architecture, Itanium. If Intel seriously plans to use X86S, it might be beneficial for the company to get AMD's support or avoid a disagreement on legacy support.
 
Intel has published a version 1.2:


If Intel did that and AMD has any sense, they would ignore x86S and start designing RISC chips (Arm or RISC-V). No reason for AMD to help Intel out, and if we're going to break compatibility, why not instead go to something that already has ecosystems?

Note that even if we could have gotten a license to Merced/Itanium, I'm quite sure we would not have adopted it. We would have used the break-in-compatibility event to do our own thing and hope that ours was better than theirs. Adopting theirs would just keep the status quo going.
 
If Intel did that and AMD has any sense, they would ignore x86S and start designing RISC chips (Arm or RISC-V).
I believe AMD already has a fancy ARM architectural license. If they were truly ambitious, they would design dual-decode E cores that could provide x86-64 compatibility or run ARM code, with the ability to gate off the x86 side to save juice when it is not needed. It would be a little like Arm's A710 cores, but with x86 instead of AArch32.
 
I believe AMD already has a fancy ARM architectural license. If they were truly ambitious, they would design dual-decode E cores that could provide x86-64 compatibility or run ARM code, with the ability to gate off the x86 side to save juice when it is not needed. It would be a little like Arm's A710 cores, but with x86 instead of AArch32.
Exponential Technology was doing that, using x86 wrapped around PowerPC. In fact, our team in Texas took that over, and eventually became Intrinsity, which is now owned by Apple.
 
Exponential Technology was doing that, using x86 wrapped around PowerPC. In fact, our team in Texas took that over, and eventually became Intrinsity, which is now owned by Apple.
Except, I would envision something not like a wrapper but a discrete decoder that delivers μops to dispatch and maybe implements TSO. Running x86 code should mostly be pretty straightforward, since either arch, properly implemented, uses μops, and x86 could probably do some degree of OoOE without spitting up. An ARM E core is small enough that tacking an x86 decoder onto it would not be that expensive. Naturally, it would not decode privileged ops, because the compatibility mode would be restricted to userland.
 
Except, I would envision something not like a wrapper but a discrete decoder that delivers μops to dispatch and maybe implements TSO. Running x86 code should mostly be pretty straightforward, since either arch, properly implemented, uses μops, and x86 could probably do some degree of OoOE without spitting up. An ARM E core is small enough that tacking an x86 decoder onto it would not be that expensive. Naturally, it would not decode privileged ops, because the compatibility mode would be restricted to userland.
yeah, i was using the term "wrapper" loosely. The "microcode" was much more closely aligned with PPC than x86, and there was never any sequencing of microcode when just using PPC instructions.
 
I honestly think just following Apple's lead would make more sense. Design an ARM or RISC-V core that has extensions to enable efficient translation.

Maybe you can get better performance out of dual decoders, but now you've got the decoder itself on a mode switch, and two ISAs that need to be included in your validation plan. My naive take is that you're probably going to have a higher chance of success, and get most of the benefit, going Apple's route. It's not like AMD has resources to spare to go chase something this ambitious, IMO.
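For a sense of what those translation extensions buy: x86 guarantees TSO ordering on ordinary loads and stores, which Apple handles with a hardware TSO mode; without that, a translator has to add the ordering itself. A sketch of the pattern in C11 atomics:

```c
#include <stdatomic.h>

atomic_int data, flag;

void producer(void)
{
    atomic_store_explicit(&data, 42, memory_order_relaxed);
    /* on x86 an ordinary store already behaves like this release */
    atomic_store_explicit(&flag, 1, memory_order_release);
}

int consumer(void)
{
    /* on x86 an ordinary load already behaves like this acquire */
    while (!atomic_load_explicit(&flag, memory_order_acquire))
        ;
    return atomic_load_explicit(&data, memory_order_relaxed);
}
```

On a weakly ordered core, translating every x86 load and store into acquire/release operations costs real performance, which is why baking TSO into the core is such a win.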
 
My naive take is that you're probably going to have a higher chance of success, and get most of the benefit, going Apple's route.

I definitely do not disagree with that. The ARM decoders are so incredibly lightweight that they take up almost no space, and the branch predictors probably take up slightly less space. Overall, the RISC front-end is enormously less logic than the x86 nonsense (they really really should have transitioned to 16-bit opcodes with 386 in the mid 80s to make decode much simpler). What Apple did to simplify translation cost almost nothing while providing excellent results. It is just that AMD has so much invested in x86-64 (which they invented) that they might be inclined to continue to support it because, reasons.
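To put the decode point in toy-code terms (x86_insn_length() here is hypothetical; real x86 length decoding is a table-driven slog through prefixes, ModRM, SIB, and so on):

```c
#include <stddef.h>
#include <stdint.h>

size_t x86_insn_length(const uint8_t *p);   /* hypothetical helper */

/* Fixed-length ISA: the next n instruction starts are known
 * immediately, so n decoders can all go to work in parallel. */
void find_starts_arm(uintptr_t pc, uintptr_t starts[], int n)
{
    for (int i = 0; i < n; i++)
        starts[i] = pc + 4u * (uintptr_t)i;
}

/* Variable-length ISA: each start depends on the previous
 * instruction's length - a serial chain the hardware has to
 * break with predecode bits or length speculation. */
void find_starts_x86(const uint8_t *code, const uint8_t *starts[], int n)
{
    for (int i = 0; i < n; i++) {
        starts[i] = code;
        code += x86_insn_length(code);
    }
}
```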
 
I definitely do not disagree with that. The ARM decoders are so incredibly lightweight that they take up almost no space, and the branch predictors probably take up slightly less space. Overall, the RISC front-end is enormously less logic than the x86 nonsense (they really really should have transitioned to 16-bit opcodes with 386 in the mid 80s to make decode much simpler). What Apple did to simplify translation cost almost nothing while providing excellent results. It is just that AMD has so much invested in x86-64 (which they invented) that they might be inclined to continue to support it because, reasons.
i doubt that it would be a big consideration for them - AMD's DNA is doing other people's architectures. And I think maybe 2 people involved in inventing AMD64 are still there, and they aren't likely in any position to have a say in anything. If they think the strategic thing is to do Arm, I think they'd likely do Arm. And for the vast majority of the team, it would make no difference. Only the microarchitects have to worry about it, and that's by far the smallest part of the design team. Most people on the design team only know they need to make a logic block that produces output vector X from input vector Y in Z clock cycles.
 