Intel proposes x86S, a 64-bit only architecture

I believe AMD already has an ARM architectural license. If they were truly ambitious, they would design dual-decode E cores that could run either x86-64 or ARM code, with the ability to gate off the x86 side to save juice when it isn't needed.

AMD must have had an ARM license when they were still working on the K12. Whether they still have it, I don't know, though it's quite possible.
Combining RISC and x86 in the same chip was planned in the PowerPC 615, but it was never released. I'm not sure if that's the same project that Cliff mentioned.

On one hand, it would make the chip more versatile and x86 code faster than under emulation, and when the x86 decoder isn't in use it shouldn't draw any power.
On the other hand, emulation is already quite good (except for dynamically generated code), and another decoder would make the chip more complicated and take up die area that could be used for something else.
I'm a bit torn, but it could be something that might ease the transition from x86 to ARM.

My biggest problem with Windows on ARM isn't the applications, though, but rather the drivers. I've run several x86 games on Windows on ARM without any problems, but I couldn't install a VPN driver when I needed it.
And the fact that you can't get x86 drivers to work on Windows on ARM isn't something a chip like this would solve, because that's an operating system issue, and frankly, I can understand why they don't want to support x86 drivers.
But in my specific case it sucked, because it meant I had to work on a small Dell laptop screen (one of my customers scheduled a meeting so late that I could only take it from home). Originally I had hoped to use a Windows VM on my M1, since I only needed VPN access anyway, but that didn't work. And while the Dell recognized my Apple monitor, it only supported it either at its full 5K resolution or at the same resolution as the laptop screen. I was able to scale the Windows GUI, but that unfortunately didn't apply to the VM I was using for VPN access, which meant I had a really tiny window with tiny text on a 5K display. So in the end I used the laptop screen and had to scroll around, because the application didn't fit the screen...

Long story short, it might help applications, but those already run pretty well. Without ARM drivers, though, the transition won't get any easier.
 
I remember reading about x86S when I was at university, so it’s been over a year since they originally showed off a paper on it.
I thought it would’ve been nice for the operating system we wrote, to avoid setting up all the initial stuff: enabling protected mode and long mode and all that segmentation business.

I was told that the cost of supporting real mode and all the legacy stuff, in terms of transistors, was negligible though, and I believed it. So I'm not sure how big the hardware win is. But I’d love to have a higher baseline for instruction output in software. And you might as well remove the hardware at that point.
Though you still need to be instruction-level compatible, so you can’t re-compact the ISA: you still need the longer binary encodings for 64-bit instructions and all that.
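For context, the "initial stuff" being described boils down to flipping a handful of control-register bits in a fixed order. Here's a simplified Python sketch of the real-mode-to-long-mode enable sequence; the bit positions are the architectural ones from the Intel/AMD manuals, but real boot code also has to load a GDT and page tables along the way, which is omitted here:

```python
# Control-register bits involved in taking an x86 core from real mode
# to 64-bit long mode. Bit positions are architectural; the ordering
# below is the documented enable sequence, heavily simplified.
CR0_PE   = 1 << 0    # CR0.PE: protected mode enable
CR0_PG   = 1 << 31   # CR0.PG: paging enable
CR4_PAE  = 1 << 5    # CR4.PAE: physical address extension (required for long mode)
EFER_LME = 1 << 8    # IA32_EFER.LME (MSR 0xC0000080): long mode enable

def long_mode_sequence():
    """Return the (register, bit) writes in the order boot code performs them."""
    return [
        ("CR0", CR0_PE),     # 1. enter protected mode (after loading a GDT)
        ("CR4", CR4_PAE),    # 2. enable PAE, required by long-mode page tables
        ("EFER", EFER_LME),  # 3. set long mode enable in the EFER MSR
        ("CR0", CR0_PG),     # 4. enable paging -> EFER.LMA sets, long mode active
    ]

for reg, bit in long_mode_sequence():
    print(f"set {reg} bit 0x{bit:08x}")
```

An x86S-style design would effectively start at step 4, which is the simplification being wished for here.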
 
If AMD has any sense, they will ignore x86S and start designing RISC chips (Arm or RISC-V) if Intel goes down that path. There’s no reason for AMD to help Intel out, and if we’re going to break compatibility anyway, why not go to something that already has an ecosystem?

Note that even if we could have gotten a license to Merced/Itanium, I’m quite sure we would not have adopted it. We would have used the break-in-compatibility event to do our own thing and hope that ours was better than theirs. Adopting theirs would just keep the status quo going.
It seems AMD feels that Intel’s direction for the future of x86 is worth a shot:

 
The dinosaurs held a focus group just before the asteroid hit.

This…doesn’t seem like a great move.
Predictions like the following:


are notoriously unreliable (even by experts), and I agree with the author about being cautious. But having said that, ARM is clearly going to continue to push into traditional x86 territory, and with Apple Silicon still anchoring expectations about what is possible to build, it seems like AMD and Intel will be competing within a continually shrinking market segment (even if the overall PC market grows). So I make no predictions myself about what percentage of the market will be ARM by what date; each architecture has its own share of issues, but x86’s definitely seem greater, and ARM devices will be difficult to stave off forever.
 
Predictions like the following:


are notoriously unreliable (even by experts), and I agree with the author about being cautious. But having said that, ARM is clearly going to continue to push into traditional x86 territory, and with Apple Silicon still anchoring expectations about what is possible to build, it seems like AMD and Intel will be competing within a continually shrinking market segment (even if the overall PC market grows). So I make no predictions myself about what percentage of the market will be ARM by what date; each architecture has its own share of issues, but x86’s definitely seem greater, and ARM devices will be difficult to stave off forever.
x86 is still relevant for gaming and businesses. The PS5, and later the PS6, will be on x86. AMD is also ahead in servers, so I really don't get this bashing of x86.
 
This…doesn’t seem like a great move.
For whom? Seems like a logical step for both AMD and Intel.

Don't get me wrong, I hope arm64 manages to finally kill off the ugly 1970s zombie ISA, but the X1 Elite is only a first step toward that. x86 is going to be around for a while yet. Both companies know this, while also being aware that for the first time in 30-ish years there's now serious competition for the future of Windows. If they can coordinate on the right technical moves, they can help x86 stay relevant longer (or, perhaps in the minds of some execs, even stay dominant forever). On the other hand, if they fight each other on how to extend and/or streamline x86 - pack it up, x86 is done.
 
Predictions like the following:


are notoriously unreliable (even by experts), and I agree with the author about being cautious. But having said that, ARM is clearly going to continue to push into traditional x86 territory, and with Apple Silicon still anchoring expectations about what is possible to build, it seems like AMD and Intel will be competing within a continually shrinking market segment (even if the overall PC market grows). So I make no predictions myself about what percentage of the market will be ARM by what date; each architecture has its own share of issues, but x86’s definitely seem greater, and ARM devices will be difficult to stave off forever.
Look at it this way - what new device category invented in the next 20 years will run on x86?

Perhaps the point of this is that they will work together to come up with some new patent-encumbered RISC86 or something, with features that allow it to run x86 emulation 3% better or whatever. That would be better than if they really think x86 has a future. But it’s still dumb.
 
For whom? Seems like a logical step for both AMD and Intel.

Don't get me wrong, I hope arm64 manages to finally kill off the ugly 1970s zombie ISA, but the X1 Elite is only a first step toward that. x86 is going to be around for a while yet. Both companies know this, while also being aware that for the first time in 30-ish years there's now serious competition for the future of Windows. If they can coordinate on the right technical moves, they can help x86 stay relevant longer (or, perhaps in the minds of some execs, even stay dominant forever). On the other hand, if they fight each other on how to extend and/or streamline x86 - pack it up, x86 is done.

Uh oh. Did you just say “if they can ANTITRUST VIOLATION on the right technical moves”? This doesn’t seem to be a standards-setting organization - I doubt they will let other chip makers use whatever it is they come up with.
 
x86 is still relevant for gaming and businesses. The PS5 and later the PS6 will be on x86. AMD is also ahead in servers so I really don't get this bashing on x86.
I’m not bashing? The article is specifically about notebooks, and a large percentage of those will be Apple. Further, these are projections over the remainder of the decade, and it isn't that x86 is projected to disappear tomorrow but rather to occupy a smaller and smaller percentage of devices sold over time. ARM is continually assaulting the x86-held strongholds of servers, laptops, and probably soon desktops. Meanwhile, x86 has all but stopped even trying to get into mobile in any meaningful way (though it's occasionally found in Chromebooks and 2-in-1s). The article is quite correct that we don't know if ARM's momentum will slow, and as @mr_roboto points out, this gives x86 the best chance of remaining as competitive as possible for as long as possible. Others here feel that x86 has too many issues to be viable in the long term, however long that ends up being, and that it's better to switch now than dig in.

As for the Ampere One vs Epyc 9965 results, several mitigating factors should be noted: the price, ~$5,000 vs ~$15,000; the threads, 192 vs 384 (although an argument can be made that SMT2 only represents a 25% increase in performance - maybe more, since server workloads are particularly adept at taking advantage of extra threads - so maybe a 240-thread-or-more equivalent); the node, N5 vs N3; and finally the idle power draw, ~100W vs ~20W. Basically, in Ampere One we have a much cheaper, probably smaller chip, manufactured on an older node, with worse idle power, that uses fewer threads. In multithreaded scenarios all of that is going to lead to lower performance per watt, a gap no architectural advantage could possibly overcome - hell, for an extreme example, lower the power draw of the HX 370 to 20W and it matches, even slightly beats, the power efficiency of the base M3, which obviously doesn't make the HX 370 a good choice for fan-less devices or the core architecture "better". This isn't as extreme, since they have the same core count, but clearly not the same core size/cost. Now, the devices are what they are and I am not disputing the findings, but again, looking at the future, at the potential here ... I'd still say these results show that ARM looks more promising for servers. That's not a guarantee of success, of course; we've seen promising failures before over the last 30 years of server architectures (and I believe Ampere is up for sale). But we've seen what the ARM architecture can deliver.
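To make the thread math above concrete, here is the back-of-the-envelope in Python; the 25% SMT2 uplift is the assumption, everything else follows from it:

```python
# Rough single-thread-equivalent math for the Epyc 9965 vs Ampere One comparison.
# Assumption: the second SMT thread buys ~25% extra throughput per core, not 2x.
epyc_threads = 384
epyc_cores = epyc_threads // 2          # 192 physical cores with SMT2
smt_uplift = 0.25                        # assumed throughput gain from SMT2

epyc_equiv = epyc_cores * (1 + smt_uplift)   # effective single-thread equivalents
ampere_threads = 192                          # 192 single-threaded cores

print(f"Epyc: {epyc_equiv:.0f} thread-equivalents vs Ampere: {ampere_threads}")
```

So on this assumption the Epyc's 384 threads behave more like ~240 single-thread equivalents, not double the Ampere's 192.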

Look at it this way - what new device category invented in the next 20 years will run on x86?

Perhaps the point of this is that they will work together to come up with some new patent-encumbered RISC86 or something, with features that allow it to run x86 emulation 3% better or whatever. That would be better than if they really think x86 has a future. But it’s still dumb.
Oh I agree, most future devices that are low power, efficiency oriented, and cheap will be going ARM or RISCV.
Uh oh. Did you just say “if they can ANTITRUST VIOLATION on the right technical moves”? This doesn’t seem to be a standards-setting organization - I doubt they will let other chip makers use whatever it is they come up with.
That is an interesting concern ... when would it become an actual problem?
 
I’m not bashing? The article is specifically about notebooks, and a large percentage of those will be Apple. Further, these are projections over the remainder of the decade, and it isn't that x86 is projected to disappear tomorrow but rather to occupy a smaller and smaller percentage of devices sold over time. ARM is continually assaulting the x86-held strongholds of servers, laptops, and probably soon desktops. Meanwhile, x86 has all but stopped even trying to get into mobile in any meaningful way. The article is quite correct that we don't know if ARM's momentum will slow, and as @mr_roboto points out, this gives x86 the best chance of remaining as competitive as possible for as long as possible. Others here feel that x86 has too many issues to be viable in the long term, however long that ends up being, and that it's better to switch now than dig in.

As for the Ampere One vs Epyc results, several mitigating factors should be noted: the price, ~$5,000 vs ~$15,000; the threads, 192 vs 384 (although an argument can be made that SMT2 only represents a 25% increase in performance - maybe more, since server workloads are particularly adept at taking advantage of extra threads - so maybe a 240-thread equivalent); the node, N5 vs N3; and finally the idle power draw, ~100W vs ~20W. Basically, in Ampere One we have a much cheaper, probably smaller chip, manufactured on an older node, with worse idle power, that uses fewer threads. In multithreaded scenarios all of that is going to lead to lower performance per watt, a gap no architectural advantage could possibly overcome - hell, for an extreme example, lower the power draw of the HX 370 to 20W and it matches the power efficiency of the base M3, which obviously doesn't make the HX 370 a good choice for fan-less devices or the core architecture "better". This isn't as extreme, since they have the same core count, but clearly not the same core size/cost. Now, the devices are what they are and I am not disputing the findings, but again, looking at the future, at the potential here ... I'd still say these results show that ARM looks more promising for servers. That's not a guarantee of success, of course; we've seen promising failures before over the last 30 years of server architectures (and I believe Ampere is up for sale). But we've seen what the ARM architecture can deliver.


Oh I agree, most future devices that are low power, efficiency oriented, and cheap will be going ARM or RISCV.

That is an interesting concern ... when would it become an actual problem?

I don’t give legal advice on the internet. But we all know there is some latitude given to standards-setting organizations. However, I don’t get the feeling this is one of those - companies in a standards-setting organization can get in trouble, for example, for adopting standards that are blocked by patents they own, so that nobody can adopt the standards without paying. I assume that this organization has no intention of allowing anyone to make x86 chips (at least not without paying huge licensing fees), and I don’t see Intel licensing stuff under FRAND conditions. I’m not an antitrust lawyer, but I think every commercial lawyer would say “um…can we get the antitrust folks to take a look at this first?”
 
I don’t give legal advice on the internet. But we all know there is some latitude given to standards-setting organizations. However, I don’t get the feeling this is one of those - companies in a standards-setting organization can get in trouble, for example, for adopting standards that are blocked by patents they own, so that nobody can adopt the standards without paying. I assume that this organization has no intention of allowing anyone to make x86 chips (at least not without paying huge licensing fees), and I don’t see Intel licensing stuff under FRAND conditions. I’m not an antitrust lawyer, but I think every commercial lawyer would say “um…can we get the antitrust folks to take a look at this first?”
How different is this from, say, AMD's and Intel's current cross-licensing agreements? Could they argue this is just a formalization/continuation of those? I know you can't comment specifically; I'm just trying to wrap my head around the various liaisons companies are allowed to form vs. the ones they aren't.
 
How different is this from, say, AMD's and Intel's current cross-licensing agreements? Could they argue this is just a formalization/continuation of those? I know you can't comment specifically; I'm just trying to wrap my head around the various liaisons companies are allowed to form vs. the ones they aren't.

I don’t see how it’s the same thing. Cross-licensing just means “we won’t sue you for violating our intellectual property rights.” The cross-licensing agreements, as far as I know, don't include any provisions where the two companies exchange technical information, collaborate on technical details, plan new extensions to the ISA, etc. There’s no coordination when two parties cross-license - in fact, the two parties are typically adverse to each other under a cross-license, each constantly in danger of violating (or being accused of violating) some provision of the license. (I’ve handled several cases where parties to a license have sued each other.)

This new thing appears to be all about coordination between two direct market competitors (and some other stakeholders), unless I am missing something.
 
I don’t see how it’s the same thing. Cross-licensing just means “we won’t sue you for violating our intellectual property rights.” The cross-licensing agreements, as far as I know, don't include any provisions where the two companies exchange technical information, collaborate on technical details, plan new extensions to the ISA, etc. There’s no coordination when two parties cross-license - in fact, the two parties are typically adverse to each other under a cross-license, each constantly in danger of violating (or being accused of violating) some provision of the license. (I’ve handled several cases where parties to a license have sued each other.)

This new thing appears to be all about coordination between two direct market competitors (and some other stakeholders), unless I am missing something.
Agreed.
 
Genuinely, I think this could be really serious trouble for Intel.

They won't pass the audit - even if there's nothing wrong, the USA has been barring Huawei from supplying equipment, so no doubt China is using this as an excuse (legit or not) to do the same.


Also, unless the USA really hurries up and gets some class-leading fabs on home soil, TSMC/Taiwan is looking extremely precarious.
 


Apparently Intel had their own x86-64 project, but it was killed by higher-ups who were afraid it would harm Itanium. Then AMD did it. @Cmaier
 


Apparently Intel had their own x86-64 project, but it was killed by higher-ups who were afraid it would harm Itanium. Then AMD did it. @Cmaier

Of course, this wouldn’t have been the same as x86-64, but some other thing.

I was just discussing this an hour ago with our architect (who was not Keller!) and our CTO via Facebook messages. This wasn’t a huge secret.
 