Knowing Intel, it will be like it was with ARMv8: you will be able to run Athlon-type code in compatibility mode; you will be able to run 80386-type code directly in compatibility mode; you will be able to run 8086-type code in another compatibility mode.

So basically your code won't run on any older machine. Forced hardware upgrades = may as well review which platform to go with...

Additionally... x86 on ARM works pretty well these days.

I bet the vast majority of people running Windows are not running software that is more than a few years old.
Well, x86 can be translated to Apple Silicon to run very well, because of AS features designed to assist with that. Vanilla Cortex-X does not translate it quite as smoothly, lacking those enhancements.
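A big part of that is memory ordering. Assuming the enhancement meant here is Apple's hardware TSO (total store ordering) mode, which is the one most often cited, a toy C11 sketch shows what a translator otherwise has to worry about on a weakly ordered ARM core:

```c
#include <stdatomic.h>

atomic_int flag;
int data;

/* Publish data, then set the flag. x86's TSO memory model orders the
   two stores automatically; plain AArch64 does not, so a faithful
   translation has to make the release barrier below explicit. */
void publish(int value)
{
    data = value;
    atomic_store_explicit(&flag, 1, memory_order_release);
}
```

On x86 that release store compiles to a plain MOV, because TSO already guarantees the ordering. On a stock Cortex-X the translator has to emit a barrier for every store the legacy x86 code might be relying on; Apple's TSO mode makes that cost disappear.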
Yeah I guess that's my point. If they release these instructions today, it will be 3-5 years before they can be used widely for fear of cutting off compatibility with the existing fleet of machines. It's not going to be a quick catch-up for Intel.

This has already happened a number of times (SSE, AVX…). But every time it took years until new instructions would really become widely used.
Intel may spin it as "merely an incremental change to the instruction decoder", and for Intel maybe it's that "simple". But the reality is that it won't make any real-world difference outside of niche applications for at least 3-5 years, as the existing fleet of machines needs to be supported by any software that might otherwise make use of this.
Of course the application vendor could deploy two binaries, one compiled for this and one not... but that's more support burden.
Never mind all of the compiler vendors needing to update their development tools for this first as well.
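For what it's worth, the usual way around shipping two binaries is runtime dispatch inside one binary, e.g. with the GCC/Clang x86 builtins. A minimal sketch; a real application would hang its actual fast and fallback code paths off these checks:

```c
#include <stdio.h>

int main(void)
{
    /* GCC/Clang builtins on x86: run CPUID once, then query feature
       bits. One binary can carry both a new and a fallback code path
       and pick between them here at runtime. */
    __builtin_cpu_init();
    printf("sse4.2:  %d\n", __builtin_cpu_supports("sse4.2"));
    printf("avx2:    %d\n", __builtin_cpu_supports("avx2"));
    printf("avx512f: %d\n", __builtin_cpu_supports("avx512f"));
    return 0;
}
```

That's also why the lag gets measured in years: the check itself is cheap, but somebody still has to write, test, and support both paths until the old machines age out.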
If Intel is willing to transition its customer base to something new in 3-5 years, they picked the wrong new thing to transition to.
They should have remembered Apple's rule: you're going to get disrupted, so better to do it to yourself.

There is a reason why Hennessy & Patterson called x86 the "golden handcuffs". Yes, it makes Intel a lot of money (still), but it also shackles them to this almost antique ISA.
Aye but we all know that the current situation is really the fault of those ne'er-do-wells who designed the x86-64 - not only effectively keeping x86 alive, but allowing it to grow to new heights. I mean fuck those guys! Amiright?

I looked at the base 80386 instruction set with an eye toward how they could have built a practical 16-bit-opcode instruction set for the 32-bit architecture while maintaining an energy-efficient 8086 compatibility layer. It looks like they could have transitioned without too much difficulty, included a bit pattern that the fetcher could use to determine how long a variable-length op would be, eliminated all prefixes and segment nonsense as well as the string direction flag, and expanded to a 24-register set.

It would have taken considerable effort to redesign the ISA, but it would have put them in a very good position. No one runs 8086 code on newer systems, so it would have been just the right time to transition. But Intel is staid in their ways and only interested in preserving their crap design forever. If you step into their building, check your foresight at front door security.
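As a sketch of what that length-signaling bit pattern could look like — every bit assignment below is invented for illustration, though RISC-V later shipped essentially this trick in its compressed-instruction encoding:

```c
#include <stdint.h>

/* Invented encoding, purely to illustrate: the low two bits of the
   first 16-bit word announce the op's total length up front, so the
   fetcher never has to scan prefixes byte by byte. RISC-V's
   compressed encoding uses the same trick (low bits != 11 means a
   16-bit op). */
static int op_length_bytes(uint16_t first_word)
{
    return ((first_word & 0x3u) == 0x3u) ? 4 : 2;
}
```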
Sorry I was just joking around, an opportunity to rib @Cmaier about how the design he and his colleagues came up with for AMD64 was so successful that it not only allowed x86 to survive in the consumer market, it helped lead to its near dominance across computing, including servers, where it previously hadn't been dominant or even all that viable.

Again, I looked at how x86 could be converted to a 16-bit ISA by embedding the 8-bit opcode byte into 16-bit opcode words with bits that explicitly signal opcode length. Kind of a weird obsession of mine, designing ISAs. In the end, the x86 ISA is inherently a mess, which worked ok-ish for the 8086 but really needed to be shed by the 80386. The design decisions were understandable, but wrong.

AArch64 has an organic simplicity that harkens back to AArch32. It simply does its thing simply. x86 will never be able to match the elegance of ARM. They seem to be able to keep pace with the performance, but for how long? Even RISC-V is posing a serious hazard to them. At some point, backward compatibility will simply not matter anymore, and Intel will have no cards left in their hand.
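And for the embedding idea described above, a similarly hypothetical layout — none of this is anything Intel or AMD ever specified:

```c
#include <stdint.h>

/* Hypothetical layout (nothing Intel ever shipped): a 16-bit word
   whose top byte is all ones carries a legacy 8-bit opcode in its low
   byte, so the 8086 compatibility layer can reuse the old one-byte
   decode tables while every other 16-bit pattern stays free for
   native ops. */
#define LEGACY_TAG   0xFF00u
#define EMBED(op8)   ((uint16_t)(LEGACY_TAG | (uint8_t)(op8)))
#define IS_LEGACY(w) (((w) & 0xFF00u) == LEGACY_TAG)
#define LEGACY_OP(w) ((uint8_t)((w) & 0x00FFu))
```

The appeal of tagging legacy ops this way is that the compatibility layer stays cheap: one mask-and-compare routes the word to the old decoder or the new one.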
It wasn’t really a response to your previous post, I just used it as a springboard to make a little joke*. I apologize if I wasn’t clear on that point.
I did pick up on that, although I was under the impression that it was mostly not his design concept, he just drew the pictures to make it work.