At “the other place” I promised to address the fundamental disadvantage that Intel has to cope with in trying to match Apple’s M-series chips. I’ll do that in this thread, a little bit at a time, probably as a stream of consciousness sort of thing.
Probably the first thing I’ll note is that, from the perspective of a CPU architect, the overall “flavor” one gets when one looks at x86 is that the architecture is optimized for situations where (1) instructions take up a large percentage of the overall memory footprint, and (2) memory is limited. The point of the complicated (*) instructions supported by x86 is to encode as much instruction functionality in as little memory as possible.
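To make the density point concrete, here's a minimal C sketch. The byte encodings in it are the standard ones for these x86-64 instructions; the thing to notice is the contrast with a fixed-length ISA like AArch64, where every instruction costs 4 bytes no matter how simple it is.

```c
#include <stdio.h>

/* Machine-code bytes for a few common x86-64 instructions.
   A fixed-length ISA such as AArch64 spends 4 bytes on every
   instruction, regardless of how simple it is. */
static const unsigned char ret_insn[]    = { 0xC3 };        /* ret        */
static const unsigned char push_rbp[]    = { 0x55 };        /* push rbp   */
static const unsigned char add_eax_imm[] = { 0x05, 0x01,
                                             0x00, 0x00,
                                             0x00 };        /* add eax, 1 */

int main(void) {
    printf("ret:        %zu byte(s)\n", sizeof ret_insn);
    printf("push rbp:   %zu byte(s)\n", sizeof push_rbp);
    printf("add eax, 1: %zu byte(s)\n", sizeof add_eax_imm);
    return 0;
}
```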
I asterisked “complicated” because, to an architect, it means something a little different than one might think. “Complicated” here means that the instructions have variable lengths, and can touch multiple functional units at once - for example, requiring one or more memory loads or stores to happen as part of the instruction, as well as something that exercises part of the integer arithmetic/logic unit (fetch a number from memory, add it to something else, and put the result back in memory, for example).
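Here's what that looks like in practice. This is just an illustrative sketch: the instruction sequences in the comments are the typical ones a compiler emits under the SysV calling convention, not guaranteed output.

```c
/* One x86 instruction can load from memory, exercise the ALU, and
   store the result back, all in a single encoding. For *p += x a
   compiler can emit the single instruction

       add dword ptr [rdi], esi

   while a load/store (RISC-style) machine needs three, e.g. on AArch64:

       ldr w8, [x0]      // load *p
       add w8, w8, w1    // add x
       str w8, [x0]      // store the result back
*/
void add_to_mem(int *p, int x) {
    *p += x;
}
```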
x86-64 tried to minimize this kind of stuff - we designed those extensions to be as clean as we could while still fitting into the x86 paradigm. The problem is that x86 chips still have to be compatible with the older 32/16/8-bit instructions.
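One concrete example of that compatibility burden, as a small sketch: x86-64 had to steal encodings from the older modes, so the very same bytes decode to different instructions depending on what mode the CPU is in. (The repurposing of opcodes 0x40-0x4F as REX prefixes is the classic case.)

```c
#include <stdio.h>

int main(void) {
    /* The same two bytes decode differently depending on CPU mode:
       32-bit mode: 0x40 = "inc eax", 0x90 = "nop"  -> two instructions
       64-bit mode: 0x40 is a REX prefix on 0x90    -> one instruction
       x86-64 repurposed the one-byte inc/dec opcodes (0x40-0x4F) as
       REX prefixes, but legacy modes still decode them the old way,
       so the decoder has to carry both interpretations forever. */
    const unsigned char code[] = { 0x40, 0x90 };
    printf("%zu bytes; two instructions in 32-bit mode, one in 64-bit mode\n",
           sizeof code);
    return 0;
}
```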
Anyway, that’s the clear advantage x86 provides - the question is whether it still matters in modern computers. If you have 640KB of memory, and your spreadsheet takes 400KB before you’ve even loaded a file, you can see why shrinking the footprint of the instructions in memory would be a big deal. But in modern computers, not only do we have a lot more memory available, we are also working with a lot more data - most of the memory you are using at any given time is likely data, not instructions.
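You can see this on a running system. Here's a rough, Linux-specific sketch (it assumes /proc is available; the VmExe and VmData field names come from proc(5)) that compares a process's instruction pages against its data pages - for anything working on real data, the latter dwarfs the former.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* Allocate and touch some working data so VmData reflects it. */
    size_t n = 32 * 1024 * 1024;   /* 32 MB of "data" */
    char *buf = malloc(n);
    if (buf) memset(buf, 1, n);

    /* VmExe: executable (instruction) pages; VmData: data pages. */
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) return 1;
    char line[256];
    while (fgets(line, sizeof line, f))
        if (!strncmp(line, "VmExe:", 6) || !strncmp(line, "VmData:", 7))
            fputs(line, stdout);
    fclose(f);
    free(buf);
    return 0;
}
```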
So what you have with x86 is an instruction set architecture that was fundamentally designed to optimize for problems we don’t have anymore. It’s true that there have been improvements bolted on over the years, but backward compatibility means we still have to live with a lot of those early decisions.
Anyway, that’s just to get started. More later…