X86 vs. Arm

VLIW has been commercially implemented twice – by Intel and by Transmeta. The Intel version has been EOLed for several years now, being way too hot and not all that impressive. The Transmeta design had some small amount of success, but not enough to keep them from going under.
VLIW is a good way to lock yourself into a dead end: when it comes time to change the microarchitecture to improve performance, you end up having to translate everything on the fly, or recompile, or whatever. You also give up a little because the compiler cannot take advantage of things that are only knowable at run time (e.g., data-dependent pipeline holes), at least not without blowing up the instruction stream in size to handle every contingency. The complexity introduced by the hardware scheduler, in the grand scheme of things, is not so huge that it’s worth the trade-off to get rid of it, at least not when you factor in that you need to recompile everything every time you come out with a new chip with different characteristics.
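
Here's a toy C sketch of what I mean by a data-dependent pipeline hole (a hypothetical example, not real compiler output): the latency of the pointer-chasing load depends on the data, so a static VLIW schedule has to bake in some assumed latency, while a dynamic scheduler can fill the slot at run time.

```c
/* Hypothetical illustration: whether p->next is in cache is a property of
 * the data, not the code. A VLIW compiler has to schedule around some fixed
 * assumed load latency (padding with NOPs or hoping), whereas an out-of-order
 * core can keep issuing the independent work while the load is in flight. */
struct node { struct node *next; long value; };

long walk(const struct node *p, long *independent_counter)
{
    long sum = 0;
    while (p) {
        sum += p->value;           /* depends on the load of p */
        *independent_counter += 1; /* independent work a dynamic scheduler can overlap */
        p = p->next;               /* L1 hit or DRAM miss? Unknowable at compile time */
    }
    return sum;
}
```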

My advisor was all over John Ellis’ work and I had some conversations with him about Bulldog many many years ago, but we didn’t go anywhere with it in my research group. I still have a bound copy of his dissertation sitting in my office, though.
 
GPUs are one case where you can sidestep the downsides of VLIW - it's the norm for shader programs to be distributed in source or IR (partially compiled) form, with code generation for the actual GPU in the system deferred to runtime.
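
As a minimal sketch of what that deferral looks like with OpenGL (assuming a GL context and a loader such as glad are already set up – the names here are just illustrative): the application hands the driver GLSL source, and the driver's compiler generates code for whatever GPU is actually in the machine.

```c
#include <glad/glad.h>   /* assumed GL function loader */

/* The app ships GLSL source (or SPIR-V IR); machine code for the actual GPU
 * is only generated here, at run time, by the driver's compiler. */
GLuint build_fragment_shader(const char *glsl_source)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &glsl_source, NULL);  /* source in, not machine code */
    glCompileShader(shader);                        /* driver targets this GPU's ISA */

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    return ok ? shader : 0;                         /* 0 on compile failure */
}
```

So a new GPU generation with a different internal ISA (VLIW or otherwise) doesn't break shipped software – the driver just compiles the same shaders again.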
 
I think what actually happens is that some people overstate the importance of the ISA and understate the importance of the microarchitecture. Apple Silicon's success is much more microarchitecture than ISA, as witnessed by it running circles around other solutions using the same ISA.
 
I think what actually happens is that some people overstate the importance of the ISA and understate the importance of the microarchitecture. Apple Silicon's success is much more microarchitecture than ISA, as witnessed by it running circles around other solutions using the same ISA.
Yep. In my experience, the ISA rarely matters once you’ve made the basic choices – RISC vs. CISC, and the number of registers. Everything else is just differences at the margins.
 
Apple Silicon's success is much more microarchitecture than ISA, as witnessed by it running circles around other solutions using the same ISA.
When I look at GB6, the top Mac score is 27-something while the top Android score is 18-something on a Snapdragon 8 Gen 2. The latter, however, is running at 2.0GHz, while the Mac is at 3.7GHz. That looks like a rather linear scale, assuming the Snapdragon could be clocked up by ~80% and still run properly. And, of course, Apple seems to be the only company putting more than 1 or 2 P cores in a SoC.
 
When I look at GB6, the top Mac score is 27-something while the top Android score is 18-something on a Snapdragon 8 Gen 2. The latter, however, is running at 2.0GHz, while the Mac is at 3.7GHz. That looks like a rather linear scale, assuming the Snapdragon could be clocked up by ~80% and still run properly. And, of course, Apple seems to be the only company putting more than 1 or 2 P cores in a SoC.

From what I understand the fast X3 core on Snapdragon 8 Gen 2 is clocked between 3.2–3.36 GHz at peak, depending on the platform. Probably draws a substantial amount of power at that frequency as well.
 
From what I understand the fast X3 core on Snapdragon 8 Gen 2 is clocked between 3.2–3.36 GHz at peak, depending on the platform. Probably draws a substantial amount of power at that frequency as well.
Correct! It draws about 25% more power while being rather slower.
 
I keep hearing "… but Apple Silicon is not ARM …", which is a reach. In terms of architecture, it is ARM – if you compile AArch64 code and run it on an Apple processor, it will yield the desired result. You could, in theory, swap the Broadcom processor out of an R-Pi for an M1 and it would run the code, if you could manage to get everything else (e.g., the GPU) to work for you. The μarch is significantly different from stock, but in practice, it still is ARM.
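
A minimal sketch of that point (the file name and compiler invocations are hypothetical, just to illustrate): the exact same AArch64 code builds unchanged for a Pi running Linux and for an Apple Silicon Mac, because the architecture is the same – everything that differs is microarchitecture and platform.

```c
/* add_pair.c – hypothetical example. Builds unchanged with, e.g.:
 *   clang --target=aarch64-unknown-linux-gnu -c add_pair.c   (Raspberry Pi / Linux)
 *   clang --target=arm64-apple-darwin -c add_pair.c          (Apple Silicon Mac)
 * The AArch64 "add" instruction below is the same ISA in both cases. */
#include <stdint.h>

uint64_t add_pair(uint64_t a, uint64_t b)
{
    uint64_t r;
    __asm__("add %0, %1, %2" : "=r"(r) : "r"(a), "r"(b));
    return r;
}
```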
 
I keep hearing "… but Apple Silicon is not ARM …", which is a reach. In terms of architecture, it is ARM – if you compile AArch64 code and run it on an Apple processor, it will yield the desired result. You could, in theory, swap the Broadcom processor out of an R-Pi for an M1 and it would run the code, if you could manage to get everything else (e.g., the GPU) to work for you. The μarch is significantly different from stock, but in practice, it still is ARM.

Now do “but Linux is not UNIX”
 
I keep hearing "… but Apple Silicon is not ARM …", which is a reach. In terms of architecture, it is ARM – if you compile AArch64 code and run it on an Apple processor, it will yield the desired result. You could, in theory, swap the Broadcom processor out of an R-Pi for an M1 and it would run the code, if you could manage to get everything else (e.g., the GPU) to work for you. The μarch is significantly different from stock, but in practice, it still is ARM.
I think people just want to emphasise the point that Apple Silicon doesn't use ARM's cores. They are AArch64, but not any of the core designs ARM offers.

I don't think the majority of people who say this would dispute that the instructions it accepts are a superset of ARMv8.5.
At the same time, though, I do think it is important to acknowledge that it's not an off-the-shelf ARM core design when the majority of the market is. As an x86-world comparison, I don't think "Apple Silicon is not ARM" should be understood in the sense of "Intel Raptor Lake is not x86_64", but rather as "Intel Raptor Lake is not AMD".

As for replacing the Broadcom chip in a Raspberry Pi with an M1/M2, I really doubt that would work at all, disregarding the physical aspects of fitting it on there. Apple Silicon chips have an SSD controller as part of the chip, and it is fairly tightly tied to its memory controller. And you'd have to Frankenstein together something halfway between Asahi and the device tree describing whatever other chips exist on a Raspberry Pi board.
I know you were mostly just making a point about the instruction stream but it's a wider issue.

I also think it's worth pointing out that "Apple Silicon" is more than just the SoC. It's the entire platform as a whole. The system design is different, and ARM does not standardise things the same way the PC-compatible space has standardised things over the years. Device discovery does not happen via ACPI any more, for example – well, at the very least not all of it; some might still, I'm not sure. That has nothing to do with the instruction set and is all about platform choices.
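
To make that concrete, here's a minimal sketch (assuming a Linux system that boots with a device tree, e.g. a Raspberry Pi or an Asahi install): the description of the hardware comes from the flattened device tree the bootloader hands over, not from ACPI tables.

```c
/* Read the root "model" property of the device tree. /proc/device-tree is
 * the standard Linux view of the DT passed in by the bootloader; on a pure
 * ACPI system it simply won't exist. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/device-tree/model", "r");
    if (!f) {
        puts("No device tree - this machine probably enumerates hardware via ACPI instead.");
        return 1;
    }
    char model[256] = {0};
    fread(model, 1, sizeof(model) - 1, f);  /* property is a NUL-terminated string */
    fclose(f);
    printf("Device tree model: %s\n", model);
    return 0;
}
```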
 
Now do “but Linux is not UNIX”
Linux doesn't get an official certificate that Craig Federighi hangs on his wall.

Other than that, it's kinda nifty that macOS is the last officially certified desktop UNIX operating system on the market. So, there's that, I suppose.
 
I would say Minix, but I never used it, so I’d just be snarky. We used Linux and Xenix and FreeBSD…
Everyone knows that all the cool kids use DOS Shell.

When I was a youngin I knew a dude who refused to use the command line, and instead did everything this way. Good times.
 
My Dad’s favorite was, and probably still is, VAX/VMS. He vastly preferred it to the Unix family of systems.
I used a lot of VMS on VAX in college (we had a lot of DEC stuff, and PDP-11s for hardware interface practice). I used it for a bunch of chip design classes. I found it way too verbose, and I was always annoyed at having to use the “purge” command.
 
I used a lot of VMS on VAX in college (we had a lot of DEC stuff, and PDP-11s for hardware interface practice). I used it for a bunch of chip design classes. I found it way too verbose, and I was always annoyed at having to use the “purge” command.
Never used it myself. Never even seen it, in fact. My Dad said he liked it because he found the commands very intuitive, especially as compared to Unix. Also, it was the OS that came with the DEC Alpha machines, which he absolutely loved.
 