New Apple CPU rumors

Back when ARMv9 was announced, you didn't seem too impressed with it, and weren't sure that it really deserved a full increment, being more along the lines of a +0.1 release. Regardless of naming, @leman seemed really jazzed about the addition of SVE2, but there didn't seem to be much else that Apple didn't already have an equivalent for.
For what it's worth, SVE2 already existed as an optional extension in Arm v8. And despite some initial and widely reported claims that it would become mandatory in v9, it remains an optional extension.
 
My first computer had a Cyrix 6x86 onboard. My memories of it are not fond.
I never knew anyone who worked on those things. Not sure how good the designers were. They seemed to give up on even trying for the high end of the market around 1997 or so.
 
I never knew anyone who worked on those things. Not sure how good the designers were. They seemed to give up on even trying for the high end of the market around 1997 or so.

It was around 95 or so when my mom bought it for me at a local computer show. I didn't know much of anything about computers at the time beyond little tidbits like 486s being better than 386s and so on, so I begged for it on the notion that a 6x86 must be INCREDIBLE!

...I couldn't even run Windows 95 on the thing.
 
What is the macOS cycle? Do they make OS announcements at WWDC? I am just wondering: will the new version have some new features available only on M-series platforms? I remember Leopard was full-featured on both the PPC and x86 sides. But, this feels like a different ballgame.
 
What is the macOS cycle? Do they make OS announcements at WWDC? I am just wondering: will the new version have some new features available only on M-series platforms? I remember Leopard was full-featured on both the PPC and x86 sides. But, this feels like a different ballgame.

Yep, they announce new macOS versions at WWDC, just like iOS. Also likely there will be features that require Apple Silicon, though they will continue to generally support x86 for at least a few more years. I expect we will hear about the Mac Pro replacement, too. If they don’t replace it soon, then they get stuck having to support x86 that much longer.
 
If they don’t replace it soon, then they get stuck having to support x86 that much longer.
So, Apple announced the transition from PowerPC to Intel at WWDC 2005, with the first Intel Mac shipping in January of 2006. The final version of OS X to support PPC was Leopard, released in 2007, with the final build, 10.5.8, arriving in August of 2009. I can't see Apple doing the transition from Intel to ARM that quickly, but I suspect they will move faster than a lot of folks think. I was looking at the Mac Pro forum over at MR, and some people were thinking that the current 7,1 Intel Mac Pro would get another 10 years of support, because surely Apple would want to keep their pro customers happy. Maybe they'll get another decade of support, and a free pony, too.
 
So, Apple announced the transition from PowerPC to Intel at WWDC 2005, with the first Intel Mac shipping in January of 2006. The final version of OS X to support PPC was Leopard, released in 2007, with the final build, 10.5.8, arriving in August of 2009. I can't see Apple doing the transition from Intel to ARM that quickly, but I suspect they will move faster than a lot of folks think. I was looking at the Mac Pro forum over at MR, and some people were thinking that the current 7,1 Intel Mac Pro would get another 10 years of support, because surely Apple would want to keep their pro customers happy. Maybe they'll get another decade of support, and a free pony, too.

They are silly over there. The majority of Pros are sold into businesses that, in the unlikely event they keep them for a long time, depreciate them (over no more than 7 years), anyway.
 
… they will continue to generally support x86 for at least a few more years. I expect we will hear about the Mac Pro replacement, too. If they don’t replace it soon, then they get stuck having to support x86 that much longer.

I say, no Pro yet. The Studio Ultra already basically grinds the Xeon-W's gonads into the gravel. Anyone still using a Pro will keep using a Pro – Apple wants to have something stunning to take over the top end, which they will not have before next spring. Unless I am very much mistaken.

And supporting x86 merely means maintaining security updates/bug fixes. It is not exactly trivial, but not that big a deal. The last x86 update will probably appear in '27 or '28, just to be safe.
 
So, Apple announced the transition from PowerPC to Intel at WWDC 2005, with the first Intel Mac shipping in January of 2006. The final version of OS X to support PPC was Leopard, released in 2007, with the final build, 10.5.8, arriving in August of 2009. I can't see Apple doing the transition from Intel to ARM that quickly, but I suspect they will move faster than a lot of folks think. I was looking at the Mac Pro forum over at MR, and some people were thinking that the current 7,1 Intel Mac Pro would get another 10 years of support, because surely Apple would want to keep their pro customers happy. Maybe they'll get another decade of support, and a free pony, too.
They were probably on track to make the transition faster than PowerPC -> Intel before COVID happened.

We'll see how long Intel support lasts. For Metal-related things, it's a pain to keep separate code around just for Intel Macs.
 
https://www.twitter.com/i/web/status/1530849359579197441/

Because I guess microarchitecture and physical design make no difference, Kuo?

I taped out so many CPUs that were either physical design spins (increase the clock rate by 10% by laboriously optimizing each critical path, one-by-one, hand-sizing and locating each logic gate, optimizing each logical equation, and sometimes hand drawing the wires), or were new microarchitectures on the same process node (giving you 10-15% performance improvement due to CPI improvements), or both.

New process nodes give you a jump in performance/watt, but they are not necessary for making at least one very nice year-over-year improvement. Where it gets tough is doing the optimization thing more than once. You spend one year on it and you get 15% improvement. Two years gets you only another few percent, because you run out of things to optimize.
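
Just to put rough numbers on how those same-node gains stack up, here's a quick back-of-the-envelope sketch using the standard execution-time formula (the specific percentages are made up for illustration, not anyone's real figures):

```python
# Illustrative only: classic "iron law" of CPU performance,
# exec_time = instructions * CPI * clock_period.
# The numbers are invented to show how same-node gains compound.

def exec_time(instructions, cpi, clock_ghz):
    return instructions * cpi / (clock_ghz * 1e9)  # seconds

base  = exec_time(1e9, 1.0, 3.0)   # starting point
spin  = exec_time(1e9, 1.0, 3.3)   # physical-design spin: ~10% higher clock
uarch = exec_time(1e9, 0.88, 3.0)  # new microarchitecture, same node: ~12% better CPI
both  = exec_time(1e9, 0.88, 3.3)  # both together: ~25% faster, no new node needed

for name, t in [("base", base), ("+10% clock", spin),
                ("-12% CPI", uarch), ("both", both)]:
    print(f"{name:11s} speedup vs base: {base / t:.2f}x")
```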

Not to mention that A-series is a SoC, and even if the CPU cores don’t improve by more than 10%, there are lots of other improvements to be had via new memory controllers, new neural engines, GPUs, etc., etc.
 
Over at the other place the general consensus is that Apple silicon has stagnated. It is quite bizarre. People seem to have already forgotten that the M1 Ultra came out in March. Unreasonable expectations are a thing I guess.
 
Over at the other place the general consensus is that Apple silicon has stagnated. It is quite bizarre. People seem to have already forgotten that the M1 Ultra came out in March. Unreasonable expectations are a thing I guess.

The world believes that CPU advancements are somehow tied to process node. They've reached that conclusion because they're too young to remember what it used to be like when Intel was able to tick-tock and improve performance each year by a healthy amount regardless of whether it was a new node year or a new microarchitecture year.

Apple has hundreds of design engineers, and they don’t just call up TSMC and say “please make our chip faster for us, okay?”
 
Over at the other place the general consensus is that Apple silicon has stagnated. It is quite bizarre. People seem to have already forgotten that the M1 Ultra came out in March. Unreasonable expectations are a thing I guess.

Just a little more color on this - I just had a flashback. When I first went to AMD after my brief stint at Sun, I was working with the NexGen team in Milpitas - they had been bought by AMD only very recently, and hadn't moved to AMD headquarters. I was assigned to take over the ALUs for a K6 variant we were doing. They had a problem with the fab, so the expected node improvements weren't going to be there, yet they had apparently promised the market we'd hit our speed goal anyway, somehow. As is the case in most CPUs, the clock frequency is pretty much constrained by the speed of the ALU. In particular, we had to be able to do, essentially, a three-input 32-bit addition within one clock cycle (along with various multiplexor delays in the path, potentially inverting one of the inputs, etc.).
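
(For anyone wondering how a three-input add fits in one cycle at all: the textbook trick is a carry-save, or 3:2, reduction, which leaves only a single slow carry-propagate add on the critical path. A rough sketch of the idea - just the general technique, not the actual gate-level design:)

```python
# Sketch of a 3:2 carry-save reduction: three operands become a sum
# word and a carry word with no carry propagation, so only one
# carry-propagate add remains at the end.

MASK32 = (1 << 32) - 1

def carry_save(a, b, c):
    sum_word = (a ^ b ^ c) & MASK32                              # per-bit sum, no carries
    carry_word = (((a & b) | (a & c) | (b & c)) << 1) & MASK32   # majority bits, shifted left
    return sum_word, carry_word

def add3(a, b, c):
    s, cy = carry_save(a, b, c)
    return (s + cy) & MASK32   # the single carry-propagate add

assert add3(0x12345678, 0x9ABCDEF0, 0x0F0F0F0F) == \
       (0x12345678 + 0x9ABCDEF0 + 0x0F0F0F0F) & MASK32
```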

I had just started and was asked to evaluate what I could do. The design was essentially a giant text file of spaghetti code, listing logic gates and their connections. The only "documentation" was hand-scrawled schematics of bits and pieces of it written in my predecessor's notebooks. I spent a week or so trying to make heads or tails of it, and couldn't understand any of the logic, though my predecessor gave me some vague suggestions about how it could be made faster.

My manager (one of two who shared responsibility, and whom I really didn't know) comes to me and asks, "OK, you've looked at it. We need to speed it up by 10%. Can you do that?"

I paused for just a second and said “well, I don’t have any idea how, but even if I have to start from scratch and redo the design, I will get you your 10%.”

He smiled and said - and I will never forget this because it was so weird - "You are exactly my idea of the perfect microprocessor engineer."

The punchline: I have no recollection at all as to whether I got it done or not, but since I kept my job I assume so :-). I probably did something terrible like clock borrowing across latch boundaries to get the last 1% done (I recall we used latches instead of flip flops on that project - the last project we ever did that on, I think).
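
(If "clock borrowing" sounds mysterious: with level-sensitive latches, a slow stage can eat into the next stage's time as long as the data arrives before the latch closes - something edge-triggered flip flops don't allow. A toy illustration, with invented numbers:)

```python
# Toy illustration of time borrowing with transparent latches.
# Numbers are invented; nothing here is from the actual design.

cycle = 1.0                  # normalized target clock period
stage_delays = [1.08, 0.90]  # stage 1 is 8% too slow, stage 2 has slack

# Edge-triggered flops: the slowest stage sets the minimum cycle time.
flop_cycle = max(stage_delays)

# Level-sensitive latches (two-phase): stage 1 can borrow stage 2's slack,
# so the pair only has to fit within two cycles in total.
latch_ok = sum(stage_delays) <= 2 * cycle

print(f"flops need the cycle stretched to {flop_cycle:.2f}x the target")
print(f"latches meet timing at the target cycle: {latch_ok}")
```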

But my first 4 or 5 years at AMD (and my time at Exponential) were largely spent on that sort of optimization; given a chip on a process node, find a way to make it 10-20% better without changing process node. Each time we did that we’d also make small microarchitecture optimizations - support for different memory standards, a new instruction here or there, make the multiplier take 4 cycles instead of 5, etc.
 
When it's said that an M-series chip is expected to be based on, say, the A16, is that intended to mean only that it will share the A16's microarchitecture (as indicated by the core names), or that it's expected to share its process node as well?

I know the number following the A only refers to the microarchitecture; e.g., there were versions of the A9 ("Twister") built with both the Samsung 14 nm FinFET and TSMC 16 nm FinFET processes. But here I'm assuming the more common case, in which the specific A# chip is built on only one process node.
 
When it's said that an M-series chip is expected to be based on, say, the A16, is that intended to mean only that it will share the A16's core names (its microarchitecture?), or that it's expected to share its process node as well?

When people say that, they mean it has the same core microarchitecture (usually, anyway).

You can have the same core microarchitecture and be on a different node, if you are willing to put in the work to port it to the new node (or you put the work in up-front to design to a common-denominator set of design rules that allow you to port by simply doing a mathematical scaling).

Typically what happens is you take the same RTL, and redo the netlist and placement and routing, because even though it’s the same microarchitecture, you may want to tweak some things and your floor plan is likely different, so you may need to fit the cores into differently-shaped or sized rectangles, or you may need to move the I/O pins around the perimeter to different locations.
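
To make the "mathematical scaling" option above concrete, here's a toy sketch of what a shrink-style port amounts to for a single placed gate (purely illustrative; a real port involves vastly more than this, and the names and numbers are invented):

```python
# Toy shrink-style port: if the design obeys a common-denominator set of
# rules, placed geometry can in principle be carried to the new node by
# scaling coordinates and sizes by a linear shrink factor.

from dataclasses import dataclass

@dataclass
class PlacedGate:
    name: str
    x_um: float
    y_um: float
    width_um: float
    height_um: float

def shrink(gate: PlacedGate, linear_factor: float) -> PlacedGate:
    return PlacedGate(
        gate.name,
        gate.x_um * linear_factor,
        gate.y_um * linear_factor,
        gate.width_um * linear_factor,
        gate.height_um * linear_factor,
    )

g = PlacedGate("nand2_42", x_um=103.2, y_um=88.0, width_um=0.9, height_um=1.2)
print(shrink(g, 0.7))   # ~0.7 linear shrink -> roughly 0.49x the area
```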
 
You can have the same core microarchitecture and be on a different node, if you are willing to put in the work to port it to the new node (or you put the work in up-front to design to a common-denominator set of design rules that allow you to port by simply doing a mathematical scaling).
Isn't this essentially what happened with Rocket Lake, and why it wasn't an improvement over 10th gen? They took Ice Lake and backported it to 14nm, and while Intel claims to have learned a lot, the end result was less than impressive.

I guess my question would be, say that Apple targets TSMC's 3nm (just using this as an example), and it gets delayed by a year. Does Apple get stuck doing a minor tweak to an earlier design, have to pull a Rocket Lake (assuming there is time), or does it appear that they plan for these eventualities ahead of time and their microarchitectures are relatively node agnostic?
 
Isn't this essentially what happened with Rocket Lake, and why it wasn't an improvement over 10th gen? They took Ice Lake and backported it to 14nm, and while Intel claims to have learned a lot, the end result was less than impressive.

I guess my question would be, say that Apple targets TSMC's 3nm (just using this as an example), and it gets delayed by a year. Does Apple get stuck doing a minor tweak to an earlier design, have to pull a Rocket Lake (assuming there is time), or does it appear that they plan for these eventualities ahead of time and their microarchitectures are relatively node agnostic?
So far it looks like the latter. I don’t see any evidence yet that they’ve ever had to scramble.
 
I'm really keen to see what they manage to fit in an AR headset in 3nm.

If they go the way I hope they do, it will be a highly efficient (hence small, low power, lightweight) streaming processor to run the headset off a wirelessly tethered iPhone or iPad (where the bulk of the processing is done using the A14 or better processor there).


And as to the AR headset mock-up pictures floating around... lol. They're clearly nothing like what will ship. Looks worse than an Oculus dev kit.

I'm expecting something in a glasses form factor from Apple that doesn't make you look like a space cadet whilst wearing it, is comfortable for extended use, etc. Hence the tethering requirement.... put just the bare minimum, highly efficient processor in the glasses (maybe batteries in the arms).
 
I'm really keen to see what they manage to fit in an AR headset in 3nm.

If they go the way I hope they do, it will be a highly efficient (hence small, low power, lightweight) streaming processor to run the headset off a wirelessly tethered iPhone or iPad (where the bulk of the processing is done using the A14 or better processor there).


And as to the AR headset mock-up pictures floating around... lol. They're clearly nothing like what will ship. Looks worse than an Oculus dev kit.

I'm expecting something in a glasses form factor from Apple that doesn't make you look like a space cadet whilst wearing it, is comfortable for extended use, etc. Hence the tethering requirement.... put just the bare minimum, highly efficient processor in the glasses (maybe batteries in the arms).

It *sounds* like they are not planning to tether, which seems kind of crazy, since even the Apple Watch was, in a sense, "tethered" for most of its life. But who knows. There have been so many confusing rumors - a developer-focused early version that looks like traditional VR goggles with glasses following in a couple of years, tethered VR goggles and untethered glasses, etc. etc.

The "Apple" thing to do would be to have glasses that look like glasses and which can tether for some things but which are still useful when not tethered. It sounds like, though, that whatever the first product is, it is not going to look like regular glasses.
 