Oh, I thought it was going to be ready? But I'll admit I didn't check; maybe my timeline is off. A pity if not.
Even if it were ready, will there be enough production to satisfy the dozens of millions of chips Apple would need?
Oh yeah, the M1 (LPDDR4X) vs M1 Pro/Max (LPDDR5) split. I could see that. There are reports like this which show that, even though LPDDR6 hasn't been certified, there seems to be accelerated interest in it, so maybe TigerRick is making some points.
I don't think losing efficiency cores makes much sense. They occupy very little space and offer perf/mm² similar to the P-clusters. Also, they are great for running auxiliary tasks, freeing P-cores for higher-priority work.
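For a concrete sense of how that plays out in practice: on Apple silicon, the scheduler steers low-QoS work toward the E-cluster. A minimal Swift sketch; the QoS API is real, while the maintenance task is made up:

```swift
import Foundation

// Hypothetical auxiliary job (indexing, syncing, etc.) - the name is made up.
func runMaintenanceScan() {
    // ...throughput work that nobody is waiting on...
}

// On Apple silicon, low-QoS work is biased toward the E-cores,
// so background chores don't contend with latency-sensitive work.
DispatchQueue.global(qos: .background).async {
    runMaintenanceScan()
}

// User-facing work keeps its claim on the P-cores.
DispatchQueue.global(qos: .userInteractive).async {
    // ...latency-sensitive work...
}

// Keep the script alive long enough for the async work (script-only detail).
Thread.sleep(forTimeInterval: 1)
```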
For the iPhones, very unlikely if certification isn't even until Q3 2024. For the Macs, maybe. Naturally I have no idea what Samsung's production availability is, but I wouldn't rule it out entirely like I'm basically doing for the A18.
Is LPDDR6 even ready to be used in anything launching in October? The same guy has been over on the Anandtech forums peddling this, and no one's buying it much. I think they'll just go to LPDDR5X as expected.
Agreed. If I were put in charge of designing a monolithic Ultra (and it's a good thing I am not), I would keep some. Assuming a 45+ TOPS NPU but keeping the M4's core counts the same as the M3 Max's (which may not be the case, naturally), here's how I would slim down the M4 Ultra from two M4 Maxes:
Basically 75%. Why? I dunno, I just chose that.
CPU: Max: 12 P-cores / 4 E-cores -> Ultra: 18 P-cores / 4 E-cores -> Extreme: 36 P-cores / 8 E-cores
NPU: Max: 1 NPU -> Ultra: 1 NPU -> Extreme: 2 NPUs
GPU: Max: 40 cores -> Ultra: 60 cores -> Extreme: 120 cores
Everything else gets cut to about 75% too. Since these are desktop-only chips, you can then boost clock speeds for some of these cores, especially the GPUs, to (partially? wholly?) make up for the difference versus what the Ultra would have been at 2x a Max. Taking into consideration the compute bandwidth lost to the interconnect, the GPU would only have to boost < 10%, maybe even 5% (rough numbers sketched below). That would be fun and would make the high-end desktops true monsters.
Edit: I forgot to add additional PCIe lanes for the Ultra for additional Mac Pro expansion.
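To put rough numbers on that GPU clock claim: a quick sketch, where the linear scaling and the ~20% interconnect penalty are my assumptions (swap in your own):

```swift
// Rough check on the "boost the GPU clock to cover the gap" idea.
// Assumptions (mine, not Apple's): performance scales linearly with
// core count and clock, and the fused 2x-Max design loses ~20% of its
// theoretical throughput to the interconnect, in line with the
// imperfect M1/M2 Ultra GPU scaling.
let maxCores        = 40.0
let fusedCores      = 2 * maxCores        // 80 cores across UltraFusion
let fusedEfficiency = 0.80                // assumed interconnect penalty
let monoCores       = 60.0                // the slimmed monolithic Ultra

let fusedPerf = fusedCores * fusedEfficiency   // 64 "core-equivalents"
let boost     = fusedPerf / monoCores          // ~1.067
print("Clock boost needed: +\(Int(((boost - 1) * 100).rounded()))%")   // +7%
```

With a milder interconnect penalty the required boost grows, so the whole argument hinges on how much of the Ultra's imperfect scaling really is the interconnect's fault.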
The current rumor, which may be incorrect, is that the Mac Pro will tap out at 512GB.

True, but I would think 16 E-cores might be too many...?
I have been saying Apple needs to move the high-end desktop Macs to LPDDR5X for a while; this could allow up to 1TB of RAM in a Mn Extreme configuration, with 2.16TB/s UMA bandwidth...?
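That bandwidth figure is just data rate times bus width. Here's the arithmetic as a quick sketch; the LPDDR5X-8533 grade and the 2048-bit bus are my assumptions, not confirmed specs:

```swift
// UMA bandwidth = data rate (MT/s) x bus width (bytes per transfer).
// M2 Ultra sanity check: LPDDR5-6400 on a 1024-bit bus.
let m2Ultra = 6400.0 * (1024.0 / 8) / 1000        // 819.2 GB/s (~"800GB/s")

// Hypothetical Mn Extreme: LPDDR5X-8533 on a doubled 2048-bit bus.
let extreme = 8533.0 * (2048.0 / 8) / 1_000_000   // ~2.18 TB/s
print(m2Ultra, extreme)
```

The exact headline number depends on the speed grade assumed; the quoted 2.16TB/s presumably comes from a slightly lower data rate.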
My initial condition is that a monolithic Ultra is too big to be economically feasible to fab at a reasonable Studio price (combined with Apple's preferred profit margins, of course), and that cutting subsystems alone makes too small a difference. The math behind that comes from earlier in the thread (and a rough yield sketch appears after the list below), but even a priori there must be a financial reason, beyond the upfront cost of designing another die, why Apple went with UltraFusion rather than a monolithic Ultra. The GPU and CPU are the two largest areas, so I was theorizing about how to cut die area by 25% or more while losing as little performance as possible versus the standard interconnect-based Ultra design.

Now, 25% is a little arbitrary, but it allows some clean multiples of the current design (which may not matter much, since the M4 Max may have a different core count). It's also convenient because I'm assuming, perhaps erroneously, that the M2 Ultra GPU's remaining lack of perfect scaling (the M1 Ultra's was worse) is due to the interconnect alone; thus GPU clocks would not have to increase substantially to make up the difference. The CPU's would, but this is a desktop part. Apple needn't throw efficiency away completely and follow AMD and Intel off that particular cliff, yet they can stand to raise clocks on a desktop-only chip. So the lower core counts, and the die-area cut that saves the money, can be mitigated by increasing clocks on the P-cores and GPU cores.

Seems like you are arbitrarily cutting core count as the chip size goes up, just because...? The compute cores (CPU/GPU/NPU) are not what need trimming; it is the extra subsystems that would only be used in the most extreme (no pun intended) cases. As for more power delivery, that might be possible if Apple designs the Mn Ultra/Mn Extreme SoCs for the N3X node...? I could see something like this:
- 64-core CPU (56P/8E)
- 192-core GPU (the GPU is the main area where NO cores need to be cut)
- 128-core Neural Engine (more NPU cores than less; Apple IS focusing on AI this go-around, after all)
- 1TB LPDDR5X RAM
- 2.16TB/s UMA bandwidth
- For the Mac Pro, keep the same PCIe slot arrangement, but with full bandwidth available to all slots at once (no apportioning)
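On the "monolithic is too big to be economical" point above: here's the shape of that yield math, as a crude sketch. Every input is assumed (die sizes, defect density, and the simple Poisson yield model itself), so treat the outputs as illustrative only:

```swift
import Foundation

// Crude "why chiplets" math: one ~900 mm^2 monolithic Ultra vs. two
// ~450 mm^2 dies fused together. All numbers assumed for illustration.
let waferDiameter = 300.0   // mm
let defectDensity = 0.07    // defects/cm^2 (assumed, vaguely N5-class)

// Standard dies-per-wafer approximation (area term minus edge loss).
func diesPerWafer(_ area: Double) -> Double {
    let r = waferDiameter / 2
    return Double.pi * r * r / area
         - Double.pi * waferDiameter / (2 * area).squareRoot()
}

// Poisson yield model: Y = e^(-D0 * A), with the area converted to cm^2.
func yield(_ area: Double) -> Double { exp(-defectDensity * area / 100) }

let monolithic = diesPerWafer(900) * yield(900)        // ~30 good dies
let fused      = diesPerWafer(450) * yield(450) / 2    // ~46 two-die sets
print(monolithic, fused)
```

Under these made-up inputs the two-die route yields roughly 50% more sellable product per wafer, which is presumably the kind of math that made UltraFusion attractive despite the interconnect penalty.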
Over on ars, they are relating that the Great Gurman says the Mac Mini will skip M3 and go straight to M4 this fall. I have not kept tabs on his recent record, but why the hell is this hot air getting headlines?
Also, in a bit of "sad" news, they report the death of the Z80, at least in its classical DIP stand-alone form.
It looks like it’s $50 million for their next-gen system: https://www.theregister.com/AMP/2022/08/17/nasa_space_computer/

You realize that the Space Shuttle thingy was running 8086 processors right up to the end, some 13 years ago? They were finding it hard to get parts for them. Seems kind of odd that NASA, home of Nerd Central, could not spend a million or two to upgrade the flight systems.
John Gruber posted this article on the M4 and the expected cadence.
Making a Mountain Out of Molehill-Sized M4 News
“*The entire Mac product line is set for annual speed-bump Apple silicon updates*” is, as far as I can tell, the actual story. Not “*Mac sales are in the tank and Apple is overhauling the whole product line to change its focus to AI.*” (daringfireball.net)
Raising the memory ceiling to 512 GB would be a significant improvement over the M2 Ultra, but would still offer just one-third the RAM ceiling of the 2019 Intel-based Mac Pro: a nice upgrade for Apple silicon, but still not enough for the highest of high-end computing needs.
I didn’t even realize they were still making Z80s. I assume they long ago switched these to CMOS on a somewhat reasonable process? Or were these things still nMOS?
First computer I ever touched was a TRS-80 model 1. Fun that derivatives of these things are still in use - I think my daughter’s calculator has one in there.
Yeah, the TI-8x line had them when I was still in college. Because we were already supposed to be learning assembler, I used mine in a couple of CE projects with apps written in assembly. The eZ80 is still used in TI's current crop, I believe.

Neat. I was always an HP guy, RPN or die, but it’s fun to imagine the calculator with the same chip as all those early computers and game consoles.
I view it as all a game: it’s fun to talk about and fuel for speculation, but no proclamation from anyone outside of Apple should be taken too seriously until it's made less than a few weeks before the claimed launch. That basically means anything said from ~3 weeks out for any WWDC launch (or lack thereof) in early June, and from ~early October for any possible M4 release. Of course, WWDC (launch or no launch) and the iPhone event will already give us some idea of what is (or isn't) coming for the M4 by then.
I know Gurman has good sources but often makes mistakes. Gruber also has sources and often used to talk about the “little birdies” that told him things. His article mentions he expects the M3 Mini around May/June. He may just be guessing, but I wouldn’t be surprised if he’s correct and Gurman’s wrong.
That’s a fair point, though of course part of that was due to the infrequency of updates to the Mac Pro line (the previous update was 2013, I think? And that was the trash can Pro?). But yes, they offered that config only once, and it was, if I remember right, incredibly pricey (even if you got the memory 3rd party).

RE: Gruber and this comment from the linked blog post...
Just kinda irks me how everyone goes on about the 2019 Intel Mac Pro and the 1.5TB maximum RAM it supported...
Said RAM was only available for the top-end CPUs, and was a by-product of the 12 DIMM slots...?
All Mac Pros until then maxed out at 256GB of RAM...
The point being, the 2019 Intel Mac Pro & 1.5TB of RAM was an anomaly, an outlier...
The Space Shuttle didn't use the 8086. It couldn't; its avionics were being designed before single-chip microprocessors were even a thing.
It used IBM AP-101 computers, a miniaturized variant of IBM's 1960s S/360 mainframe architecture. According to this NASA history (www.nasa.gov):
each AP-101 weighed 50 pounds and used 370W. The Shuttle carried five of them: four in a quadruply redundant setup as the primary flight control system (using voting to resolve differences of opinion), and a fifth running a different software stack (written by a different team, to somewhat different requirements, reusing zero lines of code). The fifth was a kind of ultimate backup: if some bug in the complex primary system, redundancy and all, rendered the flight computers unusable during a mission, they'd have an alternative to switch to.
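That four-plus-one arrangement is textbook N-modular redundancy with an independently developed backup. A toy Swift sketch of just the voting idea; the shuttle's actual redundancy management was far more elaborate:

```swift
// Toy model of majority voting over redundant computer outputs.
func vote<T: Hashable>(_ outputs: [T]) -> T? {
    var tally: [T: Int] = [:]
    for o in outputs { tally[o, default: 0] += 1 }
    // A command wins only with a strict majority of the primaries.
    return tally.first { $0.value > outputs.count / 2 }?.key
}

let primaries = [42, 42, 42, 17]  // one of four computers disagrees
let backupStack = 42              // fifth machine, independent software

// Majority masks the single fault; if the primaries can't agree,
// the crew switches to the independently written backup.
let command = vote(primaries) ?? backupStack
print(command)  // 42
```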
And (as you probably know) even with the advent of single-chip microprocessors, NASA sticks to much older systems. AFAIK, NASA and others are still using the radiation-hardened RAD750 CPU for their most extreme applications (it was included in the Webb), which is made on either a 150 nm or 250 nm process—and costs ~$300k/unit!
They stick with it because they know it works, and its ~300 MIPS is sufficient for their rather modest local processing requirements. The large feature size is a benefit, since it reduces the ability of radiation to disrupt or damage the processor. I assume they have something similar for RAM.
True. But I thought those complaining the max RAM on the AS MP was too low weren't saying it was too low for a Mac (where, as you say, 1.5 TB RAM was an outlier), but rather that it was too low for a workstation (where 1 TB - 2 TB max RAM is typical), since the MP is supposed to be a workstation-class machine.
Yes, the comparison to a PC workstation is a shifting of the goalposts for the MP, but those goalposts were shifted by Apple itself! And we can hardly fault folks for assessing the MP in a way consistent with Apple's own shift in how they present the machine. Here's an excerpt from the 2019 press release...and they've done nothing since then, with their marketing of the ASi MP, to temper people's expectation that the latter should also offer workstation-class performance:

Potatoes, tomatoes...!
Seems to me the comparison to a PC workstation was a shifting of the goal posts, since the Mac Pro only ever supported (up to, in certain models) 128GB of RAM and made a fine OS X / macOS based workstation for years...