Mac Pro - no expandable memory per Gurman

The PowerVR architecture used in the M-series consists of a cluster of compute units that handle SIMD instructions, with predication, in a massively parallel way. In essence, it is a bunch of "GPGPU" compute cores, with a separate display output generator which is not really directly connected to the compute cores. If you want the GPGPU power, you just add more GPU compute cores, because that is what they are. For example, if you wanted to add realtime gravity to Celestia, you would do it in the GPU. If Metal lacks the proper tools for implementing GPGPU compute on AS, that is a huge deficiency in the API that Apple needs to work on (I was given to understand that Metal was meant to converge OpenGL and OpenCL capabilities in an improved design).
@leman and @Andropov know more than me here, but as far as I understand Metal is a perfectly capable API for compute. But I’ll defer to them. If I remember right, they in fact added some nice compute features in one of the recent hardware/API co-updates. While Apple does have its own neural engine and a CPU-side matrix accelerator, the GPU itself lacks the large number of tensor cores that competing GPUs carry for machine learning. As far as I know that’s the GPU’s biggest deficiency when it comes to compute features relative to its competitors.
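For what it’s worth, a full GPGPU round trip in Metal is only a screenful of Swift. This is just a minimal sketch (the kernel, names and sizes are mine, made up for illustration): compile a kernel from source, point it at a buffer, dispatch a grid, read the results back on the CPU.

Code:
import Metal

// A trivial data-parallel kernel, compiled from source at runtime so the
// sketch is self-contained. Each GPU thread squares one element in place.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void square(device float *data [[buffer(0)]],
                   uint i [[thread_position_in_grid]]) {
    data[i] *= data[i];
}
"""

let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "square")!)

var input: [Float] = (0..<1024).map(Float.init)
// .storageModeShared: one UMA allocation, visible to both CPU and GPU.
let buffer = device.makeBuffer(bytes: &input,
                               length: input.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

let queue = device.makeCommandQueue()!
let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(buffer, offset: 0, index: 0)
enc.dispatchThreads(MTLSize(width: input.count, height: 1, depth: 1),
                    threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
enc.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()

let out = buffer.contents().bindMemory(to: Float.self, capacity: input.count)
print(out[3]) // 9.0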
 
The PowerVR architecture used in the M-series
As @dada_dave said, others are more well-versed in this, certainly better than I am. However, as far as I am aware, Apple has a license to Imagination's patents and intellectual property, but uses their own custom GPU, which they implemented years ago. (The agreement between Apple and Imagination may have been to avoid a patent fight in court.)

consists of a cluster of compute units that handle SIMD instructions, with predication, in a massively parallel way.
I believe what @B01L is asking for is an external GPGPU, sitting in its own lonesome slot (or eGPU enclosure), used for additional grunt beyond what the M-series SoC provides. I don't personally see that as necessary, which is why I inquired about it. I still don't see it as necessary, because I believe Apple's current solution is satisfactory for the vast majority of Mac users.
 
Yeah it’s still very much a PowerVR style architecture - and the Asahi reverse engineering showed on occasion just how much. Let’s just say that it’s good that Apple agreed to re-license from ImgTech both legally and morally. Apple probably did it to get access to ImgTech’s ray tracing tech but it’s good nonetheless.

I can understand @B01L’s perspective if the Ultra is the top chip in the Mac lineup. I think though if the Extreme does see the light of day, that will more than service the needs and wants of most if not nearly all users of even the Mac Pro workstation. And if the future of building chips like Lego bricks does come to fruition, that even further solves a lot of these problems, both in terms of economies of scale and raw power. But that’s for the (hopefully near) future …
 
I am seeing a GPGPU as a GPU with no display output, more a General Purpose device for handling "blind" compute/render tasks...

That’s exactly how Apple Silicon GPUs work :) They take data from memory, do parallel processing on it and write the results back to memory.

I believe what @B01L is asking for is an external GPGPU, sitting in its own lonesome slot (or eGPU enclosure), used for additional grunt beyond what the M-series SoC provides.

This kind of device is useful in the traditional GPU model, where the device sits in its own memory pool and communication with the rest of the system is slow, but Apple is working hard to transition to the next step. It’s not in their interest to pursue this kind of solution, as it would sabotage their long-term strategy. Not to mention that the professional GPU market is transitioning to physical unified memory (in one shape or another) anyway. Apple is just ahead of the curve.
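To make the contrast concrete, here is a hedged little sketch (sizes arbitrary, assuming macOS on Apple Silicon) of what "no separate memory pool" buys you: a plain page-aligned CPU allocation can be handed to the GPU with zero copies, whereas a discrete card would need the data staged across PCIe first.

Code:
import Darwin
import Metal

// Sketch: on unified memory, an ordinary page-aligned host allocation can
// be wrapped as a GPU buffer with zero copies. Sizes here are arbitrary.
let device = MTLCreateSystemDefaultDevice()!
let pageSize = Int(getpagesize())
let length = 256 * pageSize

var memory: UnsafeMutableRawPointer?
posix_memalign(&memory, pageSize, length) // plain CPU-side allocation

// The GPU reads and writes the very same physical pages; nothing moves.
let shared = device.makeBuffer(bytesNoCopy: memory!,
                               length: length,
                               options: .storageModeShared,
                               deallocator: { ptr, _ in free(ptr) })!

// On a traditional discrete card, the equivalent data would live in a
// .storageModePrivate buffer and have to be blitted across PCIe before
// the compute cores could touch it, and blitted back afterwards.
_ = shared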


The fixed function (rasterizer/TBDR machinery) is inherited from PowerVR, the processing clusters and the rest are custom Apple. In linguistics we call such unholy abominations creoles :) It’s hard to tell how much Rogue is left in current Apple GPUs. What the Asahi team found is that some of the interfaces are closely related to earlier PowerVR drivers, but that doesn’t tell us much about the hardware itself.


Apple has the unique advantage that they don’t have to care about other vendors. The industry is moving towards CXL as interface glue: these protocols define a common “language” for handling synchronization and memory coherency. This can be used for things like RAM over PCIe (eliminating the need for RAM slots on workstations) and multi-chip modules using CPUs and GPUs from different manufacturers. Apple however doesn’t need to go this way, which allows them to build solutions with less overhead. Although the RAM over PCIe could be a potential solution to Mac Pro expandability…
 
the RAM over PCIe could be a potential solution to Mac Pro expandability…

I had a different notion. After reading about CAMM, it seemed like their approach might be to adopt the new standard for the Pro (it already has a faster memory speed ceiling than DIMM) and add a proprietary extension connector. You could then put in an Apple GPU extender that raises the core count (probably with a small CPU core to manage the GPU set), and the extension connector would carry a TB port that gets you closer to the 17-monitor array some users seem to claim is desperately needed, while still being effectively UMA. Or you could just use the mounts (which are right next to the SoC) for packing in more almost-as-fast RAM.
 
I just found this patent, published just the other week: https://patentscope.wipo.int/search/en/detail.jsf?docId=WO2023028042

This seems to depict an Ultra-style combination of two chips, using network topologies mentioned in previous Apple patents, but with a more detailed description of inter-chip communication and protocols. I only glanced over it though, so I might have missed something important. Anyway, given the concrete nature of this patent (as opposed to the more generic previous patents), I think it is likely that this is the actual description of the next M-series. I didn't find any mention of external memory or of more than two chip modules.
 
Anyone want to ask ChatGPT for a prediction? I tried signing up and had issues... their system probably screens out other AIs :D.
 

ChatGPT is not very good with hardware specs though. This is what it answered.
As an AI language model, I don't have access to any insider information or privileged knowledge about Apple's future product releases. However, based on recent trends and industry expectations, here's a possible prediction for the next Mac Pro's specs:

Processor: The next Mac Pro will likely feature the latest generation of Apple's own silicon chips, possibly the M2 or M3, with up to 32 high-performance cores and up to 128 efficiency cores.

Memory: The new Mac Pro could come with up to 1.5TB or even 2TB of unified memory, which would be a significant upgrade from the current maximum of 1.5TB.

Storage: The next Mac Pro might feature an SSD storage capacity of up to 8TB or more, which would offer a substantial increase from the current maximum of 4TB.

Graphics: The new Mac Pro may feature a range of high-performance graphics options from AMD, such as the Radeon Pro W6800X or the Radeon Pro VII, which would offer significantly better performance than the current Vega II options.

Ports: The next Mac Pro is expected to include a range of connectivity options, such as Thunderbolt 4, USB 4, and Ethernet, along with the possibility of including an SD card reader and HDMI 2.1.

Design: Apple may continue with the current Mac Pro's modular design, which allows for easy customization and upgrades. The new Mac Pro could feature a more compact and refined design that improves upon the current tower form factor.

Again, this is just a prediction, and there's no way to know for sure what Apple has in store for the next Mac Pro until an official announcement is made.

Can't be much wrong-er than that.

EDIT: Wait, it gets better:
Yes, it's very likely that the next Mac Pro will feature dedicated graphics cards in addition to the integrated graphics provided by the Apple silicon chip. Historically, the Mac Pro has always been designed with high-performance graphics capabilities in mind, and many professional applications in fields such as video editing, 3D modeling, and scientific computing rely heavily on dedicated graphics cards for their performance. So, it's very likely that the next Mac Pro will offer a range of powerful graphics options from vendors like AMD or Nvidia to meet the demands of professional users.
 
I asked ChatGPT about me, and it said I designed all sorts of chips I never designed, starting about 20 years before I started designing CPUs, and told me that the Exponential x704 had multiple cores (it did not).

It just makes stuff up.
 

Wow, this is just all the information from the latest Apple Silicon Macs combined with the specs from the last Mac Pro.
This reminds me of students ignoring the question on a test and simply writing down everything they can think of, hoping that the teacher will pick out all the right answers and ignore all the wrong ones.

But the part that it will also feature Nvidia graphics cards is pure comedy gold.
 

Verbal hallucination is great for writing letters or short stories (ChatGPT is excellent at fairy tales!). Not so good for an expert knowledge system…

This trend of companies like MS aggressively pushing these models is very worrying. I am afraid we are entering a new era of stupidity, because people are going to trust these tools without fact-checking.
 
I cannot remember who said it, but I once heard the following quote:
You cannot distinguish artificial intelligence from a faulty computer. In both cases you get results you don't expect.

But it's the same problem with all powerful tools that become easier to use:

Typesetting used to be hard. Then came DTP and you got lots of new magazines that looked like crap and were very hard to read, because the people who made them had no idea of typesetting rules.
I wrote the document for one of my school projects in LaTeX. Suddenly I understood why I had such a hard time with one of the school books: small sans-serif type and way too many characters per line. The effect was that at the end of each line I had to actively search for the correct next line. After a paragraph I had definitely read every single word, but I had a hard time saying what the actual content was.

Nowadays a lot of people work with databases, but they don't really know how to use them correctly. (Also, at least the German politicians often say "file" when they actually mean a database. Although the first COVID-19 database in Germany was supposedly just an Excel file, which explains a lot...)
Case in point: The employment agency once thought I was someone else, because that person has the same name and date of birth as I do. No one thought of checking the unique social security number, because most people who work with databases nowadays don't know what a primary key is. I was only able to clear up this issue quickly because I had learnt that I had this "data twin" when I got my social security number.
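A toy sketch of that failure mode, with entirely made-up names and numbers: match records by the unique key, not by attributes that can collide.

Code:
struct Person { let ssn: String; let name: String; let dob: String }

let records = [
    Person(ssn: "11111", name: "Alex Meier", dob: "1980-05-01"),
    Person(ssn: "22222", name: "Alex Meier", dob: "1980-05-01"), // the "data twin"
]

// Fragile: name + date of birth is not unique, so this can pick the wrong person.
let guess = records.first { $0.name == "Alex Meier" && $0.dob == "1980-05-01" }

// Robust: the social security number is the primary key, one exact match.
let me = records.first { $0.ssn == "22222" }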

Artificial intelligence used to be something for specialists or nerds. Now everyone can use it, but they have no concept of its limitations and thus might take the answers at face value.
 
You’re going to trigger @Colstan.
Hell yeah.

My problem with most of the speculation concerning the Apple Silicon Mac Pro is that it violates Hyman's Categorical Imperative: Do not try to explain something until you are sure there is something to be explained.

This is particularly bad over at MacRumors, where even @leman, the most patient poster on planet Earth, got sick of it in one discussion and essentially said, "I'm trying to be serious, there are plenty of fantasy threads you can go speculate in". The most fascinating thing about the Mac Pro isn't what it is, it's what it isn't, and many nerds have used it as an excuse for fantasy booking their own personal dream machine. I'll give two examples.

First, this should seem obvious, but it's constantly brought up: The argument is "Apple has to make the Mac Pro fast enough to beat an RTX 4090". If having the fastest GPU in the world inside of the Mac Pro were necessary, then Apple would have made nice with Nvidia and made their latest and greatest graphics cards available inside the Mac Pro already. Apple stopped shipping Nvidia drivers with macOS Mojave and has shown no interest in doing business with them for some time now.

Instead, Apple has put minimal effort into third-party GPU drivers, with support for only some of the AMD 6000-series GPUs, none of the 6000-series refresh, and, thus far, zero support for the 7000-series. We already have a public indicator of where Apple is headed, namely in gaming. While the Mac Pro is not intended to be a gaming machine, those cards could certainly be used with any Mac inside of an eGPU. Thus far, despite some AMD cards supporting Metal 3, all of the new computer games that are exclusive to the Mac App Store (such as Resident Evil Village) are Apple Silicon only. Apple is all but telling us directly that third-party GPUs are a thing of the past; they've been telegraphing that since Apple Silicon was announced. The M-series weren't just about replacing the CPU, but the GPU as well.

Second, this being my personal favorite piece of fantasy booking, is that Apple is going to release another Xeon model of the Mac Pro. Thankfully, nobody here at TechBoards thinks this. There are a couple dozen hangers-on over at MR that still hope for another x86 Mac Pro; enthusiast types that won't let go of the Intel era. I got pilloried by them for politely suggesting that they plan for the future. Instead, they claim that the delay in the Apple Silicon version is evidence that Apple will change their minds and release another Xeon model. (Never mind the fact that they'd simply be kicking the can down the road by one generation.)

There's been a lot of wishcasting about the Apple Silicon Mac Pro, warped to the point that, in some users' imaginations, Apple has effectively made a PC clone (at a cheaper price, no less). Every indication we have, in technical, marketing, and business terms, has shown otherwise. When all wishes run dry, and the genie doesn't manifest, they'll expect another apology tour; an apology that will never come. But I suppose hope springs eternal, and denial ain't just a river in Africa.

I realize that, in five years' time, there will still be folks asking for Boot Camp support, third-party graphics cards, eGPU compatibility, a switch back to Intel, even a switch to Ryzen, and all manner of other legacy trappings of the bygone x86 era, but at least the speculation on that for future products will be behind us. I'll be glad when Apple finally announces the damn thing so that I don't have to listen to that nonsense anymore, or see articles from ostensibly reputable news publications feeding into those notions.

Well, you just triggered @Colstan for me right there.

Oh, oops, I did it again.
Hell yeah.
 
Why would Apple go to Nvidia if they can realistically deliver superior tech themselves? I don't know whether it will happen for the Mac Pro, but it will happen eventually. They published a bunch of new patents over the last few weeks that deal with efficient program execution on very large GPUs, so something big is likely incoming :)
 
I have to admit I am not feeling the need for a new Mac Pro. I get the desire of some for expandability, but it is a tradeoff - you pay for the expandability with performance (because the components are farther apart). We haven't seen the M2 Ultra yet, but if they have alleviated the scaling issue (which seemed to be transport related?) then the Ultra GPU should be ridiculously powerful, as the M2 Max already either ties or beats the RTX 3080 (depending on the test). And in that scenario why not just get an M2 Ultra Mac Studio?

Just an opinion.
 
At that point I think it would depend on whether you need a bus (PCIe) and slots, or not.

This is pretty much it.

1) Are you buying as much GPGPU compute as you can? Are you buying as many CPU cores as the OEM will let you?
2) Are you using after-market PCIe hardware for audio, storage or some other purpose?
3) Are you the type to swap components to make a machine last as long as possible?

If you're not doing the above, the AS Mac Pro isn't going to be a big win for you these days. However, while a small market, it's one that has knock-on effects down the line. It used to be that large developers had to use Mac Pros to do their work in reasonable amounts of time. PCIe SSDs, RAM and CPUs have all managed to outpace the size of these legacy codebases though, so that's no longer true, and those folks are probably drooling over M2 Pro Mac minis or M1 Max Studios, to be honest.

(Edit: And those M1 Max Studios are cheaper than the Mac Pros that I worked on when I was on a project large enough that build time was worth spending thousands per person to minimize)
 