macOS 26.2 adds InfiniBand over Thunderbolt support

Last post on this topic: Apple's PCC documentation for CIOMesh says they predominantly use USB4 (and sometimes Ethernet) to connect an ensemble of PCC nodes for user requests. I again recommend reading Apple's PCC material. It's very cool! :)
 
Your point #2 above suggests you are confused.

I was not talking at all about NVIDIA's GPUs. I was talking about the ConnectX series, now made by NVIDIA, which comes from their Mellanox acquisition.

As for your point #1: Apple faces a stark choice, because Thunderbolt simply doesn't have the bandwidth for connecting hosts as fast as you'd want for scaling out AI (much less scaling up). TB is great compared to 10GbE, but not to 100GbE, much less the 200GbE used in the DGX Spark, or the 400/800GbE used in bigger servers.
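To put rough numbers on that bandwidth gap, here's a back-of-envelope sketch. The speeds are nominal line rates (and Thunderbolt's usable data channel is well below its headline number in practice), so treat this as order-of-magnitude only:

```python
# Rough back-of-envelope: time to move a large payload between two hosts
# over various links. Speeds are nominal line rates, not measured
# throughput; real-world numbers will be lower for every entry.

GIB = 1024**3

links_gbps = {
    "Thunderbolt 5 (nominal)": 80,
    "100GbE": 100,
    "200GbE (DGX Spark-class)": 200,
    "400GbE": 400,
    "800GbE": 800,
}

def transfer_seconds(size_bytes: int, link_gbps: float) -> float:
    """Seconds to push size_bytes over a link at its nominal bit rate."""
    return size_bytes * 8 / (link_gbps * 1e9)

payload = 10 * GIB  # e.g. one shard of weights or activations
for name, gbps in links_gbps.items():
    print(f"{name:26s} {transfer_seconds(payload, gbps):6.2f} s")
```

Even at nominal rates, an 800GbE link moves the same payload an order of magnitude faster than Thunderbolt, which is the whole argument for a fast optional NIC on high-end machines.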

Putting Ethernet that fast in the Studio would be utterly impractical for the base model. But there's an easy solution. After all, they already make 10GbE optional. It would not be a stretch to make an OSFP or QSFP port, with a really fast Ethernet chip behind it, an option as well. And if they are already supporting a Mellanox chip for their PCC servers, it's reasonable to use those for Studios too, though there are also several other options. Perhaps Tim Cook has decided it's time to bury the hatchet?

Coming back to your second comment... If you think they're not connecting them with TB, then what do you think they're using? And why wouldn't they consider that tech for the studio, at least as an option?

I can see a much higher speed port like this going on the Studio (Ultra only) as another differentiator vs. lesser Studios and the Mini.

Realistically, there's little point including it below the Ultra spec, given both the available bandwidth/RAM and the overhead of clustering the things.
 
Also, as far as I can tell, 128GB RAM per Mac is the minimum to be interesting when clustering Macs to run large models. As of right now, a 128GB Mac costs a minimum of $3500 in Mac Studio form, $4700 in MacBook Pro. These are not consumer Macs! The average Mac owner spent a fraction of those amounts on a base model Mini or Air with at most 16GB RAM. Clustering those together gets you nowhere.
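The 128GB threshold falls out of simple sizing arithmetic. Here's a sketch; the figures (4-bit quantization, a 405B-parameter model, 75% of RAM usable after OS/KV-cache overhead) are illustrative assumptions, not Apple-published numbers:

```python
# Why 128 GB per node matters when clustering Macs for large models.
# All overhead figures here are illustrative assumptions.
import math

def nodes_needed(params_billions: float, bytes_per_param: float,
                 node_ram_gb: int, usable_fraction: float = 0.75) -> int:
    """Minimum nodes whose pooled RAM can hold the weights, leaving
    headroom (usable_fraction) for KV cache, activations, and the OS."""
    weights_gb = params_billions * bytes_per_param  # billions of params -> GB
    return math.ceil(weights_gb / (node_ram_gb * usable_fraction))

# A 405B-parameter model at 4-bit (~0.5 bytes/param, ~202 GB of weights):
print(nodes_needed(405, 0.5, 128))  # 3  -> a small cluster of 128 GB Macs
print(nodes_needed(405, 0.5, 16))   # 17 -> a pile of base-model Minis
```

Three 128GB Studios is a plausible cluster; seventeen 16GB Minis is not, once you account for the interconnect overhead growing with node count. That's the "gets you nowhere" point in concrete terms.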

I think you potentially underestimate what hobbyists and small businesses might be willing to spend on low-end consumer AI.

Gamers are buying 5090 GPUs just to play games.

As we move into the AI era, I can easily see somebody spending $5-10k on home AI hardware if it is actually useful and lets them get that use without a subscription. I've personally got way more than that in Apple hardware purchased with my own money in the past 12 months, and I'm nowhere near the upper end of Apple device spending. These are not "typical" users, but there's certainly a market for this.

Mac developers / "prosumers" would certainly like to run local AI, especially if it means running Apple-trained models for development.

(MacBook Pro M4 Max, iPhone 16 Pro Max, iPad Pro M5, multiple HomePods, and other things)...
 
Please stop scrubbing your comments. You seem very impassioned but you keep doing that and it makes following the conversation difficult. Let’s slow it down a bit and appreciate the intuition and experience of people involved here. I count myself lucky to have such people to chat with, differences of opinion and all.
 
Again, feel free to discuss the cool stuff RDMA over Thunderbolt enables for consumer ML, and if so inclined, read and discuss the heavily documented info Apple has published on Private Cloud Compute. I've contributed everything I could, so I'm ending my part in this thread here.
 
Let’s remember that we are all impassioned about one thing or another, and we all have intuition and experience about different things. And with that, let’s focus on substance rather than discussing enthusiasm or credentials.

Thanks!
 
I'm nowhere near the upper end of Apple device spending.

Just on this... regarding what I'd consider the upper end of spending: my previous financial director had a bespoke, glossy-printed 15-page proposal submitted to kit out his home theatre room. No idea what he spent, but that's an indication of the scale... :D
 