I don’t buy the claim that it’s comparable to 16GB on a Windows laptop as a blanket statement, though I’m sure there are cases where it’s approximately true. After all, Apple’s engineers have spent more time profiling this than anyone, and we still got a base config of 8GB across all non-Pro/Max M3 machines.
However, the opposite can also be true (relevant Twitter thread below):
The statement from Apple regarding memory is just nonsense guys.
- Memory compression is not unique to macOS. Windows has it. Using this to justify under-speccing memory is asinine.
- Unified memory is not magic! It doesn’t mean you need less memory. It’s disappointing that Apple would even suggest that.
“Actually, 8GB on an M3 MacBook Pro is probably analogous to 16GB on other systems” is just rubbish. If your workflow needs 16GB on Windows, it needs 16GB on macOS. There’s no debate here - it annoys me that we’re even having this conversation.
Swap is not a replacement for memory. Fast SSDs are not a replacement for memory.
You might intuitively think that the performance of modern SSDs makes swapping less of an issue, but this is, in general, not true. The problem is not just throughput but latency - memory access is <150ns, SSD access is typically >10,000ns. This matters a lot! The penalty from going to disk is severe. There’s no point having a shiny new M3 if it spends most of the time waiting for memory.
You can get away with 8GB for casual use, but this isn’t a casual machine.
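To put rough numbers on the latency point in the quoted thread - the figures below are the thread's own round numbers, used purely as a back-of-envelope illustration:

```python
# Back-of-envelope swap penalty, using the round numbers from the quoted
# thread (~150 ns DRAM access vs ~10,000 ns SSD access). Illustrative only;
# real latencies vary by hardware and access pattern.
DRAM_NS = 150
SSD_NS = 10_000

penalty = SSD_NS / DRAM_NS
print(f"each swapped access costs ~{penalty:.0f}x a DRAM access")

# Even a tiny fraction of accesses hitting swap dominates the average:
miss_rate = 0.01  # just 1% of accesses go to SSD swap
avg_ns = (1 - miss_rate) * DRAM_NS + miss_rate * SSD_NS
print(f"1% swap misses -> {avg_ns:.1f} ns average, "
      f"{avg_ns / DRAM_NS:.2f}x slower than all-DRAM")
```

Even if only one access in a hundred goes to swap, average memory latency is already ~1.7x worse - and a machine that's genuinely short on RAM misses far more often than 1%.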
The one time I’d argue Apple can make a case is actually the case
@Andropov linked to as a counterexample: gaming. But his counterexample is mostly applicable to Windows. Windows laptops with integrated graphics tend to hard-split their memory pool between the CPU and GPU - at least they used to; I haven’t checked recently, but my understanding is that it’s still the case. So effectively a Windows laptop with an iGPU and 16GB of RAM is really only 8+8. Apple, very oddly, doesn’t offer the unified virtual memory addressing that Nvidia does, but they don’t split the pool either: both CPU and GPU have full access to the same RAM, and you don’t need to duplicate memory or transfer it between partitions*. Programmatically you still treat the CPU and GPU separately - again, no unified virtual addressing - but physically you don’t. Fixing those odd omissions in Apple’s programming model would make this even easier for developers. But that’s the one scenario - where the GPU and CPU need access to the same memory pool - where unified memory could make a difference, and where you can almost make Apple’s claim with a straight face. Only almost, because note how narrow the claim is: it only holds in comparison to PC laptops with integrated graphics, which are generally on the cheap end in more ways than one. As Seb says in
@Andropov ’s link, if you have a GPU with its own memory (which is common in the high-end PCs Apple actually competes against), this doesn’t apply. Again, for the Mini or even the Air you might - might - get away with the comparison. But for the $1600 MacBook Pro, where the 8GB of base RAM is most egregious? Nah. They don’t get to use dinky integrated-graphics PCs for that comparison. But I bet that, if there is any reasoning behind their statement, that’s exactly what they’re doing - and then acting like it’s a general claim they can make.
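The hard-split vs. unified distinction can be reduced to a toy model (hypothetical numbers; real systems have dynamically resizable shared pools, BIOS carve-outs, and OS overhead, so this is only a sketch of the argument):

```python
# Toy model of the claim above: a 16 GB laptop whose iGPU hard-splits the
# pool 8+8 vs. one unified pool, given a workload's CPU and GPU working
# sets in GB. Function and numbers are made up for illustration.
def fits(cpu_need, gpu_need, total=16, hard_split=False):
    if hard_split:
        # e.g. 8 GB reserved for the CPU, 8 GB for the iGPU
        half = total / 2
        return cpu_need <= half and gpu_need <= half
    # Unified: both sides draw from one pool (ignoring OS overhead)
    return cpu_need + gpu_need <= total

# A 10 GB CPU + 4 GB GPU workload fits in a unified 16 GB pool...
print(fits(10, 4, hard_split=False))
# ...but not when the same 16 GB is hard-split 8+8.
print(fits(10, 4, hard_split=True))
```

Which is why "16GB split 8+8" can genuinely behave like less than 16GB for lopsided workloads - and why the unified-memory advantage evaporates against a PC with a dGPU and its own VRAM.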
Also, to push back gently on the SSD comments: yes, going to SSD swap is very slow, but depending on your application the swap penalty can be hidden if you have enough other work to do that the data can be streamed in behind it. There are limits to this - more RAM will be better for big problem sets, sometimes dramatically so - and the same logic holds for PCs anyway. You’re also right that Windows likewise has memory compression. My understanding is similar to
@Joelist ’s: Apple’s memory compression is better and they’re smarter about swap, but no, they aren’t going to turn 8GB into a 16GB equivalent through that alone. It’s also possible that some of the tested applications Joelist alludes to were more tightly compiled/programmed on the Mac side than on the Windows side.
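The "hide the swap penalty with streaming" idea is just prefetch-and-overlap: while you compute on chunk N, the next chunk is already being read. A minimal sketch, with toy sleeps standing in for SSD reads and computation (all names and timings here are made up for illustration):

```python
# Prefetch-and-overlap sketch: a background thread reads chunk N+1 while
# the main thread computes on chunk N, so disk latency overlaps with work.
import time
from concurrent.futures import ThreadPoolExecutor

def load_chunk(i):
    time.sleep(0.05)          # stand-in for a slow SSD/swap read
    return list(range(i * 4, i * 4 + 4))

def process(chunk):
    time.sleep(0.05)          # stand-in for real computation
    return sum(chunk)

def streamed(n_chunks):
    total = 0
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(load_chunk, 0)      # prefetch the first chunk
        for i in range(n_chunks):
            chunk = future.result()              # wait only if I/O is behind
            if i + 1 < n_chunks:                 # prefetch next while computing
                future = pool.submit(load_chunk, i + 1)
            total += process(chunk)
    return total

start = time.perf_counter()
result = streamed(8)
elapsed = time.perf_counter() - start
# Naive load-then-process would take ~8 * (0.05 + 0.05) = 0.8 s;
# overlapping brings it close to ~8 * 0.05 = 0.4 s.
print(result, f"{elapsed:.2f}s")
```

The catch, as noted above, is that this only works when the access pattern is predictable and the compute per chunk is long enough to cover the read - random access into a working set that doesn't fit in RAM gets no such help.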
So I’m not disagreeing with your overall assessment of Apple’s statement - it’s mostly BS. I just want to inject some nuance.
EDIT: *I'd have to double-check with
@leman whether transferring memory from a region seen by the GPU to one seen by the CPU (and vice versa) requires at least temporary physical duplication on Apple Silicon. He's much more of an expert on that than I am. Even if it does, that would still be far more flexible than transferring between hard memory partitions, though obviously not as good as not having to do it at all. Also, for gaming at least, some of the newer capabilities like the DirectStorage APIs can be a benefit here for Windows, transferring data directly from the disk to GPU memory - but I think that really only helps dGPU setups, not iGPUs. I'd also be curious whether any Windows laptops with only iGPUs have adopted an approach similar to Apple's since the Apple Silicon release. I don't believe there's a technical limitation preventing it - I bet the Qualcomm devices do it, and if they don't, I bet the upcoming Elite devices will.