M4 Mac Announcements

You can replace the M4 Mini's NAND chip with a higher-capacity chip from another M4 Mini, and it will work. You can do this on the Studio as well, but it's more complicated because of the dual-slot configuration, so there are some swaps that will work and some that won't.

The NAND chip on the M4 Pro Mini (512 GB – 8 TB) has a different form factor from the M4 Mini's (256 GB – 2 TB).

 
For our purposes here I'm defining the device as the removable physical object that contains the storage. [Yes, sometimes it's soldered and non-removable, but those aren't the implementations that are interesting to us for this discussion.] For most PCs, it's an SSD (which always contains a controller). For Apple, it's something like what I screenshotted below (which is controller-less).
You're making too strong a distinction between Apple's soldered and removable physical objects, IMO. Same flash devices, the only difference is what PCB they're soldered to.

And what do you mean "the controller is always off-device"? On SSDs the controller is always on-device. From Wikipedia: "The primary components of an SSD are the controller and the memory used to store data.... Every SSD includes a controller." (https://en.wikipedia.org/wiki/Solid-state_drive) But I know you know all that, so there seems to be some miscommunication here. I think you've misunderstood what I'm asking.
I know I have misunderstood, because you're trying to read far too much meaning into a somewhat sloppily written marketing page, and it's causing you to use terminology in odd (or perhaps overly narrow) ways.

In this case... what is a "device"? To me, from an engineering perspective, even a flash memory package (which many might call a "chip", based on visual appearance) qualifies. They're complex assemblies - a multilayer organic substrate (AKA a fancy PCB), a stack of multiple flash memory die electrically connected to the substrate, and sometimes plastic overmold. Seems like a device to me!

So that's why I said the flash management controller is always off-device. I think most engineers with experience in silicon design would nod and understand the sense in which I was using the word. But if you're insisting that "device" can only refer to a larger printed circuit board assembly like an M.2 SSD, then sure, under that meaning every M.2 has an on-device controller.

So: I'm not asking about the software, I'm asking about the hardware. The article's discussion of software was interesting to me because it implies that bare hardware is standard in the industry. Specifically, the question was whether the high-end bare flash that Apple is using (and, again, by bare flash I mean it has no controller) is something that only Apple has commissioned, or if such devices are standard within the industry. If the latter, one can ask this question:
What even is "bare hardware"? Once again, not even trying to be snarky, it's a serious question. Today's "hardware" is often firmware running on an embedded microcontroller, a layer below anything the main OS can perceive.

M1's CPUs weren't limited to just the Firestorm and Icestorm cores. There are also somewhere around a dozen (IIRC) little high-performance microcontroller-class ARMv8 cores; Apple's codename for them was "Chinook". One of the Chinooks is part of the SSD controller, and it runs a firmware blob which manages SSD flash and presents an NVMe interface to the rest of the system.

An M.2 SSD is no different! Its controller also contains an embedded microcontroller which runs firmware that does the same things as Apple's SSD firmware.
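
To make the "presents an NVMe interface" part concrete, you can watch from userspace what the firmware exposes by walking the I/O Registry. Here's a rough Swift sketch; the "IONVMeController" class name is my assumption (Apple's internal controller may register under a vendor-specific subclass), so treat it as a starting point. It just dumps whatever properties (model, firmware revision, and so on) the controller object publishes.

```swift
import Foundation
import IOKit

// Sketch: list NVMe controller objects in the I/O Registry and dump their
// published properties. "IONVMeController" is an assumed class name; adjust it
// if your machine's storage registers under a different IOKit class.
var iterator: io_iterator_t = 0
let matching = IOServiceMatching("IONVMeController")
guard IOServiceGetMatchingServices(kIOMainPortDefault, matching, &iterator) == KERN_SUCCESS else {
    fatalError("I/O Registry query failed")
}

var entry = IOIteratorNext(iterator)
while entry != 0 {
    var props: Unmanaged<CFMutableDictionary>?
    if IORegistryEntryCreateCFProperties(entry, &props, kCFAllocatorDefault, 0) == KERN_SUCCESS,
       let dict = props?.takeRetainedValue() as? [String: Any] {
        // Whatever the controller firmware reports: model, firmware revision, etc.
        print(dict)
    }
    IOObjectRelease(entry)
    entry = IOIteratorNext(iterator)
}
IOObjectRelease(iterator)
```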

If OEM controllerless flash is standard in the industry, was it technologically necessary for Apple to commission an entirely proprietary solution, or could Apple have used an existing solution?
Once again we have two things to unpack. Which industry? To me, that article reads as being from the embedded computing world, which is a very different (and often more cost-sensitive) world than personal computing. That cost sensitivity is what motivates interest in what the article calls "unmanaged" NAND, even if there are some downsides to it.

When Apple put the controller on the processor, everyone was talking as if this were a unique approach that Apple alone has chosen, so of course Apple had to come up with a proprietary solution, which explains why you can't just buy aftermarket upgrades for Macs with slotted storage. But that article implies this approach is not unique after all.
The thing that gets in the way of upgrades isn't something I would describe as "high end" - it's on a different graph axis than that. Apple SoCs use x1 PCIe lanes for flash memory controller channels rather than the industry standard ONFI or Toggle NAND interfaces. Those interfaces get pushed out into these tiny PCIe to NAND bridge chips designed by Apple; there's one of these bridges in every NAND package inside an Apple product.

The NAND flash die inside each of these Apple-only NAND devices are presumably normal commodity NAND die - it's Apple's profit margin if they aren't. Much like DRAM, flash manufacturers really don't want to run lots of different types of wafer.

The PCIe bridge is the main reason why upgrades are hard. It's proprietary, and the likes of Samsung probably aren't allowed to sell flash devices with an integrated Apple PCIe bridge to just anyone.
 
If you put that sort of flash management logic in an OS-level driver, how do you ever load the OS? At that point it can only work for extra storage and not the boot medium, right?

Keep in mind that the context is embedded devices which usually "boot" from firmware. If you wanted to do OS-level flash management on a larger system, then yes, you'd need to implement some additional mechanisms.
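
To make "flash management logic" a bit more concrete for anyone following along: the heart of it is a flash translation layer that remaps logical blocks to physical NAND pages, because NAND can't be overwritten in place. Below is a toy sketch in Swift. It's purely illustrative (real FTLs also do wear leveling, bad-block handling, ECC, and power-loss-safe metadata), and it's certainly not Apple's or anyone else's actual implementation; it just shows the kind of bookkeeping that has to live somewhere, whether that's an SSD controller, the host SoC, or an OS-level driver.

```swift
// Toy flash translation layer: NAND pages can't be rewritten in place, so every
// logical write lands on a fresh physical page and the old copy is marked stale
// for a later erase. This bookkeeping is the minimum a "managed" device hides.
struct TinyFTL {
    private var logicalToPhysical: [Int: Int] = [:]  // logical page -> physical page
    private var stalePages: Set<Int> = []            // old copies awaiting garbage collection
    private var nextFreePage = 0

    mutating func write(logicalPage: Int) -> Int {
        if let old = logicalToPhysical[logicalPage] {
            stalePages.insert(old)                   // invalidate, don't overwrite
        }
        let physical = nextFreePage                  // a real FTL picks pages with wear leveling in mind
        nextFreePage += 1
        logicalToPhysical[logicalPage] = physical    // the actual NAND program op would happen here
        return physical
    }

    func physicalPage(forLogicalPage logicalPage: Int) -> Int? {
        logicalToPhysical[logicalPage]               // where the latest copy of this block lives
    }
}

// Two writes to the same logical page land on different physical pages:
var ftl = TinyFTL()
print(ftl.write(logicalPage: 7))                     // 0
print(ftl.write(logicalPage: 7))                     // 1  (physical page 0 is now stale)
print(ftl.physicalPage(forLogicalPage: 7) ?? -1)     // 1
```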

For our purposes here I'm defining the device as the removable physical object that contains the storage. [Yes, sometimes it's soldered and non-removable, but those aren't the implementations that are interesting to us for this discussion.] For most PCs, it's an SSD (which always contains a controller). For Apple, it's something like what I screenshotted below (which is controller-less).

It seems that you are proposing your own terminology (which is perfectly fine) and then trying to directly match that to the terminology used by others (which doesn't really work). There are at least two levels of architecture here — hardware organization (the "placement" of the relevant hardware blocks) and functional organization (the isolation levels and protocols). From the perspective of hardware organization, Apple's approach appears "unmanaged" (but it's not the same kind of unmanaged as what the article you linked talks about); from the perspective of the functional organization, it behaves just like any other commercial SSD.

And what do you mean "the controller is always off-device"? On SSDs the controller is always on-device.

As mentioned by @mr_roboto, this depends on what you define as a "device". Your definition appears to include the entire modular unit. It is certainly a definition (and it makes perfect sense in the context of what you are interested in), but it does not capture all the relevant distinctions.


I'm not asking about the software, I'm asking about the hardware. The article's discussion of software was interesting to me because it implies that bare hardware is standard in the industry.

Embedded industry, sure.

Specifically, my question was whether the high-end bare flash that Apple is using (and, again, by bare flash I mean it has no controller) is something that only Apple has commissioned, or if such devices are standard within the industry.

Every SSD is using high-end bare flash, but that is not what you mean, right? I am not aware of any such application in high-performance desktop computing (there are fast PCIe SSDs, but they integrate the controller on the board). I wonder about smartphones. I can imagine that other smartphone makers use an approach similar to Apple's, with the flash controller integrated into the SoC.
 
You're making too strong a distinction between Apple's soldered and removable physical objects, IMO. Same flash devices, the only difference is what PCB they're soldered to.
Nope, not making a strong distinction at all. I'm well aware that soldered vs. slotted refers only to how the device is attached. But since you seemed confused about what I wrote, I was trying to simplify things by saying let's just talk about the removable storage.
I know I have misunderstood, because you're using terminology in odd (or perhaps overly narrow) ways.

I'm going to push back on this. I'm using the same language that's been used generally, since soon after the introduction of AS, to describe the distinction between Apple's approach to storage and that of PCs. My language is no different from that used by, for instance, Andrew Cunningham in this 2022 article in Ars Technica (emphasis mine):

"To dramatically oversimplify, all SSDs need at least two things: NAND flash chips that store data and an SSD controller that handles the particulars of reading from and writing to those chips. (Some SSDs also use a small amount of DRAM as a cache, though budget-priced and mainstream SSDs increasingly just steal a small chunk of your system's memory to perform the same operations with a minor performance penalty.) PC SSDs like Samsung's 980 Pro or Western Digital's WD Blue SN570 all include the controller and the NAND, which is what makes them easy to replace. Each SSD is a self-contained device, usable in any PC that has a physical SATA port or M.2 slot and that supports the SATA/NVMe storage specs. Apple's SSDs used to work this way, but starting with the Apple T2 chip and continuing into the Apple Silicon era, Apple began building storage controllers directly into its own chips instead. This means that the Mac Studio's SSD cards, while removable instead of soldered down, are just NAND plus what Martin calls a 'raw NAND controller/bridge.' "


So my question was simple: Are there other applications in which manufacturers do the same as Apple, and commission high-performance storage devices that are just NAND plus a bridge chip?

I feel like your objective was more to argue why my question was poorly formed, while also demonstrating your technical expertise, rather than trying to get to the heart of my question and give an answer that would provide clarity. I understand that when people have technical expertise they want to share it, which is probably why you wanted to riff on the semantics of "controller", but, as a teacher, I always first ask: Is this technical information going to bring clarity to my answer, or will it just act as a distraction?

I see this all the time from experts on Chemistry Stack Exchange who come from industry rather than academia, and thus have significant technical knowledge but may not have significant teaching experience. They focus their answers on explaining why the poster's question is poorly formed or needs much more detail or clarity to be properly answered, and they also provide lots of technical details that just add confusion, leaving the poor poster embarrassed and confused, instead of actually trying to serve the poster's needs, get to the heart of what they are wondering about, and provide an answer to that. Then I'll step in and give an answer that does serve the poster. Take a look at this example. Contrast my answer (I'm theorist) with the one given by Buck Thorn (immediately below mine), which qualitatively resembles the answer you gave me—he starts by saying how the poster's question is poorly formed, then goes into a long riff about thermometry, and at the end of it never answers the question! [Except his answer is also wrong, which I don't think yours was.] By contrast, since the poster emphasized "EXACT", I gently corrected him about that, then proceeded to give him a clear, direct and useful answer. I didn't need his question to be perfect in order to give it a direct answer, because I could see exactly what he was wondering about:


I know there is also a cultural difference between us. As a biophysicist, I am always trying to simplify things as much as possible: What is the least amount of information we need to answer this question? What is the most coarse-grained view that will give us a useful answer? Anything not essential gets thrown out. While your job, I assume, requires close attention to technical details, and you thus try to preserve them.

BTW, that's not to say I don't ever provide technically detailed answers. Sometimes I do, but only when I judge the details are needed to give the poster the best clarity, i.e., to best serve the poster's needs, e.g., here:


OK, one last example: Here's an instance when five other experts on the site collectively decided the poster's question was bad and closed it as unanswerable, but I saw the question was fine and gave an answer just before they did that.


Imagine being a student asking what you think is a reasonable question, and five experts, each with decades of experience in the field, all say your question is inadequate. How do you fight back against that? Well you can't. I sympathize with that student because when I'm asking questions outside my field, at least online, I'm often on the receiving end of that. All you can hope for is that a sixth expert comes along (in this case me), judges that the other experts are being unreasonably difficult and your question is fine, and proves that by giving you an answer.
 
Interesting AI result.

Normally I'd just post the original source but there's so much brain damage in that article, I couldn't resist.
Never actually read anything on Tom's Hardware before, even though I of course often see it around; whatever code they have running to generate the pages doesn't work with Speak Selection, so that sucks (I have bad eyesight - I can read visually, but prefer screen-reader-style reading for longer content, so Speak Selection is fantastic).
 
So, thoughts.

New M4 Max, 64 GB, 2 TB, nano-texture.

It's fast (obviously). Running my regular workload plus installing a bunch of stuff, it hasn't even gotten the hot spot at the top of the keyboard warm, never mind the fan coming on. Which is to be expected; the M4 Max is a monster.

More easily seen but not benchmarked things:

  • Nano-texture is a total win in an office with office lighting, windows, etc.
  • Space black doesn't seem anywhere near as fingerprint-y as I had feared.
  • The App Store version of Parallels doesn't work on M4 yet. The standalone does, but I have an App Store subscription which isn't transferable (!)

Super happy with the machine so far, aside from Parallels quirks. Hopefully they get the App Store version updated soon.

I've "got to" go camping this weekend (Saturday/overnight). I'll try some games and stuff out on it on Sunday :D
 
In terms of heat, heavy CPU loads don’t seem to be an issue these days. I get a bit of fan doing pre-processing in PixInsight, but not too bad for a fully loaded CPU going for 10+ minutes.

Games though, they make the system hot and spin up the fans quick in the 14”. And this is the 20-core GPU in the Pro. If you want a toaster oven laptop, engage the GPU.
 
There have been a bunch of articles recently about the inability to run macOS VMs older than 13.4 on the M4. Howard Oakley did the only technically substantial writeup of this that I've seen so far (unsurprising, since he wrote a nice lightweight VMM, "Viable"). But multiple "news" pieces I've seen say that it's unlikely that this will be fixed because Apple would have to patch all the earlier OSes and issue new IPSWs.

This makes no sense to me at all. Why wouldn't Apple simply fix whatever's broken in the hypervisor?
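
For context on what's in play here: Viable and the other lightweight macOS-on-macOS tools are thin wrappers around Apple's Virtualization framework, so there isn't much a third party can do about a guest kernel that hangs; everything below the configuration step is Apple's. Roughly what such a tool does is sketched below (this is not Viable's actual code; the bundle layout and paths are placeholders, and the binary needs the com.apple.security.virtualization entitlement):

```swift
import Foundation
import Virtualization

// Rough sketch of how a lightweight VMM boots a macOS guest on Apple silicon.
// Paths are placeholders; a real tool also restores the guest's saved hardware
// model properly and adds graphics, networking, keyboard, etc.
do {
    let bundle = URL(fileURLWithPath: "/path/to/VM.bundle")   // placeholder bundle layout

    let platform = VZMacPlatformConfiguration()
    platform.hardwareModel = VZMacHardwareModel(
        dataRepresentation: try Data(contentsOf: bundle.appendingPathComponent("HardwareModel")))!
    platform.machineIdentifier = VZMacMachineIdentifier(
        dataRepresentation: try Data(contentsOf: bundle.appendingPathComponent("MachineIdentifier")))!
    platform.auxiliaryStorage = VZMacAuxiliaryStorage(
        contentsOf: bundle.appendingPathComponent("AuxiliaryStorage"))

    let config = VZVirtualMachineConfiguration()
    config.platform = platform
    config.bootLoader = VZMacOSBootLoader()        // boots whatever macOS is installed on the disk image
    config.cpuCount = 4
    config.memorySize = 8 * 1024 * 1024 * 1024     // 8 GiB
    config.storageDevices = [VZVirtioBlockDeviceConfiguration(
        attachment: try VZDiskImageStorageDeviceAttachment(
            url: bundle.appendingPathComponent("Disk.img"), readOnly: false))]
    try config.validate()

    let vm = VZVirtualMachine(configuration: config)   // must be used from the main queue
    vm.start { result in
        print("guest start:", result)   // a pre-13.4 guest reportedly stalls early in kernel boot on M4
    }
    RunLoop.main.run()
} catch {
    print("VM setup failed:", error)
}
```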
 
This makes no sense to me at all. Why wouldn't Apple simply fix whatever's broken in the hypervisor?

They are likely quoting Howard’s notes on this:

It thus appears most likely that this bug strikes in the early part of kernel boot, in which case the most feasible solution would be to fix the bug in macOS kernels prior to 13.4, and promulgate new IPSW image files for those. I suspect that’s very unlikely to happen, and as far as I’m aware it would be the first time that Apple has issued revised IPSWs.
 
In terms of heat, heavy CPU loads don’t seem to be an issue these days. I get a bit of fan doing pre-processing in PixInsight, but not too bad for a fully loaded CPU going for 10+ minutes.

Games though, they make the system hot and spin up the fans quick in the 14”. And this is the 20-core GPU in the Pro. If you want a toaster oven laptop, engage the GPU.
This is why cooling tech is paramount. Apple needs to massively improve here.

They can start by including better heatsinks in their pro laptops
 
So why would he say this? It seems extremely weird.

Based on the information provided I’m not sure why it’s weird. Reality is that we don’t know for sure why the kernel gets stuck, so it’s educated speculation without more specific data. If you have a better argument as to why it would be the hypervisor that should materially be the same as the one that runs on the M3, go for it.
 
If you have a better argument as to why it would be the hypervisor that should materially be the same as the one that runs on the M3, go for it.

The most likely scenario is that M4 has some "chicken bits" in one or more SPSRs that were don't-care bits in the same SPSRs in M3, and setting them to 0, as M3 would, has undesirable effects on the configuration of M4. It would make sense, since it is a privileged process such as would twiddle with SPSRs.
 
Based on the information provided I’m not sure why it’s weird. Reality is that we don’t know for sure why the kernel gets stuck, so it’s educated speculation without more specific data. If you have a better argument as to why it would be the hypervisor that should materially be the same as the one that runs on the M3, go for it.
I'm not sure it's a better argument, but making the hypervisor "just do what it did on the M3" seems like something that ought to be feasible. This feels more like something that didn't get enough test coverage and that they should fix soon enough... if they care. Which they may not.

If @Yoused is correct, or it's something similar, then that ought to be something trappable. It might be a little slower, but I can't imagine that that's something that would come up much, or at all, once the machine is booted. (While I'm no expert on this, I don't find the argument all that convincing. That seems like an obvious trap not to fall into.)
 
I'm not sure it's a better argument, but making the hypervisor "just do what it did on the M3" seems like something that ought to be feasible.

That's a vague statement, because for all we know, there could be no material changes to the hypervisor for M4, in which case one could argue it does do what it did on M3. Perhaps that's even the problem.

The issue is that we just don't know. If someone had found out why the kernel is halting in the VM, or what changed in XNU in 13.4, we'd have something more to go on. For now, all we really know is it's some compatibility issue between these kernels and the hypervisor, and that it's specific to XNU. We don't even know if Apple was already aware of the issue and decided this was the right call, or incorporated the fix in 13.4 as engineering samples showed up and they started hitting this internally.

I've investigated enough things where the "obvious" or "likely" thing wasn't the problem that I just do not try to speculate about someone else's legacy code base, e.g., investigating bugs where malloc() failed to allocate a 1 MB buffer with a heap of only 30 MB.

If @Yoused is correct, or it's something similar, then that ought to be something trappable. It might be a little slower, but I can't imagine that that's something that would come up much, or at all, once the machine is booted. (While I'm no expert on this, I don't find the argument all that convincing. That seems like an obvious trap not to fall into.)

I was going to respond that it seems weird to not trap or protect certain bits you don't want VM kernels to be able to set. But again, that could even be a problem if you need to protect a bit but that protection itself produces undesirable behavior in the VM's kernel.
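
To put a bit of flesh on "trappable": when EL2 asks the hardware to trap a guest's access to some system register, the hypervisor takes an exception whose syndrome (ESR_EL2) reports exception class 0x18 for a trapped MSR/MRS, along with the encoding of the register involved and whether it was a read or a write, and it can then emulate, sanitize, or silently drop the access. A small decoder sketch in Swift (field layout per the Arm ARM; nothing here is Apple-specific or taken from their hypervisor):

```swift
// Sketch: decode an ESR_EL2 syndrome for a trapped AArch64 MSR/MRS access
// (exception class 0x18). Field positions follow the Arm architecture manual;
// this is illustrative only, not how any real hypervisor is actually written.
struct TrappedSysRegAccess {
    let op0, op1, crn, crm, op2: UInt8
    let rt: UInt8          // general-purpose register the guest used
    let isRead: Bool       // true = MRS (read), false = MSR (write)
}

func decodeSysRegTrap(esr: UInt64) -> TrappedSysRegAccess? {
    let ec = (esr >> 26) & 0x3F
    guard ec == 0x18 else { return nil }            // not a trapped MSR/MRS/system instruction
    let iss = esr & 0x1FF_FFFF                      // low 25 bits: instruction-specific syndrome
    return TrappedSysRegAccess(
        op0: UInt8((iss >> 20) & 0x3),
        op1: UInt8((iss >> 14) & 0x7),
        crn: UInt8((iss >> 10) & 0xF),
        crm: UInt8((iss >> 1)  & 0xF),
        op2: UInt8((iss >> 17) & 0x7),
        rt:  UInt8((iss >> 5)  & 0x1F),
        isRead: (iss & 1) == 1)
}

// Made-up syndrome value, just to exercise the decoder. A host could emulate the
// access (e.g. hand back a fixed value) or quietly drop a write it doesn't like.
if let access = decodeSysRegTrap(esr: 0x62000001) {
    print(access.isRead)   // true: the guest was reading
}
```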
 
The issue is that we just don't know.
I agree. And that's my point - everyone seems to just be blindly charging over the cliff of "they have to fix the IPSWs" without actually knowing anything about the problem. Or do they? That's what I was asking - is there something known about this that I missed, that makes that conclusion reasonable?

(It's mostly interesting for Howard's case, since he's very clever and quite deliberate; the regurgitations from the "news" media are ignorable as they generally have no understanding of the subject.)
 
I agree. And that's my point - everyone seems to just be blindly charging over the cliff of "they have to fix the IPSWs" without actually knowing anything about the problem. Or do they? That's what I was asking - is there something known about this that I missed, that makes that conclusion reasonable?

The blogosphere is not doing their own investigations here. They have neither the skillset nor the budget to do such a thing. They have to rely on people who are closer to experts in the area, and Oakley is the only one so far to offer insight. Of course they are going to go with it.

This is no different than what I see with science reporting on the regular. Not sure why this is surprising. Maybe I'm just jaded?
 