M4 Mac Announcements

I think Mac Pro and even old Intel iMac Pro modules have EMI shielding too.

I'm honestly unsure why the modules have EMI shields (sometimes?) when the same flash devices soldered directly to other Macs' logic boards usually do not. See, for example, the image of the 14" M4 Pro/Max logic board on this Apple Self Service Repair page:


Four locations for flash memory packages (two under each fan; only half are populated here, and there are probably four more locations on the reverse side of the board). No EMI can in sight.
If the black tape is just electrical insulation and not an EMI shield, as @Nycturne suggested, it could make sense for removable modules to have it (to protect them from static electricity damage when the user handles/replaces them) while the soldered-down modules wouldn't (as there's no need for a person to touch them directly).
 
If the black tape is just electrical insulation and not an EMI shield, as @Nycturne suggested, it could make sense for removable modules to have it (to protect them from static electricity damage when the user handles/replaces them) while the soldered-down modules wouldn't (as there's no need for a person to touch them directly).
Perhaps, but Apple is very fond of EMI cans that use black laminated tape (with one layer being metal foil) rather than sheet metal as the top of the can.
 
These dumb teardown videos also keep damaging it when they remove it. No surprise when that compromises the machine.
 
During a discussion I had with @leman on MR, I came across an article indicating that packaging the flash without an onboard controller may not be unique to Apple, since it appears there is a term of art for flash where the controller is not in the flash device itself: unmanaged flash.

"Using unmanaged devices, copy-on-write (COW), bit error correction, bad block tracking, read disturbance handling and other flash management tasks must be taken care of on the host side."

That got us to wondering whether there are standard methods for producing high-end unmanaged flash (leman pointed out the article appears to be about lower-end unmanaged flash devices) and, if so, how Apple's differs from these (or whether it differs). The bigger question is whether Apple needed to use a proprietary (and thus not-easily-replicable) approach to accomplish this.
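
To make that quote a bit more concrete, here's a rough C sketch of what a couple of those host-side tasks (bad block tracking, crude wear levelling) might look like. The data structures and function names are invented for illustration; they aren't taken from the article or from any real driver.

```c
/* Rough sketch of host-side flash management for "unmanaged" flash:
 * the host driver, not an on-device controller, tracks bad blocks and
 * wear counts. The NAND-facing details are omitted; everything here is
 * illustrative, not a real API. */
#include <stdint.h>
#include <stdbool.h>

#define NUM_BLOCKS 4096

struct flash_block_state {
    bool     bad;          /* retired after a failed erase/program */
    uint32_t erase_count;  /* wear counter used for wear levelling */
};

static struct flash_block_state blocks[NUM_BLOCKS];

/* Trivial wear levelling: pick the least-worn good block for the next write. */
static int pick_block_for_write(void)
{
    int best = -1;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (blocks[i].bad)
            continue;
        if (best < 0 || blocks[i].erase_count < blocks[best].erase_count)
            best = i;
    }
    return best;  /* -1 means no usable blocks are left */
}

/* Bad block tracking: bump the wear counter on success, retire on failure. */
static void note_erase_result(int block, bool ok)
{
    if (ok)
        blocks[block].erase_count++;
    else
        blocks[block].bad = true;
}
```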

 
Side note: I used Apple Migration Assistant to move from my MBP 16 to this MBP 14 over a direct connection, using the Thunderbolt 4 cable I usually have my Studio Display hooked up to my Studio with. Watching Migration Assistant on a connection clocking in at over 600Mb/sec was funny - it moved everything in about 10 minutes total.
 
"Using unmanaged devices, copy-on-write (COW), bit error correction, bad block tracking, read disturbance handling and other flash management tasks must be taken care of on the host side."

Makes me wonder if Apple is taking some sort of approach similar to ZFS with their controller - using the M-series SoC to do hash checks, etc. for content validation. Also, having the filesystem aware of the physical storage. They're already using the SoC for encryption for anyone who uses FileVault anyway, so it would make sense to move all of that workload to the SoC?

i.e., because of their OS/hardware integration, having a typical flash controller is both redundant and detrimental to the SoC's ability to manage the storage more intelligently.
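
For what it's worth, the ZFS-style idea would look something like this in C: keep a checksum per logical block and verify it on every read. The checksum and interface here are toy stand-ins for illustration only (ZFS itself uses fletcher4 or SHA-256), not anything Apple is known to do.

```c
/* Illustrative host-side content validation: the host records a checksum
 * for each logical block at write time and re-checks it on read.
 * crc32() is a stand-in for a stronger hash. */
#include <stdint.h>
#include <stddef.h>

#define BLOCK_SIZE 4096

static uint32_t crc32(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
    }
    return ~crc;
}

/* Returns 0 if the block no longer matches the checksum recorded at write
 * time, i.e. corruption was detected. */
static int read_and_verify(const uint8_t block[BLOCK_SIZE], uint32_t stored_crc)
{
    return crc32(block, BLOCK_SIZE) == stored_crc;
}
```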
 
Side note: I used Apple Migration Assistant to move from my MBP 16 to this MBP 14 over a direct connection, using the Thunderbolt 4 cable I usually have my Studio Display hooked up to my Studio with. Watching Migration Assistant on a connection clocking in at over 600Mb/sec was funny - it moved everything in about 10 minutes total.
Looking forward to this on Thursday :)
 
Makes me wonder if Apple is taking some sort of approach similar to ZFS with their controller - using the M-series SoC to do hash checks, etc. for content validation. Also, having the filesystem aware of the physical storage. They're already using the SoC for encryption for anyone who uses FileVault anyway, so it would make sense to move all of that workload to the SoC?

i.e., because of their OS/hardware integration, having a typical flash controller is both redundant and detrimental to the SoC's ability to manage the storage more intelligently.

This bolded bit is accurate based on my reading. Apple's done white papers on the SSD security for both A-series and M-series processors, along with the T2, if you want to read up on it.

Normally, an SSD controller manages the encryption of data at rest. Anything like FileVault sits on top of this, meaning you have two layers of encryption. The goal is that the NAND is coupled to the controller via the controller's encryption; if you want more protection, a second layer of encryption is needed, which does sap some performance.

Apple's approach integrates FileVault with the device-based keys used to encrypt the data at rest on the NAND. So it couples the NAND both to the SSD controller (in the SoC) and to your password. No second layer of encryption is required, so there's no performance hit to enable FileVault. The SSD controller isn't that special, beyond its integration with the AES Engine and the Secure Enclave, to my knowledge. But to have this level of integration done securely, it's better not to exchange key data over wires external to the die if at all possible.

The T2 chip embedded a Secure Enclave and AES Engine into itself to protect the keys used with the T2's SSD controller. The M-series and A-series chips integrate this on the SoC, meaning you don't have unencrypted SSD data exposed on your external PCIe traces like we did with the Intel machines.
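
To illustrate the difference in data flow only (this is not Apple's actual implementation - the cipher and key-derivation functions below are toy stand-ins): with a conventional stack every block gets encrypted twice, while the integrated approach derives a single key from both the hardware secret and the password and encrypts once.

```c
/* Toy sketch of the key-hierarchy point above. xor_stream() stands in for
 * AES-XTS; kdf() stands in for the Secure Enclave's key derivation. Names
 * and structure are illustrative assumptions, not Apple's actual API. */
#include <stdint.h>
#include <stddef.h>

/* Stand-in cipher: NOT real encryption, just enough to show the data flow. */
static void xor_stream(uint8_t *buf, size_t len, uint64_t key)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= (uint8_t)(key >> ((i % 8) * 8));
}

/* Stand-in KDF: mixes a hardware-bound key with a password-derived key. */
static uint64_t kdf(uint64_t hw_uid_key, uint64_t password_key)
{
    return hw_uid_key ^ (password_key * 0x9E3779B97F4A7C15ull);
}

/* Conventional SSD + software FDE: two passes over every block. */
static void write_block_two_layers(uint8_t *blk, size_t len,
                                   uint64_t drive_key, uint64_t filevault_key)
{
    xor_stream(blk, len, filevault_key);  /* software layer (FileVault-like) */
    xor_stream(blk, len, drive_key);      /* controller's at-rest layer */
}

/* Integrated approach: one pass, with a key derived from both secrets. */
static void write_block_integrated(uint8_t *blk, size_t len,
                                   uint64_t hw_uid_key, uint64_t password_key)
{
    xor_stream(blk, len, kdf(hw_uid_key, password_key));
}
```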
 
That got us to wondering whether there are standard methods for producing high-end unmanaged flash (leman pointed out the article appears to be about lower-end unmanaged flash devices) and, if so, how Apple's differs from these (or whether it differs). The bigger question is whether Apple needed to use a proprietary (and thus not-easily-replicable) approach to accomplish this.

The article seems to use this term roughly like this: if a software driver running on the main application processors manages the flash, the flash is "unmanaged", but if firmware running on dedicated microcontrollers embedded in a SSD/USB/eMMC/etc controller manages the flash, it's managed.

By that standard, Apple's SSDs are firmly in the "managed" category. They have their own processor core(s?) that run a copy of Apple's RTKit realtime OS and manage the flash so that the main OS doesn't need to worry about any of the details. The interface presented to the main system is NVMe with some custom Apple extensions. The extensions don't expose any flash management details, they're mostly there to support use of the Secure Enclave for encryption. (As I understand it, getting Apple SSDs to work under Linux required reverse engineering the details of how Apple extended NVMe and modifying the Linux NVMe driver to just kind of ignore the Apple extensions for now, since they're not trying to do Secure Enclave based FDE yet.)

Makes me wonder if Apple is taking some sort of approach similar to ZFS with their controller - using the M-series SoC to do hash checks, etc. for content validation. Also, having the filesystem aware of the physical storage. They're already using the SoC for encryption for anyone who uses FileVault anyway, so it would make sense to move all of that workload to the SoC?

i.e., because of their OS/hardware integration, having a typical flash controller is both redundant and detrimental to the SoC's ability to manage the storage more intelligently.
They're not integrating that deeply.

RE: ZFS checks, all SSD controllers checksum everything. In fact, they do more than checksum: they use advanced error correction codes that can repair many bad bits per disk block. This is required to make high-density flash function as a reliable storage medium, and it's a lot of what that article is talking about when it mentions the necessity of managing flash.

In fact, high density NAND is so unreliable that manufacturers just go ahead and give you more capacity to use on ECC and other overhead. As an example, a 2006 Micron white paper I just found mentions that pages in circa 2006 Micron NAND flash are 2112 bytes. Most applications would choose to split this page into a data payload of 2048 bytes and use the remaining 64 bytes to store an ECC syndrome and management data structures (pointers, wear counter, etc).

Modern NAND should generally have significantly more overhead than 64B per 2048B of usable data storage - the raw error rate has gone up over time, not down. (There was a one-time reduction that happened when they started building 3D NAND; the tremendous increase in density from going vertical allowed them to relax horizontal cell dimensions back to what they were a few generations before, which improved error rate.)
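
In C terms, that 2112-byte page layout looks roughly like the struct below. The 2048/64 split comes from the Micron numbers above; how the 64-byte spare area is carved up is my own illustrative guess, since the exact layout varies by vendor and firmware.

```c
/* Sketch of a circa-2006 raw NAND page: 2048 bytes of user data plus a
 * 64-byte spare ("OOB") area for ECC and management metadata. The field
 * split inside the spare area is an assumption for illustration. */
#include <stdint.h>

#define PAGE_DATA_BYTES  2048
#define PAGE_SPARE_BYTES 64   /* 2112 - 2048 */

struct nand_page {
    uint8_t data[PAGE_DATA_BYTES];

    /* Spare/OOB area: how it's used is the firmware's (or host driver's)
     * choice; this is just one plausible split. */
    struct {
        uint8_t  bad_block_marker;  /* factory/runtime bad block flag */
        uint8_t  reserved[3];
        uint32_t erase_count;       /* wear-levelling counter */
        uint32_t logical_block;     /* logical-to-physical mapping hint */
        uint8_t  ecc[52];           /* ECC syndrome bytes */
    } spare;
};

/* C11 check that the layout matches the raw page size. */
_Static_assert(sizeof(struct nand_page) == 2112, "page must be 2112 bytes");
```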
 
The article seems to use this term roughly like this: if a software driver running on the main application processors manages the flash, the flash is "unmanaged", but if firmware running on dedicated microcontrollers embedded in a SSD/USB/eMMC/etc controller manages the flash, it's managed.

By that standard, Apple's SSDs are firmly in the "managed" category. They have their own processor core(s?) that run a copy of Apple's RTKit realtime OS and manage the flash so that the main OS doesn't need to worry about any of the details. The interface presented to the main system is NVMe with some custom Apple extensions. The extensions don't expose any flash management details, they're mostly there to support use of the Secure Enclave for encryption. (As I understand it, getting Apple SSDs to work under Linux required reverse engineering the details of how Apple extended NVMe and modifying the Linux NVMe driver to just kind of ignore the Apple extensions for now, since they're not trying to do Secure Enclave based FDE yet.)
This article says that, on unmanaged flash, the flash management is done off-device rather than on the flash device. Since flash management is done by the controller, wouldn't that mean when the controller is off-device (as it is with the Apple flash chips), flash management is likewise off-device, and that this would thus be considered unmanaged flash (which they also refer to as "bare flash")?

I.e., simply put, wouldn't flash without a controller, aka bare flash, be considered unmanaged flash by definition?
 
This article says that, on unmanaged flash, the flash management is done off-device rather than on the flash device. Since flash management is done by the controller, wouldn't that mean when the controller is off-device (as it is with the Apple flash chips), flash management is likewise off-device, and that this would thus be considered unmanaged flash (which they also refer to as "bare flash")?

I.e., simply put, wouldn't flash without a controller, aka bare flash, be considered unmanaged flash by definition?
Well... how do you define "device", then? After all, there's no flash memory which does this kind of management itself. Flash process nodes aren't well-suited to making logic, so it doesn't make a lot of sense to integrate complex controllers into the same die. Does that mean all flash is unmanaged, since the controller's always off-device?

Basically, I don't think there's a point in finely parsing that article's use of language - it's a bit sloppy and you won't get anywhere. The reason I interpreted it as I did ("unmanaged" => a driver running on an application processor is responsible for flash management) is that this seems to cut to the chase. That article has a goal, which is to educate potential customers on why they might want to buy the company's software product, a customizable flash management driver core.
 
Basically, I don't think there's a point in finely parsing that article's use of language - it's a bit sloppy and you won't get anywhere. The reason I interpreted it as I did ("unmanaged" => a driver running on an application processor is responsible for flash management) is that this seems to cut to the chase. That article has a goal, which is to educate potential customers on why they might want to buy the company's software product, a customizable flash management driver core.

If you put that sort of flash management logic in an OS-level driver, how do you ever load the OS? At that point it can only work for extra storage and not the boot medium, right? Or you need some kind of über-dumb "always read this byte from this address to initialise" scheme, à la MBR-style boot partitions, with no wear levelling or any sort of flash memory management aside from what's baked into the BIOS or something similar, right? Perhaps a UEFI driver could work here, but it would again need to be embedded pre-flash on a ROM or something.
 
Well... how do you define "device", then? After all, there's no flash memory which does this kind of management itself. Flash process nodes aren't well-suited to making logic, so it doesn't make a lot of sense to integrate complex controllers into the same die. Does that mean all flash is unmanaged, since the controller's always off-device?

Basically, I don't think there's a point in finely parsing that article's use of language - it's a bit sloppy and you won't get anywhere. The reason I interpreted it as I did ("unmanaged" => a driver running on an application processor is responsible for flash management) is that this seems to cut to the chase. That article has a goal, which is to educate potential customers on why they might want to buy the company's software product, a customizable flash management driver core.
For our purposes here I'm defining the device as the removable physical object that contains the storage. [Yes, sometimes it's soldered and non-removable, but those aren't the implementations that are interesting to us for this discussion.] For most PCs, it's an SSD (which always contains a controller). For Apple, it's something like what I screenshotted below (which is controller-less).

And what do you mean, "the controller is always off-device"? On SSDs the controller is always on-device. From Wikipedia: "The primary components of an SSD are the controller and the memory used to store data.... Every SSD includes a controller." https://en.wikipedia.org/wiki/Solid-state_drive But I know you know all that, so there seems to be some miscommunication here. I think you've misunderstood what I'm asking.

So: I'm not asking about the software, I'm asking about the hardware. The article's discussion of software was interesting to me because it implies that bare hardware is standard in the industry. Specifically, my question was whether the high-end bare flash that Apple is using (and, again, by bare flash I mean it has no controller) is something that only Apple has commissioned, or if such devices are standard within the industry. If the latter, one can ask this question:

If OEM controllerless flash is standard in the industry, was it technologically necessary for Apple to commission an entirely proprietary solution, or could Apple have used an existing solution? If the latter, that would indicate Apple made it proprietary purely for business reasons (to ensure it would be difficult for after-market suppliers to replicate). After all, storage upgrades are probably a significant contributor to the Mac division's profit margins.

When Apple put the controller on the processor, everyone was talking as if this were a unique approach that Apple alone had chosen, so of course Apple had to come up with a proprietary solution - which explains why you can't just buy aftermarket upgrades for Macs with slotted storage. But that article implies this approach is not unique after all.




[attached screenshot: a controller-less Apple flash storage module]
 
If you put that sort of flash management logic in an OS-level driver, how do you ever load the OS? At that point it can only work for extra storage and not the boot medium, right? Or you need some kind of über-dumb "always read this byte from this address to initialise" scheme, à la MBR-style boot partitions, with no wear levelling or any sort of flash memory management aside from what's baked into the BIOS or something similar, right? Perhaps a UEFI driver could work here, but it would again need to be embedded pre-flash on a ROM or something.
Well... what's the boot medium? That gets a bit murky and I'll try to illustrate why with the Apple Silicon boot process. Here are its bootloader stages, and where they're stored:

Stage 0: Boot ROM - Stored in mask ROM in the SoC
Stage 1: "Low Level Boot" - Stored in unmanaged NOR flash
Stage 2: "iBoot" - Stored in managed NAND flash (AKA the boot SSD)
"stage 3": The XNU kernel - Stored in the SSD

Stage 0 is very simple. It's the root of trust in their secure boot scheme, and mask ROM is both small and read-only. There's not enough space for complex software, and even if there was, they wouldn't take advantage of it. Complexity increases the risk of bugs, and they must live with any bugs in this ROM forever.

That's why they picked low density NOR flash to store stage 1. As long as you don't write to it a lot, NOR doesn't need management because it's far less dense and far more inherently reliable than NAND flash. Besides stage 1, NOR contains firmware images for every Apple Silicon subsystem which must be brought up to make it possible to load and run stage 2. So, this is where NAND flash management starts up in Apple platforms - stage 1 reads a firmware image stored in NOR, points the SSD management CPU at it, and releases it from reset. After the SSD firmware finishes initializing, there's a live NVMe device that stage 1 (and subsequent software) can use to access the SSD's contents.
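
As a very rough sketch of that stage-1 sequence: every function, address, and size below is a made-up placeholder - the real LLB code and register layout aren't public - but the shape of the flow is what's described above: read the SSD firmware out of NOR, hand it to the storage coprocessor, release it from reset, and wait for an NVMe controller to appear.

```c
/* Illustrative stage-1 storage bring-up. The extern functions are
 * placeholders for hardware access that would be platform-specific. */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

extern void nor_read(uint32_t offset, void *dst, size_t len);
extern void coproc_load_image(const void *image, size_t len);
extern void coproc_release_reset(void);
extern bool nvme_controller_ready(void);

#define SSD_FW_NOR_OFFSET 0x100000u  /* assumed location of the SSD firmware */
#define SSD_FW_MAX_SIZE   0x080000u  /* assumed maximum image size */

static uint8_t fw_image[SSD_FW_MAX_SIZE];

/* Bring up the NAND storage coprocessor so later stages see an NVMe device. */
bool stage1_start_storage(void)
{
    nor_read(SSD_FW_NOR_OFFSET, fw_image, sizeof fw_image); /* firmware from NOR */
    coproc_load_image(fw_image, sizeof fw_image);           /* point the coprocessor at it */
    coproc_release_reset();                                  /* let it boot */

    /* Poll until the coprocessor's NVMe front end reports ready. */
    for (int i = 0; i < 1000000; i++) {
        if (nvme_controller_ready())
            return true;
    }
    return false;  /* storage never came up */
}
```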

Other arrangements are possible! If Apple didn't want to hide NAND flash management from the application processors, they wouldn't need different external hardware. Fold NAND flash management into stage 1's code, and Bob's your uncle. Doesn't even have to be the same exact code as the eventual XNU flash management driver, as long as both agree on the data structures stored in the raw flash.

Another possibility is what we did in a chip I once worked on. It needed to boot Linux with a minimum of external components - so, no separate NOR flash firmware chip, only one NAND. This meant baking some very basic flash management into the mask ROM and hardware (we had a hardware engine for the BCH error correction, or whatever algorithm it was - don't remember exactly). I wasn't directly involved so I don't know or remember all the details, but I'm sure it was something along the lines of "the chip knows how to read a fixed size bootstrap program from a fixed address in flash, so that's where you put your stage 1 bootloader. Don't update it often or you'll wear out that location in flash and brick the device." Stage 1 could then contain software for more sophisticated wear-levelled management of the rest of the flash.
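
A minimal sketch of that kind of ROM boot path, with invented names, addresses, and sizes (the chip I'm describing from memory obviously differed in the details): the mask ROM only knows how to pull a fixed-size stage-1 image from a fixed NAND location, relying on a hardware ECC engine to repair each page as it's read.

```c
/* Illustrative mask-ROM loader for a fixed-location bootstrap in raw NAND.
 * nand_read_page_ecc() is a placeholder for the NAND + hardware ECC engine. */
#include <stdint.h>
#include <stdbool.h>

#define BOOT_NAND_BLOCK  0u     /* stage 1 always lives at block 0 */
#define BOOT_IMAGE_PAGES 64u    /* fixed-size bootstrap image */
#define PAGE_DATA_BYTES  2048u

/* Reads one page; returns false if the error corrector couldn't repair it. */
extern bool nand_read_page_ecc(uint32_t block, uint32_t page,
                               uint8_t out[PAGE_DATA_BYTES]);

/* Copy the bootstrap into on-chip SRAM; the caller would then verify and
 * jump to it. An uncorrectable error here means the device won't boot. */
bool rom_load_stage1(uint8_t *sram_dst)
{
    for (uint32_t p = 0; p < BOOT_IMAGE_PAGES; p++) {
        if (!nand_read_page_ecc(BOOT_NAND_BLOCK, p,
                                sram_dst + p * PAGE_DATA_BYTES))
            return false;
    }
    return true;
}
```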
 
Now we need OWC to reverse engineer the board and provide upgrade SSDs. Get moving, OWC.

A group from the other place (on the Mac Studio forums) have a working solution for the Mac Studio. Maybe they'll do the Mini too...



Hmm, so the second board is the power supply. In the animation of the thermals it looked like the power supply was a smaller board below.
Ask and ye shall receive


Unfortunately not identical to the Studio drives (because of course not), but still
 
Ask and ye shall receive


Unfortunately not identical to the Studio drives (because of course not), but still
I wonder if Apple could brick the PolySoft NAND, if it so desired, with a firmware update.
 