Yoused
I have been keeping an eye on a certain type of magnetic memory that is looking more and more promising. It is approaching DRAM/SRAM speeds while also pushing toward low power requirements and, unlike Flash, it supports discrete data access (by bytes, not by blocks). It has the advantage of eliminating the DRAM refresh cycle, further reducing a system's power needs, and it seems to have essentially limitless cycle endurance.
The one major downside, of course, is that its data is stored in magnetic domains, meaning that a compact device like a phone or MBA would need some kind of shielding to protect it from stray fields. I am not clear on what that requirement would look like, but it might make a phone using this kind of memory heavier or thicker than what we are used to, and it could make some types of accessories unusable. Wireless charging could be an issue.
But if, in a few years, it starts to penetrate the market in practical ways (TSMC is working on two of the memory types), will we be ready for it?
If they get the access times down and the densities up, this type of NVRAM could replace both Flash and DRAM at the same time. If you have a 512GB computer, that 512GB becomes both memory and storage. That means your device no longer has a sleep state distinct from the off state (other than that networking is not active while it is off).
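To make that concrete, here is a minimal sketch of what "sleep" collapses to if main memory is persistent: flush whatever is still volatile (register context, dirty cache lines) out to NVRAM and cut power. The names here (nvram resume_point, cache_flush_all, save_registers, power_off) are hypothetical stand-ins, not any real OS interface.

```c
#include <stdint.h>

struct cpu_context {
    uint64_t regs[32];   /* general-purpose registers                */
    uint64_t pc;         /* program counter to resume from           */
    uint64_t flags;
};

/* Assumed to live in NVRAM, so it survives power-off. */
extern struct cpu_context resume_point;

extern void save_registers(struct cpu_context *ctx); /* volatile state -> NVRAM */
extern void cache_flush_all(void);                   /* write back dirty lines  */
extern void power_off(void);

void system_sleep(void)
{
    save_registers(&resume_point);  /* the only state that is not already persistent */
    cache_flush_all();              /* nothing dirty left in SRAM caches             */
    power_off();                    /* at this point "sleep" and "off" are the same  */
}
```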
But what does this do to OS design? If your working RAM is unified with your storage memory, this creates all kinds of weird issues. When you install a program, the system would splat the original file into its native operating form, so loading a program becomes a simple matter of mapping it into a set of pages and jumping into it. Moreover, the environment of some applications could simply be stored as-is, and the program would be freed of the task of reconstructing its workspace. App loading would be effectively instantaneous.
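A rough sketch of what that "load" might look like, assuming the installer has already laid the program out in runnable form somewhere in NVRAM. The image header and the map_pages() call are made up for illustration; the point is that there is no read from storage, no copy into RAM, just page-table setup and a jump.

```c
#include <stddef.h>
#include <stdint.h>

struct app_image {
    void     *base;        /* where the installed image lives in NVRAM */
    size_t    text_size;   /* executable pages                         */
    size_t    data_size;   /* writable pages (the saved workspace)     */
    uintptr_t entry_off;   /* offset of the entry point within text    */
};

/* Hypothetical kernel call: wire existing NVRAM pages into the process's
 * address space with the given protections; no copying involved. */
extern void *map_pages(void *nvram_base, size_t len, int prot);

#define PROT_RX 0x5
#define PROT_RW 0x3

typedef void (*entry_fn)(void);

void launch(const struct app_image *img)
{
    void *text = map_pages(img->base, img->text_size, PROT_RX);
    map_pages((char *)img->base + img->text_size, img->data_size, PROT_RW);

    entry_fn start = (entry_fn)((uintptr_t)text + img->entry_off);
    start();   /* "loading" collapses to mapping plus a jump */
}
```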
Data files would be a similar situation. They could just be mapped directly into memory, which is wonderful for a file that tolerates in-place modification, though some files might be better mapped into read-only pages when opened and remapped as writable only when actually written to, to prevent unintended damage.
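Here is the read-only-until-written idea sketched with ordinary POSIX calls as a stand-in for whatever a unified-memory OS would actually expose: map the document read-only on open, and flip the pages to writable only when an edit actually begins. Error handling is omitted, and begin_edit() is an assumed convention, not a real API.

```c
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stddef.h>

void *open_document(const char *path, size_t *len_out)
{
    int fd = open(path, O_RDWR);
    struct stat st;
    fstat(fd, &st);
    *len_out = (size_t)st.st_size;

    /* Shared, read-only mapping: "opening" costs one page-table setup,
     * and an accidental store faults instead of corrupting the file. */
    void *doc = mmap(NULL, *len_out, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);          /* the mapping stays valid after close */
    return doc;
}

int begin_edit(void *doc, size_t len)
{
    /* Remap as writable only when the user actually edits. */
    return mprotect(doc, len, PROT_READ | PROT_WRITE);
}
```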
All of this makes the traditional file system obsolete, at least at the local system-volume level. How a computer with unified storage/memory would be optimally organized remains an open question, but contemporary operating systems are simply not ready to be ported to it.
And, of course, most storage is at least somewhat compressed. Some files will just not work right in the unified layout, yet expanding them from their stored form into another part of NVRAM seems like a waste of time and space. I wonder whether there is a compression scheme that could expand file data into a cache-like buffer and recompress it into a storage format in a randomly-accessible way. I can imagine a SoC that includes dedicated units that would manage this transparently to a running process.
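One common way to get random access into compressed data, sketched here with zlib as a software stand-in for whatever hardware units the SoC might provide: compress the file in fixed-size chunks, keep a table of compressed offsets, and decompress only the chunk a read actually touches into a small cache-like buffer. The structures and the 64 KiB chunk size are assumptions for illustration.

```c
#include <zlib.h>
#include <stdint.h>

#define CHUNK_SIZE (64 * 1024)          /* uncompressed bytes per chunk */

struct chunk_entry {
    uint64_t offset;                    /* where the compressed chunk starts */
    uint32_t clen;                      /* compressed length                 */
};

struct packed_file {
    const unsigned char *store;         /* compressed data, living in NVRAM */
    const struct chunk_entry *table;    /* one entry per chunk              */
    uint64_t length;                    /* uncompressed file size           */
};

/* Decompress just the chunk containing `pos` into the caller's staging
 * buffer, and report how many bytes of it are valid. */
static long read_chunk_at(const struct packed_file *pf, uint64_t pos,
                          unsigned char buf[CHUNK_SIZE])
{
    if (pos >= pf->length)
        return -1;

    const struct chunk_entry *e = &pf->table[pos / CHUNK_SIZE];

    uLongf out_len = CHUNK_SIZE;
    int rc = uncompress(buf, &out_len, pf->store + e->offset, e->clen);
    return (rc == Z_OK) ? (long)out_len : -1;
}
```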
And, of course, there is the issue of programs crashing. With no separation between working memory and storage, whatever a crashed program has mangled in its persistent workspace stays mangled across a relaunch, so the crash daemon has to be able to figure out what it must scrub before the program can be allowed to restart. Nuke-and-pave is obviously the easiest approach, but also the least efficient.
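A hedged sketch of how a crash daemon might make that call: give each app's persistent workspace a small header with a clean-shutdown flag, cleared at launch and set on orderly exit. If the flag is not set at relaunch, the workspace is suspect and gets scrubbed. Every name here is hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

struct workspace_header {
    uint32_t magic;          /* identifies a valid workspace            */
    uint32_t generation;     /* bumped on every clean save              */
    bool     clean_shutdown; /* cleared at launch, set on orderly exit  */
};

#define WS_MAGIC 0x57534831u /* arbitrary marker value */

/* Assumed helpers provided by the system. */
extern void scrub_workspace(struct workspace_header *ws);        /* nuke-and-pave   */
extern bool verify_workspace(const struct workspace_header *ws); /* cheaper check   */

/* Called by the launcher before mapping the workspace back in. */
bool workspace_usable(struct workspace_header *ws)
{
    if (ws->magic != WS_MAGIC || !ws->clean_shutdown) {
        /* Crash or corruption last time: fall back to nuke-and-pave. */
        scrub_workspace(ws);
        return false;
    }
    /* Clean exit last time: a lighter consistency check may suffice. */
    return verify_workspace(ws);
}
```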
It seems like Apple is probably better positioned to handle this sort of transition, with their somewhat modular OS that can be ported more easily. I like to hope that they are already on top of this, so that when we get to the NVRAM revolution, they will be ready to make the most of it.