doesn’t look like it? I read somewhere that on the POWER10, the L3 cache is actually a collaboration between the L2s of all the cores.
What you're describing is called an inclusive cache hierarchy. As Cliff laid out, inclusivity isn't a requirement. Lots of designers have used other arrangements.

Pardon me for correcting your math, but the correct number is 192K. Having an Ln cache that contains lines that are not present in Ln+1 is highly problematic and will likely result in subtle bugginess. Ln+1 will contain all the lines that are in Ln, so the capacities are not additive.
One thing which might help is to put some effort into really understanding how a direct-mapped cache works, then think of a 2-way set associative cache as two identical direct-mapped caches which, with the addition of a little glue logic, work in tandem.
This is a helpful way to think about it because it's not even an analogy - each way of a set associative cache could function as a direct-mapped cache, if separated from its peers.
It’s been years since I last went into caches this deeply, so I hope this is correct, but it might not be…
With direct mapping, cache lines that share the same address offset within a page compete for the same slot: a cached line is evicted as soon as you access the same offset in a different page.
You can think of associativity as keeping several cache lines with the same offset, from different pages, resident at once. Of course this makes replacement a bit more complicated: which of the cache lines do you evict when all are in use already?
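To make that concrete, here is a toy sketch (my own illustrative code, not taken from any real design): one slot per offset behaves like a direct-mapped cache, N slots per offset behaves like an N-way one, and the list order stands in for least-recently-used bookkeeping.

```python
# Toy model: one slot per in-page offset (direct-mapped) vs. several
# slots per offset (set associative). Illustrative only.

class ToySetAssocCache:
    def __init__(self, ways):
        self.ways = ways
        self.slots = {}  # offset -> list of pages, most recent last

    def access(self, page, offset):
        lines = self.slots.setdefault(offset, [])
        if page in lines:            # hit: refresh the LRU order
            lines.remove(page)
            lines.append(page)
            return "hit"
        if len(lines) == self.ways:  # all ways in use: evict the
            lines.pop(0)             # least recently used (front)
        lines.append(page)
        return "miss"

direct = ToySetAssocCache(ways=1)       # direct-mapped = 1-way
print(direct.access(page=0, offset=4))  # miss
print(direct.access(page=1, offset=4))  # miss, evicts page 0's line
print(direct.access(page=0, offset=4))  # miss again

twoway = ToySetAssocCache(ways=2)
print(twoway.access(page=0, offset=4))  # miss
print(twoway.access(page=1, offset=4))  # miss, but both now resident
print(twoway.access(page=0, offset=4))  # hit
```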
Some early ARM processors had fully-associative L1 caches. In this case the cache was sometimes referred to as CAM (content addressable memory), because it operates similarly to a hash.
As I said, this is from very hazy memory, so take this with a grain of salt.
If you are really interested, I’ll have to go through some of my books.
I was going to tell you about how a very handsome man once wrote his PhD dissertation on caches and discussed associativity in detail with pictures and stuff, but when I went looking for the link, I realized that sometime in the last few months my alma mater finally took down my website. It had been there since 1996! Bastards.
Broke some links in some of the articles I wrote here, too.
Anyway…
Associativity is all about *where* data corresponding to a given address is allowed to be placed in the cache.
The cache is smaller than the memory it is caching (obviously). So it’s not a 1:1 mapping. If I want to cache the contents of memory address 0001, let’s say, where do I put that in the cache?
In a “direct mapped” cache, there is only one possible place to put it. I may have very simple circuitry that says anything that matches the pattern 000? goes in the first row of my cache. So 0001, 0009, and 000F all go in the first row. The first row can only hold one of them at a time, so if I’m storing 0001 and there’s a memory access to 0003, then I have to discard 0001.
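Here is a minimal sketch of that example (illustrative code and sizes of my own; it picks the row from the upper bits purely to reproduce the 000? pattern above, whereas real caches usually index by the low bits):

```python
# Toy direct-mapped cache following the example: addresses matching
# 000? all land in row 0. Each row holds one (tag, data) pair.

NUM_ROWS = 16
cache = [None] * NUM_ROWS

def access(addr):
    row = (addr >> 4) % NUM_ROWS  # 000? -> row 0
    tag = addr & 0xF              # toy tag: just the low nibble,
                                  # enough to tell 0001 from 0003
    entry = cache[row]
    if entry is not None and entry[0] == tag:
        return "hit"
    cache[row] = (tag, f"data@{addr:04X}")  # miss: whatever was in
    return "miss"                           # this row is discarded

print(access(0x0001))  # miss -> row 0 now holds 0001
print(access(0x0003))  # miss -> 0001 is discarded
print(access(0x0001))  # miss again: 0003 got evicted
```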
In a 2-way set associative cache, each row of the cache can store the contents of two different memory addresses. So, in the first row I could store both 0001 and 0003. This complicates the circuitry and slows the cache, because if I already have 0001 and 0003, and there’s a memory access to 000F, I have to decide which of 0001 and 0003 I am going to discard (I usually discard the one that is “Least Recently Used,” so I need to keep track of that). I also have to have some way to find where, in each set, each address is stored - so I store some of the address bits along with each entry. This is called the “tag.” I have to store a tag with a direct mapped cache, too, but here I have to do two tag comparisons, one for each entry in the set.
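And the same toy extended to 2-way (again, my own illustrative code): each row holds up to two (tag, data) entries, a lookup compares the tag against both ways, and on a conflict the least recently used entry is discarded.

```python
# Toy 2-way set associative cache. Each row: up to two (tag, data)
# entries, most recently used at the end of the list.

NUM_ROWS = 16
rows = [[] for _ in range(NUM_ROWS)]

def access(addr):
    row = rows[(addr >> 4) % NUM_ROWS]
    tag = addr & 0xF                  # toy tag, as before
    for entry in row:                 # compare tag against both ways
        if entry[0] == tag:
            row.remove(entry)         # hit: mark most recently used
            row.append(entry)
            return "hit"
    if len(row) == 2:                 # both ways full: discard LRU
        row.pop(0)
    row.append((tag, f"data@{addr:04X}"))
    return "miss"

print(access(0x0001))  # miss
print(access(0x0003))  # miss; row 0 now holds both
print(access(0x0001))  # hit
print(access(0x000F))  # miss; 0003 is least recently used, so it goes
print(access(0x0003))  # miss
```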
An 8-way set associative cache works the same way, allowing you to store the contents of 8 addresses that otherwise would collide, all in the same cache row.
In a *fully associative* cache, the contents of any memory address can go anywhere in the cache. These are fun to model, but are rarely seen in the wild.
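They really are fun to model; a fully associative LRU cache is a few lines of Python (the capacity here is an arbitrary illustrative number):

```python
from collections import OrderedDict

# Toy fully associative cache: any address can occupy any of the
# CAPACITY slots, with LRU replacement. Illustrative only.
CAPACITY = 4
cache = OrderedDict()  # addr -> data, least recently used first

def access(addr):
    if addr in cache:
        cache.move_to_end(addr)    # hit: mark most recently used
        return "hit"
    if len(cache) == CAPACITY:
        cache.popitem(last=False)  # full: evict the LRU entry
    cache[addr] = f"data@{addr:04X}"
    return "miss"

for a in (0x1, 0x2, 0x3, 0x4, 0x1, 0x5, 0x2):
    print(hex(a), access(a))
# 0x1 becomes a hit; inserting 0x5 evicts 0x2 (the LRU entry), so
# the final access to 0x2 misses.
```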
Typically you are using an N-way set associative cache, where N>1. You use the rightmost address bits to determine which row of the cache to use, then use the next set of bits as the tag, to keep track of what is in each position within the set. (You ignore the very rightmost bits, because those select a byte within a word, and you are typically addressing whole words, not individual bytes.)
so:

[tag][row][ignored]

or whatever (the width of the ignored field depends on your addressable data chunk size)
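To make the field split concrete, here is a sketch with made-up widths (2 offset bits for 4-byte words, 4 row bits for 16 rows, everything else tag):

```python
# Carving an address into fields, with illustrative widths.
OFFSET_BITS = 2  # 4-byte words: low 2 bits are ignored by the cache
ROW_BITS = 4     # 16 rows

def split(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)          # ignored
    row = (addr >> OFFSET_BITS) & ((1 << ROW_BITS) - 1)
    tag = addr >> (OFFSET_BITS + ROW_BITS)            # stored per entry
    return tag, row, offset

print(split(0x1234))  # -> (72, 13, 0): tag 0x48, row 0xD, offset 0
```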