Cache Memory: Performance and Mapping


Cache Memory Performance:

When the processor needs to read or write a location in main memory, it first checks for a corresponding entry in the cache. The performance of cache memory is measured in terms of the 'hit ratio'. If a data item requested by the CPU is found in the cache, it is called a 'hit'. If the requested data item is not found, it is called a 'miss'. The hit ratio is the number of hits divided by the total number of requests.

That is,

Hit Ratio = Number of hits / (Number of hits + Number of misses)

For example, suppose a computer has a main memory access time of 500 ns (500 x 10^-9 seconds), a cache access time of 100 ns, and a hit ratio of 0.8. Then the average memory access time will be 180 ns, since:

Average access time = hit ratio x cache access time + (1 - hit ratio) x main memory access time = (0.8 x 100) + (0.2 x 500) = 180 ns

The hit ratio is measured by running many sample programs and counting the number of hits and misses in a given time interval. Hit ratios of 0.9 and higher have been reported in many programming situations. Such high hit ratios support the theory of locality of reference.

Remember: Cache memory is a very high-speed memory built into the processor, and it sits between main memory (RAM) and the processor.

Concept of Mapping:

A CPU is connected to the main memory via cache memory. In order to fetch a word, the CPU sends an n-bit address to the cache memory. If there is a miss, the required word is searched for in the main memory and read from there. There are different methods to search for a word in the cache memory. These mapping methods are:

  1. Associative Mapping
  2. Direct Mapping
  3. Set Associative Mapping

Associative Mapping:

In associative mapping, the contents of cache memory are not associated with any address. Data stored in the cache memory are not accessed by specifying an address. Instead, data (or part of the data) are searched for by matching against the actual contents. The data are then accessed by their contents.

In the Associative mapping method, both the word and the address of the word (in the memory) are stored in the cache. The address bits are sent by the CPU to search, and if any address is matched, the corresponding word is fetched from the cache and sent to the CPU.

The word is searched for in the main memory if no match is found in the cache memory. The word and its address are then copied from the main memory into the cache. This is done because the word is likely to be referenced again in the future (due to the locality of reference property). If the cache is full, an existing word along with its address must be removed to make room for the new word.
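The behaviour described above can be sketched as a toy fully associative cache. This is a minimal illustration, not a hardware model: the `AssociativeCache` class and its FIFO eviction policy are assumptions chosen for simplicity (the text does not specify a replacement policy).

```python
from collections import OrderedDict

class AssociativeCache:
    """Toy fully associative cache: any address can occupy any slot.
    Eviction is FIFO here, chosen only for illustration."""
    def __init__(self, size, main_memory):
        self.size = size
        self.main = main_memory           # dict: address -> word
        self.lines = OrderedDict()        # cached address -> word pairs

    def read(self, addr):
        if addr in self.lines:            # hit: address matched by content search
            return self.lines[addr], True
        word = self.main[addr]            # miss: fetch from main memory
        if len(self.lines) >= self.size:  # cache full: evict oldest entry
            self.lines.popitem(last=False)
        self.lines[addr] = word           # copy word and its address into cache
        return word, False

mem = {a: a * 10 for a in range(8)}
cache = AssociativeCache(size=2, main_memory=mem)
print(cache.read(3))  # (30, False) -- miss, fetched from main memory
print(cache.read(3))  # (30, True)  -- hit, found in the cache
```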

Note: Associative mapping has the advantage that it is a very fast access method, but it has the disadvantage that it is very expensive and complicated because of the complex logic circuits that are required to implement data searching by content and not by address.

Due to the high cost of the logic circuits required to implement associative mapping, other methods are used in which data in cache memory are accessed by address, namely direct mapping and set-associative mapping.

Direct Mapping:

Suppose a computer has 4K main memory, i.e. 4 x 1024 bytes, and 1K cache memory. To address a word in the main memory, a 12-bit address is required (2^12 = 4096 = 4 x 1024). Similarly, to address a word in the cache memory, a 10-bit address is required (2^10 = 1024 = 1 x 1024).

In such a case, we see that cache memory needs 10 bits to address a word and main memory requires 12 bits to address a word. In the direct mapping method, the 12-bit address sent by the CPU is divided into two parts called tag field and index field. The tag field will contain 2 bits and the index field will have 10 bits.

Note: The index field has a number of bits equal to the number of bits required to address a word in the cache.

Thus, if a computer has a main memory of capacity 2^m words and a cache memory of 2^n words, then the m address bits are divided into an n-bit index field and an (m - n)-bit tag field.

In the direct mapping method, the cache memory stores the word as well as the tag field. The word is stored at the location in the cache given by the index field of its address.
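The address split described above can be sketched in a few lines, using the 4K main memory / 1K cache example from the text (2-bit tag, 10-bit index). The helper names are hypothetical, chosen only for this illustration:

```python
INDEX_BITS = 10  # 1K cache -> 10-bit index field

def split_address(addr):
    index = addr & ((1 << INDEX_BITS) - 1)   # low 10 bits select the cache line
    tag = addr >> INDEX_BITS                 # remaining high bits are the tag
    return tag, index

def read(cache, memory, addr):
    """cache: list of (tag, word) entries or None, indexed by the index field."""
    tag, index = split_address(addr)
    entry = cache[index]
    if entry is not None and entry[0] == tag:
        return entry[1], True                # hit: stored tag matches
    word = memory[addr]                      # miss: fetch and overwrite the line
    cache[index] = (tag, word)
    return word, False

memory = [2 * a for a in range(4096)]        # toy memory: word at addr a is 2a
cache = [None] * 1024
print(read(cache, memory, 0x5A3))            # (2886, False) -- miss
print(read(cache, memory, 0x5A3))            # (2886, True)  -- hit
```

Note that two addresses sharing the same index but with different tags would overwrite each other, which is the limitation set-associative mapping addresses.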

Set Associative Mapping:

The problem with the direct mapping technique is that words having the same index but different tags cannot both be stored in the cache; only one word-tag pair can be stored per index. This considerably reduces the hit ratio. If two word-tag pairs can be stored per index, it is called two-way set-associative mapping; if three can be stored, it is called three-way set-associative mapping; and so on.
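A two-way version of the direct-mapping sketch might look as follows: each index now selects a set that holds up to two (tag, word) pairs, so two addresses sharing an index can coexist. The FIFO eviction within a set is an assumption made for illustration.

```python
INDEX_BITS = 10
WAYS = 2  # two word-tag pairs per set -> two-way set-associative

def split_address(addr):
    return addr >> INDEX_BITS, addr & ((1 << INDEX_BITS) - 1)

def read(sets, memory, addr):
    """sets: one list of up to WAYS (tag, word) pairs per index."""
    tag, index = split_address(addr)
    for t, w in sets[index]:
        if t == tag:
            return w, True                   # hit in one of the ways
    word = memory[addr]                      # miss: fetch from main memory
    if len(sets[index]) >= WAYS:             # set full: evict oldest pair (FIFO)
        sets[index].pop(0)
    sets[index].append((tag, word))
    return word, False

memory = list(range(4096))                   # toy memory: word at addr a is a
sets = [[] for _ in range(1024)]
a, b = 0x0A3, 0x4A3                          # same index, different tags
read(sets, memory, a); read(sets, memory, b)
print(read(sets, memory, a))                 # (163, True): both words coexist
```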

Note: The transfer of data as a block from main memory to cache memory is referred to as cache mapping.
