

Cache memory takes advantage of these principles to provide the CPU access to data stored in memory as efficiently as possible. There are several ways of designing caches to facilitate this goal in a range of different scenarios. As with most software architecture, each design comes with tradeoffs.

Direct Mapped

In a direct-mapped cache, each block of main memory is mapped to exactly one block of the cache. The corresponding cache block can be calculated by mem_addr % cache_block_count. This means block 5 of main memory would map to cache block 1 of a 4-block cache, counting from zero, because 5 % 4 = 1.

N-Way Set Associative

N-way set associative mapping divides the total cache blocks into "sets," such that a block of data from main memory can map to any of the N positions within a single set. For example, if an 8-block cache were 2-way set associative, it would be divided into 4 sets of 2 blocks each. Each set can hold 2 blocks of data from main memory, differentiated by their tag numbers. The set index is calculated using mem_addr % cache_set_count.

Fully Associative

Fully associative mapping is the limiting case of N-way set associative mapping: the entire cache is a single set containing every cache block. Effectively, this means that a block of memory from main memory can be written to any block in cache memory. Where 2-way set associative mapping divides an 8-block cache into 4 sets of 2 blocks, fully associative mapping treats those same 8 blocks as 1 set of 8. Because a lookup must compare tags against every block in the cache, fully associative caches can be hardware-intensive and often inefficient.
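The index calculations above can be sketched in a few lines of Python. This is a minimal illustration of the arithmetic only (the function names are my own, not from the article), not a cache implementation:

```python
def direct_mapped_index(block_addr, cache_block_count):
    """Direct mapped: each memory block maps to exactly one cache block."""
    return block_addr % cache_block_count

def set_associative_index(block_addr, cache_set_count):
    """Set associative: the block maps to one set and may occupy any way in it."""
    return block_addr % cache_set_count

# Direct mapped: block 5 in a 4-block cache lands in cache block 1 (5 % 4 = 1).
print(direct_mapped_index(5, 4))    # -> 1

# 2-way set associative: an 8-block cache has 4 sets; block 5 maps to set 1.
print(set_associative_index(5, 4))  # -> 1

# Fully associative: 1 set holding all blocks, so every address maps to set 0.
print(set_associative_index(5, 1))  # -> 0
```

Note that the fully associative case falls out of the same formula with a set count of 1, which is why it can be viewed as the limiting case of set associativity.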
