Caches

Cards (14)

  • A dynamic RAM (DRAM) is about 10 times slower than a static RAM (SRAM).
  • A DRAM cache miss is far more expensive than an SRAM cache miss: SRAM misses are served from DRAM, while DRAM misses are served from disk.
  • DRAM caches are usually fully associative due to the high miss penalty.
  • DRAM caches also tend to use more sophisticated replacement algorithms than SRAM caches (which are typically direct-mapped or set-associative). DRAM caches always use write-back policies rather than write-through, again because of the large disk access penalty.
  • To support DRAM caching of virtual pages, the operating system maintains a page table that keeps track of whether each virtual page is cached in DRAM and whether it has been allocated. Each page table entry (PTE) consists of a valid bit along with an address: the physical address of the page when it is cached, or its location on disk when it is allocated but not cached.
  • The address translation hardware uses the page table to translate virtual addresses into physical addresses. The operating system is responsible for maintaining the page table.
  • If virtual addresses are n bits wide and the page offset is p bits (so each page is 2^p bytes), then the page table needs 2^n / 2^p = 2^(n-p) entries, since each virtual page must have an entry in the page table (a worked example follows the card list).
  • If the valid bit is 0, then the virtual page has not been cached in physical memory.
  • A DRAM-cache hit occurs when the valid bit of the page table entry is 1. In this case, the address translation hardware uses the physical address stored in the entry to fetch the data from physical memory.
  • A DRAM-cache page fault occurs when the virtual page is not cached in DRAM. When a page fault happens, the kernel selects a victim page (using its replacement algorithm) and evicts it from DRAM. The desired virtual page then replaces the victim page and the page table is updated (a sketch of this lookup logic follows the card list).
  • When a virtual page is allocated, the kernel updates its page table entry to point to the page's location on disk; the valid bit remains 0 until the page is actually brought into DRAM. It is therefore possible for a virtual page to be allocated without being cached in DRAM.
  • Virtual pages are stored on disk, while physical pages live in DRAM. Fetching data from disk is much more expensive than fetching it from DRAM, which is why the system first checks the page table (itself stored in DRAM) for a cache hit. If the page is not cached, a victim page is selected and the desired virtual page is fetched from disk into physical memory (hopefully to be accessed again in the near future).
  • The principle of locality proposes that memory that has been accessed recently, or memory near it, is likely to be used again soon. This is a reasonable assumption for many programs, and it motivates the design of caches, which store recently fetched data in faster, more expensive levels of the memory hierarchy (a small code illustration follows the card list).
  • The set of virtual pages a program is actively using at any given time is referred to as its working set. A good cache tries to keep the working set resident for the duration of the program's run.
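
Worked example for the page-table-size card above. The numbers here (32-bit virtual addresses, 4 KB pages) are illustrative assumptions, not values taken from the cards: with n = 32 and p = 12, the table needs 2^32 / 2^12 = 2^20, i.e. about one million entries. The small C program below just evaluates that arithmetic.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Illustrative assumptions: 32-bit virtual addresses, 4 KB pages. */
        const unsigned n = 32;                 /* bits in a virtual address */
        const unsigned p = 12;                 /* bits in the page offset   */

        uint64_t page_size = UINT64_C(1) << p;        /* 2^p  = 4096 bytes  */
        uint64_t num_ptes  = UINT64_C(1) << (n - p);  /* 2^(n-p) = 2^20     */

        printf("page size:          %llu bytes\n", (unsigned long long)page_size);
        printf("page table entries: %llu\n", (unsigned long long)num_ptes);
        return 0;
    }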
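
The C sketch below illustrates the hit/fault logic from the address-translation and page-fault cards, under heavy simplifying assumptions: a single-level page table, no permission bits, and a stand-in fault handler. All names here (pte_t, translate, handle_page_fault) are hypothetical and not taken from any real kernel.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12                              /* assume 4 KB pages (p = 12) */
    #define PAGE_OFFSET_MASK ((UINT64_C(1) << PAGE_SHIFT) - 1)

    /* Hypothetical single-level page table entry: a valid bit plus the
     * physical page number when cached, or a disk location when not.   */
    typedef struct {
        bool     valid;      /* 1: the virtual page is cached in DRAM        */
        uint64_t ppn;        /* physical page number, meaningful when valid  */
        uint64_t disk_addr;  /* where the page lives on disk otherwise       */
    } pte_t;

    /* Stand-in page fault handler.  A real kernel would pick a victim page
     * with its replacement algorithm, write it back if dirty, read this page
     * in from disk_addr, and only then mark the entry valid.                */
    static void handle_page_fault(pte_t *pte) {
        pte->ppn   = 0;        /* pretend the kernel gave us physical page 0 */
        pte->valid = true;     /* the access below will now be a DRAM hit    */
    }

    /* Translate a virtual address to a physical address using the table. */
    static uint64_t translate(pte_t *page_table, uint64_t vaddr) {
        uint64_t vpn    = vaddr >> PAGE_SHIFT;         /* virtual page number */
        uint64_t offset = vaddr & PAGE_OFFSET_MASK;    /* offset within page  */
        pte_t   *pte    = &page_table[vpn];

        if (!pte->valid)            /* valid bit 0: DRAM cache miss (page fault) */
            handle_page_fault(pte);

        /* Valid bit 1: DRAM cache hit (possibly right after the fault). */
        return (pte->ppn << PAGE_SHIFT) | offset;
    }

    int main(void) {
        pte_t page_table[16] = {0};                      /* tiny illustrative table */
        uint64_t paddr = translate(page_table, 0x1abc);  /* vpn 1, offset 0xabc     */
        printf("physical address: 0x%llx\n", (unsigned long long)paddr);
        return 0;
    }

The point the sketch tries to show is the one from the cards: a set valid bit means the entry's physical page number can be used directly (a DRAM cache hit), while a cleared valid bit forces the kernel to select a victim, bring the page in from disk, and update the entry before the access can complete.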
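
Finally, a small illustration of the locality card: the two functions below read exactly the same matrix elements, but the first walks memory sequentially (good spatial locality, so each fetched cache line is reused), while the second strides across rows (poor locality, so each access tends to touch a different line). The matrix dimensions are arbitrary assumptions.

    #include <stddef.h>

    #define ROWS 1024
    #define COLS 1024

    static int a[ROWS][COLS];

    /* Row-major traversal: consecutive accesses hit consecutive addresses,
     * so the cache line fetched for a[i][j] also serves a[i][j+1], ...    */
    long sum_row_major(void) {
        long sum = 0;
        for (size_t i = 0; i < ROWS; i++)
            for (size_t j = 0; j < COLS; j++)
                sum += a[i][j];
        return sum;
    }

    /* Column-major traversal: successive accesses are COLS * sizeof(int)
     * bytes apart, so each access tends to touch a different cache line. */
    long sum_col_major(void) {
        long sum = 0;
        for (size_t j = 0; j < COLS; j++)
            for (size_t i = 0; i < ROWS; i++)
                sum += a[i][j];
        return sum;
    }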