Memory Management: Mechanisms and policies to efficiently share RAM (a finite resource) among competing processes.
Memory Management: There are many layers of memory management, including the kernel allocating memory for its own use, carving out chunks for small allocations, and satisfying the expectation of contiguous memory.
Memory Management: Contiguous allocation policies include best fit, worst fit, and first fit.
Data structures to track used & unused memory include linked lists and bitmaps.
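The policies above can be sketched in code. Below is a minimal first-fit allocator over a linked list of free holes; the `hole` struct and `first_fit` function are illustrative names, not from the notes, and a real allocator would also unlink holes that shrink to zero.

```c
#include <stddef.h>

/* Hypothetical free-list node: records the start and size of one hole. */
struct hole {
    size_t start;
    size_t size;
    struct hole *next;
};

/* First fit: scan the list and take the first hole large enough.
 * Returns the allocated start address, or (size_t)-1 if nothing fits.
 * Shrinks the chosen hole in place. Best fit and worst fit differ only
 * in scanning the whole list for the smallest or largest viable hole. */
size_t first_fit(struct hole *list, size_t request)
{
    for (struct hole *h = list; h != NULL; h = h->next) {
        if (h->size >= request) {
            size_t addr = h->start;
            h->start += request;
            h->size  -= request;
            return addr;
        }
    }
    return (size_t)-1;  /* no hole large enough */
}
```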
Allocating Memory to a Process: processes have a typical address space layout; Mac OS ≤ 9 used contiguous allocation; virtual addresses provide an abstraction in which each process runs in its own isolated address space and cannot interfere with other processes.
Virtual Addresses: require HW support to map virtual addresses to physical addresses in real time, solve some kernel memory-management challenges, and make life easier for processes (linking).
Base & Limit Registers: add a fixed offset (the base) to each virtual address, store the base & limit values in CPU registers accessible only in kernel mode, and enable the kernel to relocate a process.
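A sketch of base & limit translation, assuming the hardware checks every access against the limit and traps to the kernel on violation; the `relocation` struct and `bl_translate` function are illustrative names.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical base & limit pair: the kernel loads these two registers
 * on each context switch; hardware performs the check on every access. */
struct relocation { uint32_t base, limit; };

/* Translate a virtual address; returns false (a fault) if the address
 * falls outside the process's region. */
bool bl_translate(struct relocation r, uint32_t vaddr, uint32_t *paddr)
{
    if (vaddr >= r.limit)
        return false;          /* would trap to the kernel */
    *paddr = r.base + vaddr;   /* fixed offset added in hardware */
    return true;
}
```

Relocating the process is then just copying its memory and updating `base`, which is why this scheme made moving processes cheap for the kernel.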
Paging: reduces some kernel burden by eliminating the requirement that physical addresses be consecutive, splits all of physical memory into small (typically 4 KB) page frames, and allows arbitrary mapping between pages & page frames.
Pros of Paging include that each process's memory no longer needs to be contiguous in physical memory and that additional memory can be allocated on demand.
Cons of Paging include the need to store the mapping from virtual to physical addresses and the performance cost of translation.
Thrashing in paging occurs when nearly every page fault requires evicting an existing page from main memory, leaving the system unusable.
Demand paging a page back in involves finding an unused page frame, reading the page's data from disk, updating the page table, and resuming program execution.
Paging out involves saving a page frame's data to disk and bringing it back in before its next use.
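The page-out and demand-page-in steps above can be simulated in miniature. This is a toy model, not a kernel interface: the sizes, the round-robin victim choice, and names like `page_fault` and `paging_init` are all illustrative assumptions.

```c
#include <string.h>

/* Toy demand-paging simulation: 4 page frames back 8 virtual pages;
 * evicted pages are written to a simulated "disk" array. */
#define NPAGES  8
#define NFRAMES 4

static int  frame_of[NPAGES];      /* page -> frame, or -1 if on disk */
static int  page_in[NFRAMES];      /* frame -> page, or -1 if free */
static char frames[NFRAMES][16];   /* simulated physical memory */
static char disk[NPAGES][16];      /* simulated backing store */
static int  next_victim;           /* trivial round-robin eviction */

void paging_init(void)
{
    memset(frame_of, -1, sizeof frame_of);
    memset(page_in, -1, sizeof page_in);
    next_victim = 0;
}

/* Handle a fault on `page`: find (or free up) a frame, read the page's
 * data from disk, update the mapping, and return the frame used. */
int page_fault(int page)
{
    int frame = -1;
    for (int f = 0; f < NFRAMES; f++)            /* 1. look for a free frame */
        if (page_in[f] == -1) { frame = f; break; }
    if (frame == -1) {                           /* 2. none free: evict one */
        frame = next_victim;
        next_victim = (next_victim + 1) % NFRAMES;
        int victim = page_in[frame];
        memcpy(disk[victim], frames[frame], 16); /* save victim to disk */
        frame_of[victim] = -1;
    }
    memcpy(frames[frame], disk[page], 16);       /* 3. read data back in */
    frame_of[page] = frame;                      /* 4. update page table */
    page_in[frame] = page;
    return frame;                                /* 5. resume execution */
}
```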
Paging enables more programs to run with the same amount of physical RAM.
One Page Table is maintained per virtual address space.
Unlike paging, the base + limit scheme adds a single fixed offset to every virtual address; paging instead maps each page independently.
Page Tables can be large and are stored in RAM.
A 4 KB (2^12) page size with a 32-bit address space requires 2^20 entries in a page table; assuming 4-byte entries (words), that is 4 MB per page table.
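The arithmetic above can be checked directly; the constant and function names here are illustrative.

```c
#include <stdint.h>

/* Page-table sizing for a 32-bit address space with 4 KB pages:
 * 12 offset bits leave 20 bits of page number, so 2^20 entries;
 * at 4 bytes per entry that is 4 MB per process's page table. */
enum {
    ADDR_BITS   = 32,
    OFFSET_BITS = 12,   /* 4 KB = 2^12 */
    ENTRY_BYTES = 4,
};

uint64_t page_table_entries(void)
{
    return 1ull << (ADDR_BITS - OFFSET_BITS);   /* 2^20 = 1,048,576 */
}

uint64_t page_table_bytes(void)
{
    return page_table_entries() * ENTRY_BYTES;  /* 4 MB */
}
```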
Paging involves splitting all of physical memory into small (typically 4 KB) page frames and splitting the entire virtual address space into pages of the same 4 KB size.
Paging requires hardware support to map virtual addresses into physical addresses in real time.
A Page Table is a data structure that defines the mapping between pages (virtual addresses) and page frames (physical addresses).
Address translation happens for each memory access in paging.
Paging allows arbitrary mappings between pages and page frames.
The offset is copied unchanged from the virtual address to the physical address.
A Translation Lookaside Buffer (TLB) caches translation results for much better performance.
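A tiny sketch of the TLB idea, assuming a direct-mapped cache of 16 entries; real TLBs are associative and managed by hardware, and the names `tlb_lookup` and `tlb_fill` are illustrative.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical direct-mapped TLB: cache the most recent page -> frame
 * translations so most accesses skip the in-RAM page-table walk. */
#define TLB_ENTRIES 16

struct tlb_entry { uint32_t page, frame; bool valid; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* On a hit, return the cached frame; on a miss the caller must walk
 * the page table and then call tlb_fill(). */
bool tlb_lookup(uint32_t page, uint32_t *frame)
{
    struct tlb_entry *e = &tlb[page % TLB_ENTRIES];
    if (e->valid && e->page == page) {
        *frame = e->frame;
        return true;
    }
    return false;
}

void tlb_fill(uint32_t page, uint32_t frame)
{
    struct tlb_entry *e = &tlb[page % TLB_ENTRIES];
    e->page = page;
    e->frame = frame;
    e->valid = true;
}
```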
Processes in paging can run in their own, isolated address spaces without interference from other processes.
Because page tables live in RAM, page table lookups are much slower than register access; 2-level tables require twice as many memory references as 1-level tables.
A virtual address is split into two pieces: 20 bits for the page number and 12 bits for the offset.
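The 20/12 split and the one-level table lookup can be written as a few lines of bit manipulation; `va_to_pa` is an illustrative name, and a real MMU would also check the entry's attribute bits.

```c
#include <stdint.h>

#define OFFSET_BITS 12
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

/* One-level lookup: index the page table with the page number, then
 * copy the offset unchanged into the physical address. */
uint32_t va_to_pa(const uint32_t *page_table, uint32_t vaddr)
{
    uint32_t page   = vaddr >> OFFSET_BITS;   /* top 20 bits */
    uint32_t offset = vaddr & OFFSET_MASK;    /* bottom 12 bits */
    uint32_t frame  = page_table[page];       /* physical frame number */
    return (frame << OFFSET_BITS) | offset;
}
```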
Shared Memory in paging involves pointing virtual addresses in different processes at the same page frame; mmap promotes this sharing, while read (which copies data) does not.
Virtual memory in paging uses slow storage (and the present bit) to save page frame contents and make room for other data.
The CPU uses the 12 unused bits in each page table entry to track page attributes, including present, read, write, execute, and recently accessed.
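One possible layout for those attribute bits is sketched below; the bit positions and `PTE_*` names are illustrative assumptions, since real CPUs define their own encodings.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical page-table entry: the frame number occupies the top 20
 * bits, leaving the low 12 bits free for attribute flags. */
#define PTE_PRESENT  (1u << 0)
#define PTE_READ     (1u << 1)
#define PTE_WRITE    (1u << 2)
#define PTE_EXEC     (1u << 3)
#define PTE_ACCESSED (1u << 4)   /* "recently accessed" */

uint32_t pte_make(uint32_t frame, uint32_t flags)
{
    return (frame << 12) | flags;
}

bool pte_has(uint32_t pte, uint32_t flag)
{
    return (pte & flag) != 0;
}

uint32_t pte_frame(uint32_t pte)
{
    return pte >> 12;
}
```

The present bit is what makes virtual memory work: a cleared present bit turns an access into a page fault, giving the kernel the chance to demand-page the data back in.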