AQA A-Level Computer Science
7.0 Fundamentals of computer organization and architecture
7.3 Memory
Cards (19)
Memory in computer architecture is the storage space where data and instructions are held for immediate use by the processor.
RAM is a volatile memory type.
Cache memory is a small, high-speed memory used to store frequently accessed data.
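The idea of keeping frequently used data in a small, fast store can be sketched in software. The example below is an analogy only, not AQA material: the names SmallCache and slow_lookup and the least-recently-used eviction rule are assumptions chosen for illustration.

```python
from collections import OrderedDict

def slow_lookup(key):
    """Stand-in for a slow data source, e.g. main memory or disk."""
    return key * key  # pretend this value is expensive to fetch

class SmallCache:
    """Tiny software analogy of a hardware cache: small, fast, recent items only."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.store = OrderedDict()  # keeps access order for LRU eviction

    def get(self, key):
        if key in self.store:               # cache hit: answered without the slow source
            self.store.move_to_end(key)
            return self.store[key]
        value = slow_lookup(key)            # cache miss: fetch from the slow source
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used item
        return value

cache = SmallCache()
print([cache.get(k) for k in [1, 2, 1, 3]])  # the second access to 1 is a hit
```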
Match the type of memory with its key characteristics:
RAM ↔️ Volatile, high-speed, read and write
ROM ↔️ Non-volatile, read-only, slower access
Cache ↔️ Very high-speed, small capacity, temporarily stores data
ROM is used to store firmware and boot instructions.
Arrange the memory hierarchy levels in order of increasing speed:
1️⃣ Secondary Storage
2️⃣ Main Memory (RAM)
3️⃣ Cache
4️⃣ Registers
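To make the ordering concrete, the sketch below lists rough, order-of-magnitude access times for each level; the figures are illustrative approximations, not values from the specification.

```python
# Levels of the memory hierarchy, slowest to fastest, with rough access times
# (order-of-magnitude illustrations only; real values vary by hardware)
hierarchy = [
    ("Secondary Storage (SSD/HDD)", "about 0.1 ms to 10 ms"),
    ("Main Memory (RAM)",           "about 100 ns"),
    ("Cache (L1-L3)",               "about 1 ns to 10 ns"),
    ("Registers",                   "under 1 ns"),
]

for level, access_time in hierarchy:
    print(f"{level:30} {access_time}")
```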
L1, L2, and L3 caches are examples of cache memory.
The memory hierarchy balances speed, capacity, and cost.
ROM stores firmware and boot instructions.
The memory hierarchy is structured to balance speed, capacity, and cost.
Match the memory level with its example:
Registers ↔️ Data being actively processed
Cache ↔️ L1, L2, L3 caches
Main Memory (RAM) ↔️ Running programs
Secondary Storage ↔️ SSD, HDD
L1, L2, and L3 caches are located at the cache level of the memory hierarchy.
Secondary storage provides long-term data storage at slower speeds.
Steps in memory addressing:
1️⃣ CPU generates a logical address
2️⃣ Memory controller translates to a physical address
3️⃣ Data is accessed at the physical memory location
Memory addressing involves translating logical addresses generated by the CPU into physical addresses.
A logical address is independent of the hardware memory location.
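A minimal sketch of this translation, assuming a simple paging scheme with a fixed page size and a hypothetical page table; real memory controllers and MMUs are considerably more involved.

```python
PAGE_SIZE = 4096  # assumed page size in bytes (4 KiB)

# Hypothetical page table: logical page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 9}

def translate(logical_address):
    """Translate a CPU-generated logical address into a physical address."""
    page_number = logical_address // PAGE_SIZE   # which logical page it falls in
    offset = logical_address % PAGE_SIZE         # position within that page
    frame_number = page_table[page_number]       # lookup done by the memory controller/MMU
    return frame_number * PAGE_SIZE + offset     # physical memory location

# Logical address 4100 lies in page 1 at offset 4, which maps to frame 2
print(translate(4100))  # 2 * 4096 + 4 = 8196
```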
Virtual memory allows large programs to run with limited physical memory.
Match the memory management technique with its key characteristic:
Segmentation ↔️ Divides memory into variable-sized logical segments
Paging ↔️ Divides memory into fixed-size pages
Swapping ↔️ Temporarily moves inactive memory pages to disk
Virtual Memory ↔️ Provides a large virtual address space
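The sketch below ties these ideas together under simplifying assumptions: fixed-size pages, only two physical frames, and a first-in-first-out choice of which page to swap out. All names (load_page, frames_in_use, and so on) are illustrative, not from the specification.

```python
NUM_FRAMES = 2          # assume physical memory can hold only two pages
page_table = {}         # page number -> frame number, for pages currently in RAM
frames_in_use = []      # pages in RAM, oldest first (used for FIFO replacement)
on_disk = set()         # pages that have been swapped out to secondary storage

def load_page(page):
    """Bring a page into physical memory, swapping an old page out if needed."""
    if page in page_table:
        return f"page {page}: already in frame {page_table[page]} (no page fault)"
    # Page fault: the requested page is not in physical memory
    if len(frames_in_use) < NUM_FRAMES:
        frame = len(frames_in_use)          # a free frame is still available
    else:
        victim = frames_in_use.pop(0)       # swap out the oldest resident page
        frame = page_table.pop(victim)
        on_disk.add(victim)
    page_table[page] = frame
    frames_in_use.append(page)
    return f"page {page}: loaded into frame {frame} (page fault)"

for p in [0, 1, 0, 2]:
    print(load_page(p))
```

Running it shows pages 0 and 1 loading normally, a hit on page 0, and then page 0 being swapped out to disk to make room for page 2.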
Steps in cache optimization:
1️⃣ Utilize cache-friendly coding practices
2️⃣ Minimize cache misses by improving data locality
3️⃣ Anticipate data needs and load into cache beforehand
4️⃣ Reduce latency by preparing data in advance
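Step 2 can be demonstrated with a short, hedged experiment: summing a 2D list row by row follows the order the rows are stored, whereas summing column by column keeps jumping between rows. Python adds interpreter overheads of its own, so the timing gap only hints at the effect of hardware caches and data locality.

```python
import time

N = 2000
matrix = [[1] * N for _ in range(N)]  # N x N grid of numbers

def sum_row_major(m):
    """Cache-friendlier: visits each row's elements in the order they are stored."""
    total = 0
    for row in m:
        for value in row:
            total += value
    return total

def sum_column_major(m):
    """Cache-unfriendlier: jumps between rows on every step, hurting locality."""
    total = 0
    for col in range(N):
        for row in range(N):
            total += m[row][col]
    return total

for fn in (sum_row_major, sum_column_major):
    start = time.perf_counter()
    fn(matrix)
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f} s")
```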