16 System Software

Cards (54)

  • Interpreter's Purpose
    Read, translate, and execute one statement at a time from HLL program into machine code
  • Where are interpreters used?
    Program development - makes debugging easier; each line of code is analysed + checked before execution
  • Describe Interpreters
    - Interpreted programs start executing immediately, but may run more slowly than compiled programs because each line is translated before being executed
    - No executable file produced
    - Program is re-interpreted each time it is run
    - The whole program is not fully translated; only the lines of code that are actually called are translated
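The translate-then-execute cycle can be sketched as a loop over the statements of a toy language (the `LET`/`ADD`/`PRINT` syntax is made up purely for illustration). Note that the bad-statement error is only found when that line is reached, which is why interpreters suit debugging during development:

```python
# Minimal sketch of a line-by-line interpreter for a hypothetical toy
# language: "LET x 5", "ADD x 3", "PRINT x".
def interpret(source: str) -> list[int]:
    variables: dict[str, int] = {}
    output: list[int] = []
    for line in source.splitlines():       # one statement at a time
        parts = line.split()
        if not parts:
            continue                       # skip blank lines
        op = parts[0]
        if op == "LET":                    # LET name value
            variables[parts[1]] = int(parts[2])
        elif op == "ADD":                  # ADD name value
            variables[parts[1]] += int(parts[2])
        elif op == "PRINT":                # PRINT name
            output.append(variables[parts[1]])
        else:                              # error found only when the line runs
            raise SyntaxError(f"unknown statement: {line}")
    return output

print(interpret("LET x 5\nADD x 3\nPRINT x"))  # [8]
```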
  • Analysis-Synthesis Model
    - Simplified abstraction of interpreter and compiler
    - The compiler has additional back-end elements that produce object code for the compiled executable
  • Describe Compilers
    - Translates entire program written in HLL into machine code
    - Once whole program has been analysed, error log is created listing all errors found
    - If no errors are found, object code is produced which can either be executed/stored for future use
    - Any changes to program requires whole program to be re-compiled
  • Name the stages of compilation
    1. Lexical Analysis
    2. Syntax Analysis
    3. Code Generation
    4. Optimisation
  • Lexical Analysis
    1. Comments/unnecessary spaces are removed
    2. Keywords, constants and identifiers are replaced by tokens
    3. Keyword and Symbol Tables are made
    4. Keyword Tables contain reserved words for a programming language
    5. Symbol tables store the names and addresses of all variables, constants, and arrays. A new table is made for each program
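The steps above can be sketched as a small tokeniser; the keyword set, token names, and symbol-table layout are illustrative, not from any real compiler:

```python
import re

# Sketch of lexical analysis: comments and unnecessary spaces are removed,
# and each lexeme is replaced by a (token_type, value) token. Identifiers
# are replaced by their index in the symbol table.
KEYWORDS = {"IF", "THEN", "ENDIF", "DECLARE"}   # keyword table: reserved words

def tokenise(line: str):
    line = line.split("//")[0]                  # strip comments
    tokens = []
    symbol_table: dict[str, int] = {}           # name -> table entry index
    for lexeme in re.findall(r"[A-Za-z_]\w*|\d+|[<>=+\-*/]", line):
        if lexeme in KEYWORDS:
            tokens.append(("KEYWORD", lexeme))
        elif lexeme.isdigit():
            tokens.append(("CONSTANT", int(lexeme)))
        elif re.fullmatch(r"[<>=+\-*/]", lexeme):
            tokens.append(("SYMBOL", lexeme))
        else:                                   # identifier: add to symbol table
            index = symbol_table.setdefault(lexeme, len(symbol_table))
            tokens.append(("IDENTIFIER", index))
    return tokens, symbol_table

tokens, table = tokenise("IF count > 9 THEN  // check limit")
```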
  • Syntax Analysis
    1. Tokens are checked using the Keyword and Symbol tables to check that they match the syntax of the programming language
    2. If syntax errors are found, error messages are produced
    3. Parsing occurs - program constructs are analysed
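Parsing can be sketched as a recursive-descent check of a token stream against a toy grammar (the grammar below is made up for illustration): `expr ::= term { ("+" | "-") term }`, `term ::= NUMBER | IDENTIFIER`.

```python
# Sketch of syntax analysis by recursive descent: program constructs are
# analysed and a syntax error is reported if the tokens do not fit the rules.
def parse_expr(tokens: list[str]) -> bool:
    pos = 0

    def term() -> bool:
        nonlocal pos
        if pos < len(tokens) and (tokens[pos].isdigit() or tokens[pos].isidentifier()):
            pos += 1
            return True
        return False                       # syntax error: expected a term

    if not term():
        return False
    while pos < len(tokens) and tokens[pos] in "+-":
        pos += 1
        if not term():
            return False                   # operator with no right-hand term
    return pos == len(tokens)              # all tokens must be consumed

print(parse_expr(["x", "+", "3"]))   # True
print(parse_expr(["x", "+"]))        # False
```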
  • Code Generation
    - Machine code is generated, performing task defined in source code
    - Object code is in binary
    - Machine code can now be executed by the CPU

    - Object code can also be produced in an intermediate form, fully translated only when the program is loaded; this is more flexible
    - Intermediate code supports use of relocatable code (can be stored anywhere in memory)
    - Library routines can be added to code to reduce size of stored object program
    - Several programs can be linked to be run together
  • Code Optimisation
    - Makes program more efficient
    - Uses fewer resources - eg. Time, storage space, memory, CPU usage
    - Can take place after syntax analysis/during code generation
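One common optimisation, constant folding, can be sketched on a made-up expression tree: sub-expressions built only from constants are evaluated once at compile time, saving time every run. Expressions here are illustrative nested tuples `("+", left, right)` or a plain number/variable name:

```python
# Sketch of constant folding, one way a compiler makes a program use fewer
# resources: constant sub-expressions are computed once, at compile time.
def fold(expr):
    if not isinstance(expr, tuple):
        return expr                             # constant or variable name
    op, left, right = expr
    left, right = fold(left), fold(right)       # fold sub-expressions first
    if isinstance(left, int) and isinstance(right, int):
        return {"+": left + right, "*": left * right}[op]  # fold now
    return (op, left, right)                    # still depends on a variable

print(fold(("+", ("*", 2, 3), "x")))   # ('+', 6, 'x')
```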
  • Program v Process
    Program - executable file containing a set of instructions/written code. Static. Stored on hard disk.

    Process - Program currently being executed. Dynamic. Stored in memory.
  • When a stored program is run/executed, it becomes a

    Process
  • When a process is finished, it reverts to being a

    Program
  • Multitasking
    - OS executes multiple processes at the same time
    - Scheduling makes the tasks appear to happen simultaneously; in reality, scheduling algorithms interleave (overlap) their execution
  • Pre-emptive Scheduling

    Using criteria to allocate a quantum to any one task before checking to see if another task needs a turn to use the CPU
  • Non-Pre-Emptive Scheduling
    Processes keep control of the processor until they become idle/logically blocked
  • Pre-Emptive Vs Non-Pre-Emptive
    1. PE allocates processes limited time to CPU, but NPE allocates CPU to process until it terminates/switches to waiting
    2. Processes can be interrupted when higher priority tasks arrive in PE, but in NPE this does not happen
    3. PE includes the overhead of switching between processes + maintaining the ready queue; NPE does not incur this overhead
    4. Lower priority processes may starve in PE if high priority processes arrive frequently. Processes with larger burst times may cause those with small burst times to starve in NPE
    5. PE has flexibility by allowing critical processes to access CPU regardless of arrival order. NPE is rigid - critical processes cannot interrupt the current one.
    6. PE must maintain the integrity of any shared data; NPE has no such cost associated with it.
  • HL Scheduling
    Manages which processes are sent from storage to ready queue/memory
  • LL Scheduling
    Manages which memory/ready queue processes should be sent to the CPU
  • Describe LL Scheduler
    Once processes are in memory/ready queue:
    1. LLS takes control, taking responsibility of selecting and running processes
    2. LLS looks at the order of processes in the ready queue imposed by the HLS
    3. LLS checks if any required resources are available for the most important process in the queue
    4. Process is then sent to the CPU, taking it from the ready queue and putting it into the running state
    5. Process remains in running state unless an interrupt happens/process finished/process requires unavailable resources
    6. LLS then selects next process to send to the CPU
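The steps above can be sketched as a loop; the process objects, the ready-queue ordering, and the resource check are all simplified assumptions:

```python
from collections import deque

# Sketch of a low-level scheduler: take the next process from the ready
# queue (in the order imposed by the HLS), check its resources, and move
# it into the running state.
def low_level_scheduler(ready_queue: deque, resources_free) -> list:
    run_order = []
    while ready_queue:
        process = ready_queue.popleft()          # next process in queue order
        if not resources_free(process):          # required resources available?
            ready_queue.append(process)          # not yet: try again later
            continue
        run_order.append(process)                # ready -> running state
        # ... process runs until an interrupt, completion, or a blocked resource
    return run_order

queue = deque(["P1", "P2", "P3"])
print(low_level_scheduler(queue, lambda p: True))  # ['P1', 'P2', 'P3']
```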
  • Conditions for Running State to Ready State
    1. A process is executing during its quantum
    2. When quantum is completed, an interrupt occurs and the program is moved to the ready queue even if it is not finished
  • Conditions for Ready State to Running State
    1. Process is capable of using the CPU
    2. LLS allocates CPU time to process so that it can execute
  • Conditions for Running State to Blocked State
    1. Process is executing and needs to carry out an I/O operation
    2. LLS places process into blocked queue until I/O operation is done
  • Conditions for Blocked State to Ready State
    1. Process is waiting for an I/O resource
    2. I/O operation is completed and process is capable of further processing
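The four transitions above can be sketched as a table of allowed moves between process states; the event names are illustrative:

```python
# Process state transitions as a lookup table: (state, event) -> new state.
TRANSITIONS = {
    ("running", "quantum_expired"): "ready",    # interrupt at end of time slice
    ("ready",   "dispatched"):      "running",  # LLS allocates CPU time
    ("running", "io_request"):      "blocked",  # process must wait for I/O
    ("blocked", "io_complete"):     "ready",    # I/O done, can run again
}

def next_state(state: str, event: str) -> str:
    new_state = TRANSITIONS.get((state, event))
    if new_state is None:
        raise ValueError(f"illegal transition: {event} while {state}")
    return new_state

print(next_state("running", "io_request"))   # blocked
```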
  • Process Control Block
    - Data structure storing required info needed to run a process
    - Stores current process state, process privileges, register values, process priority and scheduling info, burst time, and a unique process ID
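The PCB can be sketched as a record type; the field names follow the card, while the defaults and types are illustrative:

```python
from dataclasses import dataclass, field

# Sketch of a Process Control Block: the data structure holding the info
# the OS needs to run (and later resume) a process.
@dataclass
class ProcessControlBlock:
    process_id: int                      # unique process ID
    state: str = "ready"                 # current process state
    priority: int = 0                    # scheduling priority
    burst_time: int = 0                  # CPU time required
    privileges: list = field(default_factory=list)
    registers: dict = field(default_factory=dict)  # saved register values

pcb = ProcessControlBlock(process_id=42, priority=5)
print(pcb.state)   # ready
```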
  • Hardware Interrupts

    Generated by hardware devices to signal that they need attention. Examples:
    1. Data has been received
    2. A previously requested task has been completed
  • Software Interrupts
    Generated by programs requesting a system call to be performed by the OS.
    - Can be triggered unexpectedly by program execution errors (traps/exceptions)
  • Describe what an OS Kernel does when an interrupt is received
    1. OS Kernel consults Interrupt Dispatch Table linking a device with an interrupt routine
    2. IDT supplies address of low level routine to handle the interrupt
    3. OS Kernel saves state of the interrupted process on the kernel stack. Process state will be restored once interrupt is serviced
    4. Interrupts are prioritised using Interrupt Priority Levels. A process is only suspended if the interrupt's IPL > that of the current task
    5. If the interrupt's IPL ≤ that of the current task, the interrupt is stored in the interrupt register and serviced only when the current IPL falls below it
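The kernel's handling can be sketched as a dispatch table mapping an interrupt source to its routine, with the IPL check deciding whether to service or hold the interrupt. The device names, handlers, and priority numbers are illustrative:

```python
# Sketch of interrupt dispatch: the table links a device to a low-level
# routine, and an interrupt only pre-empts the current task if its IPL
# is higher; otherwise it waits in the interrupt register.
interrupt_dispatch_table = {
    "keyboard": lambda: "read keypress",
    "disk":     lambda: "transfer block",
}
pending = []                                   # interrupt register (simplified)

def on_interrupt(source: str, ipl: int, current_ipl: int):
    if ipl <= current_ipl:                     # lower priority: hold for later
        pending.append((ipl, source))
        return None
    # (the kernel would save the interrupted process state here)
    return interrupt_dispatch_table[source]()  # run the low-level routine

print(on_interrupt("disk", ipl=5, current_ipl=3))      # transfer block
print(on_interrupt("keyboard", ipl=2, current_ipl=3))  # None (held)
```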
  • Burst Time
    Total CPU time required by the process
  • Arrival Time
    Time when the process has arrived in the ready queue
  • Completion Time
    Time when the process completes execution
  • Turnaround Time
    Duration from the arrival of a process to its completion (Completion Time - Arrival Time)
  • Waiting Time
    Time spent by a process waiting in the ready queue
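The definitions above can be checked with one worked example (the numbers are illustrative); waiting time also follows as turnaround time minus burst time:

```python
# One process's timing metrics (illustrative values):
arrival_time    = 2    # joins the ready queue at t = 2
burst_time      = 5    # needs 5 units of CPU time in total
completion_time = 10   # finishes executing at t = 10

turnaround_time = completion_time - arrival_time   # 10 - 2 = 8
waiting_time    = turnaround_time - burst_time     # 8 - 5 = 3 (in the queue)
print(turnaround_time, waiting_time)   # 8 3
```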
  • First Come First Served
    - NPE
    - Processes are completed in the order they arrive in the ready queue

    Benefit: Simplest scheduling algorithm
    Drawback:
    - Long avg waiting time. Causes convoy effect (whole OS slows down due to few slow processes)
    - Lower CPU utilization and efficiency
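FCFS can be sketched with a few illustrative `(name, arrival_time, burst_time)` tuples; the example data shows the convoy effect, where one slow early process inflates everyone else's waiting time:

```python
# FCFS sketch: processes run to completion (non-pre-emptive) strictly in
# the order they arrive in the ready queue.
def fcfs(processes):
    processes = sorted(processes, key=lambda p: p[1])   # order of arrival
    clock, results = 0, {}
    for name, arrival, burst in processes:
        start = max(clock, arrival)          # CPU may sit idle until arrival
        clock = start + burst                # runs to completion
        results[name] = {"waiting": start - arrival,
                         "turnaround": clock - arrival}
    return results

# One slow early process delays everything behind it (convoy effect):
print(fcfs([("P1", 0, 10), ("P2", 1, 2), ("P3", 2, 2)]))
```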
  • Shortest Job First
    - NPE
    - LLS looks at ready queue and selects process with the shortest burst time

    Benefit: Lowest waiting time for short processes, reducing avg waiting time
    Drawback: Long processes may never be processed by the system, remaining in queue + causing starvation
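SJF can be sketched the same way; among the processes that have already arrived, the one with the shortest burst time is picked next. The data is illustrative:

```python
# SJF sketch (non-pre-emptive): select the shortest burst time from the
# processes currently in the ready queue; each runs to completion.
def sjf(processes):
    remaining = sorted(processes, key=lambda p: p[1])   # by arrival time
    clock, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                        # CPU idle until next arrival
            clock = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2]) # shortest burst time first
        remaining.remove(job)
        order.append(job[0])
        clock += job[2]                      # runs to completion
    return order

print(sjf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 1)]))  # ['P1', 'P3', 'P2']
```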
  • Shortest Time Remaining First
    - PE
    - LLS looks at ready queue and selects process with shortest burst time. If a process with a shorter burst time arrives, it replaces the currently executing process.

    Benefit: Faster processing of jobs than SJF
    Disadvantage:
    - Context switching happens more frequently than in SJF
    - More CPU time is consumed by the switching itself
    - Higher overhead costs
  • Round Robin
    - Each process runs for a set quantum of time before being returned to the ready queue so the next process can run
    - Processes take turns in order until they are complete

    Benefit: Every process gets an equal share of the CPU - no starvation

    Drawback: Setting quantum too short increases overhead, lowering CPU efficiency. Setting quantum too long causes poor response to short processes.
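Round robin can be sketched with illustrative `(name, burst_time)` pairs and a quantum of 2; an unfinished process rejoins the back of the queue, so every process keeps getting turns:

```python
from collections import deque

# Round robin sketch: each process gets the CPU for one quantum, then goes
# to the back of the ready queue if it still has work left.
def round_robin(processes, quantum=2):
    queue = deque(processes)
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                     # gets the CPU for one quantum
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))    # unfinished: back of the queue
    return order

print(round_robin([("P1", 5), ("P2", 2), ("P3", 3)]))
# ['P1', 'P2', 'P3', 'P1', 'P3', 'P1']
```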
  • Paging
    - Main memory is broken down into physical memory blocks (frames)
    - A process's logical memory is broken down into logical memory blocks (pages)
    - Pages and frames are set to the same size
    - Processes are allocated a number of pages, slightly more than required
    - Pages and frames are mapped, logically associated with each other to allow for their contents to be copied backwards and forwards
    - Each process has its own page table which maps logical addresses to physical addresses
  • Mapping of Pages to Frames is done by the
    Memory Management Unit
  • What is stored in the page table?
    1. Page Number
    2. Presence Flag - indicates whether the page is in memory.
    3. Page Frame Address - Physical address.
    4. Time Of Entry/Number of times page has been accessed
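A page-table lookup can be sketched as splitting a logical address into a page number and an offset, then mapping the page to a frame if its presence flag is set. The page size and table contents are illustrative:

```python
# Sketch of logical-to-physical address translation via a page table.
PAGE_SIZE = 1024

page_table = {                      # page number -> (presence flag, frame)
    0: (True, 5),                   # page 0 is in memory, in frame 5
    1: (False, None),               # page 1 is on disk, not in memory
}

def translate(logical_address: int) -> int:
    page, offset = divmod(logical_address, PAGE_SIZE)
    present, frame = page_table[page]
    if not present:                 # page fault: page must be fetched first
        raise RuntimeError("page fault: page not in memory")
    return frame * PAGE_SIZE + offset   # physical address

print(translate(100))   # frame 5 * 1024 + offset 100 = 5220
```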