2.3.1

Cards (24)

  • Complexity
    A measure of how the time, memory space or other resources needed by an algorithm increase as the size of the data it works on increases.
  • Big-O Notation
    Mathematical notation used to describe the worst-case, average-case and best-case performance of an algorithm. It provides a way of identifying which algorithm has the best time complexity and which has the best space complexity.
  • Constant Complexity
    The time taken for an algorithm to run is independent of the number of items in the data set. For example, printing the first letter in a string takes the same amount of time regardless of the length of the string.
  • Linear Complexity
    Time taken for an algorithm to run fully increases at the same rate as the number of items in the data set. For example, doubling the number of items in a data set will double the time taken to find the largest number.
  • Polynomial Complexity
    Time taken for an algorithm to run fully increases in proportion to n raised to the power of a constant, for example n².
  • Exponential Complexity
    Time taken to run an algorithm completely is proportional to a constant raised to the power of n. This does not scale well: at larger values of n the algorithm takes exponentially longer to complete.
  • Logarithmic Complexity
    Time taken for an algorithm to run completely is proportional to a logarithmic function of n. This is the best-scaling complexity listed here, as the time taken grows very slowly even at increasingly large values of n.
  • Bubble Sort
    Moves through the data set repeatedly in a linear fashion, comparing adjacent items and swapping them if the first is larger than the second. Intuitive and easy to program but inefficient. Best Case: Linear (already-sorted data, with an early exit when no swaps occur). Average Case: Polynomial. Worst Case: Polynomial.
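    A minimal Python sketch of the pass-and-swap idea above (the function name and the early-exit flag are illustrative, not from the source):

    ```python
    def bubble_sort(items):
        data = list(items)              # work on a copy
        n = len(data)
        for i in range(n - 1):
            swapped = False
            # each pass bubbles the largest remaining item to the end
            for j in range(n - 1 - i):
                if data[j] > data[j + 1]:
                    data[j], data[j + 1] = data[j + 1], data[j]
                    swapped = True
            if not swapped:             # no swaps: already sorted, exit early
                break
        return data
    ```

    The `swapped` flag is what gives the linear best case on already-sorted data.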
  • Insertion Sort
    Divides a data set into two parts: sorted and unsorted. Elements are inserted into their correct positions in the sorted part as they are reached in the unsorted part. Simple but inefficient. Best Case: Linear. Average Case: Polynomial. Worst Case: Polynomial.
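    A short sketch of the insert-into-the-sorted-part idea, assuming an in-place version where the sorted part is the front of the list:

    ```python
    def insertion_sort(items):
        data = list(items)              # work on a copy
        for i in range(1, len(data)):
            key = data[i]               # next item from the unsorted part
            j = i - 1
            # shift larger items of the sorted part one place right
            while j >= 0 and data[j] > key:
                data[j + 1] = data[j]
                j -= 1
            data[j + 1] = key           # insert into its correct position
        return data
    ```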
  • Merge Sort
    Splits a data set n items long into n data sets each one item long. These are merged into ordered data sets two items long, then ordered data sets four items long, and so on until one ordered data set remains. The recursive function uses more memory. Fast and efficient with large values of n but harder to program. Best Case: n log n. Average Case: n log n. Worst Case: n log n.
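    The split-then-merge process above can be sketched recursively (a minimal illustration, not a production implementation):

    ```python
    def merge_sort(data):
        if len(data) <= 1:              # a one-item data set is already ordered
            return list(data)
        mid = len(data) // 2
        left = merge_sort(data[:mid])   # recursively sort each half
        right = merge_sort(data[mid:])
        # merge the two ordered halves into one ordered data set
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged
    ```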
  • Quick Sort
    Picks an item as a "pivot" and creates two lists on either side of it: items larger than the pivot go to its right, while items smaller than the pivot go to its left. A new pivot is then selected within each sub-list and the process repeats until every item is in order. Uses recursion, so it is harder to program. Very quick for large data sets, but the initial arrangement of the data affects performance. Best Case: n log n. Average Case: n log n. Worst Case: Polynomial.
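    A compact sketch of the pivot-and-partition idea (this copying version trades the usual in-place swaps for clarity; the middle-element pivot choice is an assumption):

    ```python
    def quick_sort(data):
        if len(data) <= 1:
            return list(data)
        pivot = data[len(data) // 2]
        lower = [x for x in data if x < pivot]    # items moved left of the pivot
        equal = [x for x in data if x == pivot]
        higher = [x for x in data if x > pivot]   # items kept right of the pivot
        # recursively sort each side, then join around the pivot
        return quick_sort(lower) + equal + quick_sort(higher)
    ```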
  • A* Algorithm
    A* Search improves on Dijkstra's algorithm by estimating the distance to the final node using a heuristic, finding the shortest path faster. It combines the distance from the start node and the heuristic estimate to the end node to choose the next node. It explores adjoining nodes but may backtrack if needed for a better path.
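    The distance-so-far plus heuristic-estimate idea can be sketched as follows. The graph/coordinate representation and the Manhattan-distance heuristic are assumptions for illustration:

    ```python
    import heapq

    def a_star(graph, coords, start, end):
        # graph: node -> list of (neighbour, weight); coords: node -> (x, y)
        def heuristic(node):
            # Manhattan-distance estimate from this node to the end node
            (x1, y1), (x2, y2) = coords[node], coords[end]
            return abs(x1 - x2) + abs(y1 - y2)

        g = {start: 0}                        # shortest known distance from start
        queue = [(heuristic(start), start)]   # priority = g + heuristic
        while queue:
            _, node = heapq.heappop(queue)
            if node == end:
                return g[node]
            for neighbour, weight in graph.get(node, []):
                new_g = g[node] + weight
                if new_g < g.get(neighbour, float("inf")):
                    g[neighbour] = new_g
                    heapq.heappush(queue, (new_g + heuristic(neighbour), neighbour))
        return None                           # no path exists
    ```

    With a heuristic of zero everywhere this reduces to Dijkstra's algorithm, which is the sense in which A* "improves" on it.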
  • Dijkstra's Shortest Path Algorithm
    Finds the shortest path between two nodes on a graph. It works by keeping track of the shortest distance to each node from the starting node, and continues until it has found the destination node.
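    A minimal sketch of keeping track of the shortest distance to each node, assuming a priority queue and an adjacency-list graph (both implementation choices, not from the source):

    ```python
    import heapq

    def dijkstra(graph, start, end):
        # graph: dict mapping node -> list of (neighbour, weight) pairs
        dist = {start: 0}                 # shortest known distance to each node
        queue = [(0, start)]
        while queue:
            d, node = heapq.heappop(queue)
            if node == end:
                return d                  # destination reached
            if d > dist.get(node, float("inf")):
                continue                  # stale queue entry, skip
            for neighbour, weight in graph.get(node, []):
                new_d = d + weight
                if new_d < dist.get(neighbour, float("inf")):
                    dist[neighbour] = new_d
                    heapq.heappush(queue, (new_d, neighbour))
        return None                       # no path exists
    ```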
  • Binary Search
    Binary search halves the search space by comparing the target value with the middle element of a sorted array and recursively searching the left or right half until finding the target or exhausting the array.
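    The halving step above can be written iteratively rather than recursively (a minimal sketch; returning -1 for a missing target is a convention, not from the source):

    ```python
    def binary_search(data, target):
        # data must already be sorted
        low, high = 0, len(data) - 1
        while low <= high:
            mid = (low + high) // 2
            if data[mid] == target:
                return mid              # found: return its index
            elif data[mid] < target:
                low = mid + 1           # search the right half
            else:
                high = mid - 1          # search the left half
        return -1                       # search space exhausted
    ```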
  • Linear Search
    Linear search checks each element in a list sequentially until finding the target or exhausting the search space.
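    For contrast with binary search, the sequential check is a single loop (again, returning -1 for a missing target is just a convention):

    ```python
    def linear_search(data, target):
        # check every element in order; no sorting required
        for index, item in enumerate(data):
            if item == target:
                return index
        return -1                       # search space exhausted
    ```

    Unlike binary search, this works on unsorted data, at the cost of linear rather than logarithmic time.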
  • Queues
    Managing tasks in a multitasking environment, such as an OS scheduler. Queues follow the FIFO (first in, first out) principle, making them suitable for scenarios where tasks must be processed in the order that they arrive.
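    A one-screen sketch of FIFO behaviour using Python's double-ended queue (the task names are placeholders):

    ```python
    from collections import deque

    tasks = deque()
    tasks.append("task1")    # enqueue at the rear
    tasks.append("task2")
    first = tasks.popleft()  # dequeue from the front: the oldest task leaves first
    ```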
  • Stacks
    Function calls and recursion tracking in programming languages. Stacks follow the LIFO (last in, first out) principle, making them suitable for tracking nested function calls and local variable storage. Stacks can also be used for undo/redo operations in various applications.
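    A Python list works as a stack directly, which makes the LIFO undo example concrete (the action strings are placeholders):

    ```python
    undo = []
    undo.append("typed 'a'")   # push an action
    undo.append("typed 'b'")
    last = undo.pop()          # pop: the most recent action is undone first
    ```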
  • Trees
    Representing hierarchical data structures like file systems or organisational charts. Binary Search Trees are well suited to efficient searching and sorting of data.
  • Graphs
    Modelling relationships between objects, such as social networks or network routing. Shortest-path algorithms are used to find the shortest distance between two nodes in a graph.
  • Binary Trees
    Each node has at most two child nodes. Support efficient sorting and searching algorithms. Used to organise data sets, and expression trees are used to represent mathematical expressions.
  • Multi-branch Trees
    Each node can have more than two child nodes, making the representation of hierarchical relationships more flexible. Used for file systems, where directories contain multiple files and subdirectories.
  • Tree Traversal: How?
    Depth-first, Breadth-first, Pre-Order(NLR), In-Order(LNR), Post-Order(LRN)
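    One of the orders above, in-order (LNR), can be sketched on a simple node class (the class and callback are assumptions for illustration; on a Binary Search Tree this visits values in ascending order):

    ```python
    class Node:
        def __init__(self, value, left=None, right=None):
            self.value, self.left, self.right = value, left, right

    def in_order(node, visit):
        # LNR: left subtree, then the node itself, then the right subtree
        if node is None:
            return
        in_order(node.left, visit)
        visit(node.value)
        in_order(node.right, visit)
    ```

    Pre-order (NLR) and post-order (LRN) are the same recursion with the `visit` call moved before or after the two recursive calls.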
  • Tree Traversal: Why?
    Search for a specific node in the tree. Systematically perform an operation at each node of the tree. Navigate the hierarchy of file systems or organisational charts systematically to find specific nodes.
  • Linked Lists
    Stores data in a data set non-contiguously and uses pointers to link the data items together.
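    The pointer-linking idea can be shown with a minimal node class (the class name and the traversal helper are illustrative):

    ```python
    class ListNode:
        def __init__(self, value):
            self.value = value
            self.next = None            # pointer to the next node, or None at the end

    def to_list(head):
        # follow the pointers from the head node, collecting each value
        out = []
        while head is not None:
            out.append(head.value)
            head = head.next
        return out
    ```

    Because the nodes need not be contiguous in memory, inserting an item only means rewiring the `next` pointers rather than shifting other items.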