Running Time
The amount of time an algorithm takes to execute and solve a problem
Growth Rate
Describes how an algorithm's running time or space requirements increase as the size of the input grows
Formal Definition of Big O Notation
We say that f(n) is in O(g(n)) if there exist constants c > 0 and n0 >= 1 such that f(n) <= c * g(n) for all n >= n0
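For example, f(n) = 3n + 5 is in O(n): taking c = 4 and n0 = 5, we have 3n + 5 <= 4n for all n >= 5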
Asymptotic Behavior
Describes the behaviour of an algorithm as the input size approaches infinity, focusing on the dominant term in the time or space complexity
Simplifying Functions with Big O Notation
Involves dropping constant factors and lower-order terms so that the expression reflects only the dominant term, which is what Big O notation captures
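For example, 3n^2 + 5n + 7 simplifies to O(n^2): the constant factor 3 and the lower-order terms 5n and 7 do not affect the asymptotic growth rate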
Complexity Classes
Constant time (O(1))
Logarithmic time (O(log n))
Linear time (O(n))
etc.
Complexity with Recursion
Analyzing time or space complexity of recursive algorithms, often involving solving recurrence relations
Complexity with Loop Variables Halving
Analyzing algorithms where the loop variable is halved in each iteration, common in logarithmic time complexities like binary search
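A classic example is binary search on a sorted array; the sketch below (a minimal version, assuming an int array sorted in ascending order) halves the search range each iteration, so the loop runs O(log n) times:
    static int binarySearch(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {                        // the range [lo, hi] halves each pass
            int mid = lo + (hi - lo) / 2;         // written this way to avoid int overflow
            if (a[mid] == target) return mid;
            else if (a[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;                                // target not present
    }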
Guessing the Complexity from a Program's Runtime
Estimating the time complexity of an algorithm by observing its runtime behaviour on different input sizes
Ω(n) and Θ(n) (spoken: Omega and Theta)
Ω(n) represents a lower bound on an algorithm's time or space complexity, while Θ(n) indicates a tight bound, i.e. matching upper and lower bounds
Brute Force Algorithms
Solving a problem through an exhaustive search, trying all possible solutions without employing any specific optimisation strategy
Backtracking
A problem-solving strategy that explores all possible solutions and backtracks when it determines that a particular solution cannot be extended to a valid solution
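As an illustrative sketch (N-Queens, a standard backtracking example not taken from this section), the code below places one queen per row and backtracks as soon as a partial placement cannot be extended:
    public class NQueens {
        // cols[r] = column of the queen in row r; returns the number of solutions
        static int solve(int[] cols, int row) {
            int n = cols.length;
            if (row == n) return 1;                   // all queens placed: one valid solution
            int count = 0;
            for (int c = 0; c < n; c++) {
                if (safe(cols, row, c)) {             // prune placements that cannot be extended
                    cols[row] = c;                    // choose a column for this row
                    count += solve(cols, row + 1);    // explore; returning here is the backtrack
                }
            }
            return count;
        }
        // a placement is safe if no earlier queen shares the column or a diagonal
        static boolean safe(int[] cols, int row, int c) {
            for (int r = 0; r < row; r++)
                if (cols[r] == c || Math.abs(cols[r] - c) == row - r) return false;
            return true;
        }
        public static void main(String[] args) {
            System.out.println(solve(new int[8], 0)); // prints 92 for an 8x8 board
        }
    }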
Divide and Conquer Strategy
Problem-solving technique involving breaking down a problem into smaller sub-problems, solving them independently, and combining the solutions to solve the original problem
Recurrence Relations
Mathematical equations that describe the time complexity of recursive algorithms by relating the time complexity of a problem to the time complexity of its sub-problems
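For example, merge sort splits the input in half and merges the sorted halves in linear time, giving the recurrence T(n) = 2T(n/2) + O(n), which solves to O(n log n)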
Master Theorem
A mathematical tool for analyzing the time complexity of divide and conquer algorithms, providing a solution for recurrence relations of a specific form
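In its common form, the theorem covers recurrences T(n) = a * T(n/b) + f(n) with a >= 1 and b > 1, comparing f(n) against n^(log_b a): if f(n) grows polynomially slower, T(n) = Θ(n^(log_b a)); if f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) * log n); and if f(n) grows polynomially faster (and satisfies a regularity condition), T(n) = Θ(f(n)). For merge sort, a = 2, b = 2 and f(n) = Θ(n), so the second case gives Θ(n log n)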
Subset Sum Problem
A classic problem in computer science where the goal is to find a subset of a given set of integers that sums up to a target value
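A minimal brute-force sketch: each element is either included or excluded, so all 2^n subsets are tried in the worst case:
    // Returns true if some subset of a[i..] sums to target (O(2^n) time)
    static boolean subsetSum(int[] a, int i, int target) {
        if (target == 0) return true;              // the elements chosen so far sum to target
        if (i == a.length) return false;           // no elements left to choose from
        return subsetSum(a, i + 1, target - a[i])  // include a[i]
            || subsetSum(a, i + 1, target);        // exclude a[i]
    }
For example, subsetSum(new int[]{3, 34, 4, 12, 5, 2}, 0, 9) returns true because 4 + 5 = 9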
Subproblem Configuration
The arrangement or setup of smaller, more manageable parts of a larger problem that can be solved independently and then combined to solve the overall problem
Maps ADT (Abstract Data Type)
A data structure that stores key-value pairs and allows efficient retrieval, insertion, and deletion of elements based on their keys
Hash Table
A data structure that implements an associative array abstract data type, where keys are mapped to values using a hash function for efficient retrieval
Hashing
The process of converting a given input (or 'key') into a fixed-size value (hash code) that represents the input and is typically used for indexing or comparison
Hash Code
A fixed-size value produced by applying a hash function to a given input (or 'key'). In Java, the hashCode() method returns a 32-bit integer
Hash Function
A function used to map data of arbitrary size to data of fixed size (hash code), typically used in hash tables for indexing and retrieving data
Deterministic
Refers to a property of functions or processes where, given the same input, they always produce the same output, ensuring predictability and consistency
Collision
Occurs in hashing when two or more different inputs produce the same hash code, leading to a conflict in the hash table
Collision Resolution
Techniques used to handle collisions that arise in hash tables, ensuring that every key-value pair can still be stored and retrieved correctly
Separate Chaining
A collision resolution technique where each bucket (slot) in a hash table contains a linked list of elements that hash to the same location, allowing multiple entries with the same hash code to coexist
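A minimal sketch of separate chaining (a hypothetical ChainedMap class with a fixed number of buckets; a production table would also resize):
    import java.util.LinkedList;

    public class ChainedMap<K, V> {
        private static class Entry<K, V> {
            final K key; V value;
            Entry(K k, V v) { key = k; value = v; }
        }
        private final LinkedList<Entry<K, V>>[] buckets;

        @SuppressWarnings("unchecked")
        public ChainedMap(int capacity) {
            buckets = new LinkedList[capacity];
            for (int i = 0; i < capacity; i++) buckets[i] = new LinkedList<>();
        }
        private int index(K key) {
            return Math.floorMod(key.hashCode(), buckets.length);   // compress to a bucket index
        }
        public void put(K key, V value) {
            for (Entry<K, V> e : buckets[index(key)])
                if (e.key.equals(key)) { e.value = value; return; } // key exists: update in place
            buckets[index(key)].add(new Entry<>(key, value));       // otherwise append to the chain
        }
        public V get(K key) {
            for (Entry<K, V> e : buckets[index(key)])               // scan only this bucket's chain
                if (e.key.equals(key)) return e.value;
            return null;                                            // key absent
        }
    }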
Hash code in Java
Returns a 32-bit integer representing the hash value of an object. It is required to be consistent with the equals() method, so that equal objects have equal hash codes
Component Sum
A hash function often used for data larger than 32 bits. It partitions the original data into 32-bit components and then sums these components together, ignoring overflow
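A minimal sketch for a 64-bit long, split into its two 32-bit halves, which are added with overflow ignored:
    static int componentSumHash(long key) {
        return (int) (key >>> 32) + (int) key;  // high 32 bits + low 32 bits, overflow wraps
    }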
Polynomial Accumulation
A hash function that partitions the original data into components of a fixed length and evaluates a polynomial using these components
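A minimal sketch over a string's characters, evaluating the polynomial with Horner's rule; z = 31 is the multiplier Java's own String.hashCode() uses:
    static int polyHash(String s) {
        int h = 0;
        for (int i = 0; i < s.length(); i++)
            h = 31 * h + s.charAt(i);   // Horner's rule: h = c0*z^(n-1) + ... + c(n-1)
        return h;
    }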
Modulo
In hashing, the step of taking a computed hash value modulo the hash table size N (i.e. h(k) mod N) to compress the hash value so it fits within the table's bounds
MAD (Multiply, Add, and Divide)
A compression method where the index is calculated by h2(y) = (a * y + b) mod N, with random constants a > 0 and b >= 0 such that a mod N != 0 (otherwise every key would map to b mod N)
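A minimal sketch of both compression steps, assuming a table of size N and fixed constants a and b chosen as above; floorMod keeps the result non-negative even for negative hash codes:
    static int compressMod(int hashCode, int N) {
        return Math.floorMod(hashCode, N);                              // plain modulo compression
    }
    static int compressMAD(int hashCode, int N, int a, int b) {
        return (int) Math.floorMod(a * (long) hashCode + b, (long) N); // MAD compression
    }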
Load Factor
The ratio λ = n/N of the number of elements n stored in a hash table to the total number of slots N. It indicates how full the hash table is
Uniform Hashing Assumption
The assumption, made in the analysis of hashing algorithms, that each key is equally likely to hash to any slot of the hash table, independently of the other keys
Resizing of a Hashmap
The process of increasing or decreasing the size of a hash table to maintain a suitable load factor; because compressed indices depend on the table size, every existing entry must be rehashed into the new table
Open Addressing
A collision resolution technique where instead of storing colliding elements in linked lists (chaining), the hash table itself is probed to find an empty slot
Linear Probing
A collision resolution technique in open addressing hashing where if a collision occurs, the next available slot in the hash table is searched linearly
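A minimal insertion sketch (a hypothetical fixed-size table of Integer keys, assuming the table never becomes completely full):
    static void insert(Integer[] table, int key) {
        int N = table.length;
        int i = Math.floorMod(key, N);      // initial slot from compression
        while (table[i] != null) {
            if (table[i] == key) return;    // key already present
            i = (i + 1) % N;                // collision: probe the next slot, wrapping around
        }
        table[i] = key;                     // first empty slot found
    }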
Double-Hashing
A collision resolution technique in open addressing hashing where a secondary hash function is used to calculate the interval between probes when a collision occurs
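The probe sequence is typically h(k, i) = (h1(k) + i * h2(k)) mod N for i = 0, 1, 2, ..., where h2(k) must never be 0; a common choice is h2(k) = q - (k mod q) for some prime q < N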
Compression
The process of encoding data using fewer bits than the original representation, reducing its size for storage or transmission
Lossless Compression
A compression technique where the original data can be perfectly reconstructed from the compressed data without any loss of information (e.g. ZIP archives, PNG images)
Lossy Compression
A compression technique where some data is discarded or approximated, leading to a loss of quality in the reconstructed data in exchange for smaller size (e.g. JPEG images, MP3 audio)