Chapter 2

Cards (40)

  • Different algorithms may complete the same task with different sets of instructions, taking more or less time, space, or effort than one another.
  • The analysis and study of algorithms is a discipline in Computer Science with a strong mathematical background. It often relies on theoretical analysis of pseudocode.
  • To compare the efficiency of algorithms, we don't rely on raw measurements such as differences in running speed, since those depend too heavily on processor power and on other tasks running in parallel.
  • Complexity of Algorithms
    A way to classify how efficient an algorithm is compared to alternative ones. The focus is on how execution time increases with the size of the data set to be processed.
  • Algorithmic complexity
    A measure of how long an algorithm would take to complete given an input of size n. Complexity is calculated asymptotically as n approaches infinity.
  • Time Factor
    Time is measured by counting the number of key operations, such as comparisons in a sorting algorithm (see the sketch below).
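    For example, a minimal Python sketch (an assumed illustration, not from the source material) that counts the key operations, the comparisons, performed by insertion sort:

        def insertion_sort_comparisons(items):
            data = list(items)           # work on a copy
            comparisons = 0
            for i in range(1, len(data)):
                key = data[i]
                j = i - 1
                while j >= 0:
                    comparisons += 1     # count each key comparison
                    if data[j] > key:
                        data[j + 1] = data[j]   # shift the larger element right
                        j -= 1
                    else:
                        break
                data[j + 1] = key
            return data, comparisons

        print(insertion_sort_comparisons([5, 4, 3, 2, 1]))  # ([1, 2, 3, 4, 5], 10)

    On a reverse-sorted input of size n, every pass compares against all earlier elements, giving roughly n*(n-1)/2 comparisons.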
  • Space Factor
    Space is measured by counting the maximum memory space required by the algorithm.
  • Complexity analysis doesn't concern itself with actual execution time, which depends on processor speed, instruction set, disk speed, compiler, etc. Complexity is about the algorithm itself, the way it processes the data to solve a given problem.
  • It's possible for an inefficient algorithm executed on high-end hardware to give a result quickly. However, with large input datasets, the limitations of the hardware become apparent.
  • Space Complexity
    Defining a formula that predicts how much memory space is required for the successful execution of the algorithm (see the sketch below).
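    For instance, a minimal Python sketch (an assumed illustration, not from the source): two ways to reverse a list that run in the same O(n) time but need different amounts of memory:

        def reverse_copy(data):
            # Builds a second list of n elements: O(n) auxiliary space.
            return [data[i] for i in range(len(data) - 1, -1, -1)]

        def reverse_in_place(data):
            # Swaps elements pairwise inside the input: O(1) auxiliary space.
            left, right = 0, len(data) - 1
            while left < right:
                data[left], data[right] = data[right], data[left]
                left += 1
                right -= 1
            return data

    Both produce the same output; only their space-complexity formulas differ.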
  • Time Complexity
    Determining a formula for the total time required to execute the algorithm.
  • The most important properties of a good algorithm are correctness and generality. Efficiency is usually measured in terms of the time and space required to solve a problem.
  • Empirical Method
    Measuring the efficiency of an algorithm by writing a program in some programming language, running it on some computer, and measuring the elapsed time from start to completion as well as the memory utilization (see the sketch below).
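    A minimal Python sketch of the empirical method (an assumed example; the timings it prints will differ from machine to machine, which is exactly the limitation this method suffers from):

        import time

        def elapsed_seconds(func, arg):
            # Wall-clock time from start to completion of one call.
            start = time.perf_counter()
            func(arg)
            return time.perf_counter() - start

        for n in (1_000, 10_000, 100_000):
            t = elapsed_seconds(sorted, list(range(n, 0, -1)))
            print(f"n = {n:>7}: {t:.6f} s")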
  • Theoretical Method
    Analyzing the efficiency of an algorithm by detaching it from its runtime environment, making it language-independent and machine-independent, and simulating it using a theoretical one-processor machine.
  • Input
    Algorithms should be given the same input to compare them fairly. The size of the input is a factor in the time and space needed by an algorithm to execute.
  • Three Different Algorithm Times
    • Worst case
    • Best case
    • Average case
  • The worst case analysis is often the most important consideration in algorithm design and analysis.
  • Average case
    The average time needed to execute the algorithm over some finite set of inputs, all of size n.
  • The average-case time of an algorithm for input of size n is very useful, but treat it with care: what counts as "average"?
  • Possible meanings of "average":
    • Random (equally likely) inputs
    • Real-life inputs
  • Suppose you have to find a file named (x) in your course folder, which contains 50 files.
  • Best Case
    You check the first file, and luckily its name is (x). This means you have found it in only one iteration.
  • Worst Case
    You check all 50 files, and only the last one's name is (x).
  • Average case
    What will be the average number of checks needed to find (x)? If (x) is equally likely to be any of the 50 files, the average is (1 + 2 + ... + 50) / 50 = 25.5 checks (see the sketch below).
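    A minimal Python sketch of this file-search example (an assumed illustration; the helper name linear_search_checks is hypothetical):

        def linear_search_checks(files, target):
            # Returns how many files were checked before finding the target.
            for checks, name in enumerate(files, start=1):
                if name == target:
                    return checks
            return None

        files = [f"file{i}" for i in range(1, 51)]
        print(linear_search_checks(files, "file1"))    # best case: 1 check
        print(linear_search_checks(files, "file50"))   # worst case: 50 checks

        # Average over all equally likely positions: (1 + 2 + ... + 50) / 50
        avg = sum(linear_search_checks(files, f) for f in files) / len(files)
        print(avg)                                     # 25.5

    The 25.5 figure holds only under the assumption that (x) is equally likely to be any of the 50 files.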
  • The worst case analysis is often the most useful. It gives a performance guarantee: the algorithm will not behave any worse than this on any valid input, since every other input can only be as bad as, or better than, the worst case.
  • The input is not under the control of the algorithm designer. The algorithm is supposed to work for all valid inputs. What the algorithm designer controls is the process applied to the input to produce the desired output.
  • One way to analyse algorithms is to find an upper bound on the time and space required by an algorithm. The space required is the total amount of memory the algorithm needs, including space for its input and any auxiliary storage.
  • The running time of an algorithm depends on how long it takes a computer to run the lines of code of the algorithm—and that depends on the speed of the computer, the programming language, and the compiler that translates the program from the programming language into code that runs directly on the computer, among other factors
  • We can use a combination of two ideas to think about the running time of an algorithm more carefully. First, we need to determine how long the algorithm takes, in terms of the size of its input. Second, we must focus on how fast a function grows with the input size. We call this the rate of growth of the running time
  • To keep things manageable, we need to simplify the function to distill the most important part and cast aside the less important parts
  • By dropping the less significant terms and the constant coefficients, we can focus on the important part of an algorithm's running time, its rate of growth, without getting mired in details that complicate our understanding (see the sketch below).
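    A numerical sketch in Python (an assumed example, using the illustrative function f(n) = 6n^2 + 100n + 300, which is not from the cards) showing why only the highest-order term matters as n grows:

        for n in (10, 100, 1_000, 10_000):
            quadratic = 6 * n ** 2     # highest-order term
            rest = 100 * n + 300       # lower-order terms and constants
            print(f"n = {n:>6}: 6n^2 = {quadratic:>13,}  100n + 300 = {rest:>11,}")

    At n = 10 the lower-order terms still dominate, but by n = 100 the quadratic term is already almost six times larger, and the gap keeps widening. So the rate of growth of f(n) is n^2, and f(n) is O(n^2).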
  • Asymptotic notations
    Mathematical tools to represent the complexities of algorithms for asymptotic analysis.
  • Big Oh Notation (O(n))
    Provides an upper bound for the runtime of an algorithm. It is the most commonly used notation
  • Big Omega Notation (Ω(n))
    Provides a lower bound for the runtime of an algorithm. It is the least commonly used notation
  • Big Theta Notation (Θ(n))
    Provides a tight bound for the runtime of an algorithm. It bounds a function from above and below, so it defines the exact runtime behavior
  • Little Oh Notation (o(n))
    Used to describe an upper bound that cannot be tight. It denotes a loose upper bound of f(n).
  • Little Omega Notation (ω(n))
    Used to describe a loose lower bound of f(n)
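    For reference, the standard formal definitions behind these five notations (the usual textbook convention; f and g are assumed non-negative, and c and n0 are constant thresholds):

        f(n) ∈ O(g(n))  iff there exist c > 0 and n0 such that f(n) <= c * g(n) for all n >= n0
        f(n) ∈ Ω(g(n))  iff there exist c > 0 and n0 such that f(n) >= c * g(n) for all n >= n0
        f(n) ∈ Θ(g(n))  iff f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n))
        f(n) ∈ o(g(n))  iff for every c > 0 there exists n0 such that f(n) < c * g(n) for all n >= n0
        f(n) ∈ ω(g(n))  iff for every c > 0 there exists n0 such that f(n) > c * g(n) for all n >= n0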
  • Common Complexity Classes
    • O(1) - Constant
    • O(log n) - Logarithmic
    • O(n) - Linear
    • O(n log n) - Superlinear
    • O(n^2) - Quadratic
    • O(n^3) - Cubic
    • O(2^n) - Exponential
    • O(n!) - Factorial
  • O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!)
  • Let n = 16: O(1) = 1 step, O(log n) = 4 steps, O(n) = 16 steps, O(n^2) = 256 steps, O(2^n) = 65,536 steps
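    A small Python check (an assumed example) that reproduces the n = 16 card and extends it to larger inputs:

        import math

        for n in (16, 32, 64):
            print(f"n = {n:>2}: O(1) = 1, O(log n) = {int(math.log2(n))}, "
                  f"O(n) = {n}, O(n^2) = {n ** 2}, O(2^n) = {2 ** n}")

    The exponential column explodes fastest as n grows, matching the ordering above.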