Big(O) Notation
Let's now talk about Big(O) notation. Given what we've seen, you may ask: why do we need Big(O) if we can just measure execution speed?
In the last section, we recorded the clock time our computer took to execute a function, and we used that clock time to compare two programs. But clock time is hardware-dependent: an efficient program may take longer to run on a slow computer than an inefficient program takes on a fast one. A sketch of that timing approach is shown below.
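For reference, here is a minimal sketch of the clock-time approach, assuming Python's `time.perf_counter`; the function `sum_to_n` and the input size are made up for illustration. The number it prints depends entirely on the machine it runs on.

```python
import time

def sum_to_n(n):
    # Hypothetical workload: sum the integers from 0 to n - 1.
    total = 0
    for i in range(n):
        total += i
    return total

start = time.perf_counter()
sum_to_n(1_000_000)
elapsed = time.perf_counter() - start

# The printed value varies from machine to machine (and run to run),
# which is exactly why clock time is a poor complexity metric.
print(f"Took {elapsed:.4f} seconds on this machine")
```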
So clock time is not a good metric for measuring time complexity. How, then, do we compare the complexity of two programs in a standardized way? The answer is Big(O) notation.
Big(O) notation is an algorithm complexity metric. It defines the relationship between the number of inputs and the number of steps the algorithm takes to process those inputs. Read that sentence carefully; it takes a while to sink in. Big(O) is NOT about measuring speed. It is about measuring the amount of work a program has to do as the input scales.
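To make that concrete, here is a small sketch that counts the work a loop does instead of timing it. The step counter is an illustrative device of ours, not a standard tool: doubling the input doubles the steps, and that input-to-steps relationship is what Big(O) captures.

```python
def sum_to_n_with_steps(n):
    # Same hypothetical workload as above, but counting steps
    # instead of measuring wall-clock time.
    steps = 0
    total = 0
    for i in range(n):
        total += i
        steps += 1  # one unit of work per element
    return total, steps

for n in (10, 100, 1000):
    _, steps = sum_to_n_with_steps(n)
    print(n, steps)  # steps grows in direct proportion to n, i.e. O(n)
```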
We can use Big(O) to describe both time and space complexity. Below are some common Big(O) classes, ordered from "fastest"/"best" to "slowest"/"worst".
| Function    | Big(O) Notation |
|-------------|-----------------|
| Constant    | O(c)            |
| Logarithmic | O(log(n))       |
| Linear      | O(n)            |
| Quadratic   | O(n^2)          |
| Cubic       | O(n^3)          |
| Exponential | O(2^n)          |
| Factorial   | O(n!)           |
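To get a feel for why the table is ordered this way, here is a quick sketch (the input sizes are chosen arbitrarily) that prints a few of these growth functions side by side. By n = 20, the exponential and factorial columns have already dwarfed the polynomial ones.

```python
import math

# Compare how the growth classes from the table scale as n increases.
for n in (1, 5, 10, 20):
    print(
        f"n={n:>2}  log(n)={math.log2(n):>5.1f}  "
        f"n^2={n**2:>4}  2^n={2**n:>8}  n!={math.factorial(n)}"
    )
```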
Let's see how to calculate time complexity for some of the aforementioned functions using Big(O) notation.