Big O notation
Introduced by Paul Bachmann and Edmund Landau (among others), big O notation is used in computer science to describe how much time and space an algorithm requires as its input grows. In other words, it describes the efficiency of algorithms.
Big O notation follows the syntax O(f(n)), where f(n) is some function of growth: given n input items, the maximum number of operations required to complete the algorithm is on the order of f(n).
Algorithm speed isn’t measured in seconds, but in growth of the number of operations.[1]
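As a sketch of what "growth of the number of operations" means, a minimal linear search (names are illustrative, not from the source) touches each of its n items at most once, so its worst-case comparison count grows in direct proportion to n:

```python
def linear_search(items, target):
    """Worst case: n comparisons for n items, i.e. O(n)."""
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons
    return -1, comparisons

# Searching for an absent target forces the worst case:
# doubling the input doubles the operation count.
_, ops_100 = linear_search(list(range(100)), -1)   # 100 comparisons
_, ops_200 = linear_search(list(range(200)), -1)   # 200 comparisons
```

Whether each comparison takes a nanosecond or a millisecond, the *shape* of that growth is what the O class captures.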
A few common big O designations (slowest to fastest):

- Factorial time/space (very inefficient; traveling salesperson problem): O(n!)
- Quadratic time/space (selection sort): O(n^2)
- Logarithmic-linear time/space (quicksort): O(n log n)
- Linear time/space (simple search): O(n)
- Logarithmic time/space (binary search): O(log n)
- Constant time/space (reading an array at a known index): O(1)
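To make the gap between these classes concrete, here is a sketch of binary search (an assumed, self-contained implementation) that counts its comparisons. Because each step halves the remaining range, the count stays near log2(n) even for very large inputs:

```python
def binary_search(sorted_items, target):
    """Halves the search range each step, so comparisons grow as O(log n)."""
    lo, hi = 0, len(sorted_items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid, steps
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

items = list(range(1_000_000))
idx, steps = binary_search(items, 999_999)
# steps is at most about log2(1,000,000) ≈ 20, versus up to
# 1,000,000 comparisons for a simple (linear) search.
```

On a million sorted items, the logarithmic algorithm finishes in roughly 20 steps where the linear one may need a million.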
Note: in big O notation, log conventionally means log2 (the base-2 logarithm). The choice of base only changes the count by a constant factor, so it does not affect the complexity class.
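That constant-factor relationship between bases can be checked directly with Python's math module:

```python
import math

n = 1_000_000
# Changing the logarithm's base only rescales by a constant:
# log2(n) = log10(n) / log10(2)
a = math.log2(n)
b = math.log10(n) / math.log10(2)
print(a, b)  # both ≈ 19.93
# So O(log2 n) and O(log10 n) describe the same complexity class.
```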