
Big O notation

Introduced by the mathematician Paul Bachmann and popularized by Edmund Landau, big O notation is used in computer science to describe how the time and space an algorithm needs grow with the size of its input. In other words, it describes the efficiency of algorithms.

Big O notation follows the syntax:

O(f(n))

…where f(n) is some function of growth. Given n input items, the number of operations required to complete the algorithm grows, in the worst case, on the order of f(n); constant factors are ignored.

Algorithm speed isn’t measured in seconds, but in the growth of the number of operations.[1]
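
For instance, here is a quick sketch (the function names are illustrative, not from any library) that counts operations rather than timing them:

```python
def count_ops_linear(items):
    """One operation per item: at most n operations, so O(n)."""
    ops = 0
    for _ in items:
        ops += 1
    return ops


def count_ops_quadratic(items):
    """One operation per pair of items: n * n operations, so O(n^2)."""
    ops = 0
    for _ in items:
        for _ in items:
            ops += 1
    return ops


# Doubling the input doubles the linear count but quadruples the
# quadratic one -- it's the growth that matters, not the seconds.
for n in (10, 20, 40):
    data = list(range(n))
    print(n, count_ops_linear(data), count_ops_quadratic(data))
# 10 10 100
# 20 20 400
# 40 40 1600
```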

A few common big O designations (slowest to fastest):

Note: in big O notation, log is always log₂. (The base only changes the operation count by a constant factor, so it never changes the O class.)
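
Binary search is the classic O(log n) example: each step halves the remaining range, so an n-item list is exhausted after about log₂(n) steps. A minimal sketch:

```python
def binary_search(sorted_items, target):
    """Halve the search range each step: about log2(n) steps
    in the worst case, so O(log n)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None


# 1,024 items take at most about log2(1024) = 10 halvings.
print(binary_search(list(range(1024)), 500))  # 500
```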