Time & Space Complexity

Different Notations

Now that we know, at a high level, what time & space complexities are, it's a good time to look at the different ways to represent them and try to make sense of them practically.

Let's clear up the "make sense of them" part right now. It's simple: the worse the time complexity, the more time the code takes to execute for the same input. So, given a problem's constraints and time limit, we first work out the worst-case time complexity our code can afford and still pass, and then, of course, find an algorithm that fits within that bound.
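
For example, here's a rough back-of-the-envelope check in Python (the ~10⁸ operations-per-second figure, the limit of n ≤ 10⁵, and the 1-second budget are assumptions for illustration; real judges vary):

```python
import math

# Hypothetical problem: n up to 1e5, time limit 1 second.
# Rule of thumb (an assumption, not a guarantee): a judge runs
# on the order of 1e8 simple operations per second.
N = 10**5
BUDGET = 10**8  # operations we can afford within 1 second

quadratic = N * N                  # O(n^2)     -> 1e10 operations
linearithmic = N * math.log2(N)    # O(n log n) -> ~1.7e6 operations

print(f"O(n^2):     {quadratic:.0f} ops -> {'passes' if quadratic <= BUDGET else 'too slow'}")
print(f"O(n log n): {linearithmic:.0f} ops -> {'passes' if linearithmic <= BUDGET else 'too slow'}")
```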

The same applies to space complexity, but as I wrote in the last article, it's rarely the deciding factor; in more than 95% of cases, time is what you worry about.

Diving into the Notations

Asymptotic notations are mathematical tools used to describe an algorithm's time complexity (and space complexity).

There are mainly three asymptotic notations that we'll talk about:

  • Big Oh notation (O) - upper bound
  • Omega notation (Ω) - lower bound
  • Theta notation (Θ) - exact bound

Among these three, Big O and Theta are particularly useful while analyzing algorithms.

Big Oh Notation - (O)

Mathematically, we say that f(n) = O(g(n)) if there exist positive constants N₀ and C such that f(n) ≤ C·g(n) ∀ n ≥ N₀.

Let's not dwell on the mathematical definition for now; the crux (in easily understandable terms) is that the Big O notation represents an upper bound on the running time of an algorithm.

Since it bounds the worst-case running time of an algorithm, it is the most widely used notation for analysis, as we are usually interested in the worst-case scenario.
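
As a quick illustration, here's a minimal linear search sketch (a toy example written for this article): in the worst case the target is absent, so the loop inspects all n elements, which is exactly what O(n) captures.

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent."""
    for i, value in enumerate(items):  # up to n iterations
        if value == target:
            return i
    return -1

# Worst case: the target is absent, so all n elements get checked -> O(n).
print(linear_search([3, 1, 4, 1, 5], 9))   # -1
```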

Here are some common time complexities (ordered from best to worst); a few toy sketches follow the list:

  1. O(1): Constant time complexity
  2. O(log n): Logarithmic time complexity
  3. O(n): Linear time complexity
  4. O(n log n): Linearithmic time complexity
  5. O(n²): Quadratic time complexity
  6. O(2ⁿ): Exponential time complexity
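
To make these growth rates concrete, here are a few minimal Python sketches (toy functions written for this article, assuming nothing beyond the standard library):

```python
def constant(items):        # O(1): one step, regardless of input size
    return items[0]

def logarithmic(n):         # O(log n): n is halved on each iteration
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def linear(items):          # O(n): touches each element once
    return sum(items)

def quadratic(items):       # O(n^2): visits every pair of elements
    return [(a, b) for a in items for b in items]
```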

Omega Notation - (Ω)

Mathematically, we say that f(n) = Ω(g(n)) if there exist positive constants N₀ and C such that f(n) ≥ C·g(n) ∀ n ≥ N₀.

Again, the crux is that the Ω notation represents a lower bound on the running time of an algorithm.
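
Taking the linear search sketch again (redefined here so the snippet runs on its own): in the best case the target sits at the very first position, so the loop stops after a single comparison, and the running time is Ω(1).

```python
def linear_search(items, target):
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

# Best case: the target sits at index 0, so exactly one comparison
# happens no matter how long the list is -> the running time is Ω(1).
print(linear_search([3, 1, 4, 1, 5], 3))   # 0
```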

Theta Notation - (Θ)

Mathematically, we say that f(n) = Θ(g(n)) if there exist positive constants N₀, C₁, and C₂ such that C₁·g(n) ≤ f(n) ≤ C₂·g(n) ∀ n ≥ N₀.

In simpler terms, if an algorithm has, say, Θ(N²) time complexity, that is an exact bound: the time taken by the algorithm never grows faster than C₂·N² nor slower than C₁·N².
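
For instance (another toy sketch), a double loop that visits every ordered pair always performs exactly n² iterations, no matter what the input values are, so its running time is Θ(n²):

```python
def count_pairs(items):
    """Visit every ordered pair: always exactly n * n iterations."""
    count = 0
    for a in items:        # n iterations
        for b in items:    # n iterations each -> n^2 in total
            count += 1
    return count

print(count_pairs(range(100)))   # 10000, regardless of the values
```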

Summing Things Up

If an algorithm has O(N) time complexity, it means, for a sufficiently large input:

  • The code may run in Θ(log N) time.
  • It may run in Θ(N) time.
  • However, it will never take Θ(N²) time (because N² grows faster than N).

If an algorithm has Ω(N) time complexity, it means, for a sufficiently large input:

  • The code may run in Θ(N²) time.
  • It may run in Θ(N) time.
  • However, it will never take Θ(log N) time (because log N grows slower than N).

If an algorithm has Θ(N) time complexity, it means, for a sufficiently large input:

  • The code will complete execution in Θ(N) time.
  • It will never take Θ(N²) time (because N² grows faster than N).
  • It will never take Θ(N³) time (because N³ grows faster than N).

Basically, in the case of Theta, if a program has a certain time complexity, say Θ(log N), we can be sure that its running time grows neither faster nor slower than Θ(log N).
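
One way to see these growth rates in practice is to double the input size and watch how the timings respond (a rough demo, not a benchmark; absolute numbers are machine-dependent):

```python
import time

def linear(n):
    total = 0
    for i in range(n):       # Θ(n) work
        total += i
    return total

def quadratic(n):
    total = 0
    for i in range(n):       # Θ(n^2) work
        for j in range(n):
            total += 1
    return total

for n in (500, 1000, 2000):
    start = time.perf_counter()
    linear(n)
    mid = time.perf_counter()
    quadratic(n)
    end = time.perf_counter()
    # Doubling n roughly doubles the Θ(n) time but quadruples the Θ(n^2) time.
    print(f"n={n}: linear {mid - start:.6f}s, quadratic {end - mid:.6f}s")
```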