Understanding the Big-O Notation

Roland Sankara
7 min read · Jun 13, 2020

Along your programming journey, irrespective of the language you use, you are solving problems through the code you write. Most problems we encounter can be solved in multiple ways, using totally different approaches that are all valid and correct.

Imagine you are tasked with determining which of those many valid solutions would be the most efficient in terms of performance. Without doubt, this would be very hard to work out, but luckily that is where the Big O notation comes into the picture.

The Big O notation is not "big" in terms of size or difficulty, but big in terms of its significance. In a nutshell, Big O establishes a framework for comparing one piece of code to another.

Therefore, in this blog post, I'll do my best to explain the use cases and scenarios where the Big O notation comes in handy, and help you understand it so that you can use it to come up with efficient, faster algorithms in your code.

So let’s get to it ……

Definition of the Big O Notation.

The Big O notation is the language and mathematical notation used to describe the performance (complexity and efficiency) of an algorithm.

With the Big O notation, we express the runtime in terms of how quickly it grows relative to the input, as the input gets arbitrarily large. Note that the Big O notation considers the worst-case scenario for its analysis.

Let’s break down the definition:

  1. how quickly the runtime grows — it's hard to pin down the exact runtime of an algorithm (the length of time the algorithm takes to run). It depends on the speed of the processor, what else the computer is running in the background, etc. So instead of talking about the runtime directly, we use the Big O notation to talk about how quickly the runtime grows.
  2. relative to the input — if we were measuring our runtime directly, we could express our speed in terms of seconds. Since we are measuring how quickly our runtime grows, we need to express our speed in terms of something else. With the Big O notation, we use the size of the input, which we call "n". So we can say things like the runtime grows on the order of the size of the input (O(n)), or on the order of the square of the size of the input (O(n^2)).
  3. as the input gets arbitrarily large — since the Big O notation uses the worst-case scenario for its analysis, we compare the runtimes of algorithms as the input (n) grows very large.

I hope this makes sense so far. Let's now go ahead and look at the common Big O expressions.

Big O Expressions / Runtime Complexities

Listed below are some of the common Big O expressions used to evaluate the performance of algorithms:

  1. O(1) — Constant Runtime
  2. O(n) — Linear Runtime
  3. O(n^2) — Quadratic Runtime
  4. O(log n) — Logarithmic Runtime
  5. O(2^n) — Exponential Runtime
  6. O(n!) — Factorial Runtime

Here is a chart that shows the performance of the complexities.

Let’s now look at examples to explain each Big O expression……

O(1) — Constant Runtime

This function/expression runs in constant time (O(1)) relative to its input. In this case, your algorithm takes the same length of time to run, regardless of the given input.

An example of this is a function returning the first element of a given array, as shown below:

function returnFirst(elements) {
  return elements[0];
}

The runtime is constant no matter the size of the input given.

O(n) — Linear Runtime

This function/expression runs in linear time (O(n)) relative to the input. Linear runtime occurs when the runtime grows in proportion to the size of the input data set (n).

A good example of this is searching for a particular value in the given input data set using a loop, as shown below with a sequential search.

function containsValue(elements, value) {
  // for...of iterates the array's values (for...in would give indices)
  for (const element of elements) {
    if (element === value) {
      return true;
    }
  }
  return false;
}

We see that the time taken to loop through all the elements in the array to find the one that matches the value grows with the size of the array.

O(n^2) — Quadratic Runtime

This function/expression runs in quadratic time (O(n^2)) relative to the input data set. This expression denotes an algorithm whose runtime is directly proportional to the square of the size of the input data set.

An example of this is a nested loop checking for the first instance of an even number in the given data set.

function containsEvenNum(obj) {
  for (const arr in obj) {
    for (const item of obj[arr]) {
      if (item % 2 === 0) {
        return true;
      }
    }
  }
  return false;
}

Deeper nested iterations will produce runtime complexities of O(n^3), O(n^4), etc.

O(log n) — Logarithmic Runtime

This function/expression runs in logarithmic time (O(log n)) relative to the input. In this case, the runtime grows very slowly as the input data set gets larger, because the amount of work is cut down dramatically at each step.

A common example of this is a search algorithm like binary search. The idea of a binary search is not to work with the entire data set but rather to reduce the amount of work by half with each iteration. The number of operations required to arrive at the desired result will be on the order of log2 of the input size (n).

Here is an illustration comparing a binary search with a sequential search; focus on how the binary search is executed.
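To make the halving idea concrete, here is a minimal iterative binary search sketch (assuming the input array is sorted in ascending order):

```javascript
// Binary search: halves the search range on every pass, so it takes
// at most log2(n) iterations to find (or rule out) the value.
function binarySearch(elements, value) {
  let low = 0;
  let high = elements.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (elements[mid] === value) return mid; // found it
    if (elements[mid] < value) {
      low = mid + 1;  // discard the lower half
    } else {
      high = mid - 1; // discard the upper half
    }
  }
  return -1; // value not present
}

binarySearch([1, 3, 5, 7, 9], 7); // → 3
```

Doubling the array size adds only one extra iteration, which is exactly why the growth is logarithmic.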

O(2^n) — Exponential Runtime

This occurs in algorithms where the runtime doubles with each increase in the size of the data set. For a small data set this might not look bad, but as the data grows, the time taken to execute the algorithm increases rapidly.

A common example of this is a recursive solution for finding the Fibonacci numbers.
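As a sketch, the naive recursive version looks like this — each call spawns two more calls, so the number of calls roughly doubles with each increment of n:

```javascript
// Naive recursive Fibonacci: fib(n) calls fib(n - 1) and fib(n - 2),
// producing an O(2^n) explosion of repeated subproblems.
function fib(n) {
  if (n < 2) return n;
  return fib(n - 1) + fib(n - 2);
}

fib(10); // → 55
```

(Memoizing the results brings this down to O(n), which shows how much the naive exponential version wastes on recomputation.)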

O(n!) — Factorial Runtime

In this case, the algorithm runs in factorial time. The factorial of a positive integer (n!) is the product of all positive integers less than or equal to n. This results in an extremely poor runtime.

An algorithm that generates all permutations of a given data set is an example of O(n!).
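A minimal sketch of such a permutation generator: for n elements there are n! orderings, so simply producing them all forces factorial work.

```javascript
// Generates every ordering of the input array.
// An input of n elements yields n! permutations, hence O(n!) time.
function permutations(arr) {
  if (arr.length <= 1) return [arr];
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    // All elements except arr[i]
    const rest = [...arr.slice(0, i), ...arr.slice(i + 1)];
    // Prefix arr[i] to every permutation of the remaining elements
    for (const perm of permutations(rest)) {
      result.push([arr[i], ...perm]);
    }
  }
  return result;
}

permutations([1, 2, 3]).length; // → 6, i.e. 3!
```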

The Big O expressions stated above are the metric we use to measure the performance of algorithms, and they define how the Big O notation is expressed in relation to specific algorithm implementations.

Up until this point, we have focused mainly on the runtime of an algorithm in relation to its input; this is what is known as time complexity. But with the Big O notation, we can also optimize for using less memory in addition to using less runtime. Space complexity simply looks at the total size of any new variables we are using in the algorithm in relation to the size of the input. The expressions remain the same as stated above, but in the context of space rather than time.
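A quick sketch of the distinction: both functions below run in O(n) time, but they differ in the space they allocate.

```javascript
// O(1) space: only a fixed number of variables,
// no matter how large the input array is.
function sum(numbers) {
  let total = 0;
  for (const n of numbers) total += n;
  return total;
}

// O(n) space: builds a brand-new array whose size
// grows in proportion to the input.
function doubled(numbers) {
  return numbers.map(n => n * 2);
}

sum([1, 2, 3]);     // → 6
doubled([1, 2, 3]); // → [2, 4, 6]
```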

With this knowledge of the Big O notation, you can make it a habit to think about the time and space complexity of algorithms as you design them. Before you know it, this will become second nature, allowing you to spot optimizations and potential performance issues early, and to build efficient, scalable algorithms.

I hope I have brought you up to speed with the whole idea of the Big O notation, and I believe you've learned something. I have placed some references and resources below that you can go through to extend your understanding of the Big O notation.

Resources

Adios from me✌ and kindly hit applause👏 if you found this insightful 


Roland Sankara

Fullstack Web Developer | Technical Writer | Passionate about tech mentorship