Big O Notation

Michael Rosenberg

I wanted to take a break from Hooks to provide a high-level overview of Big O Notation. It's definitely one of the harder, more complicated parts of a technical interview or coding challenge, but hopefully this post will help some of you out there and provide some clarity on the subject.

We’ll cover exactly what Big O Notation is, some key concepts to grasp, and then go over a few examples. Let’s dive into it!

Overview

So, what is Big O Notation? If you Google the term, you will see lots of definitions and resources out there that explain this topic. Basically, it’s a measure of how well a computer algorithm scales as the amount of data involved increases.

Determined by the size of the input, n -> O(n)

Some say that Big O describes the execution time required or the space used by an algorithm; others say it's what we use when talking about how long an algorithm takes to run. Both are true: Big O can measure time or space, but in this post we'll focus on the most common usage, describing how long an algorithm takes to run.

Big O depends on the size of your input, or how much data you're passing in, and it's meant for inputs that get arbitrarily large. If you're working with an array of 10 or 20 numbers, your algorithm won't take long to run, but the bigger the array gets, the slower your algorithm gets. And the smaller your Big O is, the faster your algorithm will run.

Types of Measurement

So what does Big O Notation actually measure? There are three different cases:

  1. Best Case
  2. Average Case
  3. Worst Case

Let's say we have an array of numbers and we're trying to find a specific number, and we find it on our very first try. That would be the Best Case.

Average Case is what happens on average: the expected performance across many different inputs or trials.

Worst Case is actually what Big O Notation is all about. Big O specifically describes the worst case scenario of your algorithm, or how poorly it runs as your input gets arbitrarily large.
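
To make these three cases concrete, here's a minimal linear search sketch (the function name and sample data are my own illustration, not from the original post):

def linear_search(numbers, target):
    # Scan the list from left to right until we find the target
    for i, value in enumerate(numbers):
        if value == target:
            return i   # found it
    return -1          # not in the list

data = [4, 8, 15, 16, 23, 42]

linear_search(data, 4)    # Best Case: found on the very first comparison
linear_search(data, 23)   # Average Case: found somewhere in the middle
linear_search(data, 99)   # Worst Case: every element checked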

Different Types of Big O’s

Take a look at the following ordering of common runtimes, based on the Big O Complexity Chart from bigocheatsheet.com, to see the different types from fastest to slowest:

O(1) < O(n) < O(n log n) < O(n²) < O(2^n) < O(n!)

O(1) is constant, meaning your algorithm will take the same number of steps to execute regardless of the amount of data passed in.

O(n) is linear, meaning the number of steps your algorithm takes to execute grows in direct proportion to the amount of data passed in.

O(n log n) is log-linear, meaning your algorithm is doing log(n) work n times, or the amount of data passed in multiplied by the log of that amount.
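
To see where O(n log n) shows up in practice, here's a merge sort sketch, one of the classic O(n log n) algorithms (the implementation below is my own illustration, not from the original post):

def merge_sort(nums):
    # Splitting the list in half gives log(n) levels of recursion...
    if len(nums) <= 1:
        return nums
    mid = len(nums) // 2
    left = merge_sort(nums[:mid])
    right = merge_sort(nums[mid:])
    # ...and merging does O(n) work at each level -> O(n log n) total
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]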

O(n²) is quadratic, meaning the performance of your algorithm depends on the square of the size of the data passed in.

O(2^n) is exponential, meaning your algorithm will take twice as long for every new piece of data passed in.
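
The textbook illustration of O(2^n) growth is the naive recursive Fibonacci function (a sketch of my own; strictly speaking the growth factor is a bit under 2, but it's the standard exponential example):

def fib(n):
    # Each call spawns two more calls, so the call tree roughly
    # doubles every time n increases by one -> O(2^n)
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # 55, fine for small n, painfully slow as n grows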

O(n!) is factorial, meaning your algorithm’s performance depends on the factorial of the amount of data passed in.
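
To get a feel for O(n!), consider generating every ordering of a list: a list of n items has n! permutations, so producing all of them is factorial work. A quick sketch using Python's built-in itertools (my example, not from the original post):

from itertools import permutations

items = [1, 2, 3, 4]
# A list of n items has n! orderings: 4! = 24 here,
# but 10! is already 3,628,800 -> factorial growth
print(len(list(permutations(items))))   # 24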

You can see how the smaller your Big O is, the better your algorithm will perform. At the end of the day, we want an algorithm that can handle more data faster.

Examples

Let’s start by looking at an example of O(1) -> Constant time:

x = 10 + (23 * 5)   # O(1)
y = 56 / 23         # O(1)
print(x + y)        # O(1)
# -> O(1) + O(1) + O(1) = 3 * O(1) = O(1)

Here, we have a few expressions and then a print statement. As you can see, it doesn't matter how many additions, multiplications, or divisions you add; the Big O of these expressions is constant, a fixed number of steps that doesn't grow with any input, therefore O(1).

One thing to note here is that it doesn't matter how many O(1)'s you have: if you multiply a Big O by a constant, the constant gets dropped, so 3 * O(1) is still just O(1).

Next, let’s look at an example of O(n) -> Linear time:

for i in range(0, n):   # Running 'n' times
    print(i)            # O(1)
# -> n * O(1) = O(n)

With O(n), 'n' is the number of items in the data set, and the running time depends directly on how many items we have.

Even though 'print(i)' is O(1), the entire statement runs 'n' times, whatever 'n' happens to be. That's why the Big O of this loop is O(n): we multiply O(1) by n, which equals O(n).
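
One more thing worth noting with this pattern: just like with O(1), constants inside the loop get dropped. A quick sketch of my own in the same style:

def print_twice(n):
    for i in range(0, n):   # Running 'n' times
        print(i)            # O(1)
        print(i * 2)        # O(1)
# -> n * 2 * O(1) = O(2n) = O(n), since the constant is dropped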

Let’s take a look at one more example, O(n²) -> Quadratic time:

for i in range(0, n):       # Running 'n' times
    for j in range(0, n):   # Running 'n' times
        print(i * j)        # O(1)
# -> n * n * O(1) = O(n²)

Similar to the previous example, we have three lines of code here: two loops that each depend on the size of n, and then a print statement which is O(1).

So even though the print statement is O(1), we’re multiplying that by n and then n again, and that’s why the Big O of this example is O(n²).

Recap

Again, Big O tells us how fast an algorithm runs and how it will perform as the data being passed in grows larger and larger. When creating algorithms, you want them to run as fast and as efficiently as possible. If an algorithm takes up a lot of space or takes a lot of time to execute, it's going to lead to higher costs and possibly unhappy customers or users.

If you have an algorithm with a high Big O, you’re going to want to refactor and rewrite it to make that Big O as small as possible. Remember, the smaller the Big O, the better that algorithm can handle larger sets of data and the faster it will run.
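
As an illustration of that kind of refactor (a sketch of my own, not from the post), here's a duplicate check rewritten from O(n²) nested loops down to a single O(n) pass using a set:

def has_duplicate_slow(nums):
    # Compare every pair of elements -> O(n²)
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] == nums[j]:
                return True
    return False

def has_duplicate_fast(nums):
    # One pass, remembering values we've already seen -> O(n)
    seen = set()
    for x in nums:
        if x in seen:
            return True
        seen.add(x)
    return False

print(has_duplicate_slow([1, 2, 3, 2]))   # True
print(has_duplicate_fast([1, 2, 3, 2]))   # True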

Thanks as always for reading and see you next time!
