Big O Notation: From O(1) to O(n)

Tomas Svojanovsky
5 min read · Sep 10, 2023


We need a way to measure how fast an algorithm is.

Let's say one operation takes 1 second and our program needs to run 1,000 operations: that's 1,000 seconds. But what if our friend has a much faster computer, on which one operation takes only 0.2 seconds? The same program now finishes in 200 seconds.

How can we determine the running time if the measurement depends on the computer the program runs on?

We need something better. This is why we use Big O notation.

Big O notation is a special notation that tells you how fast an algorithm is.

It does not indicate the exact time an algorithm takes. Instead, it describes how the running time grows as the size of the input increases.

Example O(1)

Constant time: the algorithm performs the same number of operations no matter how large the input is.

def add(a: float, b: float) -> float:
    return a + b

Example O(log n)

Logarithmic time complexity: each step halves the remaining work, so even very large inputs need only a handful of operations. Algorithm example: binary search (which requires a sorted input).

def binary_search(needle: int, haystack: list[int]) -> int:
    low = 0
    high = len(haystack) - 1

    while low <= high:
        mid = low + (high - low) // 2

        if haystack[mid] == needle:
            return mid
        elif haystack[mid] < needle:
            low = mid + 1
        else:
            high = mid - 1

    return -1
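To see the logarithmic growth concretely, we can count the loop iterations for lists of different sizes. Here is a small sketch using a counting variant of the function above (`binary_search_steps` is a name introduced just for this illustration):

```python
def binary_search_steps(needle: int, haystack: list[int]) -> int:
    """Count how many loop iterations a binary search performs."""
    low, high = 0, len(haystack) - 1
    steps = 0

    while low <= high:
        steps += 1
        mid = low + (high - low) // 2
        if haystack[mid] == needle:
            return steps
        elif haystack[mid] < needle:
            low = mid + 1
        else:
            high = mid - 1

    return steps

# A needle smaller than every element forces the worst case.
for n in (16, 1_000, 1_000_000):
    print(n, binary_search_steps(-1, list(range(n))))
```

Even for a million elements, the search needs only about 20 iterations. Doubling the list adds just one extra step, which is exactly what O(log n) means.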

Example O(n)

We perform as many operations as there are elements in the input. This is called linear time complexity. Algorithm example: linear search.

def linear_search(needle: int, haystack: list[int]) -> int:
    for i in range(len(haystack)):
        if haystack[i] == needle:
            return i

    return -1

Example O(n * log n)

It is generally considered good. In fact, it’s one of the best time complexities you can achieve for general-purpose sorting algorithms when working with unsorted data. Algorithm examples: Merge sort, Quicksort.

def merge_sort(array: list[int]) -> None:
    if len(array) > 1:
        left_arr = array[:len(array) // 2]
        right_arr = array[len(array) // 2:]

        merge_sort(left_arr)
        merge_sort(right_arr)

        i = 0
        j = 0
        k = 0

        while i < len(left_arr) and j < len(right_arr):
            if left_arr[i] < right_arr[j]:
                array[k] = left_arr[i]
                i += 1
            else:
                array[k] = right_arr[j]
                j += 1
            k += 1

        while i < len(left_arr):
            array[k] = left_arr[i]
            i += 1
            k += 1

        while j < len(right_arr):
            array[k] = right_arr[j]
            j += 1
            k += 1
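One way to convince yourself of the n · log n bound is to count element moves during merging. The sketch below (`merge_sort_ops` is an instrumented variant introduced here) compares the count with n · log₂(n):

```python
import math

def merge_sort_ops(array: list[int]) -> int:
    """Sort in place and return the number of element moves during merging."""
    ops = 0
    if len(array) > 1:
        mid = len(array) // 2
        left, right = array[:mid], array[mid:]
        ops += merge_sort_ops(left) + merge_sort_ops(right)

        i = j = k = 0
        while i < len(left) and j < len(right):
            if left[i] < right[j]:
                array[k] = left[i]
                i += 1
            else:
                array[k] = right[j]
                j += 1
            k += 1
            ops += 1
        while i < len(left):
            array[k] = left[i]
            i += 1
            k += 1
            ops += 1
        while j < len(right):
            array[k] = right[j]
            j += 1
            k += 1
            ops += 1
    return ops

for n in (16, 256, 4096):
    moves = merge_sort_ops(list(range(n, 0, -1)))
    print(n, moves, round(n * math.log2(n)))
```

Every element is moved once per level of merging, and there are about log₂(n) levels, so the counts track n · log₂(n) closely.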

Example O(n²)

It is considered less efficient than many other algorithms with better time complexities, especially for larger datasets. Algorithm examples: Selection sort, Bubble sort, or Insertion sort.

def selection_sort(array: list[int]) -> None:
    size = len(array)

    for i in range(size):
        min_index = i

        for j in range(i + 1, size):
            if array[j] < array[min_index]:
                min_index = j

        array[i], array[min_index] = array[min_index], array[i]
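The nested loop is where the n² comes from: the comparisons add up to n(n − 1)/2, which grows quadratically. A quick sketch (`selection_sort_comparisons` is a helper introduced here just to count them):

```python
def selection_sort_comparisons(n: int) -> int:
    """Count the comparisons selection sort makes on n elements."""
    comparisons = 0
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1
    return comparisons

for n in (10, 100, 1_000):
    print(n, selection_sort_comparisons(n), n * (n - 1) // 2)
```

Multiplying the input size by 10 multiplies the work by roughly 100, which is why these sorts struggle on large datasets.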

Example O(n!)

This time complexity is considered extremely bad and highly inefficient for most practical purposes.

It represents factorial growth: adding one more element multiplies the number of operations by roughly n, which quickly results in an astronomically large number of operations. Algorithm example: solving the travelling salesman problem by brute force.

from itertools import permutations

def calculate_total_distance(path: tuple[int, ...], cities: list[list[int]]) -> int:
    total_distance = 0

    for i in range(len(path) - 1):
        city1, city2 = path[i], path[i + 1]
        total_distance += cities[city1][city2]

    # Return to the starting city to close the tour
    total_distance += cities[path[-1]][path[0]]

    return total_distance

def traveling_salesman_bruteforce(cities: list[list[int]]) -> tuple[tuple[int, ...] | None, float]:
    num_cities = len(cities)
    all_permutations = permutations(range(num_cities))
    best_distance = float("inf")
    best_path = None

    for path in all_permutations:
        distance = calculate_total_distance(path, cities)
        if distance < best_distance:
            best_distance = distance
            best_path = path

    return best_path, best_distance
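To get a feel for how quickly n! explodes, we can count how many tours the brute-force approach has to evaluate for a given number of cities:

```python
import math

for n in (5, 10, 15, 20):
    print(f"{n} cities -> {math.factorial(n):,} tours to check")
```

At 5 cities there are only 120 tours, but at 20 cities there are already about 2.4 quintillion, far beyond what brute force can handle.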

Comparison

The easiest way to estimate the complexity of an algorithm is to look at its loops: one pass over the input suggests O(n), a nested loop over the same input suggests O(n²), and halving the remaining work on each step suggests O(log n).

Let's look at how this plays out in practice.

Real-world example

You want to play a game with your kids. Everyone has a sheet of paper. The goal is to split the sheet of paper into 16 parts.

Solution 1

One way to do it is to draw 16 boxes, one at a time. Remember, Big O notation counts the number of operations. In this example, drawing one box is one operation. You have to draw 16 boxes. How many operations will it take, drawing one box at a time?

It takes 16 steps to draw 16 boxes. What's the running time for this algorithm?

Drawing a square by square

Time complexity: O(n)

Solution 2

Fold the paper. Folding the paper once is an operation. You can draw twice as many boxes with every fold, so you can draw 16 boxes in 4 steps.

Folding the sheet of paper

Time complexity: O(log n)
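The number of folds needed is just the base-2 logarithm of the number of pieces, which we can check with a few lines of Python:

```python
import math

pieces = 16
folds = 0
while 2 ** folds < pieces:
    folds += 1  # each fold doubles the count: 1 -> 2 -> 4 -> 8 -> 16

print(folds)                   # 4
print(int(math.log2(pieces)))  # 4
```

Four folds instead of sixteen drawn boxes: that is the difference between O(log n) and O(n).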

What if we have more loops…

We’ve seen some algorithm examples. But what if we have this:

def example(input_list: list[int]) -> None:
    # Loop 1: iterates through the list once
    for item in input_list:
        print(item)

    # Loop 2: iterates through the list again
    for item in input_list:
        print(item)

my_list = [1, 2, 3, 4, 5]
example(my_list)

It should be O(2n), right? Wait a minute…

We don’t care about constants and small numbers because what matters to us is how much the algorithm will grow. Let’s take a look at the example.

We have two algorithms: one takes 2N operations and the other N² + N, where N is the number of elements in the input.

Comparison of algorithms with different time complexity

You can see that in the first case the constant 2 quickly becomes unimportant, and in the second case the lone N becomes insignificant next to N². That's why we simplify O(2N) to O(N) and O(N² + N) to O(N²).
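A few sample values make the same point numerically: the ratio between 2N and N stays fixed at 2, while N² pulls away from N without bound.

```python
for n in (10, 1_000, 100_000):
    linear = 2 * n           # O(2n): two passes over the input
    quadratic = n * n + n    # O(n^2 + n): nested loop plus one extra pass
    print(n, linear, quadratic)
```

At N = 100,000 the "extra" pass in 2N costs 100,000 operations, while the squared term costs ten billion; only the fastest-growing term matters.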

Summary

  • O(log n) is faster than O(n), and the gap widens as the list of items you're searching through grows
  • Algorithm speed isn't measured in seconds
  • Algorithm times are measured in terms of how the number of operations grows
  • Algorithm times are written in Big O notation

Thanks for reading my article!

If you enjoyed the read and want to be part of our growing community, hit the follow button, and let’s embark on a knowledge journey together.

Your feedback and comments are always welcome, so don’t hold back!
