Data Structures and Algorithms

Big O Notation: Your Algorithm Compass EP01

KaveeshaG · Mar 26, 2026 · 14 min read

“Before you can write efficient code, you need a language to talk about efficiency. Big O is that language – and it’s simpler than you think.”

Imagine you’re searching for your friend’s name in a phonebook. You could start from page one and flip through every single entry until you find it. Or you could open to the middle, check if the name comes before or after, and eliminate half the book instantly. Both approaches work – but one is dramatically smarter. Big O notation is how we put a number on that difference.

If you’ve ever wondered why your code slows to a crawl on large datasets, or why a senior engineer dismisses a solution with “that’s O(n²) – we can’t ship this”, you’re about to gain the vocabulary to understand exactly what they mean.

“Big O isn’t about how fast your code runs — it’s about how fast it slows down.”

What Does Big O Actually Measure?

Big O describes the worst-case growth rate of an algorithm’s time or space usage as input size grows. We deliberately ignore constants and small terms because we care about the big picture: what happens when n doubles? Quadruples? Reaches a million?
Three things Big O is NOT:

  • It’s not the exact number of operations
  • It’s not about a specific machine’s speed
  • It’s not always about worst case (though that’s the default)
Key Insight: Big O uses asymptotic analysis — we care about behaviour as n → ∞, so constants and lower-order terms vanish. 2n + 100 and n/2 are both O(n).
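To see why the constants vanish, here is a quick sketch using a made-up cost model (the function names and the `2n + 100` formula are illustrative, not from any real algorithm):

```python
def exact_ops(n):
    # Hypothetical cost model: an algorithm that performs 2n + 100 operations
    return 2 * n + 100

def growth_ratio(n):
    # How much the cost grows when the input size doubles
    return exact_ops(2 * n) / exact_ops(n)

# For small n the constant 100 still matters, but as n grows the ratio
# approaches 2 -- exactly the doubling behaviour of O(n). The 2 and the
# 100 stop mattering at scale, which is why Big O discards them.
```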

The Six Complexities You Need to Know

| Notation   | Name         | n = 100 | n = 10,000  | Verdict           |
|------------|--------------|---------|-------------|-------------------|
| O(1)       | Constant     | 1       | 1           | Excellent         |
| O(log n)   | Logarithmic  | 7       | 14          | Great             |
| O(n)       | Linear       | 100     | 10,000      | Acceptable        |
| O(n log n) | Linearithmic | 700     | 140,000     | Often fine        |
| O(n²)      | Quadratic    | 10,000  | 100,000,000 | Avoid if possible |
| O(2ⁿ)      | Exponential  | ≈ 10³⁰  | ≈ ∞         | Dangerous         |
Reading Code Through a Big O Lens

The fastest way to develop intuition for Big O is to read code and predict complexity before analysing it formally. Let’s walk through the key patterns.

O(1) – Constant time

Any operation that doesn’t depend on input size. Array index access, dictionary lookup, pushing to a stack.
arr = [10, 20, 30, 40]
first = arr[0]          # O(1) — index lookup
cache = {"user": "name"}
name  = cache["user"]   # O(1) — dictionary lookup

O(n) – Linear time

A single loop that touches each element once. The work grows proportionally with input.
def find_max(arr):
    maximum = arr[0]
    for num in arr:        # n iterations
        if num > maximum:
            maximum = num
    return maximum

O(n²) – Quadratic time

A loop inside a loop. Every extra element multiplies the work. The classic trap for beginners.
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):          # outer: n times
        for j in range(n - 1):  # inner: n times
            if arr[j] > arr[j+1]:
                arr[j], arr[j+1] = arr[j+1], arr[j]
    return arr

Common Trap: Two separate loops (not nested) are still O(n), not O(n²). Nested loops are what cause quadratic growth. What matters is nesting, not the number of loops.
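A minimal sketch contrasting the two shapes (the function names are mine, chosen for illustration):

```python
def sequential(arr):
    # Two loops in sequence: O(n) + O(n) = O(n)
    total = 0
    for x in arr:          # first pass: O(n)
        total += x
    positives = 0
    for x in arr:          # second pass: another O(n)
        if x > 0:
            positives += 1
    return total, positives

def nested(arr):
    # A loop inside a loop: O(n) * O(n) = O(n^2)
    zero_sum_pairs = 0
    for x in arr:          # outer: n iterations
        for y in arr:      # inner: n iterations per outer iteration
            if x + y == 0:
                zero_sum_pairs += 1
    return zero_sum_pairs
```

Doubling the input doubles the work for `sequential`, but quadruples it for `nested`.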

O(log n) – Logarithmic time

Each step halves the problem. Binary search is the textbook example. This is why searching a sorted array of a billion elements only takes ~30 comparisons.
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:  return mid
        elif arr[mid] < target: left  = mid + 1
        else:                   right = mid - 1
    return -1  # not found

Space Complexity — The Other Half

Big O applies to memory too, not just time. An algorithm that runs in O(n) time but uses O(n²) extra space is often unacceptable in production.
Tip: Space complexity counts extra memory your algorithm allocates, not the input itself. Sorting an array in-place is O(1) space. Creating a new array of the same size is O(n) space.
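The in-place versus new-array distinction can be sketched with list reversal (hypothetical helper names):

```python
def reverse_in_place(arr):
    # O(1) extra space: only two index variables, regardless of input size
    left, right = 0, len(arr) - 1
    while left < right:
        arr[left], arr[right] = arr[right], arr[left]
        left += 1
        right -= 1
    return arr

def reverse_copy(arr):
    # O(n) extra space: allocates a brand-new list the size of the input
    return arr[::-1]
```

Both run in O(n) time; only their memory footprints differ.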

Simplification Rules

You don’t need to calculate exact operations. Apply these rules to simplify:
1. Drop constants. O(2n) → O(n). O(500) → O(1).
2. Drop lower-order terms. O(n² + n) → O(n²). The dominant term wins.
3. Different inputs, different variables. Two inputs a and b → O(a + b), not O(n).
4. Nested loops multiply. O(n) inside O(n) → O(n²).
5. Sequential loops add. O(n) + O(n) → O(n), not O(n²).
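Rule 3 trips people up the most, so here is a sketch with two independent inputs (the contact-list framing is my own example):

```python
def merge_contacts(a, b):
    # One pass over each input: O(a + b), not O(n) --
    # the two lists can grow independently of each other.
    merged = []
    for contact in a:      # O(a)
        merged.append(contact)
    for contact in b:      # O(b)
        merged.append(contact)
    return merged

def cross_check(a, b):
    # Nested over two *different* inputs: O(a * b), not O(n^2)
    matches = 0
    for x in a:
        for y in b:
            if x == y:
                matches += 1
    return matches
```

If `a` has a million entries and `b` has ten, O(a + b) and O(a × b) behave very differently, which is why collapsing both to "n" hides real costs.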

Quick Reference Cheatsheet

  • Array access, hash lookup, stack push/pop
  • Binary search, balanced BST operations
  • Linear scan, finding min/max, sum of array
  • Merge sort, heap sort, most good sort algorithms
  • Bubble sort, selection sort, naive nested loops
  • Recursive Fibonacci, power set generation

Is O(n²) Ever Okay?

Yes – context matters enormously. An O(n²) algorithm is perfectly fine when n is always small (under a few hundred). A developer writing a configuration parser for 10 environment variables should not obsess over complexity. The phonebook problem only matters when the phonebook is large.
The senior engineer’s instinct to reject O(n²) kicks in when they know n can grow without bound — user data, API results, file contents, database records. Always ask: what is the realistic maximum of n in production?
Interview Tip: When asked for time/space complexity, state both your answer and your reasoning: “This is O(n) time because we iterate once, and O(1) space because we only store a running total.” Interviewers reward the reasoning more than the letter.

Practice Problems

Analyse these before checking the answers:
1. A function that iterates through a list and, for each element, checks if it exists anywhere else in the list.
2. Finding all subsets of a set of size n.
3. A while loop that divides n by 3 until it reaches 1.
4. Two separate for-loops (not nested), each over the same array of size n.
Answers: O(n²)  ·  O(2ⁿ)  ·  O(log n)  ·  O(n)
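For problem 3, a quick sketch shows why repeated division is logarithmic — the loop runs roughly log₃(n) times, and since log bases differ only by a constant factor, it lands in the same O(log n) class as halving:

```python
def steps_to_one(n):
    # Each iteration divides n by 3, so the loop runs about log3(n) times
    steps = 0
    while n > 1:
        n //= 3
        steps += 1
    return steps

# 27 -> 9 -> 3 -> 1 takes 3 steps; tripling n adds just one more step.
```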

What’s Next

Now that you can read and reason about algorithm complexity, you’re ready to start applying it. In Article 2, we’ll put Big O to work immediately: we’ll tackle the most common data structure – the array – and explore the powerful two-pointer technique that turns many O(n²) problems into O(n) solutions.
By the end of Article 2, you’ll be able to solve problems like “find two numbers in a sorted array that sum to a target” in a single pass – a pattern you’ll use for the rest of your engineering career.

