Week 5 - Problem solving methods

Problem solving

Basic algorithms

Big O notation

Logging

Debugging

Unit testing

AI

Practice

Assignment


Big O Notation

Before learning Big O notation, take a step back and think about how programs behave as data grows. A solution that works perfectly with a few items might struggle when it has to handle hundreds, thousands, or even millions of them. As the amount of data increases, a program may need to do more work, use more resources, or become noticeably slower.

Big O notation helps us think about this before problems appear. It’s a way to describe how a program’s performance changes when the input size grows. Instead of measuring exact time in seconds, Big O focuses on the trend: does the work grow slowly, quickly, or extremely fast as the data increases?

Big O answers one main question: If I give this program more data, will it still work efficiently? It doesn’t care about how powerful your computer is or the exact runtime. What matters is how much extra work the program needs to do as the input gets bigger. Understanding Big O helps you write code that scales better and make smarter choices between different solutions to the same problem.

Real-Life Analogy: Searching for a Name in a List

Situation 1: One by one search

Imagine you have a list of names on paper.

You are looking for “Jane”.

You start at the top and read every name until you find it.

The bigger the list, the more work you do.

This is how many simple programs work.
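The one-by-one search above can be sketched in code. As an illustrative sketch (findName is a hypothetical helper, not a library function):

```javascript
// One-by-one (linear) search: check each name in order until we find the target.
function findName(names, target) {
  for (let i = 0; i < names.length; i++) {
    if (names[i] === target) {
      return i; // found: return its position
    }
  }
  return -1; // not in the list
}

findName(["Ann", "Bob", "Jane"], "Jane"); // returns 2
```

In the worst case, "Jane" is the last name on the list or missing entirely, so the number of steps grows directly with the length of the list.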

Situation 2: Organized list (faster approach)

Now imagine the list is alphabetically sorted.

You open the list at its middle name. If “Jane” comes alphabetically before that name, you can ignore the entire second half; if it comes after, you can ignore the first half.

Each step removes half of the remaining list.

Even if the list grows very large, the work increases slowly.

<aside>

💡 Big O is about patterns, not about measuring exact speed.

</aside>

<aside>

⚠️ You do not need math skills to understand Big O — just logic.

</aside>

Why Big O matters

In real-world software, programs often work with large and growing amounts of data, such as user accounts, messages, products, or search results.

As data grows, some solutions that work well with small inputs can become slow or inefficient.

Big O helps developers to predict how a solution will behave as data grows, compare different solutions to the same problem, and choose approaches that will keep performing well at scale.

In short, Big O helps developers build software that continues to perform well as the application grows.

The most common Big O values

Big O notation is a way to describe how a program behaves when it has to handle more data. In real software, programs rarely work with a fixed amount of information. Over time, applications need to process more users, more messages, more products, or more records. Big O helps us understand what happens to a program as this growth occurs.

When we talk about “input,” we mean the data that a program works with. This could be a list of names, a collection of products, or a set of search results. As the input becomes larger, the program usually has to perform more actions, such as checking items, comparing values, or repeating steps inside loops.

Big O focuses on how the amount of work changes as the input grows. It does not measure exact running time. Instead, it describes general patterns that help developers predict whether a program will continue to work well as data increases.

Constant Time – O(1)

In some cases, a program always does the same amount of work, no matter how much data there is. Adding more data does not change the number of actions the program performs.

For example, accessing the first item in a list always takes one step.

const firstItem = items[0];

Whether the list contains 5 items or 5 million items, the program still performs a single action. This behavior is called constant time. It is the most efficient pattern because the amount of work never increases.
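Another common constant-time pattern is looking up a value by key. As a hedged sketch (the users data here is made up for illustration), a JavaScript Map retrieves an entry in roughly constant time on average, no matter how many entries it holds:

```javascript
// Looking up a value by key in a Map takes about the same time
// whether the Map holds 2 entries or 2 million.
const users = new Map([
  ["jane", { role: "admin" }],
  ["omar", { role: "editor" }],
]);

const jane = users.get("jane"); // a single lookup action
```

This is why key-based structures like Map and Set are often preferred over repeatedly scanning an array.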

Linear Time – O(n)

In many programs, the amount of work increases together with the amount of data. If the program needs to look at every item in a list, it must perform one action per item.

For example, printing all items in a list requires checking each one.

for (let item of items) {
  console.log(item);
}

If the list contains 10 items, the program performs 10 actions. If the list grows to 1,000 items, the program performs 1,000 actions. This behavior is called linear time, because the work grows in direct proportion to the input size.

Quadratic Time – O(n²)

Some programs perform work where each item is compared with every other item. This often happens when one loop is placed inside another loop. In this case, the amount of work grows very quickly as the data increases.

For example, comparing every pair of items in a list looks like this:

for (let i of items) {
  for (let j of items) {
    console.log(i, j);
  }
}

With 10 items, the program performs about 100 actions. With 100 items, it performs about 10,000 actions. Because the work grows so fast, quadratic time solutions can become slow very quickly and should be avoided for large datasets when possible.
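A quadratic solution can often be replaced by a linear one that remembers what it has already seen. As an illustrative sketch (hasDuplicates is a hypothetical helper), checking a list for duplicates with a Set visits each item only once, turning an O(n²) nested-loop comparison into O(n):

```javascript
// Duplicate check without nested loops: remember seen items in a Set.
// Set lookups take roughly constant time, so the whole pass is O(n).
function hasDuplicates(items) {
  const seen = new Set();
  for (const item of items) {
    if (seen.has(item)) {
      return true; // we saw this value before
    }
    seen.add(item);
  }
  return false;
}
```

The trade-off is extra memory for the Set, which is usually well worth it on large inputs.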

Logarithmic Time – O(log n)

In some efficient programs, the amount of data is reduced significantly at each step. Instead of checking everything, the program quickly removes large portions of the data from consideration.

A common example is searching in a sorted list. Each step cuts the remaining data roughly in half.

const items = [3, 8, 15, 23, 42]; // the list must already be sorted
const target = 23;

let start = 0;
let end = items.length - 1;

while (start <= end) {
  const middle = Math.floor((start + end) / 2);

  if (items[middle] === target) break; // found it

  if (target < items[middle]) {
    end = middle - 1;   // target is in the left half
  } else {
    start = middle + 1; // target is in the right half
  }
}

Even when the list is very large, the number of steps remains small. This pattern is called logarithmic time and is very efficient for handling large inputs.
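Because each step halves the data, you can estimate the worst-case number of steps for a given list size. As an illustrative sketch (maxSteps is a hypothetical helper, and this is a rough estimate, not a measurement):

```javascript
// Each step of binary search halves the remaining data,
// so a list of n items needs at most about log2(n) steps.
const maxSteps = (n) => Math.ceil(Math.log2(n));

maxSteps(1000);      // about 10 steps
maxSteps(1_000_000); // about 20 steps
```

Growing the list from a thousand items to a million only doubles the number of steps, which is what makes logarithmic time so attractive.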

Linearithmic Time – O(n log n)

Some algorithms combine two ideas: they process all items in the list, but they also repeatedly split the data into smaller parts. This pattern is common in efficient sorting algorithms.

For example, JavaScript’s built-in sorting uses an efficient approach that follows this pattern.

items.sort((a, b) => a - b); // for numbers, pass a comparator: the default sort compares items as strings

The amount of work grows faster than linear time but much slower than quadratic time. In practice, linearithmic time is considered efficient and is widely used when sorting large amounts of data.
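To get a feel for the difference between these patterns, you can compare the rough number of actions each one needs for the same input size. This is illustrative arithmetic, not a measurement:

```javascript
// Rough action counts for n = 10,000 items (orders of magnitude, not timings).
const n = 10000;

const linear = n;                      // 10,000 actions
const linearithmic = n * Math.log2(n); // about 132,877 actions
const quadratic = n * n;               // 100,000,000 actions
```

Even at this modest size, the quadratic pattern needs hundreds of times more work than the linearithmic one, which is why efficient sorting algorithms matter.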

How to Think About These Patterns

Big O encourages thinking about how often actions are repeated and how that repetition changes as data grows. A single loop usually leads to linear growth, while nested loops often lead to much faster growth. Solutions that reduce the data at each step tend to scale well.

Big O also focuses on the dominant pattern of work. Small constant actions are ignored because they become less important as the input grows.
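For example, a program that loops over the same list twice in a row does roughly 2n actions, but Big O still describes it as O(n): the constant factor matters far less than the growth pattern. A minimal sketch (countSteps is a hypothetical helper, not a standard function):

```javascript
// Two separate passes over the list: roughly 2n actions, but still O(n),
// because the work grows in direct proportion to the input size.
function countSteps(items) {
  let steps = 0;
  for (const _ of items) steps++; // first pass: n actions
  for (const _ of items) steps++; // second pass: another n actions
  return steps; // 2n total
}
```

By contrast, a loop *inside* another loop multiplies the work (n × n), which is why nesting, not repetition in sequence, is what pushes a solution into quadratic time.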

What Big O Does and Does Not Measure

Big O does not measure exact execution time, hardware speed, or differences between environments. It also does not include network delays or external systems. Big O is only concerned with how the amount of work increases as the input becomes larger.

Why Big O Is Important for Developers

Big O helps developers design solutions that continue to work well as applications grow. By understanding how different patterns scale, developers can avoid solutions that appear fine at first but cause performance problems later.

In simple terms, Big O helps developers build software that can handle growth without slowing down unexpectedly.

Summary Table

| Pattern | Meaning | Example |
| --- | --- | --- |
| O(1) | Constant | Access array index |
| O(n) | Linear | Loop through array |
| O(n²) | Quadratic | Nested loops |
| O(log n) | Logarithmic | Binary search |
| O(n log n) | Linearithmic | Efficient sorting |

Recommended Videos

Big O Notation Explained – Computer Science Basics

https://www.youtube.com/watch?v=Mo4vesaut8g

A clear, beginner-friendly explanation of Big O growth patterns.

Visualizing Big O with simple animations

https://www.youtube.com/watch?v=RGuJga2Gl_k

Big O Explained for Beginners (no math!)

https://www.youtube.com/watch?v=D6xkbGLQesk