Big-O Notation (Time and Space Complexity)


Quick Answer

Big-O notation describes how an algorithm's time or space usage grows as the input size n increases. It matters because it predicts scalability. Derive it with a few rules of thumb, know the common complexity classes, and be explicit about best/average/worst-case and amortized complexity.

Answer

The Core Idea

Big-O describes how time or space usage grows as input size n increases. It is an upper bound that focuses on the dominant growth rate and ignores constants.

Time vs space


Always say which complexity you mean. For example, mergesort runs in O(n log n) time but uses O(n) extra space.
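A minimal sketch of where that O(n) extra space comes from, assuming a plain recursive mergesort (the function name is illustrative):

```javascript
// Recursive mergesort: O(n log n) time. The merge step allocates a new
// array proportional to the input, which is the O(n) auxiliary space.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = arr.length >> 1;
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  const merged = []; // extra buffer: O(n) space at the top level
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}
```

An in-place variant exists but is considerably more complex, which is why the O(n)-space version is the one usually discussed in interviews.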

Rules of thumb

  • Sequential steps add: O(n) + O(n) -> O(n).
  • Nested loops multiply: O(n) * O(n) -> O(n^2).
  • Drop constants/lower-order terms: O(3n + 10) -> O(n).
  • Log base does not matter in Big-O.
JAVASCRIPT
// O(n^2): nested loops multiply
for (let i = 0; i < n; i++) {
  for (let j = 0; j < n; j++) { /* constant work */ }
}

// O(log n): binary search halves the search range each iteration
function binarySearch(arr, target) {
  let lo = 0;
  let hi = arr.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

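The first rule of thumb (sequential steps add) can be sketched the same way; the function below is illustrative:

```javascript
// O(n) + O(n) -> O(n): two sequential passes over the array stay linear.
function sumAndMax(nums) {
  let sum = 0;
  for (const x of nums) sum += x; // first O(n) pass

  let max = -Infinity;
  for (const x of nums) max = Math.max(max, x); // second O(n) pass

  return { sum, max }; // total work is O(2n), which simplifies to O(n)
}
```

Contrast this with the nested loops above: sequential loops add their costs, while nesting one inside the other multiplies them.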
Best/average/worst


A hash map lookup is O(1) on average but can degrade to O(n) in the worst case, e.g., when many keys collide into one bucket. State which case you are discussing.
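A minimal sketch of why the worst case degrades, using a chained hash table with a deliberately pathological hash function (the class and names are illustrative, not how real engines implement maps):

```javascript
// Chained hash table where every key hashes to the same bucket.
// Each get() must scan a chain of length n, so lookup is O(n).
class BadHashMap {
  constructor() {
    this.buckets = [[]];
  }
  hash(_key) {
    return 0; // pathological: all keys collide into bucket 0
  }
  set(key, value) {
    const bucket = this.buckets[this.hash(key)];
    const entry = bucket.find(([k]) => k === key);
    if (entry) entry[1] = value;
    else bucket.push([key, value]); // chain grows linearly with n
  }
  get(key) {
    const bucket = this.buckets[this.hash(key)];
    const entry = bucket.find(([k]) => k === key); // O(n) linear scan
    return entry ? entry[1] : undefined;
  }
}
```

With a good hash function the keys spread evenly across buckets, chains stay short, and the same operations average out to O(1).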

Amortized complexity


Dynamic array push is O(1) amortized because the occasional O(n) resize is rare enough that its cost, spread across all pushes, averages out to constant per operation.
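A sketch of the doubling argument, counting element copies explicitly (the class is illustrative; real JavaScript arrays handle growth internally):

```javascript
// Dynamic array that doubles its capacity when full. Over n pushes the
// resizes copy 1 + 2 + 4 + ... elements, which totals less than 2n,
// so the amortized cost per push is O(1).
class DynArray {
  constructor() {
    this.data = new Array(1);
    this.length = 0;
    this.copies = 0; // instrumentation: total elements copied by resizes
  }
  push(x) {
    if (this.length === this.data.length) {
      const bigger = new Array(this.data.length * 2); // rare O(n) resize
      for (let i = 0; i < this.length; i++) {
        bigger[i] = this.data[i];
        this.copies++;
      }
      this.data = bigger;
    }
    this.data[this.length++] = x;
  }
}

const a = new DynArray();
for (let i = 0; i < 1024; i++) a.push(i);
// a.copies is 1 + 2 + 4 + ... + 512 = 1023, under 2 * 1024
```

The key design choice is geometric growth: growing by a fixed increment instead (say, +10 slots) would make the total copy cost quadratic rather than linear.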

Pitfalls

  • Forgetting hidden loops (e.g., indexOf or includes called inside a loop).
  • Ignoring recursion depth or extra space.
  • Mixing average and worst-case without saying which.
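The hidden-loop pitfall in a sketch: includes is itself a linear scan, so the "single" loop below is quadratic (function names are illustrative):

```javascript
// Looks like one loop, but slice and includes each scan up to i elements,
// making the whole function O(n^2).
function hasDuplicateSlow(nums) {
  for (let i = 0; i < nums.length; i++) {
    if (nums.slice(0, i).includes(nums[i])) return true; // hidden O(n) scan
  }
  return false;
}

// O(n) alternative: a Set gives O(1) average-case membership checks.
function hasDuplicateFast(nums) {
  const seen = new Set();
  for (const x of nums) {
    if (seen.has(x)) return true;
    seen.add(x);
  }
  return false;
}
```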

Summary
  • Big-O describes growth rate as input grows.
  • Use rules to derive complexity quickly.
  • Be explicit about time vs space and best/average/worst-case.