Table of Contents

1. What Even IS a Data Structure?
2. What Even IS an Algorithm?
3. How Computers Store Data (Memory)
4. Time Complexity -- The Most Important Concept
5. The Math Behind Big-O (Why It Works)
6. Space Complexity
7. Every Data Structure Explained From Zero
8. Essential Algorithms Explained
9. Recursion -- Functions That Call Themselves
10. Backtracking -- Try All Possibilities
11. Prefix Sum -- Precompute Cumulative Sums
12. Suffix Sum -- Precompute from Right
13. Sweep Line -- Event-Based Processing
14. Memoization -- Remember Your Work
15. Tabulation -- Build Up Solutions
16. Python & Java DSA Quick Reference
17. How to Think About Problems
18. Tips to Become a World-Class Competitive Programmer
19. Learning Roadmap
20. Practice Quiz

1. What Even IS a Data Structure?

Imagine you just moved into a new house. You have a bunch of stuff -- books, clothes, food, tools. You could just throw everything into one giant pile in the middle of the room. But that would be a nightmare to find anything in, right?

So instead, you organize your stuff:

- Books go on the bookshelf
- Clothes go in the dresser
- Food goes in the fridge
- Tools go in the toolbox

A data structure is exactly the same thing, but for computer data. It's a way of organizing information so you can find it, add to it, and work with it efficiently.

Data Structure = A way of organizing data so that it can be used efficiently

Just like you wouldn't use a bookshelf to store soup (that's what the fridge is for), different data structures are good for different jobs:

Real World | Data Structure | Good For
Row of lockers | Array | Quick access by number
Treasure hunt clues | Linked List | Easy to insert/remove items
Stack of plates | Stack | Undo/redo, backtracking
Queue at a shop | Queue | First come, first served
Dictionary / phone book | Hash Map | Instant lookup by name
Family tree | Tree | Hierarchical data, fast search
Road map / social network | Graph | Connections between things
Key Insight

There is no single "best" data structure. The right choice depends on what you need to do. A great programmer knows when to use which one -- and that's exactly what this page teaches you.

2. What Even IS an Algorithm?

An algorithm is just a set of steps to solve a problem. That's it. You already use algorithms every day without realizing it.

Real-World Algorithm: Making a Cup of Tea

Step 1: Boil water
Step 2: Put tea bag in cup
Step 3: Pour boiling water into cup
Step 4: Wait 3 minutes
Step 5: Remove tea bag
Step 6: Add milk/sugar if you want

That's an algorithm. A clear set of instructions that always produces the right result.

In programming, an algorithm is the same thing -- a step-by-step procedure to solve a problem. For example:

- Sort a list of numbers from smallest to largest
- Find the shortest route between two cities
- Look up a word in a dictionary by opening to the middle and halving from there

That last one? That's binary search -- one of the most important algorithms in computer science. You've been doing it since you were a kid looking up words.

Why Some Algorithms Are Better Than Others

Imagine you need to find the name "Sean" in a phone book with 1,000,000 names.

Bad algorithm: Start at page 1 and read every single name until you find "Sean". Could take up to 1,000,000 checks.

Good algorithm (binary search): Open to the middle. "Sean" comes after "M", so look in the second half. Open to the middle of that. Keep halving. Only takes about 20 checks to search 1,000,000 names.

Same problem. Same answer. But one approach is 50,000x faster. That's why algorithms matter.

3. How Computers Store Data (Memory)

Before you can understand data structures, you need to understand how a computer actually stores data. Don't worry -- it's simpler than you think.

RAM = A Giant Row of Numbered Boxes

Your computer's memory (RAM) is like a massive row of numbered mailboxes. Each mailbox:

- Has a unique number, called its address
- Holds one small piece of data

Address: [ 0] [ 1] [ 2] [ 3] [ 4] [ 5] [ 6] [ 7]
Data:    [42] [17] [85] [ 3] [99] [51] [ 8] [64]

When you create a variable in code like x = 42, the computer:

  1. Finds an empty mailbox (say, address #0)
  2. Puts the value 42 in it
  3. Remembers that "x" means "go to address #0"

Why This Matters for Data Structures

The key insight is: going to a specific mailbox by its number is instant. The computer doesn't have to walk past mailbox #0, #1, #2... to get to mailbox #500. It just jumps straight there. This is called random access, and it's why arrays are so fast at looking up items by index.
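The address arithmetic behind random access can be sketched in a few lines (the start address and element size below are made-up numbers, purely for illustration):

```python
# Hypothetical values: an array of 4-byte integers starting at address 1000.
start_address = 1000
element_size = 4

def address_of(index: int) -> int:
    # One multiplication and one addition -- O(1), regardless of array size.
    return start_address + index * element_size

print(address_of(0))    # 1000
print(address_of(500))  # 3000
```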

The Memory Trade-Off

Every data structure is making a trade-off about how to use these mailboxes. Arrays put data in consecutive mailboxes (fast to access, hard to insert). Linked lists scatter data across random mailboxes but connect them with pointers (easy to insert, slow to access). Understanding this trade-off is the key to mastering data structures.

4. Time Complexity -- The Most Important Concept in DSA

This is it. This is the single most important concept in all of data structures and algorithms. If you understand this deeply, everything else falls into place.

Why We Care About Speed

Imagine you're building an app. When you have 100 users, any code works fine. But what happens when you have 1,000,000 users? Or 1,000,000,000?

Bad code that takes 1 second for 100 users might take 3 hours for 1,000,000 users. Good code takes 1 second for both. That's the difference between a successful app and a broken one.

We Count Steps, Not Seconds

We don't measure speed in seconds because different computers run at different speeds. Instead, we count how many operations the algorithm needs as the input gets bigger.

Example: Finding the Maximum

Given a list of n numbers, find the biggest one.

Algorithm: Look at each number, keep track of the biggest.
For a list of 10 numbers: ~10 operations
For a list of 1,000 numbers: ~1,000 operations
For a list of 1,000,000 numbers: ~1,000,000 operations

The number of operations grows linearly with the input size. We call this O(n).

The Big-O Notation Cheat Sheet

Big-O tells us: "As the input gets really big, how does the number of operations grow?"

Big-O | Name | Analogy | Speed
O(1) | Constant | Knowing exactly which locker to open | Instant, no matter the size
O(log n) | Logarithmic | Looking up a word in a dictionary | Incredibly fast
O(n) | Linear | Reading every page of a book | Fast, grows steadily
O(n log n) | Linearithmic | Smart sorting (merge sort) | Pretty fast
O(n²) | Quadratic | Comparing every person with every other person | Slow for large inputs
O(2ⁿ) | Exponential | Trying every combination on a lock | Impossibly slow
O(n!) | Factorial | Trying every possible arrangement | Heat death of the universe

Let's See Actual Numbers

This is where it gets real. Look at how many operations each complexity needs for different input sizes:

n | O(1) | O(log n) | O(n) | O(n log n) | O(n²) | O(2ⁿ)
10 | 1 | 3 | 10 | 33 | 100 | 1,024
100 | 1 | 7 | 100 | 664 | 10,000 | 1.27 × 10³⁰
1,000 | 1 | 10 | 1,000 | 9,966 | 1,000,000 | IMPOSSIBLE
10,000 | 1 | 13 | 10,000 | 132,877 | 100,000,000 | IMPOSSIBLE
100,000 | 1 | 17 | 100,000 | 1,660,964 | 10,000,000,000 | IMPOSSIBLE
1,000,000 | 1 | 20 | 1,000,000 | 19,931,569 | 1,000,000,000,000 | IMPOSSIBLE
Look at That Table Carefully

For n = 1,000,000:
O(log n) needs just 20 operations. That's binary search.
O(n) needs 1 million operations. Fine, takes maybe 0.01 seconds.
O(n²) needs 1 trillion operations. At 10⁸ operations per second, that's almost 3 hours.
O(2ⁿ) is literally impossible -- more operations than atoms in the universe.

This is why understanding time complexity is life or death for your code.

Understanding Each Complexity

O(1) -- Constant Time

No matter how big the input is, it always takes the same number of steps.

Example

Accessing an array element by index: arr[5]
Whether the array has 10 elements or 10 billion, jumping to index 5 is instant. The computer calculates: address = start_address + 5 x element_size. One calculation, done.

O(log n) -- Logarithmic Time

Each step cuts the problem in half. This is the magic of binary search.

Why log₂(n)?

Start with n items. After step 1: n/2 items. After step 2: n/4. After step 3: n/8.
After k steps: n / 2ᵏ items remaining.

We stop when 1 item is left: n / 2ᵏ = 1
Solving: 2ᵏ = n, so k = log₂(n)

For n = 1,000,000,000 (1 billion): log₂(1,000,000,000) ≈ 30 steps
You can search through a BILLION items in just 30 steps. That's the power of logarithms.
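You can verify the halving count with a short loop (a quick sketch, not part of any library):

```python
def halving_steps(n: int) -> int:
    # Count how many times n can be halved before reaching 1.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(8))     # 3  (8 -> 4 -> 2 -> 1)
print(halving_steps(1024))  # 10 (2^10 = 1024)
```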

O(n) -- Linear Time

You look at every element once. If the input doubles, the time doubles.

Example

Finding the sum of all elements in an array. You must visit each element exactly once. No shortcut.

def find_sum(arr):
    total = 0
    for num in arr:   # visits each element once = O(n)
        total += num
    return total

O(n²) -- Quadratic Time

For every element, you look at every OTHER element. It's a nested loop.

Why It's n²

Imagine 100 students in a class need to shake hands with every other student.
Student 1 shakes hands with 99 others.
Student 2 shakes hands with 99 others.
...
Total handshakes ≈ 100 x 100 = 10,000.

With 1,000 students: 1,000 x 1,000 = 1,000,000 handshakes.
With 1,000,000 students: 1,000,000 x 1,000,000 = 1,000,000,000,000. That's a TRILLION. Way too slow.

# This is O(n²) -- a nested loop
def two_sum_brute_force(arr, target):
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):   # for EACH i, we check every later j
            if arr[i] + arr[j] == target:
                return [i, j]
    return []

O(2ⁿ) -- Exponential Time

The number of operations doubles every time you add one more element. This blows up incredibly fast.

Why It's So Bad

Think of a combination lock with n switches (on/off).
1 switch: 2 combinations
2 switches: 4 combinations
3 switches: 8 combinations
10 switches: 1,024 combinations
20 switches: 1,048,576 (about 1 million)
30 switches: 1,073,741,824 (about 1 billion)
50 switches: 1,125,899,906,842,624 (over 1 quadrillion)

This is why brute-force solutions to problems like "find the best subset" are impractical for large inputs.
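A quick way to feel the blow-up is to enumerate every subset with bit masks (a toy sketch; each of the n items is either in or out, giving 2ⁿ subsets):

```python
def all_subsets(items: list) -> list[list]:
    # Each bit of `mask` decides whether the matching item is included.
    n = len(items)
    return [
        [items[i] for i in range(n) if mask >> i & 1]
        for mask in range(1 << n)  # 2^n masks in total
    ]

print(len(all_subsets([1, 2, 3])))        # 8    = 2^3
print(len(all_subsets(list(range(10)))))  # 1024 = 2^10
```

Add one more item and the count doubles. That doubling is exactly what makes brute force collapse past n ≈ 20.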

5. The Math Behind Big-O (Why It Works)

Big-O notation describes the upper bound on growth rate. Formally: f(n) = O(g(n)) means there exist constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀.

In plain English: we only care about the fastest-growing term, and we drop constants.

Rule 1: Drop Constants

3n + 5 → O(n)
100n → O(n)
n/2 → O(n)

Why? When n = 1,000,000, the difference between n and 3n doesn't change the CATEGORY of growth. They both grow linearly.

Rule 2: Drop Lower-Order Terms

n² + n → O(n²)
n³ + 100n² + 5000n → O(n³)
2ⁿ + n⁵ → O(2ⁿ)

Why? For large n, the biggest term dominates everything else. When n = 1,000: n² = 1,000,000 but n = 1,000. The n term is irrelevant.

Rule 3: Different Steps Get Added

Loop through array A (size a): O(a)
Then loop through array B (size b): O(b)
Total: O(a + b) -- NOT O(a × b)

Only if you nest them does it become O(a × b).
Nested Loop Rule (Multiplication):

If loop A runs O(f(n)) times and loop B runs O(g(n)) times inside loop A:
Total = O(f(n) × g(n))

If loop A runs O(f(n)) and loop B runs O(g(n)) after loop A (not nested):
Total = O(f(n) + g(n)) = O(max(f(n), g(n)))
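A tiny counting experiment makes the addition vs. multiplication rule concrete (the sizes 3 and 4 are arbitrary):

```python
a, b = 3, 4  # arbitrary loop sizes

# Sequential loops: a + b iterations in total.
sequential_ops = 0
for _ in range(a):
    sequential_ops += 1
for _ in range(b):
    sequential_ops += 1

# Nested loops: a * b iterations in total.
nested_ops = 0
for _ in range(a):
    for _ in range(b):
        nested_ops += 1

print(sequential_ops)  # 7  = 3 + 4
print(nested_ops)      # 12 = 3 * 4
```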

The Math Behind log₂(n)

Logarithms answer the question: "How many times do I need to divide n by 2 to get to 1?"

log₂(8) = 3 because 2³ = 8 (divide 8 by 2 three times: 8→4→2→1)
log₂(16) = 4 because 2⁴ = 16 (divide 16 by 2 four times: 16→8→4→2→1)
log₂(1024) = 10 because 2¹⁰ = 1024
log₂(1,000,000) ≈ 20
log₂(1,000,000,000) ≈ 30

This is why binary search is so powerful. Even for 1 billion elements, you only need ~30 steps. Each step cuts the search space in half.

Why O(n log n) Is the Sorting Sweet Spot

The best comparison-based sorting algorithms (merge sort, quick sort) are O(n log n). Here's the intuition:

Merge Sort Intuition

Merge sort splits the array in half, sorts each half, then merges them.

How many times can you split? log₂(n) times (keep halving until you have individual elements).
How much work at each level? n operations (to merge all the pieces at that level).

Total: n work × log₂(n) levels = O(n log n)
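The intuition above translates directly into code. This is a minimal sketch of merge sort that returns a new sorted list rather than sorting in place:

```python
def merge_sort(arr: list[int]) -> list[int]:
    if len(arr) <= 1:                # base case: 0 or 1 elements is already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])     # keep halving: log2(n) levels of splitting
    right = merge_sort(arr[mid:])

    # Merge the two sorted halves: O(n) work per level.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```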

Competitive Programming Rule of Thumb

A modern computer does roughly 100,000,000 (10⁸) simple operations per second. Use this to estimate if your solution is fast enough:

n ≤ 10: O(n!) is fine
n ≤ 20: O(2ⁿ) is fine
n ≤ 500: O(n³) is fine
n ≤ 10,000: O(n²) is fine
n ≤ 1,000,000: O(n log n) is fine
n ≤ 100,000,000: O(n) is fine
Any n: O(log n) or O(1) is fine

6. Space Complexity

Space complexity is the same idea as time complexity, but for memory instead of time.

If your algorithm creates a new array of size n, that's O(n) space. If it only uses a few variables regardless of input size, that's O(1) space.

O(1) space: Only uses a fixed number of variables (like counters, pointers)
O(n) space: Creates a copy of the input or a hash map of size n
O(n²) space: Creates a 2D grid of size n×n

The Time-Space Trade-Off

Often, you can make code faster by using more memory, or save memory by being slower. This is one of the most fundamental trade-offs in CS.

Example: Two Sum

Brute force: O(n²) time, O(1) space -- check every pair
Hash map: O(n) time, O(n) space -- store seen numbers in a hash map

The hash map solution uses more memory but is MUCH faster. This is almost always worth it.
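The hash-map version can be sketched in one pass, storing each number's index as it goes:

```python
def two_sum(nums: list[int], target: int) -> list[int]:
    seen = {}  # value -> index where we saw it: O(n) extra space
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:            # O(1) average lookup
            return [seen[complement], i]
        seen[num] = i                     # O(1) average insert
    return []  # no pair found

print(two_sum([2, 7, 11, 15], 9))  # [0, 1]
```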

7. Every Data Structure Explained From Zero

Now that you understand time complexity, let's learn each data structure. For each one, we'll cover: what it is, how it works internally, time complexity for every operation (and WHY), and when to use it.

For deep dives with code implementations, click through to each topic's dedicated page.

Arrays -- The Foundation of Everything

Analogy: A row of numbered lockers in a school hallway.

Index: [ 0] [ 1] [ 2] [ 3] [ 4]
Value: [10] [20] [30] [40] [50]

Elements are stored in consecutive memory locations. Because they're consecutive, the computer can calculate exactly where any element is:

address of element[i] = start_address + i × element_size

This is why accessing by index is O(1) -- it's just one multiplication and one addition, no matter how big the array is.

Operation | Time | Why
Access by index | O(1) | Direct address calculation
Search (unsorted) | O(n) | Must check each element
Insert at end | O(1)* | Just place at next spot (*amortized)
Insert at beginning/middle | O(n) | Must shift all elements after
Delete from middle | O(n) | Must shift elements to fill gap

Deep dive: Arrays & Strings page

Linked Lists -- The Treasure Hunt

Analogy: A treasure hunt where each clue tells you where the next clue is.

Head → [10 | →] → [20 | →] → [30 | →] → [40 | →] → [50 | null] ← Tail

Each element (called a node) stores two things: its value AND a pointer to the next node. Nodes can be scattered anywhere in memory -- they don't need to be consecutive.

Operation | Time | Why
Access by index | O(n) | Must follow links from the start
Search | O(n) | Must follow links one by one
Insert at head | O(1) | Just create node, point to old head
Insert at position (given pointer) | O(1) | Just redirect pointers
Delete (given pointer) | O(1) | Just redirect pointers
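A minimal node sketch shows why inserting at the head is O(1) (just one pointer assignment) while traversal is O(n):

```python
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node  # pointer to the next node (None at the tail)

# Build 10 -> 20 by inserting at the head each time: O(1) per insert.
head = Node(20)
head = Node(10, head)

# Traversal must follow the links one by one: O(n).
values = []
node = head
while node:
    values.append(node.value)
    node = node.next

print(values)  # [10, 20]
```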

Deep dive: Linked Lists page

Stacks -- Last In, First Out (LIFO)

Analogy: A stack of plates. You can only add or remove from the top.

+----+
| 50 | ← top (most recently added)
| 40 |
| 30 |
| 20 |
| 10 |
+----+

Three operations, ALL O(1):

- Push: add an item on top
- Pop: remove the top item
- Peek: look at the top item without removing it

Why O(1)? You always know exactly where the top is. No searching needed.

Use cases: Undo/redo, browser back button, matching parentheses, function call stack, DFS.
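Matching parentheses is the classic stack exercise; a Python list works as the stack (append = push, pop = pop):

```python
def is_balanced(s: str) -> bool:
    stack = []
    pairs = {')': '(', ']': '[', '}': '{'}
    for ch in s:
        if ch in '([{':
            stack.append(ch)                           # push: O(1)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:  # pop: O(1)
                return False
    return not stack  # balanced only if nothing is left open

print(is_balanced("([{}])"))  # True
print(is_balanced("(]"))      # False
```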

Deep dive: Stacks & Queues page

Queues -- First In, First Out (FIFO)

Analogy: A line at a shop. First person in line gets served first.

Front → [10] [20] [30] [40] [50] ← Back
(remove from here)        (add here)

Two operations, both O(1):

- Enqueue: add an item at the back
- Dequeue: remove the item from the front

Use cases: BFS, task scheduling, printer queue, message queues.
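In Python, `collections.deque` gives O(1) operations at both ends (a plain list would make removing from the front O(n), because every remaining element shifts):

```python
from collections import deque

q = deque()
q.append(10)         # enqueue at the back: O(1)
q.append(20)
q.append(30)

first = q.popleft()  # dequeue from the front: O(1)

print(first)         # 10 -- first in, first out
print(list(q))       # [20, 30]
```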

Hash Maps -- The Most Important Interview Data Structure

Analogy: A magical librarian who instantly knows exactly which shelf any book is on.

A hash map stores key-value pairs and can find any value by its key in O(1) average time. How?

How Hashing Works

Step 1: Take the key (e.g., "Sean")
Step 2: Run it through a hash function that converts it to a number (e.g., 7)
Step 3: Use that number as the index in an array: storage[7] = "Sean's data"

To find "Sean" later, just hash the key again → get 7 → go directly to storage[7]. Instant.

Operation | Average | Worst | Why Worst Case?
Insert | O(1) | O(n) | Hash collisions (many keys map to same index)
Lookup | O(1) | O(n) | Same reason
Delete | O(1) | O(n) | Same reason
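Python's built-in `dict` is a hash map; all of the operations below are O(1) on average (the names and values here are made up for illustration):

```python
shelf = {}                 # empty hash map
shelf["Sean"] = "Aisle 7"  # insert: hash the key, store at that slot
shelf["Ada"] = "Aisle 2"

print(shelf["Sean"])       # Aisle 7 -- lookup by key
print("Ada" in shelf)      # True   -- membership test
del shelf["Ada"]           # delete by key
print("Ada" in shelf)      # False
```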

Deep dive: Hash Maps & Sets page

Trees -- Hierarchical Data

Analogy: A family tree, or an org chart, or your file system (folders inside folders).

            [50]         ← root
           /    \
        [30]    [70]     ← children of 50
        /  \    /  \
     [20] [40] [60] [80] ← leaves

A Binary Search Tree (BST) has a special rule: left child < parent < right child. This means searching is like binary search:

Why BST Search Is O(log n)

Looking for 60 in the tree above:
Start at 50. 60 > 50, so go right.
At 70. 60 < 70, so go left.
At 60. Found it!

Each comparison eliminates HALF the tree (either the left or right subtree). Same math as binary search: O(log n) for a balanced tree.
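The walk described above can be coded directly (a minimal sketch; the tree built here matches the diagram):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def bst_search(node, target):
    while node:
        if target == node.val:
            return node
        # Each step discards an entire subtree: O(log n) in a balanced tree.
        node = node.left if target < node.val else node.right
    return None

# The tree from the diagram above.
root = TreeNode(50,
                TreeNode(30, TreeNode(20), TreeNode(40)),
                TreeNode(70, TreeNode(60), TreeNode(80)))

print(bst_search(root, 60).val)      # 60
print(bst_search(root, 55) is None)  # True
```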

Deep dive: Trees & BST page

Heaps -- Tournament Brackets

Analogy: A tournament bracket where the winner always rises to the top.

A min-heap always keeps the smallest element at the root. A max-heap keeps the largest. Both support:

Operation | Time | Why
Get min/max | O(1) | It's always at the root
Insert | O(log n) | Add at bottom, bubble up (tree height = log n)
Extract min/max | O(log n) | Remove root, bubble down replacement

Use cases: Priority queues, finding Kth largest/smallest, Dijkstra's algorithm, merge K sorted lists.
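Python's `heapq` module implements a min-heap on top of a plain list:

```python
import heapq

heap = []
for x in [30, 10, 20]:
    heapq.heappush(heap, x)   # insert: O(log n) each

smallest = heap[0]            # peek at the min: O(1), it's always at index 0
popped = heapq.heappop(heap)  # extract min: O(log n)

print(smallest)  # 10
print(popped)    # 10
print(heap[0])   # 20 -- the next-smallest has bubbled to the root
```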

Deep dive: Advanced Topics page

Graphs -- Connections Between Things

Analogy: A social network (people are nodes, friendships are edges) or a road map (cities are nodes, roads are edges).

[A] --- [B]
 |  \    |
 |   \   |
[C] --- [D]

Graphs are the most general data structure. Trees are actually a special type of graph (connected, no cycles). Linked lists are a special type of tree (each node has at most one child).

Two ways to explore graphs:

- BFS (Breadth-First Search): explore neighbors level by level, using a queue
- DFS (Depth-First Search): go as deep as possible before backtracking, using a stack or recursion
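Here is a minimal BFS sketch over the diagram's graph, stored as an adjacency list (DFS looks the same with a stack, or recursion, in place of the queue):

```python
from collections import deque

def bfs(graph: dict, start: str) -> list[str]:
    visited = {start}
    order = []
    queue = deque([start])  # the queue drives level-by-level exploration
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

# Adjacency list for the diagram above: edges A-B, A-C, A-D, B-D, C-D.
graph = {
    "A": ["B", "C", "D"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["A", "B", "C"],
}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```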

Deep dive: Graphs page

Tries -- Autocomplete on Your Phone

Analogy: The autocomplete feature that shows suggestions as you type each letter.

Each node represents a letter. Paths from root to leaf form words. Finding all words that start with "ca" means just following c → a and then listing everything below.
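A trie can be sketched with nested dictionaries (the "$" marker is an arbitrary choice here, used to flag the end of a complete word):

```python
class Trie:
    def __init__(self):
        self.root = {}  # each node is a dict: letter -> child node

    def insert(self, word: str) -> None:
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})  # follow or create the path
        node["$"] = True                    # mark a complete word

    def starts_with(self, prefix: str) -> bool:
        node = self.root
        for ch in prefix:
            if ch not in node:
                return False
            node = node[ch]
        return True

t = Trie()
t.insert("cat")
t.insert("car")
print(t.starts_with("ca"))  # True  -- both words hang below c -> a
print(t.starts_with("do"))  # False
```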

Deep dive: Advanced Topics page

8. Essential Algorithms Explained

Binary Search -- Halving Your Way to the Answer

Requirement: The data must be sorted.

How it works: Compare the target with the middle element. If target is smaller, search the left half. If larger, search the right half. Repeat.

def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1     # target is in right half
        else:
            right = mid - 1    # target is in left half
    return -1  # not found

Recursion -- Functions That Call Themselves

Analogy: Russian nesting dolls. Open a doll, find a smaller doll inside. Keep opening until you reach the smallest doll (the base case).

Every recursive function needs:

  1. Base case: When to stop (the smallest doll)
  2. Recursive case: Break the problem into a smaller version of itself
# Factorial: 5! = 5 × 4 × 3 × 2 × 1
def factorial(n):
    if n <= 1:        # base case
        return 1
    return n * factorial(n - 1)  # recursive case

Two Pointers

Analogy: Reading a book from both ends simultaneously, moving inward.

Use two pointers (usually start and end) and move them toward each other based on conditions. Works on sorted arrays or when you need to find pairs.
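A minimal sketch: find the indices of a pair with a given sum in a sorted array, in O(n) time and O(1) space:

```python
def pair_sum(sorted_nums: list[int], target: int) -> list[int]:
    left, right = 0, len(sorted_nums) - 1
    while left < right:
        s = sorted_nums[left] + sorted_nums[right]
        if s == target:
            return [left, right]
        if s < target:
            left += 1    # sum too small: move the left pointer up
        else:
            right -= 1   # sum too big: move the right pointer down
    return []  # no pair found

print(pair_sum([1, 3, 4, 6, 8], 10))  # [2, 3] -- 4 + 6 == 10
```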

Deep dive: LeetCode Patterns page

Sliding Window

Analogy: A magnifying glass sliding across a page, looking at a fixed-width section at a time.

Maintain a "window" of elements and slide it across the array, adding/removing one element at a time instead of recalculating everything.

Deep dive: LeetCode Patterns page

Dynamic Programming

Analogy: Instead of recalculating your monthly expenses every time someone asks, you write the total on a sticky note and just read it next time.

DP = solving a problem by breaking it into smaller overlapping subproblems and remembering (memoizing) the results so you never solve the same subproblem twice.

Deep dive: Dynamic Programming page

Sorting

The most common sorting algorithms and when to use them:

Algorithm | Time | Space | Key Idea
Bubble Sort | O(n²) | O(1) | Repeatedly swap adjacent elements. Simple but slow.
Merge Sort | O(n log n) | O(n) | Divide in half, sort each half, merge. Stable and consistent.
Quick Sort | O(n log n)* | O(log n) | Pick pivot, partition around it. Fast in practice. (*average; O(n²) worst case)
Counting Sort | O(n + k) | O(k) | Count occurrences. Only works for integers in a known range.

Deep dive: Sorting & Searching page

9. Recursion -- Functions That Call Themselves

Analogy: Russian nesting dolls. Open a doll, find a smaller doll inside. Keep opening until you reach the smallest doll (the base case).

Every recursive function needs:

  1. Base case: When to stop (the smallest doll) -- returns a value without further recursion
  2. Recursive case: Break the problem into a smaller version of itself and call itself
Recursion Template:
def function(inputs):
  if base_case:
    return result
  return function(smaller_problem)

Why Recursion Works

Each recursive call solves a smaller version of the same problem. Eventually you reach the base case, then the results "unwind" back up.

Recursion Patterns (LeetCode Style)

Pattern 1: Linear Recursion

Process elements one at a time, moving forward or backward.

Python - Reverse a String
class Solution:
    def reverse_string(self, s: list[str]) -> None:
        def solve(left: int, right: int):
            if left >= right:
                return
            s[left], s[right] = s[right], s[left]
            solve(left + 1, right - 1)
        solve(0, len(s) - 1)

Pattern 2: Binary Recursion

Split problem into two halves, process each recursively.

Python - Fibonacci
class Solution:
    def fib(self, n: int) -> int:
        if n <= 1:
            return n
        return self.fib(n - 1) + self.fib(n - 2)

Pattern 3: Recursion with Memoization

Add caching to avoid recalculating the same subproblems.

Python - Fibonacci with Memo
class Solution:
    def fib(self, n: int) -> int:
        cache = {}
        def solve(x: int) -> int:
            if x in cache:
                return cache[x]
            if x <= 1:
                return x
            cache[x] = solve(x - 1) + solve(x - 2)
            return cache[x]
        return solve(n)

Pattern 4: Tree Recursion

Recursively process left and right subtrees.

Python - Binary Tree Inorder Traversal
class Solution:
    def inorder_traversal(self, root) -> list[int]:
        result = []
        def solve(node):
            if not node:
                return
            solve(node.left)
            result.append(node.val)
            solve(node.right)
        solve(root)
        return result
Key Insight

When writing recursive functions, ask: "What is the smallest version of this problem I can solve directly?" That's your base case. Then ask: "How do I reduce the current problem to a smaller one?" That's your recursive case.

10. Backtracking -- Try All Possibilities

Analogy: Exploring a maze. You try a path, if it hits a dead end, you backtrack to the last junction and try a different path.

Backtracking = Recursion + Choice + Undo Choice. You explore all possibilities by making a choice, recursing, then undoing that choice to try another.

Backtracking Template:
def backtrack(path, choices):
  if valid solution:
    record solution
  for each choice in choices:
    make choice
    backtrack(path, remaining_choices)
    undo choice (backtrack)

Backtracking Patterns (LeetCode Style)

Pattern 1: Subsets (Include/Exclude)

Python
class Solution:
    def subsets(self, nums: list[int]) -> list[list[int]]:
        result = []
        def backtrack(start: int, path: list[int]):
            result.append(path[:])
            for i in range(start, len(nums)):
                path.append(nums[i])
                backtrack(i + 1, path)
                path.pop()
        backtrack(0, [])
        return result

Pattern 2: Permutations

Python
class Solution:
    def permute(self, nums: list[int]) -> list[list[int]]:
        result = []
        used = [False] * len(nums)
        def backtrack(path: list[int]):
            if len(path) == len(nums):
                result.append(path[:])
                return
            for i in range(len(nums)):
                if not used[i]:
                    used[i] = True
                    path.append(nums[i])
                    backtrack(path)
                    path.pop()
                    used[i] = False
        backtrack([])
        return result

Pattern 3: Combination Sum

Python
class Solution:
    def combination_sum(self, candidates: list[int], target: int) -> list[list[int]]:
        result = []
        def backtrack(start: int, path: list[int], remaining: int):
            if remaining == 0:
                result.append(path[:])
                return
            if remaining < 0:
                return
            for i in range(start, len(candidates)):
                path.append(candidates[i])
                backtrack(i, path, remaining - candidates[i])
                path.pop()
        backtrack(0, [], target)
        return result

Pattern 4: N-Queens

Python
class Solution:
    def solve_n_queens(self, n: int) -> list[list[str]]:
        result = []
        cols = set()
        diag1 = set()
        diag2 = set()
        def backtrack(row: int, path: list[str]):
            if row == n:
                result.append(path[:])
                return
            for col in range(n):
                if col in cols or (row - col) in diag1 or (row + col) in diag2:
                    continue
                cols.add(col)
                diag1.add(row - col)
                diag2.add(row + col)
                path.append("." * col + "Q" + "." * (n - col - 1))
                backtrack(row + 1, path)
                path.pop()
                cols.remove(col)
                diag1.remove(row - col)
                diag2.remove(row + col)
        backtrack(0, [])
        return result
When to use Backtracking:
- Generate all combinations/permutations
- Solve puzzles (N-Queens, Sudoku)
- Find all paths in a grid
- Subset selection problems
- Constraint satisfaction problems

11. Prefix Sum -- Precompute Cumulative Sums

Analogy: Instead of counting books on each shelf every time someone asks, you precompute cumulative totals and just subtract to get any range.

Precompute a running sum array so you can answer any "sum from index i to j" query in O(1).

prefix[i] = arr[0] + arr[1] + ... + arr[i]
sum(arr[l..r]) = prefix[r] - prefix[l-1] (if l > 0)

Prefix Sum Patterns (LeetCode Style)

Pattern 1: Range Sum Query

Python
class Solution:
    def range_sum(self, nums: list[int], queries: list[list[int]]) -> list[int]:
        prefix = [0] * (len(nums) + 1)
        for i in range(len(nums)):
            prefix[i + 1] = prefix[i] + nums[i]
        
        result = []
        for l, r in queries:
            result.append(prefix[r + 1] - prefix[l])
        return result

Pattern 2: Subarray Sum Equals K

Python
class Solution:
    def subarray_sum(self, nums: list[int], k: int) -> int:
        count = 0
        prefix = 0
        prefix_counts = {0: 1}
        
        for num in nums:
            prefix += num
            count += prefix_counts.get(prefix - k, 0)
            prefix_counts[prefix] = prefix_counts.get(prefix, 0) + 1
        
        return count

Pattern 3: Longest Subarray with Sum K

Python
class Solution:
    def max_subarray_len(self, nums: list[int], k: int) -> int:
        prefix = 0
        prefix_index = {0: -1}
        max_len = 0
        
        for i, num in enumerate(nums):
            prefix += num
            if prefix - k in prefix_index:
                max_len = max(max_len, i - prefix_index[prefix - k])
            if prefix not in prefix_index:
                prefix_index[prefix] = i
        
        return max_len

Pattern 4: Difference Array (Range Update)

Python
class Solution:
    def corp_flight_bookings(self, bookings: list[list[int]], n: int) -> list[int]:
        diff = [0] * (n + 1)
        
        for start, end, seats in bookings:
            diff[start - 1] += seats
            diff[end] -= seats
        
        result = [0] * n
        for i in range(n):
            result[i] = result[i - 1] + diff[i] if i > 0 else diff[i]
        
        return result
When to use Prefix Sum:
- Range sum queries on static array
- Subarray sum problems
- Counting subarrays with certain sum
- Range updates (difference array)
- Any problem asking about "sum of subarray"

12. Suffix Sum -- Precompute from Right

Analogy: Similar to prefix sum, but instead of "how much from the start," you want "how much from the end."

Useful when you need to quickly answer queries like "sum from index i to end" or when processing from right to left.

suffix[i] = arr[i] + arr[i+1] + ... + arr[n-1]
sum(arr[i..end]) = suffix[i]

Suffix Sum Patterns (LeetCode Style)

Pattern 1: Suffix Sum Array

Python
class Solution:
    def suffix_sum(self, nums: list[int]) -> list[int]:
        n = len(nums)
        suffix = [0] * n
        suffix[n - 1] = nums[n - 1]
        
        for i in range(n - 2, -1, -1):
            suffix[i] = nums[i] + suffix[i + 1]
        
        return suffix

Pattern 2: Right-side Processing

Python - Product of Array Except Self
class Solution:
    def product_except_self(self, nums: list[int]) -> list[int]:
        n = len(nums)
        result = [1] * n
        
        prefix = 1
        for i in range(n):
            result[i] = prefix
            prefix *= nums[i]
        
        suffix = 1
        for i in range(n - 1, -1, -1):
            result[i] *= suffix
            suffix *= nums[i]
        
        return result

Pattern 3: Minimum Path Sum from Right

Python
class Solution:
    def min_path_sum(self, grid: list[list[int]]) -> int:
        m, n = len(grid), len(grid[0])
        
        for j in range(n - 1, -1, -1):
            for i in range(m - 1, -1, -1):
                if i == m - 1 and j == n - 1:
                    continue
                if i == m - 1:
                    grid[i][j] += grid[i][j + 1]
                elif j == n - 1:
                    grid[i][j] += grid[i + 1][j]
                else:
                    grid[i][j] += min(grid[i + 1][j], grid[i][j + 1])
        
        return grid[0][0]

Pattern 4: Longest Happy Prefix (Suffix-based)

Python
class Solution:
    def longest_prefix(self, s: str) -> str:
        n = len(s)
        for length in range(n - 1, 0, -1):
            if s[:length] == s[n - length:]:
                return s[:length]
        return ""
Key Insight

Suffix sum is just prefix sum from the right side. Use it when you need to process from end to start or when queries ask about "from position i to end."

13. Sweep Line -- Event-Based Processing

Analogy: A security guard walking along a street, counting how many people are on the street at each moment. When someone enters, count goes up; when someone leaves, count goes down.

Convert 2D/interval problems into sorted 1D events. Sweep left to right, maintaining state.

Sweep Line Steps:
1. Create events: START (+1) and END (-1) for each interval
2. Sort events by position
3. Sweep and track current state (counter, max, etc.)

Sweep Line Patterns (LeetCode Style)

Pattern 1: Meeting Rooms II

Python
class Solution:
    def min_meeting_rooms(self, intervals: list[list[int]]) -> int:
        events = []
        for start, end in intervals:
            events.append((start, 1))
            events.append((end, -1))
        
        events.sort()
        
        max_rooms = 0
        current = 0
        for _, delta in events:
            current += delta
            max_rooms = max(max_rooms, current)
        
        return max_rooms

Pattern 2: My Calendar III (Maximum Overlap)

Python
import collections

class MyCalendarThree:
    def __init__(self):
        self.events = collections.defaultdict(int)

    def book(self, start: int, end: int) -> int:
        self.events[start] += 1
        self.events[end] -= 1
        
        active = 0
        max_overlap = 0
        for time in sorted(self.events):
            active += self.events[time]
            max_overlap = max(max_overlap, active)
        
        return max_overlap

Pattern 3: Rectangle Overlap (no sweep needed: two rectangles overlap exactly when their intervals overlap on both axes)

Python
class Solution:
    def is_rectangle_overlap(self, rec1: list[int], rec2: list[int]) -> bool:
        x1, y1, x2, y2 = rec1
        x3, y3, x4, y4 = rec2
        
        return not (x2 <= x3 or x4 <= x1 or y2 <= y3 or y4 <= y1)

Pattern 4: Car Pooling

Python
class Solution:
    def car_pooling(self, trips: list[list[int]], capacity: int) -> bool:
        events = []
        for num, start, end in trips:
            events.append((start, num))
            events.append((end, -num))
        
        events.sort()
        
        current = 0
        for _, delta in events:
            current += delta
            if current > capacity:
                return False
        
        return True
When to use Sweep Line:
- Counting overlapping intervals
- Meeting rooms problems
- Calendar booking problems
- Maximum concurrent events
- Any problem with intervals [start, end)

14. Memoization -- Remember Your Work

Analogy: Instead of recalculating your monthly budget every time someone asks, you write it on a sticky note and just read it next time.

Memoization = Recursion + Cache. Store results of expensive function calls and return cached result when the same inputs occur again.

Memoization Template:
def solve(inputs):
  if inputs in cache: return cache[inputs]
  result = expensive_computation(inputs)
  cache[inputs] = result
  return result
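In Python, the cache bookkeeping in the template can be handled by the standard library: functools.lru_cache (or functools.cache on 3.9+) memoizes a function on its hashable arguments automatically. A minimal sketch with Fibonacci numbers:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache; equivalent to functools.cache on 3.9+
def fib(n: int) -> int:
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(50)  # returns instantly; without the decorator this recursion is O(2^n)
```

One caveat: the decorated function's arguments must be hashable (ints, strings, tuples), so lists need converting to tuples first.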

Memoization Patterns (LeetCode Style)

Pattern 1: Climbing Stairs (1D Memo)

Python
class Solution:
    def climb_stairs(self, n: int) -> int:
        cache = {}
        def solve(i: int) -> int:
            if i in cache:
                return cache[i]
            if i <= 2:
                return i
            cache[i] = solve(i - 1) + solve(i - 2)
            return cache[i]
        return solve(n)

Pattern 2: House Robber (1D Memo)

Python
class Solution:
    def rob(self, nums: list[int]) -> int:
        cache = {}
        def solve(i: int) -> int:
            if i in cache:
                return cache[i]
            if i < 0:
                return 0
            result = max(solve(i - 1), solve(i - 2) + nums[i])
            cache[i] = result
            return result
        return solve(len(nums) - 1)

Pattern 3: Unique Paths (2D Memo)

Python
class Solution:
    def unique_paths(self, m: int, n: int) -> int:
        cache = {}
        def solve(i: int, j: int) -> int:
            if (i, j) in cache:
                return cache[(i, j)]
            if i >= m or j >= n:
                return 0
            if i == m - 1 and j == n - 1:
                return 1
            cache[(i, j)] = solve(i + 1, j) + solve(i, j + 1)
            return cache[(i, j)]
        return solve(0, 0)

Pattern 4: Unique Paths with Obstacles

Python
class Solution:
    def unique_paths_with_obstacles(self, grid: list[list[int]]) -> int:
        N = len(grid)
        M = len(grid[0])
        
        if grid[N - 1][M - 1] == 1:
            return 0
        
        cache = {}
        def solve(x: int, y: int) -> int:
            key = str(x) + "," + str(y)
            if key in cache:
                return cache[key]
            if x < 0 or x >= N or y < 0 or y >= M:
                return 0
            if grid[x][y] == 1:
                return 0
            if x == N - 1 and y == M - 1:
                return 1
            cache[key] = solve(x + 1, y) + solve(x, y + 1)
            return cache[key]
        
        return solve(0, 0)

Pattern 5: Coin Change (Memo)

Python
class Solution:
    def coin_change(self, coins: list[int], amount: int) -> int:
        cache = {}
        def solve(remaining: int) -> int:
            if remaining in cache:
                return cache[remaining]
            if remaining == 0:
                return 0
            if remaining < 0:
                return float('inf')
            
            min_coins = float('inf')
            for coin in coins:
                min_coins = min(min_coins, solve(remaining - coin) + 1)
            
            cache[remaining] = min_coins
            return min_coins
        
        result = solve(amount)
        return result if result != float('inf') else -1

Pattern 6: Longest Common Subsequence (String Memo)

Python
class Solution:
    def longest_common_subsequence(self, text1: str, text2: str) -> int:
        cache = {}
        def solve(i: int, j: int) -> int:
            if (i, j) in cache:
                return cache[(i, j)]
            if i == len(text1) or j == len(text2):
                return 0
            
            if text1[i] == text2[j]:
                cache[(i, j)] = solve(i + 1, j + 1) + 1
            else:
                cache[(i, j)] = max(solve(i + 1, j), solve(i, j + 1))
            return cache[(i, j)]
        
        return solve(0, 0)
Key Insight

Memoization turns O(2^n) exponential problems into O(n) or O(n*m) polynomial problems. The key is identifying what subproblem to cache. Ask: "What parameters uniquely define this subproblem?"

15. Tabulation -- Build Up Solutions

Analogy: Instead of starting from the top and working down (like memoization), you start from the bottom (base cases) and build up to the final answer iteratively.

Tabulation = Bottom-Up DP. No recursion. Just fill a table from base cases to final answer.

Tabulation Steps:
1. Identify the state (what does dp[i] represent?)
2. Define base cases (dp[0], dp[1], etc.)
3. Determine the order to fill (usually left to right)
4. Write the recurrence relation
5. Return dp[final_state]

Tabulation Patterns (LeetCode Style)

Pattern 1: Climbing Stairs (1D Tab)

Python
class Solution:
    def climb_stairs(self, n: int) -> int:
        if n <= 2:
            return n
        dp = [0] * (n + 1)
        dp[1] = 1
        dp[2] = 2
        for i in range(3, n + 1):
            dp[i] = dp[i - 1] + dp[i - 2]
        return dp[n]

Pattern 2: House Robber (1D Tab)

Python
class Solution:
    def rob(self, nums: list[int]) -> int:
        if not nums:
            return 0
        if len(nums) == 1:
            return nums[0]
        
        dp = [0] * len(nums)
        dp[0] = nums[0]
        dp[1] = max(nums[0], nums[1])
        
        for i in range(2, len(nums)):
            dp[i] = max(dp[i - 1], dp[i - 2] + nums[i])
        
        return dp[-1]

Pattern 3: Unique Paths (2D Tab)

Python
class Solution:
    def unique_paths(self, m: int, n: int) -> int:
        dp = [[1] * n for _ in range(m)]
        
        for i in range(1, m):
            for j in range(1, n):
                dp[i][j] = dp[i - 1][j] + dp[i][j - 1]
        
        return dp[m - 1][n - 1]

Pattern 4: Longest Common Subsequence (2D Tab)

Python
class Solution:
    def longest_common_subsequence(self, text1: str, text2: str) -> int:
        m, n = len(text1), len(text2)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if text1[i - 1] == text2[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        
        return dp[m][n]

Pattern 5: Coin Change (1D Tab)

Python
class Solution:
    def coin_change(self, coins: list[int], amount: int) -> int:
        dp = [float('inf')] * (amount + 1)
        dp[0] = 0
        
        for i in range(1, amount + 1):
            for coin in coins:
                if coin <= i:
                    dp[i] = min(dp[i], dp[i - coin] + 1)
        
        return dp[amount] if dp[amount] != float('inf') else -1

Pattern 6: 0/1 Knapsack (2D Tab)

Python
class Solution:
    def knapsack(self, weights: list[int], values: list[int], capacity: int) -> int:
        n = len(weights)
        dp = [[0] * (capacity + 1) for _ in range(n + 1)]
        
        for i in range(1, n + 1):
            for w in range(capacity + 1):
                if weights[i - 1] <= w:
                    dp[i][w] = max(
                        dp[i - 1][w],
                        dp[i - 1][w - weights[i - 1]] + values[i - 1]
                    )
                else:
                    dp[i][w] = dp[i - 1][w]
        
        return dp[n][capacity]

Pattern 7: Edit Distance (2D Tab)

Python
class Solution:
    def min_distance(self, word1: str, word2: str) -> int:
        m, n = len(word1), len(word2)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        
        for i in range(m + 1):
            dp[i][0] = i
        for j in range(n + 1):
            dp[0][j] = j
        
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if word1[i - 1] == word2[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1]
                else:
                    dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
        
        return dp[m][n]
Memoization vs Tabulation

Use memoization when you can't determine the order of computation easily. Use tabulation when you know the order and want to avoid recursion overhead. Both give the same time complexity, but tabulation often uses less memory (can drop old rows).
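"Dropping old rows" in practice: Climbing Stairs from Pattern 1 only ever reads dp[i-1] and dp[i-2], so the whole table collapses to two variables, cutting space from O(n) to O(1). A sketch:

```python
def climb_stairs(n: int) -> int:
    if n <= 2:
        return n
    prev2, prev1 = 1, 2  # dp[i-2] and dp[i-1]
    for _ in range(3, n + 1):
        prev2, prev1 = prev1, prev2 + prev1  # slide the two-value window forward
    return prev1
```

The same trick applies to 2D tables that only read the previous row, such as LCS or 0/1 Knapsack: keep two rows instead of the full grid.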

16. Python & Java DSA Quick Reference

Quick syntax reference for the most common DSA operations in both Python and Java.

1. Arrays

Python

# Create array
arr = [1, 2, 3, 4, 5]

# Access - O(1)
arr[0]  # first element
arr[-1]  # last element

# Length - O(1)
len(arr)

# Append - O(1) amortized
arr.append(6)

# Insert at index - O(n)
arr.insert(0, 0)  # insert 0 at beginning

# Remove at index - O(n)
arr.pop()         # remove last
arr.pop(0)      # remove first

# Slicing - O(k)
arr[1:4]     # elements at index 1,2,3
arr[:3]      # first 3 elements
arr[2:]      # from index 2 to end

# Sort - O(n log n)
arr.sort()
arr.sort(reverse=True)
sorted_arr = sorted(arr)

# Search - O(n) linear, O(log n) binary
5 in arr              # linear search
# Binary search requires a sorted array
import bisect
bisect.bisect_left(arr, 3)  # leftmost insertion index for 3

# 2D Array
matrix = [[1, 2], [3, 4]]
matrix[0][1]  # row 0, col 1 = 2

Java

// Create array
int[] arr = {1, 2, 3, 4, 5};
int[] arr2 = new int[10];

// Access - O(1)
arr[0];   // first element
arr[arr.length - 1];  // last element

// Length - O(1)
arr.length

// ArrayList - dynamic array
ArrayList<Integer> list = new ArrayList<>();
list.add(1);           // O(1) amortized
list.get(0);            // O(1)
list.remove(0);          // O(n)
list.size();             // O(1)

// Sort - O(n log n)
Arrays.sort(arr);
// Descending sort needs a boxed array (Integer[]) -- reverseOrder() doesn't work on int[]
Integer[] boxed = {3, 1, 2};
Arrays.sort(boxed, Collections.reverseOrder());  // {3, 2, 1}

// Binary search - O(log n), array must be sorted first
Arrays.binarySearch(arr, 3);

// 2D Array
int[][] matrix = {{1, 2}, {3, 4}};
matrix[0][1];  // row 0, col 1 = 2

2. Lists / ArrayLists

Python

# List is same as array above

# List comprehension
squares = [x*x for x in range(10)]
evens = [x for x in arr if x % 2 == 0]

# List operations
arr.extend([6, 7])  # add multiple
arr.remove(3)           # remove first occurrence
arr.index(5)           # find index
arr.count(5)           # count occurrences

Java

// ArrayList
ArrayList<Integer> list = new ArrayList<>();
list.add(1);
list.add(2);
list.add(0, 5);  // insert at index 0
list.get(0);
list.set(0, 10);  // update
list.remove(0);    // by index
list.remove(Integer.valueOf(5));  // by value
list.contains(5);
list.indexOf(5);
list.size();

// Iterate
for (int x : list) { ... }
for (int i = 0; i < list.size(); i++) { ... }

3. HashMap / Dict

Python

# Create
hashmap = {"a": 1, "b": 2}
from collections import defaultdict
dd = defaultdict(int)  # default value 0

# Access - O(1)
hashmap["a"]
hashmap.get("c", 0)  # default if not found

# Insert/Update - O(1)
hashmap["c"] = 3

# Delete - O(1)
del hashmap["a"]
hashmap.pop("b")

# Check existence
"a" in hashmap

# Iterate
for key in hashmap:
    print(key, hashmap[key])

for key, val in hashmap.items():
    print(key, val)

# Get all keys/values
hashmap.keys()
hashmap.values()
hashmap.items()

Java

// Create
HashMap<String, Integer> map = new HashMap<>();
map.put("a", 1);
map.put("b", 2);

// Access - O(1)
map.get("a");
map.getOrDefault("c", 0);

// Insert/Update - O(1)
map.put("c", 3);
map.putIfAbsent("d", 4);

// Delete - O(1)
map.remove("a");
map.remove("b", 2);  // only if key=value

// Check existence
map.containsKey("a");
map.containsValue(1);

// Iterate
for (String key : map.keySet()) { ... }
for (Integer val : map.values()) { ... }
for (Map.Entry<String, Integer> e : map.entrySet()) {
    e.getKey();
    e.getValue();
}

// Get all
map.keySet();
map.values();
map.entrySet();

4. HashSet

Python

# Create
s = {1, 2, 3}
s = set()  # empty set

# Add - O(1)
s.add(4)
s.update([5, 6])

# Remove - O(1)
s.remove(3)    # throws if not found
s.discard(3)  # no error if not found
s.pop()        # remove and return arbitrary

# Check
5 in s

# Set operations
s1 | s2         # union
s1 & s2         # intersection
s1 - s2         # difference
s1 ^ s2         # symmetric difference

Java

// Create
HashSet<Integer> set = new HashSet<>();
set.add(1);
set.add(2);
set.add(3);

// Add - O(1)
set.add(4);

// Remove - O(1)
set.remove(3);

// Check
set.contains(5);

// Set operations
HashSet<Integer> set1 = new HashSet<>(Arrays.asList(1,2,3));
HashSet<Integer> set2 = new HashSet<>(Arrays.asList(2,3,4));

set1.addAll(set2);           // union
set1.retainAll(set2);       // intersection
set1.removeAll(set2);       // difference

5. Stack & Queue

Python

# Stack (LIFO) - use list
stack = [1, 2, 3]
stack.append(4)    # push - O(1)
stack.pop()           # pop - O(1)
stack[-1]          # peek - O(1)

# Queue (FIFO) - use collections.deque
from collections import deque
queue = deque([1, 2])
queue.append(3)      # enqueue - O(1)
queue.popleft()       # dequeue - O(1)
queue[0]              # peek - O(1)

Java

// Stack (LIFO)
Stack<Integer> stack = new Stack<>();
stack.push(1);        // O(1)
stack.pop();          // O(1)
stack.peek();         // O(1)

// Queue (FIFO)
Queue<Integer> queue = new LinkedList<>();
queue.add(1);         // enqueue - O(1)
queue.remove();       // dequeue - O(1)
queue.peek();         // peek - O(1)

// Priority Queue (Heap)
PriorityQueue<Integer> minHeap = new PriorityQueue<>();
PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Collections.reverseOrder());
minHeap.add(5);
minHeap.poll();       // get min - O(log n)
minHeap.peek();

6. String Operations

Python

# Create
s = "hello"

# Access - O(1)
s[0]

# Length - O(1)
len(s)

# Find - O(n)
s.find("ll")     # returns index or -1
s.index("ll")    # throws if not found

# Slice - O(k)
s[1:4]

# Split/Join - O(n)
s.split(",")
",".join(["a", "b"])

# Replace - O(n)
s.replace("l", "x")

# Check
s.startswith("he")
s.endswith("lo")
s.isdigit()
s.isalpha()

# Convert
s.upper()
s.lower()
s.strip()

Java

// Create
String s = "hello";

// Access - O(1)
s.charAt(0);

// Length - O(1)
s.length();

// Find - O(n)
s.indexOf("ll");     // returns index or -1
s.lastIndexOf("l");

// Substring - O(k)
s.substring(1, 4);   // [1, 4)

// Split/Join
s.split(",");
String.join(",", Arrays.asList("a", "b"));

// Replace
s.replace("l", "x");
s.replaceAll("regex", "x");

// Check
s.startsWith("he");
s.endsWith("lo");
Character.isDigit(s.charAt(0));
Character.isLetter(s.charAt(0));

// Convert
s.toUpperCase();
s.toLowerCase();
s.trim();

// StringBuilder (mutable, efficient)
StringBuilder sb = new StringBuilder();
sb.append("hello");
sb.toString();

7. Loops & Iteration

Python

# For loop
for i in range(10):        # 0 to 9
for i in range(1, 10):      # 1 to 9
for i in range(0, 10, 2):  # 0,2,4,6,8

# For each
for x in arr:
    print(x)

# Enumerate (index + value)
for i, val in enumerate(arr):
    print(i, val)

# While
while condition:
    ...

# List comprehension
doubled = [x*2 for x in arr]

Java

// For loop
for (int i = 0; i < 10; i++) { ... }

// Enhanced for
for (int x : arr) { ... }

// While
while (condition) { ... }

// Stream (Java 8+)
Arrays.stream(arr)
    .filter(x -> x % 2 == 0)
    .map(x -> x * 2)
    .toArray();

8. Common Utilities

Python

import heapq
import collections

# Min Heap
heap = [3, 1, 4]
heapq.heapify(heap)      # O(n)
heapq.heappush(heap, 2)  # O(log n)
heapq.heappop(heap)      # O(log n)

# Counter (frequency)
counter = collections.Counter(arr)
counter.most_common(3)  # top 3

# Max/Min
max(arr)
min(arr)
max(words, key=len)  # longest item, e.g. in a list of strings

# Sum/Product
sum(arr)
import math
math.prod(arr)  # Python 3.8+

# Reversed
reversed(arr)
reversed(range(10))

# Infinite
float('inf')
-float('inf')

Java

import java.util.*;

// Min Heap
PriorityQueue<Integer> minHeap = new PriorityQueue<>();
// Max Heap
PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Collections.reverseOrder());

// Counter - use HashMap
HashMap<Integer, Integer> counter = new HashMap<>();
for (int x : arr) {
    counter.put(x, counter.getOrDefault(x, 0) + 1);
}

// Max/Min
Collections.max(list);
Collections.min(list);
Arrays.stream(arr).max().getAsInt();

// Sum
Arrays.stream(arr).sum();

// Reverse
Collections.reverse(list);
// For array: convert to list, reverse, convert back

// Infinity
Integer.MAX_VALUE;
Integer.MIN_VALUE;
Double.POSITIVE_INFINITY;
Double.NEGATIVE_INFINITY;
Quick Tip

For LeetCode: Use Python for faster coding, Java for more structure. Both are equally accepted in interviews. Pick one and stick with it!

17. How to Think About Problems

This is where most people get stuck -- not because they don't know data structures, but because they don't have a systematic approach to problem-solving.

  1. Read the problem carefully. Read it twice. Highlight constraints (input size, value ranges).
  2. Work through examples by hand. Use small inputs. Draw pictures. Trace through the logic on paper.
  3. Think about edge cases. Empty input? One element? All duplicates? Negative numbers?
  4. Start with brute force. What's the simplest, most obvious solution? Even if it's O(n³), write it down.
  5. Identify the bottleneck. What's making it slow? Is there a nested loop that could be replaced with a hash map?
  6. Ask yourself these questions:
    • Can I use a hash map to avoid repeated lookups?
    • Can I sort first to enable two pointers or binary search?
    • Is this a sliding window problem (subarray/substring)?
    • Can I use two pointers (sorted array, palindrome)?
    • Is this a tree/graph traversal (BFS/DFS)?
    • Does it have overlapping subproblems (DP)?
    • Can I use a stack (matching, nearest greater/smaller)?
  7. Code it up. Write clean code. Use meaningful variable names.
  8. Test with examples. Walk through your code with the given examples. Try edge cases.
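Step 5 in action with the classic Two Sum: the brute force compares every pair with a nested loop, O(n²). The bottleneck is the inner search, and a hash map turns that lookup into O(1), making the whole pass O(n). A sketch:

```python
def two_sum(nums: list[int], target: int) -> list[int]:
    seen = {}  # value -> index of elements already visited
    for i, x in enumerate(nums):
        if target - x in seen:           # O(1) lookup replaces the inner loop
            return [seen[target - x], i]
        seen[x] = i
    return []  # no pair found
```

This is the single most common "identify the bottleneck" transformation: trade O(n) extra memory for an O(n) speedup.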
Problem-Solving Decision Rules:

By input size n:
• n ≤ 10: O(n!) or O(2ⁿ) is fine -- brute force / backtracking
• n ≤ 100: O(n³) is fine
• n ≤ 1,000: O(n²) is fine
• n ≤ 100,000: O(n log n) required
• n ≤ 10,000,000: O(n) required
• n > 10,000,000: O(log n) or O(1) required

Rule of thumb: ~10⁸ simple operations per second in competitive programming.
The Pattern Recognition Shortcut

After solving 100+ problems, you'll start recognizing patterns instantly. "Oh, this is a sliding window problem." "This needs a monotonic stack." That recognition comes from practice, not theory. It's why grinding LeetCode works -- you're training pattern recognition.

18. Tips to Become a World-Class Competitive Programmer

19. Learning Roadmap

Follow this order. Each topic builds on the previous one.

1. Arrays & Strings -- Foundation of everything
2. Hash Maps & Sets -- O(1) lookups change everything
3. Two Pointers & Sliding Window -- Key array/string patterns
4. Stacks & Queues -- LIFO and FIFO patterns
5. Linked Lists -- Pointer manipulation
6. Binary Search -- The power of halving
7. Sorting Algorithms -- Merge sort, quick sort, and why
8. Recursion & Backtracking -- Thinking recursively
9. Trees & BST -- Hierarchical data, DFS/BFS on trees
10. Heaps & Priority Queues -- Top-K problems, scheduling
11. Graphs -- BFS, DFS, shortest paths
12. Dynamic Programming -- The boss level: master this and you're elite
13. Advanced: Tries, Union-Find, Segment Trees -- Competition-level data structures
14. Grind the PythonWithSean 650

20. Practice Quiz

Q1: What is the time complexity of accessing an element by index in an array?

Answer: O(1). Arrays use direct address calculation: address = start + index × size. This takes constant time regardless of array size.

Q2: You have 1,000,000 sorted numbers and want to find a specific one. What's the best approach?

Answer: binary search. On sorted data it takes O(log n) -- for 1,000,000 items, that's only about 20 steps. Linear search would take up to 1,000,000 steps.

Q3: What is O(n²) in plain English?

Answer: "for each of n elements, do n operations." O(n²) typically means a nested loop: n × n = n². For n = 10,000, that's 100,000,000 operations.

Q4: Which data structure gives O(1) average lookup by key?

Answer: a hash map. It uses a hash function to convert the key directly into an array index, giving O(1) average time for insert, lookup, and delete.

Q5: What is log₂(1,000,000) approximately?

Answer: about 20. 2²⁰ = 1,048,576 ≈ 1,000,000, so binary search through 1 million items needs only about 20 comparisons.

Q6: A stack follows which principle?

Answer: LIFO (last in, first out). Like a stack of plates, the last item placed on top is the first one removed. Push adds to the top, pop removes from the top.

Q7: Why is inserting at the beginning of an array O(n)?

Answer: because every element must shift right. To insert at position 0, every element at index 0, 1, 2, ..., n-1 must move to index 1, 2, 3, ..., n. That's n shift operations.

Q8: Your code has a loop inside a loop, both going from 0 to n. What's the time complexity?

Answer: O(n²). The outer loop runs n times; for each iteration, the inner loop also runs n times. Total: n × n = n².

Q9: Which traversal explores a graph level by level?

Answer: BFS (breadth-first search). It uses a queue to explore all neighbors at the current depth before moving to the next level, like ripples spreading in water.

Q10: n = 100,000. Which of these is too slow?

Answer: O(n²). At n = 100,000, O(n²) = 10 billion operations. At ~10⁸ operations per second, that's about 100 seconds -- way over typical time limits (1-2 seconds). You need at least O(n log n) for this input size.