The KEY to solving LeetCode. Learn these patterns and you can solve hundreds of problems.
LeetCode is not about memorizing 500 solutions. If you memorize solutions, you will blank the moment a problem changes even one detail. The real skill is recognizing patterns and knowing which technique applies to which type of problem.
Read a problem, identify which pattern it belongs to, then apply that pattern's template and adapt it. That is the entire strategy. This page teaches you the most important patterns with full solutions so you can practice exactly that.
Use two pointers to scan a data structure (usually an array) from different positions. This often reduces an O(n^2) brute force to O(n).
Two main types:
- Converging pointers: start from both ends and move toward each other (sorted-array pair sums, palindrome checks, containers).
- Same-direction pointers: a slow pointer and a fast pointer both move forward, with slow marking where to write (in-place removal, partitioning).
Two Sum II (Sorted Array) -- Given a sorted array and a target, find two numbers that add up to the target.
Python
def two_sum(numbers, target):
    left, right = 0, len(numbers) - 1
    while left < right:
        curr_sum = numbers[left] + numbers[right]
        if curr_sum == target:
            return [left + 1, right + 1]  # 1-indexed
        elif curr_sum < target:
            left += 1  # need bigger sum
        else:
            right -= 1  # need smaller sum
JavaScript
function twoSum(numbers, target) {
    let left = 0, right = numbers.length - 1;
    while (left < right) {
        const sum = numbers[left] + numbers[right];
        if (sum === target) return [left + 1, right + 1];
        else if (sum < target) left++;
        else right--;
    }
}
Because the array is sorted, if the sum is too small we move the left pointer right to increase it; if the sum is too large we move the right pointer left to decrease it. We never miss a valid pair because we only eliminate values that cannot be part of the answer.
Container With Most Water -- Given height array, find two lines that form the container holding the most water.
Python
def max_area(height):
    left, right = 0, len(height) - 1
    best = 0
    while left < right:
        width = right - left
        h = min(height[left], height[right])
        best = max(best, width * h)
        # Move the shorter side -- it's the bottleneck
        if height[left] < height[right]:
            left += 1
        else:
            right -= 1
    return best
JavaScript
function maxArea(height) {
    let left = 0, right = height.length - 1, best = 0;
    while (left < right) {
        const w = right - left;
        const h = Math.min(height[left], height[right]);
        best = Math.max(best, w * h);
        if (height[left] < height[right]) left++;
        else right--;
    }
    return best;
}
3Sum -- Find all unique triplets that sum to zero. Sort the array, fix one number, then use converging two pointers on the remainder. Skip duplicates to avoid repeats. Time O(n^2), Space O(1) extra.
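Here is one way that plan could look in Python (a sketch; the function name and duplicate-skipping details are one reasonable arrangement, not the only one):

```python
def three_sum(nums):
    nums.sort()
    result = []
    for i in range(len(nums) - 2):
        if i > 0 and nums[i] == nums[i - 1]:
            continue  # skip duplicate anchors
        left, right = i + 1, len(nums) - 1
        while left < right:
            s = nums[i] + nums[left] + nums[right]
            if s < 0:
                left += 1
            elif s > 0:
                right -= 1
            else:
                result.append([nums[i], nums[left], nums[right]])
                left += 1
                right -= 1
                while left < right and nums[left] == nums[left - 1]:
                    left += 1  # skip duplicate second values
    return result
```

The outer loop costs O(n) and each two-pointer scan costs O(n), giving the O(n^2) total.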
Remove Duplicates from Sorted Array -- Keep a slow pointer at the position to write, fast pointer scans ahead. When fast finds a new value, write it at slow and advance slow.
Move Zeroes -- Same idea: slow pointer tracks the next position for a nonzero value. Fast pointer scans the array. Swap nonzero values to the front.
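As a concrete same-direction example, Move Zeroes might be sketched like this in Python (the in-place swap variant; `move_zeroes` is our name for it):

```python
def move_zeroes(nums):
    slow = 0  # next position for a nonzero value
    for fast in range(len(nums)):
        if nums[fast] != 0:
            # swap the nonzero value to the front, leaving zeroes behind
            nums[slow], nums[fast] = nums[fast], nums[slow]
            slow += 1
```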
Python
# Template: Converging Two Pointers
def converging(arr, target):
    left, right = 0, len(arr) - 1
    while left < right:
        # compute current value from arr[left], arr[right]
        if condition_met:
            return result
        elif need_larger:
            left += 1
        else:
            right -= 1

# Template: Same Direction (partition / remove)
def same_direction(arr):
    slow = 0
    for fast in range(len(arr)):
        if meets_condition(arr[fast]):
            arr[slow] = arr[fast]
            slow += 1
    return slow  # new length
JavaScript
// Template: Converging Two Pointers
function converging(arr, target) {
    let left = 0, right = arr.length - 1;
    while (left < right) {
        // compute current value from arr[left], arr[right]
        if (conditionMet) return result;
        else if (needLarger) left++;
        else right--;
    }
}

// Template: Same Direction (partition / remove)
function sameDirection(arr) {
    let slow = 0;
    for (let fast = 0; fast < arr.length; fast++) {
        if (meetsCondition(arr[fast])) {
            arr[slow] = arr[fast];
            slow++;
        }
    }
    return slow;
}
Maintain a "window" over a contiguous subarray or substring and slide it across the input. This avoids recomputing from scratch each time -- you add the new element and remove the old one.
Two types:
- Fixed-size window: the window length k is given; slide it one step at a time.
- Variable-size window: grow the right edge and shrink the left edge to keep the window valid.
Maximum Sum Subarray of Size K
Python
def max_sum_subarray(arr, k):
    window_sum = sum(arr[:k])
    best = window_sum
    for i in range(k, len(arr)):
        window_sum += arr[i] - arr[i - k]  # slide: add right, remove left
        best = max(best, window_sum)
    return best
JavaScript
function maxSumSubarray(arr, k) {
    let windowSum = 0;
    for (let i = 0; i < k; i++) windowSum += arr[i];
    let best = windowSum;
    for (let i = k; i < arr.length; i++) {
        windowSum += arr[i] - arr[i - k];
        best = Math.max(best, windowSum);
    }
    return best;
}
Minimum Size Subarray Sum -- Find the smallest subarray whose sum is at least the target. Expand right to grow the window, shrink left when the sum is big enough.
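A sketch of that expand/shrink loop in Python (`min_subarray_len` and its argument order are ours; assumes positive numbers, as in the standard problem):

```python
def min_subarray_len(target, nums):
    left = 0
    window_sum = 0
    best = float('inf')
    for right in range(len(nums)):
        window_sum += nums[right]            # expand right
        while window_sum >= target:          # shrink left while still valid
            best = min(best, right - left + 1)
            window_sum -= nums[left]
            left += 1
    return 0 if best == float('inf') else best
```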
Longest Substring Without Repeating Characters -- Classic variable window problem. Expand right, if duplicate found, shrink left until valid again.
Python
def length_of_longest_substring(s):
    seen = {}  # char -> most recent index
    left = 0
    best = 0
    for right in range(len(s)):
        if s[right] in seen and seen[s[right]] >= left:
            left = seen[s[right]] + 1  # shrink past the duplicate
        seen[s[right]] = right
        best = max(best, right - left + 1)
    return best
JavaScript
function lengthOfLongestSubstring(s) {
    const seen = new Map();
    let left = 0, best = 0;
    for (let right = 0; right < s.length; right++) {
        if (seen.has(s[right]) && seen.get(s[right]) >= left) {
            left = seen.get(s[right]) + 1;
        }
        seen.set(s[right], right);
        best = Math.max(best, right - left + 1);
    }
    return best;
}
s = "abcabcbb" -- The longest substring without repeating characters is "abc", length 3. As we scan, when we hit the second 'a', we jump left past the first 'a'.
Python
# Template: Fixed Size Window
def fixed_window(arr, k):
    # Initialize window with first k elements
    window = compute(arr[0:k])
    best = window
    for i in range(k, len(arr)):
        window = window + arr[i] - arr[i - k]  # add new, remove old
        best = update(best, window)
    return best

# Template: Variable Size Window
def variable_window(arr):
    left = 0
    best = 0
    state = {}  # track window contents
    for right in range(len(arr)):
        # Expand: add arr[right] to state
        while window_is_invalid(state):
            # Shrink: remove arr[left] from state
            left += 1
        best = max(best, right - left + 1)
    return best
JavaScript
// Template: Fixed Size Window
function fixedWindow(arr, k) {
    let window = 0;
    for (let i = 0; i < k; i++) window += arr[i];
    let best = window;
    for (let i = k; i < arr.length; i++) {
        window += arr[i] - arr[i - k];
        best = Math.max(best, window);
    }
    return best;
}

// Template: Variable Size Window
function variableWindow(arr) {
    let left = 0, best = 0;
    const state = new Map();
    for (let right = 0; right < arr.length; right++) {
        // Expand: add arr[right] to state
        while (windowIsInvalid(state)) {
            // Shrink: remove arr[left] from state
            left++;
        }
        best = Math.max(best, right - left + 1);
    }
    return best;
}
Binary search is not just for finding an element in a sorted array. The deeper insight is: any time you can halve a search space, you can binary search. This includes searching on the answer itself.
Python
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = left + (right - left) // 2  # avoids overflow in fixed-width languages (Python ints can't overflow, but the habit transfers)
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1  # not found
JavaScript
function binarySearch(arr, target) {
    let left = 0, right = arr.length - 1;
    while (left <= right) {
        const mid = left + Math.floor((right - left) / 2);
        if (arr[mid] === target) return mid;
        else if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return -1;
}
Sometimes you are not searching an array -- you are searching a range of possible answers. If you can write a function canAchieve(x) that returns true/false, and the answers are monotonic (once true, always true), you can binary search on the answer.
Koko Eating Bananas -- Koko has piles of bananas and h hours. Find the minimum eating speed k so she finishes all piles in h hours.
Python
import math

def min_eating_speed(piles, h):
    def can_finish(speed):
        hours = sum(math.ceil(p / speed) for p in piles)
        return hours <= h
    left, right = 1, max(piles)
    while left < right:
        mid = (left + right) // 2
        if can_finish(mid):
            right = mid  # fast enough -- try slower
        else:
            left = mid + 1  # too slow, go faster
    return left
JavaScript
function minEatingSpeed(piles, h) {
    function canFinish(speed) {
        let hours = 0;
        for (const p of piles) hours += Math.ceil(p / speed);
        return hours <= h;
    }
    let left = 1, right = Math.max(...piles);
    while (left < right) {
        const mid = Math.floor((left + right) / 2);
        if (canFinish(mid)) right = mid;
        else left = mid + 1;
    }
    return left;
}
piles = [3, 6, 7, 11], h = 8 -- Answer is 4. At speed 4: ceil(3/4)+ceil(6/4)+ceil(7/4)+ceil(11/4) = 1+2+2+3 = 8 hours. Speed 3 would take 10 hours (too slow).
Capacity to Ship Packages Within D Days -- Same idea. Binary search on the ship capacity. The canShip(capacity) function simulates loading packages greedily and checks if it fits in D days.
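A sketch under the usual setup (a weights array and a days limit; `ship_within_days` and `can_ship` are our names):

```python
def ship_within_days(weights, days):
    def can_ship(capacity):
        needed, load = 1, 0
        for w in weights:
            if load + w > capacity:
                needed += 1  # this package starts a new day
                load = 0
            load += w
        return needed <= days

    # The answer lies between the heaviest package and the total weight
    left, right = max(weights), sum(weights)
    while left < right:
        mid = (left + right) // 2
        if can_ship(mid):
            right = mid      # feasible -- try a smaller capacity
        else:
            left = mid + 1   # infeasible -- need more capacity
    return left
```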
Python
# Template: Binary Search on the Answer
def search_answer(data):
    def is_feasible(x):
        # return True if x is a valid answer
        pass
    left, right = min_possible, max_possible
    while left < right:
        mid = (left + right) // 2
        if is_feasible(mid):
            right = mid  # try smaller (for minimum)
        else:
            left = mid + 1
    return left
JavaScript
// Template: Binary Search on the Answer
function searchAnswer(data) {
    function isFeasible(x) {
        // return true if x is a valid answer
    }
    let left = minPossible, right = maxPossible;
    while (left < right) {
        const mid = Math.floor((left + right) / 2);
        if (isFeasible(mid)) right = mid;
        else left = mid + 1;
    }
    return left;
}
Breadth-First Search processes nodes level by level using a queue. It is the go-to for shortest path in unweighted graphs/grids and level-order traversal in trees.
Python
from collections import deque

def bfs(graph, start):
    queue = deque([start])
    visited = {start}
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
JavaScript
function bfs(graph, start) {
    const queue = [start];
    const visited = new Set([start]);
    while (queue.length > 0) {
        const node = queue.shift();
        for (const neighbor of graph[node]) {
            if (!visited.has(neighbor)) {
                visited.add(neighbor);
                queue.push(neighbor);
            }
        }
    }
}
Start BFS from multiple sources at once by putting them all in the queue initially. This is key for problems like "distance from nearest X" or "simultaneous spread."
Rotting Oranges -- All rotten oranges spread simultaneously. Find the time for all fresh oranges to rot (or -1 if impossible).
Python
from collections import deque

def oranges_rotting(grid):
    rows, cols = len(grid), len(grid[0])
    queue = deque()
    fresh = 0
    # Collect all rotten oranges as starting points
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 2:
                queue.append((r, c))
            elif grid[r][c] == 1:
                fresh += 1
    minutes = 0
    directions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    while queue and fresh > 0:
        minutes += 1
        for _ in range(len(queue)):  # process current level
            r, c = queue.popleft()
            for dr, dc in directions:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                    grid[nr][nc] = 2
                    fresh -= 1
                    queue.append((nr, nc))
    return minutes if fresh == 0 else -1
JavaScript
function orangesRotting(grid) {
    const rows = grid.length, cols = grid[0].length;
    const queue = [];
    let fresh = 0;
    for (let r = 0; r < rows; r++) {
        for (let c = 0; c < cols; c++) {
            if (grid[r][c] === 2) queue.push([r, c]);
            else if (grid[r][c] === 1) fresh++;
        }
    }
    const dirs = [[0, 1], [0, -1], [1, 0], [-1, 0]];
    let minutes = 0;
    while (queue.length > 0 && fresh > 0) {
        minutes++;
        const size = queue.length;
        for (let i = 0; i < size; i++) {
            const [r, c] = queue.shift();
            for (const [dr, dc] of dirs) {
                const nr = r + dr, nc = c + dc;
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && grid[nr][nc] === 1) {
                    grid[nr][nc] = 2;
                    fresh--;
                    queue.push([nr, nc]);
                }
            }
        }
    }
    return fresh === 0 ? minutes : -1;
}
Number of Islands -- For each unvisited land cell, run BFS to mark the entire island. Count how many times you start a new BFS.
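A sketch, assuming the grid cells are the strings '1' (land) and '0' (water) as in the LeetCode version, and marking visited cells in place:

```python
from collections import deque

def num_islands(grid):
    rows, cols = len(grid), len(grid[0])
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1':
                count += 1               # found a new island
                grid[r][c] = '0'         # mark visited in place
                queue = deque([(r, c)])
                while queue:             # BFS floods the whole island
                    cr, cc = queue.popleft()
                    for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                        nr, nc = cr + dr, cc + dc
                        if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == '1':
                            grid[nr][nc] = '0'
                            queue.append((nr, nc))
    return count
```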
Shortest Path in Grid -- BFS naturally finds the shortest path because it explores all cells at distance d before distance d+1. Track distance by processing one level at a time.
Python
from collections import deque

def grid_bfs(grid, start_r, start_c):
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start_r, start_c, 0)])  # row, col, distance
    visited = {(start_r, start_c)}
    directions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    while queue:
        r, c, dist = queue.popleft()
        for dr, dc in directions:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in visited
                    and grid[nr][nc] != obstacle):  # obstacle: whatever marks a blocked cell
                visited.add((nr, nc))
                queue.append((nr, nc, dist + 1))
JavaScript
function gridBFS(grid, startR, startC) {
    const rows = grid.length, cols = grid[0].length;
    const queue = [[startR, startC, 0]];
    const visited = new Set([`${startR},${startC}`]);
    const dirs = [[0, 1], [0, -1], [1, 0], [-1, 0]];
    while (queue.length > 0) {
        const [r, c, dist] = queue.shift();
        for (const [dr, dc] of dirs) {
            const nr = r + dr, nc = c + dc;
            const key = `${nr},${nc}`;
            if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                && !visited.has(key) && grid[nr][nc] !== obstacle) { // obstacle: whatever marks a blocked cell
                visited.add(key);
                queue.push([nr, nc, dist + 1]);
            }
        }
    }
}
Depth-First Search explores as deep as possible before backtracking. It is the natural choice for tree problems, exploring all paths, and generating combinations/permutations.
Python
# Recursive DFS Template for Binary Trees
def dfs(node):
    if not node:
        return base_case
    left = dfs(node.left)
    right = dfs(node.right)
    return combine(node.val, left, right)

# Maximum Depth of Binary Tree
def max_depth(root):
    if not root:
        return 0
    return 1 + max(max_depth(root.left), max_depth(root.right))
JavaScript
// Recursive DFS Template for Binary Trees
function dfs(node) {
    if (!node) return baseCase;
    const left = dfs(node.left);
    const right = dfs(node.right);
    return combine(node.val, left, right);
}

// Maximum Depth of Binary Tree
function maxDepth(root) {
    if (!root) return 0;
    return 1 + Math.max(maxDepth(root.left), maxDepth(root.right));
}
Connected Components -- Run DFS from each unvisited node. Each DFS call discovers one connected component.
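For example, counting components in an undirected graph given as an adjacency list (the dict-of-lists input format is our assumption):

```python
def count_components(graph):
    visited = set()

    def dfs(node):
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                dfs(neighbor)

    count = 0
    for node in graph:
        if node not in visited:
            visited.add(node)
            dfs(node)        # one DFS discovers one whole component
            count += 1
    return count
```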
Cycle Detection -- In directed graphs, track nodes in the current DFS path. If you visit a node already on the path, there is a cycle. Use three states: unvisited, in-progress, done.
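A sketch of that three-state check for a directed graph given as an adjacency list (input format is our assumption; the states are encoded 0/1/2):

```python
def has_cycle(graph):
    # States: 0 = unvisited, 1 = in current DFS path, 2 = done
    state = {node: 0 for node in graph}

    def dfs(node):
        if state[node] == 1:
            return True          # back edge to the current path: cycle
        if state[node] == 2:
            return False         # already fully explored, no cycle through here
        state[node] = 1          # mark in-progress
        for neighbor in graph[node]:
            if dfs(neighbor):
                return True
        state[node] = 2          # mark done
        return False

    return any(dfs(node) for node in graph)
```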
Backtracking is DFS with a twist: you make a choice, recurse, then undo the choice to try the next option. It generates all valid combinations, permutations, or subsets.
Subsets -- Generate all subsets of a set. At each index, choose to include or exclude the element.
Python
def subsets(nums):
    result = []
    def backtrack(start, current):
        result.append(current[:])      # snapshot current state
        for i in range(start, len(nums)):
            current.append(nums[i])    # choose
            backtrack(i + 1, current)  # explore
            current.pop()              # un-choose
    backtrack(0, [])
    return result
JavaScript
function subsets(nums) {
    const result = [];
    function backtrack(start, current) {
        result.push([...current]);
        for (let i = start; i < nums.length; i++) {
            current.push(nums[i]);     // choose
            backtrack(i + 1, current); // explore
            current.pop();             // un-choose
        }
    }
    backtrack(0, []);
    return result;
}
Permutations -- Generate all orderings of the input.
Python
def permute(nums):
    result = []
    def backtrack(current, remaining):
        if not remaining:
            result.append(current[:])
            return
        for i in range(len(remaining)):
            current.append(remaining[i])
            backtrack(current, remaining[:i] + remaining[i + 1:])
            current.pop()
    backtrack([], nums)
    return result
JavaScript
function permute(nums) {
    const result = [];
    function backtrack(current, remaining) {
        if (remaining.length === 0) {
            result.push([...current]);
            return;
        }
        for (let i = 0; i < remaining.length; i++) {
            current.push(remaining[i]);
            backtrack(current, [...remaining.slice(0, i), ...remaining.slice(i + 1)]);
            current.pop();
        }
    }
    backtrack([], nums);
    return result;
}
Combination Sum -- Find all combinations that sum to a target, where each number can be used unlimited times. Same backtracking structure, but pass the same start index (not i+1) to allow reuse, and prune when the running sum exceeds the target.
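A sketch of that variation in Python (note the recursive call passes i, not i + 1; `combination_sum` is the usual name):

```python
def combination_sum(candidates, target):
    result = []
    def backtrack(start, current, remaining):
        if remaining == 0:
            result.append(current[:])
            return
        for i in range(start, len(candidates)):
            if candidates[i] > remaining:
                continue  # prune: this number would overshoot the target
            current.append(candidates[i])
            backtrack(i, current, remaining - candidates[i])  # i, not i + 1: reuse allowed
            current.pop()
    backtrack(0, [], target)
    return result
```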
Python
def backtrack_template(candidates):
    result = []
    def backtrack(start, current, remaining_state):
        if is_valid_solution(current):
            result.append(current[:])
            return
        for i in range(start, len(candidates)):
            if should_prune(candidates[i]):
                continue
            current.append(candidates[i])         # choose
            backtrack(i + 1, current, new_state)  # explore
            current.pop()                         # un-choose
    backtrack(0, [], initial_state)
    return result
JavaScript
function backtrackTemplate(candidates) {
    const result = [];
    function backtrack(start, current, remainingState) {
        if (isValidSolution(current)) {
            result.push([...current]);
            return;
        }
        for (let i = start; i < candidates.length; i++) {
            if (shouldPrune(candidates[i])) continue;
            current.push(candidates[i]);
            backtrack(i + 1, current, newState);
            current.pop();
        }
    }
    backtrack(0, [], initialState);
    return result;
}
Precompute a running sum array so you can answer any "sum of subarray from i to j" query in O(1). The prefix sum at index i is the sum of all elements from index 0 through i.
Count the number of subarrays that sum to k. The brute force is O(n^2). The trick: use a hash map to store how many times each prefix sum has occurred. If current_prefix - k has been seen before, those are valid subarrays.
Python
def subarray_sum(nums, k):
    count = 0
    prefix = 0
    prefix_counts = {0: 1}  # empty prefix sum = 0 seen once
    for num in nums:
        prefix += num
        # If (prefix - k) was seen before, those are valid subarrays
        count += prefix_counts.get(prefix - k, 0)
        prefix_counts[prefix] = prefix_counts.get(prefix, 0) + 1
    return count
JavaScript
function subarraySum(nums, k) {
    let count = 0, prefix = 0;
    const prefixCounts = new Map([[0, 1]]);
    for (const num of nums) {
        prefix += num;
        count += (prefixCounts.get(prefix - k) || 0);
        prefixCounts.set(prefix, (prefixCounts.get(prefix) || 0) + 1);
    }
    return count;
}
nums = [1, 2, 3], k = 3 -- Prefix sums: [1, 3, 6]. At prefix=3, we check 3-3=0 which exists (count 1). At prefix=6, we check 6-3=3 which exists (count 1). Result: 2 subarrays ([1,2] and [3]).
Python
# Build prefix array
prefix = [0] * (len(arr) + 1)
for i in range(len(arr)):
    prefix[i + 1] = prefix[i] + arr[i]

# Range sum query: sum from index l to r (inclusive)
range_sum = prefix[r + 1] - prefix[l]

# Prefix sum + hash map (for subarray sum = k)
prefix = 0
seen = {0: 1}
for num in arr:
    prefix += num
    if prefix - k in seen:
        # found valid subarrays
        pass
    seen[prefix] = seen.get(prefix, 0) + 1
JavaScript
// Build prefix array
const prefix = new Array(arr.length + 1).fill(0);
for (let i = 0; i < arr.length; i++) {
    prefix[i + 1] = prefix[i] + arr[i];
}

// Range sum query: sum from index l to r (inclusive)
const rangeSum = prefix[r + 1] - prefix[l];

// Prefix sum + hash map (for subarray sum = k)
let pre = 0;
const seen = new Map([[0, 1]]);
for (const num of arr) {
    pre += num;
    if (seen.has(pre - k)) {
        // found valid subarrays
    }
    seen.set(pre, (seen.get(pre) || 0) + 1);
}
A monotonic stack maintains elements in strictly increasing or decreasing order. When a new element violates the order, you pop from the stack -- and the popped elements have found their answer (the new element is their "next greater" or "next smaller").
Given daily temperatures, for each day find how many days until a warmer temperature (or 0 if none).
Python
def daily_temperatures(temperatures):
    n = len(temperatures)
    result = [0] * n
    stack = []  # stores indices, monotonically decreasing temps
    for i in range(n):
        while stack and temperatures[i] > temperatures[stack[-1]]:
            prev = stack.pop()
            result[prev] = i - prev
        stack.append(i)
    return result
JavaScript
function dailyTemperatures(temperatures) {
    const n = temperatures.length;
    const result = new Array(n).fill(0);
    const stack = [];
    for (let i = 0; i < n; i++) {
        while (stack.length > 0 && temperatures[i] > temperatures[stack[stack.length - 1]]) {
            const prev = stack.pop();
            result[prev] = i - prev;
        }
        stack.push(i);
    }
    return result;
}
temps = [73, 74, 75, 71, 69, 72, 76, 73] -- Output: [1, 1, 4, 2, 1, 1, 0, 0]. After day 0 (73), the next warmer day is day 1 (74), so answer is 1. After day 2 (75), the next warmer day is day 6 (76), so answer is 4.
Given an array, find the next greater element for each position. Same approach as Daily Temperatures -- maintain a decreasing stack. When you encounter a larger element, pop all smaller elements from the stack: the larger element is their "next greater."
Python
# Template: Next Greater Element (decreasing stack)
def next_greater(arr):
    n = len(arr)
    result = [-1] * n
    stack = []  # indices of elements waiting for their answer
    for i in range(n):
        while stack and arr[i] > arr[stack[-1]]:
            idx = stack.pop()
            result[idx] = arr[i]  # or (i - idx) for distance
        stack.append(i)
    return result
JavaScript
// Template: Next Greater Element (decreasing stack)
function nextGreater(arr) {
    const n = arr.length;
    const result = new Array(n).fill(-1);
    const stack = [];
    for (let i = 0; i < n; i++) {
        while (stack.length > 0 && arr[i] > arr[stack[stack.length - 1]]) {
            const idx = stack.pop();
            result[idx] = arr[i];
        }
        stack.push(i);
    }
    return result;
}
A greedy algorithm makes the locally optimal choice at each step, hoping it leads to a globally optimal solution. The key question is always: can I prove that the greedy choice is safe?
Given an array where each element is the max jump length from that position, determine if you can reach the last index.
Python
def can_jump(nums):
    farthest = 0
    for i in range(len(nums)):
        if i > farthest:
            return False  # can't reach this index
        farthest = max(farthest, i + nums[i])
    return True
JavaScript
function canJump(nums) {
    let farthest = 0;
    for (let i = 0; i < nums.length; i++) {
        if (i > farthest) return false;
        farthest = Math.max(farthest, i + nums[i]);
    }
    return true;
}
At each step we track the farthest position reachable. If at any point our current position exceeds farthest, we are stuck. Otherwise, we always have at least one path forward. No need to track which path -- just whether any path exists.
Sort intervals by start time. Scan through them: if the current interval overlaps with the previous one (current.start <= prev.end), merge them by extending the end. Otherwise, start a new group. Greedy because sorting + merging left-to-right never misses an overlap.
Interval problems follow a consistent pattern: sort by start time, then process intervals left to right checking for overlaps.
Python
def merge(intervals):
    intervals.sort(key=lambda x: x[0])
    merged = [intervals[0]]
    for start, end in intervals[1:]:
        if start <= merged[-1][1]:  # overlaps with previous
            merged[-1][1] = max(merged[-1][1], end)  # extend
        else:
            merged.append([start, end])  # no overlap, new interval
    return merged
JavaScript
function merge(intervals) {
    intervals.sort((a, b) => a[0] - b[0]);
    const merged = [intervals[0]];
    for (let i = 1; i < intervals.length; i++) {
        const [start, end] = intervals[i];
        const prev = merged[merged.length - 1];
        if (start <= prev[1]) {
            prev[1] = Math.max(prev[1], end);
        } else {
            merged.push([start, end]);
        }
    }
    return merged;
}
intervals = [[1,3],[2,6],[8,10],[15,18]] -- After sorting (already sorted): [1,3] and [2,6] overlap (2 <= 3), merge to [1,6]. [8,10] and [15,18] do not overlap with anything. Result: [[1,6],[8,10],[15,18]].
Insert Interval: Find the position and merge. Non-overlapping Intervals: Count removals (greedy by end time). Meeting Rooms: Sort by start, check any overlap. Meeting Rooms II: Min-heap for end times, count peak overlaps.
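As one concrete example, Non-overlapping Intervals can be sketched like this (sorting by end time; `erase_overlap_intervals` is our name for it):

```python
def erase_overlap_intervals(intervals):
    intervals.sort(key=lambda x: x[1])  # greedy: earliest end time first
    removals = 0
    prev_end = float('-inf')
    for start, end in intervals:
        if start >= prev_end:
            prev_end = end    # no overlap: keep this interval
        else:
            removals += 1     # overlaps the kept one: remove it
    return removals
```

Sorting by end time is the safe greedy choice here: keeping the interval that ends earliest leaves the most room for the rest.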
The sweep line (also called line sweep or event-based processing) technique converts a 2D or interval-based problem into a series of events sorted along one axis, then processes them in order. Instead of comparing every pair of objects (O(n^2)), you "sweep" through events left to right and maintain a data structure that tracks the current state.
The pattern has three steps:
1. Convert each interval or shape into events (for example, +1 at a start and -1 at an end).
2. Sort the events along one axis.
3. Sweep through the events in order, updating a running state (a counter, heap, or multiset) and recording answers.
Given intervals: [1,4], [2,6], [5,8], [7,9]
Events (sorted):
  pos=1 START   pos=2 START   pos=4 END   pos=5 START
  pos=6 END     pos=7 START   pos=8 END   pos=9 END

Sweep left to right, tracking the active count:
  Active: 0 -> 1 -> 2 -> 1 -> 2 -> 1 -> 2 -> 1 -> 0

Timeline (positions 1 through 9):
  1──4
   2───6
      5──8
        7─9

Maximum overlap: 2 (reached on [2,4], [5,6], and [7,8])
Given a list of meeting time intervals, find the minimum number of conference rooms required. This is the classic sweep line problem.
Python
def min_meeting_rooms(intervals):
    events = []
    for start, end in intervals:
        events.append((start, 1))   # +1 at start (meeting begins)
        events.append((end, -1))    # -1 at end (meeting ends)
    events.sort()  # sort by time; ties: ends (-1) before starts (+1)
    max_rooms = 0
    current_rooms = 0
    for time, delta in events:
        current_rooms += delta
        max_rooms = max(max_rooms, current_rooms)
    return max_rooms

# Example: [[0,30],[5,10],[15,20]]
# Events: (0,+1), (5,+1), (10,-1), (15,+1), (20,-1), (30,-1)
# Sweep:     1       2        1       2        1        0
# Answer: 2 rooms needed
JavaScript
function minMeetingRooms(intervals) {
    const events = [];
    for (const [start, end] of intervals) {
        events.push([start, 1]);   // meeting begins
        events.push([end, -1]);    // meeting ends
    }
    events.sort((a, b) => a[0] - b[0] || a[1] - b[1]); // sort by time, ends before starts
    let maxRooms = 0, currentRooms = 0;
    for (const [time, delta] of events) {
        currentRooms += delta;
        maxRooms = Math.max(maxRooms, currentRooms);
    }
    return maxRooms;
}
By converting intervals into +1 (start) and -1 (end) events, we turn a 2D overlap problem into a 1D running sum. The maximum value of the running sum is the peak number of overlapping intervals -- exactly the number of rooms (or resources) needed. Time complexity: O(n log n) for sorting, O(n) for the sweep. Space: O(n).
Given buildings represented as [left, right, height], compute the skyline outline. This is a harder sweep line problem that uses a max-heap to track the tallest active building.
Python
import heapq

def get_skyline(buildings):
    # Create events: (x, neg_height, end_x)
    # Start events carry a negative height (max-heap via min-heap trick);
    # end events carry height 0 and only trigger heap cleanup
    events = []
    for left, right, height in buildings:
        events.append((left, -height, right))  # start event
        events.append((right, 0, 0))           # end event
    events.sort()  # sort by x; at same x, taller buildings first (negative height)
    # Max-heap: (-height, end_x). Start with ground level.
    result = []
    heap = [(0, float('inf'))]  # (neg_height, end_x) -- ground never ends
    for x, neg_h, end in events:
        if neg_h != 0:  # start event
            heapq.heappush(heap, (neg_h, end))
        # Remove buildings that have ended
        while heap[0][1] <= x:
            heapq.heappop(heap)
        max_height = -heap[0][0]
        if not result or result[-1][1] != max_height:
            result.append([x, max_height])
    return result

# Example: buildings = [[2,9,10],[3,7,15],[5,12,12],[15,20,10],[19,24,8]]
# Output: [[2,10],[3,15],[7,12],[12,0],[15,10],[20,8],[24,0]]
JavaScript
// Event-based approach: starts carry negative height, ends carry positive height
function getSkyline(buildings) {
    const events = [];
    for (const [left, right, height] of buildings) {
        events.push([left, -height]);  // start event
        events.push([right, height]);  // end event
    }
    // Sort by x; at the same x this puts starts before ends,
    // taller starts first, and shorter ends first
    events.sort((a, b) => a[0] - b[0] || a[1] - b[1]);
    // Track active heights as a multiset: height -> count
    const active = new Map([[0, 1]]); // ground level always active
    const result = [];
    let prevMax = 0;
    for (const [x, h] of events) {
        if (h < 0) {
            // Start event: add the height
            active.set(-h, (active.get(-h) || 0) + 1);
        } else if (active.get(h) === 1) {
            // End event: remove the height
            active.delete(h);
        } else {
            active.set(h, active.get(h) - 1);
        }
        const curMax = Math.max(...active.keys());
        if (curMax !== prevMax) {
            result.push([x, curMax]);
            prevMax = curMax;
        }
    }
    return result;
}
Implement a calendar that rejects overlapping bookings. Sweep line with sorted events makes checking conflicts efficient.
Python
class MyCalendar:
    def __init__(self):
        self.bookings = []

    def book(self, start, end):
        for s, e in self.bookings:
            if start < e and s < end:  # overlap check
                return False
        self.bookings.append((start, end))
        return True

# Optimized with sweep line for MyCalendar III (count max overlaps):
import collections

class MyCalendarThree:
    def __init__(self):
        self.timeline = collections.defaultdict(int)

    def book(self, start, end):
        self.timeline[start] += 1  # event begins
        self.timeline[end] -= 1    # event ends
        # Sweep through timeline
        active = 0
        max_k = 0
        for time in sorted(self.timeline):
            active += self.timeline[time]
            max_k = max(max_k, active)
        return max_k
JavaScript
class MyCalendarThree {
    constructor() {
        this.timeline = new Map();
    }

    book(start, end) {
        this.timeline.set(start, (this.timeline.get(start) || 0) + 1);
        this.timeline.set(end, (this.timeline.get(end) || 0) - 1);
        let active = 0, maxK = 0;
        const keys = [...this.timeline.keys()].sort((a, b) => a - b);
        for (const time of keys) {
            active += this.timeline.get(time);
            maxK = Math.max(maxK, active);
        }
        return maxK;
    }
}
Regular Interval Problems (sort by start, check neighbors):
- Merge Intervals → sort + linear scan
- Insert Interval → find position + merge
- Non-Overlapping Count → sort by end, greedy
Sweep Line Problems (convert to events, sweep with counter/heap):
- Meeting Rooms II → count max concurrent (events + counter)
- The Skyline Problem → track tallest active (events + max-heap)
- Rectangle Area Union → horizontal sweep + vertical sweep
- My Calendar III → count k-booking (events + counter)
- Employee Free Time → merge all intervals via sweep
Rule of thumb:
If you need to count/track CONCURRENT or OVERLAPPING state → sweep line
If you need to MERGE or CHECK neighbors → sort + scan
HashMaps give you O(1) average-time lookups, making them the go-to for frequency counting, finding complements, and grouping. Many problems that look like O(n^2) brute force become O(n) with a HashMap.
The most famous LeetCode problem. For each number, check if its complement (target - num) is already in the map.
Python
def two_sum(nums, target):
    seen = {}  # value -> index
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
JavaScript
function twoSum(nums, target) {
    const seen = new Map();
    for (let i = 0; i < nums.length; i++) {
        const complement = target - nums[i];
        if (seen.has(complement)) return [seen.get(complement), i];
        seen.set(nums[i], i);
    }
}
Sort each word to create a canonical key. Group words with the same key.
Python
def group_anagrams(strs):
    groups = {}
    for s in strs:
        key = tuple(sorted(s))
        groups.setdefault(key, []).append(s)
    return list(groups.values())
JavaScript
function groupAnagrams(strs) {
    const groups = new Map();
    for (const s of strs) {
        const key = [...s].sort().join('');
        if (!groups.has(key)) groups.set(key, []);
        groups.get(key).push(s);
    }
    return [...groups.values()];
}
Use prefix sum + hashmap. If prefix[j] - prefix[i] = k, then subarray (i,j] sums to k. Store prefix sum frequencies in a map.
Python
def subarray_sum(nums, k):
    count = 0
    prefix = 0
    seen = {0: 1}  # prefix_sum -> frequency
    for num in nums:
        prefix += num
        count += seen.get(prefix - k, 0)
        seen[prefix] = seen.get(prefix, 0) + 1
    return count
JavaScript
function subarraySum(nums, k) {
    let count = 0, prefix = 0;
    const seen = new Map([[0, 1]]);
    for (const num of nums) {
        prefix += num;
        count += (seen.get(prefix - k) || 0);
        seen.set(prefix, (seen.get(prefix) || 0) + 1);
    }
    return count;
}
If two prefix sums differ by k, the subarray between them sums to k. The hashmap counts how many previous prefix sums equal (current prefix - k), giving us the count of valid subarrays ending here.
Find the contiguous subarray with the maximum sum in O(n). The key insight: at each position, either extend the previous subarray or start a new one.
Python
def max_subarray(nums):
    max_ending_here = nums[0]
    max_so_far = nums[0]
    for num in nums[1:]:
        max_ending_here = max(num, max_ending_here + num)
        max_so_far = max(max_so_far, max_ending_here)
    return max_so_far
JavaScript
function maxSubArray(nums) {
let maxHere = nums[0], maxSoFar = nums[0];
for (let i = 1; i < nums.length; i++) {
maxHere = Math.max(nums[i], maxHere + nums[i]);
maxSoFar = Math.max(maxSoFar, maxHere);
}
return maxSoFar;
}
The max circular subarray is either: (1) a normal max subarray, or (2) total_sum - min_subarray (the "wrap-around" case). Take the max of both.
Python
def max_subarray_circular(nums):
max_here = min_here = 0
max_sum = float('-inf')
min_sum = float('inf')
total = 0
for num in nums:
max_here = max(num, max_here + num)
max_sum = max(max_sum, max_here)
min_here = min(num, min_here + num)
min_sum = min(min_sum, min_here)
total += num
# If all negative, min_sum == total, return max_sum (non-circular)
return max(max_sum, total - min_sum) if total != min_sum else max_sum
JavaScript
function maxSubarraySumCircular(nums) {
let maxHere = 0, minHere = 0;
let maxSum = -Infinity, minSum = Infinity, total = 0;
for (const num of nums) {
maxHere = Math.max(num, maxHere + num);
maxSum = Math.max(maxSum, maxHere);
minHere = Math.min(num, minHere + num);
minSum = Math.min(minSum, minHere);
total += num;
}
return total === minSum ? maxSum : Math.max(maxSum, total - minSum);
}
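A concrete wrap-around case (the Python function above, restated so the snippet runs standalone): in [5, -3, 5] the best circular subarray is [5, 5] wrapping around the end, with sum 10, while the best non-circular sum is only 7.

```python
def max_subarray_circular(nums):
    max_here = min_here = 0
    max_sum, min_sum, total = float('-inf'), float('inf'), 0
    for num in nums:
        max_here = max(num, max_here + num)
        max_sum = max(max_sum, max_here)
        min_here = min(num, min_here + num)
        min_sum = min(min_sum, min_here)
        total += num
    # If all elements are negative, min_sum == total: fall back to max_sum
    return max(max_sum, total - min_sum) if total != min_sum else max_sum

print(max_subarray_circular([5, -3, 5]))    # 10 (wrap-around: 5 + 5)
print(max_subarray_circular([-3, -2, -1]))  # -1 (all negative: best single element)
```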
Find the K largest, smallest, or most frequent elements. The heap approach gives O(n log k) which is better than sorting O(n log n) when k is much smaller than n.
Python
import heapq
def find_kth_largest(nums, k):
# Min-heap of size k -- the root is the k-th largest
heap = nums[:k]
heapq.heapify(heap)
for num in nums[k:]:
if num > heap[0]:
heapq.heapreplace(heap, num)
return heap[0]
JavaScript
// Using quickselect (average O(n), worst O(n^2))
function findKthLargest(nums, k) {
const target = nums.length - k;
function quickselect(lo, hi) {
let pivot = nums[hi], j = lo;
for (let i = lo; i < hi; i++) {
if (nums[i] <= pivot) {
[nums[i], nums[j]] = [nums[j], nums[i]];
j++;
}
}
[nums[j], nums[hi]] = [nums[hi], nums[j]];
if (j === target) return nums[j];
return j < target ? quickselect(j + 1, hi) : quickselect(lo, j - 1);
}
return quickselect(0, nums.length - 1);
}
Python
from collections import Counter
import heapq
def top_k_frequent(nums, k):
freq = Counter(nums)
return heapq.nlargest(k, freq.keys(), key=freq.get)
JavaScript
function topKFrequent(nums, k) {
const freq = new Map();
for (const n of nums) freq.set(n, (freq.get(n) || 0) + 1);
// Bucket sort: index = frequency
const buckets = Array.from({length: nums.length + 1}, () => []);
for (const [num, count] of freq) buckets[count].push(num);
const result = [];
for (let i = buckets.length - 1; i >= 0 && result.length < k; i--) {
result.push(...buckets[i]);
}
return result.slice(0, k);
}
Order vertices of a DAG (Directed Acyclic Graph) so that for every edge u → v, u comes before v. Two approaches: Kahn's (BFS with indegree tracking) and DFS post-order.
Python
from collections import deque
def topo_sort(num_nodes, edges):
adj = [[] for _ in range(num_nodes)]
indegree = [0] * num_nodes
for u, v in edges:
adj[u].append(v)
indegree[v] += 1
queue = deque(i for i in range(num_nodes) if indegree[i] == 0)
order = []
while queue:
node = queue.popleft()
order.append(node)
for nei in adj[node]:
indegree[nei] -= 1
if indegree[nei] == 0:
queue.append(nei)
return order if len(order) == num_nodes else [] # empty = cycle
JavaScript
function topoSort(numNodes, edges) {
const adj = Array.from({length: numNodes}, () => []);
const indegree = new Array(numNodes).fill(0);
for (const [u, v] of edges) { adj[u].push(v); indegree[v]++; }
const queue = [];
for (let i = 0; i < numNodes; i++) if (indegree[i] === 0) queue.push(i);
const order = [];
while (queue.length) {
const node = queue.shift();
order.push(node);
for (const nei of adj[node]) {
if (--indegree[nei] === 0) queue.push(nei);
}
}
return order.length === numNodes ? order : [];
}
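A small usage sketch (the Python version above, restated so the snippet runs standalone), using a hypothetical course-prerequisite graph: course 1 depends on 0, and courses 2 and 3 both depend on 1.

```python
from collections import deque

def topo_sort(num_nodes, edges):
    adj = [[] for _ in range(num_nodes)]
    indegree = [0] * num_nodes
    for u, v in edges:
        adj[u].append(v)
        indegree[v] += 1
    queue = deque(i for i in range(num_nodes) if indegree[i] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nei in adj[node]:
            indegree[nei] -= 1
            if indegree[nei] == 0:
                queue.append(nei)
    return order if len(order) == num_nodes else []  # empty = cycle

print(topo_sort(4, [(0, 1), (1, 2), (1, 3)]))  # [0, 1, 2, 3]
print(topo_sort(2, [(0, 1), (1, 0)]))          # [] -- cycle detected
```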
Shortest path from a single source in a weighted graph with non-negative edges. Uses a min-heap to always process the closest unvisited node. Time: O((V + E) log V).
Python
import heapq
def dijkstra(graph, start):
# graph: adjacency list, graph[u] = [(v, weight), ...]
dist = {start: 0}
heap = [(0, start)]
while heap:
d, u = heapq.heappop(heap)
if d > dist.get(u, float('inf')):
continue # stale entry
for v, w in graph[u]:
nd = d + w
if nd < dist.get(v, float('inf')):
dist[v] = nd
heapq.heappush(heap, (nd, v))
return dist
JavaScript
function dijkstra(graph, start) {
// Simple min-heap via sorted insert (use a proper heap for production)
const dist = new Map([[start, 0]]);
const heap = [[0, start]]; // [distance, node]
while (heap.length) {
heap.sort((a, b) => a[0] - b[0]);
const [d, u] = heap.shift();
if (d > (dist.get(u) ?? Infinity)) continue;
for (const [v, w] of (graph[u] || [])) {
const nd = d + w;
if (nd < (dist.get(v) ?? Infinity)) {
dist.set(v, nd);
heap.push([nd, v]);
}
}
}
return dist;
}
The "stale entry" check (if d > dist[u]: continue) is crucial. Since we can't decrease-key in a standard heap, we just push duplicates and skip them when they pop. This keeps the algorithm correct and efficient.
A graph is bipartite if you can 2-color it: assign every node to group A or group B such that no edge connects two nodes in the same group. Check this with BFS or DFS.
Python
from collections import deque
def is_bipartite(graph):
# graph: adjacency list, graph[i] = [neighbors]
n = len(graph)
color = [-1] * n
for start in range(n):
if color[start] != -1:
continue
queue = deque([start])
color[start] = 0
while queue:
node = queue.popleft()
for nei in graph[node]:
if color[nei] == -1:
color[nei] = 1 - color[node]
queue.append(nei)
elif color[nei] == color[node]:
return False
return True
JavaScript
function isBipartite(graph) {
const n = graph.length;
const color = new Array(n).fill(-1);
for (let start = 0; start < n; start++) {
if (color[start] !== -1) continue;
const queue = [start];
color[start] = 0;
while (queue.length) {
const node = queue.shift();
for (const nei of graph[node]) {
if (color[nei] === -1) {
color[nei] = 1 - color[node];
queue.push(nei);
} else if (color[nei] === color[node]) return false;
}
}
}
return true;
}
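The even/odd cycle distinction in action (the Python function above, restated so the snippet runs standalone): a 4-cycle is 2-colorable, a triangle is not.

```python
from collections import deque

def is_bipartite(graph):
    color = [-1] * len(graph)
    for start in range(len(graph)):
        if color[start] != -1:
            continue
        queue = deque([start])
        color[start] = 0
        while queue:
            node = queue.popleft()
            for nei in graph[node]:
                if color[nei] == -1:
                    color[nei] = 1 - color[node]
                    queue.append(nei)
                elif color[nei] == color[node]:
                    return False
    return True

# 4-cycle 0-1-2-3-0: even cycle, 2-colorable.
print(is_bipartite([[1, 3], [0, 2], [1, 3], [0, 2]]))  # True
# Triangle 0-1-2: odd cycle, not 2-colorable.
print(is_bipartite([[1, 2], [0, 2], [0, 1]]))          # False
```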
Find all prime numbers up to N in O(N log log N). Mark multiples of each prime as composite starting from p^2.
Python
def sieve(n):
is_prime = [True] * (n + 1)
is_prime[0] = is_prime[1] = False
for p in range(2, int(n**0.5) + 1):
if is_prime[p]:
for multiple in range(p * p, n + 1, p):
is_prime[multiple] = False
return [i for i in range(2, n + 1) if is_prime[i]]
JavaScript
function sieve(n) {
const isPrime = new Array(n + 1).fill(true);
isPrime[0] = isPrime[1] = false;
for (let p = 2; p * p <= n; p++) {
if (isPrime[p]) {
for (let m = p * p; m <= n; m += p) isPrime[m] = false;
}
}
const primes = [];
for (let i = 2; i <= n; i++) if (isPrime[i]) primes.push(i);
return primes;
}
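A quick check of the sieve (the Python version above, restated so the snippet runs standalone):

```python
def sieve(n):
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Start at p*p: smaller multiples were marked by smaller primes
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i in range(2, n + 1) if is_prime[i]]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```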
DP is the most important topic in competitive programming. Here are the three foundational DP frameworks. For full coverage see the DP page.
dp[i] depends on a fixed number of previous states (dp[i-1], dp[i-2], etc.). Often optimizable to O(1) space.
Python
# Template: Fibonacci-type DP
def fib_dp(n):
if n <= 1: return n
prev2, prev1 = 0, 1
for _ in range(2, n + 1):
prev2, prev1 = prev1, prev2 + prev1
return prev1
For each item, decide: include it or skip it. dp[i][w] = best value using first i items with capacity w.
Python
def knapsack(weights, values, capacity):
n = len(weights)
dp = [[0] * (capacity + 1) for _ in range(n + 1)]
for i in range(1, n + 1):
for w in range(capacity + 1):
dp[i][w] = dp[i-1][w] # skip item i
if weights[i-1] <= w:
dp[i][w] = max(dp[i][w], dp[i-1][w - weights[i-1]] + values[i-1])
return dp[n][capacity]
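A small include-or-skip example (the function above, restated so the snippet runs standalone): with capacity 4, taking the weight-1 and weight-3 items (values 15 + 20) beats the single weight-4 item (value 30).

```python
def knapsack(weights, values, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]  # skip item i
            if weights[i - 1] <= w:  # or include it, if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

print(knapsack([1, 3, 4], [15, 20, 30], 4))  # 35
```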
dp[i] = length of LIS ending at index i. For each j < i where arr[j] < arr[i], dp[i] = max(dp[i], dp[j] + 1).
Python
def lis(nums):
n = len(nums)
dp = [1] * n
for i in range(1, n):
for j in range(i):
if nums[j] < nums[i]:
dp[i] = max(dp[i], dp[j] + 1)
return max(dp)
O(V^3). Three nested loops: for each intermediate vertex k, check if going through k gives a shorter path from i to j.
Python
def floyd_warshall(n, edges):
INF = float('inf')
dist = [[INF] * n for _ in range(n)]
for i in range(n):
dist[i][i] = 0
for u, v, w in edges:
dist[u][v] = min(dist[u][v], w)  # keep the lighter edge if parallel edges exist
for k in range(n): # intermediate
for i in range(n): # source
for j in range(n): # destination
if dist[i][k] + dist[k][j] < dist[i][j]:
dist[i][j] = dist[i][k] + dist[k][j]
return dist
O(V * E). Relax all edges V-1 times. If any edge can still be relaxed after V-1 iterations, there's a negative cycle.
Python
def bellman_ford(n, edges, src):
dist = [float('inf')] * n
dist[src] = 0
for _ in range(n - 1):
for u, v, w in edges:
if dist[u] + w < dist[v]:
dist[v] = dist[u] + w
# Check for negative cycles
for u, v, w in edges:
if dist[u] + w < dist[v]:
return None # negative cycle exists
return dist
JavaScript
function bellmanFord(n, edges, src) {
const dist = new Array(n).fill(Infinity);
dist[src] = 0;
for (let i = 0; i < n - 1; i++) {
for (const [u, v, w] of edges) {
if (dist[u] + w < dist[v]) dist[v] = dist[u] + w;
}
}
for (const [u, v, w] of edges) {
if (dist[u] + w < dist[v]) return null; // negative cycle
}
return dist;
}
Dijkstra: Single source, non-negative weights, O((V+E) log V). Bellman-Ford: Single source, negative weights OK, O(V*E). Floyd-Warshall: All pairs, O(V^3), simple to implement.
Represent subsets as bitmasks (integers). Bit i is 1 if element i is in the subset. This lets you enumerate all 2^n subsets and use them as DP states.
dp[mask][i] = minimum cost to visit all cities in mask, ending at city i.
Python
def tsp(dist):
n = len(dist)
INF = float('inf')
dp = [[INF] * n for _ in range(1 << n)]
dp[1][0] = 0 # start at city 0
for mask in range(1 << n):
for u in range(n):
if dp[mask][u] == INF: continue
if not (mask & (1 << u)): continue
for v in range(n):
if mask & (1 << v): continue # already visited
new_mask = mask | (1 << v)
dp[new_mask][v] = min(dp[new_mask][v], dp[mask][u] + dist[u][v])
full = (1 << n) - 1
return min(dp[full][i] + dist[i][0] for i in range(n))
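A 3-city example (the function above, restated so the snippet runs standalone): with symmetric distances, both tours 0→1→2→0 and 0→2→1→0 cost 1 + 2 + 4 = 7.

```python
def tsp(dist):
    n = len(dist)
    INF = float('inf')
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0  # start at city 0
    for mask in range(1 << n):
        for u in range(n):
            if dp[mask][u] == INF or not (mask & (1 << u)):
                continue
            for v in range(n):
                if mask & (1 << v):
                    continue  # already visited
                new_mask = mask | (1 << v)
                dp[new_mask][v] = min(dp[new_mask][v], dp[mask][u] + dist[u][v])
    full = (1 << n) - 1
    return min(dp[full][i] + dist[i][0] for i in range(n))

print(tsp([[0, 1, 4], [1, 0, 2], [4, 2, 0]]))  # 7
```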
Python
# Check if bit i is set
(mask >> i) & 1
# Set bit i
mask | (1 << i)
# Clear bit i
mask & ~(1 << i)
# Iterate over all subsets of mask
sub = mask
while sub > 0:
# process subset 'sub'
sub = (sub - 1) & mask
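The subset-iteration trick in action: for mask = 0b101 (elements 0 and 2), the loop visits subsets in decreasing numeric order and skips the empty set.

```python
# Enumerate all non-empty subsets of mask = 0b101.
mask = 0b101
subsets = []
sub = mask
while sub > 0:
    subsets.append(sub)
    sub = (sub - 1) & mask  # next smaller subset of mask

print([bin(s) for s in subsets])  # ['0b101', '0b100', '0b1']
```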
Count numbers in a range [0, N] that satisfy some digit-based constraint. Build the number digit by digit, tracking whether we're still "tight" (bounded by N's digits) and any constraint state.
Python
from functools import lru_cache
def count_up_to(N, max_digit_sum):
digits = [int(d) for d in str(N)]
@lru_cache(maxsize=None)
def dp(pos, digit_sum, tight):
if digit_sum > max_digit_sum:
return 0
if pos == len(digits):
return 1
limit = digits[pos] if tight else 9
result = 0
for d in range(0, limit + 1):
result += dp(pos + 1, digit_sum + d, tight and d == limit)
return result
return dp(0, 0, True)
# Count in a range [L, R]: count_up_to(R, max_digit_sum) - count_up_to(L - 1, max_digit_sum)
tight: Are we still bounded by N? If all digits so far match N's digits, the next digit can be at most N's next digit. Once we place a smaller digit, we're "free" (tight = False) and can use 0-9 for all remaining positions.
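A worked run (the function above, restated so the snippet runs standalone): counting numbers in [0, 25] whose digit sum is at most 5 gives 0-5 (six numbers), 10-14 (five), and 20-23 (four), so 15 in total.

```python
from functools import lru_cache

def count_up_to(N, max_digit_sum):
    digits = [int(d) for d in str(N)]
    @lru_cache(maxsize=None)
    def dp(pos, digit_sum, tight):
        if digit_sum > max_digit_sum:
            return 0
        if pos == len(digits):
            return 1
        limit = digits[pos] if tight else 9  # tight: bounded by N's digit
        return sum(dp(pos + 1, digit_sum + d, tight and d == limit)
                   for d in range(limit + 1))
    return dp(0, 0, True)

print(count_up_to(25, 5))    # 15
print(count_up_to(9, 100))   # 10 -- no constraint bites, counts 0..9
```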
A data structure that supports point updates and prefix sum queries in O(log n). Much simpler to implement than a segment tree. Uses the lowest set bit trick to navigate.
Python
class FenwickTree:
def __init__(self, n):
self.n = n
self.tree = [0] * (n + 1)
def update(self, i, delta):
# Add delta to index i (1-indexed)
while i <= self.n:
self.tree[i] += delta
i += i & (-i) # add lowest set bit
def query(self, i):
# Prefix sum [1..i]
s = 0
while i > 0:
s += self.tree[i]
i -= i & (-i) # remove lowest set bit
return s
def range_query(self, l, r):
return self.query(r) - self.query(l - 1)
JavaScript
class FenwickTree {
constructor(n) {
this.n = n;
this.tree = new Array(n + 1).fill(0);
}
update(i, delta) {
for (; i <= this.n; i += i & (-i)) this.tree[i] += delta;
}
query(i) {
let s = 0;
for (; i > 0; i -= i & (-i)) s += this.tree[i];
return s;
}
rangeQuery(l, r) { return this.query(r) - this.query(l - 1); }
}
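Typical usage (a condensed copy of the Python class above, so the snippet runs standalone), building the tree over [3, 2, -1, 6, 5]:

```python
class FenwickTree:
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)
    def update(self, i, delta):  # 1-indexed point update
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)        # climb by adding the lowest set bit
    def query(self, i):          # prefix sum [1..i]
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)        # descend by removing the lowest set bit
        return s

ft = FenwickTree(5)
for i, val in enumerate([3, 2, -1, 6, 5], start=1):
    ft.update(i, val)
print(ft.query(3))                # 4  (3 + 2 - 1)
print(ft.query(5) - ft.query(2))  # 10 (range sum of [-1, 6, 5])
```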
This extracts the lowest set bit. For index 6 (binary 110), 6 & (-6) = 2 (binary 010). Update goes UP by adding the lowest set bit. Query goes DOWN by subtracting it. This is what makes the tree work in O(log n).
Range queries (sum, min, max, GCD) and point/range updates in O(log n). More flexible than Fenwick tree. Lazy propagation enables range updates.
Python
class SegTree:
def __init__(self, arr):
self.n = len(arr)
self.tree = [0] * (4 * self.n)
self._build(arr, 1, 0, self.n - 1)
def _build(self, arr, node, lo, hi):
if lo == hi:
self.tree[node] = arr[lo]
return
mid = (lo + hi) // 2
self._build(arr, 2 * node, lo, mid)
self._build(arr, 2 * node + 1, mid + 1, hi)
self.tree[node] = self.tree[2 * node] + self.tree[2 * node + 1]
def update(self, idx, val, node=1, lo=0, hi=None):
if hi is None: hi = self.n - 1
if lo == hi:
self.tree[node] = val
return
mid = (lo + hi) // 2
if idx <= mid:
self.update(idx, val, 2 * node, lo, mid)
else:
self.update(idx, val, 2 * node + 1, mid + 1, hi)
self.tree[node] = self.tree[2 * node] + self.tree[2 * node + 1]
def query(self, ql, qr, node=1, lo=0, hi=None):
if hi is None: hi = self.n - 1
if qr < lo or hi < ql: return 0
if ql <= lo and hi <= qr: return self.tree[node]
mid = (lo + hi) // 2
return self.query(ql, qr, 2 * node, lo, mid) + \
self.query(ql, qr, 2 * node + 1, mid + 1, hi)
JavaScript
class SegTree {
constructor(arr) {
this.n = arr.length;
this.tree = new Array(4 * this.n).fill(0);
this._build(arr, 1, 0, this.n - 1);
}
_build(arr, node, lo, hi) {
if (lo === hi) { this.tree[node] = arr[lo]; return; }
const mid = (lo + hi) >> 1;
this._build(arr, 2 * node, lo, mid);
this._build(arr, 2 * node + 1, mid + 1, hi);
this.tree[node] = this.tree[2 * node] + this.tree[2 * node + 1];
}
update(idx, val, node = 1, lo = 0, hi = this.n - 1) {
if (lo === hi) { this.tree[node] = val; return; }
const mid = (lo + hi) >> 1;
if (idx <= mid) this.update(idx, val, 2 * node, lo, mid);
else this.update(idx, val, 2 * node + 1, mid + 1, hi);
this.tree[node] = this.tree[2 * node] + this.tree[2 * node + 1];
}
query(ql, qr, node = 1, lo = 0, hi = this.n - 1) {
if (qr < lo || hi < ql) return 0;
if (ql <= lo && hi <= qr) return this.tree[node];
const mid = (lo + hi) >> 1;
return this.query(ql, qr, 2 * node, lo, mid)
+ this.query(ql, qr, 2 * node + 1, mid + 1, hi);
}
}
Find all occurrences of pattern in text in O(n + m). Build a prefix function (failure function) that tells you how far to "fall back" on a mismatch instead of restarting.
Python
def kmp_search(text, pattern):
# Build prefix function
m = len(pattern)
lps = [0] * m
length = 0
i = 1
while i < m:
if pattern[i] == pattern[length]:
length += 1
lps[i] = length
i += 1
elif length:
length = lps[length - 1]
else:
lps[i] = 0
i += 1
# Search
results = []
j = 0 # index in pattern
for i in range(len(text)):
while j > 0 and text[i] != pattern[j]:
j = lps[j - 1]
if text[i] == pattern[j]:
j += 1
if j == m:
results.append(i - m + 1)
j = lps[j - 1]
return results
JavaScript
function kmpSearch(text, pattern) {
const m = pattern.length;
const lps = new Array(m).fill(0);
let len = 0, i = 1;
while (i < m) {
if (pattern[i] === pattern[len]) { lps[i++] = ++len; }
else if (len) { len = lps[len - 1]; }
else { lps[i++] = 0; }
}
const results = [];
let j = 0;
for (i = 0; i < text.length; i++) {
while (j > 0 && text[i] !== pattern[j]) j = lps[j - 1];
if (text[i] === pattern[j]) j++;
if (j === m) { results.push(i - m + 1); j = lps[j - 1]; }
}
return results;
}
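To build intuition for the failure function on its own, here is the prefix-function half of the algorithm isolated (same logic as the Python code above): lps[i] is the length of the longest proper prefix of pattern[:i+1] that is also its suffix.

```python
def prefix_function(pattern):
    lps = [0] * len(pattern)
    length = 0
    i = 1
    while i < len(pattern):
        if pattern[i] == pattern[length]:
            length += 1
            lps[i] = length
            i += 1
        elif length:
            length = lps[length - 1]  # fall back instead of restarting
        else:
            lps[i] = 0
            i += 1
    return lps

print(prefix_function("ababcab"))  # [0, 0, 1, 2, 0, 1, 2]
print(prefix_function("aaaa"))     # [0, 1, 2, 3]
```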
Hash substrings in O(1) with a polynomial rolling hash and compare hashes instead of characters. On a hash match, either verify with a direct string comparison (as the code below does) or use double hashing to make collisions negligible.
Python
def rabin_karp(text, pattern):
n, m = len(text), len(pattern)
if m > n: return []
BASE, MOD = 131, 10**9 + 7
# Compute pattern hash and first window hash
p_hash = 0
t_hash = 0
power = 1
for i in range(m):
p_hash = (p_hash * BASE + ord(pattern[i])) % MOD
t_hash = (t_hash * BASE + ord(text[i])) % MOD
if i < m - 1:
power = power * BASE % MOD
results = []
for i in range(n - m + 1):
if t_hash == p_hash and text[i:i+m] == pattern:
results.append(i)
if i < n - m:
t_hash = (t_hash - ord(text[i]) * power) * BASE + ord(text[i + m])
t_hash %= MOD
return results
Optimize LIS from O(n^2) to O(n log n). Maintain a "tails" array where tails[i] is the smallest tail element of all increasing subsequences of length i+1. Use binary search to find where each element fits.
Python
from bisect import bisect_left
def lis_nlogn(nums):
tails = []
for num in nums:
pos = bisect_left(tails, num)
if pos == len(tails):
tails.append(num) # extend longest subsequence
else:
tails[pos] = num # replace to keep smallest tail
return len(tails)
JavaScript
function lisNlogN(nums) {
const tails = [];
for (const num of nums) {
let lo = 0, hi = tails.length;
while (lo < hi) {
const mid = (lo + hi) >> 1;
if (tails[mid] < num) lo = mid + 1;
else hi = mid;
}
tails[lo] = num;
}
return tails.length;
}
The tails array is always sorted. For each new number, binary search finds the first tail ≥ num. If found, replace it (we found a subsequence of the same length with a smaller tail). If not found, extend the array (we found a longer subsequence). The length of tails is the LIS length.
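A trace on the classic input (the function above, restated so the snippet runs standalone): for [10, 9, 2, 5, 3, 7, 101, 18], tails evolves as [10], [9], [2], [2,5], [2,3], [2,3,7], [2,3,7,101], [2,3,7,18], so the LIS length is 4.

```python
from bisect import bisect_left

def lis_nlogn(nums):
    tails = []
    for num in nums:
        pos = bisect_left(tails, num)  # first tail >= num
        if pos == len(tails):
            tails.append(num)          # extend longest subsequence
        else:
            tails[pos] = num           # keep the smallest possible tail
    return len(tails)

print(lis_nlogn([10, 9, 2, 5, 3, 7, 101, 18]))  # 4 (e.g. 2, 3, 7, 18)
```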
An Eulerian path visits every edge exactly once; an Eulerian circuit is an Eulerian path that ends at its starting vertex. In a directed graph, a path exists only if at most one vertex has out-degree exceeding in-degree by 1 (the start), at most one has in-degree exceeding out-degree by 1 (the end), every other vertex is balanced, and all edges lie in one connected component.
Python
from collections import defaultdict
def eulerian_path(edges):
# Directed graph: edges = [(u, v), ...]
graph = defaultdict(list)
in_deg = defaultdict(int)
out_deg = defaultdict(int)
for u, v in edges:
graph[u].append(v)
out_deg[u] += 1
in_deg[v] += 1
# Find start node
start = edges[0][0]
for node in graph:
if out_deg[node] - in_deg[node] == 1:
start = node
break
# Sort neighbors for lexicographic order (if needed)
for node in graph:
graph[node].sort(reverse=True) # reverse so we can pop()
# Hierholzer's
stack = [start]
path = []
while stack:
while graph[stack[-1]]:
stack.append(graph[stack[-1]].pop())
path.append(stack.pop())
return path[::-1]
JavaScript
function eulerianPath(edges) {
const graph = new Map();
const inDeg = new Map(), outDeg = new Map();
for (const [u, v] of edges) {
if (!graph.has(u)) graph.set(u, []);
graph.get(u).push(v);
outDeg.set(u, (outDeg.get(u) || 0) + 1);
inDeg.set(v, (inDeg.get(v) || 0) + 1);
}
let start = edges[0][0];
for (const [node] of graph) {
if ((outDeg.get(node) || 0) - (inDeg.get(node) || 0) === 1) {
start = node; break;
}
}
for (const [, neighbors] of graph) neighbors.sort((a, b) => b - a); // descending, so pop() takes the smallest
const stack = [start], path = [];
while (stack.length) {
const top = stack[stack.length - 1];
if (graph.has(top) && graph.get(top).length) {
stack.push(graph.get(top).pop());
} else {
path.push(stack.pop());
}
}
return path.reverse();
}
Multiply two polynomials in O(n log n) instead of O(n^2). Convert coefficients to point-value form (evaluate at roots of unity), multiply pointwise, then convert back. Used for convolutions, large number multiplication, and counting problems.
Python
import cmath
def fft(a, invert=False):
n = len(a)
if n == 1: return a
a_even = fft(a[0::2], invert)
a_odd = fft(a[1::2], invert)
angle = 2 * cmath.pi / n * (-1 if invert else 1)
w = 1
wn = cmath.exp(complex(0, angle))
result = [0] * n
for i in range(n // 2):
result[i] = a_even[i] + w * a_odd[i]
result[i + n // 2] = a_even[i] - w * a_odd[i]
if invert:
result[i] /= 2
result[i + n // 2] /= 2
w *= wn
return result
def poly_multiply(a, b):
# Pad to next power of 2
n = 1
while n < len(a) + len(b): n <<= 1
a += [0] * (n - len(a))
b += [0] * (n - len(b))
fa = fft(a)
fb = fft(b)
fc = [fa[i] * fb[i] for i in range(n)]
c = fft(fc, invert=True)
return [round(x.real) for x in c]
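A handy way to sanity-check poly_multiply is to compare it against a naive O(n^2) convolution (ignoring trailing zeros from the power-of-two padding). For example, (1 + 2x)(1 + 3x) = 1 + 5x + 6x^2:

```python
# Naive O(n^2) convolution -- the reference result FFT multiplication must match.
def naive_convolve(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return c

print(naive_convolve([1, 2], [1, 3]))     # [1, 5, 6]
print(naive_convolve([1, 1, 1], [1, 1]))  # [1, 2, 2, 1]
```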
Standard FFT uses complex numbers and has floating-point precision issues. For competitive programming with modular arithmetic, use NTT (Number Theoretic Transform) instead, which works over a prime modulus (commonly 998244353) and avoids floating-point errors entirely.
When you read a problem, look for these clues to identify the right pattern. This is the single most valuable skill for LeetCode.
| Clue in Problem | Pattern to Try | Why |
|---|---|---|
| "Sorted array" | Two Pointers / Binary Search | Sorted order lets you eliminate halves or converge pointers |
| "Contiguous subarray" or "substring" | Sliding Window / Prefix Sum | Contiguous means a window can slide across it |
| "Shortest path" (unweighted) | BFS | BFS explores level-by-level, finding shortest path naturally |
| "All combinations / permutations / subsets" | Backtracking (DFS) | Choose-explore-unchoose generates all possibilities |
| "K-th largest / smallest" | Heap | Heap maintains top-k efficiently in O(n log k) |
| "Next greater / smaller element" | Monotonic Stack | Stack pops when order is violated, connecting elements |
| "Tree traversal" | DFS (recursion) | Trees are naturally recursive structures |
| "Graph connectivity / components" | Union-Find / DFS | Track connected components or detect cycles |
| "Optimization" (min cost, max profit) | DP / Greedy | Overlapping subproblems = DP; provably safe local choice = Greedy |
| "In-place" modification | Two Pointers | Read/write pointers partition the array without extra space |
| "Intervals" (merge, insert, schedule) | Sort + Sweep | Sort by start, process left to right |
| "Max concurrent" / "peak overlap" / "how many at once" | Sweep Line | Convert to +1/-1 events, sweep to find peak running sum |
| "Frequency" or "count occurrences" | Hash Map | O(1) lookups make counting fast |
| "Linked list cycle" or "middle element" | Fast & Slow Pointers | Tortoise and hare detect cycles and find midpoints |
| "Level by level" (tree) | BFS | Queue processes one level at a time |
| "Top K frequent" | Heap / Bucket Sort | Heap for O(n log k), bucket sort for O(n) |
| "Maximum contiguous subarray" | Kadane's Algorithm | Track max ending here vs starting fresh at each position |
| "Prerequisites" or "dependency order" | Topological Sort | DAG ordering ensures dependencies come first |
| "Shortest path" (weighted, non-negative) | Dijkstra's Algorithm | Min-heap greedily processes closest unvisited node |
| "Shortest path" (negative weights) | Bellman Ford | Relax all edges V-1 times; detects negative cycles |
| "All-pairs shortest path" | Floyd Warshall | O(V^3) triple loop through intermediate vertices |
| "Can we split into two groups" | Bipartite Check (BFS/DFS) | 2-color the graph; conflict means not bipartite |
| "Count primes" or number theory | Sieve of Eratosthenes | Mark multiples as composite in O(N log log N) |
| "Assign n items" (n ≤ 20) | Bitmask DP | Enumerate 2^n subsets as bitmask states |
| "Count numbers in range with digit constraint" | Digit DP | Build number digit-by-digit tracking tight bound |
| "Range sum/min/max with updates" | Segment Tree / Fenwick Tree | O(log n) queries and updates on ranges |
| "Find pattern in string" (exact match) | KMP / Rolling Hash | Linear-time string matching without backtracking |
| "Visit every edge exactly once" | Eulerian Path (Hierholzer's) | Check degree conditions, then DFS with edge removal |
| "Polynomial multiplication" or convolution | FFT / NTT | O(n log n) via evaluation at roots of unity |
These are starting points, not guarantees. Some problems combine multiple patterns (e.g., binary search + greedy, sliding window + hash map). The clue tells you where to start; the details tell you how to adapt.
Given each problem description, identify which algorithm pattern you would use. Think about the clues before clicking an answer.