System design is how you think about building software at scale before you write a single line of code. It's the skill that separates someone who can build a feature from someone who can build a system. Interviewers use it to test whether you can reason about trade-offs, and companies need it to build products that don't collapse under real-world load.
You don't need to be a principal engineer to benefit from system design thinking. Every time you make a decision -- "should I store this in memory or in a database?", "should this be synchronous or async?", "what happens if this service goes down?" -- you're doing system design. The framework below gives you a structured way to think through these decisions instead of guessing.
Most people jump straight to drawing boxes and arrows. Don't. The first 5-10 minutes of any system design exercise should be spent understanding what you're actually building. There are two types of requirements:
Functional requirements describe the features and behaviors of the system. They answer: "What can the user do? What does the system need to support?"
// Example: Design a URL shortener (like bit.ly)
Functional Requirements:
1. Users can submit a long URL and get a short URL back
2. When someone visits the short URL, they are redirected to the original
3. Users can optionally set a custom short code ("mylink" instead of "x7Kp2")
4. Short links expire after a configurable time (default: never)
5. Users can see analytics (click count, referrers, geolocation)
// How to extract these in an interview:
// Ask: "Who are the users?" (anonymous? logged in? admins?)
// Ask: "What are the core actions?" (create, read, update, delete)
// Ask: "What's the most important use case?" (focus on this first)
// Ask: "What can we leave out for v1?" (reduces scope)
Non-functional requirements describe the quality attributes of the system. They constrain HOW you build it.
// Example: URL shortener NFRs
Non-Functional Requirements:
1. AVAILABILITY -- System should have 99.9% uptime (~8.8 hours of downtime/year)
2. LATENCY -- URL redirect should complete in < 100ms (p99)
3. THROUGHPUT -- Handle 10,000 redirects per second, 100 new URLs per second
4. SCALABILITY -- Support 1 billion stored URLs
5. DURABILITY -- Once a URL is created, it must never be lost
6. CONSISTENCY -- A newly created short URL should work within 1 second
// Key NFR categories to always consider:
// - Availability: What uptime % do we need? (99.9% vs 99.99% is 10x harder)
// - Latency: What response time is acceptable? (p50, p95, p99)
// - Throughput: How many requests per second? (reads vs writes)
// - Storage: How much data? How fast does it grow?
// - Consistency: Can users see stale data? For how long?
// - Security: Authentication? Authorization? Encryption?
// - Cost: Budget constraints? (affects technology choices)
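Availability targets translate directly into downtime budgets, which is worth being able to compute on the spot. A quick sketch of that arithmetic:

```javascript
// Downtime budget for a given availability target, assuming a 365-day year
function downtimeHoursPerYear(availabilityPercent) {
  const hoursPerYear = 365 * 24; // 8,760 hours
  return hoursPerYear * (1 - availabilityPercent / 100);
}

console.log(downtimeHoursPerYear(99.9).toFixed(1));  // 8.8 hours/year
console.log(downtimeHoursPerYear(99.99).toFixed(1)); // 0.9 hours/year (~53 minutes)
```

Each extra nine shrinks the budget by 10x, which is why 99.99% demands redundancy and automated failover that 99.9% does not.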
Before choosing technologies, estimate the scale. These rough calculations tell you whether you need one server or a thousand.
// Given: 100 million URLs created per month, 100:1 read/write ratio
// WRITES (new URLs)
100M URLs / month
= 100M / (30 days * 24 hours * 3600 seconds)
= 100M / 2.6M
≈ 40 URLs created per second
// READS (redirects)
100:1 ratio → 40 * 100 = 4,000 redirects per second
// STORAGE
Each URL record: short_code (7 bytes) + long_url (avg 200 bytes)
+ created_at (8 bytes) + metadata (100 bytes) ≈ 315 bytes
Per month: 100M * 315 bytes = 31.5 GB
Per year: 31.5 * 12 ≈ 378 GB
Over 5 years: ~1.9 TB
// This tells us:
// - Write throughput is low (40/s) -- single database can handle this
// - Read throughput is moderate (4,000/s) -- needs caching
// - Storage is manageable (~2TB over 5 years) -- fits on one machine
// - We should optimize for reads (caching, read replicas)
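The back-of-envelope math above is easy to check in a few lines, using the same assumptions as the worked example (100M URLs/month, 100:1 read ratio, ~315 bytes per record):

```javascript
const urlsPerMonth = 100e6;
const secondsPerMonth = 30 * 24 * 3600; // 2,592,000 ≈ 2.6M

const writesPerSec = urlsPerMonth / secondsPerMonth;
console.log(Math.round(writesPerSec)); // ≈ 39 writes/sec

const readsPerSec = writesPerSec * 100; // 100:1 read/write ratio
console.log(Math.round(readsPerSec)); // ≈ 3,858 reads/sec

// short_code (7) + long_url (200) + created_at (8) + metadata (100)
const bytesPerRecord = 7 + 200 + 8 + 100; // 315 bytes
const storagePerYearGB = (urlsPerMonth * 12 * bytesPerRecord) / 1e9;
console.log(storagePerYearGB); // 378 GB/year
```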
// Key numbers to memorize:
// 1 day = 86,400 seconds ≈ 100K seconds
// 1 month = 2.6M seconds
// 1 year = 31.5M seconds
// 1 KB = 1,000 bytes (for estimation purposes)
// 1 MB = 1,000 KB, 1 GB = 1,000 MB, 1 TB = 1,000 GB
// Read latencies to know:
// RAM access: ~100 nanoseconds
// SSD random read: ~100 microseconds
// Network round trip: ~0.5 milliseconds (same datacenter)
// SSD sequential 1MB: ~1 millisecond
// HDD sequential 1MB: ~20 milliseconds
// Cross-continent RTT: ~150 milliseconds
Now draw the big picture. Start with the simplest architecture that could work, then evolve it to handle the requirements.
// V1: SIMPLEST POSSIBLE (works for small scale)
//
// [Client] → [Web Server] → [Database (PostgreSQL)]
//
// - Single server, single database
// - Works for thousands of users
// - Single point of failure
// V2: ADD CACHING (handle read-heavy load)
//
// [Client] → [Load Balancer] → [Web Server 1] → [Cache (Redis)]
// → [Web Server 2] ↓ (miss)
// → [Web Server 3] → [Database]
//
// - Multiple servers behind a load balancer
// - Redis caches popular URLs (most URLs follow Zipf's law)
// - Cache hit rate of 80%+ reduces DB load by 5x
// V3: ADD READ REPLICAS (scale reads further)
//
// [Client] → [CDN] → [Load Balancer] → [Servers]
// ↓
// [Cache]
// ↓ (miss)
// [DB Primary] → [Replica 1]
// → [Replica 2]
//
// - CDN handles static assets and can cache redirect responses
// - Database reads go to replicas
// - Only writes go to primary
// V4: FULL SCALE (billions of URLs)
//
// [Client] → [CDN]
// ↓
// [API Gateway / Load Balancer]
// ↓ ↓
// [URL Service] [Analytics Service] ← Separate concerns
// ↓ ↓
// [Redis Cache] [Kafka Queue] ← Async analytics
// ↓ ↓
// [Sharded DB] [ClickHouse] ← Different DBs for different needs
Zoom into each component and explain how it works internally.
// REST API for URL Shortener
// Create short URL
POST /api/v1/urls
Request: { "long_url": "https://example.com/very/long/path", "custom_code": "mylink", "ttl": 86400 }
Response: { "short_url": "https://sho.rt/x7Kp2", "code": "x7Kp2", "expires_at": "2024-02-18T..." }
Status: 201 Created
// Redirect (the hot path -- must be fast)
GET /:code
Response: 301 Moved Permanently (cacheable) or 302 Found (not cached)
Header: Location: https://example.com/very/long/path
// Get analytics
GET /api/v1/urls/:code/stats
Response: { "clicks": 15234, "created_at": "...", "top_referrers": [...] }
// Delete URL
DELETE /api/v1/urls/:code
Response: 204 No Content
// Key API design decisions:
// - 301 vs 302 redirect: 301 is cached by browsers (fewer hits to your server,
// but you lose analytics). 302 always hits your server (accurate analytics).
// - API versioning: /api/v1/ lets you evolve without breaking clients
// - Rate limiting: Prevent abuse (100 creates per hour per user)
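Putting the API decisions together, the redirect hot path is cache-aside plus a 302. A sketch as a pure function so the logic is testable -- `cache` and `db` here are in-memory stand-ins for Redis and PostgreSQL, not real clients:

```javascript
// Sketch of the redirect hot path (GET /:code). Cache-aside lookup,
// then a 302 so every click still hits our server for analytics.
async function resolveRedirect(code, cache, db) {
  // 1. Try the cache first -- most redirects should be served from memory
  let longUrl = cache.get(`url:${code}`);

  // 2. On a miss, fall back to the database and populate the cache
  if (!longUrl) {
    longUrl = await db.lookup(code);
    if (!longUrl) return { status: 404 };
    cache.set(`url:${code}`, longUrl);
  }

  // 3. 302 (not 301) keeps traffic on our server so click counts stay accurate
  return { status: 302, location: longUrl };
}

// Usage with in-memory stand-ins:
const cache = new Map();
const db = {
  lookup: async (code) => (code === "x7Kp2" ? "https://example.com/long" : null),
};
resolveRedirect("x7Kp2", cache, db).then((r) => console.log(r.status, r.location));
// 302 https://example.com/long
```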
// Schema for URL shortener
CREATE TABLE urls (
id BIGSERIAL PRIMARY KEY,
code VARCHAR(10) UNIQUE NOT NULL, -- "x7Kp2"
long_url TEXT NOT NULL, -- Original URL
user_id BIGINT REFERENCES users(id), -- NULL for anonymous
created_at TIMESTAMP DEFAULT NOW(),
expires_at TIMESTAMP, -- NULL = never expires
click_count BIGINT DEFAULT 0 -- Denormalized for speed
);
// Note: the UNIQUE constraint on 'code' already creates the index that
// serves the primary lookup path -- no separate CREATE INDEX needed
CREATE INDEX idx_urls_user ON urls (user_id, created_at DESC);
// Separate table for analytics (write-heavy, different access pattern)
CREATE TABLE clicks (
id BIGSERIAL PRIMARY KEY,
url_id BIGINT REFERENCES urls(id),
clicked_at TIMESTAMP DEFAULT NOW(),
ip_address INET,
user_agent TEXT,
referrer TEXT,
country VARCHAR(2)
);
// This table grows FAST (millions of rows/day)
// Options: partition by date, move to ClickHouse/TimescaleDB, or
// use Kafka → batch processor → aggregated analytics table
// Key data model decisions:
// - Separate URLs from clicks (different read/write patterns)
// - Index on 'code' for O(log n) lookups
// - click_count denormalized on urls table to avoid COUNT(*) on clicks
// - Consider: NoSQL (DynamoDB) for URLs if you only need key-value lookup
// APPROACH 1: Hash-based
// MD5/SHA256 the long URL, take first 7 characters
const crypto = require("crypto");
function generateCode(longUrl) {
  // Node's crypto digests to encodings like "hex"/"base64url", not base62 --
  // base64url is close enough for illustration
  const hash = crypto.createHash("md5").update(longUrl).digest("base64url");
  return hash.substring(0, 7); // e.g. "x7Kp2mR"
}
// Problem: Collisions. Two different URLs could produce the same 7-char hash.
// Solution: Check for collision, if found, append a counter and re-hash.
// APPROACH 2: Counter-based (auto-increment ID → base62)
// ID 1000000 → base62 → "4c92" (with the alphabet below)
function idToBase62(id) {
  const chars = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
  if (id === 0) return chars[0]; // edge case: the loop below would return ""
  let result = "";
  while (id > 0) {
    result = chars[id % 62] + result;
    id = Math.floor(id / 62);
  }
  return result;
}
// Problem: Predictable (sequential). User can guess other URLs.
// Solution: Shuffle with a bijective mapping or use random IDs.
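For completeness, going back from a short code to the numeric ID is the usual positional decode, assuming the same 62-character alphabet as `idToBase62`:

```javascript
// Decode a base62 short code back to its numeric ID (inverse of idToBase62)
function base62ToId(code) {
  const chars = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
  let id = 0;
  for (const ch of code) {
    id = id * 62 + chars.indexOf(ch); // accumulate digit by digit
  }
  return id;
}

console.log(base62ToId("4c92")); // 1000000
```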
// APPROACH 3: Pre-generated random codes
// Background worker generates millions of random codes and stores them
// When a user creates a URL, grab the next unused code from the pool
// No collision checking needed, no computation at request time
// With 7 characters of base62: 62^7 ≈ 3.5 TRILLION possible codes
// Even with 1 billion URLs stored, a fresh random code collides only ~0.03% of
// the time -- the generator worker just retries, so users never see a collision
These components appear in almost every system design. Understand what each one does, when to use it, and the trade-offs.
// Distributes incoming traffic across multiple servers
// Why: no single server can handle all traffic; provides redundancy
// Algorithms:
// Round Robin: 1→A, 2→B, 3→C, 4→A, 5→B, 6→C (simple, even distribution)
// Least Connections: Send to server with fewest active connections (better for uneven work)
// IP Hash: Same client IP always goes to same server (session stickiness)
// Weighted: Server A gets 3x traffic of Server B (different hardware)
// Layer 4 (TCP) vs Layer 7 (HTTP):
// L4: Routes based on IP/port. Fast but dumb (can't inspect HTTP headers)
// L7: Routes based on URL, headers, cookies. Slower but smart (route /api to API servers,
// /static to CDN, etc.)
// Tools: nginx, HAProxy, AWS ALB/NLB, Cloudflare
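Round robin is simple enough to sketch in a few lines. This is a toy in-process picker to show the algorithm -- real load balancers like nginx or HAProxy do this per request or per connection, with health checks layered on top:

```javascript
// Toy round-robin picker: cycles through servers in order
function roundRobin(servers) {
  let i = 0;
  return () => servers[i++ % servers.length];
}

const pick = roundRobin(["A", "B", "C"]);
console.log(pick(), pick(), pick(), pick()); // A B C A
```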
// CACHE-ASIDE (Lazy Loading) -- Most common
// 1. App checks cache. Hit? Return cached data.
// 2. Miss? Query database, store result in cache, return data.
// Pro: Only caches data that's actually requested
// Con: First request always slow (cache miss). Stale data possible.
async function getUser(id) {
let user = await cache.get(`user:${id}`);
if (!user) {
user = await db.query("SELECT * FROM users WHERE id = $1", [id]);
await cache.set(`user:${id}`, user, { ex: 300 }); // TTL: 5 min
}
return user;
}
// WRITE-THROUGH -- Write to cache AND database on every update
// Pro: Cache is always up to date
// Con: Write latency increases (two writes per operation), and the two writes
//      are not atomic -- a crash between them can leave cache and DB out of sync
async function updateUser(id, data) {
  await db.query("UPDATE users SET ... WHERE id = $1", [id]);
  await cache.set(`user:${id}`, data); // Cache updated right after the DB
}
// WRITE-BEHIND (Write-Back) -- Write to cache, async write to DB
// Pro: Very fast writes (only writing to memory)
// Con: Data loss if cache crashes before DB write
// Use for: analytics, activity logs, non-critical data
// Cache eviction policies:
// LRU (Least Recently Used) -- Remove items not accessed recently (most common)
// LFU (Least Frequently Used) -- Remove items accessed least often
// TTL (Time-To-Live) -- Remove items after a fixed time
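LRU is easy to sketch in JavaScript because `Map` preserves insertion order, so the first key is always the least recently used. A minimal illustration, not a production cache:

```javascript
// Minimal LRU cache: get() moves a key to the "most recent" end of the Map;
// set() evicts the first (least recently used) key when over capacity.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry (first key in the Map)
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const lru = new LRUCache(2);
lru.set("a", 1);
lru.set("b", 2);
lru.get("a");    // touch "a" -- now "b" is least recently used
lru.set("c", 3); // evicts "b"
console.log(lru.get("b")); // undefined
console.log(lru.get("a")); // 1
```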
// A message queue decouples producers from consumers
// Producer sends a message → Queue stores it → Consumer processes it later
// WHY: If processing takes 30 seconds (video encoding, sending emails,
// generating reports), you don't want the user waiting 30 seconds.
// Instead: accept the request, queue the work, respond immediately.
// Example: User uploads a video
// WITHOUT queue:
app.post("/upload", async (req, res) => {
await saveVideo(req.file); // 100ms
await transcodeToMP4(req.file); // 30 seconds -- USER WAITS
await generateThumbnail(req.file); // 5 seconds -- USER STILL WAITING
await notifyFollowers(user); // 2 seconds
res.json({ success: true }); // User waited 37 seconds total
});
// WITH queue:
app.post("/upload", async (req, res) => {
const videoId = await saveVideo(req.file); // 100ms
await queue.publish("video.uploaded", { videoId, userId: req.user.id });
res.json({ status: "processing", videoId }); // User gets response in 100ms
});
// Separate workers process the queue:
queue.subscribe("video.uploaded", async (msg) => {
await transcodeToMP4(msg.videoId);
await generateThumbnail(msg.videoId);
await notifyFollowers(msg.userId);
await updateStatus(msg.videoId, "ready");
});
// Popular message queues:
// Redis (simple, fast, but messages can be lost)
// RabbitMQ (reliable, feature-rich, AMQP protocol)
// Kafka (distributed log, massive throughput, event sourcing)
// AWS SQS (managed, no infrastructure to run)
// A CDN caches your content on servers worldwide ("edge nodes")
// so users download from a nearby server instead of your origin
// WITHOUT CDN:
// User in Tokyo → request travels to your server in Virginia → 200ms latency
// WITH CDN:
// User in Tokyo → hits CDN edge in Tokyo → 20ms latency (10x faster)
// What to put on a CDN:
// - Static files: JS, CSS, images, fonts, videos
// - API responses that don't change often (product catalogs, blog posts)
// - Redirect responses (for a URL shortener)
// CDN cache headers (you control what gets cached and for how long):
res.set("Cache-Control", "public, max-age=86400"); // Cache 24 hours
res.set("Cache-Control", "private, no-cache"); // Don't cache (user-specific data)
res.set("Cache-Control", "public, s-maxage=3600"); // CDN caches 1hr, browser doesn't
// Cache invalidation:
// - Versioned URLs: /app.a1b2c3.js (new hash = new URL = automatic cache bust)
// - Purge API: Tell the CDN to drop specific cached items
// - Short TTLs: Cache for 60 seconds, accept slightly stale data
// Popular CDNs: Cloudflare, AWS CloudFront, Fastly, Vercel Edge
// Protects your system from abuse and overload
// Token Bucket algorithm (most common):
// Bucket holds N tokens. Each request consumes 1 token.
// Tokens are added at a fixed rate (e.g., 10/second).
// If bucket is empty, request is rejected (429 Too Many Requests).
// A simpler fixed-window approximation with Redis (a counter per time window,
// not a literal token bucket -- common in practice because it's one INCR):
async function rateLimit(userId, maxRequests, windowSeconds) {
const key = `rate:${userId}`;
const current = await redis.incr(key);
if (current === 1) {
await redis.expire(key, windowSeconds);
}
if (current > maxRequests) {
const ttl = await redis.ttl(key);
    return {
      allowed: false,
      remaining: 0, // nothing left in this window
      retryAfter: ttl,
    };
}
return {
allowed: true,
remaining: maxRequests - current,
};
}
// Express middleware:
app.use(async (req, res, next) => {
const result = await rateLimit(req.ip, 100, 60); // 100 req/min
res.set("X-RateLimit-Remaining", result.remaining);
if (!result.allowed) {
res.set("Retry-After", result.retryAfter);
return res.status(429).json({ error: "Too many requests" });
}
next();
});
// Different limits for different users/endpoints:
// Anonymous: 60 req/min
// Authenticated: 600 req/min
// Premium: 6000 req/min
// POST /api/upload: 10 req/min (expensive operation)
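The Redis counter above is a fixed-window approximation; a literal token bucket refills continuously, which smooths out bursts at window boundaries. A minimal in-memory sketch (the injectable `now` clock is just for deterministic testing):

```javascript
// In-memory token bucket: holds up to `capacity` tokens, refilled at
// `ratePerSec`; each allowed request consumes one token.
class TokenBucket {
  constructor(capacity, ratePerSec, now = Date.now) {
    this.capacity = capacity;
    this.ratePerSec = ratePerSec;
    this.tokens = capacity; // start full
    this.now = now;
    this.last = now();
  }
  allow() {
    const t = this.now();
    // Refill proportionally to the time elapsed since the last check
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((t - this.last) / 1000) * this.ratePerSec
    );
    this.last = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Frozen clock so the demo is deterministic: no refill between calls
const bucket = new TokenBucket(3, 10, () => 0);
console.log(bucket.allow(), bucket.allow(), bucket.allow(), bucket.allow());
// true true true false
```

For a distributed rate limiter you'd keep the bucket state in Redis (often as a Lua script so refill-and-consume is atomic), but the refill logic is the same.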
Good system design acknowledges what can go wrong and how the system handles it. This is what separates "I drew some boxes" from actual engineering.
// In a distributed system, you can only guarantee 2 of 3:
//
// C = Consistency -- Every read gets the most recent write
// A = Availability -- Every request gets a response (even if stale)
// P = Partition Tolerance -- System works even if network splits
//
// You MUST choose P (networks fail in practice), so the real choice is:
//
// CP (Consistency + Partition Tolerance):
// If network splits, refuse to serve requests rather than serve stale data
// Example: Banking systems, inventory systems
// Tools: PostgreSQL, MongoDB (with majority writes), etcd, ZooKeeper
//
// AP (Availability + Partition Tolerance):
// If network splits, serve potentially stale data rather than go down
// Example: Social media feeds, DNS, shopping carts
// Tools: Cassandra, DynamoDB, CouchDB
//
// In practice, most systems are neither purely CP nor AP.
// They make different trade-offs for different operations:
// - User authentication: CP (must be consistent)
// - News feed: AP (stale by a few seconds is fine)
// - Shopping cart: AP (don't lose the cart even if stale)
// - Payment processing: CP (never process a payment twice)
// 1. SINGLE POINT OF FAILURE (SPOF)
// "What happens if this component dies?"
// Fix: Redundancy. Run 2+ instances of everything critical.
// Database: primary + replicas. Servers: behind load balancer.
// Cache: Redis Sentinel or Redis Cluster for failover.
// 2. THUNDERING HERD
// Cache key expires. 10,000 requests hit the database simultaneously.
// Fix: Cache stampede protection -- lock so only 1 request rebuilds cache.
async function getWithLock(key) {
  let value = await cache.get(key);
  if (value) return value;
  const lock = await cache.set(`lock:${key}`, "1", "EX", 5, "NX");
  if (lock) {
    value = await db.query(/* ... */);
    await cache.set(key, value, "EX", 300);
    return value;
  } else {
    await sleep(50); // Wait for the other request to populate the cache
    return cache.get(key); // Try again
  }
}
// 3. HOT KEY PROBLEM
// One cache key gets 90% of all requests (celebrity profile, viral post)
// Single Redis node becomes bottleneck
// Fix: Replicate hot keys across multiple Redis nodes with random suffix
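The hot-key fix can be sketched as: write the value under several suffixed copies, and read a random one. Illustrative only -- `fakeCache` is a `Map` standing in for a Redis client, and in a real Redis Cluster the suffixed keys would hash to different nodes:

```javascript
// Spread one hot key across N copies so reads fan out across cache nodes
const REPLICAS = 5;

async function setHotKey(cache, key, value) {
  // Write every copy so any replica a reader picks is populated
  for (let i = 0; i < REPLICAS; i++) {
    await cache.set(`${key}:${i}`, value);
  }
}

async function getHotKey(cache, key) {
  const i = Math.floor(Math.random() * REPLICAS); // pick a random copy
  return cache.get(`${key}:${i}`);
}

// Usage with a Map standing in for Redis:
const fakeCache = {
  m: new Map(),
  set: async (k, v) => fakeCache.m.set(k, v),
  get: async (k) => fakeCache.m.get(k),
};
setHotKey(fakeCache, "hot:user:42", "profile-json")
  .then(() => getHotKey(fakeCache, "hot:user:42"))
  .then((v) => console.log(v)); // profile-json
```

The cost is write amplification (N writes per update) and slightly staler reads if the copies are updated one by one -- a reasonable trade for a key taking 90% of traffic.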
// 4. DATA LOSS
// Server dies, taking in-memory data with it
// Fix: WAL (Write-Ahead Log), replication, regular backups
// Rule: If losing this data would be bad, it must be on disk before you ack.
// 5. CASCADING FAILURE
// Service A is slow → Service B times out waiting → B backs up → C backs up
// Fix: Circuit breaker pattern, timeouts, bulkheads
// If Service A fails 50% of requests, stop calling it for 30 seconds.
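A minimal circuit breaker along those lines -- count consecutive failures, fail fast for a cooldown once a threshold is hit, then let one request through to probe. This is a sketch of the pattern, not a production implementation (real ones also track failure *rate* and half-open state explicitly); the injectable `now` clock is for testing:

```javascript
// Minimal circuit breaker: after `threshold` consecutive failures,
// reject calls immediately for `cooldownMs` instead of hitting the service.
class CircuitBreaker {
  constructor(fn, { threshold = 5, cooldownMs = 30000, now = Date.now } = {}) {
    this.fn = fn;
    this.threshold = threshold;
    this.cooldownMs = cooldownMs;
    this.now = now;
    this.failures = 0;
    this.openedAt = null; // null = circuit closed (normal operation)
  }
  async call(...args) {
    if (this.openedAt !== null) {
      if (this.now() - this.openedAt < this.cooldownMs) {
        throw new Error("circuit open -- failing fast");
      }
      this.openedAt = null; // cooldown over: let one request through to probe
    }
    try {
      const result = await this.fn(...args);
      this.failures = 0; // any success resets the counter
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.threshold) this.openedAt = this.now();
      throw err;
    }
  }
}

// Demo: with threshold 2, the third call fails fast without touching `flaky`
const flaky = async () => { throw new Error("service down"); };
const breaker = new CircuitBreaker(flaky, { threshold: 2, cooldownMs: 30000 });
```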
// MINUTES 0-5: REQUIREMENTS GATHERING
// Ask questions. Define functional and non-functional requirements.
// Scope the problem. What's in v1? What's out of scope?
// MINUTES 5-10: ESTIMATION
// Back-of-envelope math. How many users? QPS? Storage?
// This determines whether you need 1 server or 1000.
// MINUTES 10-25: HIGH-LEVEL DESIGN
// Draw the main components: clients, servers, databases, caches, queues
// Define the API (endpoints, request/response formats)
// Define the data model (tables, relationships, indexes)
// Walk through the main user flows
// MINUTES 25-40: DEEP DIVE
// Pick 2-3 interesting components and go deep
// - How does X handle 10x traffic?
// - How does Y handle failures?
// - What's the caching strategy?
// - How do you handle data consistency?
// MINUTES 40-45: TRADE-OFFS AND WRAP-UP
// What are the bottlenecks?
// What would you change for 10x scale?
// What are the operational concerns? (monitoring, alerting, deployment)
// GOLDEN RULES:
// 1. Don't over-engineer. Start simple, add complexity only when needed.
// 2. Justify every component. "Why Redis?" "Because our read QPS is 50K
// and PostgreSQL can handle ~5K. We need a caching layer."
// 3. Talk about trade-offs. There are no perfect solutions.
// 4. Use real numbers. "We need ~2TB of storage" is better than "a lot."
// 5. Consider operational complexity. A simple system you can debug
// beats a complex one you can't.
// URL SHORTENER
// Key: hash/counter for short codes, Redis for hot URLs, 301 vs 302
// RATE LIMITER
// Key: token bucket or sliding window, Redis for distributed counting
// CHAT SYSTEM (WhatsApp/Slack)
// Key: WebSockets for real-time, message queue for delivery, last-seen tracking
// Hard part: group chats at scale, offline message delivery, read receipts
// NEWS FEED (Twitter/Instagram)
// Key: fan-out-on-write (pre-compute feeds) vs fan-out-on-read (compute on demand)
// Trade-off: write amplification vs read latency
// Celebrity problem: hybrid approach (fan-out-on-read for users with 10M+ followers)
// FILE STORAGE (Dropbox/Google Drive)
// Key: chunk files into blocks, deduplicate, sync client with server
// Hard part: conflict resolution (two users edit same file), efficient sync
// VIDEO STREAMING (YouTube/Netflix)
// Key: transcode to multiple resolutions/bitrates, CDN for delivery, adaptive bitrate
// Hard part: live streaming (low latency), recommendation engine
// SEARCH ENGINE (Google)
// Key: inverted index (word → list of documents containing it), PageRank for ranking
// Hard part: crawling the web, keeping index fresh, handling queries with spelling errors
// NOTIFICATION SYSTEM
// Key: multiple channels (push, email, SMS), user preferences, delivery guarantees
// Hard part: at-most-once vs at-least-once delivery, priority queues, rate limiting
// E-COMMERCE (Amazon checkout)
// Key: inventory reservation (not just checking), payment processing, order state machine
// Hard part: preventing overselling, handling payment failures, distributed transactions
System design is not about memorizing architectures. It's about reasoning through trade-offs. The interviewer (or your tech lead) cares more about HOW you think than WHAT you draw. When you say "I'd use Redis here," they want to hear "because our read throughput is 50K QPS, our data fits in memory at ~10GB, and we can tolerate stale data for up to 60 seconds -- which makes a TTL-based cache appropriate." That reasoning is the skill. The boxes and arrows are just notation.