Table of Contents

1. How JavaScript Actually Works
2. Execution Context & Hoisting
3. Scope & Closures
4. The this Keyword
5. Prototypes & Inheritance
6. The Event Loop
7. Promises & Async/Await
8. Async Patterns -- p-limit, Resource Pools & Concurrency Control
9. ES6+ Features (Harmony)
10. Proxy & Reflect
11. Modules (CJS vs ESM)
12. Memory Management
13. Weird Parts & Gotchas

1. How JavaScript Actually Works

JavaScript is a single-threaded, dynamically typed, garbage-collected language. But saying that doesn't explain what's really happening. Let's go deeper.

The V8 Engine (Chrome & Node)

V8 is the engine that powers Chrome and Node.js. It takes your JavaScript source code and turns it into machine code that your CPU can execute. Here's the pipeline:

V8 Compilation Pipeline
Source Code (.js)
    |
    v
  Parser  -->  AST (Abstract Syntax Tree)
    |
    v
  Ignition (Interpreter)  -->  Bytecode
    |
    v  (if a function is called many times -- "hot")
  TurboFan (JIT Compiler)  -->  Optimized Machine Code
    |
    v  (if assumptions break -- "deoptimization")
  Back to Ignition bytecode

JIT compilation means V8 doesn't compile everything upfront. It starts by interpreting your code into bytecode (fast startup), then identifies "hot" functions that run many times and compiles them into highly optimized machine code. If the optimized code breaks assumptions (like a variable changing types), V8 deoptimizes back to bytecode.
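
A sketch of the kind of type instability that can trigger deoptimization (illustrative -- whether V8 actually optimizes or deopts here depends on its heuristics and call counts):

```javascript
// A function that only ever sees numbers stays monomorphic,
// so TurboFan can specialize it for fast numeric arithmetic
function add(a, b) {
  return a + b;
}

add(1, 2); // number feedback collected
add(3, 4); // still numbers -- a candidate for optimization

// String operands invalidate the numeric assumption; if add() had
// been optimized, V8 would deoptimize it back to bytecode
console.log(add("a", "b")); // "ab"
```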

Hidden Classes & Inline Caching

V8 creates hidden classes for your objects. When you always create objects with the same shape (same properties in the same order), V8 can optimize property access to a simple memory offset lookup instead of a dictionary lookup.

JavaScript
// GOOD: same shape every time -- V8 creates one hidden class
function createUser(name, age) {
  return { name, age };
}

// BAD: adding properties after creation changes the hidden class
const user = {};
user.name = "Sean";   // hidden class changes
user.age = 25;        // hidden class changes again

The Call Stack & Heap

JavaScript has two memory areas:

  1. The call stack -- fixed-size frames for function calls and primitive locals; freed automatically when a function returns.
  2. The heap -- dynamically allocated objects, arrays, closures, and functions; reclaimed by the garbage collector.

Stack Overflow

If you call functions recursively without a base case, you'll fill up the call stack and get RangeError: Maximum call stack size exceeded. The default stack size is around 10,000-15,000 frames depending on the engine.
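
A quick, engine-dependent way to observe the limit yourself -- this sketch recurses until the RangeError fires and counts the frames on the way back up:

```javascript
// Recurse until the engine throws, counting frames as the stack unwinds
function measureStackDepth() {
  try {
    return 1 + measureStackDepth();
  } catch {
    return 1; // RangeError: Maximum call stack size exceeded
  }
}

console.log(measureStackDepth()); // engine-dependent, typically ~10k+
```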

Garbage Collection

V8 uses a generational garbage collector:

Objects start in the young generation. If they survive two GC cycles, they're promoted to old generation. This is why short-lived objects (function-scoped variables) are cheap -- they get cleaned up quickly.

2. Execution Context & Hoisting

When JavaScript runs your code, it creates an execution context. There are two phases:

Phase 1: Creation Phase

Before any code executes, the engine sets up the context: it allocates memory for var and function declarations (hoisting), builds the scope chain, and binds this.

Phase 2: Execution Phase

The engine then runs the code line by line, assigning values and invoking functions.

Hoisting Explained

Hoisting means declarations are moved to the top of their scope during the creation phase. But var, let, const, and functions behave differently:

JavaScript
// var is hoisted AND initialized to undefined
console.log(x); // undefined (not an error!)
var x = 5;

// let/const are hoisted but NOT initialized (Temporal Dead Zone)
console.log(y); // ReferenceError: Cannot access 'y' before initialization
let y = 5;

// Function declarations are fully hoisted (name + body)
greet(); // "Hello!" -- works!
function greet() { console.log("Hello!"); }

// Function expressions are NOT hoisted (only the variable is)
sayBye(); // TypeError: sayBye is not a function
var sayBye = function() { console.log("Bye!"); };

Temporal Dead Zone (TDZ)

The TDZ is the time between entering a scope and the let/const declaration being reached. During this time, the variable exists but you can't access it. This is why let/const are safer than var -- they catch bugs where you use a variable before declaring it.
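
One consequence worth knowing: even typeof, which is normally safe on undeclared names, throws inside the TDZ. A small demonstration:

```javascript
// typeof is safe for names that were never declared at all...
console.log(typeof neverDeclared); // "undefined" -- no error

// ...but NOT for a let/const binding still in its TDZ
try {
  typeof z; // z is declared below, so it's in the TDZ here
} catch (err) {
  console.log(err.name); // "ReferenceError"
}
let z = 1;
```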

3. Scope & Closures

Types of Scope

JavaScript
var global = "I'm everywhere";

function outer() {
  var funcScoped = "Only in outer()";

  if (true) {
    var stillFuncScoped = "var ignores blocks!";
    let blockScoped = "Only in this if-block";
  }

  console.log(stillFuncScoped); // works! var is function-scoped
  console.log(blockScoped);      // ReferenceError! let is block-scoped
}

Lexical Scope

JavaScript uses lexical scoping (also called static scoping). This means a function's scope is determined by where it's defined, not where it's called.
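
A minimal demonstration -- the function reads the x from where it was defined, even when the call site has its own x:

```javascript
const x = "global";

function getX() {
  return x; // x is resolved where getX is DEFINED, not where it's called
}

function caller() {
  const x = "local"; // shadows the outer x, but getX never sees it
  return getX();
}

console.log(caller()); // "global" -- lexical (static) scoping
```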

Closures

A closure is when a function "remembers" the variables from the scope where it was created, even after that scope has finished executing. This is the single most important concept in JavaScript.

JavaScript
function createCounter() {
  let count = 0; // this variable is "closed over"

  return {
    increment: () => ++count,
    getCount: () => count,
  };
}

const counter = createCounter();
counter.increment();
counter.increment();
console.log(counter.getCount()); // 2
// count is not accessible directly -- it's private!

Classic Closure Gotcha: Loops

JavaScript
// BUG: all callbacks print 3
for (var i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 100);
}
// Output: 3, 3, 3 (var is function-scoped, all closures share the same i)

// FIX: use let (block-scoped, creates new binding each iteration)
for (let i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 100);
}
// Output: 0, 1, 2

Why Closures Matter

Closures enable data privacy, factory functions, callbacks, currying, memoization, and the module pattern. Every time you pass a callback to .map(), .filter(), or addEventListener(), you're using closures.
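
As one concrete example, a memoizer is nothing more than a closure over a private cache:

```javascript
// The cache lives in the closure -- callers can't touch it
function memoize(fn) {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
}

let calls = 0;
const slowSquare = memoize((n) => {
  calls++; // track how many times the real work runs
  return n * n;
});

slowSquare(4);
slowSquare(4);
console.log(slowSquare(4), calls); // 16 1 -- computed only once
```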

4. The this Keyword

this in JavaScript is determined by how a function is called, not where it's defined. This is the opposite of how closures work (lexical), which is why it confuses people.

The Rules (in order of precedence)

Rule           Context                                       this value
1. new         new Foo()                                     The newly created object
2. Explicit    fn.call(obj), fn.apply(obj), fn.bind(obj)     obj
3. Implicit    obj.fn()                                      obj
4. Default     fn()                                          globalThis (or undefined in strict mode)

JavaScript
const user = {
  name: "Sean",
  greet() {
    console.log(`Hi, I'm ${this.name}`);
  }
};

user.greet();          // "Hi, I'm Sean" (implicit: this = user)

const fn = user.greet;
fn();                   // "Hi, I'm undefined" (default: this = globalThis)

fn.call({ name: "Bob" }); // "Hi, I'm Bob" (explicit)

Arrow Functions: Lexical this

Arrow functions do NOT have their own this. They inherit this from the enclosing scope (lexical binding). This is why arrow functions are perfect for callbacks but bad for object methods:

JavaScript
const timer = {
  seconds: 0,
  start() {
    // Arrow function inherits `this` from start()
    setInterval(() => {
      this.seconds++;
      console.log(this.seconds);
    }, 1000);
  }
};

timer.start(); // 1, 2, 3... (this = timer, works!)

// If we used a regular function instead:
// setInterval(function() { this.seconds++; }, 1000);
// this would be globalThis, not timer -- BUG

5. Prototypes & Inheritance

JavaScript doesn't have classical inheritance (like Java/C++). It uses prototypal inheritance. Every object has an internal [[Prototype]] link to another object.

The Prototype Chain

JavaScript
const animal = { eats: true };
const dog = Object.create(animal); // dog's prototype is animal
dog.barks = true;

console.log(dog.barks); // true (own property)
console.log(dog.eats);  // true (found on prototype)

// The chain: dog -> animal -> Object.prototype -> null

__proto__ vs .prototype

JavaScript
function Dog(name) {
  this.name = name;
}
Dog.prototype.bark = function() {
  console.log(`${this.name} says woof!`);
};

const rex = new Dog("Rex");
rex.bark(); // "Rex says woof!"

// rex.__proto__ === Dog.prototype  --> true
// Dog.prototype.__proto__ === Object.prototype  --> true

Classes Are Just Sugar

ES6 class syntax is syntactic sugar over prototypes. It doesn't change the underlying model:

JavaScript
class Animal {
  constructor(name) {
    this.name = name;
  }
  speak() {
    console.log(`${this.name} makes a noise`);
  }
}

class Dog extends Animal {
  bark() {
    console.log(`${this.name} barks`);
  }
}

const d = new Dog("Rex");
d.bark();  // "Rex barks"
d.speak(); // "Rex makes a noise" (inherited)
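
You can verify the sugar claim directly -- the class produces an ordinary constructor function wired into a prototype chain (a fresh snippet, redeclaring Animal and Dog):

```javascript
class Animal {}
class Dog extends Animal {}

const rex = new Dog();

console.log(typeof Dog); // "function" -- a class is a constructor function
console.log(Object.getPrototypeOf(Dog.prototype) === Animal.prototype); // true
console.log(Object.getPrototypeOf(Dog) === Animal); // true -- statics inherit too
console.log(rex instanceof Animal); // true -- just a prototype-chain walk
```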

6. The Event Loop

This is the most important concept for understanding how JavaScript handles asynchronous code. JavaScript is single-threaded, but it can handle concurrent operations through the event loop.

The Event Loop Architecture
                         +-----------------+
                         |   Call Stack    |  (executes JS, one frame at a time)
                         +-----------------+
                                  |
                                  v
                     +------------------------+
                     |       Event Loop       |  (checks: is stack empty?)
                     +------------------------+
                          /      |      \
                         v       v       v
     +----------------+  +---------------+  +-----------+
     |   Microtask    |  |   Macrotask   |  |  Web APIs |
     |     Queue      |  |     Queue     |  | (timers,  |
     | (Promise.then, |  | (setTimeout,  |  |  fetch,   |
     | queueMicrotask)|  |  setInterval, |  |  DOM)     |
     +----------------+  |  I/O, etc.)   |  +-----------+
                         +---------------+

The Loop in Action

  1. Execute everything in the call stack until it's empty
  2. Drain the microtask queue (Promise callbacks, queueMicrotask)
  3. Take ONE task from the macrotask queue (setTimeout, setInterval, I/O)
  4. Go back to step 2

Microtasks Always Run Before Macrotasks

This is the key insight. After the call stack empties, ALL microtasks run before the next macrotask. This is why Promise.resolve().then() always runs before setTimeout(fn, 0).

JavaScript
console.log("1");

setTimeout(() => console.log("2"), 0);

Promise.resolve().then(() => console.log("3"));

console.log("4");

// Output: 1, 4, 3, 2
// 1: synchronous (call stack)
// 4: synchronous (call stack)
// 3: microtask (Promise.then)
// 2: macrotask (setTimeout)

setTimeout(fn, 0) Explained

It doesn't mean "run immediately." It means "run as soon as the call stack is empty AND all microtasks have been processed AND there's no other macrotask ahead of you." Browsers also clamp nested timers: per the HTML spec, once setTimeout calls are nested about five levels deep, the minimum delay becomes 4ms.

7. Promises & Async/Await

Promise States

A Promise is an object representing the eventual completion or failure of an async operation. It has three states:

JavaScript
const promise = new Promise((resolve, reject) => {
  // async operation
  const success = true;
  if (success) resolve("Data!");
  else reject(new Error("Failed"));
});

promise
  .then(data => console.log(data))   // "Data!"
  .catch(err => console.log(err))    // handles rejection
  .finally(() => console.log("done")); // always runs

Promise Internals -- How It Actually Works

When you write new Promise(executor), the engine does several things that are worth understanding deeply. The Promise spec (Promises/A+) is surprisingly small, but its implications are huge.

The executor runs synchronously. This is the most common misconception. The function you pass to new Promise() runs immediately, on the current call stack. It is NOT deferred:

JavaScript
console.log("before");

const p = new Promise((resolve) => {
  console.log("executor runs NOW"); // synchronous!
  resolve("done");
  console.log("after resolve -- still runs"); // resolve doesn't return/throw
});

console.log("after constructor");

p.then(v => console.log("then:", v));

console.log("end");

// Output:
// "before"
// "executor runs NOW"
// "after resolve -- still runs"
// "after constructor"
// "end"
// "then: done"            <-- microtask, runs after all sync code

.then() registers microtasks. When you call .then(), the callback is not invoked immediately even if the Promise is already resolved. Instead, it is scheduled as a microtask. This guarantees consistent asynchronous behavior -- you can always rely on .then() callbacks running after the current synchronous code finishes.
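
A small demonstration -- even on an already-fulfilled Promise, the callback never runs inline:

```javascript
const p = Promise.resolve("ready"); // already fulfilled
let ran = false;

p.then(() => { ran = true; });

console.log(ran); // false -- the callback was only SCHEDULED, not run inline
queueMicrotask(() => console.log(ran)); // true -- after the .then callback runs
```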

The Promise Resolution Procedure (thenable detection). When you resolve a Promise with a value, the engine checks: is this value a "thenable" (an object with a .then method)? If so, it recursively unwraps it. This is why you can resolve a Promise with another Promise and it "flattens" automatically:

JavaScript
const inner = new Promise(r => setTimeout(() => r("inner value"), 1000));

const outer = new Promise(resolve => {
  resolve(inner); // resolve with a Promise -- it unwraps!
});

outer.then(v => console.log(v)); // "inner value" (after 1s, not a Promise object)

// This also works with any "thenable" (duck typing):
const fakeThenable = {
  then(onFulfill) {
    onFulfill("I'm not a real Promise but I quack like one");
  }
};
Promise.resolve(fakeThenable).then(console.log);
// "I'm not a real Promise but I quack like one"

Why you cannot cancel a Promise. A Promise represents a value that will exist in the future. Once created, there is no built-in mechanism to cancel it. The executor has already started running. The best you can do is ignore the result (using an AbortController pattern or a cancellation token), but the underlying work (network request, timer, etc.) may still complete. This is a deliberate design choice -- Promises are about the result, not the operation.
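
A sketch of the "ignore the result" pattern with AbortController; delay here is an illustrative helper, not a built-in, and the timer stands in for whatever underlying work can actually be stopped:

```javascript
// A delay that can be abandoned via an AbortSignal
function delay(ms, { signal } = {}) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(resolve, ms);
    signal?.addEventListener("abort", () => {
      clearTimeout(timer); // stop the underlying work when we can
      reject(new Error("aborted"));
    });
  });
}

const controller = new AbortController();

delay(5000, { signal: controller.signal })
  .then(() => console.log("finished"))
  .catch((err) => console.log(err.message)); // "aborted"

controller.abort(); // the Promise rejects; the timer never fires
```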

Promise.withResolvers() (ES2024). A newer addition that gives you the resolve/reject functions without nesting inside the executor. This is extremely useful when the resolution happens in a completely different context:

JavaScript
// Old pattern: awkward variable hoisting
let resolve, reject;
const promise = new Promise((res, rej) => {
  resolve = res;
  reject = rej;
});

// New pattern: Promise.withResolvers()
const { promise: p, resolve: res, reject: rej } = Promise.withResolvers();

// Now you can resolve/reject from anywhere:
setTimeout(() => res("resolved from a timer!"), 1000);

// Real use case: wrapping event emitters
function waitForEvent(emitter, eventName) {
  const { promise, resolve } = Promise.withResolvers();
  emitter.once(eventName, resolve);
  return promise;
}

Promise Combinators

Method                  Resolves when                             Rejects when
Promise.all()           ALL promises fulfill                      ANY promise rejects
Promise.allSettled()    ALL promises settle (fulfill or reject)   Never rejects
Promise.race()          FIRST settled promise is fulfilled        FIRST settled promise is rejected
Promise.any()           ANY promise fulfills (first one wins)     ALL promises reject
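
A quick sketch of the race vs. any distinction, using a hypothetical after helper:

```javascript
// Resolve with `value` after `ms` milliseconds
const after = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

(async () => {
  // race: the first promise to SETTLE wins, fulfilled or not
  console.log(await Promise.race([after(10, "quick"), after(50, "slow")])); // "quick"

  // any: the first promise to FULFILL wins -- rejections are skipped
  const flaky = Promise.reject(new Error("down"));
  console.log(await Promise.any([flaky, after(20, "backup")])); // "backup"
})();
```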

Async/Await -- What the Engine Actually Does

async/await is syntactic sugar over Promises, but understanding the transformation the engine performs helps you reason about execution order and avoid bugs.

An async function always returns a Promise. Even if you return a plain value, it gets wrapped in Promise.resolve(). If you throw, the returned Promise is rejected:

JavaScript
async function getNumber() {
  return 42;
}
// Equivalent to:
function getNumber() {
  return Promise.resolve(42);
}

async function throwError() {
  throw new Error("oops");
}
// Equivalent to:
function throwError() {
  return Promise.reject(new Error("oops"));
}

await suspends the function and adds a continuation to the microtask queue. When the engine hits await, it pauses the async function, returns control to the caller, and schedules the rest of the function as a microtask that runs when the awaited Promise settles. The call stack is free to do other work in the meantime:

JavaScript
async function demo() {
  console.log("A");
  await Promise.resolve();
  console.log("B"); // this is a microtask continuation
}

console.log("1");
demo();
console.log("2");

// Output: "1", "A", "2", "B"
// "A" runs synchronously (before the first await)
// "2" runs because demo() returned control at the await
// "B" runs as a microtask after the synchronous code finishes

How try/catch maps to .catch(). When you use try/catch around an await, the engine transforms it into a .catch() handler on the Promise chain. A rejection becomes a thrown exception inside the async function:

JavaScript
// This async/await code:
async function fetchData() {
  try {
    const data = await fetch("/api");
    return data;
  } catch (err) {
    console.error(err);
  }
}

// Is roughly equivalent to this Promise chain:
function fetchData() {
  return fetch("/api")
    .then(data => data)
    .catch(err => console.error(err));
}

Common Gotcha: Forgetting to await (Fire-and-Forget)

If you call an async function without await, it returns a Promise that nobody is listening to. If that Promise rejects, you get an unhandled rejection -- one of the most common bugs in Node.js applications. The function still runs, but errors vanish silently:

JavaScript
async function saveToDb(data) {
  await db.insert(data); // might throw!
}

// BUG: no await -- if db.insert fails, we'll never know
saveToDb({ name: "Sean" });

// FIX: always await (or handle the returned Promise)
await saveToDb({ name: "Sean" });

Sequential vs Parallel Execution Patterns

One of the most impactful performance mistakes is accidentally running independent async operations sequentially when they could run in parallel:

JavaScript
// SLOW: sequential -- each waits for the previous to finish
// Total time: time(A) + time(B) + time(C)
const a = await fetchA();
const b = await fetchB();
const c = await fetchC();

// FAST: parallel -- all start at the same time
// Total time: max(time(A), time(B), time(C))
const [a, b, c] = await Promise.all([
  fetchA(),
  fetchB(),
  fetchC(),
]);

// PARALLEL WITH ERROR ISOLATION: if one fails, you still get the others
const results = await Promise.allSettled([
  fetchA(),
  fetchB(),
  fetchC(),
]);

results.forEach(r => {
  if (r.status === "fulfilled") console.log(r.value);
  else console.error(r.reason);
});

Putting the pieces together -- a fetch helper with proper error handling, plus independent requests run in parallel:

JavaScript
async function fetchUser(id) {
  try {
    const res = await fetch(`/api/users/${id}`);
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return await res.json();
  } catch (err) {
    console.error("Failed to fetch user:", err);
    throw err;
  }
}

// Parallel fetches (don't await sequentially if independent!)
const [user, posts] = await Promise.all([
  fetchUser(1),
  fetchPosts(1),
]);

8. Async Patterns -- p-limit, Resource Pools & Concurrency Control

In the previous section, we saw that Promise.all() runs everything in parallel. But what happens when "everything" is 1,000 HTTP requests, or 500 database queries? You will overwhelm the server, exhaust your connection pool, hit rate limits, or run out of file descriptors. This section is about the patterns that sit between "one at a time" (sequential) and "everything at once" (unbounded parallel).

The Problem: Unbounded Concurrency

Consider this seemingly reasonable code:

JavaScript
const urls = Array(1000).fill(null).map((_, i) => `https://api.example.com/item/${i}`);

// This fires ALL 1000 requests simultaneously
const results = await Promise.all(
  urls.map(url => fetch(url).then(r => r.json()))
);

What goes wrong:

  - You open 1,000 simultaneous connections, hammering the server (and likely tripping its rate limits).
  - Your process can exhaust sockets and file descriptors.
  - A single rejection makes Promise.all() reject, discarding every other result.
  - Memory spikes, because every response is in flight at once.

What you actually want is: run at most N tasks at the same time. When one finishes, start the next. This is concurrency control.

What is p-limit?

p-limit is a popular npm package (by Sindre Sorhus) that limits the number of Promises running concurrently. The API is simple: you create a limiter with a concurrency number, then wrap your async functions with it:

JavaScript
import pLimit from "p-limit";

const limit = pLimit(5); // max 5 concurrent

const urls = Array(1000).fill(null).map((_, i) => `https://api.example.com/item/${i}`);

// Only 5 fetches run at any given time
const results = await Promise.all(
  urls.map(url =>
    limit(() => fetch(url).then(r => r.json()))
  )
);

But how does it work? It is not magic. Let's build it from scratch.

Building p-limit From Scratch

The core idea is a semaphore -- a concurrency primitive that tracks how many operations are "active" and queues the rest. Here is a complete implementation with detailed comments explaining every line:

JavaScript
function pLimit(concurrency) {
  // Validate input
  if (concurrency < 1) throw new RangeError("Concurrency must be >= 1");

  // The queue of functions waiting to run
  const queue = [];

  // How many are currently running
  let activeCount = 0;

  // Called when a task finishes -- tries to start the next queued task
  function next() {
    activeCount--;

    if (queue.length > 0) {
      // Pull the next task off the queue and run it
      const fn = queue.shift();
      fn();
    }
  }

  // The "run" function: starts the task immediately
  async function run(fn, resolve, reject) {
    activeCount++;

    // try/finally also covers synchronous throws from fn,
    // so activeCount is always decremented exactly once
    try {
      resolve(await fn());
    } catch (err) {
      reject(err);
    } finally {
      next();
    }
  }

  // The "enqueue" function: either runs immediately or queues
  function enqueue(fn, resolve, reject) {
    if (activeCount < concurrency) {
      // We have capacity -- run now
      run(fn, resolve, reject);
    } else {
      // At capacity -- queue it for later
      queue.push(() => run(fn, resolve, reject));
    }
  }

  // The limiter function itself -- wraps user's fn in a Promise
  function limit(fn) {
    return new Promise((resolve, reject) => {
      enqueue(fn, resolve, reject);
    });
  }

  // Expose metadata for debugging
  Object.defineProperties(limit, {
    activeCount: { get: () => activeCount },
    pendingCount: { get: () => queue.length },
  });

  return limit;
}

Let's trace through what happens when you run 3 tasks with a concurrency of 2:

Execution Trace: pLimit(2) with 3 tasks
limit(taskA)  -->  activeCount=0 < 2, run immediately.  activeCount=1
limit(taskB)  -->  activeCount=1 < 2, run immediately.  activeCount=2
limit(taskC)  -->  activeCount=2 >= 2, push to queue.   queue=[taskC]

... taskA finishes ...
  next() called: activeCount=1, queue has taskC
  taskC starts.  activeCount=2, queue=[]

... taskB finishes ...
  next() called: activeCount=1, queue empty. Done.

... taskC finishes ...
  next() called: activeCount=0, queue empty. All done.

Why Build It From Scratch?

Understanding the internals of p-limit teaches you the semaphore pattern -- one of the foundational concurrency primitives in all of computer science. The same pattern appears in Go (buffered channels), Java (Semaphore), Python (asyncio.Semaphore), and operating systems (POSIX semaphores). Once you understand this, you can solve any concurrency-limiting problem in any language.
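
The same pattern as a standalone Semaphore class is a small step from the pLimit code above -- a sketch:

```javascript
// A counting semaphore: acquire a slot, do work, release the slot
class Semaphore {
  constructor(max) {
    this.max = max;    // maximum concurrent holders
    this.active = 0;   // slots currently held
    this.queue = [];   // resolve functions of parked acquirers
  }

  async acquire() {
    if (this.active < this.max) {
      this.active++;
      return;
    }
    // No capacity: park until release() hands us a slot directly
    await new Promise((resolve) => this.queue.push(resolve));
  }

  release() {
    const next = this.queue.shift();
    if (next) {
      next(); // transfer the slot straight to a waiter (active unchanged)
    } else {
      this.active--;
    }
  }
}

// Usage: at most 2 of these run at once
const sem = new Semaphore(2);
async function worker(id) {
  await sem.acquire();
  try {
    console.log(`worker ${id} running`);
  } finally {
    sem.release();
  }
}
```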

Resource Pool / Semaphore

A resource pool is the next step up from p-limit. While p-limit controls how many tasks run, a resource pool manages actual resources -- database connections, API client instances, browser pages in Puppeteer, etc. The key difference: tasks "check out" a resource, use it, then "return" it.

Why does this pattern exist? Because many resources are expensive to create and have hard limits:

  - Database servers cap how many simultaneous connections they accept.
  - Establishing a connection (TCP handshake, auth) is slow -- reusing one is much cheaper than recreating it.
  - Headless browser pages (Puppeteer) each cost significant memory and startup time.

Building a Resource Pool From Scratch

JavaScript
class ResourcePool {
  constructor(createFn, destroyFn, maxSize) {
    this.createFn = createFn;     // async () => resource
    this.destroyFn = destroyFn;   // async (resource) => void
    this.maxSize = maxSize;
    this.available = [];          // resources ready to be used
    this.waiting = [];            // queued resolve functions
    this.size = 0;               // total resources created
  }

  async acquire() {
    // 1. If a resource is available, return it immediately
    if (this.available.length > 0) {
      return this.available.pop();
    }

    // 2. If we haven't hit the limit, create a new one
    if (this.size < this.maxSize) {
      this.size++;
      try {
        return await this.createFn();
      } catch (err) {
        this.size--; // creation failed, don't count it
        throw err;
      }
    }

    // 3. At capacity -- wait for one to be released
    return new Promise(resolve => {
      this.waiting.push(resolve);
    });
  }

  release(resource) {
    if (this.waiting.length > 0) {
      // Someone is waiting -- give the resource directly to them
      const resolve = this.waiting.shift();
      resolve(resource);
    } else {
      // Nobody waiting -- put it back in the available pool
      this.available.push(resource);
    }
  }

  // Convenience: acquire, use, and automatically release
  async use(fn) {
    const resource = await this.acquire();
    try {
      return await fn(resource);
    } finally {
      this.release(resource);
    }
  }

  // Clean up idle resources (call only after all work is finished --
  // resources currently checked out are not tracked here)
  async drain() {
    for (const resource of this.available) {
      await this.destroyFn(resource);
    }
    this.size -= this.available.length;
    this.available = [];
  }
}

Using the Resource Pool

JavaScript
// Example: Database connection pool
const pool = new ResourcePool(
  async () => {
    console.log("Creating new DB connection...");
    return await createDbConnection("postgres://localhost/mydb");
  },
  async (conn) => {
    console.log("Closing DB connection...");
    await conn.close();
  },
  10 // max 10 connections
);

// Use it -- the pool handles acquire/release automatically
const user = await pool.use(async (conn) => {
  return await conn.query("SELECT * FROM users WHERE id = $1", [1]);
});

// Run 100 queries with only 10 connections
const userIds = Array(100).fill(null).map((_, i) => i + 1);
const users = await Promise.all(
  userIds.map(id =>
    pool.use(async (conn) => conn.query("SELECT * FROM users WHERE id = $1", [id]))
  )
);
// At most 10 queries run at once. As each finishes, its connection
// is released back and picked up by the next waiting query.

Real-World Pattern: Rate-Limited API Calls

Combining p-limit with delay gives you rate limiting -- critical for working with external APIs that enforce requests-per-second limits:

JavaScript
function pLimitWithRate(concurrency, minInterval) {
  const limit = pLimit(concurrency);
  let lastRun = 0;

  return function(fn) {
    return limit(async () => {
      // Reserve the next free time slot BEFORE sleeping, so concurrent
      // tasks don't all read the same lastRun and fire together
      const now = Date.now();
      const wait = Math.max(0, lastRun + minInterval - now);
      lastRun = now + wait;
      if (wait > 0) {
        await new Promise(r => setTimeout(r, wait));
      }
      return fn();
    });
  };
}

// GitHub API: 5000 requests/hour = ~1.4/second
// Use 3 concurrent with 750ms min gap for safety
const ghFetch = pLimitWithRate(3, 750);

const repos = await Promise.all(
  usernames.map(name =>
    ghFetch(() => fetch(`https://api.github.com/users/${name}/repos`).then(r => r.json()))
  )
);

Real-World Pattern: Connection Pool with Health Checks

Production connection pools need health validation. A connection might have been dropped by the server, timed out, or become stale. Here is how you add health checking to the resource pool pattern:

JavaScript
class HealthCheckedPool extends ResourcePool {
  constructor(createFn, destroyFn, validateFn, maxSize) {
    super(createFn, destroyFn, maxSize);
    this.validateFn = validateFn; // async (resource) => boolean
  }

  async acquire() {
    while (this.available.length > 0) {
      const resource = this.available.pop();
      const isHealthy = await this.validateFn(resource);
      if (isHealthy) return resource;

      // Resource is dead -- destroy it and try the next one
      this.size--;
      await this.destroyFn(resource);
    }

    // No healthy resources available -- create or wait
    return super.acquire();
  }
}

// Usage:
const pool = new HealthCheckedPool(
  () => createDbConnection(connectionString),
  (conn) => conn.close(),
  async (conn) => {
    try {
      await conn.query("SELECT 1"); // ping
      return true;
    } catch {
      return false;
    }
  },
  10
);

When to Use What

p-limit: Use when you just need to cap how many async operations run at once. Great for bulk API calls, file processing, or any "map over N items with bounded concurrency" scenario.

Resource Pool: Use when the limiting factor is an actual reusable resource (database connections, browser pages, API tokens). The resource itself gets checked out and returned, not just a concurrency slot.

Rate limiter: Use when you need to respect a requests-per-second limit. Combine p-limit with timing logic.

9. ES6+ Features (Harmony)

ES6 (ECMAScript 2015, codenamed "Harmony") was the biggest update to JavaScript ever. Here's every major feature and why it matters:

Destructuring

JavaScript
// Object destructuring
const { name, age, city = "Unknown" } = user;

// Array destructuring
const [first, second, ...rest] = [1, 2, 3, 4, 5];
// first=1, second=2, rest=[3,4,5]

// Swap variables
[a, b] = [b, a];

// Nested destructuring
const { address: { street } } = user;

// Function parameter destructuring
function greet({ name, age }) {
  console.log(`${name} is ${age}`);
}

Spread & Rest

JavaScript
// Spread: expand iterable into individual elements
const arr = [1, 2, 3];
const copy = [...arr];
const merged = [...arr, 4, 5];

const obj = { a: 1, b: 2 };
const updated = { ...obj, b: 3, c: 4 }; // { a:1, b:3, c:4 }

// Rest: collect remaining elements
function sum(...nums) {
  return nums.reduce((a, b) => a + b, 0);
}

Map, Set, WeakMap, WeakSet

JavaScript
// Map: any key type (not just strings like plain object keys)
const obj = { id: 1 };
const map = new Map();
map.set("key", "value");
map.set(42, "number key");
map.set(obj, "object key!");

// Set: unique values only
const set = new Set([1, 2, 2, 3]); // Set {1, 2, 3}
const unique = [...new Set(array)]; // deduplicate an array

// WeakMap/WeakSet: keys are weakly held (garbage-collectible)
// Use for caching metadata on objects without preventing GC
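
A sketch of that caching idea -- metadata keyed by the object itself, with no manual cleanup:

```javascript
// Attach metadata to objects without preventing their collection
const metadata = new WeakMap();

function tag(obj, info) {
  metadata.set(obj, info);
}

let node = { id: 1 };
tag(node, { visited: true });
console.log(metadata.get(node)); // { visited: true }

node = null;
// The WeakMap entry is now unreachable and can be garbage-collected;
// a regular Map would have kept the object alive forever
```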

Symbol, Iterators, Generators

JavaScript
// Symbol: unique, immutable identifier
const id = Symbol("id");
const obj = { [id]: 123 };

// Iterator protocol: any object with [Symbol.iterator]()
const range = {
  from: 1,
  to: 5,
  [Symbol.iterator]() {
    let current = this.from;
    return {
      next: () => current <= this.to
        ? { value: current++, done: false }
        : { done: true }
    };
  }
};
for (const n of range) console.log(n); // 1,2,3,4,5

// Generator: function that can pause and resume
function* fibonacci() {
  let [a, b] = [0, 1];
  while (true) {
    yield a;
    [a, b] = [b, a + b];
  }
}
const fib = fibonacci();
fib.next().value; // 0
fib.next().value; // 1
fib.next().value; // 1
fib.next().value; // 2

Optional Chaining & Nullish Coalescing

JavaScript
// Optional chaining: ?. stops and returns undefined if null/undefined
const street = user?.address?.street; // no more && chains
const result = arr?.[0];              // optional array access
obj.method?.();                       // optional method call

// Nullish coalescing: ?? falls back ONLY for null/undefined
const port = config.port ?? 3000;     // port 0 is kept! (|| would fall back to 3000)
const name = user.name ?? "Anonymous";

10. Proxy & Reflect

A Proxy wraps an object and lets you intercept and customize operations on it (get, set, delete, function calls, etc.). This is how reactive frameworks like Vue 3 work.

JavaScript
const handler = {
  get(target, prop) {
    console.log(`Accessing ${prop}`);
    return prop in target ? target[prop] : `Property ${prop} not found`;
  },
  set(target, prop, value) {
    if (prop === "age" && typeof value !== "number") {
      throw new TypeError("Age must be a number");
    }
    target[prop] = value;
    return true;
  }
};

const user = new Proxy({ name: "Sean", age: 25 }, handler);
console.log(user.name);      // "Accessing name" -> "Sean"
console.log(user.missing);   // "Property missing not found"
user.age = "old";            // TypeError: Age must be a number

Reflect provides default implementations of the same operations that Proxy traps intercept. Use Reflect.get(), Reflect.set(), etc. inside your traps to invoke the default behavior.
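
A minimal sketch of a trap pair that delegates to Reflect for the default behavior (a trimmed-down variant of the example above):

```javascript
const counter = new Proxy({ count: 0 }, {
  get(target, prop, receiver) {
    // Reflect.get runs the default lookup (and handles getters correctly)
    return Reflect.get(target, prop, receiver);
  },
  set(target, prop, value, receiver) {
    console.log(`setting ${String(prop)} = ${value}`);
    return Reflect.set(target, prop, value, receiver);
  },
});

counter.count = 1; // "setting count = 1"
console.log(counter.count); // 1
```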

11. Modules (CJS vs ESM)

JavaScript has two module systems, and understanding the difference is critical for Node.js and bundler configuration.

CommonJS (CJS) -- Node.js Original

JavaScript
// math.js (export)
const add = (a, b) => a + b;
module.exports = { add };
// OR: exports.add = add;

// app.js (import)
const { add } = require("./math");

ES Modules (ESM) -- The Standard

JavaScript
// math.js (export)
export const add = (a, b) => a + b;
export default function multiply(a, b) { return a * b; }

// app.js (import)
import multiply, { add } from "./math.js";

// Dynamic import (code splitting)
const mod = await import("./heavy-module.js");

Feature            CommonJS                         ES Modules
Loading            Synchronous                      Asynchronous
Evaluation         Runtime                          Compile-time (static)
Tree-shaking       Not possible                     Supported
Syntax             require / module.exports         import / export
Default in Node    Yes (until "type": "module")     Opt-in

Using ESM in Node.js

Add "type": "module" to your package.json. Then all .js files are treated as ESM. If you need CJS in an ESM project, use .cjs extension. If you need ESM in a CJS project, use .mjs extension.

12. Memory Management

Common Memory Leaks

JavaScript
// MEMORY LEAK: detached DOM reference
const elements = [];
function addElement() {
  const div = document.createElement("div");
  document.body.appendChild(div);
  elements.push(div); // holds reference even if removed from DOM
}

// FIX: use WeakRef or WeakMap for caches
const cache = new WeakMap();
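
A sketch of the WeakRef side of that fix -- hold a reference that doesn't keep the object alive (whether and when deref() starts returning undefined depends on the GC):

```javascript
// WeakRef holds a target without keeping it alive
let bigBuffer = { data: new Array(1_000_000).fill(0) };
const ref = new WeakRef(bigBuffer);

console.log(ref.deref() === bigBuffer); // true -- still strongly referenced

bigBuffer = null;
// Nothing strong points at the object now; after a future GC cycle,
// ref.deref() may return undefined -- always check before using it
const maybe = ref.deref();
if (maybe) {
  // safe to use maybe.data here
}
```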

13. Weird Parts & Gotchas

Type Coercion

JavaScript
// JavaScript tries to convert types automatically
"5" + 3     // "53"  (number -> string, concatenation)
"5" - 3     // 2     (string -> number, subtraction)
"5" * "3"   // 15    (both -> numbers)
true + true // 2     (booleans -> numbers)
[] + []     // ""    (arrays -> strings -> concatenation)
[] + {}     // "[object Object]"
{} + []     // 0     ({} is parsed as empty block, not object)

Always Use ===

JavaScript
0 == ""       // true  (WAT)
0 == "0"      // true
"" == "0"     // false (inconsistent!)
null == undefined // true
null === undefined // false

// RULE: always use === and !== (strict equality)

typeof Quirks

JavaScript
typeof null        // "object"  (historical bug, will never be fixed)
typeof []          // "object"  (arrays are objects)
typeof NaN         // "number"  (Not a Number is a... number)
typeof function(){} // "function" (the only "subtype" typeof detects)

// Better checks:
Array.isArray([]);           // true
Number.isNaN(NaN);            // true (don't use global isNaN())
value === null;               // check for null explicitly

Floating Point

JavaScript
0.1 + 0.2 === 0.3  // false! (0.30000000000000004)

// Fix: compare with epsilon
Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON // true

// Or use integers (cents instead of dollars)
const total = 10 + 20; // 30 cents, not 0.10 + 0.20 dollars

Array.sort() Default

JavaScript
[10, 9, 80, 1].sort();
// [1, 10, 80, 9] -- WRONG! Default sort is lexicographic (string comparison)

// Always provide a comparator for numbers
[10, 9, 80, 1].sort((a, b) => a - b);
// [1, 9, 10, 80] -- correct

Summary

JavaScript is quirky, but every quirk has a reason rooted in its history and design decisions. Understanding why these exist (backward compatibility, type coercion rules, IEEE 754 floats) makes you a much stronger developer. Don't just memorize the gotchas -- understand the engine.