Memory Leaks in Node.js: How to Find Them Before They Crash Your Server
Your Node.js server has been running fine for days. Then memory climbs past 800MB. Then 1.2GB. Then the process restarts. Your monitoring shows another restart six hours later. You add more RAM. It restarts again.
That's not a capacity problem. That's a memory leak — and it will find you eventually, even in apps you've been running in production for years.
What a Memory Leak Actually Is in Node.js
A memory leak in Node.js is not a crash. It's memory that gets allocated and never released — because something in your code is still holding a reference to it, even though you're done with it.
JavaScript is garbage collected. The V8 engine automatically frees memory when nothing references an object anymore. A leak happens when your code keeps an unintentional reference alive — a callback registered but never removed, a cache that grows forever, a closure capturing a variable it shouldn't.
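A leak is easy to reproduce in miniature. The sketch below (names and sizes are illustrative) keeps a reference to every allocation in a module-level array, which is exactly the shape of the real-world patterns covered later:

```javascript
// leak-demo.js — a deliberate leak: the module-level array keeps
// every allocation reachable, so the GC can never reclaim it
const leaked = [];

function handleWork() {
  // Each call allocates roughly 800KB of heap and stores a reference
  leaked.push(new Array(100_000).fill(Math.random()));
}

const before = process.memoryUsage().heapUsed;
for (let i = 0; i < 50; i++) handleWork();
const after = process.memoryUsage().heapUsed;

console.log(`heapUsed grew by ${Math.round((after - before) / 1024 / 1024)} MB`);

// Dropping the references makes the memory collectable again —
// this is what every fix in this article boils down to
leaked.length = 0;
```

Run it and heapUsed climbs by tens of megabytes, because the array keeps every allocation reachable. Clear the array and the next GC cycle can reclaim it all.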
Node.js has three key memory numbers to understand:
Heap Used — where your JS objects live (what you usually leak)
External — C++ objects bound to JS (Buffers, streams)
RSS — total memory reserved by the process

Run this anywhere to see your current memory snapshot:
const used = process.memoryUsage();
console.log({
heapUsed: `${Math.round(used.heapUsed / 1024 / 1024)} MB`,
heapTotal: `${Math.round(used.heapTotal / 1024 / 1024)} MB`,
external: `${Math.round(used.external / 1024 / 1024)} MB`,
rss: `${Math.round(used.rss / 1024 / 1024)} MB`,
});

A healthy server has a heapUsed that fluctuates but trends flat over time. A leaking server has a heapUsed that only ever goes up.
The Five Patterns That Actually Cause Leaks
1. Event Emitters with Registered Listeners That Never Get Removed
This is the most common real-world leak and the easiest to miss.
// server.js
const EventEmitter = require('events');
const emitter = new EventEmitter();
function handleRequest(req, res) {
// New listener added on every request — never removed
emitter.on('data', (chunk) => {
res.write(chunk);
});
}

Every request adds a listener. None are removed. After 10,000 requests, you have 10,000 listeners on that emitter, each one keeping a reference to its res object and closure scope alive in memory.
Node.js warns you when an emitter has more than 10 listeners: MaxListenersExceededWarning. This warning is not noise — it's a leak alarm. Never suppress it with emitter.setMaxListeners(0) without understanding why.
The fix:
function handleRequest(req, res) {
const onData = (chunk) => {
res.write(chunk);
};
emitter.on('data', onData);
// Clean up when the request is done
res.on('finish', () => {
emitter.off('data', onData);
});
}

2. Caches Without Eviction
In-memory caches are the second most common source of leaks. A plain object or Map used as a cache will grow forever if you never evict entries.
// This cache never shrinks
const cache = new Map();
async function getUser(id) {
if (cache.has(id)) return cache.get(id);
const user = await db.query(`SELECT * FROM users WHERE id = $1`, [id]);
cache.set(id, user);
return user;
}

After enough unique user IDs, this Map holds every user object ever fetched in RAM. Use a WeakMap when the key is an object, or implement a max-size eviction policy with lru-cache:
import { LRUCache } from 'lru-cache';
// Bounded cache — max 500 entries, expire after 5 minutes
const cache = new LRUCache({
max: 500,
ttl: 1000 * 60 * 5,
});
async function getUser(id) {
if (cache.has(id)) return cache.get(id);
const user = await db.query(`SELECT * FROM users WHERE id = $1`, [id]);
cache.set(id, user);
return user;
}

3. Closures Capturing Large Objects
Closures in JavaScript capture their surrounding scope — including everything in that scope, even if the closure itself only needs one small part of it.
// The timer callback captures the entire `bigData` array
function processReport() {
const bigData = fetchHeavyDataset(); // 50MB array
const summary = bigData.reduce((acc, row) => acc + row.value, 0);
// This timer runs for 60 seconds — and holds bigData in memory the whole time
setTimeout(() => {
console.log('Report processed, summary:', summary);
}, 60000);
}

bigData stays in memory for 60 seconds even though only summary is needed inside the timeout. On a busy server calling processReport() frequently, you're holding dozens of 50MB arrays simultaneously. The fix: drop the reference as soon as you're done with it, so the GC can reclaim the array before the timer fires.

function processReport() {
  let bigData = fetchHeavyDataset();
  const summary = bigData.reduce((acc, row) => acc + row.value, 0);
  bigData = null; // release the 50MB array — GC can collect it now
  setTimeout(() => {
    console.log('Report processed, summary:', summary);
  }, 60000);
}

4. Timers and Intervals That Never Clear
setInterval keeps its callback and all captured variables alive indefinitely. If you create intervals without storing and clearing them, they leak for the lifetime of the process.
// Called on every WebSocket connection — interval leaks when socket closes
io.on('connection', (socket) => {
setInterval(() => {
socket.emit('ping', { time: Date.now() });
}, 5000);
});

Every new connection creates an interval. When the socket disconnects, the interval keeps running — and keeps the socket object alive in memory.
// Clear the interval when the connection ends
io.on('connection', (socket) => {
const interval = setInterval(() => {
socket.emit('ping', { time: Date.now() });
}, 5000);
socket.on('disconnect', () => {
clearInterval(interval);
});
});

5. Unbounded Request Queues and Global State
Global arrays and objects that accumulate data per-request without ever being drained:
// Request log that grows forever
const requestLog = [];
app.use((req, res, next) => {
requestLog.push({
url: req.url,
method: req.method,
timestamp: Date.now(),
headers: req.headers, // entire headers object captured
body: req.body, // entire body captured
});
next();
});

On a server handling 1,000 requests per minute, this array gains another thousand entries every minute and never shrinks — and each entry pins an entire headers object and request body. Replace it with a bounded structure or flush it periodically:
// Keep last 1000 requests only, drop the rest
const MAX_LOG = 1000;
const requestLog = [];
app.use((req, res, next) => {
requestLog.push({ url: req.url, method: req.method, timestamp: Date.now() });
if (requestLog.length > MAX_LOG) requestLog.shift();
next();
});

How to Actually Detect a Leak
Step 1 — Watch Heap Over Time
Add a memory monitor to your server process so you can see the trend, not just a snapshot:
// mem-monitor.js — drop this into any Node.js app
const INTERVAL_MS = 30_000; // every 30 seconds
setInterval(() => {
const { heapUsed, heapTotal, rss, external } = process.memoryUsage();
const mb = (bytes) => `${Math.round(bytes / 1024 / 1024)}MB`;
console.log(JSON.stringify({
ts: new Date().toISOString(),
heapUsed: mb(heapUsed),
heapTotal: mb(heapTotal),
external: mb(external),
rss: mb(rss),
}));
}, INTERVAL_MS).unref(); // .unref() so this timer doesn't prevent process exit

Run this for a few hours under real load. If heapUsed grows by more than 10–20MB per hour without any usage spike, you have a leak.
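You can also turn those samples into a number instead of eyeballing the logs. A sketch (the sample shape and 60MB/hour figure are illustrative) that fits a least-squares line through (timestamp, heapUsed) pairs and reports growth in MB per hour:

```javascript
// Estimate heap growth rate (MB/hour) from [timestampMs, heapUsedBytes]
// samples using an ordinary least-squares slope.
function heapGrowthMBPerHour(samples) {
  const n = samples.length;
  const xs = samples.map(([t]) => t);
  const ys = samples.map(([, h]) => h);
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - meanX) * (ys[i] - meanY);
    den += (xs[i] - meanX) ** 2;
  }
  const bytesPerMs = num / den;
  return (bytesPerMs * 3_600_000) / (1024 * 1024);
}

// Synthetic example: heap climbing 1MB every minute, i.e. 60MB/hour
const samples = Array.from({ length: 10 }, (_, i) => [
  i * 60_000,              // one sample per minute
  (100 + i) * 1024 * 1024, // heapUsed grows 1MB per sample
]);
console.log(heapGrowthMBPerHour(samples).toFixed(1)); // ≈ 60.0
```

Feed it the samples your memory monitor already logs; a slope persistently above your chosen threshold is the alert condition.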
Step 2 — Take Heap Snapshots
Node.js can generate V8 heap snapshots — serialized dumps of every object currently in memory, with their size and references.
const v8 = require('v8');
const fs = require('fs');
// Trigger via an endpoint so you can capture snapshots on demand
app.get('/debug/heap-snapshot', (req, res) => {
const filename = `heap-${Date.now()}.heapsnapshot`;
const file = v8.writeHeapSnapshot(filename); // returns the path it wrote
res.json({ message: 'Snapshot written', file });
});

Take a snapshot before a leak occurs (baseline), leave the server running under load for 30–60 minutes, then take a second snapshot. Open both in Chrome DevTools → Memory tab → switch to "Comparison" view. The Comparison view shows which object types increased between the two snapshots — these are your suspects.
Step 3 — Use --inspect with Chrome DevTools
Run your server with the inspector flag:
node --inspect server.js
# or for a running process:
kill -USR1 <pid>

Open chrome://inspect in Chrome, click your process, and you get a full live DevTools connection to your Node.js process — including the Memory tab with heap snapshots and allocation timelines, without modifying any code.
Step 4 — clinic.js for Production-Grade Diagnostics
clinic.js from NearForm is the most practical tool for leak diagnosis:
npm install -g clinic
# Run your server under clinic's heap profiler
clinic heapprofiler -- node server.js
# Then send traffic (curl loop, k6, autocannon, etc.)
autocannon -c 100 -d 60 http://localhost:3000
# clinic generates an interactive HTML report

The heap profiler shows you which functions are allocating memory and — critically — which allocations survive across GC cycles. Those survivors are the leaks.
How to Confirm You Fixed It
After making a fix, don't just restart and assume it's done. Run a controlled test:
# Install autocannon — a Node.js HTTP benchmarker
npm install -g autocannon
# Hammer your server with 50 concurrent connections for 5 minutes
autocannon -c 50 -d 300 http://localhost:3000/api/users

While it's running, watch your memory monitor logs. If heapUsed climbs steadily during the test and doesn't return close to baseline after traffic stops, the leak is still there. A fixed server should show heapUsed oscillate (allocate during load, GC when load stops) and return to roughly the same baseline.
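The baseline comparison can be scripted too. A sketch (the 50MB threshold is arbitrary): force a full collection if the process was started with --expose-gc, so you measure live objects rather than garbage the GC simply hasn't reclaimed yet, then compare heapUsed against the pre-load baseline:

```javascript
// baseline-check.js — compare heapUsed before and after a load test.
// Run with: node --expose-gc baseline-check.js (gc() is skipped otherwise)
function settledHeapUsedMB() {
  // global.gc only exists when Node was started with --expose-gc
  if (global.gc) global.gc();
  return process.memoryUsage().heapUsed / 1024 / 1024;
}

const baseline = settledHeapUsedMB();

// ... run your load test here (autocannon against the server), then:
const after = settledHeapUsedMB();

const growthMB = after - baseline;
console.log(`heap grew ${growthMB.toFixed(1)}MB over the test`);
if (growthMB > 50) {
  console.warn('heapUsed did not return to baseline — leak likely remains');
}
```

Without --expose-gc the numbers are noisier, since recent garbage inflates heapUsed, but the trend across repeated test runs still tells the story.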
Production Safeguards While You Hunt the Leak
You need to find the leak, but you also need your server to stay alive in the meantime.
Memory limit with auto-restart:
# Start Node.js with a heap limit
node --max-old-space-size=512 server.js
# Pair with PM2 for auto-restart
pm2 start server.js --max-memory-restart 450M

When the process's memory passes 450MB, PM2 restarts it. This keeps the server available while you debug.
Expose current memory as a health check metric:
app.get('/health', (req, res) => {
const { heapUsed, heapTotal } = process.memoryUsage();
const heapUsedMB = Math.round(heapUsed / 1024 / 1024);
const heapTotalMB = Math.round(heapTotal / 1024 / 1024);
res.json({
status: heapUsedMB < 400 ? 'ok' : 'warning',
memory: { heapUsedMB, heapTotalMB },
uptime: Math.round(process.uptime()),
});
});

Wire this to your monitoring (Datadog, Grafana, whatever you use). Alert when heapUsedMB exceeds a threshold so you know before it crashes.
Common Mistakes
- Suppressing the MaxListenersExceededWarning — this is a diagnostic signal, not a noisy warning. It means you have an event emitter leak right now
- Using plain objects or Maps as caches without a max size — every cache needs a bound; pick lru-cache and set a max
- Taking one heap snapshot and trying to read it in isolation — snapshots are only useful as comparisons; you need a before and an after
- Adding more RAM to fix the leak — it delays the crash by hours, not days; the leak will fill the new RAM too
- Not calling .unref() on diagnostic timers — your monitoring interval itself can prevent the process from exiting cleanly in tests
The Takeaway
Memory leaks in Node.js don't announce themselves. They grow slowly, silently, and then crash your server at 3AM. The five patterns that cause 90% of real leaks — event emitters without cleanup, unbounded caches, closures holding large objects, uncleaned intervals, and growing global state — are all fixable once you know what to look for. Add a memory monitor now, before you have a problem. When something looks wrong, take two heap snapshots and compare them in Chrome DevTools. And use clinic.js before you go to production on anything that handles sustained traffic.