
How I Fixed a Node.js API That Was Taking 15 Minutes to Return 8,000 Records

A slow MongoDB + Node.js API was taking 15 minutes to return 8,000 records. Here's the exact process I used to diagnose it — missing indexes, in-memory aggregation, no pagination — and bring it down to 15 seconds.


Sandeep Bansod

April 6, 2026

The ticket came in with one line: "The API is too slow."

No stack trace. No logs. Just a frustrated client and an endpoint that took around 15 minutes to return 8,000 records. Fun.

The worst part? The code wasn't even mine. Someone else had written it, shipped it, and moved on. Now I was the one staring at a Mongoose query wondering where 15 minutes were disappearing to.

Here's exactly what I investigated, what I found, and how I brought it down to around 15 seconds — without rewriting everything from scratch.

The Setup

The API was a Node.js service using Express and Mongoose, backed by MongoDB Atlas. It had a /reports endpoint that fetched records and ran some calculations for a dashboard.

Simple enough on paper. Here's what the original route looked like:

JavaScript
// Original route — the culprit
app.get('/reports', async (req, res) => {
  try {
    const data = await Report.find({
      status: req.query.status,
      createdAt: {
        $gte: new Date(req.query.from),
        $lte: new Date(req.query.to)
      }
    });
    res.json(data);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

Nothing about this looks obviously broken, which is what made it tricky. The problem wasn't the code logic — it was everything happening underneath it.

Step 1: Find Out What MongoDB Is Actually Doing

Before changing a single line, I ran .explain("executionStats") on the query. If you've never used this, make it a habit — it's the fastest way to see exactly how MongoDB is executing your query.

JavaScript
const result = await Report.find({
  status: 'pending',
  createdAt: {
    $gte: new Date('2024-01-01'),
    $lte: new Date('2024-12-31')
  }
}).explain('executionStats');

console.log(JSON.stringify(result.executionStats, null, 2));

This was the output that told me everything:

JSON
{
  "executionSuccess": true,
  "nReturned": 8000,
  "totalDocsExamined": 240000,
  "totalKeysExamined": 0,
  "executionTimeMillis": 874000
}

See that? totalKeysExamined: 0. MongoDB wasn't using any index at all. It was scanning all 240,000 documents in the collection just to return 8,000 matching ones. That's a full collection scan on every single request — and 874 seconds to prove it.
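To make that read of the numbers concrete, here's a small standalone helper (my own illustration, not part of the original codebase) that applies the same checks to an executionStats object:

```javascript
// Illustrative helper (not from the original codebase): flag queries whose
// executionStats point to a collection scan or a poorly matched index.
function diagnoseExplain(stats) {
  const { nReturned, totalDocsExamined, totalKeysExamined } = stats;
  if (totalKeysExamined === 0) return 'collection scan: no index used';
  // Examining far more documents than we return means the index is a poor fit.
  if (totalDocsExamined > nReturned * 10) return 'index not selective enough';
  return 'ok';
}

// The two outputs from this article:
console.log(diagnoseExplain({ nReturned: 8000, totalDocsExamined: 240000, totalKeysExamined: 0 }));
// → 'collection scan: no index used'
console.log(diagnoseExplain({ nReturned: 8000, totalDocsExamined: 8000, totalKeysExamined: 8001 }));
// → 'ok'
```

The 10x threshold is an arbitrary cutoff for illustration; the real signal is the ratio of totalDocsExamined to nReturned.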

Step 2: Add the Right Indexes

This was the single biggest fix, and it took about two minutes to apply.

JavaScript
// In your Mongoose schema
const reportSchema = new mongoose.Schema({
  status: { type: String },
  createdAt: { type: Date },
  userId: mongoose.Schema.Types.ObjectId,
  category: String,
  amount: Number
});

// Add a compound index that matches the query pattern exactly
reportSchema.index({ status: 1, createdAt: -1 });

Or if you want to add it directly in the MongoDB shell without touching the codebase:

JavaScript
db.reports.createIndex({ status: 1, createdAt: -1 });

Why compound and not two separate indexes? MongoDB generally uses a single index per query (index intersection exists, but the planner rarely chooses it). If you add individual indexes on status and createdAt, MongoDB will pick one of them and still filter document-by-document on the other. A compound index covers both fields together in a single efficient lookup, and the order matters: put the equality filter (status) first and the range filter (createdAt) second.
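That ordering is MongoDB's documented ESR guideline: Equality fields first, then Sort fields, then Range fields. As a sketch, here's a hypothetical helper (my own illustration, nothing from the codebase) that builds an index spec following that rule:

```javascript
// Hypothetical sketch: order compound-index fields per MongoDB's ESR
// guideline — Equality first, then Sort, then Range.
function suggestIndex({ equality = [], sort = [], range = [] }) {
  const spec = {};
  for (const field of equality) spec[field] = 1;
  for (const [field, direction] of sort) spec[field] = direction;
  // A range field that also appears as a sort field keeps its sort direction.
  for (const field of range) spec[field] = spec[field] ?? 1;
  return spec;
}

// The query in this article: equality on status, range (and sort) on createdAt.
console.log(suggestIndex({
  equality: ['status'],
  sort: [['createdAt', -1]],
  range: ['createdAt'],
}));
// → { status: 1, createdAt: -1 }
```

The helper just encodes the heuristic; it knows nothing about your actual data distribution, which is what ultimately decides whether an index pays off.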

After adding the index, the explain output changed to this:

JSON
{
  "totalDocsExamined": 8000,
  "totalKeysExamined": 8001,
  "executionTimeMillis": 1200
}

From 240,000 documents examined down to 8,000. That's the impact of one index.

Step 3: Fix the Aggregation Pipeline

The original code also had a pattern that I see constantly — fetching records from MongoDB and then processing them in Node.js:

JavaScript
// Original: pull everything into memory, then process in JS
const records = await Report.find({ status: 'pending' });

const summary = records.reduce((acc, r) => {
  acc[r.category] = (acc[r.category] || 0) + r.amount;
  return acc;
}, {});

This is backwards. You're transferring thousands of full documents over the network just to run a sum. MongoDB can do this aggregation in the database and send you back a tiny result.

JavaScript
// Fixed: push the aggregation into MongoDB
const summary = await Report.aggregate([
  {
    $match: {
      status: 'pending',
      createdAt: {
        $gte: new Date(req.query.from),
        $lte: new Date(req.query.to)
      }
    }
  },
  {
    $group: {
      _id: '$category',
      totalAmount: { $sum: '$amount' },
      count: { $sum: 1 }
    }
  },
  {
    $sort: { totalAmount: -1 }
  }
]);

Always put $match at the top of the pipeline. If you filter after a $group or $lookup, MongoDB has already processed every document before your filter kicks in. Filter early, process as little as possible.
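If the effect isn't obvious, here's a plain-JavaScript model of the two orderings (arrays standing in for pipeline stages, nothing MongoDB-specific) that counts how many documents the expensive stage has to touch:

```javascript
// Plain-JS model (not real MongoDB): count how many documents the
// "expensive" stage touches when we filter late vs. filter early.
const docs = Array.from({ length: 1000 }, (_, i) => ({
  status: i % 4 === 0 ? 'pending' : 'done',
  amount: i,
}));

// Filter late: the expensive stage runs over every document first.
let touchedLate = 0;
docs.map((d) => { touchedLate++; return d; })
    .filter((d) => d.status === 'pending');

// Filter early: only matching documents reach the expensive stage.
let touchedEarly = 0;
docs.filter((d) => d.status === 'pending')
    .map((d) => { touchedEarly++; return d; });

console.log(touchedLate, touchedEarly); // 1000 250
```

Same result either way, but the early filter processes a quarter of the documents. In a real pipeline with $group or $lookup, that difference is where your milliseconds go.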

Use $project to drop fields you don't need:

JavaScript
{
  $project: {
    category: 1,
    amount: 1,
    status: 1,
    createdAt: 1,
    _id: 0
  }
}

Every extra field is extra bytes — across thousands of documents, it adds up fast.
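As a rough back-of-the-envelope check (plain JavaScript, with a made-up sample document, not real data from the app), compare the serialized size with and without the extra fields:

```javascript
// Rough illustration: how much a projection shrinks one document.
// The sample document and its field values are invented for this sketch.
const fullDoc = {
  _id: '66f1a2b3c4d5e6f7a8b9c0d1',
  category: 'sales',
  amount: 129.5,
  status: 'pending',
  createdAt: '2024-03-01T00:00:00.000Z',
  userId: '66f1a2b3c4d5e6f7a8b9c0d2',
  notes: 'Quarterly adjustment entry, reviewed by finance on 2024-03-02.',
};

// Keep the same shape the $project stage above keeps.
const projected = (({ category, amount, status, createdAt }) =>
  ({ category, amount, status, createdAt }))(fullDoc);

const fullBytes = JSON.stringify(fullDoc).length;
const slimBytes = JSON.stringify(projected).length;
console.log(fullBytes, slimBytes); // projected payload is less than half the size
```

Multiply that per-document saving by 8,000 documents per response and the projection stops looking like a micro-optimization.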

Step 4: Refactor the API Code

Even after fixing the indexes and aggregation, the Node.js code itself had a few more problems worth addressing.

No Pagination

Returning 8,000 records in a single JSON response is never a good idea, regardless of how fast the query is. Add pagination from the start:

JavaScript
app.get('/reports', async (req, res) => {
  const page  = parseInt(req.query.page)  || 1;
  const limit = parseInt(req.query.limit) || 100;
  const skip  = (page - 1) * limit;

  const filter = {
    status: req.query.status,
    createdAt: {
      $gte: new Date(req.query.from),
      $lte: new Date(req.query.to)
    }
  };

  try {
    const [data, total] = await Promise.all([
      Report.find(filter)
        .select('category amount status createdAt userId')
        .sort({ createdAt: -1 })
        .skip(skip)
        .limit(limit)
        .lean(),
      Report.countDocuments(filter)
    ]);

    res.json({
      data,
      pagination: {
        page,
        limit,
        total,
        pages: Math.ceil(total / limit)
      }
    });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
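For reference, the skip/pages arithmetic in that route boils down to a few lines you can sanity-check in isolation (an illustrative helper, not part of the route itself):

```javascript
// The same pagination arithmetic as the route above, as a standalone helper.
function paginate(total, page = 1, limit = 100) {
  const skip = (page - 1) * limit; // documents to skip before this page
  return { skip, limit, pages: Math.ceil(total / limit) };
}

console.log(paginate(8000, 1));      // → { skip: 0, limit: 100, pages: 80 }
console.log(paginate(8000, 3, 250)); // → { skip: 500, limit: 250, pages: 32 }
```

One caveat worth knowing: skip-based pagination itself gets slow on very deep pages, because MongoDB still walks past every skipped document; for large offsets, range-based (cursor) pagination on an indexed field scales better.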

Use .lean() on Read Queries

Mongoose documents carry a lot of invisible overhead — prototype methods, change tracking, virtual getters. When you're just reading data to send as a JSON response, you don't need any of that. .lean() returns plain JavaScript objects and can make a noticeable difference at scale:

JavaScript
// Without lean — full Mongoose document with overhead
const docs = await Report.find({ status: 'pending' });

// With lean — plain JS objects, faster and lighter
const docs = await Report.find({ status: 'pending' }).lean();

Parallelize Independent Queries

The original code was running its queries one after another. If two queries don't depend on each other's results, run them at the same time:

JavaScript
// Slow: sequential — second query waits for first to finish
const reports = await Report.find(filter);
const summary = await Report.aggregate([...]);

// Fast: parallel — both queries run simultaneously
const [reports, summary] = await Promise.all([
  Report.find(filter).lean(),
  Report.aggregate([...])
]);
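You can see the effect without a database by standing in fake queries with setTimeout (purely illustrative; the real Report calls need MongoDB behind them):

```javascript
// Fake "queries" with setTimeout standing in for Report.find / Report.aggregate.
const fakeQuery = (value, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function demo() {
  const start = Date.now();
  // Two independent 100ms "queries" run simultaneously under Promise.all.
  const [reports, summary] = await Promise.all([
    fakeQuery('reports', 100),
    fakeQuery('summary', 100),
  ]);
  return { reports, summary, elapsed: Date.now() - start };
}

demo().then((r) => console.log(r.elapsed)); // ~100ms, not the ~200ms sequential awaits would take
```

With two queries the saving is modest; with five or six independent lookups per request, it compounds quickly.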

The Results

Here's what changed after applying all four fixes:

  • Response time (8k records): ~15 minutes → ~15 seconds
  • Documents scanned per query: 240,000 → 8,000
  • Indexes used: None → Compound index on status + createdAt
  • Aggregation location: Node.js → MongoDB
  • Pagination: None → 100 records per page

Is 15 seconds blazing fast? No. But with pagination in place, real users now get their first 100 records in under a second. The full 8,000-record export runs as a background job and takes about 15 seconds — completely acceptable.

What to Actually Check When You Inherit a Slow API

This is the checklist I now go through any time I land in a slow MongoDB + Node.js codebase:

  1. Run .explain('executionStats') first — look at totalDocsExamined vs nReturned. A large gap means missing or wrong indexes.
  2. Add compound indexes that match your query patterns field-by-field. Order matters — equality fields go first, range fields go last.
  3. Move aggregation logic into MongoDB — stop pulling data into Node.js just to run a reduce or a sum.
  4. Add .lean() on all read queries where you're just serializing to JSON.
  5. Paginate everything — no route should return thousands of records in a single response.
  6. Put $project and $match early in aggregation pipelines to reduce document size as fast as possible.
  7. Use Promise.all for independent queries instead of awaiting them one by one.

You won't always get a 60x improvement. But if someone else wrote the code and nothing was thought through, there's almost always something obvious hiding in the query patterns. Start with .explain() and work from there.


Sandeep Bansod

I'm a front-end developer based in India, focused on making websites look great, work fast, and deliver a seamless user experience. Over the years I've worked across digital design, web development, email design, and app UI/UX and development.
