Performance Anti-patterns in Modern JavaScript

Performance Problems Hide in Plain Sight

Most performance problems in JavaScript applications are not caused by slow algorithms or inefficient data structures. They are caused by patterns that look perfectly reasonable in a code review but silently degrade performance as the application scales.

A sequential await where parallel execution was possible. An API call inside a loop. A full lodash import for a single utility function. These patterns do not cause failures. They cause slowness that creeps in gradually, making the application a little worse with every sprint until someone finally notices that page load takes four seconds.

This article covers the most common performance anti-patterns in modern JavaScript, explains why they are problematic, and shows practical strategies for detection and remediation.

Sequential Awaits Where Parallel Is Possible

This is one of the most common performance mistakes in async JavaScript. When multiple asynchronous operations are independent of each other, they should run concurrently. Running them sequentially wastes time waiting for each to complete before starting the next.

The Problem

// Sequential: total time = time(a) + time(b) + time(c)
const userProfile = await fetchProfile(userId);
const userOrders = await fetchOrders(userId);
const userPreferences = await fetchPreferences(userId);

If each call takes 200ms, this code takes 600ms. None of these calls depend on the result of the others, so there is no reason to wait.

The Fix

// Parallel: total time = max(time(a), time(b), time(c))
const [userProfile, userOrders, userPreferences] = await Promise.all([
  fetchProfile(userId),
  fetchOrders(userId),
  fetchPreferences(userId),
]);

The parallel version takes 200ms instead of 600ms. The improvement scales linearly with the number of independent operations.

Use Promise.all when all operations must succeed. Use Promise.allSettled when you want results from all operations regardless of individual failures.
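The difference is easiest to see with a pair of stub calls, one of which fails. This is an illustrative sketch (the stub functions are made up for the example, not part of any real service):

```javascript
// Stub calls for illustration only: one resolves, one rejects.
const fetchOk = async () => 'profile data';
const fetchFail = async () => { throw new Error('orders service down'); };

// Promise.all would reject as soon as fetchFail throws.
// Promise.allSettled waits for everything and reports each outcome.
const results = await Promise.allSettled([fetchOk(), fetchFail()]);

// Each result is { status: 'fulfilled', value } or { status: 'rejected', reason }.
const values = results.map((r) => (r.status === 'fulfilled' ? r.value : null));
// values → ['profile data', null]
```

This makes allSettled a good fit when partial data is acceptable, such as rendering a dashboard where one failed widget should not blank the whole page.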

Why This Pattern Creeps In

Sequential awaits look natural. They read top to bottom, like synchronous code. Developers often write them without realising the performance cost because the individual calls are fast enough in development. It is only in production, with real network latency and load, that the cumulative cost becomes visible.

N+1 Patterns in API Calls

The N+1 problem is well understood in database queries, but the same pattern appears in API calls and is just as damaging.

The Problem

// 1 call to get the list + N calls for details
const orders = await fetchOrders(userId);

for (const order of orders) {
  const details = await fetchOrderDetails(order.id);
  order.details = details;
}

If the user has 50 orders, this makes 51 API calls. Each call has network overhead, and the total execution time is unpredictable because it depends on the response time of every individual call.

The Fix

Batch the requests. If the API supports it, fetch all details in a single call.

const orders = await fetchOrders(userId);
const orderIds = orders.map(o => o.id);
const allDetails = await fetchOrderDetailsBatch(orderIds);

If batching is not available, use Promise.all to at least run the detail calls concurrently rather than sequentially.

const orders = await fetchOrders(userId);
const detailedOrders = await Promise.all(
  orders.map(async (order) => {
    const details = await fetchOrderDetails(order.id);
    return { ...order, details };
  })
);

This does not reduce the number of calls, but it runs them in parallel, which is a significant improvement. For large lists, consider adding concurrency limits with a library like p-limit to avoid overwhelming the target API.
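A concurrency limit does not require a library. The sketch below is a hand-rolled version of the idea p-limit provides; the function name mapWithConcurrency is made up for this example:

```javascript
// Run `task` over `items` with at most `limit` tasks in flight at once.
// Results come back in the same order as the input.
async function mapWithConcurrency(items, limit, task) {
  const results = new Array(items.length);
  let next = 0;

  // Each worker repeatedly claims the next unclaimed index until none remain.
  async function worker() {
    while (next < items.length) {
      const i = next++;
      results[i] = await task(items[i]);
    }
  }

  const workers = Array.from({ length: Math.min(limit, items.length) }, worker);
  await Promise.all(workers);
  return results;
}
```

Applied to the orders example above, something like mapWithConcurrency(orders, 5, (o) => fetchOrderDetails(o.id)) keeps at most five detail requests in flight at a time.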

Importing Heavy Libraries for Small Tasks

The JavaScript ecosystem has a well-documented tendency towards large, monolithic libraries. Importing the entirety of a library for a single function is one of the most common causes of unnecessarily large bundle sizes.

Common Offenders

  • moment (~290 KB): replace with date-fns, which is tree-shakeable; size varies by function.
  • lodash (~530 KB): replace with lodash-es or native methods; size varies.
  • request (~1.6 MB): replace with Node's built-in fetch; 0 KB.
  • chalk (~40 KB in browser bundles): not needed in the browser; 0 KB.
  • bluebird (~80 KB): replace with native Promises; 0 KB.

The Fix

Use tree-shakeable alternatives. Instead of import _ from 'lodash', use import { debounce } from 'lodash-es' or import debounce from 'lodash/debounce'. Better yet, check whether the language itself provides what you need: modern JavaScript has Array.prototype.flat, Object.entries, structuredClone, and many other utilities that once required a library.
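As a quick illustration, here are three lodash utilities alongside their native equivalents (structuredClone requires Node 17+ or a modern browser):

```javascript
// Native replacements for utilities that once required lodash.
const original = { a: { b: 1 } };

// Array.prototype.flat replaces _.flattenDeep.
const flattened = [1, [2, [3, [4]]]].flat(Infinity);
// flattened → [1, 2, 3, 4]

// Object.entries replaces _.toPairs.
const pairs = Object.entries({ retries: 3, timeout: 500 });
// pairs → [['retries', 3], ['timeout', 500]]

// structuredClone replaces _.cloneDeep: mutating the copy
// leaves the original untouched.
const copy = structuredClone(original);
copy.a.b = 2;
```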

Audit your bundle. Tools like webpack-bundle-analyzer, source-map-explorer, or rollup-plugin-visualizer (which also works with Vite) show exactly which packages contribute to your bundle size. Run this analysis regularly and investigate any package that occupies a disproportionate share.

Check before installing. Before adding a new dependency, check its size on bundlephobia.com and consider whether the functionality justifies the weight. A 5 KB utility library is a different proposition from a 500 KB one.

Large Bundle Sizes

Bundle size affects every user on every page load. A large JavaScript bundle means longer download times, more parsing time, and delayed interactivity. On mobile devices with slower processors and connections, the impact is even more pronounced.

What Causes Large Bundles

Beyond heavy imports, several other patterns contribute to bundle bloat:

  • No code splitting. Loading the entire application upfront instead of splitting by route or feature means users download code they may never use.
  • Unused exports. Dead code that is exported but never imported. Tree shaking should remove these, but it only works with ES modules. CommonJS modules are not tree-shakeable.
  • Duplicate packages. Different versions of the same package pulled in by different dependencies. Your lockfile may contain three versions of the same library without anyone noticing.
  • Unoptimised assets. JSON files, SVGs, and other static assets bundled without compression.

The Fix

Enable code splitting. Modern bundlers (Vite, webpack, Rollup) support dynamic imports that create separate chunks loaded on demand. Split by route at minimum.

// Instead of a static import, which lands in the main bundle:
import { HeavyComponent } from './HeavyComponent';

// Use a dynamic import, which the bundler splits into its own chunk.
// (lazy here is React.lazy; render the result inside a <Suspense> boundary.
// Other frameworks have equivalents, and plain import() works anywhere.)
const HeavyComponent = lazy(() => import('./HeavyComponent'));

Monitor bundle size in CI. Set a budget for your main bundle and fail the build if it is exceeded. Tools like bundlesize or size-limit make this straightforward. A budget creates a forcing function: before adding weight, the team must remove an equivalent amount or justify the increase.
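With size-limit, the budget lives in package.json. A minimal sketch, assuming the bundle is emitted to dist/app.js (adjust the path and limit to your build):

```json
{
  "scripts": {
    "size": "size-limit"
  },
  "size-limit": [
    {
      "path": "dist/app.js",
      "limit": "200 KB"
    }
  ]
}
```

Running npm run size in CI then fails the build whenever the bundle exceeds the budget.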

Deduplicate packages. Run npm dedupe or check your lockfile for multiple versions of the same package. Keeping dependencies reasonably current reduces the likelihood of version conflicts that cause duplication.

How These Patterns Creep In Unnoticed

Performance anti-patterns are insidious because they rarely cause obvious failures. Each individual instance has a small impact. A single sequential await adds 200ms. One unnecessary import adds 50 KB. A single N+1 loop adds a few hundred milliseconds.

The problem is accumulation. Over months of development, these patterns multiply. The application gets a little slower with each sprint. By the time someone investigates, the causes are spread across dozens of files and hundreds of commits.

This is why detection must be automated. Manual code review catches some patterns, but reviewers are focused on correctness and business logic, not on whether an await could be parallelised.

Detection Strategies

Practical approaches for catching these patterns before they reach production:

  • Static analysis for async patterns. Lint rules can flag sequential awaits and await-in-loop patterns. Custom ESLint rules or tools that parse the AST can identify these automatically.
  • Bundle analysis in CI. Run bundle size checks on every pull request. Flag increases above a threshold and require justification.
  • Import analysis. Scan for full imports of known heavy packages. A rule that flags import _ from 'lodash' but allows import { debounce } from 'lodash-es' is easy to implement and catches a common problem.
  • Performance budgets. Set explicit budgets for Time to Interactive, Largest Contentful Paint, and total JavaScript size. Measure against these budgets regularly.
  • Profiling in staging. Run performance profiles against realistic data volumes. Patterns that are invisible with 10 records become obvious with 10,000.
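Some of this needs no custom tooling at all. The no-await-in-loop rule ships with ESLint core and catches the N+1 loop pattern directly; a minimal flat-config sketch:

```javascript
// eslint.config.js sketch: no-await-in-loop is a built-in ESLint rule,
// so flagging awaits inside loops requires no plugin or custom AST work.
const config = [
  {
    rules: {
      'no-await-in-loop': 'warn',
    },
  },
];

export default config;
```

Consecutive independent awaits are harder: there is no core rule for them, which is where custom AST-based rules come in.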

Frequently Asked Questions

How do I find sequential awaits in a large codebase?

Use static analysis tools that can parse async functions and identify consecutive await expressions where the second does not depend on the result of the first. Some linting plugins offer this out of the box. For a quick manual check, search for files containing multiple await keywords within the same function scope.

Is Promise.all always better than sequential awaits?

Not always. If the second operation depends on the result of the first, they must be sequential. Promise.all is only appropriate when the operations are independent. Additionally, if you are calling an external API with rate limits, firing all requests simultaneously may trigger throttling. In those cases, use a concurrency limiter.

What is a reasonable JavaScript bundle size budget?

There is no universal answer, but a common guideline for web applications is to keep the initial JavaScript payload under 200 KB (compressed). For performance-critical applications, aim for under 100 KB. The right budget depends on your audience, their devices, and their network conditions.

How often should I audit bundle size?

Monitor it continuously in CI so that increases are caught at the pull request level. Run a detailed bundle analysis monthly to identify opportunities for optimisation that may not be caught by size thresholds alone.

Building Performance Into Your Workflow

Performance is not something you fix after launch. It is something you maintain throughout development. Add automated checks to your CI pipeline that catch these patterns early. Set budgets, measure regularly, and treat regressions with the same urgency as bugs.

The patterns in this article are straightforward to detect once you know what to look for. The challenge is building the discipline to catch them consistently, before they accumulate into a problem that requires a dedicated performance sprint to resolve.