
Supply Chain Security for Modern JavaScript Applications

The average JavaScript application has 1,200 transitive dependencies. When you run npm install on a fresh Next.js project, you are trusting code written by roughly 800 individual maintainers, many of whom are anonymous, unpaid, and maintaining their packages in their spare time. The event-stream incident in 2018, the ua-parser-js compromise in 2021, the colors.js sabotage in 2022, and the xz-utils backdoor in 2024 demonstrated that supply chain attacks are not theoretical risks. They are recurring operational incidents. This post covers concrete, implementable measures for securing your JavaScript supply chain, ordered from highest impact to lowest effort.

Article Overview

1. Lock Files Are Not Optional
2. Audit and Automate Dependency Updates
3. Restrict Install Scripts
4. Pin Exact Versions for Critical Dependencies
5. Verify Package Provenance
6. Runtime Protection: Sandboxing and Permissions
7. Private Registry and Namespace Claiming
8. Monitoring and Responding to Compromise

HARBOR SOFTWARE · Engineering Insights

Lock Files Are Not Optional

This should be obvious, but I still encounter teams that do not commit their lock files. A package-lock.json, yarn.lock, or pnpm-lock.yaml pins every dependency (including transitive dependencies) to an exact version and integrity hash. Without a lock file, npm install resolves the latest compatible version at install time, which means a compromised version published between your last install and your CI build will be silently picked up.

Commit your lock file. Always. If your team debates this, show them what happens when you run npm install on a project with "lodash": "^4.17.0" in package.json. Without a lock file, you get whatever 4.x version is current. With a lock file, you get exactly the version you tested against, byte-for-byte verified by the integrity hash. The lock file is not just a convenience for reproducible builds; it is a security control that prevents silent dependency substitution.

Furthermore, configure your CI to use npm ci instead of npm install. The ci command installs exactly what the lock file specifies, fails if the lock file is out of sync with package.json, and does not modify the lock file. This ensures your CI builds are reproducible and cannot be affected by newly published package versions. We have seen incidents where npm install in CI silently updated a transitive dependency to a compromised version because the lock file was stale and the package.json range allowed the update.

# .github/workflows/ci.yml
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4
    with:
      node-version: '20'
      cache: 'npm'
  - run: npm ci  # NOT npm install
  - run: npm test

One additional detail: review lock file changes in PRs. When a dependency update changes the lock file, the diff shows exactly which packages changed and to which versions. This is a natural checkpoint to catch unexpected changes. If a PR that modifies application code also changes 40 entries in the lock file, that deserves scrutiny: did the developer run npm install (which updates everything) instead of npm ci (which uses the existing lock file)?
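To make that review concrete, here is a minimal sketch (a hypothetical helper, not a published tool) that diffs the `packages` maps of two parsed package-lock.json files (lockfileVersion 2 or 3) and reports which entries were added, removed, or changed version:

```javascript
// Diff the "packages" maps of two parsed package-lock.json files
// (lockfileVersion 2 or 3). Returns one entry per package whose
// resolved version differs; `before`/`after` are null for
// added/removed packages respectively.
function diffLockVersions(oldLock, newLock) {
  const oldPkgs = oldLock.packages || {};
  const newPkgs = newLock.packages || {};
  const changes = [];
  const names = new Set([...Object.keys(oldPkgs), ...Object.keys(newPkgs)]);
  for (const name of names) {
    if (name === '') continue; // "" is the root project entry
    const before = oldPkgs[name]?.version;
    const after = newPkgs[name]?.version;
    if (before !== after) {
      changes.push({ name, before: before ?? null, after: after ?? null });
    }
  }
  return changes;
}
```

Feed it `git show main:package-lock.json` and the working copy: a PR that should touch one package but reports forty changed entries is exactly the red flag described above.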

Audit and Automate Dependency Updates

Running npm audit is table stakes, but it is only useful if you actually act on the results. The common failure mode is: npm audit reports 47 vulnerabilities, most are in deep transitive dependencies you cannot control, the team marks them as “accepted risk,” and the audit output becomes noise that everyone ignores.

A better approach is to automate dependency updates with Renovate (not Dependabot, for reasons I will explain) and configure it to merge patch and minor updates automatically when tests pass:

// renovate.json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch"],
      "automerge": true,
      "automergeType": "branch"
    },
    {
      "matchUpdateTypes": ["minor"],
      "automerge": true,
      "automergeType": "pr",
      "requiredStatusChecks": ["ci/test", "ci/lint", "ci/security"]
    },
    {
      "matchUpdateTypes": ["major"],
      "automerge": false,
      "labels": ["major-update", "review-required"]
    }
  ],
  "vulnerabilityAlerts": {
    "enabled": true,
    "labels": ["security"],
    "automerge": true
  }
}

Why Renovate over Dependabot? Three reasons. First, Renovate supports grouping related updates into a single PR (so you do not get 30 separate PRs for a monorepo). You can group all @babel/* packages into one PR, all testing library packages into another, and all linting packages into a third. This reduces PR noise from 30 PRs per week to 5-8, which is manageable. Second, Renovate has fine-grained package rules: you can auto-merge patches from trusted packages (lodash, date-fns, uuid) but require review for packages that handle sensitive operations (jsonwebtoken, bcrypt, passport). Third, Renovate supports replacement rules that handle package renames and deprecations, automatically updating your code when a package is deprecated in favor of a successor.
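The grouping described above is configured through packageRules as well. A sketch of what that might look like (the group names and patterns are illustrative; newer Renovate versions also accept regex strings directly in matchPackageNames):

```json
// renovate.json (excerpt)
{
  "packageRules": [
    {
      "groupName": "babel packages",
      "matchPackagePatterns": ["^@babel/"]
    },
    {
      "groupName": "linting",
      "matchPackageNames": ["eslint", "prettier"],
      "matchPackagePatterns": ["^eslint-"]
    }
  ]
}
```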

Dependabot is simpler but less configurable, and in our experience that lack of configurability leads to PR fatigue that causes teams to stop reviewing dependency updates altogether. When engineers ignore 25 out of 30 Dependabot PRs per week, they will also ignore the one PR that contains a critical security fix.

Restrict Install Scripts

npm packages can execute arbitrary code during installation via preinstall, install, and postinstall scripts. This is the primary attack vector for supply chain compromises: a malicious package publishes a new version with a postinstall script that exfiltrates environment variables (which often contain API keys and database credentials) to an attacker-controlled server.

The ua-parser-js compromise worked exactly this way. Attackers who hijacked the maintainer's account published versions whose preinstall script downloaded a cryptominer and a credential stealer, and the script ran automatically on every npm install across thousands of CI pipelines and developer machines before the versions were pulled. (The earlier event-stream attack used a different vector: its payload hid in a runtime dependency, flatmap-stream, and activated inside the Copay wallet app to harvest Bitcoin private keys. Install scripts are the most common delivery mechanism, but not the only one.)

The defense is to disable install scripts by default and whitelist the packages that legitimately need them:

# .npmrc
ignore-scripts=true

Then, for packages that require install scripts (native addons like sharp, bcrypt, sqlite3; or tools like esbuild and swc that download platform-specific binaries), use an allow-list. One option is LavaMoat's @lavamoat/allow-scripts, which keeps the allow-list in package.json and runs only the approved scripts:

# package.json (with @lavamoat/allow-scripts)
{
  "scripts": {
    "setup": "npm install && npx --yes @lavamoat/allow-scripts"
  },
  "lavamoat": {
    "allowScripts": {
      "sharp": true,
      "@swc/core": true,
      "esbuild": true
    }
  }
}

Alternatively, if you use pnpm, its built-in onlyBuiltDependencies configuration provides a cleaner mechanism:

# pnpm-workspace.yaml (pnpm 10+; older versions use the pnpm field in package.json)
onlyBuiltDependencies:
  - sharp
  - "@swc/core"
  - esbuild

When we enabled ignore-scripts=true across our projects, we discovered that 4 of our 1,800 transitive dependencies had install scripts. Three were legitimate native addons. One was a CSS framework that ran a postinstall script to display a sponsorship message in the terminal. None were malicious, but the exercise demonstrated how few packages actually need install scripts and how much unnecessary attack surface we were exposing.

Pin Exact Versions for Critical Dependencies

Lock files pin transitive dependencies, but your direct dependencies in package.json still use semver ranges by default. For critical dependencies (your web framework, authentication library, database driver, and anything that handles user input), pin exact versions:

// package.json
{
  "dependencies": {
    "next": "14.2.8",          // Exact: critical framework
    "jsonwebtoken": "9.0.2",   // Exact: handles authentication
    "pg": "8.12.0",            // Exact: database driver
    "zod": "3.23.8",           // Exact: input validation
    "lodash": "^4.17.21",      // Range: utility, low risk
    "date-fns": "^3.6.0"       // Range: utility, low risk
  }
}

The tradeoff is that pinned versions require manual updates (or Renovate PRs), while semver ranges update automatically within the lock file. For low-risk utility libraries, the convenience of semver ranges is fine. For anything security-sensitive, the control of exact versions is worth the maintenance cost. A compromised patch release of your authentication library is catastrophic; a compromised patch release of a date formatting library is much less likely to be exploitable in a meaningful way.

We categorize our dependencies into three tiers: Tier 1 (security-critical: auth, crypto, input validation, database drivers) gets exact pinning. Tier 2 (framework and build tools: Next.js, React, TypeScript, esbuild) gets patch-range pinning (~14.2.0 allows 14.2.x but not 14.3.0). Tier 3 (utilities: lodash, date-fns, uuid) gets standard caret-range pinning (^4.17.0). This tiered approach balances security control with maintenance burden.
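A tier policy like this can be enforced mechanically in CI. A sketch (the policy map and function name are illustrative, not a published tool) that checks a dependencies map against the expected pinning style per package:

```javascript
// Pinning styles: 'exact' (no prefix), 'patch' (~x.y.z), 'caret' (^x.y.z).
// Returns every dependency whose declared range violates the policy.
function checkPinning(dependencies, policy) {
  const styleOf = range =>
    range.startsWith('~') ? 'patch' :
    range.startsWith('^') ? 'caret' :
    /^\d/.test(range) ? 'exact' : 'other';
  const violations = [];
  for (const [name, range] of Object.entries(dependencies)) {
    const expected = policy[name];
    if (expected && styleOf(range) !== expected) {
      violations.push({ name, range, expected });
    }
  }
  return violations;
}
```

Wired into CI, this fails the build if someone loosens a Tier 1 dependency back to a caret range in a routine PR.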

Verify Package Provenance

npm now supports package provenance via Sigstore, which cryptographically links a published package to its source code and build process. When a package has provenance, you can verify that the published tarball was built from a specific commit in a specific repository by a specific CI system, with no human intervention between the source code and the published artifact.

This addresses the specific attack vector where an attacker with stolen npm credentials publishes a malicious version from their local machine. With provenance, the registry verifies that the package was built by the repository’s CI system (GitHub Actions, GitLab CI, etc.), not by a human running npm publish from their laptop. Even if the attacker has the maintainer’s npm token, they cannot publish a provenance-verified package without also compromising the CI system.

Check provenance with:

npm audit signatures

# Output:
# audited 1,247 packages in 3s
# 1,247 packages have verified registry signatures
# 312 packages have verified attestations

As of late 2024, approximately 25% of the top 1,000 npm packages publish provenance attestations. This number is growing as npm and major package maintainers adopt the workflow. For your own packages, enable provenance by adding --provenance to your publish command:

# In GitHub Actions
- run: npm publish --provenance --access public
  env:
    NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

This requires your package to be published from GitHub Actions (or another supported CI provider) with OIDC identity federation. The CI system’s identity is cryptographically bound to the published package, making it impossible for someone with stolen npm credentials to publish a compromised version from their local machine.
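For the OIDC binding to work, the workflow also needs the id-token: write permission. A minimal publish job might look like this (job name and secret name are illustrative):

```yaml
# .github/workflows/publish.yml
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write   # required for the provenance attestation via OIDC
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          registry-url: 'https://registry.npmjs.org'
      - run: npm ci
      - run: npm publish --provenance --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```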

Runtime Protection: Sandboxing and Permissions

All of the above measures protect you at install time and build time. But what about runtime? A malicious dependency that passes all static checks can still exfiltrate data or open reverse shells when your application runs.

Node.js 20 introduced an experimental permissions model that restricts what code can do at runtime:

node --experimental-permission \
     --allow-fs-read=/app/config/* \
     --allow-fs-write=/app/logs/* \
     --allow-child-process \
     server.js
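When the permission model is enabled, code can also query it at runtime via process.permission. A sketch of a guard (the helper name is illustrative; without the flag, process.permission is undefined and nothing is restricted, so the guard falls back to permissive):

```javascript
// Returns whether this process may read the given path.
// process.permission exists only when Node runs with
// --experimental-permission; under plain `node`, all access is allowed.
function canReadPath(p) {
  if (typeof process.permission === 'undefined') return true;
  return process.permission.has('fs.read', p);
}
```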

This is still experimental and has rough edges (many packages assume unrestricted filesystem access), but it represents the future of supply chain defense at runtime. For production deployments today, the practical alternatives are: run your application in a minimal Docker container with a read-only filesystem, use network policies to restrict outbound connections (a dependency cannot exfiltrate data if it cannot reach the internet), and use seccomp profiles to restrict syscalls.

# Docker Compose with security restrictions
services:
  app:
    image: my-app:latest
    read_only: true
    tmpfs:
      - /tmp:size=100m
    security_opt:
      - no-new-privileges:true
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.0'

networks:
  internal:
    driver: bridge
    internal: true  # No outbound internet access

The read_only: true flag prevents the application (and any malicious dependency) from writing to the filesystem, which blocks many exfiltration techniques. The internal: true network configuration blocks all outbound internet access, which prevents data exfiltration over HTTP. If your application needs to call external APIs, create a separate network for those specific connections and route only the approved traffic through it.

Private Registry and Namespace Claiming

If your organization uses scoped packages (@yourorg/package-name), claim your scope on the public npm registry even if you use a private registry. Namespace squatting is a real attack vector: an attacker publishes @yourorg/internal-utils on the public registry, and a developer machine with a misconfigured registry installs the public malicious version instead of your private one.

For organizations using a private registry (Artifactory, Verdaccio, GitHub Packages), configure .npmrc to route scoped packages to your private registry and everything else to the public registry:

# .npmrc
@yourorg:registry=https://npm.yourorg.com/
//npm.yourorg.com/:_authToken=${NPM_PRIVATE_TOKEN}
registry=https://registry.npmjs.org/

This ensures that @yourorg/* packages always resolve from your private registry, preventing dependency confusion attacks. Test this by running npm view @yourorg/any-package --registry=https://registry.npmjs.org/ and verifying it returns a 404 or your organization’s placeholder package.
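The routing rule itself is mechanical: the scope prefix decides the registry. A sketch of the resolution logic (a simplification of what the npm client does with those .npmrc entries):

```javascript
// Resolve which registry a package name maps to, given scope->registry
// mappings (as in .npmrc) and a default registry for everything else.
function resolveRegistry(pkgName, scopeRegistries, defaultRegistry) {
  if (pkgName.startsWith('@')) {
    const scope = pkgName.split('/')[0]; // e.g. "@yourorg"
    if (scopeRegistries[scope]) return scopeRegistries[scope];
  }
  return defaultRegistry;
}
```

The security property follows directly: an unscoped or unmapped name can fall through to the public registry, but a mapped scope never does.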

Dependency confusion attacks are more common than most teams realize. In 2021, a security researcher demonstrated the technique against 35 major companies, including Apple, Microsoft, and PayPal, by publishing packages on npm that matched the names of their internal packages. The npm client resolved the public versions because they had higher version numbers than the private ones. The fix is straightforward: scope your packages and configure your .npmrc to route scoped packages to your private registry. The cost of not doing this is the possibility that any developer on your team could inadvertently install an attacker’s code by running npm install.

Monitoring and Responding to Compromise

Prevention is necessary but insufficient. You also need the ability to detect and respond to a supply chain compromise after it occurs. The median time to detect a supply chain attack is 44 days according to research from Google’s Software Supply Chain Security team. During those 44 days, every developer machine and every CI pipeline that installed the compromised package was potentially exposed.

Detection requires monitoring at three levels. First, monitor for unexpected network connections from your applications. A compromised dependency that exfiltrates data must communicate with an external server. Network monitoring tools like Falco (for Kubernetes) or osquery (for servers and developer machines) can detect unexpected outbound connections from processes that should not be making them. Configure alerts for any outbound connection from your Node.js process to an IP address or domain that is not in your approved list. This will generate false positives initially (many legitimate packages phone home for analytics or update checks), but the approved list stabilizes quickly and the remaining alerts are worth investigating.

Second, monitor for changes in package behavior between versions. Tools like Socket.dev analyze the code diff between package versions and flag suspicious additions: new network calls, new filesystem access, new environment variable reads, new child process spawns, and obfuscated code. Socket integrates into GitHub as a PR check and flags suspicious dependency updates before they are merged. We added Socket to our CI pipeline six months ago and it flagged two suspicious packages: one that added a postinstall script in a minor version bump (turned out to be a legitimate telemetry addition, but the flag was correct to raise it), and one that added environment variable reading in a patch version (turned out to be a maintainer adding a configuration option, but again, worth reviewing).

Third, maintain a software bill of materials (SBOM) for every deployment. An SBOM is a complete list of every package, version, and hash in your deployed application. When a new vulnerability is disclosed or a package is identified as compromised, you can immediately determine whether your deployed applications are affected by querying the SBOM rather than manually checking each application’s lock file. We generate SBOMs using cyclonedx-npm during our CI build and store them alongside our deployment artifacts:

# Generate SBOM during CI build
npx @cyclonedx/cyclonedx-npm --output-file sbom.json --spec-version 1.5

# Store alongside deployment artifact
aws s3 cp sbom.json s3://deploy-artifacts/$APP/$VERSION/sbom.json

When the xz-utils backdoor was discovered in March 2024, teams with SBOMs were able to determine their exposure within minutes. Teams without SBOMs spent hours or days checking each application individually. The SBOM cost us nothing to generate (it adds 3 seconds to our CI build) and it has already saved us significant incident response time on two occasions.
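Querying a stored SBOM for a compromised package is only a few lines. A sketch (the document shape follows CycloneDX JSON; this version match is exact-only, not a semver range check):

```javascript
// Search a parsed CycloneDX SBOM for components matching a package name,
// optionally restricted to one exact version.
function findComponents(sbom, name, version) {
  return (sbom.components || []).filter(c =>
    c.name === name && (version === undefined || c.version === version)
  );
}
```

Run this across every deployment's stored sbom.json and you have the exposure list for an advisory in a single pass.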

Response to a confirmed compromise follows a specific playbook: identify all affected systems using the SBOM, rotate all credentials that were accessible to the compromised dependency (remember, environment variables mean all credentials in the process), deploy patched versions to all affected systems, and review logs for evidence of data exfiltration. The credential rotation step is the most painful and the strongest argument for moving beyond environment variables to a proper secrets management system where credentials can be rotated programmatically rather than manually updated in 15 different deployment configurations.

Supply chain security is not a one-time setup; it is an ongoing practice. Lock your dependencies, automate updates, restrict install scripts, pin critical packages, verify provenance, and restrict runtime capabilities. None of these measures is sufficient alone, but together they make a supply chain attack substantially harder to execute and faster to detect. The goal is not to be impenetrable; it is to be harder to compromise than the next target, and to detect compromises quickly when they do occur.
