The CI/CD Pipeline as a Supply Chain Attack Target
The SolarWinds attack in 2020 demonstrated what security researchers had been warning about for years: the build pipeline is a more valuable target than the application itself. By compromising the build system, attackers inserted malicious code into signed, legitimate software that was then distributed to thousands of customers. The attackers didn't need to bypass any application security controls; they owned the process that created the software.
Your CI/CD pipeline is particularly attractive to attackers because it concentrates risk. It has read access to all your source code. It holds credentials for cloud environments, container registries, and package repositories. It has write access to production infrastructure. And it runs code from dozens of third-party sources (package registries, GitHub Actions, build tool plugins) that most teams audit far less rigorously than their own code.
Real-world precedent: The 3CX supply chain attack (2023) involved a compromised upstream dependency being included in a legitimately signed installer. The malicious code was in the dependency, not in 3CX's own codebase, which made it invisible to standard code review processes.
The attack surface includes: third-party GitHub Actions used in workflows, npm/pip/Maven packages pulled at build time, base container images, build system plugins, and the CI/CD platform configuration itself. Each of these is a vector for supply chain compromise.
Dependency Confusion and Namespace Attacks
In 2021, security researcher Alex Birsan demonstrated dependency confusion against 35 major organisations including Apple, Microsoft, and PayPal. The technique exploits how package managers resolve internal vs public package names: if your organisation uses a private package named company-utils and an attacker publishes a public package with the same name at a higher version number, many package managers will pull the public (malicious) version.
The attack works because package managers like npm, pip, and RubyGems default to checking public registries alongside (or instead of) private ones. When the same package name appears in both, the version with the higher semver number wins, and the attacker controls that version.
```ini
# .npmrc: force all @company-scoped packages to the internal registry only,
# so they can never resolve via the public npmjs.com registry
@company:registry=https://registry.internal.company.com
# Always send credentials to the configured registry
always-auth=true
# Do NOT use a flat registry config without scoping:
# that still allows public fallback for unscoped packages
```
Mitigations for Dependency Confusion
- Use scoped package namespaces (e.g., @yourorg/package-name) for all internal packages
- Configure your package manager to resolve internal packages from your private registry only, not as a fallback
- Register your internal package names on public registries as placeholder packages to prevent squatting
- Use lock files (package-lock.json, poetry.lock) and verify integrity hashes
- Scan for dependency confusion patterns in CI, using tools like confused or SCA scanners with namespace awareness
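As a concrete CI check for the "private registry only" rule, the sketch below flags scoped packages whose resolved URL points anywhere other than the internal registry. It assumes an npm v7+ package-lock.json already parsed into a dict; the @company scope and registry URL are placeholders for your own.

```python
# Hypothetical internal registry URL; replace with your own.
INTERNAL_REGISTRY = "https://registry.internal.company.com"

def find_confused_packages(lock: dict, scope: str = "@company/") -> list:
    """Flag scoped packages in an npm v7+ lockfile structure whose
    'resolved' URL is not the internal registry (possible confusion)."""
    suspicious = []
    # npm v7+ lockfiles list every installed package under "packages",
    # keyed by its node_modules path
    for path, meta in lock.get("packages", {}).items():
        name = path.rsplit("node_modules/", 1)[-1]
        resolved = meta.get("resolved", "")
        if name.startswith(scope) and resolved and not resolved.startswith(INTERNAL_REGISTRY):
            suspicious.append(f"{name} -> {resolved}")
    return suspicious

# Usage sketch:
#   import json
#   lock = json.load(open("package-lock.json"))
#   assert not find_confused_packages(lock), "dependency confusion risk!"
```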
Malicious Actions and Workflow Poisoning
GitHub Actions has become the most common CI/CD platform for open-source and many enterprise teams. The Actions marketplace allows any user to publish actions that other pipelines can consume. This creates a direct parallel to the npm supply chain problem: you may be running arbitrary third-party code in your pipeline with access to your repository secrets.
Workflow poisoning takes several forms. A legitimate action can be compromised after you've already adopted it: the maintainer's account gets phished, and the attacker pushes a new version that exfiltrates secrets. Alternatively, a malicious actor publishes a convincingly named action that mimics a popular one. Or a pull-request-based workflow can be manipulated to execute untrusted code from the PR branch with access to repository secrets.
tj-actions/changed-files compromise (2025): One of the most widely used GitHub Actions was compromised to print CI secrets to workflow logs. At its peak it was used in over 23,000 repositories. The attacker modified the action in place; all pipelines referencing a mutable tag like v45 were immediately affected.
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    # GOOD: explicit permissions limit the blast radius of the job token
    permissions:
      contents: read
      id-token: write  # only if OIDC is needed
    steps:
      # BAD: mutable tag, can be silently replaced
      # - uses: actions/checkout@v4

      # GOOD: pinned to immutable commit SHA
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2, verified 2026-03-01
```
Always pin third-party actions to a full commit SHA rather than a mutable tag or branch name. Use tools like pin-github-action or Dependabot to automate SHA pinning and keep pins up to date with verified releases.
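A lightweight pre-merge check can catch unpinned references before they land. The sketch below is illustrative rather than a full YAML parser: it scans workflow text for uses: references whose ref is not a 40-character commit SHA.

```python
import re

# A 'uses:' reference is considered pinned iff its ref is a full
# 40-hex-character commit SHA rather than a tag or branch name.
USES_RE = re.compile(r"uses:\s*([^\s@]+)@(\S+)")
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text: str) -> list:
    """Return action references that use a mutable tag or branch."""
    findings = []
    for match in USES_RE.finditer(workflow_text):
        action, ref = match.group(1), match.group(2)
        if action.startswith("./"):
            continue  # local actions in the same repo can't be SHA-pinned
        if not SHA_RE.match(ref):
            findings.append(f"{action}@{ref}")
    return findings

# Usage sketch: fail CI if any workflow file contains an unpinned action
#   text = open(".github/workflows/build.yml").read()
#   assert not unpinned_actions(text), "unpinned third-party actions found"
```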
Build Poisoning: Compromising the Build System
Build poisoning is the direct compromise of the system or environment that produces your software artifacts. Unlike dependency confusion (which targets your inputs) or action hijacking (which targets your workflow steps), build poisoning targets the build infrastructure itself: the CI runners, build caches, or build scripts.
Common build poisoning vectors include:
- Compromised self-hosted runners that have been backdoored
- Poisoned build caches, where a previous malicious build writes artifacts that future builds consume
- Environment variable injection from untrusted sources (e.g., PR titles or commit messages used in build scripts without sanitisation)
- Compromised build tool plugins (e.g., Gradle, Maven, or webpack plugins)
Self-hosted runners are high risk: GitHub-hosted runners are ephemeral and sandboxed; self-hosted runners persist between runs. If an attacker compromises a self-hosted runner (through a malicious workflow, a vulnerability in the runner software, or direct network access), they have persistent access to every subsequent build.
Hardening Build Environments
- Use ephemeral, immutable runners β spin up a fresh VM or container per build, destroy after use
- Never share runners between different trust levels (e.g., public forks and internal branches)
- Isolate build environments from production networks: runners should not have direct access to prod credentials
- Sanitise all inputs used in build scripts: treat PR titles, branch names, and commit messages as untrusted user input
- Audit build tool plugins and their dependencies with the same rigour as application dependencies
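On the input-sanitisation point, GitHub's own hardening guidance is to pass untrusted event data through environment variables rather than interpolating it directly into run scripts. A minimal workflow fragment illustrating the pattern (field names as in standard GitHub Actions pull_request events):

```yaml
steps:
  # BAD: the PR title is interpolated directly into the shell script;
  # a crafted title can inject and execute arbitrary commands
  # - run: echo "Building ${{ github.event.pull_request.title }}"

  # GOOD: pass untrusted input via an environment variable, where the
  # shell treats it as data rather than code
  - name: Log PR title safely
    env:
      PR_TITLE: ${{ github.event.pull_request.title }}
    run: echo "Building $PR_TITLE"
```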
Artifact Integrity: Signing and Verification
Even if your build process is clean, how do you prove that what gets deployed is what was built? Artifact signing provides a cryptographic chain of custody: each artifact (container image, binary, package) is signed with a key that proves it was produced by a specific build process and hasn't been tampered with since.
Sigstore's Cosign has become the standard for container image signing in cloud-native environments. It integrates with Rekor (a transparency log) and supports keyless signing using OIDC identity tokens, so there are no long-lived signing keys to manage or rotate.
```yaml
- name: Install Cosign
  uses: sigstore/cosign-installer@11086d25041f77fe8fe7b9ea4e48e3b9192b8f19 # v3.1.2
- name: Sign the published Docker image
  env:
    COSIGN_EXPERIMENTAL: true  # keyless mode via OIDC
  run: |
    # The digest pins to the exact image content hash, not a mutable tag
    cosign sign --yes \
      ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}@${{ steps.push.outputs.digest }}
```
At deployment time, verify the signature before running the image. Kubernetes admission controllers like Connaisseur or Kyverno can enforce that only signed images from trusted registries are allowed to run in your cluster.
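An admission policy for this enforcement might look like the following Kyverno sketch. The policy name, registry pattern, and signer identity are placeholders; check the Kyverno verifyImages documentation for the exact schema supported by your Kyverno version.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images   # illustrative name
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-image-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "registry.internal.company.com/*"   # placeholder registry
          attestors:
            - entries:
                - keyless:
                    # placeholder identity: the CI workflow that signed the image
                    subject: "https://github.com/company/*"
                    issuer: "https://token.actions.githubusercontent.com"
```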
SBOMs in CI/CD: Generating and Enforcing
A Software Bill of Materials (SBOM) is a machine-readable inventory of every component in a software artifact: direct dependencies, transitive dependencies, their versions, and their licenses. Generating SBOMs in CI/CD provides a foundation for supply chain visibility: you know exactly what went into every build.
The two dominant SBOM formats are SPDX (maintained by the Linux Foundation) and CycloneDX (maintained by OWASP). Most tooling supports both. Syft is the most widely used open-source SBOM generator for container images and filesystems.
```yaml
- name: Generate SBOM
  # in real use, pin this action to a commit SHA, per the guidance above
  uses: anchore/sbom-action@v0
  with:
    image: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
    format: cyclonedx-json
    output-file: sbom.cyclonedx.json
- name: Attest SBOM to image
  run: |
    # Attaching the SBOM as a signed attestation links it to the image digest
    cosign attest --yes \
      --predicate sbom.cyclonedx.json \
      --type cyclonedx \
      ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}@${{ steps.push.outputs.digest }}
```
Beyond generation, enforce SBOM consumption: scan the generated SBOM for known vulnerabilities with Grype or Trivy, fail the build if critical CVEs are present, and store SBOMs in an artifact store for post-incident forensics. When a new CVE drops, you can query your SBOM store to find all affected builds without re-scanning everything.
SLSA Framework: Build Provenance in Practice
SLSA (Supply-chain Levels for Software Artifacts, pronounced "salsa") is a security framework originally developed at Google and now hosted by the OpenSSF. It defines graduated levels of supply chain security and focuses on build provenance: a verifiable record of how an artifact was built, including what source code it came from, which build system produced it, and what inputs were used.
SLSA v0.1 defines four levels. Level 1 requires a generated provenance document. Level 2 requires the provenance to be hosted on a tamper-resistant service. Level 3 requires a hardened build environment where the provenance cannot be forged by the build process itself. Level 4 (the target for high-assurance software) adds two-party review and hermetic builds. (SLSA v1.0 later restructured this into three Build levels, but the progression is similar.)
SLSA in practice: Most organisations should target SLSA Level 2 as a starting point: it provides meaningful supply chain assurance without requiring significant infrastructure changes. The GitHub Actions SLSA generator makes this achievable in a single workflow step.
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      hashes: ${{ steps.hash.outputs.hashes }}
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: Build binary
        run: go build -o myapp ./...
      - name: Generate artifact hashes
        id: hash
        run: |
          set -euo pipefail
          echo "hashes=$(sha256sum myapp | base64 -w0)" >> "$GITHUB_OUTPUT"

  provenance:
    needs: [build]
    permissions:
      actions: read
      id-token: write
      contents: write
    uses: slsa-framework/slsa-github-generator/.github/workflows/[email protected]
    with:
      base64-subjects: ${{ needs.build.outputs.hashes }}
```
Dependency Pinning and Lock Files
Dependency pinning is the practice of specifying exact versions (or exact content hashes) for all dependencies rather than version ranges. In CI/CD, this means never running npm install without a committed package-lock.json, never running pip install without a requirements.txt with pinned versions and hash verification, and never pulling a container base image by tag.
Version ranges like ^1.2.3 or >=2.0 are convenient for development but dangerous in CI/CD. They mean your Monday build and your Friday build may use different code, and the difference could be a malicious update published to the registry between those days.
```dockerfile
# BAD: 'latest'-style tag is updated constantly
FROM node:20-alpine

# BAD: version tag is still mutable (can be force-pushed)
FROM node:20.11.0-alpine3.19

# GOOD: pinned to immutable content digest
FROM node:20.11.0-alpine3.19@sha256:bf77dc26e48ea95fca9d1aceb5acfa69d2e546b765ec2abfb502975b1a2d27f7
```
For application dependencies, use lock files and enable integrity checking. For npm, commit package-lock.json and use npm ci (not npm install) in CI: npm ci enforces that the installed packages match the lock file exactly, failing if there is any discrepancy.
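The "exact versions only" policy can itself be linted in CI. A rough sketch that flags range specifiers in a package.json manifest (it intentionally ignores more exotic specs such as git URLs, which warrant their own review):

```python
import json

def unpinned_dependencies(package_json_text: str) -> list:
    """Flag dependency specs that use ranges (^, ~, >=, *, 1.2.x)
    instead of exact versions."""
    manifest = json.loads(package_json_text)
    findings = []
    for section in ("dependencies", "devDependencies"):
        for name, spec in manifest.get(section, {}).items():
            # range operators at the start, wildcards, or an 'x' segment
            if spec and (spec[0] in "^~><*" or "x" in spec.split(".")):
                findings.append(f"{name}: {spec}")
    return findings

# Usage sketch:
#   assert not unpinned_dependencies(open("package.json").read()), \
#       "unpinned version ranges found"
```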
Monitoring and Alerting for Pipeline Anomalies
Even with preventive controls in place, monitoring is essential. Supply chain attacks often exhibit anomalous behaviour: unusual network connections from build runners, unexpected new dependencies appearing in builds, artifact sizes changing significantly, or pipelines running at unusual times.
Build a pipeline observability layer that captures and alerts on the following signals:
- Dependency drift: Alert when new dependencies appear in a build that weren't present in the previous build
- Unexpected outbound connections: CI runners should have narrow egress rules; alert on connections to unrecognised hosts
- Secret access patterns: Alert when a secret is accessed by a workflow that hasn't used it before, or when secrets are accessed outside normal build hours
- Artifact size anomalies: Significant unexplained size changes in build outputs warrant investigation
- Failed signature verifications: Any failed cosign verification at deploy time should trigger an immediate investigation, not just a deployment failure
Baseline first: Effective anomaly detection requires a baseline. Spend two to four weeks logging pipeline behaviour before writing alert rules; you need to know what "normal" looks like for your pipelines before you can reliably detect deviations.
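The dependency-drift signal above reduces to a set comparison between dependency snapshots of consecutive builds. A minimal sketch, assuming each snapshot is a name-to-version map extracted from the build's lock file or SBOM:

```python
def dependency_drift(previous: dict, current: dict) -> dict:
    """Compare two builds' dependency snapshots (name -> version maps).

    Returns packages that appeared for the first time and packages
    whose version changed; either is an alert-worthy signal.
    """
    added = sorted(set(current) - set(previous))
    changed = sorted(
        name for name in set(current) & set(previous)
        if current[name] != previous[name]
    )
    return {"added": added, "version_changed": changed}

# Usage sketch: alert if drift["added"] is non-empty for a build that
# didn't touch any manifest or lock file.
```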
Integrate pipeline audit logs into your SIEM. GitHub, GitLab, and CircleCI all export audit events. Correlate pipeline events with your identity provider logs (who triggered the build) and your cloud provider logs (what the build did in your environment) for a complete picture.
Secure Your CI/CD Supply Chain
AquilaX scans your pipeline dependencies, detects unpinned actions, generates SBOMs, and integrates SLSA provenance verification automatically, without slowing down your builds.
Start Free Scan