What Vulnerability Scanning Actually Does

A vulnerability scanner compares what you have (software versions, configurations, installed packages) against databases of known vulnerabilities, primarily the CVE list and the NVD (National Vulnerability Database). When it finds a match (you have Log4j 2.14.1, which is affected by CVE-2021-44228), it reports a finding.

What it doesn't do: exploit the vulnerability, determine if the vulnerable component is actually reachable, assess whether your WAF or other controls mitigate it, or know whether a published exploit exists and is being actively used in attacks.

CVE vs NVD: A CVE (Common Vulnerabilities and Exposures) is a unique identifier for a publicly known security vulnerability. The NVD (National Vulnerability Database) maintains additional metadata including CVSS scores. When you see CVE-2021-44228, that's the identifier; the 10.0 CVSS v3.1 base score comes from NVD.
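The matching step a scanner performs can be sketched in a few lines. The vulnerability database below is a toy stand-in, not a real feed; a real scanner resolves full affected-version ranges from NVD or OSV data.

```python
def parse_version(v):
    """Turn '2.14.1' into (2, 14, 1) for comparison."""
    return tuple(int(p) for p in v.split("."))

# Toy vulnerability database: package -> list of (cve_id, fixed_in_version).
# Illustrative only; real feeds carry full affected-range semantics.
VULN_DB = {
    "log4j-core": [("CVE-2021-44228", "2.15.0")],
}

def scan(inventory):
    """Report (package, version, cve) for every package older than the fix."""
    findings = []
    for package, version in inventory.items():
        for cve_id, fixed_in in VULN_DB.get(package, []):
            if parse_version(version) < parse_version(fixed_in):
                findings.append((package, version, cve_id))
    return findings

# log4j-core 2.14.1 predates the 2.15.0 fix, so it is reported;
# a package absent from the database produces no findings.
print(scan({"log4j-core": "2.14.1", "jackson-databind": "2.17.0"}))
```

Note what this sketch makes obvious: the scanner only compares versions against records. It never touches the running code, which is exactly why it cannot tell you about reachability or mitigations.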

CVSS Score Explained: The Math Behind the Number

CVSS (Common Vulnerability Scoring System) v3.1 calculates a score 0-10 based on three metric groups:

Base Score Metrics

  • Attack Vector (AV): Network / Adjacent / Local / Physical (is it exploitable remotely?)
  • Attack Complexity (AC): Low / High (does it require special conditions?)
  • Privileges Required (PR): None / Low / High (does the attacker need an account?)
  • User Interaction (UI): None / Required (does a victim have to take action?)
  • Scope (S): Unchanged / Changed (does exploitation affect other components?)
  • Confidentiality, Integrity, Availability Impact (CIA): None / Low / High for each

A network-exploitable, low-complexity, no-privileges-required, no-user-interaction vulnerability with high impact across all three CIA dimensions (vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H) scores 9.8; pushing the base score to the 10.0 maximum additionally requires Scope: Changed.
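The arithmetic behind that 9.8 comes straight from the CVSS v3.1 specification. A minimal calculator for the scope-unchanged case (the scope-changed case uses different impact and privilege weights, omitted here for brevity):

```python
# CVSS v3.1 base score, Scope: Unchanged case only.
# Metric weights are taken from the CVSS v3.1 specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}   # scope-changed vectors use other PR weights
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def roundup(x):
    """CVSS v3.1 'round up to one decimal', with the spec's float-safety trick."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av, ac, pr, ui, c, i, a):
    """Base score for a Scope: Unchanged vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8
```

Running the worst-case unchanged-scope vector through the formula reproduces the 9.8 from the paragraph above: the impact sub-score (about 5.87) plus the exploitability sub-score (about 3.89) rounds up to 9.8.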

Why CVSS Alone Misleads You

CVSS was designed to be environment-agnostic. That's also its critical limitation for operational prioritisation.

The 9.8 that doesn't matter: A remote code execution vulnerability in a library scores 9.8. But the library is only used in a batch job that runs in an isolated internal network with no network-accessible ports. Nobody outside that network segment can reach it. The real-world risk is dramatically lower than the score suggests.

The 5.0 that's a crisis: A moderate-severity authentication bypass with CVSS 5.0. But it's being actively exploited in the wild by ransomware operators, your scanning tool tells you it affects your externally-exposed API server, and a public exploit appeared on GitHub two days ago. This is a critical emergency regardless of the CVSS.

CVSS tells you about theoretical severity. It says nothing about exploitability in practice, whether a public exploit exists, or whether attackers are actively using it.

EPSS: The Scoring That Changes Prioritisation

EPSS (Exploit Prediction Scoring System) is a machine learning model developed by FIRST that predicts the probability that a CVE will be exploited in the wild within the next 30 days. It's based on real-world exploitation data from threat intelligence feeds.

The implications are significant:

  • Only about 5-6% of CVEs published each year are ever exploited in the wild
  • EPSS can identify which ones are likely to be exploited, before they are
  • A CVE with CVSS 7.5 and EPSS 0.95 (95% probability of exploitation) should take priority over a CVE with CVSS 9.8 and EPSS 0.01

EPSS is free and publicly available: FIRST publishes daily EPSS scores for all CVEs at epss.cyentia.com. Most modern vulnerability management platforms incorporate EPSS alongside CVSS for prioritisation.
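Working with those daily files is straightforward. The sketch below parses an EPSS CSV into a lookup table; the assumed layout (a `#` metadata line followed by `cve,epss,percentile` columns) matches recent files from epss.cyentia.com, but verify against the file you actually download. The sample rows are illustrative, not real scores.

```python
import csv
import io

def parse_epss(text):
    """Parse an EPSS daily CSV into {cve_id: exploitation_probability}.
    Assumes a leading '#' metadata line, then cve,epss,percentile columns."""
    lines = [ln for ln in text.splitlines() if not ln.startswith("#")]
    reader = csv.DictReader(io.StringIO("\n".join(lines)))
    return {row["cve"]: float(row["epss"]) for row in reader}

# Illustrative sample in the assumed file layout (not real published scores):
sample = """#model_version:v2025.03.14,score_date:2025-01-01
cve,epss,percentile
CVE-2021-44228,0.97565,0.99995
CVE-2023-99999,0.00042,0.05113
"""

scores = parse_epss(sample)
print(scores["CVE-2021-44228"])  # 0.97565
```

A table like this, refreshed daily, is all you need to join EPSS onto your scanner's findings by CVE identifier.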

The Vulnerability Backlog Problem

A medium-sized application might have 200-500 CVEs in its dependency tree at any given time. An organisation with hundreds of applications can have tens of thousands of open vulnerability findings. Trying to treat everything as equally urgent leads to paralysis: nothing gets prioritised, and critical issues get buried in noise.

A risk-based approach:

  1. Critical CVSS + high EPSS + publicly reachable component = immediate action
  2. Critical CVSS + no public exploit + not internet-facing = important, schedule for next sprint
  3. Medium CVSS + low EPSS = track, review quarterly
  4. Low CVSS = accept risk with documentation, review annually
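The four tiers above can be encoded directly. The thresholds below (CVSS ≥ 9.0 as "critical", CVSS ≥ 4.0 as "medium", EPSS ≥ 0.5 as "high") are illustrative policy cut-offs, not standard values, and the exposure flag stands in for the "publicly reachable" test:

```python
def triage(cvss, epss, internet_facing):
    """Map a finding onto the four action tiers.
    Thresholds are illustrative policy choices, not standard values."""
    if cvss >= 9.0 and epss >= 0.5 and internet_facing:
        return "immediate action"
    if cvss >= 9.0:
        return "schedule for next sprint"
    if cvss >= 4.0:
        return "track, review quarterly"
    return "accept risk, review annually"

# A Log4Shell-style finding on an exposed service vs. a low-severity internal one:
print(triage(10.0, 0.97, internet_facing=True))   # immediate action
print(triage(3.1, 0.01, internet_facing=False))   # accept risk, review annually
```

The point of encoding the policy is consistency: every finding gets the same decision logic, and the cut-offs become something a team can review and tune rather than re-argue per ticket.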

SCA Scanning for Third-Party Library CVEs

Software Composition Analysis (SCA) is the vulnerability scanning category focused on open source and third-party dependencies. SCA tools resolve your dependency tree (including transitive dependencies) and check each package and version against vulnerability databases.

Integrating SCA in CI YAML

```yaml
# GitHub Actions step: run an SCA scan on every PR
- name: SCA Scan
  uses: aquilax/scan-action@v1
  with:
    scan-type: sca
    fail-on: critical         # block only on critical CVEs
    severity-threshold: high  # report high and above
```

SCA scanning in CI catches new CVEs introduced by dependency version bumps or new dependencies added by developers. Pair with continuous monitoring of your registry to catch CVEs disclosed after build time.

Container Image Scanning for CVEs

Container images contain two layers of potential CVEs: the OS packages in the base image, and your application's dependencies. Both need to be scanned.

Container scanning tools (Trivy, Grype, Snyk Container, AquilaX) inspect all installed packages in every image layer and cross-reference against CVE databases. The base image is often the bigger problem: a year-old Ubuntu or Debian base image can have dozens of unpatched OS-level CVEs.

Continuous registry scanning is important here: new CVEs are disclosed daily. An image that was clean when built may become vulnerable a week later without any code changes. Registry scanners alert you when a previously-clean image becomes vulnerable due to a newly-published CVE.
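As one concrete option for the build-time half of this, Trivy's GitHub Action can gate a build on image CVEs before the image ever reaches the registry. The input names below follow aquasecurity/trivy-action's documented interface, and the image reference is a placeholder; check both against the action version you pin.

```yaml
# Scan the freshly built image before pushing it to the registry
- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@master  # pin a released version in production
  with:
    image-ref: myregistry.example.com/myapp:${{ github.sha }}
    severity: HIGH,CRITICAL   # report high and critical findings only
    exit-code: '1'            # fail the job if any are found
    ignore-unfixed: true      # skip CVEs with no available fix yet
```

A build-time gate like this complements, but does not replace, the registry-side rescanning described above: the gate catches CVEs known at build time, the registry scanner catches ones disclosed afterwards.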

Building a Vulnerability Management Programme

  1. Inventory: Know what you're running β€” all applications, dependencies, infrastructure components
  2. Scan: SCA on every CI build, container scanning in registry, infrastructure scanning via CSPM
  3. Triage: Apply CVSS + EPSS + context to prioritise findings. Not everything is urgent.
  4. Track: All findings in a ticketing system with SLA targets by severity. Critical ≤24h, High ≤7d, Medium ≤30d.
  5. Remediate: Fix it, mitigate it (WAF rule, network restriction), or formally accept the risk with documented justification
  6. Verify: Rescan after remediation to confirm the fix was effective and didn't introduce new issues
  7. Report: Track metrics over time (MTTR, vulnerability escape rate, backlog age)
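Step 4's SLA targets translate directly into due dates that a ticketing integration can check. A sketch using only the three windows given above:

```python
from datetime import datetime, timedelta

# SLA remediation windows from step 4: Critical <=24h, High <=7d, Medium <=30d.
SLA = {
    "critical": timedelta(hours=24),
    "high": timedelta(days=7),
    "medium": timedelta(days=30),
}

def sla_status(severity, opened_at, now):
    """Return the remediation deadline and whether the finding has breached it."""
    deadline = opened_at + SLA[severity]
    return deadline, now > deadline

# A critical finding opened two days ago is already past its 24-hour window:
opened = datetime(2025, 1, 1, 9, 0)
deadline, breached = sla_status("critical", opened, datetime(2025, 1, 3, 9, 0))
print(deadline, breached)  # 2025-01-02 09:00:00 True
```

Computing deadlines at triage time, rather than querying ad hoc, means the "breached" flag can drive escalation automatically.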

Metrics for a Mature Vulnerability Management Practice

  • Mean time to remediate (MTTR) by severity: Are you meeting your SLA targets?
  • Vulnerability backlog age: How old are your oldest open findings? An ageing backlog indicates capacity issues.
  • Coverage: What percentage of your asset inventory is scanned regularly?
  • Recurrence rate: Do the same vulnerability classes keep appearing? High recurrence indicates a training or process gap, not just a remediation gap.
  • Accepted risk inventory: How many vulnerabilities are risk-accepted, and when were they last reviewed?
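The first metric is simple to compute from closed findings: MTTR per severity is the mean of (closed − opened) over resolved tickets. A sketch with illustrative data, not real scanner output:

```python
from datetime import datetime
from collections import defaultdict

# Illustrative closed findings; a real pipeline would pull these from ticketing.
findings = [
    {"severity": "critical", "opened": datetime(2025, 1, 1), "closed": datetime(2025, 1, 2)},
    {"severity": "critical", "opened": datetime(2025, 1, 5), "closed": datetime(2025, 1, 8)},
    {"severity": "high",     "opened": datetime(2025, 1, 1), "closed": datetime(2025, 1, 11)},
]

def mttr_days(findings):
    """Mean time to remediate in days, grouped by severity."""
    durations = defaultdict(list)
    for f in findings:
        durations[f["severity"]].append((f["closed"] - f["opened"]).days)
    return {sev: sum(days) / len(days) for sev, days in durations.items()}

print(mttr_days(findings))  # {'critical': 2.0, 'high': 10.0}
```

Comparing these per-severity means against the SLA targets from the programme section (24h, 7d, 30d) is the "are you meeting your SLA targets?" question made measurable.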

Prioritise Vulnerabilities That Actually Matter

AquilaX SCA and container scanning surface CVEs across your full dependency tree with severity-informed prioritisation, so you fix what matters first.

Start Free Scan