SAST coverage map for OWASP Top 10

OWASP 2021 — tool coverage matrix
Category                                   SAST      DAST      SCA
──────────────────────────────────────────────────────────────────
A01: Broken Access Control                 partial   strong    ✗
A02: Cryptographic Failures                strong    partial   ✗
A03: Injection                             strong    strong    ✗
A04: Insecure Design                       weak      partial   ✗
A05: Security Misconfiguration             partial   strong    ✗
A06: Vulnerable Components                 ✗         ✗         strong
A07: Auth and Session Failures             partial   strong    ✗
A08: Software/Data Integrity Failures      partial   partial   strong
A09: Logging/Monitoring Failures           weak      ✗         ✗
A10: SSRF                                  partial   strong    ✗

The honest assessment: SAST alone gives you strong coverage on A02 and A03, partial coverage on four others, and near-zero coverage on A06 (which requires SCA) and A09 (which requires runtime observation). A SAST-only compliance claim is misleading.

Where SAST excels

A02: Cryptographic Failures

SAST is excellent at detecting weak cryptography because the patterns are syntactic and highly predictable. It catches:

  • MD5 or SHA1 used for password hashing
  • Hard-coded encryption keys and IVs
  • ECB mode for block ciphers
  • Insufficient key lengths
  • Disabled SSL/TLS certificate verification
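These bullet points map to concrete, syntactic code patterns, which is exactly why SAST rules match them so reliably. A minimal Python sketch of what a typical crypto rule flags, plus the standard-library fix (function names are illustrative, not from the source):

```python
import hashlib

# MD5 for password hashing: flagged by virtually every SAST ruleset
def hash_password_weak(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# Hard-coded key: a byte literal assigned to a crypto-named variable is
# an easy syntactic match for a SAST rule
AES_KEY = b"0123456789abcdef"  # flagged: hard-coded encryption key

# The fix for the hashing case: a dedicated password KDF from the
# standard library (prefer argon2/bcrypt where available)
def hash_password_better(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
```

Note how little context the rule needs: the algorithm name and call site alone are enough, with no data-flow analysis required.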

A03: Injection

SQL injection, command injection, LDAP injection, XPath injection — all have well-established source-sink patterns that taint-flow SAST engines detect reliably. This is the single strongest SAST category.

Injection — what SAST catches

# SQL injection — taint flows from request param to query
user_id = request.args.get("id")
query = f"SELECT * FROM users WHERE id = {user_id}"  # ← SAST flags

# Command injection — taint from user input to subprocess
filename = request.form["filename"]
os.system(f"cat {filename}")  # ← SAST flags
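The remediations are just as pattern-like, which is why many SAST tools can suggest them directly. A sketch of the safe counterparts, using sqlite3 and subprocess for illustration:

```python
import sqlite3
import subprocess

def get_user(conn: sqlite3.Connection, user_id: str):
    # Parameterised query: the driver binds the value, so tainted input
    # never reaches the SQL parser as syntax
    return conn.execute(
        "SELECT * FROM users WHERE id = ?", (user_id,)
    ).fetchone()

def read_file(filename: str) -> str:
    # Argument list with no shell involved: metacharacters in filename
    # cannot inject a second command
    result = subprocess.run(
        ["cat", filename], capture_output=True, text=True, check=True
    )
    return result.stdout
```

An injection payload passed to the parameterised version simply matches no row, rather than rewriting the query.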

Where SAST helps partially

A01: Broken Access Control

SAST can find obvious access control flaws: missing authentication decorators on routes, direct object reference patterns (using user-supplied IDs without ownership checks). But it cannot model the full access control policy of an application without understanding business logic. A missing @login_required decorator is detectable; an IDOR in a complex multi-tenant data model is usually not.
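To see why the multi-tenant case defeats SAST, consider a sketch in which the data model and names are invented for illustration: the ownership check is ordinary business logic with no security-specific syntax to match.

```python
# Hypothetical data model; names are illustrative, not from the source.
DOCUMENTS = {
    "doc-1": {"owner": "alice", "body": "quarterly numbers"},
    "doc-2": {"owner": "bob", "body": "draft contract"},
}

def get_document_idor(doc_id: str) -> dict:
    # IDOR: user-supplied ID, no ownership check. To a SAST engine this
    # is just a dict lookup; nothing syntactically marks it as insecure.
    return DOCUMENTS[doc_id]

def get_document_safe(doc_id: str, current_user: str) -> dict:
    doc = DOCUMENTS[doc_id]
    # The check that makes this endpoint secure is plain application
    # logic; a tool cannot know this comparison encodes the tenancy rule.
    if doc["owner"] != current_user:
        raise PermissionError("not the owner")
    return doc
```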

A07: Auth and Session Failures

SAST detects obvious patterns: session state stored in client-side cookies without signing, missing secure-cookie flags (HttpOnly, Secure, SameSite), and weak session-token generation, such as Python's random module used where secrets is needed. It cannot test whether session fixation is possible or whether logout actually invalidates server-side sessions; those require runtime testing.
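A sketch of those detectable patterns in Python's standard library; the weak variant is what a SAST rule flags, the strong variant and the cookie flags are the suggested fix:

```python
import random
import secrets
from http.cookies import SimpleCookie

def weak_session_token() -> str:
    # Flagged: `random` is a Mersenne Twister PRNG whose state can be
    # recovered from observed outputs, so tokens become predictable
    return "%032x" % random.getrandbits(128)

def strong_session_token() -> str:
    # `secrets` draws from the OS CSPRNG: the fix SAST rules suggest
    return secrets.token_urlsafe(32)

# Cookie-flag pattern SAST checks for: HttpOnly, Secure, SameSite all set
cookie = SimpleCookie()
cookie["session"] = strong_session_token()
cookie["session"]["httponly"] = True
cookie["session"]["secure"] = True
cookie["session"]["samesite"] = "Strict"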

A10: SSRF

SAST can identify patterns where user-controlled URLs flow into HTTP request functions without validation. It cannot verify whether the server's network configuration actually allows access to internal metadata services β€” that requires a runtime test.
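A sketch of the static half of an SSRF defence, assuming a validator sits between user input and the HTTP client; note the hostname branch, which can only be settled at runtime after DNS resolution, which is precisely the gap DAST fills:

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    # Allow-list the scheme and block private/loopback/link-local IP
    # literals. A real defence must also resolve DNS and re-check the
    # resolved address, which static analysis cannot verify.
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # A hostname, not an IP literal: must be resolved and
        # re-validated at request time
        return True
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)
```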

Where SAST is effectively blind

A06: Vulnerable and Outdated Components

SAST does not scan third-party dependencies for known CVEs. This is the domain of SCA tools entirely. A codebase with perfectly secure first-party code but 50 outdated npm packages with critical CVEs will pass SAST with zero findings — and be critically vulnerable.
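As a toy illustration of what SCA adds, here is a version-range check against an advisory feed; the package name and advisory ID are hypothetical, not real CVE data:

```python
# Hypothetical advisory feed: package -> list of (min, fixed, id),
# meaning versions in [min, fixed) are affected.
ADVISORIES = {
    "leftpadx": [((0, 0, 0), (1, 2, 0), "HYPO-2024-0001")],
}

def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def vulnerable(pkg: str, version: str) -> list:
    # SCA in a nutshell: no source code is read at all, only the
    # dependency manifest and the advisory database
    ver = parse_version(version)
    return [
        advisory_id
        for lo, fixed, advisory_id in ADVISORIES.get(pkg, [])
        if lo <= ver < fixed
    ]
```

Real SCA tools add ecosystem-specific version semantics and transitive dependency resolution, but the core check is this comparison.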

A04: Insecure Design

This category — the most subjective in the OWASP Top 10 — refers to architectural flaws: missing threat modelling, inadequate rate limiting as a design choice, single-tenant data architectures used in a multi-tenant context. No automated tool can reliably detect these because they require understanding of the intended design and the threat model.

A09: Security Logging and Monitoring Failures

SAST can sometimes detect the absence of logging in specific patterns (unlogged authentication failures, unlogged privileged actions). But it cannot verify that logs are actually shipped to a SIEM, alerts are properly configured, or that log retention meets compliance requirements. Runtime and operational verification is required.
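A sketch of the logging pattern whose absence a SAST rule can sometimes flag; whether these records ever reach a SIEM or trigger an alert is exactly what SAST cannot verify:

```python
import logging

security_log = logging.getLogger("security")

def record_login_attempt(username: str, success: bool, source_ip: str) -> None:
    # The presence of this call site is what a static rule can check;
    # shipping, alerting, and retention are operational concerns.
    if success:
        security_log.info("login ok user=%s ip=%s", username, source_ip)
    else:
        security_log.warning("login FAILED user=%s ip=%s", username, source_ip)
```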

A complete OWASP Top 10 programme

To claim genuine OWASP Top 10 coverage, you need a layered programme:

  • SAST in CI — covers A02, A03, partial on A01/A05/A07/A10
  • SCA in CI — covers A06 completely
  • DAST on staging — covers A01, A05, A07, A10 from the runtime perspective
  • Threat modelling — addresses A04 (cannot be automated)
  • Log/alert review — addresses A09 operationally
  • Penetration testing — validates the whole programme periodically

The compliance shortcut: Many frameworks (PCI DSS, SOC 2) accept a combination of automated scanning evidence (SAST + SCA + DAST reports) plus process evidence (threat modelling documentation, penetration test reports) as compliance evidence for OWASP coverage. No auditor expects 100% automated coverage of all ten categories.

Full OWASP Top 10 coverage in one platform

AquilaX combines SAST, SCA, secrets scanning, IaC, and DAST in a single platform — giving you the broadest OWASP Top 10 coverage from a single vendor, with a unified findings dashboard and compliance reporting.

See the full platform →