Traditional Threat Modelling: The Bottleneck
Threat modelling is universally recommended and widely skipped. The barrier is time: a comprehensive STRIDE model for a non-trivial system takes 2-3 days of specialist security engineer time. Most organisations do it once at design time and never update it as the system evolves.
The result: threat models that accurately describe a system that no longer exists, failing to capture the attack surface of the system actually running in production.
AI-Assisted Threat Modelling
LLMs trained on security knowledge can generate a first-draft threat model from system inputs such as architecture diagrams (DFDs), OpenAPI specifications, infrastructure-as-code files, README documentation, and codebase summaries.
The AI identifies: trust boundaries, data flows carrying sensitive data, external interfaces, authentication points, and cryptographic operations. It then applies STRIDE systematically to each component and data flow, generating a structured list of potential threats.
A human security engineer reviews, enriches with business context, and removes irrelevant threats. The process takes 2-3 hours instead of 2-3 days, and, more importantly, it can be triggered automatically when the architecture changes.
STRIDE Automation in Practice
For each component in the architecture, the AI evaluates all six STRIDE categories:
- Spoofing: can an attacker impersonate this component? Missing authentication checks, weak session management.
- Tampering: can data flowing to/from this component be modified? Missing integrity checks, lack of signing.
- Repudiation: can actions be denied? Missing audit logging, unsigned log entries.
- Information disclosure: can sensitive data be exposed? Verbose errors, missing encryption.
- Denial of service: can the component be made unavailable? Missing rate limiting, resource exhaustion vectors.
- Elevation of privilege: can an attacker gain more access? Missing authorisation checks, IDOR patterns.
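The sweep above can be sketched as a simple cross-product of components and STRIDE categories. The component names and question wording here are illustrative assumptions; in practice each generated question would seed an LLM prompt rather than stand alone:

```python
from itertools import product

# The six STRIDE categories and the core question each asks of a component.
STRIDE = {
    "Spoofing": "Can an attacker impersonate this component?",
    "Tampering": "Can data flowing to/from this component be modified?",
    "Repudiation": "Can actions involving this component be denied?",
    "Information disclosure": "Can this component expose sensitive data?",
    "Denial of service": "Can this component be made unavailable?",
    "Elevation of privilege": "Can an attacker gain more access here?",
}

def enumerate_threat_questions(components):
    """Apply every STRIDE category to every component, yielding the
    structured questions that seed the first-draft threat model."""
    return [
        {"component": c, "category": cat, "question": q}
        for c, (cat, q) in product(components, STRIDE.items())
    ]

# Hypothetical two-component system: 2 components x 6 categories = 12 questions.
questions = enumerate_threat_questions(["api-gateway", "session-store"])
```

The exhaustiveness is the point: a human analyst tends to skip categories that feel unlikely, while the cross-product guarantees every cell is at least considered.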
The output is a structured threat register with severity scores, attack vectors, and, critically, a direct link to the code or configuration that should be changed to mitigate each threat.
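A minimal shape for one register entry might look like the following. The field names, severity scale, and the example values (including the code location) are assumptions for illustration, not a fixed schema:

```python
from dataclasses import dataclass

@dataclass
class ThreatRegisterEntry:
    """One row of the threat register: the threat itself, a severity
    score, the attack vector, and a link to the code or configuration
    that should change to mitigate it."""
    threat: str
    stride_category: str
    severity: float      # e.g. a CVSS-style 0.0-10.0 score
    attack_vector: str
    code_location: str   # file:line or config key to change
    mitigated: bool = False

# Hypothetical entry for the rate-limiting example discussed below.
entry = ThreatRegisterEntry(
    threat="Missing rate limiting on /api/auth/login",
    stride_category="Denial of service",
    severity=7.5,
    attack_vector="Credential stuffing against an unthrottled login endpoint",
    code_location="src/routes/auth.py:42",
)
```

Keeping `code_location` as a first-class field is what makes the register machine-actionable rather than just a document.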
Bridging Threat to Fix
The traditional gap is between the threat model (a document) and the code (where the fix lives). AI can close this gap by mapping each threat to specific code locations and generating remediation candidates:
- Threat: missing rate limiting on /api/auth/login → AI identifies the route handler → generates rate limiting middleware configuration
- Threat: unsigned session tokens → AI finds the session creation code → generates a migration to signed JWTs with an appropriate algorithm
- Threat: verbose error messages exposing internal paths → AI finds all error handlers → generates a patch to sanitise error output
Continuous threat modelling: The power of AI threat modelling is the ability to run it on every significant PR. A PR that adds a new API endpoint automatically triggers a threat model update for that endpoint, converting threat modelling from a project-phase activity into a continuous CI gate.
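One way to wire that trigger is a small CI check that scans the PR's unified diff for newly added route declarations and, when it finds any, kicks off the threat-model update job. The regex below assumes Flask/FastAPI-style decorators and is purely illustrative; adapt it to your framework's routing conventions:

```python
import re

# Matches added diff lines that look like route declarations,
# e.g. '+@app.post("/api/orders")'. Framework-specific; an assumption.
NEW_ENDPOINT = re.compile(
    r'^\+\s*@\w+\.(get|post|put|patch|delete)\(["\'](?P<path>[^"\']+)'
)

def endpoints_added(diff: str) -> list[str]:
    """Return the paths of API endpoints introduced by a unified diff.
    A non-empty result should trigger a threat-model update for the PR."""
    return [m.group("path")
            for line in diff.splitlines()
            if (m := NEW_ENDPOINT.match(line))]

diff = '''\
+@app.post("/api/orders")
+def create_order():
 @app.get("/api/health")
'''
# Only the '+' (added) line counts; the unchanged /api/health line does not.
assert endpoints_added(diff) == ["/api/orders"]
```

A check like this is deliberately coarse: false positives just cost one unnecessary model refresh, while a missed endpoint means an unmodelled attack surface, so it should err towards matching.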
Accuracy and Limits
AI threat modelling is strong on well-known, pattern-based threats and weaker on novel attacks, business logic abuse, and threats that require deep understanding of your specific threat actors.
- Reliable: missing security controls, misconfiguration threats, known vulnerability patterns
- Moderate: business logic threats, complex privilege escalation chains
- Poor: novel attack techniques, insider threat scenarios, highly application-specific abuse cases
Never rely on AI threat modelling alone for high-value targets. Use AI to get to 80% coverage quickly, then invest human expert time in the remaining 20%, which is where the highest-value threats typically live.