The SOAR Problem

Security Orchestration, Automation, and Response platforms promised to automate security operations. In practice, most deployments are disappointing: a small number of brittle playbooks covering only the most common alert types, maintained at enormous ongoing cost, and breaking every time underlying systems change their APIs or alert formats.

The fundamental problem is that SOAR playbooks encode logic in imperative rules: "if this field equals X, do Y." Security incidents rarely follow the exact script the playbook was written for. Novel attack patterns, new infrastructure, renamed fields β€” any deviation causes silent failure or requires human intervention anyway.

The playbook debt problem: A SOAR platform with 50 playbooks typically requires a dedicated engineer to maintain them. When that engineer leaves, the playbooks go unmaintained and slowly decay into descriptions of how the team used to respond, not how it should respond now.

The AI Approach to Security Orchestration

Instead of encoding fixed response logic, AI security orchestration provides the agent with: the alert details, the current environment context, the available tools, the organisation's security policies, and the high-level response goals β€” and lets the agent reason about the appropriate response.

When a phishing alert fires, the agent does not follow a fixed playbook. It: reads the alert, queries the email gateway for the full email, checks the sender against threat intelligence, queries the directory service for which users received it, checks whether the link was clicked, and decides on a proportional response β€” all reasoning from first principles about what is appropriate given the specific context.
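
The enrich-then-decide loop above can be sketched in code. All tool interfaces here (the gateway, threat-intel, and directory objects and their methods) are hypothetical stand-ins for whatever integrations your environment exposes, and the decision thresholds are illustrative, not prescriptive:

```python
from dataclasses import dataclass, field

@dataclass
class PhishingContext:
    """Everything the agent gathers before choosing a response."""
    alert: dict
    email: dict = field(default_factory=dict)
    sender_reputation: str = "unknown"
    recipients: list = field(default_factory=list)
    clicked: list = field(default_factory=list)

def enrich(alert, gateway, intel, directory):
    """Run the enrichment steps from the text: fetch the email, check the
    sender, find recipients, and determine who clicked the link."""
    ctx = PhishingContext(alert=alert)
    ctx.email = gateway.fetch(alert["message_id"])
    ctx.sender_reputation = intel.lookup(ctx.email["sender"])
    ctx.recipients = directory.recipients_of(alert["message_id"])
    ctx.clicked = [u for u in ctx.recipients
                   if gateway.link_clicked(alert["message_id"], u)]
    return ctx

def proportional_response(ctx):
    """Pick a response proportional to observed impact (illustrative rules)."""
    if ctx.sender_reputation == "malicious" and ctx.clicked:
        return {"action": "quarantine_and_reset", "users": ctx.clicked}
    if ctx.sender_reputation == "malicious":
        return {"action": "quarantine", "users": ctx.recipients}
    return {"action": "monitor", "users": []}
```

The point of the sketch is the separation: enrichment gathers context, and the response decision is a function of that full context rather than of a single alert field.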

Intent vs Rules: The Key Difference

A rule-based playbook: "If alert type is phishing AND confidence > 0.9: quarantine email, disable user account, create JIRA ticket P1."

An intent-based AI response: "When a phishing alert fires, protect affected users from credential theft and lateral movement while minimising business disruption. Escalate to human if the affected user is in privileged access tier."

The intent-based approach handles: new alert formats, changed API endpoints, partial information, novel attack variants, and exceptional cases β€” all of which break rule-based playbooks.
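
The contrast can be made concrete. The field names and thresholds below are illustrative, not taken from any particular SOAR product; note how the rule-based version silently does nothing the moment an input deviates from its expected shape:

```python
def rule_based_playbook(alert):
    """Brittle: breaks if 'type' is renamed or 'confidence' is absent."""
    if alert.get("type") == "phishing" and alert.get("confidence", 0) > 0.9:
        return ["quarantine_email", "disable_account", "create_ticket_p1"]
    return []  # silent no-op on anything unexpected

# Intent-based: the response logic is a policy the agent reasons over,
# so a renamed field or partial information degrades gracefully instead
# of zeroing out the response.
INTENT_POLICY = {
    "goal": ("Protect affected users from credential theft and lateral "
             "movement while minimising business disruption."),
    "escalate_if": "affected user is in privileged access tier",
}
```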

The maintenance advantage: When your email gateway API changes, you update one tool integration. The agent's reasoning about how to use that tool adapts automatically. With a rule-based playbook, you update every step that calls that API.
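
One way to realise the "update one tool integration" point is to wrap each external API in a single adapter, so an endpoint or schema change is fixed in exactly one place. The class, endpoint, and field names here are hypothetical:

```python
class EmailGatewayTool:
    """The single integration point the agent calls; the only code that
    knows the vendor API's current endpoint and field names."""

    def __init__(self, client):
        self.client = client  # vendor SDK or HTTP client

    def quarantine(self, message_id):
        # If the vendor renames this endpoint or field, only this line
        # changes; the agent's reasoning about when to quarantine does not.
        return self.client.post("/v2/messages/quarantine", {"id": message_id})
```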

Migration Strategy

A phased approach to replacing SOAR playbooks with AI agents:

  1. Phase 1: Observation β€” run the AI agent in parallel with existing playbooks. Compare responses. Build confidence.
  2. Phase 2: Low-stakes automation β€” enable AI agent for enrichment tasks only: threat intel lookups, asset queries, evidence collection. No containment actions.
  3. Phase 3: Supervised containment β€” AI agent recommends containment actions. Human analyst approves each action.
  4. Phase 4: Autonomous response with escalation β€” AI agent acts autonomously for defined alert classes with defined severity thresholds. Escalates to human for novel situations or high-impact decisions.
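
The four phases above can be expressed as a capability table that an agent runtime enforces before executing any action. Phase names and capability keys are illustrative:

```python
# Capabilities per rollout phase. In phase 1 the agent runs in shadow
# mode, so no actions execute against production systems.
PHASES = {
    1: {"name": "observation",                "may_enrich": False, "may_contain": False},
    2: {"name": "low_stakes_automation",      "may_enrich": True,  "may_contain": False},
    3: {"name": "supervised_containment",     "may_enrich": True,  "may_contain": "with_approval"},
    4: {"name": "autonomous_with_escalation", "may_enrich": True,  "may_contain": True},
}

def allowed(phase, action, approved=False):
    """Gate an agent action by the current rollout phase."""
    caps = PHASES[phase]
    if action == "enrich":
        return bool(caps["may_enrich"])
    if action == "contain":
        if caps["may_contain"] == "with_approval":
            return approved  # phase 3: each containment needs a human sign-off
        return bool(caps["may_contain"])
    return False  # unknown actions are denied by default
```

Encoding the phases as data means promotion from one phase to the next is a one-line configuration change rather than a redeployment.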

Human Oversight in AI-Driven SOC

AI agents do not eliminate the need for human analysts. They change the nature of the work:

  • Analysts review agent reasoning chains rather than manually executing response steps
  • Analysts handle cases the agent escalates as novel or ambiguous
  • Analysts provide feedback to improve agent response quality over time
  • Analysts own the policy definitions that constrain agent behaviour
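
The last bullet can be made concrete: an analyst-owned policy can be a declarative guardrail the runtime checks before any action executes. The keys, action names, and access tiers below are illustrative:

```python
# Analyst-owned policy: actions the agent may never take autonomously,
# and user tiers that always force escalation to a human.
POLICY = {
    "never_autonomous": {"disable_account", "wipe_device"},
    "escalate_tiers": {"privileged", "executive"},
}

def requires_human(action, user_tier, policy=POLICY):
    """Return True if this action must be escalated to a human analyst."""
    return (action in policy["never_autonomous"]
            or user_tier in policy["escalate_tiers"])
```

Because the policy is data, analysts can tighten or relax the guardrails without touching agent code, and the policy file itself becomes an auditable artefact.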

"The AI agent is not replacing the security analyst. It is giving the analyst leverage β€” making each analyst as capable as a small team by automating the mechanical parts of incident response."