What AI generates well in infrastructure

AI coding assistants are genuinely useful for infrastructure work in specific ways:

  • Boilerplate Terraform modules: S3 buckets, VPCs, EKS clusters, RDS instances. AI can generate working starting-point configurations quickly; it knows the resource schemas and common patterns.
  • Converting between IaC formats: CloudFormation to Terraform, Ansible to Terraform, manual console configurations to declarative code. AI does this translation reasonably well.
  • Generating Kubernetes manifests from descriptions: "Create a deployment for an nginx container with 3 replicas, resource limits, and a readiness probe" is a request AI handles competently.
  • Writing Terraform variable definitions and outputs: The repetitive plumbing of modules is where AI saves the most time.
  • Explaining complex IaC: Asking AI to explain what a Terraform module does is often faster and clearer than reading provider documentation.
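The plumbing point is easy to illustrate. A sketch of the variable and output definitions a typical S3 module needs; the names are chosen for illustration, not taken from any specific module:

```hcl
# variables.tf -- the repetitive plumbing AI fills in quickly
variable "bucket_name" {
  description = "Name of the S3 bucket"
  type        = string
}

variable "log_bucket" {
  description = "Target bucket for S3 access logs"
  type        = string
}

# outputs.tf
output "bucket_arn" {
  description = "ARN of the created bucket"
  value       = aws_s3_bucket.main.arn
}

output "bucket_name" {
  description = "Name of the created bucket"
  value       = aws_s3_bucket.main.id
}
```

Writing a dozen of these blocks by hand is tedious; generating them and reviewing the descriptions is where the time savings show up.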

AI-generated Terraform: what comes out

A realistic example: asking Claude or Copilot to generate an S3 bucket with versioning and logging typically produces something like this:

AI-generated S3 module (common output):

```hcl
resource "aws_s3_bucket" "main" {
  bucket = var.bucket_name

  versioning {
    enabled = true
  }

  logging {
    target_bucket = var.log_bucket
    target_prefix = "log/"
  }

  # AI rarely generates these unprompted:
  # server_side_encryption_configuration -- missing
  # public_access_block -- missing
  # lifecycle_rule -- missing
}
```

The generated code works. It is not wrong. But IaC scanners like Checkov or tfsec will flag three to five security findings on it: missing encryption at rest, no public access block, no lifecycle policy. The AI generated functional infrastructure, not secure infrastructure.
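For contrast, a sketch of a scanner-clean equivalent. This assumes AWS provider v4 or later, where versioning, encryption, public access, and lifecycle settings are separate resources; resource names and the 90-day expiration are illustrative:

```hcl
resource "aws_s3_bucket" "main" {
  bucket = var.bucket_name
}

resource "aws_s3_bucket_versioning" "main" {
  bucket = aws_s3_bucket.main.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Encryption at rest -- the finding AI most often leaves open
resource "aws_s3_bucket_server_side_encryption_configuration" "main" {
  bucket = aws_s3_bucket.main.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}

# Block all public access explicitly, rather than relying on defaults
resource "aws_s3_bucket_public_access_block" "main" {
  bucket                  = aws_s3_bucket.main.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# Lifecycle policy so noncurrent object versions eventually expire
resource "aws_s3_bucket_lifecycle_configuration" "main" {
  bucket = aws_s3_bucket.main.id
  rule {
    id     = "expire-noncurrent-versions"
    status = "Enabled"
    filter {}
    noncurrent_version_expiration {
      noncurrent_days = 90
    }
  }
}
```

Roughly three times the code for the same bucket, which is exactly the part AI skips when the prompt does not ask for it.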

Security risks of AI-written infrastructure code

Infrastructure misconfigurations are in many ways more dangerous than application code vulnerabilities: they expose entire environments, not just individual endpoints. The specific risks in AI-generated IaC:

  • Over-permissive IAM policies: AI rarely writes the least-privilege policy the task needs; it often uses "Action": "*" because that is the simplest configuration that works. This is a critical security finding.
  • Public-by-default resources: for S3 buckets, security groups, and RDS instances, AI often omits the explicit "private" configurations because the prompt did not specify them.
  • Missing encryption: AI generates storage resources without encryption-at-rest configurations unless explicitly asked.
  • Outdated provider patterns: AI training data includes Terraform configurations from 2019-2022, when security best practices were different. It may generate patterns that work but violate current CIS benchmarks.

The blast radius difference: A SQL injection in an application endpoint affects the data reachable through that endpoint. A misconfigured S3 bucket with public access enabled exposes every file in the bucket, potentially an entire data lake. Infrastructure misconfigurations have a larger blast radius than most application vulnerabilities.

Most common AI IaC misconfigurations

Top findings in AI-generated Terraform (AquilaX data):

```text
Rank  Check                                     Frequency
---------------------------------------------------------
 1    S3 bucket public access not blocked         78%
 2    IAM policy with wildcard actions            71%
 3    EBS volume encryption disabled              69%
 4    Security group open ingress (0.0.0.0/0)     65%
 5    RDS deletion protection disabled            58%
 6    CloudTrail logging not enabled              54%
 7    Missing resource tagging for cost mgmt      49%
 8    Lambda function with excessive IAM perms    45%
```

AI as an IaC security reviewer

Interestingly, AI is significantly better at reviewing IaC for security than at generating it securely. This asymmetry is exploitable: use AI for fast generation, use AI (and automated scanners) for review.

Prompting Claude or GPT-4o to review a Terraform file for security issues produces surprisingly good results: it will often catch the missing public access block and the IAM wildcard, and suggest least-privilege alternatives. The model understands security implications better than it applies them unprompted.

Effective IaC review prompt: "Review this Terraform configuration for security misconfigurations. Check for: overly permissive IAM, publicly exposed resources, missing encryption, insecure defaults, and CIS AWS benchmark violations. Provide specific line-level recommendations."

A safe AI infrastructure workflow

  1. Generate with AI: Use Copilot, Claude, or similar to produce the initial Terraform. Fast and good for boilerplate.
  2. Scan immediately: Run Checkov or tfsec on the output before any review. Fix automated findings first; they are objective and fast to address.
  3. AI-assisted security review: Paste the Terraform into Claude with a security review prompt. Get a second opinion on what the scanner might have missed.
  4. Human review: IAM policies and network security groups specifically should always have human eyes; the blast radius is too large to trust automation alone.
  5. Gate in CI: IaC scanner in CI blocks any PR that introduces HIGH or CRITICAL findings. AI-generated Terraform that passes the gate can merge.
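One way to wire the CI gate in step 5, sketched as a GitHub Actions job. The tfsec action, its version pin, and the `--minimum-severity` flag are assumptions to verify against current tfsec documentation:

```yaml
# .github/workflows/iac-scan.yml -- illustrative CI gate (names are assumptions)
name: iac-scan
on: pull_request

jobs:
  tfsec:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Fail the job, and therefore block the PR, on HIGH or CRITICAL findings
      - uses: aquasecurity/tfsec-action@v1.0.0
        with:
          additional_args: --minimum-severity HIGH
```

A branch protection rule requiring this job then enforces the gate exactly as step 5 describes: AI-generated Terraform that passes can merge; anything with HIGH or CRITICAL findings cannot.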

The right mental model: AI is your infrastructure intern, fast and fluent in syntax but in need of security supervision. The IaC scanner is the security-focused reviewer who catches what the intern missed.

Scan AI-generated Terraform automatically

AquilaX IaC scanning runs in your IDE and CI pipeline, catching the security misconfigurations that AI consistently produces in Terraform, Kubernetes, and CloudFormation.

Explore IaC scanning →