Why Misconfiguration Now Leads Cloud Breach Statistics

In cloud environments, misconfiguration has overtaken traditional exploitation as the leading breach cause. IBM's Cost of a Data Breach Report consistently finds cloud misconfiguration among the top three root causes. Gartner has predicted that through the mid-2020s, the vast majority of cloud security failures would be the customer's fault, not the cloud provider's.

The reason is structural: cloud platforms give developers unprecedented ability to provision infrastructure through APIs, consoles, and Infrastructure as Code. The same speed that makes cloud so productive makes it easy to accidentally click the wrong option or write the wrong policy. And unlike a typo in your code that breaks a test, a misconfigured S3 bucket can silently expose data for months.

The misconfiguration-to-breach timeline: Automated scanners continuously probe for open cloud storage. An S3 bucket made public at 2pm can be discovered and downloaded by 2:30pm. The window for "accidental" exposure is measured in minutes.

The Open S3 Bucket Problem (Still Happening in 2026)

Despite years of press coverage, open S3 buckets remain a leading breach cause. The pattern: a developer creates a bucket for static assets, sets it to public for the website CDN use case, then reuses that bucket or that configuration pattern for internal data.

s3_bucket.tf Terraform
# Wrong: public access block disabled
resource "aws_s3_bucket_public_access_block" "data_bucket" {
  bucket = aws_s3_bucket.data.id
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}

# Correct: all public access blocked
resource "aws_s3_bucket_public_access_block" "data_bucket" {
  bucket = aws_s3_bucket.data.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

AWS now has an account-level public access block setting that prevents any bucket in the account from being made public. Enable this at the organisation level in AWS Organizations: it's a single setting that prevents an entire class of misconfiguration.
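As a sketch, the account-wide version of the same control can be set in Terraform with the AWS provider's aws_s3_account_public_access_block resource (applied per account; roll it out to every account in the organisation):

```hcl
# account_public_access_block.tf
# Account-wide: no bucket in this account can be made public,
# regardless of individual bucket ACLs or policies.
resource "aws_s3_account_public_access_block" "account" {
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

With this in place, a developer who accidentally sets a bucket-level public ACL or policy gets an error instead of an exposed bucket.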

IAM Permissions Gone Wrong

The most common IAM misconfigurations we see:

Wildcard resource policies

iam_policy.json JSON
// Wrong: EC2 instance can access ALL S3 buckets, ALL objects
{
  "Effect": "Allow",
  "Action": "s3:*",
  "Resource": "*"
}

// Correct: scoped to specific bucket and required actions only
{
  "Effect": "Allow",
  "Action": ["s3:GetObject", "s3:PutObject"],
  "Resource": "arn:aws:s3:::my-specific-bucket/*"
}

Unused admin accounts with active access keys

Former employees or service accounts with long-lived access keys and administrator permissions. These keys are prime targets: if leaked in code, logs, or backups they grant full access, and they often go unreviewed for months or years.

Cross-account roles with excessive trust

IAM roles that can be assumed from any account ("Principal": "*") rather than specific, named accounts. We've found these on roles with broad data access.
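Following the document's JSON policy style, a sketch of the trust-policy difference (the account ID and external ID below are placeholders):

```json
// Wrong: role assumable from any AWS account
{
  "Effect": "Allow",
  "Principal": { "AWS": "*" },
  "Action": "sts:AssumeRole"
}

// Correct: trust a single named account, with an external ID
{
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
  "Action": "sts:AssumeRole",
  "Condition": { "StringEquals": { "sts:ExternalId": "example-external-id" } }
}
```

The external ID condition also mitigates the confused-deputy problem when the role is assumed by a third-party service.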

Publicly Exposed Databases

Databases that are directly reachable from the internet are scanned constantly by automated tools. MongoDB, Elasticsearch, Redis, and MySQL instances without authentication exposed to the internet get discovered and compromised β€” often within hours of provisioning.

The MongoDB ransomware wave: In 2017, attackers scanned the entire internet for open MongoDB instances, wiped the data, and left ransom notes. Over 28,000 databases were compromised within weeks. The pattern has repeated with Elasticsearch and Redis multiple times since.

Databases should never be in a public subnet. They should only be accessible from within a VPC, via specific security groups that allow connections only from application servers. The "just temporarily expose it for debugging" trap is where most of these breaches start.
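A minimal Terraform sketch of that pattern: the database security group accepts connections only from the application tier's security group, never from a CIDR range. Resource names and the referenced app security group are illustrative:

```hcl
# db_security_group.tf
# Postgres reachable only from the app tier; no internet ingress at all.
resource "aws_security_group" "db" {
  name   = "db-postgres"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.app.id]  # app servers only
  }
  # Deliberately no 0.0.0.0/0 ingress rule and no public IP:
  # the database cannot be discovered by internet-wide scanners.
}
```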

Security Group Rules That Open Everything

The default "quick start" for many tutorials: open port 22 (SSH) or 3389 (RDP) from 0.0.0.0/0. This exposes the instance to brute force from the entire internet. Similarly, 0.0.0.0/0 on port 443 is appropriate for a web server but catastrophic for a database port.

Review all security groups for:

  • Inbound 0.0.0.0/0 rules on management ports (22, 3389, 5432, 3306, 27017, 9200)
  • Overly broad outbound rules: most services don't need unrestricted egress
  • Stale security groups still attached to instances (forgotten, possibly insecure rules)
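The first check in that review can be automated. A minimal Python sketch that flags inbound rules open to 0.0.0.0/0 on a management port, operating on data shaped like the output of `aws ec2 describe-security-groups` (the port list and function name are illustrative, not an official API):

```python
# Ports that should never be open to the whole internet.
MANAGEMENT_PORTS = {22, 3389, 5432, 3306, 27017, 9200}

def risky_ingress_rules(security_group):
    """Return inbound rules open to 0.0.0.0/0 on a management port."""
    risky = []
    for rule in security_group.get("IpPermissions", []):
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        from_port = rule.get("FromPort")
        to_port = rule.get("ToPort", from_port)
        if open_to_world and from_port is not None and any(
            from_port <= p <= to_port for p in MANAGEMENT_PORTS
        ):
            risky.append(rule)
    return risky

# Example: SSH open to the world is flagged; HTTPS open to the world is not.
sg = {
    "GroupId": "sg-0123456789abcdef0",
    "IpPermissions": [
        {"FromPort": 22, "ToPort": 22, "IpProtocol": "tcp",
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"FromPort": 443, "ToPort": 443, "IpProtocol": "tcp",
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
}
print(len(risky_ingress_rules(sg)))  # 1
```

The same shape of check is what CSPM tools run continuously across every account and region.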

Logging and Monitoring Gaps

You can't investigate a breach you didn't log. Common logging failures:

  • CloudTrail not enabled in all regions: attackers know to operate in rarely-used regions
  • S3 access logging disabled: no record of who accessed which objects
  • VPC Flow Logs not enabled: no network traffic visibility
  • Log retention too short: breach discovery typically happens 200+ days after compromise
  • Logs stored in the same account: an attacker with account access can delete them

Separate log account: Store security logs in a dedicated, locked-down account. Even if the production account is compromised, the attacker can't cover their tracks by deleting CloudTrail logs they don't have access to.
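A Terraform sketch of the trail side of that setup, assuming a central log bucket owned by the separate log account (the trail name and bucket name are placeholders, and the bucket policy in the log account must grant CloudTrail write access):

```hcl
# cloudtrail.tf
# Multi-region trail delivering to a bucket in a dedicated log account.
resource "aws_cloudtrail" "org" {
  name                          = "org-trail"
  s3_bucket_name                = "example-central-log-archive"
  is_multi_region_trail         = true
  include_global_service_events = true
  enable_log_file_validation    = true  # detect tampering with delivered logs
}
```

Log file validation adds signed digest files, so even a partially successful attempt to alter delivered logs is detectable.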

The Shared Responsibility Model and Where Teams Fail

Cloud providers secure the infrastructure. You secure everything you put on it. AWS is responsible for the security of S3 as a service. You are responsible for the security of your data in S3. This includes access controls, encryption, and configuration.

The failure mode: teams assume that because they're running on AWS, they're inheriting AWS's security posture. They're not. The shared responsibility model is explicitly documented by every major cloud provider, and most misconfiguration breaches are unambiguously the customer's responsibility.

Infrastructure-as-Code Scanning

The best time to catch a misconfiguration is before it deploys. IaC scanning tools analyse your Terraform, CloudFormation, CDK, and Kubernetes manifests for security issues before terraform apply runs.

.github/workflows/iac-security.yml YAML
name: IaC Security Scan
on: [pull_request]
jobs:
  iac-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: AquilaX IaC Scan
        uses: aquilax/scan-action@v1
        with:
          scan-type: iac
          fail-on: high

Cloud Security Posture Management (CSPM)

CSPM tools continuously monitor your deployed cloud infrastructure for misconfigurations, not just at deploy time but as it drifts over time. Infrastructure configured through the console rather than IaC, resources created by third-party tools, manual debugging changes: all of these can introduce misconfigurations that IaC scanning won't catch.

CSPM tools include: AWS Security Hub, Microsoft Defender for Cloud, Wiz, Orca Security, and open-source options like Prowler and ScoutSuite. They provide a continuously updated view of your security posture across all cloud accounts and regions.

Scan Your IaC for Misconfigurations

AquilaX IaC scanning catches open storage buckets, wildcard IAM policies, exposed databases, and 400+ other cloud security misconfigurations before they deploy.

Start Free Scan