What vibe coding actually means

The term "vibe coding" was coined by Andrej Karpathy in early 2025 to describe a mode of programming where you describe intent to an AI assistant, accept its output, and iterate, without deeply reading or reviewing the generated code. You feel the direction and let the AI fill in the details.

This is not a fringe practice. A 2025 Stack Overflow survey found that 73% of professional developers use AI coding assistants regularly, and more than half report accepting AI suggestions "usually or always" without modification for routine tasks.

The problem is not speed. Vibe coding's security risk is that it encourages trust transfer: developers who vibe code implicitly trust the AI to handle correctness and safety. The AI happily accepts that trust, and frequently gets security wrong.

The evidence on AI-generated security bugs

Multiple independent studies have measured the security quality of AI-generated code:

  • NYU (2021): The "Asleep at the Keyboard" study found that roughly 40% of Copilot's completions for security-relevant scenarios contained vulnerabilities, many drawn from the CWE top 25.
  • Stanford (2022): A subsequent user study found that developers with an AI assistant wrote significantly less secure code than an unassisted control group, while being more confident their code was secure.
  • GitClear (2024): Code churn (code written and then reverted/rewritten) doubled between 2022 and 2024, coinciding with mass adoption of AI coding tools. Security defects were a major driver.
  • AquilaX internal data (2025): Repositories with a high proportion of AI-generated code (detected via authorship patterns) showed 2.3× the rate of injection vulnerabilities compared to equivalent human-written codebases.

The scale problem: If a human developer writes 200 lines per day with a 2% security bug rate, that is 4 vulnerable lines per day. A developer vibe coding writes 800 lines per day with a 5% security bug rate: that is 40 vulnerable lines per day. Volume amplifies the impact of a higher defect rate.
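
The arithmetic above is easy to check directly. The 2% and 5% defect rates are the illustrative figures from the example, not measured constants:

```python
# Back-of-envelope from the example above:
# vulnerable lines/day = lines written/day * security bug rate
human_lines, human_rate = 200, 0.02
vibe_lines, vibe_rate = 800, 0.05

human_vuln = human_lines * human_rate  # 4.0 vulnerable lines/day
vibe_vuln = vibe_lines * vibe_rate     # 40.0 vulnerable lines/day

print(human_vuln, vibe_vuln, vibe_vuln / human_vuln)  # 4.0 40.0 10.0
```

A 4× volume increase combined with a 2.5× defect-rate increase yields a 10× increase in vulnerable lines shipped per day.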

Why AI generates insecure code

Three structural reasons, not fixable by better prompting alone:

  • Training data bias: The internet contains far more insecure code than secure code. LLMs learn statistical patterns, and insecure patterns are statistically common.
  • No security intent model: The AI optimises for "code that compiles and appears to work". Security is a non-functional requirement that requires understanding threat models, which the model does not have.
  • Context window limitations: Security constraints are often defined in one part of a codebase (an auth layer, a validation module) while vulnerable code is generated in another. The model cannot always see both at once.
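
The third point is worth a concrete sketch. The helper and handler below are hypothetical, but they show the shape of the failure: a security check lives in one module, and code generated without that module in context simply omits it.

```python
import os

# Imagine this helper lives in the project's validation module:
ALLOWED_ROOT = "/srv/app/uploads"

def safe_resolve(user_path: str) -> str:
    """Resolve a user-supplied path, rejecting traversal out of ALLOWED_ROOT."""
    full = os.path.realpath(os.path.join(ALLOWED_ROOT, user_path))
    if not full.startswith(ALLOWED_ROOT + os.sep):
        raise ValueError("path escapes upload root")
    return full

# An AI generating a file handler in another file, without the validation
# module in its context window, typically emits the raw join instead:
def generated_handler(user_path: str) -> str:
    return os.path.join(ALLOWED_ROOT, user_path)  # vulnerable: no check

# "../../etc/passwd" sails straight through the generated version...
print(generated_handler("../../etc/passwd"))
# ...but is rejected by the helper the model never saw:
try:
    safe_resolve("../../etc/passwd")
except ValueError as e:
    print("blocked:", e)
```

The prompt never said "allow path traversal"; the vulnerability is purely an artifact of what the model could and could not see.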

High-risk patterns in vibe-coded codebases

From AquilaX scan data across AI-assisted repositories, the most common findings:

  • #1 Hardcoded credentials: present in 61% of repos with significant AI-generated content, usually in test/config files that were generated as examples and never cleaned up.
  • #2 SQL injection via f-string/string concatenation: AI consistently reaches for the syntactically simpler interpolation approach.
  • #3 Missing authentication on generated API routes: AI generates the route handler but not the auth middleware.
  • #4 Disabled TLS verification: verify=False in Python's requests, rejectUnauthorized: false in Node.js, common in generated HTTP client code.
  • #5 Insecure deserialization: pickle.loads() on untrusted data in Python, or JSON.parse() of unvalidated input that is then fed into eval-like handling.
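
Pattern #2 is the easiest to demonstrate end to end. A minimal sketch using Python's built-in sqlite3 (the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

attacker_input = "nobody' OR '1'='1"

# What AI assistants commonly generate: f-string interpolation.
# The injected OR clause is parsed as SQL and returns every row.
rows = conn.execute(
    f"SELECT name FROM users WHERE name = '{attacker_input}'"
).fetchall()
print(len(rows))  # 2 - the whole table leaks

# The fix: a parameterised query. The driver treats the input purely
# as data, so the injection string matches nothing.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(len(rows))  # 0
```

Both queries are one line long, which is exactly why the fix belongs in tooling and prompts rather than hoping the model picks the safe variant.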

A secure vibe coding workflow

The answer is not "stop vibe coding". It is "vibe code with guardrails". The guardrails:

  1. IDE SAST scanning (real-time): AquilaX, Semgrep, or SonarLint flagging security issues as the AI writes them, before you even accept the suggestion.
  2. Pre-commit secrets scanning: Gitleaks or TruffleHog blocking any commit that contains credentials, regardless of how they got there.
  3. SAST in CI on every PR: Catches anything the IDE scanner missed. Non-negotiable gate: no merge with critical findings.
  4. Mandatory code review for security-sensitive paths: Auth, payments, and data access should always have human eyes on them, vibe coded or not.
  5. Security-aware prompting: Add security requirements to prompts: "generate a parameterised query, never use string interpolation for SQL". Reduces (does not eliminate) security bugs at the source.
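
Step 3's merge gate can be sketched as a small script. The findings schema below is an invented stand-in for illustration, not AquilaX's actual output format:

```python
def gate(findings: list) -> int:
    """Return a CI exit code: nonzero if any critical finding exists."""
    critical = [f for f in findings if f.get("severity") == "critical"]
    for f in critical:
        print(f"BLOCKING: {f['rule']} at {f['file']}:{f['line']}")
    return 1 if critical else 0

# Sample findings as a SAST step might report them (invented schema):
report = [
    {"severity": "critical", "rule": "sql-injection",
     "file": "app/db.py", "line": 42},
    {"severity": "low", "rule": "weak-hash",
     "file": "app/auth.py", "line": 7},
]
print(gate(report))  # 1 - the critical finding blocks the merge
```

The point of the sketch is the policy, not the parser: the gate is unconditional, so no amount of vibe-coded volume can push a critical finding through.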

The pragmatic answer: Vibe coding with SAST in the IDE is safer than careful manual coding without SAST. The tool is the guardrail, not the pace of development.

Verdict: good, bad, or manageable?

Without security guardrails: Vibe coding is actively bad for security. It increases vulnerability introduction rate, increases code volume (more surface area), and reduces code comprehension (harder to review).

With security guardrails: Vibe coding is manageable, and the productivity gains are real enough to justify the investment in tooling. The same security controls that make vibe coding safe also improve the security of traditionally-written code.

The teams that will win are those who treat AI code generation as a powerful but untrusted input source, applying the same scepticism to AI output that they would apply to user input in their applications.

Make vibe coding safe

AquilaX SAST runs in your IDE and CI pipeline, catching the specific vulnerability patterns that AI coding assistants consistently produce, before they ship.

Start scanning →