What is vibe coding?
The term "vibe coding" was coined by Andrej Karpathy in early 2025 to describe a style of AI-assisted programming where the developer describes intent at a high level and accepts AI-generated code with minimal review — "going with the vibe" rather than carefully crafting every line. The developer often cannot explain the resulting code in detail, and the codebase grows faster than the developer's understanding of it.
Vibe coding sits at one end of a spectrum:
- AI-assisted (tab completion): Developer writes code; AI suggests completions. Developer reviews every suggestion. Most control.
- AI-generated (prompt + review): Developer prompts for a function or component; AI generates it; developer reads and understands the result before accepting. Standard AI pair programming.
- Vibe coding: Developer describes a feature; AI generates large blocks of code; developer accepts and iterates without fully reading the result. The AI is the primary author. Least control.
Who vibe codes: Vibe coding is most common among developers building prototypes, solo founders moving fast, non-engineers building internal tools, and engineers in new domains where they lack expertise. It is rare (but not unheard of) in production engineering teams with formal review processes.
Telltale patterns in the code itself
There is no reliable deterministic method to distinguish AI-generated code from human-written code — LLMs learn from human-written code, after all. But several statistical and stylistic patterns correlate strongly with AI authorship, particularly with unreviewed vibe coding.
Hyper-verbose docstrings and comments
LLMs are trained to be helpful and explanatory. They add docstrings and inline comments to virtually every function, even trivial ones. Human developers tend to comment sparingly and only where logic is non-obvious.
```python
def add(a, b):
    """
    Adds two numbers together and returns the result.

    Args:
        a: The first number to add.
        b: The second number to add.

    Returns:
        The sum of a and b.

    Raises:
        TypeError: If a or b are not numeric types.
    """
    return a + b
```
Generic, symmetric variable names
AI models default to generic names: result, data, response, value, item, element. They also tend to name things very symmetrically and formulaically — user_data, user_info, user_object appearing in the same file for similar concepts.
Exhaustive error handling for every edge case
LLMs add try/except blocks around almost everything, often catching overly broad exceptions (except Exception as e) and logging or swallowing them silently. Human developers typically add error handling where they know failures can occur.
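The contrast can be sketched in a few lines (the function names are illustrative; the AI-style variant is a recurring pattern, not the output of any specific tool):

```python
import logging

logger = logging.getLogger(__name__)

# AI-typical pattern: the operation is wrapped, the failure is swallowed
def parse_port_ai_style(raw):
    try:
        return int(raw)
    except Exception as e:  # overly broad: catches far more than bad input
        logger.error(f"Error: {e}")
        return None  # caller can no longer tell "bad input" from "real bug"

# Human-typical pattern: handle only the failure you expect, let the rest raise
def parse_port(raw):
    try:
        return int(raw)
    except ValueError:
        raise ValueError(f"invalid port: {raw!r}")

print(parse_port_ai_style("oops"))  # None, and the real cause is buried in a log
print(parse_port("8080"))           # 8080
```

The second variant fails loudly on unexpected input types, which is exactly the behaviour the broad `except Exception` silently erases.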
Consistent, formulaic code structure
AI-generated code is structurally consistent to an uncanny degree — uniform indentation, identical patterns across functions, the same code structure repeated in predictable ways. Human codebases show more stylistic variation, especially across files written at different times or by different developers.
Structural and architectural signs
Beyond individual code style, vibe-coded codebases often exhibit structural characteristics that reflect how LLMs generate code when given open-ended prompts.
Over-engineering for simple requirements
LLMs default to "best practice" patterns regardless of the scale of the problem. A vibe-coded script that reads a CSV file may include a full repository pattern, abstract base classes, dependency injection, and factory methods — all for ten lines of actual logic. Human developers calibrate complexity to requirements.
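For contrast, here is what the calibrated version of such a task can look like: a handful of lines of stdlib logic with no abstraction layers (the CSV columns are invented for illustration):

```python
import csv
import io

# Complexity calibrated to the requirement: no repository pattern,
# no abstract base classes, no dependency injection.
def total_by_category(csv_text):
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["category"]] = totals.get(row["category"], 0) + float(row["amount"])
    return totals

data = "category,amount\nfood,3.50\nfood,1.25\ntravel,20\n"
print(total_by_category(data))  # {'food': 4.75, 'travel': 20.0}
```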
Inconsistent abstractions
When building a feature across multiple sessions or prompts, the LLM may make different design decisions in each session. The result is a codebase where authentication uses one pattern in Module A and a completely different pattern in Module B, with no coherent rationale.
Dependency sprawl
LLMs suggest popular libraries for every sub-problem, even when stdlib alternatives would suffice. A vibe-coded project may have 80+ dependencies for a simple web application — each one a library the AI knew from its training data.
Orphaned code and dead imports
During iterative prompting ("now add X", "actually do Y instead"), old code is frequently not cleaned up. Vibe-coded repositories often contain unused functions, unreachable code branches, and imports for libraries no longer used — artefacts of abandoned generation attempts.
Orphaned code is a security risk: Dead code that contains vulnerabilities is still scanned and reported. But more subtly, unused dependencies with CVEs still appear in SCA scans. A vibe-coded project with 80 dependencies likely has more vulnerable packages than a focused project with 15.
Comment patterns as signals
Comments are one of the strongest signals of AI authorship — not their presence, but their specific character.
Obvious explanatory comments
AI adds comments that explain what code does, not why it was written that way. A comment like # Check if user is authenticated directly above if not user.authenticated: adds no information. Human developers comment on the "why" — # Must check before loading profile — race condition with session expiry.
Section headers
LLMs structure longer functions with section comments: # Step 1: Validate input, # Step 2: Process data, # Step 3: Return result. This is uncommon in human code outside of very long procedures.
TODO comments with specific wording
AI-generated code includes TODOs phrased in characteristic ways: # TODO: Add error handling here, # TODO: Consider edge cases, # TODO: Add input validation. These reflect the LLM acknowledging limitations without implementing solutions.
```python
# TODO: Add proper error handling
# TODO: Validate input before processing
# TODO: Consider edge cases for empty list
# TODO: Add logging here
# TODO: This could be optimized
# FIXME: This is a placeholder implementation
```
Security implication of AI TODOs: "TODO: Add input validation" means input validation is absent. In vibe-coded projects, these TODOs are frequently never acted on — they are security gaps waiting to be exploited. Grep your codebase for AI-style TODOs and treat them as security findings.
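A minimal sketch of such a grep, written in Python so match locations are easy to report (the regex and the sample file name are illustrative, not exhaustive):

```python
import re

# Hypothetical pattern: extend it to match the AI-style TODOs in your codebase.
AI_TODO = re.compile(r"#\s*(TODO|FIXME):\s*(add|consider|validate|implement)",
                     re.IGNORECASE)

def find_ai_todos(source, filename="<input>"):
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if AI_TODO.search(line):
            findings.append((filename, lineno, line.strip()))
    return findings

snippet = """\
def handler(payload):
    # TODO: Add input validation
    return payload["user"]
"""
for fname, lineno, text in find_ai_todos(snippet, "handler.py"):
    print(f"{fname}:{lineno}: {text}")  # handler.py:2: # TODO: Add input validation
```

Each hit is a candidate security finding: the TODO documents a control that does not exist.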
Security antipatterns in vibe-coded software
Multiple studies, including work from Stanford, NYU, and other institutions, have shown that AI-generated code has higher rates of security vulnerabilities than carefully written human code. The specific patterns are consistent:
Disabled SSL verification
LLMs frequently generate HTTP client code with SSL verification disabled — often as a "fix" for SSL errors encountered during development. This is catastrophic in production.
```python
# AI frequently generates this "fix" for SSL errors
response = requests.get(url, verify=False)

# The correct approach
response = requests.get(url, verify=True)  # or just omit verify= entirely
```
SQL string interpolation
Older training data contains pre-parameterised-query patterns. LLMs sometimes generate f-string or %-format SQL construction instead of parameterised queries.
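The difference is easy to demonstrate with the stdlib sqlite3 module (in-memory database and table invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "alice' OR '1'='1"  # attacker-controlled input

# Vulnerable: f-string interpolation splices user input into the SQL text
query = f"SELECT role FROM users WHERE name = '{name}'"
vuln_rows = conn.execute(query).fetchall()
print(vuln_rows)  # [('admin',)]: the OR clause matches every row

# Safe: a parameterised query treats the value as data, never as SQL
safe_rows = conn.execute("SELECT role FROM users WHERE name = ?",
                         (name,)).fetchall()
print(safe_rows)  # []: no user is literally named that string
```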
Hardcoded example credentials
AI-generated tutorial-style code frequently includes example credentials (password = "admin123", api_key = "test_key_replace_this") that never get replaced.
Broad exception swallowing
The pattern except Exception: pass or logging and continuing silently masks security-relevant errors — authentication failures, permission denials, signature verification errors.
Missing authentication decorators
When generating REST endpoint stubs rapidly, LLMs sometimes omit authentication decorators on endpoints that clearly require them, especially in frameworks where authentication is decorator-based (Flask-Login, FastAPI dependencies).
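A stripped-down sketch of this gap, using a hand-rolled decorator in place of any real framework (all names are hypothetical; Flask-Login's @login_required and FastAPI dependencies play the same role):

```python
from functools import wraps

# Minimal stand-in for a framework's authentication decorator.
def login_required(view):
    @wraps(view)
    def wrapper(user, *args, **kwargs):
        if user is None or not user.get("authenticated"):
            return ("401 Unauthorized", None)
        return view(user, *args, **kwargs)
    return wrapper

@login_required
def delete_account(user):        # protected: decorator present
    return ("200 OK", f"deleted {user['name']}")

def export_all_users(user):      # rapidly generated stub: decorator omitted
    return ("200 OK", "full user dump")

print(delete_account(None))      # ('401 Unauthorized', None)
print(export_all_users(None))    # ('200 OK', 'full user dump') -- the gap
```

The omission is invisible in a quick skim: both endpoints look complete and both "work" in a demo.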
False positives in security scanning of vibe code
Vibe-coded repositories create specific false positive challenges for SAST and secret scanning tools.
Test data that looks like credentials
AI-generated test files frequently contain placeholder credentials, example API keys with realistic formats, and test tokens. Secret scanners flag these correctly — they match the pattern of real secrets — but they are intentionally fake. The challenge is that vibe-coded projects rarely document which strings are test fixtures.
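A toy illustration of why the scanner is right to fire (the regex is a simplified stand-in for real secret-scanning rules, and the key is an invented fixture):

```python
import re

# Simplified scanner rule: flags anything shaped like a 32-char hex API key.
API_KEY_RE = re.compile(r"api[_-]?key\s*=\s*['\"]([0-9a-f]{32})['\"]",
                        re.IGNORECASE)

test_fixture = 'api_key = "0123456789abcdef0123456789abcdef"  # fake, for tests'
hits = API_KEY_RE.findall(test_fixture)
print(hits)  # one match: correct by pattern, yet a false positive,
             # because the key is an intentional test fixture
```

Without a documented convention for fixtures (a known prefix, a fixtures directory, an allowlist), the triage burden falls entirely on whoever reads the scan report.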
Over-broad error handling flagged as insecure logging
The AI-typical logger.error(f"Error: {e}") pattern may include exception objects that contain request data or user information in their string representation. SAST tools that detect PII in log statements may flag many of these — and they may be correct, but the volume can be overwhelming.
Unused but imported vulnerable libraries
SCA scanners flag all dependencies, including those imported but unused. Vibe-coded projects with dependency sprawl produce large SCA reports where many findings relate to libraries that are present but not actually invoked. The findings are technically accurate but operationally noisy.
Managing false positives in vibe-coded repos: Establish a triage process early. Use scanner suppression mechanisms (# nosec, .trivyignore, baseline files) for confirmed false positives — and document every suppression with the reason why it is a false positive.
False negatives — what scanners miss in AI code
Vibe-coded software also produces patterns that increase false negative rates — security issues that scanners miss.
Business logic flaws
SAST tools scan for syntactic patterns. Business logic vulnerabilities — an AI-generated checkout flow that skips price validation, or an authorisation check placed after the privileged operation — require semantic understanding that current tools do not provide. Vibe-coded business logic is particularly prone to these subtle flaws.
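A toy illustration of why this class of flaw is invisible to pattern matching (hypothetical checkout, invented catalog):

```python
CATALOG = {"widget": 25.00}

def checkout_vibe(cart, client_total):
    # Flaw: trusts the client-supplied total. Syntactically this is
    # unremarkable code, so SAST has nothing to flag.
    return {"charged": client_total}

def checkout_correct(cart, client_total):
    # Recompute the total server-side and reject mismatches.
    server_total = sum(CATALOG[item] * qty for item, qty in cart.items())
    if abs(server_total - client_total) > 0.001:
        raise ValueError("price mismatch")
    return {"charged": server_total}

cart = {"widget": 2}
print(checkout_vibe(cart, 0.01))      # attacker pays one cent
print(checkout_correct(cart, 50.00))  # {'charged': 50.0}
```

Both functions are clean, idiomatic Python; only semantic understanding of the checkout flow reveals that the first one lets the client set its own price.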
Broken authentication flows
Authentication implemented by an AI across multiple sessions may have subtle gaps — a token validation step in the happy path that is skipped in an error recovery branch, or a session invalidation call that the AI placed in the wrong location. These are logical errors, not syntactic patterns, and scanners frequently miss them.
Over-broad CORS configurations
LLMs frequently generate Access-Control-Allow-Origin: * to "fix" CORS errors during development. DAST can detect this, but static analysis of configuration files may miss the actual allowed origins depending on how they are computed at runtime.
The real pros of vibe coding
Vibe coding is popular for good reasons. Dismissing it entirely misses the genuine value it provides in the right contexts.
- Radical speed for prototyping: A solo founder can build a functional web application in hours instead of days. For validating ideas and building MVPs, this velocity is genuinely transformative.
- Accessibility for non-engineers: Product managers, data analysts, and domain experts can build internal tools and automations without deep programming expertise. This democratises software creation.
- Boilerplate elimination: CRUD endpoints, form validation, database schema migration scripts, test fixtures — AI handles these tedious tasks reliably, freeing engineers for higher-value work.
- Domain crossing: An experienced backend engineer can vibe-code a React frontend without learning React deeply. The AI provides competent code in the unfamiliar domain; the engineer applies their quality judgement.
- Documentation and tests: AI is excellent at generating tests for existing code and writing documentation. These tasks benefit from AI speed and thoroughness even in mature codebases.
- Reduced cognitive load for repetitive tasks: For tasks the developer has done dozens of times, vibe coding outsources the mechanical execution while the developer focuses on the problem design.
The right frame: Vibe coding is not categorically good or bad — it is a spectrum of AI involvement with different risk profiles at each point. The question is whether the level of AI involvement matches the risk tolerance of the context.
The real cons and risks
- Security debt compounds silently: Each vibe-coded feature adds security issues the developer may not understand or even recognise. The debt accumulates faster than it can be reviewed, especially in rapidly growing codebases.
- The developer cannot explain the code: Code review, incident debugging, and compliance audits require understanding what code does and why. If the author cannot explain a function, they cannot confidently assert it is correct or secure.
- Hallucinated APIs and deprecated patterns: LLMs occasionally generate code that calls functions that do not exist, use deprecated APIs, or implement patterns that were valid in older versions of a library. In unreviewed vibe-coded output, these errors may not be caught until runtime.
- Dependency risk: Libraries suggested by LLMs may be abandoned, supply-chain-compromised (typosquatting is a real threat), or simply inappropriate for the use case. Without review, these enter the dependency graph unchallenged.
- Maintenance debt: Vibe-coded code with inconsistent abstractions and orphaned functions becomes expensive to maintain. Future developers (or the future self) cannot navigate a codebase no one fully understands.
- False confidence: Code that works is not the same as code that is correct or secure. Vibe-coded software often works at the demo level while containing subtle data handling errors, race conditions, or security gaps that only manifest under real-world load or adversarial conditions.
- Compliance and auditability: Regulated environments (finance, healthcare, government) require that someone with appropriate expertise understands and can account for every security-relevant control in the system. "The AI wrote it and it seemed to work" is not a defensible audit position.
The velocity illusion: Vibe coding is extremely fast at producing code that appears to work. The hidden cost is the time required to find and fix the security issues, logic errors, and maintenance debt — which almost always exceeds the time saved at the generation stage, particularly in production systems.
Verdict: a practical framework
Rather than asking "is vibe coding good or bad?", ask "what level of AI involvement is appropriate for this context?"
| Context | Appropriate AI involvement |
| --- | --- |
| Prototype / throwaway demo | High — vibe coding acceptable |
| Internal tool / low-risk automation | Medium — review security-relevant paths |
| Production web application | Low-medium — review all AI output |
| Regulated system (HIPAA/PCI/SOC 2) | Low — expert review mandatory |
| Authentication / payments / crypto | Very low — human expert writes, AI assists |
The non-negotiable: scan everything
The one universal principle that applies regardless of AI involvement level: automated security scanning must run on every code change. Vibe-coded code needs scanning more urgently than carefully reviewed human code — not less. The AI generates faster; the scanner must keep up.
The appropriate tooling stack for vibe-coded repositories:
- SAST in IDE: Real-time feedback catches obvious issues as code is accepted from the AI
- Secret scanning on every commit: Catch the test credentials and placeholder keys before they reach the repo
- SCA with reachability analysis: Reduce noise from unused dependency CVEs in projects with dependency sprawl
- DAST before any public release: Find the business logic and authentication flaws that SAST cannot see
- Mandatory review for security-critical paths: Auth, payments, data handling — require human expert review regardless of how the code was generated
The honest summary: Vibe coding is a legitimate and powerful approach for the right contexts. The security community should not demonise it — but it must insist on automated scanning as the non-negotiable complement to AI-assisted development. The AI writes the code; the scanner makes it safe to ship.
Scan vibe-coded repos before they go live
AquilaX runs SAST, secret scanning, SCA, and DAST — automatically, on every PR — catching the security issues that AI coding assistants introduce before they reach production.
Start scanning AI-generated code →