In February 2025, Andrej Karpathy posted on X:
“There’s a new kind of coding I call ‘vibe coding’, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”
By March 2026, the bill arrived.
The CVE Spike
Researchers at Georgia Tech track a metric called the Vibe Security Radar. It counts CVEs formally attributed to AI-generated code.
Here is what they found in Q1 2026:
- January: 6 CVEs
- February: 15 CVEs
- March: 35 CVEs
That is nearly a 6x increase in a single quarter.
And that is only the confirmed cases. Most AI coding tools leave no identifiable commit metadata. Researchers estimate the true number of AI-introduced vulnerabilities across the open-source ecosystem is 400 to 700 cases — 5 to 10 times higher than what gets officially tracked.
Source: Cloud Security Alliance — AI-Generated CVE Surge 2026
What Veracode Found
Veracode tested over 100 large language models across 80 coding tasks in Java, Python, C#, and JavaScript. The task: write code for security-sensitive scenarios.
The result: 45% of AI-generated code samples introduced OWASP Top 10 vulnerabilities.
The breakdown by vulnerability type was alarming:
- 86% of samples were vulnerable to cross-site scripting (XSS)
- 88% were vulnerable to log injection
- Java had a 72% security failure rate overall
This was not a one-time result. Veracode ran testing cycles across 2025 and into early 2026. The pass rate did not improve, despite vendor claims that models were getting safer.
Source: Veracode GenAI Code Security Report
What GitGuardian Found
GitGuardian tracks hardcoded secrets — API keys, tokens, passwords — leaked in public GitHub commits.
Their 2025 State of Secrets Sprawl report (published March 2026) found:
- 29 million secrets were leaked on public GitHub in 2025 — a 34% year-over-year increase and the largest single-year jump ever recorded
- Leaks tied to AI services surged 81% year-over-year
- AI-assisted commits had a 3.2% secret-leak rate versus 1.5% for non-AI commits — roughly double the baseline
- 24,008 unique secrets were found exposed in Model Context Protocol (MCP) configuration files alone
Source: GitGuardian State of Secrets Sprawl 2026
Real Incidents in 2026
These numbers are not abstract. There are real incidents.
Moltbook (January 2026)
Moltbook was an AI social network whose founder built the entire platform with AI prompts, writing no code by hand. It launched on January 28, 2026, to significant attention.
By January 31st — three days later — security researchers at Wiz discovered the production database was wide open. The breach exposed 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents.
Root cause: a misconfigured Supabase database with row-level security disabled. A classic OWASP vulnerability the AI did not catch — and the founder did not review.
Source: Wiz Blog — Hacking Moltbook
The Lovable Platform Problem (2025–2026)
A separate analysis of 1,645 web applications built with the Lovable vibe coding platform found 2,038 critical vulnerabilities across 1,400 of them. Roughly 70% of Lovable-built apps shipped with row-level security disabled on Supabase tables — meaning any authenticated user could read, modify, or delete any other user’s data via a simple API call.
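As a sketch of the fix (table and column names here are hypothetical), enabling row-level security on a Supabase/Postgres table takes two statements: turn RLS on, then add a policy that scopes rows to their owner. With RLS enabled and no policy, the table denies everything by default.

```sql
-- Enable row-level security on a hypothetical "messages" table.
-- Without this, any authenticated client with the anon key can
-- read and write every row.
alter table messages enable row level security;

-- Allow users to read only rows they own. auth.uid() is Supabase's
-- helper returning the current authenticated user's ID.
create policy "messages_select_own"
  on messages for select
  using (auth.uid() = user_id);
```

A policy like this is exactly what the 70% of affected Lovable apps were missing.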
Why AI Code Is More Vulnerable
There are three main reasons.
1. AI models are trained to make code work, not to make it safe.
The training objective is functional correctness. If the code runs and returns the expected output, the model is rewarded. Security properties — least privilege, input validation, secret handling — are not part of basic correctness checks.
2. PR volume outpaces review capacity.
Developers using AI coding tools commit code at 3 to 4 times the rate of non-AI peers. According to Apiiro, this caused security findings at one enterprise to increase 10-fold — from 1,000 to over 10,000 monthly issues. Privilege escalation paths rose 322%. Architectural design flaws rose 153%.
There are simply more PRs than reviewers can meaningfully inspect. Code ships before it is checked.
3. Secrets end up in the wrong places.
AI models suggest code that looks complete. Developers accept it. The suggestion includes an API key or a hardcoded password — copied from a training example — and it goes straight into a commit.
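A minimal sketch of the safer pattern: read secrets from the environment at runtime and fail fast when they are absent, instead of accepting a hardcoded literal. The variable name `PAYMENT_API_KEY` is illustrative.

```python
import os

# BAD -- the pattern AI suggestions often reproduce from training data:
# API_KEY = "sk-live-abc123"   # lands in commit history forever

def load_secret(name: str) -> str:
    """Read a secret from the environment; fail fast if it is missing."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} is not set; refusing to start")
    return value

# BETTER -- the secret never appears in the repository:
# API_KEY = load_secret("PAYMENT_API_KEY")
```

Failing fast at startup is deliberate: an empty credential that silently ships is harder to diagnose than a process that refuses to boot.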
The Counter-Framework: How Professionals Use AI Safely
The solution is not to stop using AI coding tools. The solution is to add process layers that AI cannot skip.
Here is a concrete checklist.
1. Test-First Principle
Write your tests before you generate code. Use AI to generate the implementation. Then verify it passes your tests.
This forces the AI output to satisfy real correctness criteria — not just the AI’s own sense of “working.”
# Write tests first
# Then generate:
cursor "implement the UserAuth class that passes auth_test.py"
2. Static Analysis in the CI Gate
Run a SAST (Static Application Security Testing) tool on every PR, not just major releases. Free options include:
- Semgrep (open-source, OWASP rules included)
- Bandit for Python
- ESLint with security plugins for JavaScript
Add it to your GitHub Actions workflow:
- name: Run Semgrep
  uses: returntocorp/semgrep-action@v1
  with:
    config: p/owasp-top-ten
If the SAST gate fails, the PR does not merge. No exceptions.
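For context on what the `p/owasp-top-ten` ruleset flags, here is a minimal sketch of the classic finding, SQL built by string interpolation, contrasted with the parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table users (name text)")
conn.execute("insert into users values ('alice')")

# Attacker-controlled input attempting a classic injection.
name = "alice' or '1'='1"

# BAD -- string interpolation; SAST rules flag this, and the injected
# clause would match every row:
# rows = conn.execute(f"select * from users where name = '{name}'").fetchall()

# GOOD -- parameterized query; the driver treats the whole string as a
# literal value, so the malicious input matches nothing.
rows = conn.execute("select * from users where name = ?", (name,)).fetchall()
```

This is precisely the category of bug Veracode found in nearly half of generated samples, and a CI gate catches it mechanically.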
3. Secret Scanning — Always On
Never accept an AI-generated code block that contains anything that looks like a secret. Use a pre-commit hook to catch it before it hits the remote.
# Install gitleaks
brew install gitleaks
# Add pre-commit hook
gitleaks protect --staged
GitGuardian also offers a free GitHub integration that scans every push in real time.
4. Agent Isolation — Minimal Permissions
When you give an AI agent access to your codebase or CI pipeline, give it the minimum permissions it needs. Do not give it write access to production databases. Do not give it credentials to external services.
The Codex CLI shell injection vulnerability (patched in early 2026 by OpenAI) is a clear example: AI agents with shell access are a new attack surface. Treat agent permissions the same way you treat service account permissions.
5. Review Every Generated File — Actually Read It
The Moltbook founder did not read the code. That is the core problem.
Vibe coding gives you speed. But speed without review is just faster debt accumulation. Read what the AI writes. Not every line — but the structure, the data flows, the configuration files, the authentication logic.
A useful rule: any file that touches user data, authentication, or external APIs requires a human review before merge.
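One lightweight way to enforce that rule is a CODEOWNERS file (paths and team names below are illustrative). With branch protection enabled, GitHub requires an approving review from the listed owners before a matching PR can merge:

```
# .github/CODEOWNERS -- hypothetical paths; adapt to your repo layout.
/src/auth/**       @security-team
/src/api/**        @backend-leads
supabase/**        @security-team
```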
6. Keep Dependencies Fresh
AI models sometimes reference outdated libraries or suggest packages that no longer exist (hallucination rate: approximately 20% of generated samples, per USENIX Security 2025). An outdated dependency can carry a known CVE that the AI had no way to know about.
Use a dependency scanner in your pipeline:
# Python
pip-audit
# Node.js
npm audit
# Go
govulncheck ./...
The Verdict
Vibe coding is not going away, and the speed gains are real: 72% of developers use AI coding tools daily, and 41% of all code written globally is now AI-generated.
But the productivity gain comes with a proportional security debt — unless you build process discipline around it.
The developers who will come out ahead are not the ones who write the most AI-generated code. They are the ones who write the most AI-generated code that does not become a CVE.
Simon Willison, creator of Datasette, put it well: writing code is now near-free. The expensive part is architecture, test design, and domain knowledge. That has not changed. If anything, it matters more now — because the AI can write the code for you, but it cannot tell you what the code should do, who should be allowed to access it, or what happens when it goes wrong.
Security Checklist (Quick Reference)
[ ] Write tests before generating code
[ ] Add SAST (Semgrep / Bandit) to every PR
[ ] Enable secret scanning (gitleaks or GitGuardian)
[ ] Minimum permissions for all AI agents
[ ] Row-level security is ON (verify Supabase / DB config)
[ ] Read every file that touches auth, user data, or external APIs
[ ] Run dependency audit in CI (pip-audit / npm audit)
[ ] Review AI-suggested packages — confirm they exist and are maintained
Sources
- Cloud Security Alliance — Vibe Coding’s Security Debt: The AI-Generated CVE Surge
- Veracode — GenAI Code Security Report
- GitGuardian — State of Secrets Sprawl 2026
- Wiz Blog — Hacking Moltbook: Exposed Database Reveals Millions of API Keys
- Trend Micro — The Real Risk of Vibecoding
- Apiiro — 4x Velocity, 10x Vulnerabilities
- Infosecurity Magazine — Moltbook Exposes User Data, API Keys and More
- The Register — Using AI to code does not mean your code is more secure