Why AI-Generated Code Is Insecure (And What You Can Do About It)
AI coding tools like Cursor, Lovable, Bolt, and Replit are transforming how software gets built. Solo founders are shipping full-stack apps in days. Non-technical builders are creating SaaS products without writing a line of code themselves. But there's a problem nobody's talking about: the code these tools generate is often deeply insecure.
The Numbers Are Alarming
Research from Stanford found that developers using AI assistants produce significantly less secure code than those who don't. A separate study found that roughly 45% of code generated by large language models contains security vulnerabilities. These aren't edge cases — they're common patterns that appear in almost every AI-generated codebase.
What AI Gets Wrong
AI models are trained to produce code that works, not code that's secure. Here are the most common vulnerabilities we see in AI-generated apps:
1. Hardcoded Secrets
AI tools frequently embed API keys, database credentials, and secret tokens directly in source code. When you ask an AI to connect to Stripe, Supabase, or OpenAI, it often puts the key right in the file rather than using environment variables. Once you push to GitHub, those keys are exposed to the world.
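The safer pattern is to load every secret from the environment and fail fast when one is missing. Here's a minimal sketch; the `requireEnv` helper and the `STRIPE_SECRET_KEY` variable name are illustrative, not part of any library:

```typescript
// INSECURE (the pattern AI tools often emit): the key is in source control
// the moment this file is pushed.
//   const stripe = new Stripe("sk_live_...");

// Safer: read the secret from the environment and fail fast if it's absent.
// `requireEnv` is a hypothetical helper, not any library's API.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (the real key lives in a gitignored .env file or your host's
// secret manager, never in the repo):
//   const stripeKey = requireEnv("STRIPE_SECRET_KEY");
```

Failing fast matters: a missing key should crash at startup with a clear message, not silently fall back to an empty string at request time.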
2. Missing Authentication
AI-generated API routes often lack authentication middleware. The AI creates a beautiful CRUD endpoint but forgets to check if the user is logged in. Anyone who discovers the URL can read, modify, or delete data without any authorization.
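The fix is a small auth guard applied before every protected route. The sketch below uses Express-style `(req, res, next)` middleware; the `Req`/`Res` types and the `verifyToken` callback are illustrative stand-ins, not any specific framework's API:

```typescript
// Minimal shapes standing in for a framework's request/response objects.
type Req = { headers: Record<string, string | undefined>; userId?: string };
type Res = { status: (code: number) => { json: (body: unknown) => void } };

// Express-style middleware factory: reject the request unless a valid
// bearer token resolves to a user ID.
function requireAuth(verifyToken: (token: string) => string | null) {
  return (req: Req, res: Res, next: () => void) => {
    const header = req.headers["authorization"] ?? "";
    const token = header.startsWith("Bearer ") ? header.slice(7) : "";
    const userId = token ? verifyToken(token) : null;
    if (!userId) {
      res.status(401).json({ error: "Authentication required" });
      return; // never fall through to the route handler
    }
    req.userId = userId;
    next();
  };
}
```

The key design point: authentication is opt-out, not opt-in. Mount the guard on the whole router so a forgotten check on one new endpoint doesn't become a public data leak.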
3. SQL Injection
When AI generates database queries, it frequently uses string interpolation instead of parameterized queries. This classic vulnerability lets attackers inject malicious SQL and dump your entire database. AI models have seen millions of examples of both patterns — and often pick the insecure one.
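To see why interpolation is dangerous, trace what a classic payload does to the query text. The `unsafeQuery` helper below is purely illustrative:

```typescript
// INSECURE: attacker-controlled input becomes part of the SQL text itself.
function unsafeQuery(email: string): string {
  return `SELECT * FROM users WHERE email = '${email}'`;
}

// A classic payload turns the WHERE clause into a tautology matching every row:
const payload = "x' OR '1'='1";
// unsafeQuery(payload) yields:
//   SELECT * FROM users WHERE email = 'x' OR '1'='1'

// Safer: a parameterized query sends the SQL and the value to the database
// separately, so the payload stays a plain string. Placeholder syntax varies
// by driver ($1 for node-postgres, ? for mysql2/sqlite):
//   await db.query("SELECT * FROM users WHERE email = $1", [payload]);
```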
4. Cross-Site Scripting (XSS)
AI tools generate React components that use dangerouslySetInnerHTML without sanitization, or Vue templates with v-html that render user input directly. Attackers exploit this to steal session cookies, redirect users, or deface your app.
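The core defense is escaping user input before it ever reaches `innerHTML`. A minimal escaper looks like this; it's a sketch, and when you genuinely must render user-supplied HTML, a maintained sanitizer library such as DOMPurify is the right tool:

```typescript
// Minimal HTML escaper: neutralizes the five characters that let user input
// break out of a text context and become markup. Order matters: escape "&"
// first so already-escaped entities aren't double-encoded incorrectly.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// escapeHtml('<img src=x onerror=alert(1)>') renders as inert text, not markup.
```

Note that React already escapes interpolated values by default; the danger is precisely the escape hatches (`dangerouslySetInnerHTML`, `v-html`) that AI tools reach for when asked to "render this content."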
5. Unverified Webhooks
Stripe, Clerk, and other services send webhooks to your app. AI-generated webhook handlers often skip signature verification, meaning anyone can forge fake payment events and mark orders as paid without actually paying.
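The defense is to recompute the signature over the raw request body and compare it in constant time. The sketch below is a generic HMAC-SHA256 check, not Stripe's exact scheme (which also binds a timestamp to prevent replay); in practice you'd call the provider's official helper, e.g. `stripe.webhooks.constructEvent`:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Generic HMAC-SHA256 webhook check: recompute the signature over the raw
// body with the shared secret, then compare in constant time so attackers
// can't forge a signature byte by byte via timing differences.
function verifySignature(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

One subtlety: the signature is computed over the raw bytes, so your framework must give the handler the unparsed body (in Express, `express.raw()` on the webhook route) or verification will fail on legitimate events.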
Why AI Can't Fix Itself
You might think: “Why not just ask the AI to make the code secure?” The problem is that LLMs don't have a security model. They predict the most likely next token based on training data. If insecure patterns appear more frequently in the training data (and they do), the AI will reproduce them.
Even when you explicitly ask for “secure code,” the AI might add a comment saying // TODO: add authentication while still generating an unprotected route. It knows what security looks like in theory but doesn't enforce it in practice.
The Solution: Scan Before You Ship
The fix isn't to stop using AI coding tools — they're genuinely transformative. The fix is to scan your code for vulnerabilities before deploying, just like you'd run tests before shipping.
That's why we built XploitScan. One command scans your entire codebase with 131 security rules purpose-built for the kinds of mistakes AI makes:
$ npx xploitscan scan .

No install. No config. No account required. Runs locally — your code never leaves your machine.
Every vulnerability is explained in plain English with a copy-paste fix suggestion. No security expertise required. If you can read the output of a linter, you can use XploitScan.
What You Should Do Right Now
- Scan your current project — Run npx xploitscan scan . and see what comes up
- Fix critical and high findings first — Focus on secrets, injection, and missing auth
- Add scanning to your CI pipeline — Catch vulnerabilities in every PR with our GitHub Action
- Scan regularly — Every time AI generates new code, scan again
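The CI step above can be wired up with a short GitHub Actions workflow. This sketch runs the npx command directly; the dedicated GitHub Action mentioned above would replace the final run step, and the workflow name and trigger are illustrative:

```yaml
# .github/workflows/security-scan.yml (illustrative file name)
name: security-scan
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npx xploitscan scan .
```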
Ready to secure your AI-generated code?
131 security rules. Plain-English results. Free to start.
Scan Now — Free