I recently reviewed a vibe-coded app that had been live for three months. The founder was proud of it, and rightfully so. It worked, it had users, it was making money. But within an hour of looking at the code, I found a way to access any user's data by changing a number in the URL.
The founder had no idea. The AI that wrote the code had no idea. And the users whose data was exposed had no idea either.
This is not an isolated case. I've now reviewed enough AI-generated codebases to see clear patterns in the security mistakes these tools make. And the scary part is that most of these vulnerabilities are invisible to non-technical founders until something goes wrong.
Why AI Makes Security Mistakes
AI coding tools are trained on vast amounts of code from the internet. Some of that code is secure. A lot of it is not. The AI learns patterns, but it doesn't learn intent. It doesn't understand why security matters. It just generates code that looks right and functions correctly.
Research from Stanford University found that developers using AI assistants produced code with significantly more security vulnerabilities than developers working without AI help, and that those same developers were more likely to believe their code was secure when it wasn't.
This creates a dangerous combination: more vulnerabilities and more confidence.
The Most Common Vulnerabilities I See
After reviewing dozens of vibe-coded applications, these are the security issues that come up again and again:
Broken Access Control
This is the vulnerability I mentioned at the start. The app checks if you're logged in, but it doesn't check if you're allowed to access the specific resource you're requesting.
Request your own data as user 1 and everything is fine. Change the number in the URL to user 2 and the server returns their data too: it checked that you were logged in, but not whether you were allowed to access user 2's data.
The AI generated code that verifies authentication but forgot about authorization.
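Here's a minimal sketch of what the missing check looks like, using an Express route. The route path, getCurrentUserId, and getOrdersForUser are illustrative stand-ins, not code from the app I reviewed:

```typescript
import express, { type Request } from "express";

const app = express();

// Illustrative stand-ins: assume your session middleware can tell you who is
// logged in, and that you have a query helper for orders.
function getCurrentUserId(req: Request): string | null {
  return null; // placeholder; in a real app this reads the session
}

async function getOrdersForUser(userId: string): Promise<unknown[]> {
  return []; // placeholder for a real database query
}

// The vulnerable pattern stops at authentication: any logged-in user can pass
// any id, which is exactly the "change a number in the URL" problem.
// This version adds the missing authorization check.
app.get("/api/users/:id/orders", async (req, res) => {
  const currentUserId = getCurrentUserId(req);
  if (!currentUserId) {
    res.status(401).json({ error: "Not logged in" }); // authentication
    return;
  }
  if (req.params.id !== currentUserId) {
    res.status(403).json({ error: "Forbidden" }); // authorization
    return;
  }
  res.json(await getOrdersForUser(req.params.id));
});
```

The second check is the one AI-generated code usually skips: does this specific resource belong to this specific user?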
OWASP ranks Broken Access Control as the number one web application security risk. It's the most common serious vulnerability in web applications, and AI tools reproduce it constantly because most tutorial code doesn't demonstrate proper authorization patterns.
Exposed API Keys and Secrets
AI-generated code frequently includes API keys, database credentials, and other secrets directly in the source code. Sometimes these end up in frontend code that gets sent to browsers. Sometimes they get committed to Git repositories that later become public.
A typical find: a live Stripe secret key, something like "sk_live_51ABC...xyz", sitting in frontend code because the AI copied the pattern from example code. That key can create charges on your Stripe account. Secret keys in frontend code are visible to everyone, and attackers actively scan for these patterns.
I've seen production database credentials in React components. I've seen Stripe secret keys (the ones that can charge cards) in client-side JavaScript. The AI sees patterns in example code where credentials are hardcoded for demonstration purposes and reproduces those patterns in production code.
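The fix is boring but effective: keep secrets on the server and load them from environment variables. A minimal sketch assuming the Stripe Node library; the STRIPE_SECRET_KEY name is just a common convention, so use whatever your hosting platform expects:

```typescript
// What AI tools often generate: a secret pasted straight into code that ships
// to the browser or gets committed to Git. (The key value here is made up.)
// const stripe = new Stripe("sk_live_51ABC...xyz");

// Safer pattern: read the secret from an environment variable, on the server
// only, and fail loudly if it is missing. Frontend code should never see it.
import Stripe from "stripe";

const secretKey = process.env.STRIPE_SECRET_KEY;
if (!secretKey) {
  throw new Error("STRIPE_SECRET_KEY is not set"); // catch misconfiguration early
}

const stripe = new Stripe(secretKey);
```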
GitGuardian's 2024 State of Secrets Sprawl report found over 12 million new secrets exposed in public GitHub repositories in a single year. Many of these came from developers (and increasingly, AI tools) not understanding the difference between example code and production code.
SQL Injection
This one surprised me because it's such a well-known vulnerability. But AI tools still generate code that's vulnerable to SQL injection, especially when building custom queries or working with less common database operations.
Here's what the WHERE clause of a login query looks like with normal input:

WHERE email = 'alice@example.com'
AND password = 'hashed_password'

This works fine. But an attacker can submit ' OR '1'='1' -- as the email, and the query becomes:

WHERE email = '' OR '1'='1' --'
AND password = 'anything'

The -- comments out the rest of the query. The condition '1'='1' is always true. The attacker logs in as the first user in the database (usually an admin).
The AI generates code that builds SQL queries by concatenating user input directly into the query string. An attacker can input specially crafted text that changes what the query does. In the worst case, they can read your entire database or delete everything in it.
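The fix is parameterized queries, where the driver keeps user input separate from the query itself. Here's a sketch using the node-postgres (pg) driver; the users table and its columns mirror the example above and are illustrative:

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from environment variables

// Vulnerable pattern: user input concatenated straight into the SQL string.
// An email of  ' OR '1'='1' --  rewrites the query, as shown above.
async function loginUnsafe(email: string, passwordHash: string) {
  const sql =
    `SELECT * FROM users WHERE email = '${email}' AND password = '${passwordHash}'`;
  return pool.query(sql);
}

// Safer pattern: a parameterized query. The driver sends the values separately,
// so they can never change the structure of the query itself.
async function loginSafe(email: string, passwordHash: string) {
  return pool.query(
    "SELECT * FROM users WHERE email = $1 AND password = $2",
    [email, passwordHash]
  );
}
```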
The 2024 Verizon Data Breach Investigations Report found that web application attacks, including SQL injection, were involved in 26% of breaches. These are decades-old vulnerabilities that we keep recreating.
Missing Input Validation
AI-generated code tends to trust user input. It assumes that if the frontend sends data in a certain format, that's the format it will always be in. But attackers don't use your frontend. They send whatever data they want directly to your API.
A typical example: a signup form whose role dropdown only offers "User", so it looks safe. But attackers don't use your form. They send requests directly to the API with a role of "admin" in the body, and the AI-generated backend accepts whatever role is sent, because the only validation lives in the frontend, which attackers bypass entirely. The result: anyone could sign up with admin privileges.
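The fix is to validate everything again on the server and never let the client choose its own privileges. A sketch using the zod validation library; the field names mirror the example above and are illustrative:

```typescript
import { z } from "zod";

// Server-side schema for the signup payload. Adapt the fields to your own API.
const SignupSchema = z.object({
  email: z.string().email(),
  password: z.string().min(12),
});

export function parseSignup(body: unknown) {
  const result = SignupSchema.safeParse(body);
  if (!result.success) {
    throw new Error("Invalid signup payload"); // reject anything malformed
  }
  // The client never gets to pick its own privileges: the role is assigned on
  // the server, regardless of what was sent in the request body.
  return { ...result.data, role: "user" as const };
}
```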
Insecure Authentication
This one takes many forms:
- Passwords stored in plain text or with weak hashing
- Session tokens that don't expire
- Password reset flows that can be exploited
- Missing rate limiting that allows unlimited login attempts
- Tokens stored in localStorage where they can be stolen by XSS attacks
AI tools often generate authentication code that works but cuts corners on security. The code lets users log in and out, so it appears to function correctly. But the underlying implementation has gaps that an attacker can exploit.
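Two of these gaps are cheap to close even if the rest of the code is AI-generated: hash passwords with a slow algorithm, and keep session tokens in an expiring httpOnly cookie rather than localStorage. A sketch using the bcrypt library and an Express response (the work factor, cookie name, and lifetime are illustrative), though as I argue below, an established auth library is usually the better answer:

```typescript
import bcrypt from "bcrypt";
import type { Response } from "express";

// Strong, slow hashing instead of plain text or a fast general-purpose hash.
export async function hashPassword(plaintext: string): Promise<string> {
  return bcrypt.hash(plaintext, 12); // work factor 12 is a common baseline
}

export async function verifyPassword(plaintext: string, hash: string): Promise<boolean> {
  return bcrypt.compare(plaintext, hash);
}

// Session token in an httpOnly cookie that expires, rather than localStorage
// where any injected script can read it.
export function setSessionCookie(res: Response, token: string) {
  res.cookie("session", token, {
    httpOnly: true,              // not readable from JavaScript
    secure: true,                // HTTPS only
    sameSite: "lax",
    maxAge: 1000 * 60 * 60 * 2,  // expires after two hours
  });
}
```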
The Real Cost of These Vulnerabilities
Security vulnerabilities aren't just theoretical risks. They have real consequences:
Data breaches. If you're storing user data and it gets exposed, you have legal obligations under GDPR and other regulations. The average cost of a data breach in 2024 was $4.88 million globally, according to IBM's annual report. For small businesses, a breach can be existential.
Reputation damage. Users trust you with their data. Breach that trust and they won't come back. Neither will the users they tell about it.
Legal liability. Depending on your industry and the data you handle, security failures can result in fines, lawsuits, and regulatory action.
Account takeovers. If your authentication is weak, attackers can take over user accounts. Those users will blame you, not the attacker.
What You Can Do About It
The point of this article isn't to scare you away from AI coding tools. They're useful and they're not going away. The point is to help you understand the risks so you can address them.
Before You Launch
Get a security review. Seriously. Before you put real user data into your application, have someone with security experience look at the code. This doesn't have to be expensive. A few hours of expert review can catch the most critical issues.
At minimum, go through this checklist:
- Are API keys and secrets stored in environment variables, not in code?
- Does every API endpoint verify that the user is authorized to access that specific resource?
- Is user input validated on the server, not just the client?
- Are passwords hashed with a strong algorithm (bcrypt, argon2)?
- Do session tokens expire?
- Is there rate limiting on login attempts?
- Are database queries parameterized to prevent SQL injection?
- Is sensitive data encrypted in transit (HTTPS) and at rest?
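To make one of those items concrete, here's a sketch of rate limiting on a login route using the express-rate-limit middleware; the window and attempt count are illustrative, so tune them for your own traffic:

```typescript
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();

// Throttle login attempts so an attacker can't try thousands of passwords.
const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15-minute window
  max: 10,                  // at most 10 attempts per IP per window
  standardHeaders: true,    // report the limit in RateLimit-* headers
  legacyHeaders: false,
});

app.post("/api/login", loginLimiter, (req, res) => {
  // ...the normal login handling goes here
  res.status(501).json({ error: "Not implemented in this sketch" });
});
```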
Use Security Headers
This is one of the easiest wins. Security headers tell browsers how to behave when loading your site and can prevent entire classes of attacks.
AI-generated code rarely includes proper security headers. You can test your site's headers at securityheaders.com and get specific recommendations for what to add.
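If you're running on Express, the helmet middleware sets a sensible baseline in a few lines. A sketch, where the Content-Security-Policy directive shown is a starting point rather than a drop-in config:

```typescript
import express from "express";
import helmet from "helmet";

const app = express();

// helmet sets a baseline of security headers in one call: X-Content-Type-Options,
// Strict-Transport-Security, X-Frame-Options, and more. Here the
// Content-Security-Policy is tightened to same-origin resources only; adjust
// the directives if you load scripts or styles from a CDN.
app.use(
  helmet({
    contentSecurityPolicy: {
      directives: {
        defaultSrc: ["'self'"],
      },
    },
  })
);
```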
Don't Roll Your Own Auth
AI tools will happily generate custom authentication code for you. Don't use it. Use established authentication libraries or services like Auth0, Clerk, Laravel Sanctum, or NextAuth. These are built by security experts and battle-tested by millions of users.
The same goes for payment processing. Use Stripe or Polar, not custom code. Use established libraries for anything security-sensitive.
Assume the Frontend is Compromised
Everything that happens in the browser can be seen and modified by the user. Any validation that happens only in the frontend can be bypassed. Any data sent from the frontend can be faked.
This means:
- Validate all input on the server
- Check authorization on the server
- Never trust client-side role or permission claims
- Don't expose sensitive operations through API endpoints just because the UI doesn't show them
Keep Dependencies Updated
AI-generated code often includes dependencies with known vulnerabilities. Run npm audit (for Node.js projects) or equivalent tools for other languages. Update packages that have security issues. Set up automated alerts for new vulnerabilities in your dependencies.
When Security Reviews Make Sense
Not every project needs a full penetration test. But some level of security review makes sense in these situations:
Before handling real user data. Once you have actual users with actual data, the stakes go up. A review before this point can catch issues before they become breaches.
Before processing payments. Financial data is a high-value target. If you're taking credit cards or handling money, security matters more.
Before fundraising. Technical due diligence often includes security review. Better to find and fix issues before investors find them.
When you're regulated. Healthcare, finance, gaming, and other industries have specific security requirements. Make sure you meet them before regulators check.
When something feels off. Trust your instincts. If you're worried about security, that's a good reason to get an expert opinion.
The Bigger Picture
AI coding tools have democratized software development in a way that's genuinely exciting. People who couldn't build software a year ago are now shipping products. That's a good thing.
But the security knowledge that professional developers accumulate over years doesn't automatically come with these tools. The AI generates code that works, but working and secure are different things.
The good news is that the most critical vulnerabilities are fixable. They follow patterns. Once you know what to look for, you can address them. And the investment in security now is much smaller than the cost of a breach later.
Build fast with AI. But before you go live with real users and real data, take the time to make sure you're not exposing them to risks they don't know about.
Concerned about security?
Our Security Scan checks for the vulnerabilities covered in this article and more. I'll personally review your code and tell you exactly what's at risk and how to fix it.
