Security Is Not Just an Ops Problem
Security vulnerabilities in production often originate in code. Not in infrastructure misconfiguration or zero-day exploits, but in patterns that developers write every day without realising the risk. A hardcoded API key, a careless use of innerHTML, or a permissive CORS policy can open doors that no firewall can close.
This article covers the most common security anti-patterns found in web codebases, explains why each one is dangerous, and shows how to fix it. These are not theoretical risks. They are patterns that appear in real repositories every day.
Committed Secrets
The Risk
When API keys, database credentials, or private keys are committed to a repository, they become part of the permanent history. Even if removed in a later commit, they remain accessible to anyone with read access to the repository. Automated scanners trawl public repositories specifically looking for these patterns.
Common offenders include AWS access keys, Stripe secret keys, GitHub tokens, database connection strings, and private key files (PEM, PPK).
The Fix
Use environment variables for all secrets and credentials. Store them in .env files locally and add .env to your .gitignore. For production, use your platform's secret management (Vercel environment variables, AWS Secrets Manager, Cloudflare secrets).
Add a pre-commit hook or CI step that scans for secret patterns before code reaches the repository. Tools like gitleaks or trufflehog can detect secrets in staged changes.
If a secret has already been committed, rotate it immediately. Removing it from the codebase is not enough because the old value remains in git history.
Dangerous Use of eval()
The Risk
eval() executes arbitrary JavaScript code at runtime. If user input ever reaches an eval() call, an attacker can execute any code they choose in the context of your application. This includes reading environment variables, making network requests, or modifying application state.
The same risk applies to new Function(), setTimeout with a string argument, and setInterval with a string argument.
The Fix
There is almost never a legitimate reason to use eval() in application code. JSON parsing should use JSON.parse(). Dynamic property access should use bracket notation. Template rendering should use a proper templating engine.
If you encounter eval() in your codebase, treat it as a high-priority finding. Replace it with the appropriate safe alternative and add a lint rule to prevent it from being reintroduced.
// Dangerous
const result = eval(userInput);
// Safe alternatives
const data = JSON.parse(jsonString);
const value = obj[propertyName];
innerHTML and Cross-Site Scripting (XSS)
The Risk
Setting innerHTML with user-supplied content allows an attacker to inject arbitrary HTML and JavaScript into your page. This is cross-site scripting (XSS), and it remains one of the most common web vulnerabilities. An attacker can steal session cookies, redirect users, or modify page content.
Framework equivalents carry the same risk: React's dangerouslySetInnerHTML, Vue's v-html, and Angular's bypassSecurityTrustHtml.
The Fix
Use textContent instead of innerHTML when inserting plain text. When you need to render HTML, sanitise it first with a library like DOMPurify.
Modern frameworks escape content by default. The danger comes from opting out of that protection. If you find dangerouslySetInnerHTML in your codebase, verify that the content is either from a trusted source (your own CMS, not user input) or sanitised before rendering.
// Dangerous
element.innerHTML = userComment;
// Safe
element.textContent = userComment;
// When HTML is genuinely needed
import DOMPurify from 'dompurify';
element.innerHTML = DOMPurify.sanitize(trustedHtml);
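When pulling in a sanitisation library is not an option and only text needs to be interpolated into markup, a minimal escaper covers the five characters with special meaning in HTML. This is a sketch for text contexts only, not a substitute for DOMPurify when rendering rich HTML:

```javascript
// Escape the characters with special meaning in HTML text and
// attribute contexts. The ampersand must be replaced first so that
// later replacements are not double-escaped.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

Note that this only protects HTML text and quoted attribute values; URLs, inline event handlers, and script contexts each need their own handling.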
SQL Injection
The Risk
SQL injection occurs when user input is concatenated directly into a SQL query string. An attacker can modify the query to read, modify, or delete data they should not have access to. In severe cases, they can extract the entire database or gain administrative access.
This is not limited to raw SQL. Any ORM or query builder that accepts raw string interpolation is vulnerable.
The Fix
Use parameterised queries or prepared statements. Every modern database client supports them. The key principle is simple: never concatenate user input into a query string.
// Dangerous
const query = `SELECT * FROM users WHERE email = '${email}'`;
// Safe: parameterised query
const { data } = await supabase
  .from('users')
  .select('*')
  .eq('email', email);
// Safe: prepared statement
const result = await db.query(
  'SELECT * FROM users WHERE email = $1',
  [email]
);
If your codebase uses an ORM, stick to the ORM's query API. Raw query methods should be flagged in code review and checked for parameterisation.
CORS Misconfiguration
The Risk
Cross-Origin Resource Sharing (CORS) controls which origins may make browser requests to your API. A permissive configuration, particularly one that reflects any request's Origin header while setting Access-Control-Allow-Credentials: true, allows any website to make authenticated requests to your API on behalf of your users. (Browsers reject a literal Access-Control-Allow-Origin: * alongside credentials, which is why permissive setups typically reflect the origin instead; the effect is the same.)
This effectively disables the browser's same-origin policy, one of the most important security boundaries in web applications.
The Fix
Configure CORS to allow only the specific origins that need access.
// Dangerous: reflects any origin, with credentials
app.use(cors({ origin: true, credentials: true }));
// Safe
app.use(cors({
  origin: ['https://app.example.com', 'https://www.example.com'],
  credentials: true,
}));
Never use wildcard origins in combination with credentials. If your API is genuinely public and does not use cookies or authentication headers, a wildcard origin is acceptable. But the moment credentials are involved, the origin must be explicit.
Review your CORS configuration regularly. It is common for development settings (localhost:3000) to leak into production configurations.
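One way to keep development origins out of production is to build the allowlist from configuration rather than from literals in code. A sketch, assuming the Express cors middleware and a hypothetical ALLOWED_ORIGINS environment variable:

```javascript
// Build CORS options from an explicit allowlist. ALLOWED_ORIGINS is a
// hypothetical comma-separated environment variable, e.g.
// "https://app.example.com,https://www.example.com".
function makeCorsOptions(allowedOrigins) {
  return {
    origin(origin, callback) {
      // Requests without an Origin header (curl, same-origin) pass through.
      if (!origin || allowedOrigins.includes(origin)) {
        callback(null, true);
      } else {
        callback(new Error(`Origin not allowed: ${origin}`));
      }
    },
    credentials: true,
  };
}

const allowedOrigins = (process.env.ALLOWED_ORIGINS || '')
  .split(',')
  .map((s) => s.trim())
  .filter(Boolean);

// app.use(cors(makeCorsOptions(allowedOrigins)));
```

With this shape, development and production differ only in an environment variable, so localhost origins never need to appear in the source.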
Hardcoded Credentials
The Risk
Hardcoded credentials differ from committed secrets in that they are often intentional. A developer hardcodes a database password "temporarily" during development, or embeds an API key directly in the source code because environment variables feel like overhead.
The problem is identical: anyone with access to the code has access to the credentials. This includes contractors, open-source contributors, and anyone who compromises the repository.
The Fix
Every credential, token, password, and API key should be loaded from environment variables. No exceptions. Create a .env.example file that documents all required variables without their values, so new developers know what to configure.
// Dangerous
const apiKey = 'sk_live_abc123def456';
// Safe
const apiKey = process.env.STRIPE_SECRET_KEY;
if (!apiKey) throw new Error('STRIPE_SECRET_KEY is required');
The validation step is important. Failing fast with a clear error message is far better than silently using an undefined value or falling back to a hardcoded default.
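The same fail-fast check can be generalised to validate every required variable once at startup. A sketch, with illustrative variable names:

```javascript
// Fail fast at startup if any required environment variable is missing.
// The names listed here are illustrative, not prescriptive.
const REQUIRED_ENV = ['STRIPE_SECRET_KEY', 'DATABASE_URL'];

function assertEnv(names, env = process.env) {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
}

// Call once at startup, before any other initialisation:
// assertEnv(REQUIRED_ENV);
```

A single check at boot produces one clear error listing everything that is missing, rather than a scattered series of failures deep inside request handlers.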
Unbounded Database Queries
The Risk
A query without a LIMIT clause can return millions of rows if the table grows beyond what the developer anticipated. This can crash your application, exhaust memory, or create a denial-of-service condition. It is a performance issue that becomes a security issue at scale.
The Fix
Add explicit limits to all queries, especially those exposed through API endpoints. If pagination is appropriate, implement it. If not, set a reasonable maximum.
// Dangerous
const { data } = await supabase.from('logs').select('*');
// Safe
const { data } = await supabase.from('logs').select('*').limit(100);
API endpoints should enforce maximum page sizes regardless of what the client requests. A client asking for 10,000 records per page should receive your maximum, not an out-of-memory error.
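That clamping logic is small enough to centralise in one helper. A sketch, where the maximum and default are policy choices (100 here, matching the limit used above):

```javascript
// Clamp a client-requested page size to a server-side maximum.
// Non-numeric or non-positive input falls back to the default.
const MAX_PAGE_SIZE = 100;
const DEFAULT_PAGE_SIZE = 25;

function clampPageSize(requested) {
  const n = Number.parseInt(requested, 10);
  if (!Number.isFinite(n) || n <= 0) return DEFAULT_PAGE_SIZE;
  return Math.min(n, MAX_PAGE_SIZE);
}
```

Routing every endpoint's pagination through one helper means the policy is enforced in one place instead of being re-implemented, and occasionally forgotten, per handler.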
Frequently Asked Questions
What is the most common security vulnerability in web applications?
Cross-site scripting (XSS) and injection attacks consistently rank among the most common vulnerabilities. The OWASP Top 10 lists injection as a perennial risk. In practice, committed secrets are also extremely common but are often overlooked because they do not cause visible failures until they are exploited.
How do I check my codebase for security issues?
Start with static analysis tools that scan for known dangerous patterns: eval() usage, innerHTML with dynamic content, secret patterns in source files, and unparameterised SQL queries. Tools like gitleaks, semgrep, and ESLint security plugins can automate this as part of your CI/CD pipeline.
Should security scanning block pull requests?
Yes. Security findings should be treated as defects, not suggestions. Configure your quality gates to fail when critical security patterns are detected. It is far cheaper to fix a vulnerability before it reaches the main branch than to remediate it in production.
Building a Security-Conscious Codebase
Fixing individual vulnerabilities is necessary but insufficient. The goal is to create a codebase where these patterns cannot easily be introduced.
Automate detection. Static analysis tools can scan for secret patterns, dangerous API usage, and missing parameterisation. Run these checks in CI so vulnerabilities are caught before they reach the main branch.
Use lint rules. ESLint rules exist for eval(), innerHTML, and many other dangerous patterns. Enable them and set them to error, not warn.
Review CORS and auth configuration regularly. These are high-impact settings that rarely change, which means they rarely get reviewed. Schedule periodic checks.
Treat security findings as bugs, not suggestions. A committed secret or an unparameterised query is a defect. It should be fixed with the same urgency as a broken feature.
Security is not a feature you add at the end. It is a quality of the code you write every day. The patterns in this article are straightforward to detect and fix. The hard part is building the discipline to catch them consistently.