Merged
71 changes: 44 additions & 27 deletions backend/README.md
@@ -10,29 +10,52 @@ npm run backend

Server starts at `http://localhost:4000` by default.

## Database
## Security Layer

### Issue #26: Move from in-memory store to persistent DB
### Helmet-style security headers

- Uses a local JSON database file at `backend/data/brocode.json`.
- You can override the location with `BROCODE_DB_PATH=/custom/path.json npm run backend`.
- On first start, seed data is inserted for users, spots, catalog items, and a sample order.
- New orders are validated against DB data (known `spotId`, `userId`, `productId`) and item pricing is always derived from catalog prices in the database.
The server now sends strict security headers on every API response, including:

- `Content-Security-Policy`
- `Strict-Transport-Security`
- `X-Content-Type-Options`
- `X-Frame-Options`
- `Referrer-Policy`
- `Cross-Origin-*` hardening headers

Configure CSP via `SECURITY_HEADERS_CSP`.
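As a rough sketch, the header set looks like this (values mirror the defaults visible in the `backend/server.js` diff below; the CSP is overridable via env):

```javascript
// Sketch: the security header set applied to every API response.
// Values mirror the server defaults; override the CSP via SECURITY_HEADERS_CSP.
const SECURITY_HEADERS_CSP = process.env.SECURITY_HEADERS_CSP || "default-src 'self'";

const securityHeaders = {
  'Content-Security-Policy': SECURITY_HEADERS_CSP,
  'Strict-Transport-Security': 'max-age=31536000; includeSubDomains',
  'X-Content-Type-Options': 'nosniff',
  'X-Frame-Options': 'SAMEORIGIN',
  'Referrer-Policy': 'no-referrer',
  'Cross-Origin-Opener-Policy': 'same-origin',
  'Cross-Origin-Resource-Policy': 'same-origin',
};
```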

### CORS

CORS is applied to all endpoints with these defaults:

- Allowed origin from `CORS_ALLOW_ORIGIN` (defaults to `*`)
- Allowed headers: `Content-Type`, `Authorization`
- Allowed methods: `GET,POST,DELETE,OPTIONS`
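Since the backend uses Node's bare `http` server, these defaults amount to a small header map merged into every response (a sketch; the real code lives in `sendJson` in `backend/server.js`):

```javascript
// Sketch of the CORS defaults listed above.
const CORS_ALLOW_ORIGIN = process.env.CORS_ALLOW_ORIGIN || '*';

const corsHeaders = {
  'Access-Control-Allow-Origin': CORS_ALLOW_ORIGIN,
  'Access-Control-Allow-Headers': 'Content-Type, Authorization',
  'Access-Control-Allow-Methods': 'GET,POST,DELETE,OPTIONS',
};
```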

### Issue #28: Secure credential storage and verification
### Rate limiting

- Passwords are stored as salted `scrypt` hashes (not plaintext).
- Legacy plaintext user passwords are auto-migrated to hashed values on successful login.
Two limits are active:

### Issue #29: Protect login endpoint from brute-force attempts
1. **Global API limiter** per IP: at most `GLOBAL_RATE_LIMIT_MAX_REQUESTS` requests per `GLOBAL_RATE_LIMIT_WINDOW_MS` window
2. **Login brute-force limiter** per `IP + username`: at most `LOGIN_RATE_LIMIT_MAX_ATTEMPTS` failed attempts per `LOGIN_RATE_LIMIT_WINDOW_MS` window, followed by a temporary block of `LOGIN_RATE_LIMIT_BLOCK_MS`

- Login is now rate-limited per `IP + username` key.
- Defaults: 5 failed attempts within 15 minutes trigger a 15-minute temporary block (`429`).
- Configure via env vars:
- `LOGIN_RATE_LIMIT_MAX_ATTEMPTS`
- `LOGIN_RATE_LIMIT_WINDOW_MS`
- `LOGIN_RATE_LIMIT_BLOCK_MS`
Both limiters respond with HTTP `429` and a `Retry-After` header.
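The `Retry-After` value is derived from the time remaining in the current window; a simplified sketch of the calculation used in `backend/server.js`:

```javascript
// Simplified sketch: remaining window time -> Retry-After seconds.
const GLOBAL_RATE_LIMIT_WINDOW_MS = 15 * 60 * 1000;
const windowStart = Date.now() - 14 * 60 * 1000; // example: 14 minutes into the window

const retryAfterSeconds = Math.ceil(
  (GLOBAL_RATE_LIMIT_WINDOW_MS - (Date.now() - windowStart)) / 1000
);
// Clamp to at least 1 second so the header is always meaningful.
const retryAfterHeader = String(Math.max(retryAfterSeconds, 1));
```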

### Password hashing

User credentials are stored as salted hashes (using Node crypto `scrypt`) and never as plaintext.
Legacy plaintext records auto-migrate to hashed values at successful login.

## Database

- Uses a local JSON database file at `backend/data/brocode.json`.
- You can override the location with `BROCODE_DB_PATH=/custom/path.json npm run backend`.
- On first start, seed data is inserted for users, spots, catalog items, and a sample order.
- New orders are validated against DB data (known `spotId`, `userId`, `productId`) and item pricing is always derived from catalog prices in the database.
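The pricing rule in the last bullet can be illustrated like this (hypothetical names; the point is that the client payload never supplies a price):

```javascript
// Illustrative sketch: item prices are always looked up from the catalog.
const catalog = new Map([
  ['p1', { name: 'Cold Drink', price: 120 }],
]);

const priceOrderItems = (items) =>
  items.map(({ productId, qty }) => {
    const product = catalog.get(productId);
    if (!product) throw new Error(`Unknown productId: ${productId}`);
    // Price comes from the catalog entry, not from the request body.
    return { productId, qty, lineTotal: product.price * qty };
  });
```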

## Deployment (Render / Railway / AWS EC2)
### Issue #31: Redis-backed caching + session/performance primitives

- Added optional Redis integration (`REDIS_URL`) with automatic in-memory fallback when Redis is unavailable.
@@ -56,18 +79,12 @@ Server starts at `http://localhost:4000` by default.

### Issue #30: Secure backend data access with signed auth tokens + authorization

- Login now returns an HMAC-signed bearer token (replacing predictable demo tokens).
- Tokens include user id, role, and expiry, and are validated with constant-time signature checks.
- Data endpoints now require `Authorization: Bearer <token>` and enforce role access:
- `GET /api/orders` → users can only read their own orders; admins can read all.
- `POST /api/orders` → users can create only for themselves; admins can create for any user.
- `GET /api/orders/:id` → users can read only their own order; admins can read any order.
- `GET /api/bills/:spotId` and `DELETE /api/users/:userId` → admin only.
- Configure via env vars:
- `AUTH_TOKEN_SECRET`
- `AUTH_TOKEN_TTL_SECONDS`
- `CORS_ALLOW_ORIGIN`
See [`backend/deployment.md`](./deployment.md) for step-by-step deployment options and env setup for:

- Backend hosting: **Render**, **Railway**, **AWS EC2**
- PostgreSQL: **Supabase** or **Neon**
- Redis: **Upstash**
- File storage: **AWS S3** or **Cloudinary**
### Background jobs (BullMQ + Redis)

- The backend initializes BullMQ queues for:
108 changes: 108 additions & 0 deletions backend/deployment.md
@@ -0,0 +1,108 @@
# Deployment guide

This project can be deployed with the following stack:

- **Backend**: Render, Railway, or AWS EC2
- **Database (PostgreSQL)**: Supabase or Neon
- **Redis**: Upstash
- **File storage**: AWS S3 or Cloudinary

> Current repo runtime still uses a local JSON DB for persistence. `DATABASE_URL`, `REDIS_URL`, and storage variables are wired into env validation so you can safely provide production credentials while evolving integrations.

## 1) Required environment variables

Use these common variables in all platforms:

```bash
PORT=4000
AUTH_TOKEN_SECRET=replace-with-long-secret
AUTH_TOKEN_TTL_SECONDS=43200
CORS_ALLOW_ORIGIN=https://your-frontend-domain.com

# Security/rate-limit tuning
SECURITY_HEADERS_CSP=default-src 'self'
GLOBAL_RATE_LIMIT_MAX_REQUESTS=300
GLOBAL_RATE_LIMIT_WINDOW_MS=900000
LOGIN_RATE_LIMIT_MAX_ATTEMPTS=5
LOGIN_RATE_LIMIT_WINDOW_MS=900000
LOGIN_RATE_LIMIT_BLOCK_MS=900000

# PostgreSQL (Supabase/Neon)
DATABASE_URL=postgresql://...

# Redis (Upstash)
REDIS_URL=redis://...
UPSTASH_REDIS_REST_URL=https://...
UPSTASH_REDIS_REST_TOKEN=...

# Storage (choose one)
STORAGE_DRIVER=s3
AWS_REGION=ap-south-1
AWS_S3_BUCKET=...
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...

# OR
STORAGE_DRIVER=cloudinary
CLOUDINARY_CLOUD_NAME=...
CLOUDINARY_API_KEY=...
CLOUDINARY_API_SECRET=...
```

## 2) Render

1. Create a **Web Service** from this repo.
2. Build command: `npm install && npm run build`
3. Start command: `npm run backend`
4. Add the env vars above in Render dashboard.
5. Add PostgreSQL (Supabase/Neon external) and Upstash connection URLs.

## 3) Railway

1. Create a new Railway project linked to this repo.
2. Set start command to `npm run backend`.
3. Add all env vars in the Variables tab.
4. Set custom domain and update `CORS_ALLOW_ORIGIN`.

## 4) AWS EC2

1. Provision Ubuntu instance and install Node.js LTS.
2. Clone repo and run `npm install`.
3. Configure env vars in systemd service file.
4. Run service with `npm run backend` via systemd.
5. Put Nginx in front with HTTPS (Let's Encrypt).

Example service snippet:

```ini
[Service]
WorkingDirectory=/srv/Brocode-Party-Update-App
ExecStart=/usr/bin/npm run backend
Environment=PORT=4000
Environment=AUTH_TOKEN_SECRET=replace-me
Restart=always
```

## 5) PostgreSQL provider choice

- **Supabase**: Copy pooled connection string from Project Settings → Database.
- **Neon**: Copy connection string from Neon project dashboard.
- Set as `DATABASE_URL`.

## 6) Redis (Upstash)

- Create Redis database in Upstash.
- Use TCP URL as `REDIS_URL` (if your runtime supports it).
- For REST-based access, set both `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN`.

## 7) File storage

- **AWS S3**: set `STORAGE_DRIVER=s3` + AWS credentials and bucket vars.
- **Cloudinary**: set `STORAGE_DRIVER=cloudinary` + Cloudinary keys.

## 8) Post-deploy checks

- `GET /api/health` returns status 200.
- Login endpoint returns token and includes security headers.
- CORS allows only your front-end domain.
- Rate limiting returns 429 after repeated requests.
15 changes: 15 additions & 0 deletions backend/env.js
@@ -55,6 +55,21 @@ const envSchema = z.object({
LOGIN_RATE_LIMIT_MAX_ATTEMPTS: z.string().regex(/^\d+$/).optional(),
LOGIN_RATE_LIMIT_WINDOW_MS: z.string().regex(/^\d+$/).optional(),
LOGIN_RATE_LIMIT_BLOCK_MS: z.string().regex(/^\d+$/).optional(),
GLOBAL_RATE_LIMIT_MAX_REQUESTS: z.string().regex(/^\d+$/).optional(),
GLOBAL_RATE_LIMIT_WINDOW_MS: z.string().regex(/^\d+$/).optional(),
SECURITY_HEADERS_CSP: z.string().optional(),
DATABASE_URL: z.string().optional(),
REDIS_URL: z.string().optional(),


🟡 Duplicate REDIS_URL key in Zod schema — stricter .url() validation overrides the lenient one

The env schema defines REDIS_URL twice: once at line 62 as z.string().optional() (new, lenient) and again at line 73 as z.string().url().optional() (old, strict). In a JavaScript object literal, the second property with the same key wins.

Root Cause and Impact

In backend/env.js, the Zod schema object has:

```javascript
REDIS_URL: z.string().optional(),          // line 62 (new)
// ... other fields ...
REDIS_URL: z.string().url().optional(),    // line 73 (old)
```

The second definition overrides the first. The new code at line 62 intentionally relaxed the validation to z.string().optional() (allowing non-URL strings like Redis connection strings that may not be valid URLs), but the old stricter z.string().url().optional() at line 73 takes precedence. This means REDIS_URL values like redis://user:pass@host:6379 that aren't strictly valid URLs per Zod's .url() check could fail validation unexpectedly.

Impact: Environment validation may reject valid Redis connection strings that don't pass strict URL validation.
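The object-literal behavior behind this can be demonstrated in isolation:

```javascript
// In a JS object literal, a repeated key silently overrides the earlier one.
const schema = {
  REDIS_URL: 'lenient', // added by the PR
  // ... other fields ...
  REDIS_URL: 'strict',  // pre-existing; this definition wins
};
// schema.REDIS_URL === 'strict'
```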

Prompt for agents
In backend/env.js, remove the duplicate REDIS_URL key. The schema at line 73 (`REDIS_URL: z.string().url().optional()`) is the old pre-existing definition. The new one at line 62 (`REDIS_URL: z.string().optional()`) was added by this PR. Decide which validation is appropriate and keep only one. If Redis connection strings like `redis://...` are always valid URLs, keep the `.url()` version. Otherwise, keep the lenient `z.string().optional()` version. Remove the other.

UPSTASH_REDIS_REST_URL: z.string().optional(),
UPSTASH_REDIS_REST_TOKEN: z.string().optional(),
STORAGE_DRIVER: z.enum(['s3', 'cloudinary', 'local']).optional(),
AWS_REGION: z.string().optional(),
AWS_S3_BUCKET: z.string().optional(),
AWS_ACCESS_KEY_ID: z.string().optional(),
AWS_SECRET_ACCESS_KEY: z.string().optional(),
CLOUDINARY_CLOUD_NAME: z.string().optional(),
CLOUDINARY_API_KEY: z.string().optional(),
CLOUDINARY_API_SECRET: z.string().optional(),
REDIS_URL: z.string().url().optional(),
REDIS_KEY_PREFIX: z.string().optional(),
CACHE_DEFAULT_TTL_SECONDS: z.string().regex(/^\d+$/).optional(),
113 changes: 110 additions & 3 deletions backend/server.js
@@ -16,19 +16,67 @@ const AUTH_TOKEN_SECRET = process.env.AUTH_TOKEN_SECRET || 'brocode-dev-secret-c
const AUTH_TOKEN_TTL_SECONDS = Number(process.env.AUTH_TOKEN_TTL_SECONDS || 60 * 60 * 12);
const EVENT_STATE_DEFAULT_TTL_SECONDS = Number(process.env.EVENT_STATE_DEFAULT_TTL_SECONDS || 120);
const CORS_ALLOW_ORIGIN = process.env.CORS_ALLOW_ORIGIN || '*';
const GLOBAL_RATE_LIMIT_MAX_REQUESTS = Number(process.env.GLOBAL_RATE_LIMIT_MAX_REQUESTS || 300);
const GLOBAL_RATE_LIMIT_WINDOW_MS = Number(process.env.GLOBAL_RATE_LIMIT_WINDOW_MS || 15 * 60 * 1000);
const SECURITY_HEADERS_CSP = process.env.SECURITY_HEADERS_CSP || "default-src 'self'";
const loginAttempts = new Map();
const globalRequests = new Map();
const SWAGGER_HTML = buildSwaggerHtml();
const jobSystem = await createJobSystem();

const getLoginKey = (req, username) => {
const getLoginKey = (req, username) => `${getRequestIp(req)}:${username}`;

const getRateLimitState = (key) => {
const now = Date.now();
const existing = loginAttempts.get(key);

if (!existing) {
const state = { count: 0, windowStart: now, blockedUntil: 0 };
loginAttempts.set(key, state);
return state;
}

if (existing.blockedUntil > 0 && existing.blockedUntil <= now) {
existing.count = 0;
existing.windowStart = now;
existing.blockedUntil = 0;
}

if (now - existing.windowStart > LOGIN_RATE_LIMIT_WINDOW_MS) {
existing.count = 0;
existing.windowStart = now;
}

return existing;
};


const getGlobalRateLimitState = (key) => {
const now = Date.now();
const existing = globalRequests.get(key);

if (!existing || now - existing.windowStart > GLOBAL_RATE_LIMIT_WINDOW_MS) {
const state = { count: 0, windowStart: now };
globalRequests.set(key, state);
return state;
}

return existing;
};
Comment on lines +54 to +65


🔴 globalRequests Map grows unboundedly — no cleanup of stale IP entries causes memory leak

The globalRequests Map at line 23 stores rate-limit state per IP address but never removes entries for IPs that stop making requests. Over time, this Map will grow without bound.

Root Cause and Impact

In backend/server.js:54-65, getGlobalRateLimitState creates a new entry in globalRequests for every unique IP. When a window expires, the entry is replaced (line 59-61), but entries for IPs that never return are never deleted:

```javascript
const getGlobalRateLimitState = (key) => {
  const now = Date.now();
  const existing = globalRequests.get(key);
  if (!existing || now - existing.windowStart > GLOBAL_RATE_LIMIT_WINDOW_MS) {
    const state = { count: 0, windowStart: now };
    globalRequests.set(key, state);  // entry persists forever
    return state;
  }
  return existing;
};
```

Similarly, loginAttempts (line 22) has the same issue — clearRateLimitState exists but is never called.

In a production environment exposed to the internet, unique IPs from bots, scanners, and legitimate users will accumulate indefinitely, causing gradual memory growth that could eventually exhaust available memory.

Impact: Slow memory leak in production that could lead to OOM crashes over extended uptime periods, especially under diverse traffic patterns.

Prompt for agents
In backend/server.js, add periodic cleanup for both the `globalRequests` Map (line 23) and the `loginAttempts` Map (line 22). One approach is to add a setInterval that runs every GLOBAL_RATE_LIMIT_WINDOW_MS (e.g., every 15 minutes) and deletes entries whose window has expired:

```javascript
setInterval(() => {
  const now = Date.now();
  for (const [key, state] of globalRequests) {
    if (now - state.windowStart > GLOBAL_RATE_LIMIT_WINDOW_MS) {
      globalRequests.delete(key);
    }
  }
  for (const [key, state] of loginAttempts) {
    if (state.blockedUntil > 0 && state.blockedUntil <= now) {
      loginAttempts.delete(key);
    } else if (now - state.windowStart > LOGIN_RATE_LIMIT_WINDOW_MS) {
      loginAttempts.delete(key);
    }
  }
}, GLOBAL_RATE_LIMIT_WINDOW_MS);
```

Place this after the Map declarations around line 23.


const getRequestIp = (req) => {
const forwardedFor = req.headers['x-forwarded-for'];
const firstForwardedIp = Array.isArray(forwardedFor)
? forwardedFor[0]
: typeof forwardedFor === 'string'
? forwardedFor.split(',')[0]
: '';
const remoteIp = firstForwardedIp?.trim() || req.socket?.remoteAddress || 'unknown-ip';
return `${remoteIp}:${username}`;

return firstForwardedIp?.trim() || req.socket?.remoteAddress || 'unknown-ip';
};

const clearRateLimitState = (key) => {
loginAttempts.delete(key);
};

const parseBearerToken = (authHeader) => {
@@ -109,13 +157,42 @@ const getUserFromAuthHeader = async (authHeader) => {
return database.getUserById(verifiedPayload.sub);
};

const recordFailedLoginAttempt = (key) => {
const now = Date.now();
const state = getRateLimitState(key);
state.count += 1;

if (state.count >= LOGIN_RATE_LIMIT_MAX_ATTEMPTS) {
state.blockedUntil = now + LOGIN_RATE_LIMIT_BLOCK_MS;
}
};
Comment on lines +160 to +168


🔴 New in-memory login rate-limit state is checked but never populated — brute-force protection is ineffective

The PR adds recordFailedLoginAttempt (line 160) and clearRateLimitState (line 78) to manage the new in-memory loginAttempts Map, and the login handler checks this Map via getRateLimitState (line 290). However, the login handler still calls the old cache-based rateLimiter.recordFailure (line 315) and rateLimiter.clear (line 324) instead of the new functions.

Root Cause and Impact

The new in-memory rate-limit check at backend/server.js:290-302 reads from the loginAttempts Map:

```javascript
const rateLimitState = getRateLimitState(loginKey);
if (rateLimitState.blockedUntil > now) { ... }
```

But on failed login, the code records the failure in the old cache-based system instead:

```javascript
await rateLimiter.recordFailure(loginKey, { ... });  // line 315
```

And on success, it clears the old system:

```javascript
await rateLimiter.clear(loginKey);  // line 324
```

The new recordFailedLoginAttempt function (line 160) that would populate loginAttempts is never called. As a result, rateLimitState.blockedUntil will always be 0 (the initial value from getRateLimitState), and the in-memory rate-limit check will never trigger.

Meanwhile, the old cache-based rateLimiter.getBlockedSeconds check (line 303) still exists due to the bad merge, creating a confusing dual system where neither works correctly together.

Impact: The new in-memory brute-force rate limiting is completely non-functional. Attackers can make unlimited login attempts without being blocked by the in-memory limiter.

Prompt for agents
In backend/server.js, the login handler needs to be updated to use the new in-memory rate limiting functions instead of the old cache-based ones. Specifically:

1. At line 315, replace `await rateLimiter.recordFailure(loginKey, { ... })` with `recordFailedLoginAttempt(loginKey)`
2. At line 324, replace `await rateLimiter.clear(loginKey)` with `clearRateLimitState(loginKey)`
3. Remove the old duplicate rate-limit check at lines 303-310 that uses `rateLimiter.getBlockedSeconds`
4. Add a `return` statement after the sendJson call at line 302 (the new rate-limit 429 response)

This will make the in-memory rate limiting actually functional and remove the dead code from the bad merge.


const sendJson = (res, statusCode, body, extraHeaders = {}) => {
const sendJson = (res, statusCode, body) => {
Comment on lines +170 to 171


🔴 Duplicate sendJson function declaration — new signature with extraHeaders is shadowed by old declaration

At lines 170-171, there are two consecutive sendJson declarations. The new version accepts extraHeaders (line 170) but is immediately followed by the old version without that parameter (line 171). In non-strict mode, the second declaration would shadow the first, losing the extraHeaders parameter.

Root Cause and Impact

The merge produced:

```javascript
const sendJson = (res, statusCode, body, extraHeaders = {}) => {
const sendJson = (res, statusCode, body) => {
```

The function body uses ...extraHeaders at line 189, but if the second declaration takes effect, extraHeaders would be undefined, and ...undefined in an object literal is a no-op. This means the Retry-After headers passed as extraHeaders from the global rate limiter (line 248) and login rate limiter (line 301) would be silently dropped.

Note: This is also a syntax error that prevents the file from running, but the logical consequence is that even if one declaration were removed, the wrong one being kept would break the extraHeaders feature.

Impact: Rate-limit Retry-After headers would not be sent to clients if the old declaration is the one that survives.

Suggested change:

```diff
-const sendJson = (res, statusCode, body, extraHeaders = {}) => {
-const sendJson = (res, statusCode, body) => {
+const sendJson = (res, statusCode, body, extraHeaders = {}) => {
```

res.writeHead(statusCode, {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': CORS_ALLOW_ORIGIN,
'Access-Control-Allow-Headers': 'Content-Type, Authorization',
'Access-Control-Allow-Methods': 'GET,POST,DELETE,OPTIONS',
'Cross-Origin-Opener-Policy': 'same-origin',
'Cross-Origin-Resource-Policy': 'same-origin',
'Origin-Agent-Cluster': '?1',
'Referrer-Policy': 'no-referrer',
'Strict-Transport-Security': 'max-age=31536000; includeSubDomains',
'X-Content-Type-Options': 'nosniff',
'X-DNS-Prefetch-Control': 'off',
'X-Download-Options': 'noopen',
'X-Frame-Options': 'SAMEORIGIN',
'X-Permitted-Cross-Domain-Policies': 'none',
'X-XSS-Protection': '0',
'Content-Security-Policy': SECURITY_HEADERS_CSP,
...extraHeaders,
});
if (statusCode === 204) {
res.end();
return;
}

res.end(JSON.stringify(body));
};

@@ -156,6 +233,23 @@ const server = createServer(async (req, res) => {
const parsedUrl = new URL(req.url || '/', `http://localhost:${port}`);
const path = parsedUrl.pathname;

const globalRateLimitKey = getRequestIp(req);
const globalRateLimitState = getGlobalRateLimitState(globalRateLimitKey);
globalRateLimitState.count += 1;

if (globalRateLimitState.count > GLOBAL_RATE_LIMIT_MAX_REQUESTS) {
const retryAfterSeconds = Math.ceil(
(GLOBAL_RATE_LIMIT_WINDOW_MS - (Date.now() - globalRateLimitState.windowStart)) / 1000
);
sendJson(
res,
429,
{ error: 'Too many requests. Please try again later.', retryAfterSeconds },
{ 'Retry-After': String(Math.max(retryAfterSeconds, 1)) }
);
return;
}

if (method === 'OPTIONS') {
sendJson(res, 204, {});
return;
@@ -193,6 +287,19 @@
}

const loginKey = getLoginKey(req, username);
const rateLimitState = getRateLimitState(loginKey);
const now = Date.now();
if (rateLimitState.blockedUntil > now) {
const retryAfterSeconds = Math.ceil((rateLimitState.blockedUntil - now) / 1000);
sendJson(
res,
429,
{
error: 'Too many failed login attempts. Please try again later.',
retryAfterSeconds,
},
{ 'Retry-After': String(Math.max(retryAfterSeconds, 1)) }
);
Comment on lines +301 to +302


🔴 Missing return after sending 429 response in login rate-limit check causes request to fall through

When a login request is blocked by the new in-memory rate limiter, sendJson is called at line 294-302 to send a 429 response, but there is no return statement afterward. This means execution continues past the rate-limit guard into the credential-checking logic below, potentially sending a second response on the same res object (causing a "headers already sent" error) or allowing a blocked attacker to still attempt login.

Root Cause and Impact

At backend/server.js:292-302, the new rate-limit block checks rateLimitState.blockedUntil > now and calls sendJson(res, 429, ...) but never returns:

```javascript
if (rateLimitState.blockedUntil > now) {
  const retryAfterSeconds = Math.ceil((rateLimitState.blockedUntil - now) / 1000);
  sendJson(res, 429, { ... }, { 'Retry-After': ... });
  // ← missing return here!
}
```

After sending the 429, the handler falls through to database.getUserByCredentials(username, password) at line 312, which defeats the purpose of the brute-force rate limiter entirely. If the credentials happen to be correct, a successful login response would also be attempted on the already-written response, causing a Node.js runtime error.

Impact: The login brute-force protection is completely bypassed — blocked users can still authenticate.

Suggested change:

```diff
         { 'Retry-After': String(Math.max(retryAfterSeconds, 1)) }
       );
+      return;
```

const retryAfterSeconds = await rateLimiter.getBlockedSeconds(loginKey);
if (retryAfterSeconds > 0) {
sendJson(res, 429, {