Architecting Enterprise Express.js: Security, Performance, and Advanced Patterns
This is the definitive reference for transforming a working application into an enterprise-grade system. We move beyond functionality to engineering excellence: implementing centralized error handling, defense-in-depth security strategies, and high-performance clustering. The guide concludes with senior-level architectural patterns—including Domain-Driven Design (DDD), Dependency Injection, and real-time scaling with WebSockets—to ensure your Express.js applications remain maintainable and scalable for the long haul.
Error Handling
Synchronous Error Handling
Express automatically catches synchronous errors thrown in route handlers and passes them to the error-handling middleware—no try/catch is needed in simple synchronous routes.
app.get('/sync', (req, res) => {
  throw new Error('Sync error!'); // Express catches this automatically
});
Asynchronous Error Handling
Async errors must be explicitly passed to next() or wrapped; Express 5+ handles async/await rejections automatically, but Express 4 requires manual handling or wrappers.
// Express 4 - Manual approach
app.get('/async', async (req, res, next) => {
  try {
    await someAsyncOperation();
  } catch (err) {
    next(err); // Must pass to next()
  }
});

// Wrapper utility for Express 4
const asyncHandler = fn => (req, res, next) =>
  Promise.resolve(fn(req, res, next)).catch(next);

app.get('/clean', asyncHandler(async (req, res) => {
  await someAsyncOperation(); // Errors auto-forwarded
}));
Custom Error Classes
Custom error classes provide structured error information, enabling consistent error handling with status codes, error types, and additional context throughout your application.
class AppError extends Error {
  constructor(message, statusCode, code = 'INTERNAL_ERROR') {
    super(message);
    this.statusCode = statusCode;
    this.code = code;
    this.isOperational = true;
    Error.captureStackTrace(this, this.constructor);
  }
}

class NotFoundError extends AppError {
  constructor(resource = 'Resource') {
    super(`${resource} not found`, 404, 'NOT_FOUND');
  }
}

class ValidationError extends AppError {
  constructor(errors) {
    super('Validation failed', 400, 'VALIDATION_ERROR');
    this.errors = errors;
  }
}

// Usage
throw new NotFoundError('User');
Centralized Error Handling
A single error-handling middleware (with four parameters) placed at the end of the middleware chain catches all errors, providing consistent error responses and logging across the entire application.
┌─────────────────────────────────────────────────────────────┐
│                        REQUEST FLOW                         │
├─────────────────────────────────────────────────────────────┤
│  Route 1 ──┐                                                │
│  Route 2 ──┼──► Error? ──► next(err) ──► Error Handler      │
│  Route 3 ──┘                                  │             │
│                                               ▼             │
│                                    ┌──────────────────┐     │
│                                    │  Log Error       │     │
│                                    │  Format Response │     │
│                                    │  Send to Client  │     │
│                                    └──────────────────┘     │
└─────────────────────────────────────────────────────────────┘
// Centralized error handler (must have 4 params)
app.use((err, req, res, next) => {
  const statusCode = err.statusCode || 500;
  const message = err.isOperational ? err.message : 'Internal Server Error';

  // Log error
  logger.error({ err, req: { method: req.method, url: req.url } });

  res.status(statusCode).json({
    success: false,
    error: {
      code: err.code || 'INTERNAL_ERROR',
      message,
      ...(process.env.NODE_ENV === 'development' && { stack: err.stack })
    }
  });
});
Error Logging
Structured error logging with libraries like Winston or Pino captures error details, request context, and stack traces for debugging while avoiding sensitive data exposure in production.
const winston = require('winston');

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  transports: [
    new winston.transports.File({ filename: 'error.log', level: 'error' }),
    new winston.transports.File({ filename: 'combined.log' })
  ]
});

// Error handler with logging
app.use((err, req, res, next) => {
  logger.error({
    message: err.message,
    stack: err.stack,
    method: req.method,
    path: req.path,
    userId: req.user?.id,
    correlationId: req.headers['x-correlation-id']
  });
  // ... send response
});
Operational vs Programmer Errors
Operational errors are expected runtime problems (network failures, invalid input) that should be handled gracefully; programmer errors are bugs that often require process restart.
┌────────────────────────────────────────────────────────────────┐
│                      ERROR CLASSIFICATION                      │
├─────────────────────────────┬──────────────────────────────────┤
│  OPERATIONAL                │  PROGRAMMER                      │
│  (Expected)                 │  (Bugs)                          │
├─────────────────────────────┼──────────────────────────────────┤
│  • Invalid user input       │  • TypeError/ReferenceError      │
│  • Database connection lost │  • Undefined property access     │
│  • API timeout              │  • Syntax errors                 │
│  • File not found           │  • Logic errors                  │
│  • Rate limit exceeded      │  • Missing dependencies          │
├─────────────────────────────┼──────────────────────────────────┤
│  HANDLE: Respond gracefully │  HANDLE: Log + restart process   │
└─────────────────────────────┴──────────────────────────────────┘
app.use((err, req, res, next) => {
  if (err.isOperational) {
    // Operational: send error response
    return res.status(err.statusCode).json({ error: err.message });
  }

  // Programmer error: log and crash
  logger.fatal(err);
  process.exit(1); // Let process manager restart
});
Graceful Error Responses
Error responses should be user-friendly, consistent in structure, hide sensitive details in production, and include useful information like error codes for client-side handling.
const formatError = (err, req) => {
  const response = {
    success: false,
    error: {
      code: err.code || 'INTERNAL_ERROR',
      message: err.isOperational ? err.message : 'Something went wrong'
    },
    timestamp: new Date().toISOString(),
    path: req.originalUrl
  };

  // Add details only in development
  if (process.env.NODE_ENV === 'development') {
    response.error.stack = err.stack;
    response.error.details = err.details;
  }

  // Include validation errors
  if (err.errors) {
    response.error.validationErrors = err.errors;
  }

  return response;
};

// Response examples:
// Production:  { success: false, error: { code: "NOT_FOUND", message: "User not found" }}
// Development: { success: false, error: { code: "NOT_FOUND", message: "...", stack: "..." }}
404 Handling
A catch-all middleware placed after all routes handles requests that don't match any defined route, returning a proper 404 response with helpful information.
// Place AFTER all route definitions
app.use('*', (req, res, next) => {
  next(new NotFoundError(`Cannot ${req.method} ${req.originalUrl}`));
});

// Or simpler version
app.use((req, res) => {
  res.status(404).json({
    success: false,
    error: {
      code: 'NOT_FOUND',
      message: `Route ${req.originalUrl} not found`,
      availableEndpoints: '/api/docs' // Help users
    }
  });
});
Uncaught Exception Handling
Uncaught exceptions indicate programmer errors and should trigger logging, graceful shutdown of connections, and process termination for restart by a process manager.
process.on('uncaughtException', (err) => {
  logger.fatal({ err }, 'Uncaught Exception - shutting down');

  // Give time to log and close connections
  server.close(() => {
    process.exit(1);
  });

  // Force exit after timeout
  setTimeout(() => {
    process.abort();
  }, 5000).unref();
});

/*
┌──────────────────────────────────────────────────┐
│             UNCAUGHT EXCEPTION FLOW              │
├──────────────────────────────────────────────────┤
│  Exception ──► Log Fatal ──► Close Server        │
│                                   │              │
│                                   ▼              │
│                       Wait for connections       │
│                                   │              │
│                                   ▼              │
│                           process.exit(1)        │
│                                   │              │
│                                   ▼              │
│                        PM2/systemd restarts      │
└──────────────────────────────────────────────────┘
*/
Unhandled Promise Rejections
Unhandled promise rejections should be caught globally, logged, and, depending on severity, may trigger graceful shutdown; since Node.js 15, an unhandled rejection terminates the process by default.
process.on('unhandledRejection', (reason, promise) => {
  logger.error({ reason, promise }, 'Unhandled Promise Rejection');

  // Option 1: Treat as uncaught exception (recommended)
  throw reason;

  // Option 2: Log and continue (risky)
  // Only if you're confident it won't corrupt state
});

// Better: Prevent unhandled rejections
// Always use .catch() or try/catch with async/await
async function riskyOperation() {
  try {
    await mightFail();
  } catch (err) {
    // Handle or rethrow as operational error
    throw new AppError('Operation failed', 500);
  }
}
Security
Helmet.js
Helmet sets various HTTP security headers with sensible defaults, protecting against common web vulnerabilities like clickjacking, XSS, and content sniffing with a single middleware.
const helmet = require('helmet');

app.use(helmet()); // Applies all default protections

// Or configure individually
app.use(helmet({
  contentSecurityPolicy: {
    directives: {
      defaultSrc: ["'self'"],
      scriptSrc: ["'self'", "trusted-cdn.com"],
      styleSrc: ["'self'", "'unsafe-inline'"]
    }
  },
  crossOriginEmbedderPolicy: true,
  crossOriginOpenerPolicy: true,
  crossOriginResourcePolicy: { policy: "same-site" },
  hsts: { maxAge: 31536000, includeSubDomains: true }
}));

/* Headers set by Helmet:
   ├── Content-Security-Policy
   ├── X-Content-Type-Options: nosniff
   ├── X-Frame-Options: DENY
   ├── X-XSS-Protection: 0 (deprecated, CSP preferred)
   ├── Strict-Transport-Security
   └── ... and more
*/
HTTPS/SSL/TLS
Production Express apps should run behind HTTPS either directly with certificates or (preferred) behind a reverse proxy like Nginx that handles TLS termination.
const https = require('https');
const fs = require('fs');

// Direct HTTPS (development/small deployments)
const options = {
  key: fs.readFileSync('private-key.pem'),
  cert: fs.readFileSync('certificate.pem'),
  ca: fs.readFileSync('ca-certificate.pem') // Optional: CA chain
};

https.createServer(options, app).listen(443);

// Force HTTPS redirect
app.use((req, res, next) => {
  if (req.secure || req.headers['x-forwarded-proto'] === 'https') {
    return next();
  }
  res.redirect(301, `https://${req.headers.host}${req.url}`);
});

// Trust proxy for correct req.secure behind load balancer
app.set('trust proxy', 1);
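For the preferred reverse-proxy setup, a minimal Nginx TLS-termination block looks roughly like this; the certificate paths, server name, and upstream port are placeholders for your environment:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.pem;       # placeholder path
    ssl_certificate_key /etc/ssl/private/example.com.key;     # placeholder path

    location / {
        proxy_pass http://127.0.0.1:3000;            # plain HTTP to the Node app
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;  # lets the redirect middleware see HTTPS
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With this in place the Node process never touches certificates; `app.set('trust proxy', 1)` makes Express read X-Forwarded-Proto so `req.secure` behaves correctly behind the proxy.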
XSS Protection
Cross-Site Scripting prevention requires output encoding, Content Security Policy headers, and sanitizing user input—never trust user-provided data in HTML output.
const xss = require('xss');
const helmet = require('helmet');

// CSP header (primary defense)
app.use(helmet.contentSecurityPolicy({
  directives: {
    defaultSrc: ["'self'"],
    scriptSrc: ["'self'"], // No inline scripts
    objectSrc: ["'none'"]
  }
}));

// Sanitize user input
const sanitizeInput = (input) => xss(input, {
  whiteList: {},       // No tags allowed
  stripIgnoreTag: true
});

app.post('/comment', (req, res) => {
  const safeComment = sanitizeInput(req.body.comment);
  // Store safeComment in database
});

// Template engine auto-escaping (EJS example)
// <%= userInput %>  ← Auto-escaped
// <%- userInput %>  ← Raw, DANGEROUS
CSRF Protection
Cross-Site Request Forgery protection validates that state-changing requests originate from your site using synchronized tokens or same-site cookie attributes.
const csrf = require('csurf');
const cookieParser = require('cookie-parser');

app.use(cookieParser());
app.use(csrf({ cookie: { sameSite: 'strict', secure: true, httpOnly: true }}));

// Provide token to client
app.get('/form', (req, res) => {
  res.render('form', { csrfToken: req.csrfToken() });
});

// HTML form must include token
// <input type="hidden" name="_csrf" value="<%= csrfToken %>">

// For SPAs: Send token in response header
app.get('/api/csrf-token', (req, res) => {
  res.json({ csrfToken: req.csrfToken() });
});
// Client sends back in X-CSRF-Token header

// Modern alternative: SameSite cookies
// Set-Cookie: session=abc; SameSite=Strict; Secure; HttpOnly
SQL Injection Prevention
Use parameterized queries or prepared statements—never concatenate user input into SQL strings; ORMs like Sequelize or Knex provide built-in protection.
// ❌ VULNERABLE - String concatenation
const query = `SELECT * FROM users WHERE id = ${req.params.id}`;

// ✅ SAFE - Parameterized query (mysql2)
const [rows] = await db.execute(
  'SELECT * FROM users WHERE id = ? AND status = ?',
  [req.params.id, 'active']
);

// ✅ SAFE - Knex query builder
const users = await knex('users')
  .where({ id: req.params.id })
  .select('*');

// ✅ SAFE - Sequelize ORM
const user = await User.findOne({ where: { id: req.params.id } });

/* Attack prevented:
   Input: "1 OR 1=1; DROP TABLE users;--"
   Parameterized: Treated as literal string, not SQL
*/
NoSQL Injection Prevention
NoSQL databases like MongoDB are vulnerable to query injection through object manipulation; validate input types and sanitize query operators.
const mongoSanitize = require('express-mongo-sanitize');

// Remove $ and . from request body/query/params
app.use(mongoSanitize());

// ❌ VULNERABLE
app.post('/login', async (req, res) => {
  // Attacker sends: { "username": {"$gt": ""}, "password": {"$gt": ""} }
  const user = await User.findOne({
    username: req.body.username, // Becomes { $gt: "" }
    password: req.body.password
  });
});

// ✅ SAFE - Type validation
app.post('/login', async (req, res) => {
  if (typeof req.body.username !== 'string' ||
      typeof req.body.password !== 'string') {
    return res.status(400).json({ error: 'Invalid input' });
  }
  const user = await User.findOne({
    username: req.body.username,
    password: hashedPassword
  });
});
Input Validation
Validate and sanitize all user input at the entry point using libraries like Joi or express-validator; reject invalid data before it reaches business logic.
const { body, param, validationResult } = require('express-validator');

const createUserRules = [
  body('email')
    .isEmail().normalizeEmail()
    .withMessage('Valid email required'),
  body('password')
    .isLength({ min: 8 })
    .matches(/^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])/)
    .withMessage('Password must contain uppercase, lowercase, and number'),
  body('age')
    .optional()
    .isInt({ min: 0, max: 150 })
    .toInt(),
  body('username')
    .trim().escape()
    .isAlphanumeric()
    .isLength({ min: 3, max: 30 })
];

const validate = (req, res, next) => {
  const errors = validationResult(req);
  if (!errors.isEmpty()) {
    return res.status(400).json({ errors: errors.array() });
  }
  next();
};

app.post('/users', createUserRules, validate, createUser);
HTTP Parameter Pollution
HPP attacks send duplicate query parameters to bypass validation or confuse logic; use hpp middleware to select only the last value of duplicated parameters.
const hpp = require('hpp');

// Attack: GET /search?category=books&category=1;DROP TABLE products
// Without HPP: req.query.category = ['books', '1;DROP TABLE products']

app.use(hpp());
// With HPP: req.query.category = '1;DROP TABLE products' (last value)

// Whitelist parameters that should allow arrays
app.use(hpp({
  whitelist: ['tags', 'ids', 'filter'] // These can be arrays
}));

// Example:
// GET /products?tags=sale&tags=new&sort=price&sort=rating
// req.query = { tags: ['sale', 'new'], sort: 'rating' }
Secure Headers
Security headers instruct browsers to enable protective features; use Helmet for comprehensive header management or set critical headers manually.
// Key security headers
app.use((req, res, next) => {
  // Prevent clickjacking
  res.setHeader('X-Frame-Options', 'DENY');

  // Prevent MIME sniffing
  res.setHeader('X-Content-Type-Options', 'nosniff');

  // Force HTTPS
  res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');

  // Control referrer info
  res.setHeader('Referrer-Policy', 'strict-origin-when-cross-origin');

  // Permissions Policy (formerly Feature-Policy)
  res.setHeader('Permissions-Policy', 'camera=(), microphone=(), geolocation=()');

  next();
});

/*
┌───────────────────────────────────────────────────┐
│             SECURITY HEADERS SUMMARY              │
├────────────────────────┬──────────────────────────┤
│ Header                 │ Purpose                  │
├────────────────────────┼──────────────────────────┤
│ X-Frame-Options        │ Prevent clickjacking     │
│ X-Content-Type-Options │ Prevent MIME sniffing    │
│ HSTS                   │ Force HTTPS              │
│ CSP                    │ Prevent XSS/injection    │
│ Referrer-Policy        │ Control referrer data    │
└────────────────────────┴──────────────────────────┘
*/
Content Security Policy
CSP is a powerful header that controls which resources can be loaded, providing defense against XSS, clickjacking, and data injection attacks.
app.use(helmet.contentSecurityPolicy({
  directives: {
    defaultSrc: ["'self'"],                          // Default: only same origin
    scriptSrc: ["'self'", "cdn.example.com"],        // Script sources
    styleSrc: ["'self'", "'unsafe-inline'"],         // Styles (inline needed for many libs)
    imgSrc: ["'self'", "data:", "*.cloudinary.com"],
    fontSrc: ["'self'", "fonts.gstatic.com"],
    connectSrc: ["'self'", "api.example.com"],       // AJAX/WebSocket
    frameSrc: ["'none'"],                            // No iframes
    objectSrc: ["'none'"],                           // No plugins
    upgradeInsecureRequests: [],                     // Upgrade HTTP to HTTPS
    reportUri: '/csp-violation-report'               // Get violation reports
  }
}));

// CSP violation reporting endpoint
app.post('/csp-violation-report',
  express.json({ type: 'application/csp-report' }),
  (req, res) => {
    logger.warn('CSP Violation:', req.body);
    res.status(204).end();
  }
);
Rate Limiting (express-rate-limit)
Rate limiting prevents abuse by restricting request frequency per client, protecting against DoS attacks and brute-force attempts on authentication endpoints.
const rateLimit = require('express-rate-limit');
const RedisStore = require('rate-limit-redis');

// General API rate limit
const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100,                 // 100 requests per window
  standardHeaders: true,    // Return rate limit info in headers
  legacyHeaders: false,
  message: { error: 'Too many requests, please try again later' }
});

// Strict limit for auth endpoints
const authLimiter = rateLimit({
  windowMs: 60 * 60 * 1000, // 1 hour
  max: 5,                   // 5 attempts
  skipSuccessfulRequests: true
});

// Redis store for distributed systems
const distributedLimiter = rateLimit({
  store: new RedisStore({ client: redisClient }),
  windowMs: 60 * 1000,
  max: 60
});

app.use('/api/', apiLimiter);
app.use('/api/login', authLimiter);
Brute Force Protection
Protect authentication endpoints with progressive delays, account lockouts, and IP-based tracking to prevent automated credential stuffing attacks.
const ExpressBrute = require('express-brute');
const RedisStore = require('express-brute-redis');

const store = new RedisStore({ client: redisClient });

const bruteforce = new ExpressBrute(store, {
  freeRetries: 5,          // Allow 5 attempts
  minWait: 5 * 60 * 1000,  // 5 min initial delay
  maxWait: 60 * 60 * 1000, // 1 hour max delay
  lifetime: 24 * 60 * 60,  // Reset after 24 hours
  failCallback: (req, res, next, nextValidRequestDate) => {
    res.status(429).json({
      error: 'Too many failed attempts',
      retryAfter: nextValidRequestDate
    });
  }
});

app.post('/login',
  bruteforce.prevent, // Applies brute force protection
  passport.authenticate('local'),
  (req, res) => {
    req.brute.reset(); // Reset on successful login
    res.json({ success: true });
  }
);
Dependency Vulnerability Scanning
Regularly scan npm dependencies for known vulnerabilities using automated tools in CI/CD pipelines; keep dependencies updated and audit frequently.
# Built-in npm audit
npm audit
npm audit fix
npm audit fix --force   # Major version updates (review first!)

# Snyk (more comprehensive)
npx snyk test
npx snyk monitor        # Continuous monitoring

# In package.json scripts
{
  "scripts": {
    "security:audit": "npm audit --audit-level=moderate",
    "security:outdated": "npm outdated",
    "security:snyk": "snyk test"
  }
}

# CI/CD integration (GitHub Actions example)
# - name: Security audit
#   run: npm audit --audit-level=high
#   continue-on-error: false
Security Best Practices
Security is layered—combine multiple defenses, minimize exposed data, apply the principle of least privilege, keep dependencies updated, and treat security as an ongoing process.
// Security checklist implementation
const securitySetup = (app) => {
  app.use(helmet());               // Security headers
  app.use(hpp());                  // Prevent param pollution
  app.use(mongoSanitize());        // NoSQL injection
  app.use(require('xss-clean')()); // XSS sanitization (the plain xss package is not middleware)
  app.use(rateLimit({ windowMs: 900000, max: 100 }));
  app.disable('x-powered-by');     // Hide Express
  app.set('trust proxy', 1);       // If behind proxy

  // Secure session cookies
  app.use(session({
    secret: process.env.SESSION_SECRET,
    cookie: { secure: true, httpOnly: true, sameSite: 'strict' }
  }));
};

/*
┌─────────────────────────────────────────────────────────┐
│                     SECURITY LAYERS                     │
├─────────────────────────────────────────────────────────┤
│ [Infrastructure]  WAF, DDoS protection, HTTPS           │
│        ↓                                                │
│ [Network]         Firewall, VPN, Network segmentation   │
│        ↓                                                │
│ [Application]     Validation, Auth, Rate limiting       │
│        ↓                                                │
│ [Data]            Encryption at rest, minimal exposure  │
│        ↓                                                │
│ [Monitoring]      Logging, alerting, intrusion detection│
└─────────────────────────────────────────────────────────┘
*/
Performance Optimization
Compression (gzip, brotli)
Compressing responses reduces bandwidth usage by 60-80%; use compression middleware for dynamic content and pre-compress static assets.
const compression = require('compression');

// Basic usage
app.use(compression());

// Advanced configuration
app.use(compression({
  level: 6,        // Compression level (0-9)
  threshold: 1024, // Only compress > 1KB
  filter: (req, res) => {
    if (req.headers['x-no-compression']) return false;
    return compression.filter(req, res);
  }
}));

// Brotli with shrink-ray-current
const shrinkRay = require('shrink-ray-current');
app.use(shrinkRay()); // Supports brotli, gzip, deflate

// Pre-compressed static files (best for production)
// Serve .br and .gz files directly
app.use(express.static('public', {
  setHeaders: (res, path) => {
    if (path.endsWith('.br')) res.set('Content-Encoding', 'br');
    if (path.endsWith('.gz')) res.set('Content-Encoding', 'gzip');
  }
}));
Response Caching
HTTP caching headers tell browsers and CDNs when to cache responses, reducing server load and improving response times for repeat visitors.
// Cache-Control headers
app.get('/static-data', (req, res) => {
  res.set('Cache-Control', 'public, max-age=86400'); // Cache 1 day
  res.json(data);
});

app.get('/user-data', (req, res) => {
  res.set('Cache-Control', 'private, no-cache'); // Revalidate each request
  res.json(userData);
});

// Middleware for static assets
app.use('/assets', express.static('public', {
  maxAge: '1y',    // Cache for 1 year
  immutable: true  // Won't change
}));

// API response caching middleware
const cacheMiddleware = (duration) => (req, res, next) => {
  res.set('Cache-Control', `public, max-age=${duration}`);
  next();
};

app.get('/api/products', cacheMiddleware(300), getProducts); // 5 min cache
Redis Caching Strategies
Redis provides fast in-memory caching for database results, sessions, and computed data; implement cache-aside, write-through, or TTL-based strategies based on use case.
const Redis = require('ioredis');
const redis = new Redis();

// Cache-aside pattern
const getCached = async (key, fetchFn, ttl = 3600) => {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const data = await fetchFn();
  await redis.setex(key, ttl, JSON.stringify(data));
  return data;
};

// Usage
app.get('/products/:id', async (req, res) => {
  const product = await getCached(
    `product:${req.params.id}`,
    () => db.products.findById(req.params.id),
    3600
  );
  res.json(product);
});

// Cache invalidation
app.put('/products/:id', async (req, res) => {
  await db.products.update(req.params.id, req.body);
  await redis.del(`product:${req.params.id}`); // Invalidate cache
  await redis.del('products:list');            // Invalidate list
  res.json({ success: true });
});

/*
┌────────────────────────────────────────────────────────┐
│                   CACHING STRATEGIES                   │
├─────────────────┬──────────────────────────────────────┤
│ Cache-Aside     │ Check cache → Miss → DB → Cache      │
│ Write-Through   │ Write DB → Write Cache               │
│ Write-Behind    │ Write Cache → Async write DB         │
│ TTL-Based       │ Auto-expire after time               │
└─────────────────┴──────────────────────────────────────┘
*/
ETag Handling
ETags provide content-based cache validation, allowing browsers to check if content changed without downloading the full response when cached content might be stale.
const etag = require('etag');

// Express auto-generates weak ETags
app.set('etag', true);     // Default
app.set('etag', 'strong'); // Byte-for-byte matching

// Manual ETag for dynamic content
app.get('/data', async (req, res) => {
  const data = await fetchData();
  const dataEtag = etag(JSON.stringify(data));

  // Check if client has current version
  if (req.headers['if-none-match'] === dataEtag) {
    return res.status(304).end(); // Not Modified
  }

  res.set('ETag', dataEtag);
  res.json(data);
});

/*
┌──────────────────────────────────────────────────┐
│                    ETAG FLOW                     │
├──────────────────────────────────────────────────┤
│ Client Request → Server generates ETag           │
│        ↓                                         │
│ Response with ETag: "abc123"                     │
│        ↓                                         │
│ Client caches response + ETag                    │
│        ↓                                         │
│ Next request: If-None-Match: "abc123"            │
│        ↓                                         │
│ ETag matches? → 304 Not Modified (no body)       │
│ ETag differs? → 200 with new content + ETag      │
└──────────────────────────────────────────────────┘
*/
Clustering
Node.js clustering spawns worker processes across CPU cores, multiplying throughput; each worker handles requests independently with the master managing lifecycle.
const cluster = require('cluster');
const os = require('os');

if (cluster.isPrimary) {
  const numCPUs = os.cpus().length;
  console.log(`Primary ${process.pid} starting ${numCPUs} workers`);

  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} died`);
    cluster.fork(); // Replace dead worker
  });
} else {
  const app = require('./app');
  app.listen(3000, () => {
    console.log(`Worker ${process.pid} started`);
  });
}

/*
┌─────────────────────────────────────────────────────┐
│                CLUSTER ARCHITECTURE                 │
├─────────────────────────────────────────────────────┤
│                      PRIMARY                        │
│                         │                           │
│     ┌────────┬──────────┼──────────┬────────┐       │
│     ▼        ▼          ▼          ▼        ▼       │
│  Worker   Worker     Worker     Worker   Worker     │
│  (CPU 1)  (CPU 2)    (CPU 3)    (CPU 4)  (CPU 5)    │
│     │        │          │          │        │       │
│     └────────┴──────────┼──────────┴────────┘       │
│                         ▼                           │
│                   Load Balancer                     │
│                   (Round Robin)                     │
└─────────────────────────────────────────────────────┘
*/
Load Balancing
Load balancers distribute traffic across multiple Node.js instances; use Nginx, HAProxy, or cloud load balancers for production with health checks and sticky sessions if needed.
# Nginx load balancer configuration
upstream node_cluster {
    least_conn;                      # Least connections algorithm
    server 127.0.0.1:3001 weight=5;
    server 127.0.0.1:3002 weight=5;
    server 127.0.0.1:3003 weight=5;
    server 127.0.0.1:3004 backup;    # Failover only
    keepalive 64;                    # Connection pooling
}

server {
    listen 80;
    location / {
        proxy_pass http://node_cluster;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;

        # Health check
        proxy_next_upstream error timeout http_500;
    }
}
Memory Leak Detection
Memory leaks cause gradual performance degradation and crashes; monitor heap usage, use profiling tools, and analyze heap snapshots to identify retained objects.
// Monitor memory usage
setInterval(() => {
  const usage = process.memoryUsage();
  console.log({
    heapUsed: Math.round(usage.heapUsed / 1024 / 1024) + 'MB',
    heapTotal: Math.round(usage.heapTotal / 1024 / 1024) + 'MB',
    external: Math.round(usage.external / 1024 / 1024) + 'MB'
  });
}, 30000);

// Heap snapshot on demand
app.get('/debug/heapdump', (req, res) => {
  const v8 = require('v8');
  const snapshotPath = `./heapdump-${Date.now()}.heapsnapshot`;
  // writeHeapSnapshot is synchronous and returns the filename it wrote
  const writtenPath = v8.writeHeapSnapshot(snapshotPath);
  res.json({ snapshot: writtenPath });
});

// Common leak patterns to avoid:
// ❌ Global arrays that grow indefinitely
// ❌ Event listeners not removed
// ❌ Closures holding large objects
// ❌ Unbounded caches

// Tools: clinic.js, node --inspect, Chrome DevTools
// npm install -g clinic
// clinic doctor -- node app.js
Profiling Express Apps
Profiling identifies CPU and I/O bottlenecks using Node.js inspector, APM tools, or specialized libraries; focus optimization efforts on measured hot paths.
// Built-in profiling
// node --prof app.js
// node --prof-process isolate-*.log > profile.txt

// Response time middleware
app.use((req, res, next) => {
  const start = process.hrtime.bigint();
  res.on('finish', () => {
    const duration = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`${req.method} ${req.path} - ${duration.toFixed(2)}ms`);
  });
  next();
});

// Route-level profiling
const { performance, PerformanceObserver } = require('perf_hooks');

const obs = new PerformanceObserver((list) => {
  list.getEntries().forEach(entry => {
    console.log(`${entry.name}: ${entry.duration}ms`);
  });
});
obs.observe({ entryTypes: ['measure'] });

app.get('/api/data', async (req, res) => {
  performance.mark('start');
  const data = await heavyOperation();
  performance.mark('end');
  performance.measure('heavy-operation', 'start', 'end');
  res.json(data);
});
Database Query Optimization
Optimize database interactions with indexing, query analysis, connection pooling, pagination, and selecting only needed fields to reduce I/O and memory usage.
// Connection pooling (pg example)
const { Pool } = require('pg');
const pool = new Pool({
  max: 20,                       // Maximum connections
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000
});

// Select only needed fields
// ❌ SELECT * FROM users
// ✅ SELECT id, name, email FROM users

// Pagination
app.get('/users', async (req, res) => {
  const page = parseInt(req.query.page) || 1;
  const limit = Math.min(parseInt(req.query.limit) || 20, 100);
  const offset = (page - 1) * limit;

  const [users, total] = await Promise.all([
    pool.query('SELECT id, name FROM users LIMIT $1 OFFSET $2', [limit, offset]),
    pool.query('SELECT COUNT(*) FROM users')
  ]);

  res.json({
    users: users.rows,
    page,
    totalPages: Math.ceil(total.rows[0].count / limit)
  });
});

// N+1 prevention with JOIN/include
// ❌ users.forEach(u => await db.getOrders(u.id))
// ✅ JOIN users with orders in single query
Connection Keep-Alive
HTTP keep-alive reuses TCP connections for multiple requests, reducing latency from connection setup; configure appropriately for both server and outbound requests.
const http = require('http');
const https = require('https');

// Incoming connections keep-alive
const server = http.createServer(app);
server.keepAliveTimeout = 65000; // Slightly higher than ALB (60s)
server.headersTimeout = 66000;   // Slightly higher than keepAliveTimeout

// Outbound requests keep-alive (HTTP agent)
const keepAliveAgent = new http.Agent({
  keepAlive: true,
  keepAliveMsecs: 30000,
  maxSockets: 100,
  maxFreeSockets: 10
});

// Axios with keep-alive
const axios = require('axios');
const apiClient = axios.create({
  httpAgent: new http.Agent({ keepAlive: true }),
  httpsAgent: new https.Agent({ keepAlive: true })
});

// Database connections - maintain pool
// Redis: Built-in connection reuse
// PostgreSQL/MySQL: Connection pool
Lazy Loading
Lazy loading defers loading modules, data, or resources until needed, reducing initial startup time and memory footprint, especially for large applications.
// Lazy load expensive modules
let pdfGenerator;
const getPdfGenerator = () => {
  if (!pdfGenerator) {
    pdfGenerator = require('heavy-pdf-lib'); // Load only when needed
  }
  return pdfGenerator;
};

app.get('/reports/:id/pdf', async (req, res) => {
  const generator = getPdfGenerator(); // First call loads module
  const pdf = await generator.create(req.params.id);
  res.type('pdf').send(pdf);
});

// Lazy route loading (code splitting)
app.use('/admin', (req, res, next) => {
  // Load admin routes only when accessed
  require('./routes/admin')(req, res, next);
});

// Lazy data loading
app.get('/dashboard', async (req, res) => {
  const dashboard = {
    stats: await getBasicStats(), // Load immediately
    get detailedMetrics() {       // Computed lazily
      return getDetailedMetrics();
    }
  };
  res.json(dashboard);
});
Response Streaming
Streaming sends data to clients in chunks rather than buffering entire responses, reducing memory usage and time-to-first-byte for large datasets or files.
const { Transform } = require('stream'); const fs = require('fs'); // Stream large JSON arrays app.get('/api/logs', (req, res) => { res.setHeader('Content-Type', 'application/json'); res.write('[\n'); let first = true; const cursor = db.logs.find().cursor(); cursor.on('data', (doc) => { res.write((first ? '' : ',\n') + JSON.stringify(doc)); first = false; }); cursor.on('end', () => { res.write('\n]'); res.end(); }); }); // Stream file downloads app.get('/download/:file', (req, res) => { const filePath = getFilePath(req.params.file); const stat = fs.statSync(filePath); res.setHeader('Content-Length', stat.size); res.setHeader('Content-Disposition', 'attachment'); fs.createReadStream(filePath).pipe(res); }); // Transform stream app.get('/api/export', (req, res) => { const transform = new Transform({ objectMode: true, transform(chunk, encoding, callback) { callback(null, processData(chunk)); } }); databaseCursor.pipe(transform).pipe(res); });
Testing
Unit Testing with Jest/Mocha
Unit testing isolates individual functions/modules to verify they work correctly independently; Jest is preferred for its zero-config setup and built-in mocking, while Mocha offers flexibility with assertion library choices.
// Jest example - userService.test.js const { calculateDiscount } = require('./userService'); describe('calculateDiscount', () => { test('applies 10% for orders over $100', () => { expect(calculateDiscount(150)).toBe(15); }); test('returns 0 for orders under $100', () => { expect(calculateDiscount(50)).toBe(0); }); });
Integration Testing
Integration testing verifies that multiple components (routes, middleware, database) work together correctly, testing the actual flow of requests through your Express application without mocking internal layers.
// integration/user.test.js const request = require('supertest'); const app = require('../app'); const db = require('../db'); describe('User API Integration', () => { beforeAll(async () => await db.connect()); afterAll(async () => await db.disconnect()); test('creates user and retrieves it', async () => { const createRes = await request(app) .post('/users') .send({ name: 'John', email: 'john@test.com' }); const getRes = await request(app) .get(`/users/${createRes.body.id}`); expect(getRes.body.name).toBe('John'); }); });
Supertest for HTTP Testing
Supertest provides a high-level abstraction for testing HTTP endpoints, allowing you to make requests to your Express app without starting a real server, supporting fluent assertions on status codes, headers, and body content.
const request = require('supertest'); const app = require('./app'); describe('API Endpoints', () => { test('GET /api/health returns 200', async () => { const response = await request(app) .get('/api/health') .set('Authorization', 'Bearer token123') .expect('Content-Type', /json/) .expect(200); expect(response.body).toEqual({ status: 'healthy' }); }); test('POST /api/users validates input', async () => { await request(app) .post('/api/users') .send({ name: '' }) // Invalid .expect(400); }); });
Mocking and Stubbing
Mocking replaces real dependencies (databases, APIs, services) with controlled fake implementations, allowing isolated testing without external systems; stubs provide predetermined responses while spies track function calls.
// Using Jest mocks const userService = require('./userService'); const emailService = require('./emailService'); jest.mock('./emailService'); // Auto-mock entire module describe('UserService', () => { test('sends welcome email on registration', async () => { // Arrange emailService.send.mockResolvedValue({ sent: true }); // Act await userService.register({ email: 'test@test.com' }); // Assert expect(emailService.send).toHaveBeenCalledWith( expect.objectContaining({ to: 'test@test.com', template: 'welcome' }) ); }); }); // Manual stub example const dbStub = { findUser: jest.fn().mockResolvedValue({ id: 1, name: 'John' }), saveUser: jest.fn().mockResolvedValue({ id: 2 }) };
Test Coverage
Test coverage measures what percentage of your code is executed during tests; aim for meaningful coverage (80%+) focusing on critical business logic rather than chasing 100% which often leads to brittle tests.
# package.json { "scripts": { "test": "jest", "test:coverage": "jest --coverage --coverageThreshold='{\"global\":{\"branches\":80,\"functions\":80,\"lines\":80}}'" } } # Output: # ----------------------|---------|----------|---------|---------| # File | % Stmts | % Branch | % Funcs | % Lines | # ----------------------|---------|----------|---------|---------| # All files | 85.32 | 78.45 | 89.12 | 84.56 | # controllers/ | 90.12 | 82.34 | 95.00 | 89.45 | # userController.js | 92.00 | 85.00 | 100.00 | 91.00 | # services/ | 82.45 | 75.23 | 85.00 | 81.23 | # ----------------------|---------|----------|---------|---------|
┌─────────────────────────────────────────────────┐
│ COVERAGE PYRAMID │
├─────────────────────────────────────────────────┤
│ ▲ │
│ /░\ E2E (10%) │
│ /░░░\ - Critical paths │
│ /─────\ │
│ /░░░░░░░\ Integration (20%) │
│ /░░░░░░░░░\ - API contracts │
│ /───────────\ │
│ /░░░░░░░░░░░░░\ Unit (70%) │
│ /░░░░░░░░░░░░░░░\- Business logic │
│ ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔ │
└─────────────────────────────────────────────────┘
API Testing
API testing validates endpoints against contracts, checking request/response schemas, status codes, error handling, and edge cases; tools like Postman/Newman automate these tests in CI/CD pipelines.
// api.test.js - Contract testing describe('User API Contract', () => { test('GET /users/:id matches schema', async () => { const response = await request(app).get('/users/1'); // Validate response schema expect(response.body).toMatchObject({ id: expect.any(Number), email: expect.stringMatching(/@/), createdAt: expect.stringMatching(/^\d{4}-\d{2}-\d{2}/), role: expect.stringMatching(/^(admin|user|guest)$/) }); }); test('handles all error scenarios', async () => { await request(app).get('/users/999').expect(404); await request(app).get('/users/invalid').expect(400); await request(app).delete('/users/1').expect(401); // No auth }); });
# Newman CLI for Postman collections # newman run collection.json -e production.env.json --reporters cli,junit
Load Testing
Load testing simulates concurrent users to identify performance bottlenecks, memory leaks, and breaking points; tools like Artillery, k6, or autocannon measure response times, throughput, and error rates under stress.
# artillery-config.yml config: target: "http://localhost:3000" phases: - duration: 60 # Warm up arrivalRate: 10 - duration: 120 # Ramp up arrivalRate: 10 rampTo: 100 - duration: 300 # Sustained load arrivalRate: 100 scenarios: - name: "Browse and purchase" flow: - get: url: "/api/products" - think: 2 - post: url: "/api/cart" json: productId: "{{ $randomNumber(1, 100) }}" - post: url: "/api/checkout"
┌────────────────────────────────────────────────────────────┐
│ LOAD TEST RESULTS │
├────────────────────────────────────────────────────────────┤
│ Scenarios launched: 15000 │
│ Scenarios completed: 14892 │
│ Requests completed: 44676 │
│ RPS: 148.92 │
│ │
│ Response time (ms): │
│ min: 12 ████ │
│ max: 2847 ████████████████████████████████████ │
│ median: 89 ████████ │
│ p95: 456 ████████████████████ │
│ p99: 1203 ████████████████████████████ │
│ │
│ Errors: 108 (0.7%) - 503 Service Unavailable │
└────────────────────────────────────────────────────────────┘
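The p95/p99 figures in a report like this are percentiles over the raw response-time samples. Conceptually they come from sorting the samples and taking a nearest-rank cut; a minimal sketch using only built-ins (illustrative only — real tools like Artillery and k6 typically use streaming estimators rather than sorting every sample):

```javascript
// Nearest-rank percentile: sort samples, take the value at rank ceil(p% * n)
const percentile = (samples, p) => {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
};

// Hypothetical response times (ms) from a short load test run
const latencies = [456, 12, 89, 2847, 120, 470, 800, 45, 1203, 90];

const median = percentile(latencies, 50); // → 120
const p95 = percentile(latencies, 95);    // → 2847
```

The jump from median to p95 to max is the signal to watch: a low median with a high p99 usually points at GC pauses, event-loop blocking, or connection-pool exhaustion under load.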
End-to-End Testing
E2E testing validates complete user journeys from UI through API to database, ensuring all system components integrate correctly; tools like Playwright or Cypress test real browser interactions with your Express backend.
// e2e/checkout.spec.js (Playwright) const { test, expect } = require('@playwright/test'); test.describe('Checkout Flow', () => { test('completes purchase successfully', async ({ page }) => { // Login await page.goto('/login'); await page.fill('[name="email"]', 'user@test.com'); await page.fill('[name="password"]', 'password123'); await page.click('button[type="submit"]'); // Add to cart await page.goto('/products/1'); await page.click('#add-to-cart'); // Checkout await page.goto('/checkout'); await page.fill('#card-number', '4242424242424242'); await page.click('#place-order'); // Verify await expect(page.locator('.order-confirmation')).toBeVisible(); await expect(page.locator('.order-id')).toContainText(/ORD-\d+/); }); });
Test-Driven Development (TDD)
TDD follows Red-Green-Refactor: write a failing test first, implement minimum code to pass, then refactor; this approach drives design decisions, ensures testability, and creates living documentation of expected behavior.
┌─────────────────────────────────────────────────────────┐
│ TDD CYCLE │
│ │
│ ┌─────────┐ │
│ │ RED │ ◄── Write failing test │
│ │ ✗ │ │
│ └────┬────┘ │
│ │ │
│ ▼ │
│ ┌─────────┐ │
│ │ GREEN │ ◄── Write minimum code to pass │
│ │ ✓ │ │
│ └────┬────┘ │
│ │ │
│ ▼ │
│ ┌─────────┐ │
│ │REFACTOR │ ◄── Improve code, tests still pass │
│ │ ♻️ │ │
│ └────┬────┘ │
│ │ │
│ └──────────► Repeat │
└─────────────────────────────────────────────────────────┘
// Step 1: RED - Write failing test test('validates email format', () => { expect(() => createUser({ email: 'invalid' })).toThrow('Invalid email'); }); // Step 2: GREEN - Minimum implementation function createUser({ email }) { if (!email.includes('@')) throw new Error('Invalid email'); return { email }; } // Step 3: REFACTOR - Improve const EMAIL_REGEX = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; function createUser({ email }) { if (!EMAIL_REGEX.test(email)) throw new Error('Invalid email'); return { email, createdAt: new Date() }; }
Continuous Testing
Continuous testing integrates automated tests into CI/CD pipelines, running unit tests on every commit, integration tests on PRs, and E2E tests before deployment; this provides rapid feedback and prevents regressions from reaching production.
# .github/workflows/test.yml name: Continuous Testing on: push: branches: [main, develop] pull_request: branches: [main] jobs: unit-tests: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - uses: actions/setup-node@v3 with: node-version: '18' cache: 'npm' - run: npm ci - run: npm run test:unit -- --coverage - uses: codecov/codecov-action@v3 integration-tests: needs: unit-tests runs-on: ubuntu-latest services: postgres: image: postgres:15 env: POSTGRES_PASSWORD: test steps: - uses: actions/checkout@v3 - run: npm ci - run: npm run test:integration e2e-tests: needs: integration-tests if: github.event_name == 'pull_request' runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - run: npm ci - run: npx playwright install - run: npm run test:e2e
Logging & Monitoring
Morgan Logger
Morgan is an HTTP request logger middleware that logs incoming requests with configurable formats (tiny, combined, dev); ideal for development debugging and basic production access logs.
const express = require('express'); const morgan = require('morgan'); const app = express(); // Development - colored, concise app.use(morgan('dev')); // Output: GET /api/users 200 12.345 ms - 1234 // Production - Apache combined format app.use(morgan('combined')); // Output: ::1 - - [10/Jan/2025:10:23:45 +0000] "GET /api/users HTTP/1.1" 200 1234 "-" "Mozilla/5.0..." // Custom format with response time morgan.token('id', (req) => req.id); app.use(morgan(':id :method :url :status :response-time ms')); // Stream to file const fs = require('fs'); const accessLogStream = fs.createWriteStream('./access.log', { flags: 'a' }); app.use(morgan('combined', { stream: accessLogStream }));
Winston Integration
Winston is a versatile logging library supporting multiple transports (console, file, cloud), log levels, and formatting; it's the production standard for structured, queryable logs in Express applications.
// logger.js const winston = require('winston'); const logger = winston.createLogger({ level: process.env.LOG_LEVEL || 'info', format: winston.format.combine( winston.format.timestamp(), winston.format.errors({ stack: true }), winston.format.json() ), defaultMeta: { service: 'user-api' }, transports: [ new winston.transports.File({ filename: 'error.log', level: 'error' }), new winston.transports.File({ filename: 'combined.log' }), ], }); if (process.env.NODE_ENV !== 'production') { logger.add(new winston.transports.Console({ format: winston.format.combine( winston.format.colorize(), winston.format.simple() ) })); } // Usage in routes app.post('/users', async (req, res) => { logger.info('Creating user', { email: req.body.email }); try { const user = await userService.create(req.body); logger.info('User created', { userId: user.id }); res.json(user); } catch (err) { logger.error('Failed to create user', { error: err, body: req.body }); res.status(500).json({ error: 'Internal error' }); } });
Log Levels
Log levels categorize message severity (error, warn, info, debug, trace), allowing filtering based on environment; production typically uses 'info' while development enables 'debug' for detailed troubleshooting.
const winston = require('winston'); const levels = { error: 0, // System failures, exceptions warn: 1, // Potential issues, deprecations info: 2, // Business events, requests http: 3, // HTTP request logging debug: 4, // Detailed debugging info trace: 5 // Very verbose, function entry/exit }; const logger = winston.createLogger({ levels, level: process.env.LOG_LEVEL || 'info' }); // Usage examples logger.error('Database connection failed', { host: 'db.prod.com' }); logger.warn('API rate limit approaching', { usage: '90%' }); logger.info('Order placed', { orderId: '12345', amount: 99.99 }); logger.debug('Cache lookup', { key: 'user:123', hit: true }); logger.trace('Entering calculateDiscount()', { args: [100, 'VIP'] });
┌─────────────────────────────────────────────────────────┐
│ LOG LEVEL HIERARCHY │
├─────────────────────────────────────────────────────────┤
│ Level │ Production │ Staging │ Development │
├───────────┼────────────┼─────────┼─────────────────────┤
│ ERROR │ ✓ │ ✓ │ ✓ │
│ WARN │ ✓ │ ✓ │ ✓ │
│ INFO │ ✓ │ ✓ │ ✓ │
│ HTTP │ ○ │ ✓ │ ✓ │
│ DEBUG │ ✗ │ ○ │ ✓ │
│ TRACE │ ✗ │ ✗ │ ○ │
├───────────┴────────────┴─────────┴─────────────────────┤
│ ✓ = enabled ○ = optional ✗ = disabled │
└─────────────────────────────────────────────────────────┘
Structured Logging
Structured logging outputs JSON-formatted logs with consistent fields, enabling efficient searching, filtering, and analysis in log aggregation systems like ELK, Splunk, or CloudWatch.
// Structured log format const logger = winston.createLogger({ format: winston.format.combine( winston.format.timestamp({ format: 'YYYY-MM-DDTHH:mm:ss.SSSZ' }), winston.format.json() ), defaultMeta: { service: 'auth-api' } }); // Log with context logger.info('User login', { event: 'user.login', userId: '12345', email: 'user@example.com', ip: '192.168.1.1', userAgent: 'Mozilla/5.0...', duration: 145, success: true }); // Output (single line in reality): // { // "timestamp": "2025-01-15T10:30:45.123Z", // "level": "info", // "message": "User login", // "event": "user.login", // "userId": "12345", // "email": "user@example.com", // "ip": "192.168.1.1", // "duration": 145, // "success": true, // "service": "auth-api" // }
Log Rotation
Log rotation prevents disk exhaustion by archiving and compressing old logs based on size or time intervals; essential for long-running production services to maintain manageable log files.
const winston = require('winston'); require('winston-daily-rotate-file'); const transport = new winston.transports.DailyRotateFile({ filename: 'logs/app-%DATE%.log', datePattern: 'YYYY-MM-DD', zippedArchive: true, // Compress old files maxSize: '100m', // Rotate at 100MB maxFiles: '30d', // Keep 30 days auditFile: 'logs/audit.json' // Track rotations }); transport.on('rotate', (oldFilename, newFilename) => { console.log(`Rotated: ${oldFilename} → ${newFilename}`); }); const logger = winston.createLogger({ transports: [transport] });
logs/
├── app-2025-01-13.log.gz (compressed, 15MB)
├── app-2025-01-14.log.gz (compressed, 18MB)
├── app-2025-01-15.log (current, 45MB)
└── audit.json (rotation tracking)
Request ID Tracking
Request ID tracking assigns a unique identifier to each request, propagated through all logs and downstream services; essential for tracing requests across microservices and debugging distributed systems.
const { v4: uuidv4 } = require('uuid'); // Middleware to add request ID const requestIdMiddleware = (req, res, next) => { req.id = req.headers['x-request-id'] || uuidv4(); res.setHeader('x-request-id', req.id); next(); }; // Attach to all log entries const childLogger = (req) => logger.child({ requestId: req.id }); app.use(requestIdMiddleware); app.get('/orders/:id', async (req, res) => { const log = childLogger(req); log.info('Fetching order', { orderId: req.params.id }); const order = await orderService.find(req.params.id, req.id); log.info('Order retrieved', { status: order.status }); res.json(order); }); // All logs for this request share the same requestId // { "requestId": "abc-123", "message": "Fetching order", ... } // { "requestId": "abc-123", "message": "Order retrieved", ... }
┌─────────────────────────────────────────────────────────────────┐
│ REQUEST FLOW WITH TRACING │
│ │
│ Client ──► API Gateway ──► Order Service ──► Payment Service │
│ │ │ │ │ │
│ │ x-request-id: abc-123 │ │ │
│ └────────────┼───────────────┼──────────────────┘ │
│ ▼ ▼ │
│ ┌─────────────────────────────────────┐ │
│ │ Centralized Logs (filtered by ID) │ │
│ │ requestId: abc-123 │ │
│ │ ├── [gateway] Request received │ │
│ │ ├── [order] Fetching order │ │
│ │ ├── [order] Calling payment API │ │
│ │ ├── [payment] Processing payment │ │
│ │ └── [gateway] Response sent 200 │ │
│ └─────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
APM Integration
Application Performance Monitoring (APM) provides real-time visibility into application performance, tracking transactions, database queries, external calls, and errors; tools like New Relic, Datadog, or Elastic APM auto-instrument Express.
// Datadog APM setup (must be first import!) const tracer = require('dd-trace').init({ service: 'order-api', env: process.env.NODE_ENV, version: process.env.APP_VERSION, logInjection: true, // Correlate logs with traces runtimeMetrics: true }); const express = require('express'); const app = express(); // Custom span for business logic app.post('/checkout', async (req, res) => { const span = tracer.startSpan('checkout.process'); span.setTag('user.id', req.user.id); span.setTag('cart.items', req.body.items.length); try { const result = await processCheckout(req.body); span.setTag('order.id', result.orderId); res.json(result); } catch (err) { span.setTag('error', true); span.log({ 'error.message': err.message }); throw err; } finally { span.finish(); } });
Health Checks
Health checks expose endpoints for load balancers and orchestrators to verify application readiness (can accept traffic) and liveness (not deadlocked); they should check critical dependencies like database and cache connections.
const express = require('express'); const app = express(); // Liveness - is the process running? app.get('/health/live', (req, res) => { res.status(200).json({ status: 'alive' }); }); // Readiness - can it handle requests? app.get('/health/ready', async (req, res) => { const checks = { database: await checkDatabase(), redis: await checkRedis(), externalApi: await checkExternalApi() }; const allHealthy = Object.values(checks).every(c => c.status === 'up'); const status = allHealthy ? 200 : 503; res.status(status).json({ status: allHealthy ? 'ready' : 'degraded', timestamp: new Date().toISOString(), checks }); }); async function checkDatabase() { try { await db.query('SELECT 1'); return { status: 'up', latency: '5ms' }; } catch (err) { return { status: 'down', error: err.message }; } }
GET /health/ready
{
"status": "ready",
"timestamp": "2025-01-15T10:30:00Z",
"checks": {
"database": { "status": "up", "latency": "5ms" },
"redis": { "status": "up", "latency": "1ms" },
"externalApi": { "status": "up", "latency": "45ms" }
}
}
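One caveat the handler above doesn't cover: if a dependency hangs, `/health/ready` hangs too, and the orchestrator sees a timeout instead of a clean 503. Each probe can be raced against a deadline using only built-ins (a sketch — the timeout values shown are illustrative assumptions):

```javascript
// Wrap a dependency probe so it resolves 'down' instead of hanging forever
const withTimeout = (probe, ms, name) =>
  Promise.race([
    probe.then(() => ({ status: 'up' })),
    new Promise((resolve) =>
      setTimeout(() => resolve({ status: 'down', error: `${name} check timed out` }), ms)
    ),
  ]).catch((err) => ({ status: 'down', error: err.message }));

// Usage sketch with fake probes (real ones would be db.query('SELECT 1') etc.)
const healthy = withTimeout(Promise.resolve('pong'), 200, 'redis');
const hung = withTimeout(new Promise(() => {}), 50, 'database'); // never settles
const failing = withTimeout(Promise.reject(new Error('ECONNREFUSED')), 200, 'externalApi');
```

With this in place, the readiness handler always answers within its deadline and reports the slow dependency by name in the `checks` object.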
Metrics Collection
Metrics collection gathers numerical data about application behavior (request counts, response times, queue depths); these time-series data points enable dashboards, alerting, and capacity planning.
const promClient = require('prom-client'); // Default metrics (CPU, memory, event loop) promClient.collectDefaultMetrics(); // Custom metrics const httpRequestDuration = new promClient.Histogram({ name: 'http_request_duration_seconds', help: 'Duration of HTTP requests in seconds', labelNames: ['method', 'route', 'status'], buckets: [0.01, 0.05, 0.1, 0.5, 1, 5] }); const activeConnections = new promClient.Gauge({ name: 'active_connections', help: 'Number of active connections' }); const ordersTotal = new promClient.Counter({ name: 'orders_total', help: 'Total orders placed', labelNames: ['status'] }); // Middleware to track metrics app.use((req, res, next) => { const end = httpRequestDuration.startTimer(); res.on('finish', () => { end({ method: req.method, route: req.route?.path || req.path, status: res.statusCode }); }); next(); });
Prometheus Integration
Prometheus scrapes a /metrics endpoint exposing time-series data in a specific format; combined with Grafana, it provides powerful visualization and alerting for Express application monitoring.
const express = require('express'); const promClient = require('prom-client'); const app = express(); // Registry for all metrics const register = new promClient.Registry(); promClient.collectDefaultMetrics({ register }); // Custom business metrics (registers option attaches it to the registry) const orderValue = new promClient.Summary({ name: 'order_value_dollars', help: 'Order values in dollars', percentiles: [0.5, 0.9, 0.99], registers: [register] }); // Metrics endpoint for Prometheus scraping app.get('/metrics', async (req, res) => { res.set('Content-Type', register.contentType); res.send(await register.metrics()); });
# HELP http_request_duration_seconds Duration of HTTP requests
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{method="GET",route="/api/users",le="0.1"} 2450
http_request_duration_seconds_bucket{method="GET",route="/api/users",le="0.5"} 2890
http_request_duration_seconds_bucket{method="GET",route="/api/users",le="+Inf"} 2900
http_request_duration_seconds_sum{method="GET",route="/api/users"} 187.34
http_request_duration_seconds_count{method="GET",route="/api/users"} 2900
# HELP active_connections Number of active connections
# TYPE active_connections gauge
active_connections 145
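Note that the `le` buckets above are cumulative: each observation increments every bucket whose upper bound is at least the observed value, which is why the counts grow toward `+Inf`. A minimal stdlib sketch of that bookkeeping (hypothetical `makeHistogram`, not the prom-client API — prom-client maintains this internally):

```javascript
// Cumulative bucket bookkeeping, Prometheus-histogram style
const makeHistogram = (bounds) => {
  const buckets = new Map(bounds.map((b) => [b, 0]));
  let sum = 0;
  let count = 0;
  return {
    observe(value) {
      sum += value;
      count += 1;
      // An observation of v increments every bucket with le >= v
      for (const b of bounds) {
        if (value <= b) buckets.set(b, buckets.get(b) + 1);
      }
    },
    snapshot: () => ({ buckets: Object.fromEntries(buckets), sum, count }),
  };
};

const h = makeHistogram([0.1, 0.5, 1]);
[0.05, 0.3, 0.7].forEach((v) => h.observe(v));
// snapshot() → buckets { '0.1': 1, '0.5': 2, '1': 3 }, count 3
```

Cumulative buckets are what let Prometheus compute approximate quantiles server-side with `histogram_quantile()` while keeping the per-scrape payload tiny.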
Distributed Tracing
Distributed tracing tracks requests across multiple services using trace IDs and spans, visualizing the complete request path with timing for each service; essential for debugging latency issues in microservices architectures.
// Using OpenTelemetry for vendor-neutral tracing const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node'); const { SimpleSpanProcessor } = require('@opentelemetry/sdk-trace-base'); const { JaegerExporter } = require('@opentelemetry/exporter-jaeger'); const { registerInstrumentations } = require('@opentelemetry/instrumentation'); const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express'); const { HttpInstrumentation } = require('@opentelemetry/instrumentation-http'); const provider = new NodeTracerProvider(); provider.addSpanProcessor(new SimpleSpanProcessor(new JaegerExporter())); provider.register(); // Auto-instrument Express and HTTP calls (instances must be registered, not just constructed) registerInstrumentations({ instrumentations: [new HttpInstrumentation(), new ExpressInstrumentation()] });
┌─────────────────────────────────────────────────────────────────────────────┐
│ DISTRIBUTED TRACE: Order Checkout (trace_id: abc123) │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ [API Gateway] ████████████████████████████████████████ 450ms │
│ └─ /checkout │ │ │
│ │ │ │
│ [Order Service] │ ████████████████████████████ 320ms │ │
│ └─ createOrder │ │ │ │ │
│ │ │ │ │ │
│ [Inventory] │ │ █████████ 80ms │ │ │
│ └─ reserve │ │ │ │ │
│ │ │ │ │ │
│ [Payment] │ │ ██████████████ │ 180ms │ │
│ └─ charge │ │ │ │ │
│ │ │ │ │ │
│ [Notification] │ │ ███ 25ms │ │
│ └─ sendEmail │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ ───────────────────────────────────────────────────────────────────────── │
│ 0ms 100ms 200ms 300ms 400ms 450ms │
└─────────────────────────────────────────────────────────────────────────────┘
Architecture Patterns
MVC Architecture
Model-View-Controller separates data logic (Model), presentation (View), and request handling (Controller); in Express APIs, Views are often JSON responses, making it effectively MC with Views as serializers.
┌─────────────────────────────────────────────────────────────┐
│ MVC in Express │
├─────────────────────────────────────────────────────────────┤
│ │
│ Request ──► Router ──► Controller ──► Model ──► Database │
│ │ │ │
│ │ │ │
│ ▼ │ │
│ Response ◄── View/JSON ◄───┘ ◄──────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
// models/User.js class User { static async findById(id) { const result = await db.query('SELECT * FROM users WHERE id = $1', [id]); return result.rows[0] || null; } static async create(data) { const result = await db.query('INSERT INTO users (name, email) VALUES ($1, $2) RETURNING *', [data.name, data.email]); return result.rows[0]; } } // controllers/userController.js const User = require('../models/User'); exports.getUser = async (req, res) => { const user = await User.findById(req.params.id); if (!user) return res.status(404).json({ error: 'Not found' }); res.json(user); // View is JSON serialization }; exports.createUser = async (req, res) => { const user = await User.create(req.body); res.status(201).json(user); }; // routes/users.js router.get('/:id', userController.getUser); router.post('/', userController.createUser);
Service Layer Pattern
The service layer encapsulates business logic between controllers and data access, keeping controllers thin and focused on HTTP concerns while making business rules testable and reusable across different entry points.
// services/orderService.js class OrderService { constructor(orderRepo, inventoryService, paymentService, emailService) { this.orderRepo = orderRepo; this.inventoryService = inventoryService; this.paymentService = paymentService; this.emailService = emailService; } async createOrder(userId, items) { // Business logic lives here, not in controller const availability = await this.inventoryService.checkAvailability(items); if (!availability.available) { throw new BusinessError('Items out of stock', availability.unavailable); } const total = this.calculateTotal(items); const order = await this.orderRepo.create({ userId, items, total }); await this.inventoryService.reserve(order.id, items); await this.paymentService.charge(userId, total); await this.emailService.sendOrderConfirmation(userId, order); return order; } calculateTotal(items) { return items.reduce((sum, item) => sum + item.price * item.quantity, 0); } } // controllers/orderController.js - Thin controller exports.create = async (req, res) => { const order = await orderService.createOrder(req.user.id, req.body.items); res.status(201).json(order); };
Repository Pattern
The repository pattern abstracts data access behind a consistent interface, hiding database implementation details from services; this enables easy database switching, testing with mocks, and centralized query logic.
// repositories/userRepository.js class UserRepository { constructor(db) { this.db = db; } async findById(id) { const result = await this.db.query('SELECT * FROM users WHERE id = $1', [id]); return result.rows[0] || null; } async findByEmail(email) { const result = await this.db.query('SELECT * FROM users WHERE email = $1', [email]); return result.rows[0] || null; } async create(userData) { const result = await this.db.query( 'INSERT INTO users (name, email, password_hash) VALUES ($1, $2, $3) RETURNING *', [userData.name, userData.email, userData.passwordHash] ); return result.rows[0]; } async update(id, updates) { const fields = Object.keys(updates).map((k, i) => `${k} = $${i + 2}`).join(', '); const values = Object.values(updates); const result = await this.db.query( `UPDATE users SET ${fields} WHERE id = $1 RETURNING *`, [id, ...values] ); return result.rows[0]; } } // Easy to swap implementations class MongoUserRepository { async findById(id) { return User.findById(id); } // ... same interface, different implementation }
Dependency Injection
Dependency Injection passes dependencies to components rather than having them create their own, enabling loose coupling, easier testing with mocks, and flexible runtime configuration without code changes.
// container.js - Simple DI container const { createContainer, asClass, asValue } = require('awilix'); const container = createContainer(); container.register({ // Infrastructure db: asValue(require('./db')), redis: asValue(require('./redis')), // Repositories userRepository: asClass(UserRepository).singleton(), orderRepository: asClass(OrderRepository).singleton(), // Services userService: asClass(UserService).scoped(), orderService: asClass(OrderService).scoped(), emailService: asClass(EmailService).singleton(), }); // Classes receive dependencies via constructor class OrderService { constructor({ orderRepository, userService, emailService }) { this.orderRepo = orderRepository; this.userService = userService; this.emailService = emailService; } } // Express integration with awilix-express const { scopePerRequest } = require('awilix-express'); app.use(scopePerRequest(container)); app.get('/orders/:id', async (req, res) => { const orderService = req.container.resolve('orderService'); const order = await orderService.getById(req.params.id); res.json(order); });
Clean Architecture
Clean Architecture organizes code in concentric layers where inner layers (entities, use cases) are pure business logic with no external dependencies, while outer layers (controllers, databases) depend inward; this maximizes testability and framework independence.
┌─────────────────────────────────────────────────────────────────────┐
│ CLEAN ARCHITECTURE │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ Frameworks & Drivers (Express, PostgreSQL, Redis) │ │
│ │ ┌─────────────────────────────────────────────────────┐ │ │
│ │ │ Interface Adapters (Controllers, Repositories) │ │ │
│ │ │ ┌─────────────────────────────────────────────┐ │ │ │
│ │ │ │ Use Cases (Application Business Rules) │ │ │ │
│ │ │ │ ┌─────────────────────────────────────┐ │ │ │ │
│ │ │ │ │ Entities (Enterprise Business) │ │ │ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ User, Order, Product │ │ │ │ │
│ │ │ │ │ (Pure domain logic) │ │ │ │ │
│ │ │ │ └─────────────────────────────────────┘ │ │ │ │
│ │ │ │ CreateOrder, ProcessPayment │ │ │ │
│ │ │ └─────────────────────────────────────────────┘ │ │ │
│ │ │ UserController, OrderRepository impl │ │ │
│ │ └─────────────────────────────────────────────────────┘ │ │
│ │ Express routes, PostgreSQL queries │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
│ ←←← Dependencies point inward (toward business logic) │
└─────────────────────────────────────────────────────────────────────┘
src/
├── domain/ # Entities - pure business logic, no deps
│ ├── entities/
│ │ └── Order.js
│ └── valueObjects/
│ └── Money.js
├── application/ # Use cases - orchestrate business rules
│ ├── useCases/
│ │ └── CreateOrderUseCase.js
│ └── interfaces/ # Ports (abstract interfaces)
│ └── IOrderRepository.js
├── infrastructure/ # Adapters - implement interfaces
│ ├── repositories/
│ │ └── PostgresOrderRepository.js
│ └── services/
│ └── StripePaymentService.js
└── presentation/ # Express controllers, routes
├── controllers/
└── routes/
Domain-Driven Design
DDD focuses on modeling the business domain with ubiquitous language, bounded contexts, and rich domain objects that encapsulate behavior; it's suited for complex business logic where the domain model is the core value.
// domain/entities/Order.js - Rich domain model class Order { constructor(id, customerId, items, status = 'draft') { this.id = id; this.customerId = customerId; this.items = items; this.status = status; this.createdAt = new Date(); this.domainEvents = []; // Collected here, dispatched after persistence } get total() { return this.items.reduce((sum, item) => sum.add(item.price.multiply(item.quantity)), Money.zero()); } addItem(product, quantity) { if (this.status !== 'draft') { throw new DomainError('Cannot modify confirmed order'); } const existing = this.items.find(i => i.productId === product.id); if (existing) { existing.quantity += quantity; } else { this.items.push(new OrderItem(product, quantity)); } this.domainEvents.push({ type: 'ItemAdded', productId: product.id, quantity }); } confirm() { if (this.items.length === 0) { throw new DomainError('Cannot confirm empty order'); } this.status = 'confirmed'; this.confirmedAt = new Date(); this.domainEvents.push({ type: 'OrderConfirmed', orderId: this.id, total: this.total }); } } // domain/valueObjects/Money.js - Immutable value object class Money { constructor(amount, currency = 'USD') { this.amount = amount; this.currency = currency; Object.freeze(this); } add(other) { if (this.currency !== other.currency) throw new Error('Currency mismatch'); return new Money(this.amount + other.amount, this.currency); } multiply(factor) { return new Money(this.amount * factor, this.currency); } static zero() { return new Money(0); } }
Microservices with Express
Express works well as the basis for lightweight individual microservices, each with a focused responsibility, its own database, and explicit API contracts; services communicate via HTTP/gRPC or message queues, with an API gateway handling routing.
// order-service/app.js
const express = require('express');
const app = express();

// Service-specific routes only
app.use('/api/orders', orderRoutes);

// Inter-service communication
const axios = require('axios');
const INVENTORY_SERVICE = process.env.INVENTORY_SERVICE_URL;
const PAYMENT_SERVICE = process.env.PAYMENT_SERVICE_URL;

async function createOrder(orderData) {
  // Call other services
  const inventory = await axios.post(`${INVENTORY_SERVICE}/reserve`, {
    items: orderData.items
  });
  const payment = await axios.post(`${PAYMENT_SERVICE}/charge`, {
    userId: orderData.userId,
    amount: orderData.total
  });
  return orderRepo.create(orderData);
}

// Health check for orchestrator
app.get('/health', (req, res) => res.json({ status: 'healthy' }));

app.listen(process.env.PORT || 3001);
┌───────────────────────────────────────────────────────────────────┐
│ MICROSERVICES ARCHITECTURE │
├───────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ │
│ Clients ──────────►│ API Gateway │ │
│ │ (Express) │ │
│ └──────┬───────┘ │
│ │ │
│ ┌────────────────────┼────────────────────┐ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Users │ │ Orders │ │ Inventory │ │
│ │ (Express) │ │ (Express) │ │ (Express) │ │
│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ [MongoDB] [PostgreSQL] [Redis] │
│ │
│ ◄─────────────── Message Queue (RabbitMQ) ──────────────► │
│ │
└───────────────────────────────────────────────────────────────────┘
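The gateway in the diagram above is, at its core, a path-prefix routing table mapped to backend service URLs. A minimal sketch of that core logic (service names and URLs here are illustrative, not from the source; in a real gateway each prefix would be wired to a proxy middleware such as `http-proxy-middleware`):

```javascript
// Path-prefix routing table for an API gateway.
// Longest-prefix match so /api/orders/items beats a shorter /api entry.
const routes = [
  { prefix: '/api/users', target: 'http://users-service:3001' },
  { prefix: '/api/orders', target: 'http://orders-service:3002' },
  { prefix: '/api/inventory', target: 'http://inventory-service:3003' },
];

function resolveTarget(path, table = routes) {
  const match = table
    .filter(r => path === r.prefix || path.startsWith(r.prefix + '/'))
    .sort((a, b) => b.prefix.length - a.prefix.length)[0];
  return match ? match.target : null;
}

// In Express the wiring would look roughly like:
// routes.forEach(r => app.use(r.prefix,
//   createProxyMiddleware({ target: r.target, changeOrigin: true })));
```

The resolver is deliberately separated from the proxying so routing rules can be unit-tested and reloaded without touching the HTTP layer.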
Event-Driven Architecture
Event-driven architecture decouples services through events published to message brokers; services react asynchronously to events they subscribe to, enabling loose coupling, scalability, and eventual consistency.
// Event emitter for in-process events
const EventEmitter = require('events');
const eventBus = new EventEmitter();

// For distributed events, use a message broker
const amqp = require('amqplib');

class EventPublisher {
  async publish(event, data) {
    const channel = await this.getChannel();
    channel.publish('events', event, Buffer.from(JSON.stringify({
      event,
      data,
      timestamp: new Date().toISOString(),
      source: 'order-service'
    })));
  }
}

// Order service publishes events
app.post('/orders', async (req, res) => {
  const order = await orderRepo.create(req.body);

  // Publish event - don't wait for downstream processing
  await eventPublisher.publish('order.created', {
    orderId: order.id,
    userId: order.userId,
    items: order.items,
    total: order.total
  });

  res.status(201).json(order);
});

// Inventory service subscribes and reacts
eventSubscriber.subscribe('order.created', async (event) => {
  await inventoryService.reserveItems(event.data.orderId, event.data.items);
  await eventPublisher.publish('inventory.reserved', { orderId: event.data.orderId });
});

// Email service subscribes
eventSubscriber.subscribe('order.created', async (event) => {
  await emailService.sendOrderConfirmation(event.data.userId, event.data.orderId);
});
CQRS Pattern
Command Query Responsibility Segregation separates read and write operations into different models; writes go through command handlers with validation, while reads use optimized query models, enabling independent scaling and optimization.
// commands/createOrderCommand.js
class CreateOrderCommand {
  constructor(userId, items) {
    this.userId = userId;
    this.items = items;
    this.timestamp = new Date();
  }
}

// commandHandlers/orderCommandHandler.js
class OrderCommandHandler {
  async handle(command) {
    if (command instanceof CreateOrderCommand) {
      // Validation, business rules
      const order = new Order(command.userId, command.items);
      await this.writeRepo.save(order);

      // Publish event for read model update
      await this.eventBus.publish('OrderCreated', order);
      return order.id;
    }
  }
}

// queryHandlers/orderQueryHandler.js
class OrderQueryHandler {
  constructor(readDb) {
    this.readDb = readDb; // Optimized read database (denormalized)
  }

  async getOrderSummary(orderId) {
    // Read from optimized projection (order_items join added so the
    // aggregated alias `i` actually exists)
    return this.readDb.query(`
      SELECT o.*, u.name AS customer_name,
             COUNT(i.id) AS item_count, SUM(i.total) AS total
      FROM order_summaries o
      JOIN users u ON o.user_id = u.id
      JOIN order_items i ON i.order_id = o.id
      WHERE o.id = $1
      GROUP BY o.id, u.name
    `, [orderId]);
  }
}

// Express routes separate commands and queries
app.post('/orders', async (req, res) => {
  const command = new CreateOrderCommand(req.user.id, req.body.items);
  const orderId = await commandHandler.handle(command);
  res.status(201).json({ orderId });
});

app.get('/orders/:id', async (req, res) => {
  const order = await queryHandler.getOrderSummary(req.params.id);
  res.json(order);
});
┌─────────────────────────────────────────────────────────────────┐
│ CQRS PATTERN │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌───────────────┐ │
│ │ Client │ │
│ └───────┬───────┘ │
│ │ │
│ ┌───────────────┴───────────────┐ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Commands │ │ Queries │ │
│ │ POST/PUT/DEL │ │ GET │ │
│ └────────┬────────┘ └────────┬────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Command Handler │ │ Query Handler │ │
│ │ (Write Model) │ │ (Read Model) │ │
│ └────────┬────────┘ └────────┬────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Write DB │────Event───►│ Read DB │ │
│ │ (Normalized) │ Sourcing │ (Denormalized) │ │
│ └─────────────────┘ └─────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
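The "Event Sourcing" arrow in the diagram above implies a projection: an event handler that keeps the denormalized read model in sync with writes. A sketch with an in-memory store standing in for the read database (the class name and field names are illustrative, not from the source):

```javascript
// Projects OrderCreated events into a denormalized read model.
// In production the store would be the read DB's order_summaries table.
class OrderSummaryProjection {
  constructor(store = new Map()) {
    this.store = store;
  }

  // Subscribed to 'OrderCreated' on the event bus
  onOrderCreated(order) {
    this.store.set(order.id, {
      orderId: order.id,
      customerName: order.customerName, // denormalized from users
      itemCount: order.items.length,
      total: order.items.reduce((sum, i) => sum + i.price * i.quantity, 0),
    });
  }

  getSummary(orderId) {
    return this.store.get(orderId) || null;
  }
}
```

Because the projection is rebuilt purely from events, the read model can be dropped and replayed at any time, which is what makes the read side safe to optimize aggressively.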
Middleware Patterns
Middleware patterns in Express include pipeline (sequential processing), decorator (wrapping handlers), chain of responsibility (pass or handle), and composite (combining multiple middlewares); mastering these enables powerful request processing architectures.
// 1. Pipeline Pattern - Sequential processing
const pipeline = (...middlewares) => {
  return (req, res, next) => {
    const runMiddleware = (index) => {
      if (index >= middlewares.length) return next();
      middlewares[index](req, res, (err) => {
        if (err) return next(err);
        runMiddleware(index + 1);
      });
    };
    runMiddleware(0);
  };
};

// 2. Decorator Pattern - Wrap handlers
const withLogging = (handler) => async (req, res, next) => {
  const start = Date.now();
  try {
    await handler(req, res, next);
    logger.info(`${req.method} ${req.path} - ${Date.now() - start}ms`);
  } catch (err) {
    logger.error(`${req.method} ${req.path} failed`, err);
    next(err);
  }
};

const withCache = (ttl) => (handler) => async (req, res, next) => {
  const cached = await cache.get(req.originalUrl);
  if (cached) return res.json(cached);

  const originalJson = res.json.bind(res);
  res.json = (data) => {
    cache.set(req.originalUrl, data, ttl);
    originalJson(data);
  };
  return handler(req, res, next);
};

app.get('/users', withLogging(withCache(60)(getUsers)));

// 3. Conditional Middleware
const unless = (condition, middleware) => (req, res, next) => {
  condition(req) ? next() : middleware(req, res, next);
};

app.use(unless(req => req.path === '/health', authMiddleware));
Plugin Architecture
Plugin architecture allows extending Express applications with modular, self-contained feature bundles that register their own routes, middleware, and services; this enables composable applications and third-party extensibility.
// plugins/analyticsPlugin.js
module.exports = function analyticsPlugin(app, options = {}) {
  const { trackingId, excludePaths = ['/health'] } = options;

  // Register middleware
  app.use((req, res, next) => {
    if (excludePaths.includes(req.path)) return next();
    analytics.track(trackingId, req);
    next();
  });

  // Register routes
  app.get('/analytics/events', authenticate, (req, res) => {
    res.json(analytics.getEvents());
  });

  // Return plugin API
  return {
    track: (event, data) => analytics.customEvent(event, data)
  };
};

// plugins/rateLimitPlugin.js
module.exports = function rateLimitPlugin(app, options) {
  app.use(rateLimit(options));
  app.get('/rate-limit/status', (req, res) => {
    res.json(rateLimit.getStatus(req.ip));
  });
};

// app.js - Plugin registration
const plugins = {};

function registerPlugin(name, plugin, options) {
  plugins[name] = plugin(app, options);
  console.log(`Plugin registered: ${name}`);
}

registerPlugin('analytics', require('./plugins/analyticsPlugin'), {
  trackingId: 'UA-12345'
});
registerPlugin('rateLimit', require('./plugins/rateLimitPlugin'), {
  windowMs: 15 * 60 * 1000,
  max: 100
});

// Access plugin API
plugins.analytics.track('custom_event', { user: 'john' });
Real-time Features
WebSockets with Socket.io
Socket.io enables bidirectional real-time communication over WebSockets with automatic fallbacks, reconnection, and rooms/namespaces for organizing connections; it integrates seamlessly with Express by sharing the same HTTP server.
const express = require('express');
const { createServer } = require('http');
const { Server } = require('socket.io');

const app = express();
const httpServer = createServer(app);
const io = new Server(httpServer, {
  cors: { origin: 'http://localhost:3000' }
});

// Connection handling
io.on('connection', (socket) => {
  console.log(`Client connected: ${socket.id}`);

  // Join room based on user
  socket.on('join', (roomId) => {
    socket.join(roomId);
    socket.to(roomId).emit('user_joined', { userId: socket.userId });
  });

  // Handle messages
  socket.on('message', (data) => {
    io.to(data.roomId).emit('message', { ...data, timestamp: new Date() });
  });

  socket.on('disconnect', () => {
    console.log(`Client disconnected: ${socket.id}`);
  });
});

// Emit from Express routes
app.post('/orders', async (req, res) => {
  const order = await orderService.create(req.body);
  io.to(`user:${order.userId}`).emit('order_created', order);
  res.json(order);
});

httpServer.listen(3000);
Server-Sent Events (SSE)
Server-Sent Events provide one-way real-time streaming from server to client over HTTP; simpler than WebSockets with automatic reconnection, ideal for live feeds, notifications, and progress updates.
app.get('/events', (req, res) => {
  // SSE headers
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.flushHeaders();

  // Send initial connection event
  res.write('event: connected\n');
  res.write(`data: ${JSON.stringify({ status: 'connected' })}\n\n`);

  // Periodic heartbeat
  const heartbeat = setInterval(() => {
    res.write(': heartbeat\n\n');
  }, 30000);

  // Subscribe to events
  const handler = (event) => {
    res.write(`event: ${event.type}\n`);
    res.write(`data: ${JSON.stringify(event.data)}\n`);
    res.write(`id: ${event.id}\n\n`);
  };
  eventEmitter.on('notification', handler);

  // Cleanup on disconnect
  req.on('close', () => {
    clearInterval(heartbeat);
    eventEmitter.off('notification', handler);
  });
});

// Client usage
const eventSource = new EventSource('/events');
eventSource.addEventListener('notification', (e) => {
  console.log('Notification:', JSON.parse(e.data));
});
Long Polling
Long polling simulates real-time updates by holding HTTP requests open until data is available or a timeout occurs; it's a fallback for environments without WebSocket support, with clients reconnecting immediately after each response.
// Server - Long polling endpoint
const waitingClients = new Map();

app.get('/poll/:userId', async (req, res) => {
  const { userId } = req.params;
  const timeout = parseInt(req.query.timeout) || 30000;

  // Check for pending messages first
  const pending = await messageQueue.get(userId);
  if (pending.length > 0) {
    return res.json({ messages: pending });
  }

  // Wait for new messages
  const timeoutId = setTimeout(() => {
    waitingClients.delete(userId);
    res.json({ messages: [], timeout: true });
  }, timeout);

  waitingClients.set(userId, { res, cleanup: () => clearTimeout(timeoutId) });

  req.on('close', () => {
    const client = waitingClients.get(userId);
    if (client) client.cleanup();
    waitingClients.delete(userId);
  });
});

// Push message to waiting client
async function pushToUser(userId, message) {
  const client = waitingClients.get(userId);
  if (client) {
    client.cleanup();
    waitingClients.delete(userId);
    client.res.json({ messages: [message] });
  } else {
    await messageQueue.add(userId, message);
  }
}
┌─────────────────────────────────────────────────────────────┐
│ LONG POLLING FLOW │
├─────────────────────────────────────────────────────────────┤
│ │
│ Client Server │
│ │ │ │
│ │──── GET /poll ───────────────►│ │
│ │ (request held) │ │
│ │ │ ◄── New message │
│ │◄─── Response with data ───────│ │
│ │ │ │
│ │──── GET /poll ───────────────►│ (immediate reconnect) │
│ │ (request held) │ │
│ │ ... │ │
│ │◄─── Timeout (empty) ──────────│ (30s timeout) │
│ │ │ │
│ │──── GET /poll ───────────────►│ (reconnect) │
│ │ │ │
└─────────────────────────────────────────────────────────────┘
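The client side of the flow above is a loop that re-issues the request as soon as each response arrives. A generic sketch, where `fetchMessages` stands in for an HTTP call to `/poll/:userId` and `maxRounds` is an assumption added only to make the loop stoppable in tests:

```javascript
// Generic long-polling client loop. fetchMessages is any function that
// resolves with { messages, timeout? } - e.g. a fetch() to /poll/:userId.
async function pollLoop(fetchMessages, onMessages, { maxRounds = Infinity } = {}) {
  let rounds = 0;
  while (rounds < maxRounds) {
    rounds++;
    try {
      const { messages = [] } = await fetchMessages();
      if (messages.length > 0) onMessages(messages);
      // An empty response just means the server timed out - reconnect immediately
    } catch (err) {
      // Back off briefly on network errors before retrying
      await new Promise(r => setTimeout(r, 1000));
    }
  }
}
```

In the browser, `fetchMessages` would be something like `() => fetch('/poll/' + userId + '?timeout=30000').then(r => r.json())`.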
Real-time Notifications
Real-time notifications combine WebSocket connections with persistent storage for offline delivery, presence tracking, and read receipts; users receive instant alerts while online and queued notifications when reconnecting.
const { v4: uuid } = require('uuid');

class NotificationService {
  constructor(io, redis, db) {
    this.io = io;
    this.redis = redis;
    this.db = db;
  }

  async send(userId, notification) {
    const enriched = {
      id: uuid(),
      ...notification,
      createdAt: new Date(),
      read: false
    };

    // Persist for history/offline delivery
    await this.db.notifications.insert(enriched);

    // Check if user is online
    const socketId = await this.redis.get(`user:${userId}:socket`);
    if (socketId) {
      // Send real-time
      this.io.to(socketId).emit('notification', enriched);
    } else {
      // Queue for later delivery
      await this.redis.lpush(`user:${userId}:pending`, JSON.stringify(enriched));
    }

    // Increment unread badge
    await this.redis.incr(`user:${userId}:unread`);
  }

  async getUnread(userId) {
    // Deliver pending notifications on connect
    const pending = await this.redis.lrange(`user:${userId}:pending`, 0, -1);
    await this.redis.del(`user:${userId}:pending`);
    // Avoid pending.map(JSON.parse): map passes the index as the reviver arg
    return pending.map(p => JSON.parse(p));
  }

  async markRead(userId, notificationId) {
    await this.db.notifications.update(notificationId, { read: true });
    await this.redis.decr(`user:${userId}:unread`);
    this.io.to(`user:${userId}`).emit('notification:read', { id: notificationId });
  }
}
Chat Implementation
Chat implementation requires message persistence, room management, presence indicators, typing status, read receipts, and message history pagination; Socket.io rooms naturally map to chat channels.
// Chat server implementation
const { v4: uuid } = require('uuid');

io.on('connection', (socket) => {
  const userId = socket.handshake.auth.userId;

  // Track online status
  redis.set(`user:${userId}:online`, 'true');
  redis.set(`user:${userId}:socket`, socket.id);

  socket.on('join:room', async (roomId) => {
    socket.join(roomId);

    // Load message history
    const history = await db.messages
      .find({ roomId })
      .sort({ createdAt: -1 })
      .limit(50);
    socket.emit('room:history', history.reverse());

    socket.to(roomId).emit('user:joined', { userId, roomId });
  });

  socket.on('message:send', async (data) => {
    const message = {
      id: uuid(),
      roomId: data.roomId,
      userId,
      content: data.content,
      createdAt: new Date()
    };
    await db.messages.insert(message);
    io.to(data.roomId).emit('message:new', message);
  });

  socket.on('typing:start', (roomId) => {
    socket.to(roomId).emit('typing:update', { userId, isTyping: true });
  });

  socket.on('typing:stop', (roomId) => {
    socket.to(roomId).emit('typing:update', { userId, isTyping: false });
  });

  socket.on('message:read', async ({ roomId, messageId }) => {
    await db.readReceipts.upsert({ roomId, userId, lastRead: messageId });
    socket.to(roomId).emit('read:update', { userId, messageId });
  });

  socket.on('disconnect', () => {
    redis.del(`user:${userId}:online`);
    redis.del(`user:${userId}:socket`);
    socket.broadcast.emit('user:offline', { userId });
  });
});
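The history load above grabs only the latest 50 messages; the pagination the intro mentions is usually cursor-based on `createdAt`. A sketch of the core logic over an in-memory array (the function name and shape are assumptions; in production this would be a DB query like `WHERE createdAt < $cursor ORDER BY createdAt DESC LIMIT $n`):

```javascript
// Cursor-based pagination over messages sorted oldest -> newest.
// cursor is the createdAt of the oldest message the client already has.
function pageBefore(messages, cursor, limit = 50) {
  const older = cursor == null
    ? messages
    : messages.filter(m => m.createdAt < cursor);
  const page = older.slice(-limit); // the `limit` newest messages before the cursor
  return {
    messages: page,
    nextCursor: page.length > 0 ? page[0].createdAt : null,
    hasMore: older.length > page.length,
  };
}
```

The client scrolls up, calls the endpoint with `nextCursor`, and prepends the returned page; cursors avoid the skipped/duplicated rows that offset pagination suffers when new messages arrive mid-scroll.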
Live Updates
Live updates push data changes to subscribed clients immediately, implementing pub/sub patterns for dashboards, live scores, stock tickers, or collaborative editing; clients subscribe to specific data channels and receive patches or full updates.
class LiveUpdateService {
  constructor(io, redis) {
    this.io = io;
    this.redis = redis;
    this.subscriptions = new Map();

    // Subscribe to Redis pub/sub for multi-server support
    this.subscriber = redis.duplicate();
    this.subscriber.subscribe('data:updates');
    this.subscriber.on('message', (channel, message) => {
      const update = JSON.parse(message);
      this.broadcast(update.channel, update.data);
    });
  }

  subscribe(socket, channel) {
    socket.join(`live:${channel}`);

    // Track subscriptions for cleanup
    if (!this.subscriptions.has(socket.id)) {
      this.subscriptions.set(socket.id, new Set());
    }
    this.subscriptions.get(socket.id).add(channel);
  }

  // Release tracked subscriptions when a socket disconnects
  unsubscribeAll(socket) {
    this.subscriptions.delete(socket.id);
  }

  broadcast(channel, data) {
    this.io.to(`live:${channel}`).emit('update', { channel, data });
  }

  // Publish update (works across multiple servers via Redis)
  async publish(channel, data) {
    await this.redis.publish('data:updates', JSON.stringify({ channel, data }));
  }
}

// Usage
io.on('connection', (socket) => {
  socket.on('subscribe', (channels) => {
    channels.forEach(ch => liveUpdates.subscribe(socket, ch));
  });
  socket.on('disconnect', () => liveUpdates.unsubscribeAll(socket));
});

// Trigger updates from Express routes
app.put('/stocks/:symbol', async (req, res) => {
  const stock = await stockService.updatePrice(req.params.symbol, req.body.price);
  await liveUpdates.publish(`stock:${req.params.symbol}`, stock);
  res.json(stock);
});
WebSocket Authentication
WebSocket authentication validates connections during handshake using JWT tokens, session cookies, or custom auth; middleware verifies credentials before allowing connection, with periodic revalidation for long-lived connections.
const { Server } = require('socket.io');
const jwt = require('jsonwebtoken');

const io = new Server(httpServer, {
  cors: { origin: 'http://localhost:3000', credentials: true }
});

// Authentication middleware
io.use(async (socket, next) => {
  try {
    // Option 1: Token in auth object
    const token = socket.handshake.auth.token;

    // Option 2: Token in query string (for the HTTP long-polling fallback)
    // const token = socket.handshake.query.token;

    // Option 3: Cookie
    // const token = parseCookies(socket.handshake.headers.cookie).jwt;

    if (!token) {
      return next(new Error('Authentication required'));
    }

    const decoded = jwt.verify(token, process.env.JWT_SECRET);
    socket.userId = decoded.userId;
    socket.role = decoded.role;

    // Check if user is still valid (not banned, etc.)
    const user = await db.users.findById(decoded.userId);
    if (!user || user.banned) {
      return next(new Error('User not authorized'));
    }

    next();
  } catch (err) {
    next(new Error('Invalid token'));
  }
});

// Namespace-specific auth
const adminNamespace = io.of('/admin');
adminNamespace.use((socket, next) => {
  if (socket.role !== 'admin') {
    return next(new Error('Admin access required'));
  }
  next();
});

// Client connection with auth
const socket = io('http://localhost:3000', {
  auth: { token: localStorage.getItem('jwt') }
});

socket.on('connect_error', (err) => {
  if (err.message === 'Authentication required') {
    // Redirect to login
  }
});
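The middleware above validates credentials only at handshake time; the periodic revalidation mentioned for long-lived connections can be factored into a small helper. A sketch under stated assumptions (the helper name, `verify` callback, and interval are illustrative, not part of the Socket.io API):

```javascript
// Periodically re-check a connected socket's credentials and disconnect
// it when they are no longer valid (expired token, banned user, ...).
// verify is any async predicate - e.g. re-running jwt.verify plus a DB lookup.
function scheduleRevalidation(socket, verify, intervalMs = 5 * 60 * 1000) {
  const timer = setInterval(async () => {
    try {
      const stillValid = await verify(socket);
      if (!stillValid) socket.disconnect(true); // close the underlying connection
    } catch (err) {
      socket.disconnect(true); // treat verification errors as invalid
    }
  }, intervalMs);

  // Stop checking once the socket goes away
  socket.on('disconnect', () => clearInterval(timer));
  return timer;
}
```

Wired up inside `io.on('connection', ...)`, each socket gets its own timer, and the disconnect handler prevents timers from leaking after clients drop.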
Scaling WebSockets
Scaling WebSockets across multiple servers requires sticky sessions or a pub/sub layer (Redis adapter) so messages reach all clients regardless of which server they're connected to; this enables horizontal scaling while maintaining real-time functionality.
const { createAdapter } = require('@socket.io/redis-adapter');
const { createClient } = require('redis');

const pubClient = createClient({ url: process.env.REDIS_URL });
const subClient = pubClient.duplicate();

// CommonJS has no top-level await - connect inside an async bootstrap
(async () => {
  await Promise.all([pubClient.connect(), subClient.connect()]);

  // Redis adapter for multi-server support
  io.adapter(createAdapter(pubClient, subClient));

  // Now io.emit() works across all servers!
  io.emit('announcement', { message: 'System update in 5 minutes' });

  // Room broadcasts work across servers too
  io.to('room:123').emit('message', { text: 'Hello room!' });
})();
┌─────────────────────────────────────────────────────────────────────────┐
│ SCALING WEBSOCKETS WITH REDIS ADAPTER │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Clients Load Balancer Servers │
│ (Sticky Sessions) │
│ │
│ ┌─────────┐ │ ┌─────────────────────┐ │
│ │Client A ├────────────┼─────────────►│ Server 1 (Express) │ │
│ └─────────┘ │ │ + Socket.io │ │
│ │ └──────────┬──────────┘ │
│ ┌─────────┐ │ │ │
│ │Client B ├────────────┼─────────────►┌──────────┴──────────┐ │
│ └─────────┘ │ │ │ │
│ │ │ Redis Pub/Sub │ │
│ ┌─────────┐ │ │ (Message Broker) │ │
│ │Client C ├────────────┼─────────────►│ │ │
│ └─────────┘ │ └──────────┬──────────┘ │
│ │ │ │
│ ┌─────────┐ │ ┌──────────┴──────────┐ │
│ │Client D ├────────────┼─────────────►│ Server 2 (Express) │ │
│ └─────────┘ │ │ + Socket.io │ │
│ └─────────────────────┘ │
│ │
│ Message from Server 1 → Redis → Server 2 → Client D │
│ │
└─────────────────────────────────────────────────────────────────────────┘
# docker-compose.yml for scaled setup
services:
  app:
    build: .
    deploy:
      replicas: 4
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - app
# nginx.conf - Sticky sessions for WebSocket
upstream socketio {
    ip_hash;  # Sticky sessions by client IP
    server app:3000;
}

server {
    location /socket.io/ {
        proxy_pass http://socketio;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
This concludes the Advanced and Senior sections covering Testing, Logging & Monitoring, Architecture Patterns, and Real-time Features in Express.js. These patterns form the foundation for building production-grade, scalable applications.