35 min read

Advanced Express.js Masterclass: Architecture, Observability, and Real-Time Systems

Moving beyond the basics: A definitive deep-dive into the three pillars of enterprise Node.js development. We cover Domain-Driven Design, distributed tracing, and high-concurrency real-time patterns in a single, production-ready guide.

Logging & Monitoring

Morgan Logger

Morgan is an HTTP request logger middleware that logs incoming requests with configurable formats (tiny, combined, dev); ideal for development debugging and basic production access logs.

const express = require('express');
const morgan = require('morgan');
const app = express();

// Development - colored, concise
app.use(morgan('dev'));
// Output: GET /api/users 200 12.345 ms - 1234

// Production - Apache combined format
app.use(morgan('combined'));
// Output: ::1 - - [10/Jan/2025:10:23:45 +0000] "GET /api/users HTTP/1.1" 200 1234 "-" "Mozilla/5.0..."

// Custom format with response time
morgan.token('id', (req) => req.id);
app.use(morgan(':id :method :url :status :response-time ms'));

// Stream to file
const fs = require('fs');
const accessLogStream = fs.createWriteStream('./access.log', { flags: 'a' });
app.use(morgan('combined', { stream: accessLogStream }));
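Morgan and Winston (covered next) compose nicely: morgan's stream option can forward each access-log line into the structured logger so everything ends up in one place. A minimal sketch, assuming a Winston logger like the one defined in the next section:

// Forward morgan's access-log lines into Winston
app.use(morgan('combined', {
  stream: {
    // morgan calls write() with a formatted, newline-terminated line;
    // logger.http requires the logger level to be 'http' or lower (see the Log Levels table)
    write: (line) => logger.http(line.trim())
  }
}));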

Winston Integration

Winston is a versatile logging library supporting multiple transports (console, file, cloud), log levels, and formatting; it's the production standard for structured, queryable logs in Express applications.

// logger.js
const winston = require('winston');

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  defaultMeta: { service: 'user-api' },
  transports: [
    new winston.transports.File({ filename: 'error.log', level: 'error' }),
    new winston.transports.File({ filename: 'combined.log' }),
  ],
});

if (process.env.NODE_ENV !== 'production') {
  logger.add(new winston.transports.Console({
    format: winston.format.combine(
      winston.format.colorize(),
      winston.format.simple()
    )
  }));
}

// Usage in routes
app.post('/users', async (req, res) => {
  logger.info('Creating user', { email: req.body.email });
  try {
    const user = await userService.create(req.body);
    logger.info('User created', { userId: user.id });
    res.json(user);
  } catch (err) {
    logger.error('Failed to create user', { error: err, body: req.body });
    res.status(500).json({ error: 'Internal error' });
  }
});

Log Levels

Log levels categorize message severity (error, warn, info, debug, trace), allowing filtering based on environment; production typically uses 'info' while development enables 'debug' for detailed troubleshooting.

const winston = require('winston');

const levels = {
  error: 0,  // System failures, exceptions
  warn: 1,   // Potential issues, deprecations
  info: 2,   // Business events, requests
  http: 3,   // HTTP request logging
  debug: 4,  // Detailed debugging info
  trace: 5   // Very verbose, function entry/exit
};

const logger = winston.createLogger({
  levels,
  level: process.env.LOG_LEVEL || 'info',
  transports: [new winston.transports.Console()] // At least one transport is required
});

// Usage examples
logger.error('Database connection failed', { host: 'db.prod.com' });
logger.warn('API rate limit approaching', { usage: '90%' });
logger.info('Order placed', { orderId: '12345', amount: 99.99 });
logger.debug('Cache lookup', { key: 'user:123', hit: true });
logger.trace('Entering calculateDiscount()', { args: [100, 'VIP'] });
┌─────────────────────────────────────────────────────────┐
│              LOG LEVEL HIERARCHY                        │
├─────────────────────────────────────────────────────────┤
│  Level    │ Production │ Staging │ Development         │
├───────────┼────────────┼─────────┼─────────────────────┤
│  ERROR    │     ✓      │    ✓    │      ✓              │
│  WARN     │     ✓      │    ✓    │      ✓              │
│  INFO     │     ✓      │    ✓    │      ✓              │
│  HTTP     │     ○      │    ✓    │      ✓              │
│  DEBUG    │     ✗      │    ○    │      ✓              │
│  TRACE    │     ✗      │    ✗    │      ○              │
├───────────┴────────────┴─────────┴─────────────────────┤
│  ✓ = enabled   ○ = optional   ✗ = disabled            │
└─────────────────────────────────────────────────────────┘
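The table above can be encoded in configuration so each environment gets a sensible default while LOG_LEVEL still overrides it. A small sketch; the mapping itself is a convention of this guide, not a Winston feature:

// Default level per environment, overridable with LOG_LEVEL
const defaultLevels = {
  production: 'info',
  staging: 'http',
  development: 'debug'
};

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || defaultLevels[process.env.NODE_ENV] || 'info',
  transports: [new winston.transports.Console()]
});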

Structured Logging

Structured logging outputs JSON-formatted logs with consistent fields, enabling efficient searching, filtering, and analysis in log aggregation systems like ELK, Splunk, or CloudWatch.

// Structured log format
const logger = winston.createLogger({
  format: winston.format.combine(
    winston.format.timestamp({ format: 'YYYY-MM-DDTHH:mm:ss.SSSZ' }),
    winston.format.json()
  )
});

// Log with context
logger.info('User login', {
  event: 'user.login',
  userId: '12345',
  email: 'user@example.com',
  ip: '192.168.1.1',
  userAgent: 'Mozilla/5.0...',
  duration: 145,
  success: true
});

// Output (single line in reality):
// {
//   "timestamp": "2025-01-15T10:30:45.123Z",
//   "level": "info",
//   "message": "User login",
//   "event": "user.login",
//   "userId": "12345",
//   "email": "user@example.com",
//   "ip": "192.168.1.1",
//   "duration": 145,
//   "success": true,
//   "service": "auth-api"
// }

Log Rotation

Log rotation prevents disk exhaustion by archiving and compressing old logs based on size or time intervals; essential for long-running production services to maintain manageable log files.

const winston = require('winston');
require('winston-daily-rotate-file');

const transport = new winston.transports.DailyRotateFile({
  filename: 'logs/app-%DATE%.log',
  datePattern: 'YYYY-MM-DD',
  zippedArchive: true,         // Compress old files
  maxSize: '100m',             // Rotate at 100MB
  maxFiles: '30d',             // Keep 30 days
  auditFile: 'logs/audit.json' // Track rotations
});

transport.on('rotate', (oldFilename, newFilename) => {
  console.log(`Rotated: ${oldFilename} -> ${newFilename}`);
});

const logger = winston.createLogger({ transports: [transport] });
logs/
├── app-2025-01-13.log.gz   (compressed, 15MB)
├── app-2025-01-14.log.gz   (compressed, 18MB)
├── app-2025-01-15.log      (current, 45MB)
└── audit.json              (rotation tracking)

Request ID Tracking

Request ID tracking assigns a unique identifier to each request, propagated through all logs and downstream services; essential for tracing requests across microservices and debugging distributed systems.

const { v4: uuidv4 } = require('uuid');

// Middleware to add request ID
const requestIdMiddleware = (req, res, next) => {
  req.id = req.headers['x-request-id'] || uuidv4();
  res.setHeader('x-request-id', req.id);
  next();
};

// Attach to all log entries
const childLogger = (req) => logger.child({ requestId: req.id });

app.use(requestIdMiddleware);

app.get('/orders/:id', async (req, res) => {
  const log = childLogger(req);
  log.info('Fetching order', { orderId: req.params.id });
  const order = await orderService.find(req.params.id, req.id);
  log.info('Order retrieved', { status: order.status });
  res.json(order);
});

// All logs for this request share the same requestId
// { "requestId": "abc-123", "message": "Fetching order", ... }
// { "requestId": "abc-123", "message": "Order retrieved", ... }
┌─────────────────────────────────────────────────────────────────┐
│  REQUEST FLOW WITH TRACING                                      │
│                                                                 │
│  Client ──► API Gateway ──► Order Service ──► Payment Service   │
│    │            │               │                  │            │
│    │  x-request-id: abc-123     │                  │            │
│    └────────────┼───────────────┼──────────────────┘            │
│                 ▼               ▼                               │
│         ┌─────────────────────────────────────┐                 │
│         │  Centralized Logs (filtered by ID)  │                 │
│         │  requestId: abc-123                 │                 │
│         │  ├── [gateway] Request received     │                 │
│         │  ├── [order]   Fetching order       │                 │
│         │  ├── [order]   Calling payment API  │                 │
│         │  ├── [payment] Processing payment   │                 │
│         │  └── [gateway] Response sent 200    │                 │
│         └─────────────────────────────────────┘                 │
└─────────────────────────────────────────────────────────────────┘
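Passing req (or a request-scoped child logger) through every function call gets tedious in deeper call stacks. A minimal alternative sketch using Node's built-in AsyncLocalStorage, so any module can read the current request ID without an explicit parameter; the store shape here is illustrative:

const { AsyncLocalStorage } = require('async_hooks');
const { v4: uuidv4 } = require('uuid');

// Holds per-request context for the lifetime of each request
const requestContext = new AsyncLocalStorage();

// Middleware: run the rest of the request inside a context store
app.use((req, res, next) => {
  const requestId = req.headers['x-request-id'] || uuidv4();
  res.setHeader('x-request-id', requestId);
  requestContext.run({ requestId }, next);
});

// Anywhere in the codebase, without access to req:
function logWithRequestId(message, meta = {}) {
  const store = requestContext.getStore() || {};
  logger.info(message, { requestId: store.requestId, ...meta });
}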

APM Integration

Application Performance Monitoring (APM) provides real-time visibility into application performance, tracking transactions, database queries, external calls, and errors; tools like New Relic, Datadog, or Elastic APM auto-instrument Express.

// Datadog APM setup (must be the first import!)
const tracer = require('dd-trace').init({
  service: 'order-api',
  env: process.env.NODE_ENV,
  version: process.env.APP_VERSION,
  logInjection: true,   // Correlate logs with traces
  runtimeMetrics: true
});

const express = require('express');
const app = express();

// Custom span for business logic
app.post('/checkout', async (req, res) => {
  const span = tracer.startSpan('checkout.process');
  span.setTag('user.id', req.user.id);
  span.setTag('cart.items', req.body.items.length);
  try {
    const result = await processCheckout(req.body);
    span.setTag('order.id', result.orderId);
    res.json(result);
  } catch (err) {
    span.setTag('error', true);
    span.log({ 'error.message': err.message });
    throw err;
  } finally {
    span.finish();
  }
});

Health Checks

Health checks expose endpoints for load balancers and orchestrators to verify application readiness (can accept traffic) and liveness (not deadlocked); they should check critical dependencies like database and cache connections.

const express = require('express');
const app = express();

// Liveness - is the process running?
app.get('/health/live', (req, res) => {
  res.status(200).json({ status: 'alive' });
});

// Readiness - can it handle requests?
app.get('/health/ready', async (req, res) => {
  const checks = {
    database: await checkDatabase(),
    redis: await checkRedis(),
    externalApi: await checkExternalApi()
  };
  const allHealthy = Object.values(checks).every(c => c.status === 'up');
  const status = allHealthy ? 200 : 503;
  res.status(status).json({
    status: allHealthy ? 'ready' : 'degraded',
    timestamp: new Date().toISOString(),
    checks
  });
});

async function checkDatabase() {
  try {
    await db.query('SELECT 1');
    return { status: 'up', latency: '5ms' };
  } catch (err) {
    return { status: 'down', error: err.message };
  }
}
GET /health/ready

{
  "status": "ready",
  "timestamp": "2025-01-15T10:30:00Z",
  "checks": {
    "database": { "status": "up", "latency": "5ms" },
    "redis": { "status": "up", "latency": "1ms" },
    "externalApi": { "status": "up", "latency": "45ms" }
  }
}
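The readiness handler also calls checkRedis and checkExternalApi, which aren't shown above. A minimal sketch of what those helpers could look like, assuming a Redis client that exposes ping() (ioredis and node-redis both do) and an illustrative EXTERNAL_API_URL status endpoint:

async function checkRedis() {
  const start = Date.now();
  try {
    await redis.ping();
    return { status: 'up', latency: `${Date.now() - start}ms` };
  } catch (err) {
    return { status: 'down', error: err.message };
  }
}

async function checkExternalApi() {
  const start = Date.now();
  try {
    // Any lightweight endpoint on the dependency works here (global fetch, Node 18+)
    const response = await fetch(`${process.env.EXTERNAL_API_URL}/status`);
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    return { status: 'up', latency: `${Date.now() - start}ms` };
  } catch (err) {
    return { status: 'down', error: err.message };
  }
}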

Metrics Collection

Metrics collection gathers numerical data about application behavior (request counts, response times, queue depths); these time-series data points enable dashboards, alerting, and capacity planning.

const promClient = require('prom-client');

// Default metrics (CPU, memory, event loop)
promClient.collectDefaultMetrics();

// Custom metrics
const httpRequestDuration = new promClient.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'status'],
  buckets: [0.01, 0.05, 0.1, 0.5, 1, 5]
});

const activeConnections = new promClient.Gauge({
  name: 'active_connections',
  help: 'Number of active connections'
});

const ordersTotal = new promClient.Counter({
  name: 'orders_total',
  help: 'Total orders placed',
  labelNames: ['status']
});

// Middleware to track metrics
app.use((req, res, next) => {
  const end = httpRequestDuration.startTimer();
  res.on('finish', () => {
    end({
      method: req.method,
      route: req.route?.path || req.path,
      status: res.statusCode
    });
  });
  next();
});

Prometheus Integration

Prometheus scrapes a /metrics endpoint exposing time-series data in a specific format; combined with Grafana, it provides powerful visualization and alerting for Express application monitoring.

const express = require('express');
const promClient = require('prom-client');
const app = express();

// Registry for all metrics
const register = new promClient.Registry();
promClient.collectDefaultMetrics({ register });

// Custom business metrics
const orderValue = new promClient.Summary({
  name: 'order_value_dollars',
  help: 'Order values in dollars',
  percentiles: [0.5, 0.9, 0.99],
  registers: [register] // Registers the metric; no separate registerMetric() call needed
});

// Metrics endpoint for Prometheus scraping
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', register.contentType);
  res.send(await register.metrics());
});
# HELP http_request_duration_seconds Duration of HTTP requests
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{method="GET",route="/api/users",le="0.1"} 2450
http_request_duration_seconds_bucket{method="GET",route="/api/users",le="0.5"} 2890
http_request_duration_seconds_bucket{method="GET",route="/api/users",le="+Inf"} 2900
http_request_duration_seconds_sum{method="GET",route="/api/users"} 187.34
http_request_duration_seconds_count{method="GET",route="/api/users"} 2900

# HELP active_connections Number of active connections
# TYPE active_connections gauge
active_connections 145
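On the Prometheus side, the server needs a scrape job pointed at the /metrics endpoint. A minimal sketch of the corresponding prometheus.yml; the job name, target host, and interval are illustrative values:

# prometheus.yml - scrape the Express app's /metrics endpoint
scrape_configs:
  - job_name: 'order-api'
    scrape_interval: 15s
    metrics_path: /metrics
    static_configs:
      - targets: ['order-api:3000']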

Distributed Tracing

Distributed tracing tracks requests across multiple services using trace IDs and spans, visualizing the complete request path with timing for each service; essential for debugging latency issues in microservices architectures.

// Using OpenTelemetry for vendor-neutral tracing
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { SimpleSpanProcessor } = require('@opentelemetry/sdk-trace-base');
const { JaegerExporter } = require('@opentelemetry/exporter-jaeger');
const { registerInstrumentations } = require('@opentelemetry/instrumentation');
const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express');
const { HttpInstrumentation } = require('@opentelemetry/instrumentation-http');

const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(new JaegerExporter()));
provider.register();

// Auto-instrument Express and HTTP calls
// (instantiating alone is not enough - they must be registered)
registerInstrumentations({
  instrumentations: [new HttpInstrumentation(), new ExpressInstrumentation()]
});
┌─────────────────────────────────────────────────────────────────────────────┐
│  DISTRIBUTED TRACE: Order Checkout (trace_id: abc123)                       │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  [API Gateway]     ████████████████████████████████████████  450ms          │
│    └─ /checkout    │                                      │                 │
│                    │                                      │                 │
│  [Order Service]   │ ████████████████████████████  320ms  │                 │
│    └─ createOrder  │ │                          │        │                 │
│                    │ │                          │        │                 │
│  [Inventory]       │ │ █████████  80ms          │        │                 │
│    └─ reserve      │ │                          │        │                 │
│                    │ │                          │        │                 │
│  [Payment]         │ │         ██████████████   │ 180ms  │                 │
│    └─ charge       │ │                          │        │                 │
│                    │ │                          │        │                 │
│  [Notification]    │ │                          ███ 25ms │                 │
│    └─ sendEmail    │ │                          │        │                 │
│                    ▼ ▼                          ▼        ▼                 │
│  ─────────────────────────────────────────────────────────────────────────  │
│  0ms              100ms       200ms       300ms       400ms      450ms      │
└─────────────────────────────────────────────────────────────────────────────┘
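Beyond auto-instrumentation, individual operations can be wrapped in manual spans through the @opentelemetry/api package. A minimal sketch; the span name, attributes, and processCheckout call are illustrative:

const { trace, SpanStatusCode } = require('@opentelemetry/api');
const otelTracer = trace.getTracer('order-service');

app.post('/checkout', async (req, res, next) => {
  // startActiveSpan makes this span the parent of anything created inside the callback
  await otelTracer.startActiveSpan('checkout.process', async (span) => {
    try {
      span.setAttribute('cart.items', req.body.items.length);
      const result = await processCheckout(req.body);
      span.setAttribute('order.id', result.orderId);
      res.json(result);
    } catch (err) {
      span.recordException(err);
      span.setStatus({ code: SpanStatusCode.ERROR });
      next(err);
    } finally {
      span.end();
    }
  });
});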

Architecture Patterns

MVC Architecture

Model-View-Controller separates data logic (Model), presentation (View), and request handling (Controller); in Express APIs, Views are often JSON responses, making it effectively MC with Views as serializers.

┌─────────────────────────────────────────────────────────────┐
│                     MVC in Express                          │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│   Request ──► Router ──► Controller ──► Model ──► Database  │
│                              │            │                 │
│                              │            │                 │
│                              ▼            │                 │
│   Response ◄── View/JSON ◄───┘ ◄──────────┘                 │
│                                                             │
└─────────────────────────────────────────────────────────────┘
// models/User.js
class User {
  static async findById(id) {
    return db.query('SELECT * FROM users WHERE id = $1', [id]);
  }
  static async create(data) {
    return db.query(
      'INSERT INTO users (name, email) VALUES ($1, $2) RETURNING *',
      [data.name, data.email]
    );
  }
}

// controllers/userController.js
const User = require('../models/User');

exports.getUser = async (req, res) => {
  const user = await User.findById(req.params.id);
  if (!user) return res.status(404).json({ error: 'Not found' });
  res.json(user); // View is JSON serialization
};

exports.createUser = async (req, res) => {
  const user = await User.create(req.body);
  res.status(201).json(user);
};

// routes/users.js
router.get('/:id', userController.getUser);
router.post('/', userController.createUser);

Service Layer Pattern

The service layer encapsulates business logic between controllers and data access, keeping controllers thin and focused on HTTP concerns while making business rules testable and reusable across different entry points.

// services/orderService.js
class OrderService {
  constructor(orderRepo, inventoryService, paymentService, emailService) {
    this.orderRepo = orderRepo;
    this.inventoryService = inventoryService;
    this.paymentService = paymentService;
    this.emailService = emailService;
  }

  async createOrder(userId, items) {
    // Business logic lives here, not in the controller
    const availability = await this.inventoryService.checkAvailability(items);
    if (!availability.available) {
      throw new BusinessError('Items out of stock', availability.unavailable);
    }

    const total = this.calculateTotal(items);
    const order = await this.orderRepo.create({ userId, items, total });

    await this.inventoryService.reserve(order.id, items);
    await this.paymentService.charge(userId, total);
    await this.emailService.sendOrderConfirmation(userId, order);

    return order;
  }

  calculateTotal(items) {
    return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  }
}

// controllers/orderController.js - Thin controller
exports.create = async (req, res) => {
  const order = await orderService.createOrder(req.user.id, req.body.items);
  res.status(201).json(order);
};

Repository Pattern

The repository pattern abstracts data access behind a consistent interface, hiding database implementation details from services; this enables easy database switching, testing with mocks, and centralized query logic.

// repositories/userRepository.js
class UserRepository {
  constructor(db) {
    this.db = db;
  }

  async findById(id) {
    const result = await this.db.query('SELECT * FROM users WHERE id = $1', [id]);
    return result.rows[0] || null;
  }

  async findByEmail(email) {
    const result = await this.db.query('SELECT * FROM users WHERE email = $1', [email]);
    return result.rows[0] || null;
  }

  async create(userData) {
    const result = await this.db.query(
      'INSERT INTO users (name, email, password_hash) VALUES ($1, $2, $3) RETURNING *',
      [userData.name, userData.email, userData.passwordHash]
    );
    return result.rows[0];
  }

  async update(id, updates) {
    const fields = Object.keys(updates).map((k, i) => `${k} = $${i + 2}`).join(', ');
    const values = Object.values(updates);
    const result = await this.db.query(
      `UPDATE users SET ${fields} WHERE id = $1 RETURNING *`,
      [id, ...values]
    );
    return result.rows[0];
  }
}

// Easy to swap implementations
class MongoUserRepository {
  async findById(id) {
    return User.findById(id);
  }
  // ... same interface, different implementation
}

Dependency Injection

Dependency Injection passes dependencies to components rather than having them create their own, enabling loose coupling, easier testing with mocks, and flexible runtime configuration without code changes.

// container.js - Simple DI container
const { createContainer, asClass, asValue } = require('awilix');

const container = createContainer();
container.register({
  // Infrastructure
  db: asValue(require('./db')),
  redis: asValue(require('./redis')),

  // Repositories
  userRepository: asClass(UserRepository).singleton(),
  orderRepository: asClass(OrderRepository).singleton(),

  // Services
  userService: asClass(UserService).scoped(),
  orderService: asClass(OrderService).scoped(),
  emailService: asClass(EmailService).singleton(),
});

// Classes receive dependencies via constructor
class OrderService {
  constructor({ orderRepository, userService, emailService }) {
    this.orderRepo = orderRepository;
    this.userService = userService;
    this.emailService = emailService;
  }
}

// Express integration with awilix-express
const { scopePerRequest } = require('awilix-express');
app.use(scopePerRequest(container));

app.get('/orders/:id', async (req, res) => {
  const orderService = req.container.resolve('orderService');
  const order = await orderService.getById(req.params.id);
  res.json(order);
});

Clean Architecture

Clean Architecture organizes code in concentric layers where inner layers (entities, use cases) are pure business logic with no external dependencies, while outer layers (controllers, databases) depend inward; this maximizes testability and framework independence.

┌─────────────────────────────────────────────────────────────────────┐
│                      CLEAN ARCHITECTURE                             │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  ┌─────────────────────────────────────────────────────────────┐   │
│  │  Frameworks & Drivers (Express, PostgreSQL, Redis)          │   │
│  │  ┌─────────────────────────────────────────────────────┐   │   │
│  │  │  Interface Adapters (Controllers, Repositories)     │   │   │
│  │  │  ┌─────────────────────────────────────────────┐   │   │   │
│  │  │  │  Use Cases (Application Business Rules)     │   │   │   │
│  │  │  │  ┌─────────────────────────────────────┐   │   │   │   │
│  │  │  │  │  Entities (Enterprise Business)     │   │   │   │   │
│  │  │  │  │                                     │   │   │   │   │
│  │  │  │  │     User, Order, Product           │   │   │   │   │
│  │  │  │  │     (Pure domain logic)            │   │   │   │   │
│  │  │  │  └─────────────────────────────────────┘   │   │   │   │
│  │  │  │     CreateOrder, ProcessPayment           │   │   │   │
│  │  │  └─────────────────────────────────────────────┘   │   │   │
│  │  │     UserController, OrderRepository impl          │   │   │
│  │  └─────────────────────────────────────────────────────┘   │   │
│  │     Express routes, PostgreSQL queries                      │   │
│  └─────────────────────────────────────────────────────────────┘   │
│                                                                     │
│  ←←← Dependencies point inward (toward business logic)             │
└─────────────────────────────────────────────────────────────────────┘
src/
├── domain/           # Entities - pure business logic, no deps
│   ├── entities/
│   │   └── Order.js
│   └── valueObjects/
│       └── Money.js
├── application/      # Use cases - orchestrate business rules
│   ├── useCases/
│   │   └── CreateOrderUseCase.js
│   └── interfaces/   # Ports (abstract interfaces)
│       └── IOrderRepository.js
├── infrastructure/   # Adapters - implement interfaces
│   ├── repositories/
│   │   └── PostgresOrderRepository.js
│   └── services/
│       └── StripePaymentService.js
└── presentation/     # Express controllers, routes
    ├── controllers/
    └── routes/
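To make the dependency rule concrete, here is a minimal sketch of what CreateOrderUseCase.js from the tree above could look like: the use case depends only on abstract ports, and the outer presentation layer wires in the concrete Postgres and Stripe adapters. The constructor shape, method names, and direct wiring (normally done via a DI container) are illustrative:

// application/useCases/CreateOrderUseCase.js - no Express, no SQL, no framework imports
class CreateOrderUseCase {
  constructor({ orderRepository, paymentGateway }) {
    this.orderRepository = orderRepository; // any object implementing IOrderRepository
    this.paymentGateway = paymentGateway;
  }

  async execute({ customerId, items }) {
    const order = new Order(null, customerId, items); // domain entity from src/domain
    order.confirm();                                   // business rules live in the entity
    await this.paymentGateway.charge(customerId, order.total);
    return this.orderRepository.save(order);
  }
}

// presentation/controllers/orderController.js - outer layer depends inward
const createOrder = new CreateOrderUseCase({
  orderRepository: new PostgresOrderRepository(db),
  paymentGateway: new StripePaymentService(),
});

exports.create = async (req, res) => {
  const order = await createOrder.execute({ customerId: req.user.id, items: req.body.items });
  res.status(201).json(order);
};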

Domain-Driven Design

DDD focuses on modeling the business domain with ubiquitous language, bounded contexts, and rich domain objects that encapsulate behavior; it's suited for complex business logic where the domain model is the core value.

// domain/entities/Order.js - Rich domain model
class Order {
  constructor(id, customerId, items, status = 'draft') {
    this.id = id;
    this.customerId = customerId;
    this.items = items;
    this.status = status;
    this.createdAt = new Date();
    this.domainEvents = [];
  }

  get total() {
    return this.items.reduce(
      (sum, item) => sum.add(item.price.multiply(item.quantity)),
      Money.zero()
    );
  }

  addItem(product, quantity) {
    if (this.status !== 'draft') {
      throw new DomainError('Cannot modify confirmed order');
    }
    const existing = this.items.find(i => i.productId === product.id);
    if (existing) {
      existing.quantity += quantity;
    } else {
      this.items.push(new OrderItem(product, quantity));
    }
    this.emit('ItemAdded', { productId: product.id, quantity });
  }

  confirm() {
    if (this.items.length === 0) {
      throw new DomainError('Cannot confirm empty order');
    }
    this.status = 'confirmed';
    this.confirmedAt = new Date();
    this.emit('OrderConfirmed', { orderId: this.id, total: this.total });
  }

  // Collect domain events; the application layer dispatches them after persistence
  emit(event, payload) {
    this.domainEvents.push({ event, payload, occurredAt: new Date() });
  }
}

// domain/valueObjects/Money.js - Immutable value object
class Money {
  constructor(amount, currency = 'USD') {
    this.amount = amount;
    this.currency = currency;
    Object.freeze(this);
  }

  add(other) {
    if (this.currency !== other.currency) throw new Error('Currency mismatch');
    return new Money(this.amount + other.amount, this.currency);
  }

  multiply(factor) {
    return new Money(this.amount * factor, this.currency);
  }

  static zero() {
    return new Money(0);
  }
}

Microservices with Express

Express works well for lightweight individual microservices, each with a focused responsibility, its own database, and explicit API contracts; communication happens via HTTP/gRPC or message queues, with an API gateway handling routing.

// order-service/app.js
const express = require('express');
const app = express();

// Service-specific routes only
app.use('/api/orders', orderRoutes);

// Inter-service communication
const axios = require('axios');
const INVENTORY_SERVICE = process.env.INVENTORY_SERVICE_URL;
const PAYMENT_SERVICE = process.env.PAYMENT_SERVICE_URL;

async function createOrder(orderData) {
  // Call other services
  const inventory = await axios.post(`${INVENTORY_SERVICE}/reserve`, {
    items: orderData.items
  });
  const payment = await axios.post(`${PAYMENT_SERVICE}/charge`, {
    userId: orderData.userId,
    amount: orderData.total
  });
  return orderRepo.create(orderData);
}

// Health check for orchestrator
app.get('/health', (req, res) => res.json({ status: 'healthy' }));

app.listen(process.env.PORT || 3001);
┌───────────────────────────────────────────────────────────────────┐
│                    MICROSERVICES ARCHITECTURE                     │
├───────────────────────────────────────────────────────────────────┤
│                                                                   │
│                        ┌──────────────┐                           │
│     Clients ──────────►│ API Gateway  │                           │
│                        │   (Express)  │                           │
│                        └──────┬───────┘                           │
│                               │                                   │
│          ┌────────────────────┼────────────────────┐              │
│          │                    │                    │              │
│          ▼                    ▼                    ▼              │
│   ┌─────────────┐     ┌─────────────┐     ┌─────────────┐         │
│   │   Users     │     │   Orders    │     │  Inventory  │         │
│   │  (Express)  │     │  (Express)  │     │  (Express)  │         │
│   └──────┬──────┘     └──────┬──────┘     └──────┬──────┘         │
│          │                   │                   │                │
│          ▼                   ▼                   ▼                │
│      [MongoDB]          [PostgreSQL]          [Redis]             │
│                                                                   │
│   ◄─────────────── Message Queue (RabbitMQ) ──────────────►      │
│                                                                   │
└───────────────────────────────────────────────────────────────────┘
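The API Gateway box in the diagram can itself be a small Express app that routes each path prefix to its owning service. A minimal sketch using http-proxy-middleware; the service URLs and environment variable names are illustrative:

// gateway/app.js - route requests to downstream services
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

app.use('/api/users', createProxyMiddleware({
  target: process.env.USERS_SERVICE_URL,     // e.g. http://users:3001
  changeOrigin: true
}));
app.use('/api/orders', createProxyMiddleware({
  target: process.env.ORDERS_SERVICE_URL,
  changeOrigin: true
}));
app.use('/api/inventory', createProxyMiddleware({
  target: process.env.INVENTORY_SERVICE_URL,
  changeOrigin: true
}));

app.listen(process.env.PORT || 3000);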

Event-Driven Architecture

Event-driven architecture decouples services through events published to message brokers; services react asynchronously to events they subscribe to, enabling loose coupling, scalability, and eventual consistency.

// Event emitter for in-process events
const EventEmitter = require('events');
const eventBus = new EventEmitter();

// For distributed events, use a message broker
const amqp = require('amqplib');

class EventPublisher {
  async getChannel() {
    // Lazily create a single channel bound to a topic exchange
    if (!this.channel) {
      const connection = await amqp.connect(process.env.RABBITMQ_URL);
      this.channel = await connection.createChannel();
      await this.channel.assertExchange('events', 'topic', { durable: true });
    }
    return this.channel;
  }

  async publish(event, data) {
    const channel = await this.getChannel();
    channel.publish('events', event, Buffer.from(JSON.stringify({
      event,
      data,
      timestamp: new Date().toISOString(),
      source: 'order-service'
    })));
  }
}

// Order service publishes events
app.post('/orders', async (req, res) => {
  const order = await orderRepo.create(req.body);

  // Publish event - don't wait for downstream processing
  await eventPublisher.publish('order.created', {
    orderId: order.id,
    userId: order.userId,
    items: order.items,
    total: order.total
  });

  res.status(201).json(order);
});

// Inventory service subscribes and reacts
eventSubscriber.subscribe('order.created', async (event) => {
  await inventoryService.reserveItems(event.data.orderId, event.data.items);
  await eventPublisher.publish('inventory.reserved', { orderId: event.data.orderId });
});

// Email service subscribes
eventSubscriber.subscribe('order.created', async (event) => {
  await emailService.sendOrderConfirmation(event.data.userId, event.data.orderId);
});
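The eventSubscriber used above isn't shown. A minimal sketch of one built on the same amqplib topic exchange; the per-service queue naming convention is illustrative:

const amqp = require('amqplib');

class EventSubscriber {
  constructor(serviceName) {
    this.serviceName = serviceName;
  }

  async subscribe(pattern, handler) {
    const connection = await amqp.connect(process.env.RABBITMQ_URL);
    const channel = await connection.createChannel();
    await channel.assertExchange('events', 'topic', { durable: true });

    // One durable queue per service + pattern, so each service gets its own copy of the event
    const { queue } = await channel.assertQueue(`${this.serviceName}.${pattern}`, { durable: true });
    await channel.bindQueue(queue, 'events', pattern);

    channel.consume(queue, async (msg) => {
      if (!msg) return;
      try {
        await handler(JSON.parse(msg.content.toString()));
        channel.ack(msg);
      } catch (err) {
        channel.nack(msg, false, false); // Reject without requeue on failure
      }
    });
  }
}

const eventSubscriber = new EventSubscriber('inventory-service');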

CQRS Pattern

Command Query Responsibility Segregation separates read and write operations into different models; writes go through command handlers with validation, while reads use optimized query models, enabling independent scaling and optimization.

// commands/createOrderCommand.js
class CreateOrderCommand {
  constructor(userId, items) {
    this.userId = userId;
    this.items = items;
    this.timestamp = new Date();
  }
}

// commandHandlers/orderCommandHandler.js
class OrderCommandHandler {
  async handle(command) {
    if (command instanceof CreateOrderCommand) {
      // Validation, business rules
      const order = new Order(command.userId, command.items);
      await this.writeRepo.save(order);

      // Publish event for read model update
      await this.eventBus.publish('OrderCreated', order);
      return order.id;
    }
  }
}

// queryHandlers/orderQueryHandler.js
class OrderQueryHandler {
  constructor(readDb) {
    this.readDb = readDb; // Optimized read database (denormalized)
  }

  async getOrderSummary(orderId) {
    // Read from optimized projection (assumes an order_items projection table)
    return this.readDb.query(`
      SELECT o.*, u.name AS customer_name,
             COUNT(i.id) AS item_count,
             SUM(i.total) AS total
      FROM order_summaries o
      JOIN users u ON o.user_id = u.id
      JOIN order_items i ON i.order_id = o.id
      WHERE o.id = $1
      GROUP BY o.id, u.name
    `, [orderId]);
  }
}

// Express routes separate commands and queries
app.post('/orders', async (req, res) => {
  const command = new CreateOrderCommand(req.user.id, req.body.items);
  const orderId = await commandHandler.handle(command);
  res.status(201).json({ orderId });
});

app.get('/orders/:id', async (req, res) => {
  const order = await queryHandler.getOrderSummary(req.params.id);
  res.json(order);
});
┌─────────────────────────────────────────────────────────────────┐
│                        CQRS PATTERN                             │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│                      ┌───────────────┐                          │
│                      │    Client     │                          │
│                      └───────┬───────┘                          │
│                              │                                  │
│              ┌───────────────┴───────────────┐                  │
│              │                               │                  │
│              ▼                               ▼                  │
│     ┌─────────────────┐             ┌─────────────────┐         │
│     │   Commands      │             │    Queries      │         │
│     │  POST/PUT/DEL   │             │      GET        │         │
│     └────────┬────────┘             └────────┬────────┘         │
│              │                               │                  │
│              ▼                               ▼                  │
│     ┌─────────────────┐             ┌─────────────────┐         │
│     │ Command Handler │             │  Query Handler  │         │
│     │  (Write Model)  │             │  (Read Model)   │         │
│     └────────┬────────┘             └────────┬────────┘         │
│              │                               │                  │
│              ▼                               ▼                  │
│     ┌─────────────────┐             ┌─────────────────┐         │
│     │   Write DB      │────Event───►│   Read DB       │         │
│     │  (Normalized)   │   Sourcing  │ (Denormalized)  │         │
│     └─────────────────┘             └─────────────────┘         │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
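The "Event Sourcing" arrow in the diagram implies something keeps the read database in sync. A minimal sketch of a projection handler that listens for OrderCreated and upserts the denormalized order_summaries row; the eventBus.subscribe call and the table/column names are illustrative assumptions:

// projections/orderSummaryProjection.js
class OrderSummaryProjection {
  constructor(readDb, eventBus) {
    this.readDb = readDb;
    // Rebuild the denormalized row whenever the write side emits an event
    eventBus.subscribe('OrderCreated', (order) => this.onOrderCreated(order));
  }

  async onOrderCreated(order) {
    await this.readDb.query(`
      INSERT INTO order_summaries (id, user_id, status, item_count, total, created_at)
      VALUES ($1, $2, $3, $4, $5, $6)
      ON CONFLICT (id) DO UPDATE
        SET status = EXCLUDED.status,
            item_count = EXCLUDED.item_count,
            total = EXCLUDED.total
    `, [
      order.id,
      order.userId,
      order.status,
      order.items.length,
      order.items.reduce((sum, i) => sum + i.price * i.quantity, 0),
      new Date()
    ]);
  }
}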

Middleware Patterns

Middleware patterns in Express include pipeline (sequential processing), decorator (wrapping handlers), chain of responsibility (pass or handle), and composite (combining multiple middlewares); mastering these enables powerful request processing architectures.

// 1. Pipeline Pattern - Sequential processing
const pipeline = (...middlewares) => {
  return (req, res, next) => {
    const runMiddleware = (index) => {
      if (index >= middlewares.length) return next();
      middlewares[index](req, res, (err) => {
        if (err) return next(err);
        runMiddleware(index + 1);
      });
    };
    runMiddleware(0);
  };
};

// 2. Decorator Pattern - Wrap handlers
const withLogging = (handler) => async (req, res, next) => {
  const start = Date.now();
  try {
    await handler(req, res, next);
    logger.info(`${req.method} ${req.path} - ${Date.now() - start}ms`);
  } catch (err) {
    logger.error(`${req.method} ${req.path} failed`, err);
    next(err);
  }
};

const withCache = (ttl) => (handler) => async (req, res, next) => {
  const cached = await cache.get(req.originalUrl);
  if (cached) return res.json(cached);

  const originalJson = res.json.bind(res);
  res.json = (data) => {
    cache.set(req.originalUrl, data, ttl);
    originalJson(data);
  };
  return handler(req, res, next);
};

app.get('/users', withLogging(withCache(60)(getUsers)));

// 3. Conditional Middleware
const unless = (condition, middleware) => (req, res, next) => {
  condition(req) ? next() : middleware(req, res, next);
};

app.use(unless(req => req.path === '/health', authMiddleware));

Plugin Architecture

Plugin architecture allows extending Express applications with modular, self-contained feature bundles that register their own routes, middleware, and services; this enables composable applications and third-party extensibility.

// plugins/analyticsPlugin.js
module.exports = function analyticsPlugin(app, options = {}) {
  const { trackingId, excludePaths = ['/health'] } = options;

  // Register middleware
  app.use((req, res, next) => {
    if (excludePaths.includes(req.path)) return next();
    analytics.track(trackingId, req);
    next();
  });

  // Register routes
  app.get('/analytics/events', authenticate, (req, res) => {
    res.json(analytics.getEvents());
  });

  // Return plugin API
  return {
    track: (event, data) => analytics.customEvent(event, data)
  };
};

// plugins/rateLimitPlugin.js
module.exports = function rateLimitPlugin(app, options) {
  app.use(rateLimit(options));
  app.get('/rate-limit/status', (req, res) => {
    res.json(rateLimit.getStatus(req.ip));
  });
};

// app.js - Plugin registration
const plugins = {};

function registerPlugin(name, plugin, options) {
  plugins[name] = plugin(app, options);
  console.log(`Plugin registered: ${name}`);
}

registerPlugin('analytics', require('./plugins/analyticsPlugin'), { trackingId: 'UA-12345' });
registerPlugin('rateLimit', require('./plugins/rateLimitPlugin'), {
  windowMs: 15 * 60 * 1000,
  max: 100
});

// Access plugin API
plugins.analytics.track('custom_event', { user: 'john' });

Real-time Features

WebSockets with Socket.io

Socket.io enables bidirectional real-time communication over WebSockets with automatic fallbacks, reconnection, and rooms/namespaces for organizing connections; it integrates seamlessly with Express sharing the same HTTP server.

const express = require('express');
const { createServer } = require('http');
const { Server } = require('socket.io');

const app = express();
const httpServer = createServer(app);
const io = new Server(httpServer, {
  cors: { origin: 'http://localhost:3000' }
});

// Connection handling
io.on('connection', (socket) => {
  console.log(`Client connected: ${socket.id}`);

  // Join room based on user
  socket.on('join', (roomId) => {
    socket.join(roomId);
    socket.to(roomId).emit('user_joined', { userId: socket.userId });
  });

  // Handle messages
  socket.on('message', (data) => {
    io.to(data.roomId).emit('message', { ...data, timestamp: new Date() });
  });

  socket.on('disconnect', () => {
    console.log(`Client disconnected: ${socket.id}`);
  });
});

// Emit from Express routes
app.post('/orders', async (req, res) => {
  const order = await orderService.create(req.body);
  io.to(`user:${order.userId}`).emit('order_created', order);
  res.json(order);
});

httpServer.listen(3000);

Server-Sent Events (SSE)

Server-Sent Events provide one-way real-time streaming from server to client over HTTP; simpler than WebSockets with automatic reconnection, ideal for live feeds, notifications, and progress updates.

app.get('/events', (req, res) => {
  // SSE headers
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.flushHeaders();

  // Send initial connection event
  res.write('event: connected\n');
  res.write(`data: ${JSON.stringify({ status: 'connected' })}\n\n`);

  // Periodic heartbeat
  const heartbeat = setInterval(() => {
    res.write(': heartbeat\n\n');
  }, 30000);

  // Subscribe to events
  const handler = (event) => {
    res.write(`event: ${event.type}\n`);
    res.write(`data: ${JSON.stringify(event.data)}\n`);
    res.write(`id: ${event.id}\n\n`);
  };
  eventEmitter.on('notification', handler);

  // Cleanup on disconnect
  req.on('close', () => {
    clearInterval(heartbeat);
    eventEmitter.off('notification', handler);
  });
});

// Client usage
const eventSource = new EventSource('/events');
eventSource.addEventListener('notification', (e) => {
  console.log('Notification:', JSON.parse(e.data));
});

Long Polling

Long polling simulates real-time by holding HTTP requests open until data is available or timeout occurs; fallback for environments without WebSocket support, with clients immediately reconnecting after each response.

// Server - Long polling endpoint
const waitingClients = new Map();

app.get('/poll/:userId', async (req, res) => {
  const { userId } = req.params;
  const timeout = parseInt(req.query.timeout) || 30000;

  // Check for pending messages first
  const pending = await messageQueue.get(userId);
  if (pending.length > 0) {
    return res.json({ messages: pending });
  }

  // Wait for new messages
  const timeoutId = setTimeout(() => {
    waitingClients.delete(userId);
    res.json({ messages: [], timeout: true });
  }, timeout);

  waitingClients.set(userId, { res, cleanup: () => clearTimeout(timeoutId) });

  req.on('close', () => {
    const client = waitingClients.get(userId);
    if (client) client.cleanup();
    waitingClients.delete(userId);
  });
});

// Push message to waiting client
async function pushToUser(userId, message) {
  const client = waitingClients.get(userId);
  if (client) {
    client.cleanup();
    waitingClients.delete(userId);
    client.res.json({ messages: [message] });
  } else {
    await messageQueue.add(userId, message);
  }
}
┌─────────────────────────────────────────────────────────────┐
│                    LONG POLLING FLOW                        │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  Client                          Server                     │
│    │                               │                        │
│    │──── GET /poll ───────────────►│                        │
│    │         (request held)        │                        │
│    │                               │ ◄── New message        │
│    │◄─── Response with data ───────│                        │
│    │                               │                        │
│    │──── GET /poll ───────────────►│  (immediate reconnect) │
│    │         (request held)        │                        │
│    │              ...              │                        │
│    │◄─── Timeout (empty) ──────────│  (30s timeout)         │
│    │                               │                        │
│    │──── GET /poll ───────────────►│  (reconnect)           │
│    │                               │                        │
└─────────────────────────────────────────────────────────────┘
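On the client side, the flow above is just a fetch loop that reconnects immediately after every response. A minimal sketch; the /poll URL matches the server route and everything else is illustrative:

async function startPolling(userId, onMessages) {
  // Reconnect immediately after each response (data or timeout)
  while (true) {
    try {
      const response = await fetch(`/poll/${userId}?timeout=30000`);
      const { messages } = await response.json();
      if (messages.length > 0) onMessages(messages);
    } catch (err) {
      // Network hiccup: back off briefly before retrying
      await new Promise(resolve => setTimeout(resolve, 2000));
    }
  }
}

startPolling('user-123', (messages) => console.log('Received:', messages));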

Real-time Notifications

Real-time notifications combine WebSocket connections with persistent storage for offline delivery, presence tracking, and read receipts; users receive instant alerts while online and queued notifications when reconnecting.

class NotificationService {
  constructor(io, redis, db) {
    this.io = io;
    this.redis = redis;
    this.db = db;
  }

  async send(userId, notification) {
    const enriched = {
      id: uuid(),
      ...notification,
      createdAt: new Date(),
      read: false
    };

    // Persist for history/offline delivery
    await this.db.notifications.insert(enriched);

    // Check if user is online
    const socketId = await this.redis.get(`user:${userId}:socket`);
    if (socketId) {
      // Send real-time
      this.io.to(socketId).emit('notification', enriched);
    } else {
      // Queue for later delivery
      await this.redis.lpush(`user:${userId}:pending`, JSON.stringify(enriched));
    }

    // Increment unread badge
    await this.redis.incr(`user:${userId}:unread`);
  }

  async getUnread(userId) {
    // Deliver pending notifications on connect
    const pending = await this.redis.lrange(`user:${userId}:pending`, 0, -1);
    await this.redis.del(`user:${userId}:pending`);
    return pending.map(p => JSON.parse(p));
  }

  async markRead(userId, notificationId) {
    await this.db.notifications.update(notificationId, { read: true });
    await this.redis.decr(`user:${userId}:unread`);
    this.io.to(`user:${userId}`).emit('notification:read', { id: notificationId });
  }
}
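The getUnread path only pays off if something calls it when a user reconnects. A minimal sketch of the Socket.io connection handler that registers the user's socket and flushes queued notifications; it assumes the auth middleware from the WebSocket Authentication section below has already set socket.userId, and that redis and notificationService instances are in scope:

io.on('connection', async (socket) => {
  const userId = socket.userId;

  // Map user -> socket so NotificationService.send() can find them
  await redis.set(`user:${userId}:socket`, socket.id);
  socket.join(`user:${userId}`);

  // Flush anything queued while the user was offline
  const pending = await notificationService.getUnread(userId);
  pending.forEach(n => socket.emit('notification', n));

  socket.on('notification:read', ({ id }) => notificationService.markRead(userId, id));

  socket.on('disconnect', async () => {
    await redis.del(`user:${userId}:socket`);
  });
});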

Chat Implementation

Chat implementation requires message persistence, room management, presence indicators, typing status, read receipts, and message history pagination; Socket.io rooms naturally map to chat channels.

// Chat server implementation
io.on('connection', (socket) => {
  const userId = socket.handshake.auth.userId;

  // Track online status
  redis.set(`user:${userId}:online`, 'true');
  redis.set(`user:${userId}:socket`, socket.id);

  socket.on('join:room', async (roomId) => {
    socket.join(roomId);

    // Load message history
    const history = await db.messages
      .find({ roomId })
      .sort({ createdAt: -1 })
      .limit(50);
    socket.emit('room:history', history.reverse());

    socket.to(roomId).emit('user:joined', { userId, roomId });
  });

  socket.on('message:send', async (data) => {
    const message = {
      id: uuid(),
      roomId: data.roomId,
      userId,
      content: data.content,
      createdAt: new Date()
    };
    await db.messages.insert(message);
    io.to(data.roomId).emit('message:new', message);
  });

  socket.on('typing:start', (roomId) => {
    socket.to(roomId).emit('typing:update', { userId, isTyping: true });
  });

  socket.on('typing:stop', (roomId) => {
    socket.to(roomId).emit('typing:update', { userId, isTyping: false });
  });

  socket.on('message:read', async ({ roomId, messageId }) => {
    await db.readReceipts.upsert({ roomId, userId, lastRead: messageId });
    socket.to(roomId).emit('read:update', { userId, messageId });
  });

  socket.on('disconnect', () => {
    redis.del(`user:${userId}:online`);
    redis.del(`user:${userId}:socket`);
    socket.broadcast.emit('user:offline', { userId });
  });
});

Live Updates

Live updates push data changes to subscribed clients immediately, implementing pub/sub patterns for dashboards, live scores, stock tickers, or collaborative editing; clients subscribe to specific data channels and receive patches or full updates.

class LiveUpdateService {
  constructor(io, redis) {
    this.io = io;
    this.redis = redis;
    this.subscriptions = new Map();

    // Subscribe to Redis pub/sub for multi-server support
    this.subscriber = redis.duplicate();
    this.subscriber.subscribe('data:updates');
    this.subscriber.on('message', (channel, message) => {
      const update = JSON.parse(message);
      this.broadcast(update.channel, update.data);
    });
  }

  subscribe(socket, channel) {
    socket.join(`live:${channel}`);

    // Track subscriptions for cleanup
    if (!this.subscriptions.has(socket.id)) {
      this.subscriptions.set(socket.id, new Set());
    }
    this.subscriptions.get(socket.id).add(channel);
  }

  broadcast(channel, data) {
    this.io.to(`live:${channel}`).emit('update', { channel, data });
  }

  // Publish update (works across multiple servers via Redis)
  async publish(channel, data) {
    await this.redis.publish('data:updates', JSON.stringify({ channel, data }));
  }
}

// Usage
io.on('connection', (socket) => {
  socket.on('subscribe', (channels) => {
    channels.forEach(ch => liveUpdates.subscribe(socket, ch));
  });
});

// Trigger updates from Express routes
app.put('/stocks/:symbol', async (req, res) => {
  const stock = await stockService.updatePrice(req.params.symbol, req.body.price);
  await liveUpdates.publish(`stock:${req.params.symbol}`, stock);
  res.json(stock);
});

WebSocket Authentication

WebSocket authentication validates connections during handshake using JWT tokens, session cookies, or custom auth; middleware verifies credentials before allowing connection, with periodic revalidation for long-lived connections.

const { Server } = require('socket.io');
const jwt = require('jsonwebtoken');

const io = new Server(httpServer, {
  cors: { origin: 'http://localhost:3000', credentials: true }
});

// Authentication middleware
io.use(async (socket, next) => {
  try {
    // Option 1: Token in auth object
    const token = socket.handshake.auth.token;
    // Option 2: Token in query string (for polling fallback)
    // const token = socket.handshake.query.token;
    // Option 3: Cookie
    // const token = parseCookies(socket.handshake.headers.cookie).jwt;

    if (!token) {
      return next(new Error('Authentication required'));
    }

    const decoded = jwt.verify(token, process.env.JWT_SECRET);
    socket.userId = decoded.userId;
    socket.role = decoded.role;

    // Check if user is still valid (not banned, etc.)
    const user = await db.users.findById(decoded.userId);
    if (!user || user.banned) {
      return next(new Error('User not authorized'));
    }

    next();
  } catch (err) {
    next(new Error('Invalid token'));
  }
});

// Namespace-specific auth
const adminNamespace = io.of('/admin');
adminNamespace.use((socket, next) => {
  if (socket.role !== 'admin') {
    return next(new Error('Admin access required'));
  }
  next();
});

// Client connection with auth
const socket = io('http://localhost:3000', {
  auth: { token: localStorage.getItem('jwt') }
});

socket.on('connect_error', (err) => {
  if (err.message === 'Authentication required') {
    // Redirect to login
  }
});

Scaling WebSockets

Scaling WebSockets across multiple servers requires sticky sessions or a pub/sub layer (Redis adapter) so messages reach all clients regardless of which server they're connected to; this enables horizontal scaling while maintaining real-time functionality.

const { createAdapter } = require('@socket.io/redis-adapter');
const { createClient } = require('redis');

const pubClient = createClient({ url: process.env.REDIS_URL });
const subClient = pubClient.duplicate();
await Promise.all([pubClient.connect(), subClient.connect()]);

// Redis adapter for multi-server support
io.adapter(createAdapter(pubClient, subClient));

// Now io.emit() works across all servers!
io.emit('announcement', { message: 'System update in 5 minutes' });

// Room broadcasts work across servers too
io.to('room:123').emit('message', { text: 'Hello room!' });
┌─────────────────────────────────────────────────────────────────────────┐
│              SCALING WEBSOCKETS WITH REDIS ADAPTER                      │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│    Clients          Load Balancer              Servers                  │
│                     (Sticky Sessions)                                   │
│                                                                         │
│  ┌─────────┐            │              ┌─────────────────────┐          │
│  │Client A ├────────────┼─────────────►│  Server 1 (Express) │          │
│  └─────────┘            │              │  + Socket.io        │          │
│                         │              └──────────┬──────────┘          │
│  ┌─────────┐            │                         │                     │
│  │Client B ├────────────┼─────────────►┌──────────┴──────────┐          │
│  └─────────┘            │              │                     │          │
│                         │              │   Redis Pub/Sub     │          │
│  ┌─────────┐            │              │   (Message Broker)  │          │
│  │Client C ├────────────┼─────────────►│                     │          │
│  └─────────┘            │              └──────────┬──────────┘          │
│                         │                         │                     │
│  ┌─────────┐            │              ┌──────────┴──────────┐          │
│  │Client D ├────────────┼─────────────►│  Server 2 (Express) │          │
│  └─────────┘            │              │  + Socket.io        │          │
│                                        └─────────────────────┘          │
│                                                                         │
│  Message from Server 1 → Redis → Server 2 → Client D                    │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
# docker-compose.yml for scaled setup
services:
  app:
    build: .
    deploy:
      replicas: 4
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis

  redis:
    image: redis:7-alpine

  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - app
# nginx.conf - Sticky sessions for WebSocket
upstream socketio {
    ip_hash;  # Sticky sessions by client IP
    server app:3000;
}

server {
    location /socket.io/ {
        proxy_pass http://socketio;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

This concludes the Advanced and Senior sections covering Testing, Logging & Monitoring, Architecture Patterns, and Real-time Features in Express.js. These patterns form the foundation for building production-grade, scalable applications.