When Middy Isn't Enough - Building Custom Lambda Middleware Frameworks
Discover the production challenges that pushed us beyond Middy's limits and how we built a custom middleware framework optimized for performance and scale
In Part 1, we explored how Middy transforms Lambda development with clean middleware patterns. But what happens when you're managing 50+ Lambda functions and Middy starts showing its limitations?
That's exactly where we found ourselves during a major platform migration. What started as a love affair with Middy's elegance became a story of scaling challenges, performance bottlenecks, and ultimately, the decision to build our own middleware framework.
The Breaking Points - War Stories from Production#
The Multi-Tenant Validation Crisis#
Our fintech platform served multiple clients, each with completely different validation rules. Customer A required UK postal codes, Customer B needed German VAT validation, and Customer C had entirely custom business rules.
Middy's static middleware approach hit a wall:
// The problem with Middy - middleware is configured at initialization time
const schema = getSchemaForTenant(tenantId) // We need this resolved per request!

export const handler = middy(businessLogic)
  .use(validator({ eventSchema: schema })) // But this configuration is fixed at module load
We needed dynamic schema generation at runtime, but Middy configures middleware at initialization time. The workaround? Nasty conditional logic scattered throughout our handlers, defeating the entire purpose of clean middleware separation.
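For illustration, the workaround looked roughly like this (a simplified sketch; `validateBody` is a hypothetical helper standing in for our real validation code):

// What "clean" handlers started to look like once tenant rules leaked back in
const businessLogic = async (event: any) => {
  const tenantId = event.pathParameters?.tenantId

  // Per-tenant validation branching repeated in every handler
  const schema = await getSchemaForTenant(tenantId)
  const errors = validateBody(event.body, schema) // hypothetical helper
  if (errors.length > 0) {
    return { statusCode: 400, body: JSON.stringify({ errors }) }
  }

  // ...actual business logic finally starts here
}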
Business Impact: Three days of development delay and a custom validation layer we didn't want to maintain.
The Bundle Size Nightmare#
As our middleware stack grew to 8 different Middy packages, something alarming happened during our quarterly performance review:
Performance Metrics:
- Bundle size: 2MB (up from 400KB)
- Cold start time: 1.2 seconds (target: <500ms)
- Memory usage: 128MB baseline
- First response time: 1.8 seconds
For our high-frequency trading API, this was catastrophic. Every millisecond of latency translated to lost revenue. The business team wasn't pleased when they discovered our "elegant" middleware was costing us customers.
The Team Consistency Problem#
With 12 developers working on different services, middleware usage became wildly inconsistent:
// Developer A's approach
export const handler = middy(businessLogic)
  .use(httpJsonBodyParser())
  .use(validator())
  .use(httpErrorHandler())

// Developer B's approach (order is different!)
export const handler = middy(businessLogic)
  .use(httpErrorHandler()) // Error handling first?
  .use(httpJsonBodyParser())
  .use(validator())

// Developer C's approach
export const handler = middy(businessLogic)
  .use(customAuth()) // Team-specific middleware
  .use(httpJsonBodyParser())
  // No validator at all!
Result: Production incidents, debugging nightmares, and error handling that worked differently across services. We needed enforcement, not just conventions.
Designing a Custom Middleware Framework#
These pain points forced us to rethink middleware entirely. We designed our custom framework around three core principles:
1. Performance-First Architecture#
We built a lightweight context system and pre-compiled middleware chains for maximum speed:
interface LightweightContext {
  event: any
  context: any
  response?: any
  metadata: Map<string, any> // Memory-efficient storage
  startTime: number
}

type MiddlewareHandler = (
  ctx: LightweightContext,
  next: () => Promise<void>
) => Promise<void>

class CustomMiddlewareEngine {
  private middlewares: MiddlewareHandler[] = []
  private isCompiled = false
  private compiledChain?: (ctx: LightweightContext) => Promise<void>

  use(middleware: MiddlewareHandler): this {
    if (this.isCompiled) {
      throw new Error('Cannot add middleware after compilation')
    }
    this.middlewares.push(middleware)
    return this
  }

  // Pre-compile middleware chain for performance
  private compile(): void {
    const chain = this.middlewares.reduceRight<(ctx: LightweightContext) => Promise<void>>(
      (next, middleware) => (ctx) => middleware(ctx, () => next(ctx)),
      async () => {} // the innermost next() is a no-op
    )
    this.compiledChain = chain
    this.isCompiled = true
  }

  async execute(event: any, context: any): Promise<any> {
    if (!this.isCompiled) this.compile()

    const ctx: LightweightContext = {
      event,
      context,
      metadata: new Map(),
      startTime: Date.now()
    }

    try {
      await this.compiledChain!(ctx)
      return ctx.response
    } catch (error) {
      return this.handleError(error, ctx)
    }
  }

  // Default behaviour: annotate the context and rethrow, leaving response
  // shaping to an error-handling middleware or the caller
  private handleError(error: unknown, ctx: LightweightContext): never {
    ctx.metadata.set('error', error)
    throw error
  }
}
Key optimization: We pre-compile the middleware chain instead of building it on every request. This single change cut our middleware overhead by 40%.
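Wiring the engine into a Lambda entry point is then a thin wrapper. A minimal sketch (the specific middleware and `schema` here are placeholders):

// Build the chain once per container, outside the handler; the chain is
// compiled lazily on the first invocation and reused for every warm request
const engine = new CustomMiddlewareEngine()
  .use(jsonParser())
  .use(requestValidator(schema))
  .use(wrapBusinessLogic(businessLogic))

export const handler = (event: any, context: any) => engine.execute(event, context)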
2. Dynamic Configuration Support#
For our multi-tenant validation problem, we built dynamic middleware that resolves configuration at runtime:
interface DynamicValidationOptions {
  getSchema: (ctx: LightweightContext) => Promise<any>
  cacheKey?: (ctx: LightweightContext) => string
}

const dynamicValidator = (options: DynamicValidationOptions): MiddlewareHandler => {
  // Per-container schema cache, reused across warm invocations
  const schemaCache = new Map<string, any>()

  return async (ctx, next) => {
    let schema: any

    if (options.cacheKey) {
      const key = options.cacheKey(ctx)
      schema = schemaCache.get(key)
      if (!schema) {
        schema = await options.getSchema(ctx)
        schemaCache.set(key, schema)
      }
    } else {
      schema = await options.getSchema(ctx)
    }

    // validateAgainstSchema and ValidationError are application-level helpers
    const isValid = validateAgainstSchema(ctx.event, schema)
    if (!isValid) {
      throw new ValidationError('Invalid request data')
    }

    await next()
  }
}

// Usage with multi-tenant support
const handler = new CustomMiddlewareEngine()
  .use(dynamicValidator({
    getSchema: async (ctx) => {
      const tenantId = ctx.event.pathParameters?.tenantId
      return await getTenantSchema(tenantId)
    },
    cacheKey: (ctx) => `tenant:${ctx.event.pathParameters?.tenantId}`
  }))
This solved our multi-tenant validation while maintaining performance through intelligent caching.
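The `validateAgainstSchema` helper referenced above is framework-agnostic. One possible implementation, sketched here under the assumption that schemas are JSON Schema and validation is done with the Ajv library:

import Ajv, { ValidateFunction } from 'ajv'

const ajv = new Ajv({ allErrors: true })
const compiledSchemas = new WeakMap<object, ValidateFunction>()

// Compile each schema once and reuse the compiled validator across invocations
const validateAgainstSchema = (event: any, schema: object): boolean => {
  let validate = compiledSchemas.get(schema)
  if (!validate) {
    validate = ajv.compile(schema)
    compiledSchemas.set(schema, validate)
  }
  return validate(event)
}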
3. Team Convention Enforcement#
Instead of hoping developers follow conventions, we built enforcement into the framework:
interface TeamStandards {
  requiredMiddlewares: string[]
  forbiddenMiddlewares?: string[]
  middlewareOrder: string[]
}

const teamStandardsEnforcer = (standards: TeamStandards): MiddlewareHandler => {
  return async (ctx, next) => {
    // Assumes the registration side records middleware names into ctx.metadata
    const appliedMiddlewares = ctx.metadata.get('middlewares') || []

    // Validate required middlewares are present
    for (const required of standards.requiredMiddlewares) {
      if (!appliedMiddlewares.includes(required)) {
        throw new Error(`Required middleware missing: ${required}`)
      }
    }

    await next()
  }
}

// Create standardized handler factory
const createStandardHandler = (businessLogic: Function) => {
  return new CustomMiddlewareEngine()
    .use(teamStandardsEnforcer({
      requiredMiddlewares: ['auth', 'validation', 'errorHandler'],
      middlewareOrder: ['auth', 'validation', 'businessLogic', 'errorHandler']
    }))
    .use(authMiddleware())
    .use(validationMiddleware())
    .use(wrapBusinessLogic(businessLogic))
    .use(errorHandlerMiddleware())
}
Now our team couldn't accidentally skip critical middleware or mess up the ordering. The framework enforced our standards.
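The one piece this leaves open is how `ctx.metadata` learns which middleware were registered. A minimal sketch of one way to close the loop, using a small hypothetical `buildChain` helper around the engine (not part of the framework shown above):

const buildChain = (entries: Array<[name: string, middleware: MiddlewareHandler]>) => {
  const names = entries.map(([name]) => name)
  const engine = new CustomMiddlewareEngine()

  // Bookkeeping middleware runs first and publishes the registered names
  engine.use(async (ctx, next) => {
    ctx.metadata.set('middlewares', names)
    await next()
  })

  for (const [, middleware] of entries) {
    engine.use(middleware)
  }
  return engine
}

With a helper like this, the standardized factory registers middleware by name, and the enforcer fails fast at runtime whenever a required entry is missing.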
Performance Benchmarking - The Numbers#
We ran comprehensive benchmarks comparing Middy with our custom framework using identical functionality:
Test Scenario:
- Simple HTTP API with auth, validation, error handling
- 1000 cold starts, 10,000 warm requests
- Node.js 18 runtime, 1024MB memory
Results:
| Metric | Middy + 5 Middlewares | Custom Framework | Improvement |
|---|---|---|---|
| Bundle Size | 1.8MB | 0.6MB | 67% smaller |
| Cold Start | 980ms | 320ms | 67% faster |
| Warm Request | 45ms | 28ms | 38% faster |
| Memory Usage | 128MB | 94MB | 27% less |
The numbers spoke for themselves. Our custom framework wasn't just faster—it was dramatically faster.
Code Comparison#
Middy Approach:
export const handler = middy(businessLogic)
  .use(httpJsonBodyParser())
  .use(httpCors({ origin: true }))
  .use(validator({ eventSchema: schema }))
  .use(httpErrorHandler())
  .use(httpSecurityHeaders())
Custom Framework:
const handler = new CustomMiddlewareEngine()
  .use(jsonParser())
  .use(corsHandler({ origin: true }))
  .use(requestValidator(schema))
  .use(businessLogicWrapper(businessLogic))
  .use(errorHandler())
  .use(securityHeaders())
Similar API, drastically different performance characteristics.
Real-World Custom Middleware Examples#
Here are some production middleware we built that are hard to express cleanly within Middy's before/after/onError phases:
1. Circuit Breaker with Exponential Backoff#
interface CircuitBreakerOptions {
  failureThreshold: number
  recoveryTimeout: number
  monitor?: (state: 'open' | 'closed' | 'half-open') => void
}

const circuitBreaker = (options: CircuitBreakerOptions): MiddlewareHandler => {
  let failures = 0
  let lastFailure = 0
  let state: 'open' | 'closed' | 'half-open' = 'closed'

  return async (ctx, next) => {
    const now = Date.now()

    // Check if we should attempt recovery
    if (state === 'open' && now - lastFailure > options.recoveryTimeout) {
      state = 'half-open'
      options.monitor?.(state)
    }

    // Block requests if circuit is open
    if (state === 'open') {
      throw new Error('Circuit breaker is open - service temporarily unavailable')
    }

    try {
      await next()

      // Success - reset failures
      if (failures > 0) {
        failures = 0
        state = 'closed'
        options.monitor?.(state)
      }
    } catch (error) {
      failures++
      lastFailure = now

      if (failures >= options.failureThreshold) {
        state = 'open'
        options.monitor?.(state)
      }

      throw error
    }
  }
}
This middleware automatically protects downstream services from cascading failures—something that would require significant workarounds in Middy.
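A usage sketch, assuming a hypothetical `callPaymentProvider` middleware that wraps the fragile downstream call, with state changes emitted as structured logs for alerting:

const handler = new CustomMiddlewareEngine()
  .use(circuitBreaker({
    failureThreshold: 5,
    recoveryTimeout: 30_000, // wait 30s before letting a probe request through
    monitor: (state) => console.log(JSON.stringify({ metric: 'circuit_state', state }))
  }))
  .use(callPaymentProvider()) // hypothetical middleware invoking the downstream service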
2. Smart Caching with Invalidation#
interface CacheOptions {
  ttl: number
  keyGenerator: (ctx: LightweightContext) => string
  shouldCache: (ctx: LightweightContext) => boolean
  invalidateOn?: string[]
}

const smartCache = (options: CacheOptions): MiddlewareHandler => {
  const cache = new Map<string, { data: any, expires: number }>()

  return async (ctx, next) => {
    const cacheKey = options.keyGenerator(ctx)
    const now = Date.now()

    // Check cache hit
    if (options.shouldCache(ctx)) {
      const cached = cache.get(cacheKey)
      if (cached && cached.expires > now) {
        ctx.response = cached.data
        ctx.metadata.set('cache', 'hit')
        return // Skip remaining middleware
      }
    }

    await next()

    // Cache the response
    if (ctx.response && options.shouldCache(ctx)) {
      cache.set(cacheKey, {
        data: ctx.response,
        expires: now + options.ttl
      })
      ctx.metadata.set('cache', 'miss')
    }
  }
}

// Usage with intelligent caching
const handler = new CustomMiddlewareEngine()
  .use(smartCache({
    ttl: 5 * 60 * 1000, // 5 minutes
    keyGenerator: (ctx) => `user:${ctx.event.pathParameters?.userId}`,
    shouldCache: (ctx) => ctx.event.httpMethod === 'GET'
  }))
  .use(businessLogicWrapper(getUserProfile))
This middleware short-circuits the rest of the pipeline on a cache hit, a significant performance win that Middy's linear phase model makes much harder to achieve cleanly.
Migration Strategy - From Middy to Custom#
Moving from Middy to our custom framework in production required a careful, phased approach:
Phase 1: Hybrid Approach#
// Mix custom middleware with existing Middy packages in a single chain
// (during this phase, "custom" middleware still exposes Middy's middleware shape)
export const handler = middy(businessLogic)
  .use(customPerformanceMiddleware()) // Our custom
  .use(httpJsonBodyParser())          // Middy
  .use(customValidation())            // Our custom
  .use(httpErrorHandler())            // Middy
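For illustration, `customPerformanceMiddleware` in this phase is just an ordinary Middy middleware object. The sketch below assumes Middy's `{ before, after }` object shape and is not the exact middleware we shipped:

const customPerformanceMiddleware = () => {
  let startTime = 0
  return {
    before: async () => {
      startTime = Date.now()
    },
    after: async () => {
      // Structured log line that existing dashboards can pick up
      console.log(JSON.stringify({ metric: 'handler_latency_ms', value: Date.now() - startTime }))
    }
  }
}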
Phase 2: Feature Parity#
// Build custom equivalents for all Middy middleware
const customJsonParser = (): MiddlewareHandler => {
return async (ctx, next) => {
if (ctx.event.body && typeof ctx.event.body === 'string') {
try {
ctx.event.body = JSON.parse(ctx.event.body)
} catch (error) {
throw new Error('Invalid JSON body')
}
}
await next()
}
}
Phase 3: Performance Optimization#
Once all middleware were ported, we optimized for our specific use cases, achieving the 67% performance improvement shown earlier.
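As one illustration of the kind of change this phase involved (a sketch, not our actual diff): heavyweight SDK clients can be created lazily inside middleware, so routes that never touch them pay nothing at cold start.

import { DynamoDBClient } from '@aws-sdk/client-dynamodb'

let dynamo: DynamoDBClient | undefined

// Create the client on first use and reuse it across warm invocations
const withDynamo = (): MiddlewareHandler => async (ctx, next) => {
  if (!dynamo) dynamo = new DynamoDBClient({})
  ctx.metadata.set('dynamo', dynamo)
  await next()
}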
Phase 4: Team Training & Standards#
The final phase involved training the team and establishing new development standards around our custom framework.
When to Choose Custom vs Middy#
Based on our experience, here's the decision matrix:
Choose Middy When:#
- ✅ Team is new to middleware patterns
- ✅ Standard use cases (HTTP APIs, basic validation)
- ✅ Fast development is the priority
- ✅ Bundle size <1MB is acceptable
- ✅ Cold start <1s is acceptable
- ✅ Limited development resources for custom solutions
Choose Custom Framework When:#
- ✅ Performance is critical (<500ms cold start required)
- ✅ Complex business rules requiring dynamic behavior
- ✅ Team has middleware expertise
- ✅ Specific compliance/security requirements
- ✅ Large-scale applications (50+ functions)
- ✅ Need for team standardization and enforcement
Hybrid Approach When:#
- ✅ Migration phase between solutions
- ✅ Different performance requirements per function
- ✅ Learning custom patterns while maintaining productivity
Production Lessons Learned#
1. Performance vs Developer Experience#
Our custom framework was 3x faster but took 2x longer to develop. Evaluate this trade-off based on your business requirements and team capabilities.
2. Team Adoption is Critical#
The best framework is worthless if your team can't adopt it. Change management and training are as important as the technical solution.
3. Maintenance Overhead is Real#
Custom solutions mean custom maintenance. Middy's community support has real value—factor this into your decision.
4. Gradual Migration is Safer#
Big-bang migrations are risky. The gradual, phased approach proved much safer and allowed us to validate our approach incrementally.
Testing Custom Middleware#
Testing our custom framework required a different approach:
describe('Custom Middleware Framework', () => {
  test('should execute middleware chain in order', async () => {
    const executionOrder: string[] = []

    const middleware1 = async (ctx: any, next: Function) => {
      executionOrder.push('before-1')
      await next()
      executionOrder.push('after-1')
    }

    const middleware2 = async (ctx: any, next: Function) => {
      executionOrder.push('before-2')
      await next()
      executionOrder.push('after-2')
    }

    const engine = new CustomMiddlewareEngine()
      .use(middleware1)
      .use(middleware2)

    await engine.execute({}, {})

    expect(executionOrder).toEqual([
      'before-1', 'before-2', 'after-2', 'after-1'
    ])
  })

  test('should handle circuit breaker correctly', async () => {
    const failingMiddleware = async () => {
      throw new Error('Service unavailable')
    }

    const engine = new CustomMiddlewareEngine()
      .use(circuitBreaker({ failureThreshold: 2, recoveryTimeout: 1000 }))
      .use(failingMiddleware)

    // First failure
    await expect(engine.execute({}, {})).rejects.toThrow('Service unavailable')

    // Second failure - should open circuit
    await expect(engine.execute({}, {})).rejects.toThrow('Service unavailable')

    // Third request - should be blocked by circuit breaker
    await expect(engine.execute({}, {})).rejects.toThrow('Circuit breaker is open')
  })
})
Production Checklist#
Before taking a custom middleware framework to production:
- Performance benchmarks documented and validated
- Error handling comprehensive across all scenarios
- Monitoring and alerting integrated
- Team training completed with hands-on exercises
- Documentation up-to-date and accessible
- Rollback plan tested and ready
- A/B testing capability implemented
- Security review passed with penetration testing
- Load testing completed under realistic conditions
The Bottom Line#
Middy is an excellent starting point for most Lambda applications. But when you're operating at scale, dealing with complex business requirements, or facing strict performance constraints, a custom middleware framework can be transformative.
Key Takeaways:
- Start with Middy - It's proven, battle-tested, and great for learning middleware patterns
- Measure before optimizing - Let performance data drive your decisions, not assumptions
- Team consistency matters more than framework choice - Standards and enforcement are critical
- Custom isn't always better - Factor in maintenance costs and team expertise
- Migration requires careful planning - Gradual approaches reduce risk and allow validation
Our journey from Middy to a custom framework taught us that sometimes the best solution is the one you build yourself—but only when you have compelling business reasons and the team expertise to execute it well.
The middleware patterns we learned from Middy became the foundation for something even better suited to our specific needs. Whether you stick with Middy or build your own, the principles of clean middleware design will serve you well in your serverless journey.