Server-Side HTTP Clients: From Native Fetch to Effect, A Production Perspective
A comprehensive comparison of Node.js HTTP clients including performance benchmarks, circuit breaker patterns, and real production experiences
The HTTP Client Challenge
Microservices architectures often start simple - services communicating over HTTP without much thought about edge cases. Then high-traffic events expose the limitations. Payment services might start timing out under load, hanging for 30 seconds per request.
A common issue: using native fetch without proper timeout handling. Those hanging connections can consume Lambda concurrent executions, impacting infrastructure costs.
This experience illustrates that choosing an HTTP client isn't just about features - it's about understanding what breaks under production load.
Why Server-Side HTTP Clients Matter More Than You Think
In the browser, HTTP clients are straightforward. You make a request, handle the response, done. Server-side? That's where things get interesting:
- Connection pooling becomes critical when you're making thousands of requests per second
- Memory leaks can slowly kill your Node.js process over days
- Circuit breakers mean the difference between graceful degradation and cascading failures
- Retry strategies determine whether a network blip becomes an outage
Let's dive into each major player and see how they handle production reality.
Native Fetch: The Default That's Not Always Enough
Since Node.js 18, we've had native fetch. It's tempting to use it everywhere - zero dependencies, standard API, what's not to love?
Where Native Fetch Shines
- Zero dependencies: Your docker images stay lean
- Standard API: Same code works in browser, Node.js, Deno, Bun
- Modern: Built on undici under the hood (since Node.js 18)
Where It Falls Short
Here's what bit us in production:
The AbortController only cancels the JavaScript side. The underlying TCP connection? That might stick around, slowly eating your connection pool.
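A minimal guard, assuming Node.js 18+ where `AbortSignal.timeout()` is available (the helper name is ours, not a fetch API):

```javascript
// Sketch: a hard deadline around native fetch (Node.js 18+).
// Note: aborting settles the promise, but the underlying socket
// may still linger in undici's pool for a while.
async function fetchWithTimeout(url, { timeoutMs = 5000, ...init } = {}) {
  try {
    return await fetch(url, { ...init, signal: AbortSignal.timeout(timeoutMs) });
  } catch (err) {
    if (err.name === 'TimeoutError' || err.name === 'AbortError') {
      throw new Error(`Request to ${url} timed out after ${timeoutMs}ms`);
    }
    throw err;
  }
}
```

This caps how long your code waits, but it doesn't fix the connection-pool problem described above.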
Production Verdict
Use native fetch for:
- Simple scripts and CLI tools
- Prototypes and POCs
- When you control both client and server
Avoid it when:
- You need retries, circuit breakers, or connection pooling
- Making thousands of requests per second
- Integrating with flaky third-party APIs
Axios: The Swiss Army Knife
Axios remains the most popular choice, with over 65 million weekly downloads. There's a reason it's everywhere.
Memory Leak Detection
Axios can leak memory when handling 502 errors, often due to issues in the follow-redirects dependency. Here's how to identify this pattern:
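One low-tech way to spot it, sketched with illustrative names (this is plain `process.memoryUsage()` sampling, not an Axios API):

```javascript
// Sketch: a simple heap-growth probe. A steady upward trend in heapUsed
// across GC cycles while traffic is flat is the signature of this leak.
function createHeapSampler() {
  const samples = [];
  return {
    sample() {
      const s = { at: Date.now(), heapUsed: process.memoryUsage().heapUsed };
      samples.push(s);
      return s;
    },
    // Rough trend: bytes of growth per sample over the window
    growthPerSample() {
      if (samples.length < 2) return 0;
      const first = samples[0];
      const last = samples[samples.length - 1];
      return (last.heapUsed - first.heapUsed) / (samples.length - 1);
    },
  };
}
```

Run the sampler on an interval in a canary instance and alert when the trend stays positive across several windows.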
Connection Pooling and Advanced Configuration
Plain Axios opens a new connection per request. At scale, this kills your server:
Production Verdict
Axios is still solid for:
- Complex request/response transformations
- When you need extensive middleware
- Teams already familiar with it
But watch out for:
- Bundle size (~1.8 MB unpacked on npm, ~13 KB min+gzipped in a production bundle)
- Memory leaks with error responses
- Connection pooling requiring extra setup
Undici: The Performance Champion
Undici is what powers Node.js fetch internally. But using it directly gives you superpowers.
The Performance Numbers
We ran benchmarks on our payment service at 1,000 concurrent requests.
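The exact script is environment-specific, but a minimal harness looks like this (the URL and counts are placeholders):

```javascript
// Sketch: fire N requests at a fixed concurrency, report latency percentiles.
async function bench(url, { concurrency = 100, total = 1000 } = {}) {
  const latencies = [];
  let sent = 0;
  async function worker() {
    while (sent < total) {
      sent++;
      const start = performance.now();
      await fetch(url).then((r) => r.arrayBuffer()); // consume the body
      latencies.push(performance.now() - start);
    }
  }
  await Promise.all(Array.from({ length: concurrency }, worker));
  latencies.sort((a, b) => a - b);
  const pct = (p) => latencies[Math.floor((latencies.length - 1) * p)];
  return { p50: pct(0.5), p95: pct(0.95), p99: pct(0.99) };
}
```

Swap `fetch` for the client under test and compare percentiles, not averages; tail latency is where clients differ most.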
HTTP/2 Support
Undici has HTTP/2 support, but it needs to be explicitly enabled:
HTTP/2 brings significant performance benefits for multiple parallel requests:
Advanced Configuration: Proxy and Certificates
Undici provides extensive proxy and certificate management for production environments:
Production Verdict
Undici excels at:
- High-throughput microservices
- When every millisecond counts
- Memory-constrained environments
Skip it if:
- Your team prefers higher-level abstractions
- You're migrating from Axios (the request API is quite different)
- You need extensive middleware ecosystem
Effect: The Functional Powerhouse
Effect takes a completely different approach. Instead of promises, you get composable effects with built-in error handling.
The Learning Curve Story
We introduced Effect to one team. Week 1: confusion. Week 2: frustration. Week 4: "We're never going back." The type-safe error handling eliminated an entire class of bugs.
Production Verdict
Effect is perfect for:
- Complex business logic with multiple failure modes
- Teams comfortable with functional programming
- When type safety is critical
Think twice if:
- Your team is new to FP concepts
- You need to onboard juniors quickly
- It's a simple CRUD service
The Others: Quick Rounds
Got: The Node.js Specialist
Great for Node.js-only projects. Built-in pagination, streaming, and DNS caching support.
Ky: The Lightweight Fetch Wrapper
Perfect when you want fetch with batteries included but minimal overhead.
SuperAgent: Still Alive
Its plugin system is powerful, but Axios won the popularity contest.
Hono: The Edge Runtime Champion
Hono ships a lightweight typed client (hc) alongside its framework, making it a great fit for Cloudflare Workers, Vercel Edge Functions, and other edge runtimes where bundle size and cold start time matter most.
Enterprise Environment: Proxies, Certificates, and Corporate Networks
Working in enterprise? Here's what you really need to know:
Corporate Proxy Debugging
Common "connection refused" errors in enterprise environments often stem from:
- Corporate proxy requiring NTLM authentication
- Proxy configuration varying between environments
- Internal APIs being incorrectly routed through the proxy
- Proxy stripping certain headers
Solution: A smart client that auto-detects internal vs external URLs:
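A sketch of the detection half (the matching rules mirror common NO_PROXY conventions; the names are illustrative):

```javascript
// Decide whether a URL should bypass the corporate proxy based on a
// comma-separated NO_PROXY-style list of hosts and domain suffixes.
function shouldBypassProxy(url, noProxy = process.env.NO_PROXY ?? '') {
  const host = new URL(url).hostname;
  return noProxy
    .split(',')
    .map((s) => s.trim())
    .filter(Boolean)
    .some((pattern) =>
      pattern.startsWith('.')
        ? host.endsWith(pattern)
        : host === pattern || host.endsWith(`.${pattern}`),
    );
}
```

A smart client then picks the direct dispatcher for internal hosts and the proxy agent for everything else.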
Circuit Breakers: Your Production Lifesaver
No matter which HTTP client you choose, add a circuit breaker. Here's our production setup with Cockatiel:
Circuit Breaker Production Benefits
Payment providers can have intermittent timeouts during high-traffic periods. Without circuit breakers, entire checkout flows become blocked. With circuit breakers, systems automatically fail over to backup providers after a threshold of failures, preventing revenue loss.
Production Monitoring Setup
Whatever client you choose, instrument it:
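A minimal sketch of such a wrapper (the metric name is illustrative; swap the recorder for your Prometheus or OpenTelemetry client):

```javascript
// Wrap any fetch-compatible client to emit one record per request,
// including failures, with status and wall-clock duration.
function instrumentFetch(fetchImpl = fetch, record = console.log) {
  return async function instrumentedFetch(url, init) {
    const start = performance.now();
    let status = 'error';
    try {
      const res = await fetchImpl(url, init);
      status = res.status;
      return res;
    } finally {
      record({
        metric: 'http_client_request_duration_ms',
        url: String(url),
        status,
        durationMs: performance.now() - start,
      });
    }
  };
}
```

The `finally` block matters: failed requests are precisely the ones you most need in your dashboards.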
The Decision Matrix
After years of production experience, here's my recommendation matrix:
- Simple scripts, CLIs, prototypes: native fetch
- Complex transformations and middleware-heavy codebases: Axios (with explicit keep-alive agents)
- High-throughput, latency-sensitive microservices: Undici
- Complex business logic with many failure modes, FP-comfortable teams: Effect
- Node.js-only projects wanting pagination, streaming, and DNS caching: Got
- Fetch with batteries included and minimal overhead: Ky
- Edge runtimes (Cloudflare Workers, Vercel Edge): Hono
Production Debugging: Lessons from Experience
Phantom Memory Leak Debugging
Services can slowly consume memory over days without obvious signs in heap dumps. A common cause is subtle bugs in error handling:
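An illustrative reconstruction of the bug class (not our actual code) - the cleanup ran only on the success path, so every failed request leaked one entry:

```javascript
// In-flight request tracking; the finally block is the fix.
const inFlight = new Map();

async function trackedFetch(id, url, fetchImpl = fetch) {
  inFlight.set(id, { url, startedAt: Date.now() });
  try {
    return await fetchImpl(url);
  } finally {
    inFlight.delete(id); // the original bug: this delete only ran on success
  }
}
```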
Lesson: Always clean up request tracking, even in error paths.
Connection Pool Exhaustion
High-traffic events can exhaust connection pools, surfacing as services suddenly returning 502s. The issue often traces back to default connection limits:
Debugging Slow Requests in Production
We built a request analyzer that saved us countless debugging hours:
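The real analyzer is more elaborate, but the core idea fits in a few lines (the threshold and output shape are illustrative):

```javascript
// Flag slow requests together with the call site that issued them.
function trackSlowRequests(fetchImpl = fetch, { thresholdMs = 1000, onSlow = console.warn } = {}) {
  return async (url, init) => {
    const callSite = new Error('request-origin').stack; // captured before awaiting
    const start = performance.now();
    try {
      return await fetchImpl(url, init);
    } finally {
      const durationMs = performance.now() - start;
      if (durationMs >= thresholdMs) onSlow({ url: String(url), durationMs, callSite });
    }
  };
}
```

Capturing the stack before the await is the key trick: by the time the response arrives, the async stack is long gone.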
Key Lessons
- Connection pooling is essential - production systems can exhaust file descriptors without proper connection limits.
- Memory leaks hide in error paths - test error scenarios thoroughly and clean up resources in finally blocks.
- Circuit breakers prevent outages - external APIs will fail; implementing circuit breakers before issues arise saves time and money.
- Timeouts need layers - configure connection timeout, request timeout, and total timeout with different values.
- Comprehensive monitoring is critical - logs alone aren't sufficient; metrics and tracing provide essential insight into user experience.
- Default configurations need review - HTTP clients often ship production-unfriendly defaults, so always configure explicitly.
- Stack traces provide crucial context - knowing which code path triggered a slow request significantly reduces debugging time.
What's Next?
The HTTP client landscape keeps evolving. Native fetch is getting better, undici is adding HTTP/2, and Effect is gaining traction. Choose based on your team and use case, not hype.
Start simple (native fetch), measure everything, and upgrade when you hit real limitations. Whichever client fits your team and use case, implement proper error handling, add circuit breakers before you need them, and monitor everything.