# Class: RoundRobinPool

Extends: `undici.Dispatcher`

A pool of `Client` instances connected to the same upstream target, with round-robin client selection.

Unlike `Pool`, which always selects the first available client, `RoundRobinPool` cycles through its clients in round-robin fashion. This ensures an even distribution of requests across all connections, which is particularly useful when the upstream target is behind a load balancer that round-robins TCP connections across multiple backend servers (e.g., Kubernetes Services).

Requests are not guaranteed to be dispatched in order of invocation.
## `new RoundRobinPool(url[, options])`

Arguments:

- **url** `URL | string` - It should only include the protocol, hostname, and port.
- **options** `RoundRobinPoolOptions` (optional)
### Parameter: `RoundRobinPoolOptions`

Extends: `ClientOptions`

- **factory** `(origin: URL, opts: Object) => Dispatcher` - Default: `(origin, opts) => new Client(origin, opts)`
- **connections** `number | null` (optional) - Default: `null` - The number of `Client` instances to create. When set to `null`, the `RoundRobinPool` instance will create an unlimited amount of `Client` instances.
- **clientTtl** `number | null` (optional) - Default: `null` - The amount of time before a `Client` instance is removed from the `RoundRobinPool` and closed. When set to `null`, `Client` instances will not be removed or closed based on age.
## Use Case

`RoundRobinPool` is designed for scenarios where:

- You connect to a single origin (e.g., `http://my-service.namespace.svc`)
- That origin is backed by a load balancer distributing TCP connections across multiple servers
- You want requests evenly distributed across all backend servers

Example: In Kubernetes, when using a Service DNS name with multiple Pod replicas, kube-proxy load balances TCP connections. `RoundRobinPool` ensures each connection (and thus each Pod) receives an equal share of requests.
### Important: Backend Distribution Considerations

`RoundRobinPool` distributes HTTP requests evenly across TCP connections. Whether this translates to even backend server distribution depends on the load balancer's behavior.

**✓ Works when the load balancer:**

- Assigns different backends to different TCP connections from the same client
- Uses algorithms like round-robin, random, or least-connections (without client affinity)
- Example: default Kubernetes Services without `sessionAffinity`

**✗ Does NOT work when:**

- The load balancer has client/source IP affinity (all connections from one IP → same backend)
- The load balancer uses source-IP hashing or sticky sessions

**How it works:**

1. `RoundRobinPool` creates N TCP connections to the load balancer endpoint
2. The load balancer assigns each TCP connection to a backend (per its algorithm)
3. `RoundRobinPool` cycles HTTP requests across those N connections
4. Result: requests are distributed proportionally to how the load balancer distributed the connections
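The cycling step described above can be sketched in plain JavaScript. This is a minimal illustration, not undici's actual implementation: `RoundRobinSelector` and the `'conn-*'` strings are hypothetical stand-ins for the pool's internal `Client` list.

```js
// Minimal sketch of round-robin client selection, assuming the pool
// keeps its Client instances in an array plus a cursor into it.
class RoundRobinSelector {
  constructor (clients) {
    this.clients = clients // stand-ins for undici Client instances
    this.index = 0
  }

  // Return the current client, then advance the cursor (wrapping around).
  next () {
    const client = this.clients[this.index]
    this.index = (this.index + 1) % this.clients.length
    return client
  }
}

const selector = new RoundRobinSelector(['conn-0', 'conn-1', 'conn-2'])
const picks = Array.from({ length: 6 }, () => selector.next())
console.log(picks)
// → ['conn-0', 'conn-1', 'conn-2', 'conn-0', 'conn-1', 'conn-2']
```

Each client is picked exactly once per cycle, which is why request load ends up spread evenly across the pool's connections.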
If the load balancer assigns all connections to the same backend (e.g., due to session affinity), `RoundRobinPool` cannot overcome this. In such cases, consider using `BalancedPool` with direct backend addresses (e.g., individual Pod IPs) instead of a load-balanced endpoint.
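Whether the proportional distribution described above is even therefore hinges on how the load balancer pinned connections to backends. A small self-contained simulation makes the two cases concrete (the `distribute` helper and backend names are hypothetical, for illustration only):

```js
// Simulate: each connection is pinned to a backend by the load balancer,
// then `requests` HTTP requests are round-robined across the connections.
function distribute (connectionBackends, requests) {
  const counts = {}
  for (let i = 0; i < requests; i++) {
    const backend = connectionBackends[i % connectionBackends.length]
    counts[backend] = (counts[backend] || 0) + 1
  }
  return counts
}

// LB without affinity: 6 connections spread across 3 backends.
console.log(distribute(['b0', 'b1', 'b2', 'b0', 'b1', 'b2'], 600))
// → { b0: 200, b1: 200, b2: 200 } — even distribution

// LB with source-IP affinity: every connection pinned to one backend.
console.log(distribute(['b0', 'b0', 'b0', 'b0', 'b0', 'b0'], 600))
// → { b0: 600 } — round-robin over connections cannot fix this
```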
## Instance Properties

### `RoundRobinPool.closed`

Implements `Client.closed`.

### `RoundRobinPool.destroyed`

Implements `Client.destroyed`.

### `RoundRobinPool.stats`

Returns a `PoolStats` instance for this pool.
## Instance Methods

### `RoundRobinPool.close([callback])`

Implements `Dispatcher.close([callback])`.

### `RoundRobinPool.destroy([error, callback])`

Implements `Dispatcher.destroy([error, callback])`.

### `RoundRobinPool.connect(options[, callback])`

See `Dispatcher.connect(options[, callback])`.

### `RoundRobinPool.dispatch(options, handler)`

Implements `Dispatcher.dispatch(options, handler)`.

### `RoundRobinPool.pipeline(options, handler)`

See `Dispatcher.pipeline(options, handler)`.

### `RoundRobinPool.request(options[, callback])`

See `Dispatcher.request(options[, callback])`.

### `RoundRobinPool.stream(options, factory[, callback])`

See `Dispatcher.stream(options, factory[, callback])`.

### `RoundRobinPool.upgrade(options[, callback])`

See `Dispatcher.upgrade(options[, callback])`.
## Instance Events

### Event: `'connect'`

See Dispatcher Event: `'connect'`.

### Event: `'disconnect'`

See Dispatcher Event: `'disconnect'`.

### Event: `'drain'`

See Dispatcher Event: `'drain'`.
## Example

```js
import { RoundRobinPool } from 'undici'

const pool = new RoundRobinPool('http://my-service.default.svc.cluster.local', {
  connections: 10
})

// Requests will be distributed evenly across all 10 connections
for (let i = 0; i < 100; i++) {
  const { body } = await pool.request({
    path: '/api/data',
    method: 'GET'
  })
  console.log(await body.json())
}

await pool.close()
```
## See Also

- `Pool` - Connection pool without round-robin selection
- `BalancedPool` - Load balancing across multiple origins
- Issue #3648 - Original issue describing uneven distribution