A short demonstration of how Undici's `Pool` manages connections and keep-alive directives.
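The script below starts a local HTTP server that advertises a 15s keep-alive timeout, then sends two batches of 100 requests through a `Pool` configured with a 10s client-side keep-alive timeout and a limit of 10 connections. It prints the pool's connection stats after each batch, and once more after waiting long enough for the server's keep-alive window to expire.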
import { createServer } from 'node:http';
import { Pool } from 'undici';

// Create a simple HTTP server that is configured with a 15s keep-alive timeout
// and includes the 'Keep-Alive' header in responses
const server = createServer(
  { joinDuplicateHeaders: true, keepAlive: true, keepAliveTimeout: 15 * 1000 },
  (req, res) => {
    res.writeHead(200, {
      'Content-Type': 'text/plain',
      Connection: 'keep-alive',
      'Keep-Alive': 'timeout=15',
    });
    res.end('foo');
  },
);

// Track the number of connections to the server
let connections = 0;
server.on('connection', () => {
  connections++;
});

// We will use a keep-alive timeout of 10 seconds and expect 10 connections from the client pool
const keepAliveTimeout = 10 * 1000;
const expectedConnections = 10;

// Start the server
server.listen(0, () => {
  // Create a new pool pointed at the server, configured with a 10s keep-alive timeout.
  const pool = new Pool(`http://localhost:${server.address().port}`, {
    connections: expectedConnections,
    keepAliveTimeout,
    keepAliveMaxTimeout: keepAliveTimeout * 2,
  });

  // Determine a reasonable batch size for requests
  const batchSize = 100;
  let batch1Completed = 0;
  for (let i = 0; i < batchSize; i++) {
    // Generate `batchSize` requests to the server
    pool.request({ path: '/', method: 'GET' }, (err, res) => {
      if (err) {
        console.error(`Request error:`, err);
        return;
      }
      // Consume the response body to ensure the request is fully processed
      res.body
        .on('end', () => {
          // When the response completes, increment the batch completion counter
          batch1Completed++;
          // Once all `batchSize` requests are completed, move on to the next step
          if (batch1Completed === batchSize) {
            console.log(
              `Pool stats after first batch completed: ${stats(pool)}`,
            );
            // Once the first batch is done, wait half of the keep-alive timeout
            setTimeout(() => {
              // And execute another batch of requests
              let batch2Completed = 0;
              for (let j = 0; j < batchSize; j++) {
                pool.request({ path: '/', method: 'GET' }, (err, res) => {
                  if (err) {
                    console.error(`Request error:`, err);
                    return;
                  }
                  res.body
                    .on('end', () => {
                      batch2Completed++;
                      // Do the same thing as before: track request completion and, when all are done, move on
                      if (batch2Completed === batchSize) {
                        console.log(
                          `Pool stats after second batch completed: ${stats(pool)}`,
                        );
                        console.log(
                          `Server reported ${connections} connections. We expected ${expectedConnections} connections.`,
                        );
                        // Finally, wait long enough for the server's 15s keep-alive timeout to expire (plus an extra 1s)
                        // and see how the pool's connection count drops to 0 automatically
                        setTimeout(
                          () => {
                            console.log(
                              `Pool stats after waiting for keep-alive to expire: ${stats(pool)}`,
                            );
                            server.close();
                          },
                          keepAliveTimeout + 5000 + 1000,
                        );
                      }
                    })
                    .resume();
                });
              }
            }, keepAliveTimeout / 2);
          }
        })
        .resume();
    });
  }
});

function stats(pool) {
  return `Connected: ${pool.stats.connected} | Free: ${pool.stats.free} | Pending: ${pool.stats.pending} | Queued: ${pool.stats.queued} | Running: ${pool.stats.running} | Total: ${pool.stats.size}`;
}
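For comparison, here is a minimal sketch of the same request using undici's promise-based API rather than the callback form above. It assumes a `Pool` instance named `pool` created as in the script, and an ES-module or async context so `await` is available; it is not part of the original demo.

// A minimal sketch: issue one request via the promise API and drain the body
// so the connection returns to the pool for reuse.
const { statusCode, body } = await pool.request({ path: '/', method: 'GET' });
await body.text(); // consuming the body frees the connection for reuse
console.log(`status: ${statusCode} | ${stats(pool)}`);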
Example run:

Pool stats after first batch completed: Connected: 10 | Free: 9 | Pending: 0 | Queued: 0 | Running: 0 | Total: 0
Pool stats after second batch completed: Connected: 10 | Free: 5 | Pending: 0 | Queued: 0 | Running: 0 | Total: 0
Server reported 10 connections. We expected 10 connections.
Pool stats after waiting for keep-alive to expire: Connected: 0 | Free: 0 | Pending: 0 | Queued: 0 | Running: 0 | Total: 0
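The second batch runs 5 seconds after the first, which is inside both the client's 10s and the server's 15s keep-alive windows, so the pool reuses the connections it already opened and the server still reports only the expected 10 connections. Once the final wait exceeds the server's 15s keep-alive timeout, the idle connections are dropped and the pool's Connected count falls to 0, as the last line shows.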
