bitreq is HTTP/1.1 only. The pool stores one connection per (host, port, scheme), so all concurrent requests to a given host share that single connection. Two modes are available, neither suitable for a high-throughput workload to a single backend.
- Default (pipelining off): Requests are sent sequentially. Only one request is in flight at a time, and each new request waits for the previous one to finish.
- Pipelining on: Requests can be written back-to-back without waiting for responses, but responses must be read in request order. A slow response holds up every faster response queued behind it (head-of-line blocking).
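The latency difference between the two modes can be sketched with a toy timing model. This is plain Python with made-up numbers, not bitreq API; it assumes a constant round-trip time and negligible server processing, just to show where the time goes in each mode.

```python
# Hypothetical timing model for one HTTP/1.1 connection.
# Assumption: every request costs one network round trip (rtt), and
# server processing time is negligible compared to the RTT.
rtt = 0.05  # 50 ms round trip (illustrative value)
n = 10      # number of requests to the same host

# Default mode: strictly sequential. Each request waits for the previous
# response, so total time is n full round trips.
sequential_total = n * rtt

# Pipelining: all n requests are written back-to-back, so their round
# trips overlap. In the idealized case the whole batch completes in
# roughly one RTT.
pipelined_total = rtt

print(f"sequential: {sequential_total:.2f}s, pipelined: {pipelined_total:.2f}s")
```

Under these assumptions the sequential mode pays 0.5 s for ten requests where pipelining pays about 0.05 s, which is why the default mode is the bigger bottleneck for many small requests to one backend.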
HTTP/2's stream multiplexing lifts both constraints. Many requests share one connection with independent stream IDs, and the server returns responses in any order. There is no application-layer head-of-line blocking and no per-connection serialization.
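The head-of-line blocking that multiplexing removes can be made concrete with a small simulation. Again this is plain Python with invented service times, not bitreq or any HTTP/2 library; it assumes all requests are sent at t=0 and the server processes them concurrently, so only response ordering differs between the two cases.

```python
# Per-request server processing times in seconds (hypothetical): one slow
# request followed by four fast ones, all issued at t=0 on one connection.
service = [5.0, 0.1, 0.1, 0.1, 0.1]

# HTTP/1.1 pipelining: responses must be delivered in request order, so
# each response can complete no earlier than the one before it.
pipelined = []
done = 0.0
for s in service:
    done = max(done, s)
    pipelined.append(done)

# HTTP/2 multiplexing: streams are independent, so each response
# completes as soon as its own processing finishes.
multiplexed = list(service)

print(pipelined)    # every response waits behind the 5.0 s straggler
print(multiplexed)  # fast responses finish in 0.1 s regardless
```

With ordered delivery the single 5-second response delays all four 0.1-second responses to 5 seconds each; with independent streams, only the slow request pays its own cost.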