
Rails: Use a connection pool for the database connection #177

Open
p8 wants to merge 1 commit into MDA2AV:main from p8:rails/connection-pool

Conversation

@p8 (Contributor) commented Mar 26, 2026

No description provided.

@p8 p8 requested a review from MDA2AV as a code owner March 26, 2026 17:31
@p8 (Contributor, Author) commented Mar 26, 2026

/benchmark

@github-actions

🚀 Benchmark run triggered for rails (all profiles). Results will be posted here when done.

@BennyFranciscus (Collaborator)

Nice idea moving from thread-local connections to a proper connection pool — that's cleaner for managing DB access across Puma workers.

One thing though: the ConnectionPool gem doesn't seem to be in the Gemfile. You'd need to add gem 'connection_pool' there, otherwise the server will crash on the first DB request. Unless Rails 8 bundles it as a dependency? I know ActiveRecord has its own connection pool, but ConnectionPool the gem is a separate thing (from Mike Perham / sidekiq).
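If it does need to be declared, the fix would just be a Gemfile entry along these lines (the version constraint here is a guess, not taken from the PR):

```ruby
# Gemfile
gem "connection_pool", "~> 2.4"
```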

Also worth considering: with size: 4 and Puma running multiple threads, you might want to tune that to match the thread count. Though for a read-only SQLite DB the pool might not even help much since SQLite serializes writes anyway — the previous thread-local approach was already giving one connection per thread with no contention.
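To illustrate the contention point, here's a minimal stdlib sketch (using `SizedQueue` as a stand-in for the connection_pool gem; the thread count and names are made up for the example):

```ruby
require "thread"

POOL_SIZE = 4   # matches the size: 4 in this PR
THREADS   = 8   # hypothetical Puma thread count above the pool size

# SizedQueue as a stand-in pool: pop checks a connection out
# (blocking when all slots are taken), push checks it back in.
pool = SizedQueue.new(POOL_SIZE)
POOL_SIZE.times { |i| pool << "conn-#{i}" } # pretend connections

served = Queue.new
workers = THREADS.times.map do
  Thread.new do
    conn = pool.pop     # 4 threads proceed, the other 4 wait here
    sleep 0.01          # simulate a query holding the connection
    pool << conn        # return the connection to the pool
    served << true
  end
end
workers.each(&:join)

served.size # => 8: every request completes, but half had to queue
```

In this sketch checkout blocks indefinitely, so everything eventually completes; with the gem's `timeout: 5`, waiters that exceed the timeout raise `ConnectionPool::TimeoutError` instead, which is the kind of thing that surfaces as 5xx under load.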

Benchmark is running so we'll see if it makes a measurable difference!

@github-actions

Benchmark Results

Framework: rails | Profile: all profiles

rails / baseline / 512c (p=1, r=0, cpu=64)
  Best: 40471 req/s (CPU: 6310.4%, Mem: 2.3GiB) ===

rails / baseline / 4096c (p=1, r=0, cpu=64)
  Best: 41462 req/s (CPU: 6118.9%, Mem: 2.4GiB) ===

rails / baseline / 16384c (p=1, r=0, cpu=64)
  Best: 39898 req/s (CPU: 5939.9%, Mem: 2.4GiB) ===

rails / pipelined / 512c (p=16, r=0, cpu=unlimited)
  Best: 214650 req/s (CPU: 10323.3%, Mem: 3.5GiB) ===

rails / pipelined / 4096c (p=16, r=0, cpu=unlimited)
  Best: 208840 req/s (CPU: 10237.4%, Mem: 3.5GiB) ===

rails / pipelined / 16384c (p=16, r=0, cpu=unlimited)
  Best: 191758 req/s (CPU: 8964.2%, Mem: 3.6GiB) ===

rails / limited-conn / 512c (p=1, r=10, cpu=unlimited)
  Best: 36334 req/s (CPU: 5983.0%, Mem: 4.4GiB) ===

rails / limited-conn / 4096c (p=1, r=10, cpu=unlimited)
  Best: 36851 req/s (CPU: 6650.1%, Mem: 4.4GiB) ===

rails / json / 4096c (p=1, r=0, cpu=unlimited)
  Best: 135071 req/s (CPU: 10098.1%, Mem: 7.1GiB) ===

rails / json / 16384c (p=1, r=0, cpu=unlimited)
  Best: 111032 req/s (CPU: 8674.4%, Mem: 7.3GiB) ===

rails / upload / 64c (p=1, r=0, cpu=unlimited)
  Best: 219 req/s (CPU: 4922.2%, Mem: 9.0GiB) ===

rails / upload / 256c (p=1, r=0, cpu=unlimited)
  Best: 188 req/s (CPU: 11644.6%, Mem: 11.8GiB) ===

rails / upload / 512c (p=1, r=0, cpu=unlimited)
  Best: 181 req/s (CPU: 11904.8%, Mem: 12.2GiB) ===

rails / compression / 4096c (p=1, r=0, cpu=unlimited)
  Best: 7773 req/s (CPU: 12051.4%, Mem: 15.2GiB) ===

rails / compression / 16384c (p=1, r=0, cpu=unlimited)
  Best: 7302 req/s (CPU: 10772.9%, Mem: 15.9GiB) ===

rails / noisy / 512c (p=1, r=0, cpu=unlimited)
  Best: 148713 req/s (CPU: 10097.6%, Mem: 4.5GiB) ===

rails / noisy / 4096c (p=1, r=0, cpu=unlimited)
  Best: 111771 req/s (CPU: 10174.6%, Mem: 4.8GiB) ===

rails / noisy / 16384c (p=1, r=0, cpu=unlimited)
  Best: 81448 req/s (CPU: 8634.0%, Mem: 4.8GiB) ===

rails / mixed / 4096c (p=1, r=5, cpu=unlimited)
  Best: 21164 req/s (CPU: 10945.0%, Mem: 21.4GiB) ===

rails / mixed / 16384c (p=1, r=5, cpu=unlimited)
  Best: 19215 req/s (CPU: 9991.9%, Mem: 21.0GiB) ===
Full log
[wait] Waiting for server...
[ready] Server is up

[run 1/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  15s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   76.45ms   13.40ms   232.90ms   408.90ms   618.30ms

  348884 requests in 15.00s, 326049 responses
  Throughput: 21.73K req/s
  Bandwidth:  847.38MB/s
  Status codes: 2xx=290453, 3xx=0, 4xx=0, 5xx=35596
  Latency samples: 326049 / 326049 responses (100.0%)
  Latency overflow (>5s): 133
  Reconnects: 69252
  Errors: connect 0, read 55, timeout 0
  Per-template: 34013,34231,34434,34692,35040,28276,35596,34907,27626,27234
  Per-template-ok: 34013,34231,34434,34692,35040,28276,0,34907,27626,27234

  WARNING: 35596/326049 responses (10.9%) had unexpected status (expected 2xx)
  CPU: 10968.8% | Mem: 19.3GiB

[run 2/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  15s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   65.50ms   9.92ms   216.60ms   350.90ms   500.90ms

  380232 requests in 15.00s, 356231 responses
  Throughput: 23.74K req/s
  Bandwidth:  930.61MB/s
  Status codes: 2xx=317469, 3xx=0, 4xx=0, 5xx=38762
  Latency samples: 356228 / 356231 responses (100.0%)
  Reconnects: 75689
  Errors: connect 0, read 5, timeout 0
  Per-template: 37126,37359,37590,37943,38170,30749,38762,38265,30279,29985
  Per-template-ok: 37126,37359,37590,37943,38170,30749,0,38265,30279,29985

  WARNING: 38762/356231 responses (10.9%) had unexpected status (expected 2xx)
  CPU: 10945.0% | Mem: 21.4GiB

[run 3/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  15s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   66.67ms   10.20ms   219.60ms   365.10ms   522.60ms

  377748 requests in 15.00s, 354085 responses
  Throughput: 23.60K req/s
  Bandwidth:  924.28MB/s
  Status codes: 2xx=315556, 3xx=0, 4xx=0, 5xx=38529
  Latency samples: 354085 / 354085 responses (100.0%)
  Reconnects: 75225
  Errors: connect 0, read 4, timeout 0
  Per-template: 37154,37272,37383,37534,37896,30584,38529,37882,30014,29837
  Per-template-ok: 37154,37272,37383,37534,37896,30584,0,37882,30014,29837

  WARNING: 38529/354085 responses (10.9%) had unexpected status (expected 2xx)
  CPU: 10845.5% | Mem: 21.4GiB

=== Best: 21164 req/s (CPU: 10945.0%, Mem: 21.4GiB) ===
  Input BW: 2.07GB/s (avg template: 104924 bytes)
[dry-run] Results not saved (use --save to persist)
httparena-bench-rails

==============================================
=== rails / mixed / 16384c (p=1, r=5, cpu=unlimited) ===
==============================================
a864dee3bd4cd89449f909b813fe6a688522a3c36d71bf2dcc1164ce0055c9b9
[wait] Waiting for server...
[ready] Server is up

[run 1/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     16384 (256/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  15s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   83.36ms   16.60ms   255.30ms   538.50ms    1.07s

  327749 requests in 15.02s, 299870 responses
  Throughput: 19.96K req/s
  Bandwidth:  800.64MB/s
  Status codes: 2xx=268198, 3xx=0, 4xx=0, 5xx=31672
  Latency samples: 299870 / 299870 responses (100.0%)
  Reconnects: 64224
  Errors: connect 0, read 348, timeout 0
  Per-template: 32346,32272,32225,31799,31789,25469,31672,30347,26153,25798
  Per-template-ok: 32346,32272,32225,31799,31789,25469,0,30347,26153,25798

  WARNING: 31672/299870 responses (10.6%) had unexpected status (expected 2xx)
  CPU: 10034.7% | Mem: 19.2GiB

[run 2/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     16384 (256/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  15s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   75.62ms   15.20ms   242.90ms   447.00ms   734.00ms

  349901 requests in 15.05s, 320835 responses
  Throughput: 21.31K req/s
  Bandwidth:  858.12MB/s
  Status codes: 2xx=286767, 3xx=0, 4xx=0, 5xx=34068
  Latency samples: 320835 / 320835 responses (100.0%)
  Reconnects: 68619
  Errors: connect 0, read 300, timeout 0
  Per-template: 34457,34394,34224,33999,34076,27247,34068,32573,28149,27648
  Per-template-ok: 34457,34394,34224,33999,34076,27247,0,32573,28149,27648

  WARNING: 34068/320835 responses (10.6%) had unexpected status (expected 2xx)
  CPU: 10132.6% | Mem: 20.6GiB

[run 3/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     16384 (256/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  15s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   88.23ms   14.60ms   245.50ms   461.10ms    5.00s

  352678 requests in 15.01s, 322692 responses
  Throughput: 21.50K req/s
  Bandwidth:  865.27MB/s
  Status codes: 2xx=288424, 3xx=0, 4xx=0, 5xx=34268
  Latency samples: 322692 / 322692 responses (100.0%)
  Latency overflow (>5s): 385
  Reconnects: 69763
  Errors: connect 0, read 327, timeout 0
  Per-template: 34652,34443,34438,34473,34327,27506,34267,32473,28234,27879
  Per-template-ok: 34652,34443,34438,34473,34327,27505,0,32473,28234,27879

  WARNING: 34268/322692 responses (10.6%) had unexpected status (expected 2xx)
  CPU: 9991.9% | Mem: 21.0GiB

=== Best: 19215 req/s (CPU: 9991.9%, Mem: 21.0GiB) ===
  Input BW: 1.88GB/s (avg template: 104924 bytes)
[dry-run] Results not saved (use --save to persist)
httparena-bench-rails
[skip] rails does not subscribe to async-db
[skip] rails does not subscribe to baseline-h2
[skip] rails does not subscribe to static-h2
[skip] rails does not subscribe to baseline-h3
[skip] rails does not subscribe to static-h3
[skip] rails does not subscribe to unary-grpc
[skip] rails does not subscribe to unary-grpc-tls
[skip] rails does not subscribe to echo-ws
[restore] Restoring CPU governor to powersave...

@BennyFranciscus (Collaborator)

Benchmark results are in! Comparing against PR #175 baseline:

Baseline/pipelined/json — essentially identical, which makes sense since those don't touch the DB.

Mixed test (DB-heavy) — regression:

               PR #175 (thread-local)   PR #177 (connection pool)      Δ
mixed 4096c    24,068                   21,164                      -12%
mixed 16384c   22,429                   19,215                      -14%

And notably, PR #177 is showing ~10.9% 5xx errors in the mixed test that weren't present in PR #175 (which had 0 5xx). That suggests the connection pool is actually causing failures under high concurrency — possibly pool exhaustion with size: 4 being too small for the thread count, or checkout timeouts under load.

The thread-local approach (one connection per Puma thread, no contention) might actually be the better fit for this workload. Connection pools shine when you have more threads than DB connections, but here SQLite is the bottleneck anyway.

Might be worth bumping the pool size to match Puma's thread count and seeing if the 5xx errors go away, but honestly the previous approach was simpler and faster for this use case.
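That tuning could look something like this (hypothetical helper; `RAILS_MAX_THREADS` is the conventional env var Puma's generated config reads, defaulting to 5 — nothing here is taken from the PR itself):

```ruby
# Derive the DB pool size from Puma's thread count so every server
# thread can hold a connection without waiting. The helper name and
# the explicit env parameter are illustrative only.
def db_pool_size(env = ENV)
  Integer(env.fetch("RAILS_MAX_THREADS", "5"))
end

db_pool_size({})                          # => 5 (default)
db_pool_size("RAILS_MAX_THREADS" => "16") # => 16
```

Then something like `ConnectionPool.new(size: db_pool_size, timeout: 5) { ... }` would keep checkout waits rare even when Puma runs more than four threads per worker.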


@BennyFranciscus BennyFranciscus left a comment


Straightforward improvement. ConnectionPool is standard Ruby practice for thread-safe DB access. Pool size 4 with timeout 5 is reasonable. Clean.
