
Fix #933: cap main DuckDB memory_limit, per-pair compaction exclude detection #955

Merged

erikdarlingdata merged 2 commits into dev from feature/933-compaction-binder-and-adaptive-memory on May 15, 2026
Conversation

@erikdarlingdata (Owner)

Summary

Two related fixes for #933, on top of #952:

  1. Cap the main DuckDB connection's memory_limit at 1 GB (raised transiently to 4 GB around parquet COPY operations).
  2. Detect compaction exclude-columns per merge step instead of once globally — fixes a Binder Error on query_store_stats archives that span the v13 schema change.

Why this is the right fix

We re-read #933 carefully and realized the titled complaint is "Memory usage on client" — the reporter says Lite uses 2.7-2.9 GB after 10 minutes with 4 servers. The compaction OOMs we've been chasing are downstream symptoms: by the time compaction runs, the app already holds 2.7 GB, leaving little headroom on a 16 GB / 1.6 GB-free machine. The compaction tunings in #942 / #952 were treating symptoms.

Root cause: DuckDbInitializer.ConnectionString set no memory_limit, so the buffer pool ran at DuckDB's default ceiling of 80% of system RAM (~12.8 GB). With archive parquet files accumulating, every UI query over archive views caches parquet pages — the buffer pool grows freely. That's the 2.7-2.9 GB the reporter is seeing.
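
For concreteness, the fix is just an explicit memory_limit where previously none was set. A minimal sketch, assuming DuckDB.NET-style configuration (DuckDbInitializer's real layout isn't shown in this PR, so the class below is illustrative):

```csharp
// Illustrative sketch: DuckDbInitializer's real layout isn't shown in this
// PR. The substance is the explicit memory_limit in the connection string,
// replacing DuckDB's default cap of 80% of system RAM.
public static class DuckDbInitializer
{
    private const string RestingMemoryLimit = "1GB";

    public static string ConnectionString(string dbPath) =>
        $"Data Source={dbPath};memory_limit={RestingMemoryLimit}";
}
```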

The wrinkle

We can't just cap the main connection at 1 GB statically. DuckDB v1.5.2 parquet COPY has a buffer-manager-bypass pre-reservation that needs 2-4 GB headroom (validated against DuckDB CLI v1.5.2 standalone — same OOM in both .NET binding and CLI; tracked upstream at duckdb#16482). So:

  • Resting: memory_limit=1GB in the ConnectionString — caps the archive-page cache, addresses the actual complaint.
  • Around each parquet COPY on the main connection: SET memory_limit='4GB', run COPY, restore to 1GB. Factored into a WithRaisedCopyMemoryLimit helper (sketched after this list). Three call sites: ExportToParquet and the two COPY paths in ArchiveAllAndResetAsync.
  • Compaction connections (separate :memory: instances): keep the 4 GB cap from #952 (Fix #933: raise compaction memory_limit to 4 GB).
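
A minimal sketch of the raise/restore wrapper, assuming DuckDB.NET (the WithRaisedCopyMemoryLimit name comes from this PR; the signature, the wrapping class, and the try/finally shape are assumptions):

```csharp
using System;
using System.Threading.Tasks;
using DuckDB.NET.Data;

internal static class ParquetCopyMemory
{
    // Raise the cap for the COPY's buffer-manager-bypass pre-reservation,
    // then restore the resting cap even if the COPY throws.
    public static async Task WithRaisedCopyMemoryLimit(
        DuckDBConnection connection, Func<Task> runCopy)
    {
        await ExecAsync(connection, "SET memory_limit='4GB';");
        try
        {
            await runCopy(); // e.g. COPY query_snapshots TO '...' (FORMAT PARQUET)
        }
        finally
        {
            await ExecAsync(connection, "SET memory_limit='1GB';");
        }
    }

    private static async Task ExecAsync(DuckDBConnection connection, string sql)
    {
        using var cmd = connection.CreateCommand();
        cmd.CommandText = sql;
        await cmd.ExecuteNonQueryAsync();
    }
}
```

Restoring in finally matters here: a failed COPY must not leave the main connection sitting at 4 GB.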

Separate Binder fix

CompactParquetFiles detected CompactionExcludeColumns once across the global union of files, then applied * EXCLUDE (col) to each pair. query_plan_text was added to query_store_stats in migration v13 (2026-02-23), so the reporter's mix of pre-v13 and post-v13 archives lets the global detector see the column, but a pair of two pre-v13 files hits Binder Error: Column "query_plan_text" in EXCLUDE list not found in FROM clause. Fixed by detecting exclude-columns per merge-set in a new BuildSelectClause helper.
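
A sketch of the per-merge-set detection, assuming DuckDB.NET (BuildSelectClause and CompactionExcludeColumns are names from this PR; their exact shapes, including the connection parameter and the dictionary layout, are assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using DuckDB.NET.Data;

internal static class CompactionSelect
{
    // Hypothetical shape for illustration; the real field lives in the codebase.
    private static readonly Dictionary<string, string[]> CompactionExcludeColumns = new()
    {
        ["query_store_stats"] = new[] { "query_plan_text" },
    };

    // Detect exclude-columns against THIS merge set only. DESCRIBE over
    // read_parquet is a parquet footer read, not a data scan.
    public static string BuildSelectClause(
        DuckDBConnection connection, string table, IReadOnlyList<string> paths)
    {
        var fileList = string.Join(", ", paths.Select(p => $"'{p.Replace("'", "''")}'"));
        var present = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

        using var cmd = connection.CreateCommand();
        cmd.CommandText =
            $"DESCRIBE SELECT * FROM read_parquet([{fileList}], union_by_name = true);";
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
            present.Add(reader.GetString(0)); // first DESCRIBE column is column_name

        var excludes = CompactionExcludeColumns.TryGetValue(table, out var cols)
            ? cols.Where(present.Contains).ToList()
            : new List<string>();

        // A merge set that doesn't carry an exclude-column gets a plain "*".
        return excludes.Count == 0
            ? "*"
            : $"* EXCLUDE ({string.Join(", ", excludes)})";
    }
}
```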

Validation

DuckDB CLI v1.5.2 against synthetic query_snapshots-shaped data:

| Operation | 256 MB | 512 MB | 1 GB | 2 GB | 4 GB |
| --- | --- | --- | --- | --- | --- |
| INSERT (Appender, 1000 wide rows) | ✅ | ✅ | ✅ | ✅ | ✅ |
| SELECT recent / GROUP BY | ✅ | ✅ | ✅ | ✅ | ✅ |
| COPY table → parquet | ❌ pre-resv | ❌ pre-resv | ❌ pre-resv | ✅ | ✅ |
| COPY read_parquet → parquet (compaction) | ❌ | ❌ | ❌ | ✅* | ✅ |

* succeeds on 200 MB synthetic; OOMs on 1.5 GB. The 4 GB raise covers both.

Binder fix reproduced and verified against the DuckDB CLI on a 1-file-with-column + 1-file-without pair.
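
For reference, a hypothetical .NET repro of the same failure, assuming DuckDB.NET; 'old_a.parquet' and 'old_b.parquet' are illustrative stand-ins for two pre-v13 archives that lack query_plan_text:

```csharp
using System;
using System.Data.Common;
using DuckDB.NET.Data;

using var con = new DuckDBConnection("Data Source=:memory:");
con.Open();

using var cmd = con.CreateCommand();
// The old, globally detected clause applied to a pre-v13-only pair:
cmd.CommandText =
    "SELECT * EXCLUDE (query_plan_text) " +
    "FROM read_parquet(['old_a.parquet', 'old_b.parquet'], union_by_name = true);";
try
{
    cmd.ExecuteNonQuery();
}
catch (DbException ex)
{
    // Binder Error: Column "query_plan_text" in EXCLUDE list not found in FROM clause
    Console.WriteLine(ex.Message);
}
```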

Tradeoff (named, not hidden)

The resting 1 GB cap forces eviction of cached archive parquet pages. Long-range historical UI queries that re-scan many parquet files will do more disk I/O. Live/recent-data queries against the hot DB are unaffected — hot DB is small enough to fit in 1 GB easily.

Test plan

  • dotnet build Lite/PerformanceMonitorLite.csproj -c Release builds clean
  • DuckDB CLI validation matrix above
  • Binder fix verified end-to-end (mixed-schema parquet pair)
  • Local hands-on with 4 SQL servers — confirm resting RSS drops materially from 2.7-2.9 GB baseline, UI feels normal, archival/compaction cycle completes
  • Reporter confirms on next nightly

🤖 Generated with Claude Code

erikdarlingdata and others added 2 commits May 14, 2026 09:40
CompactParquetFiles detected CompactionExcludeColumns once, globally,
across the union schema of every source file in a group. It then applied
that "* EXCLUDE (col)" clause to each pair in the pairwise merge.

query_plan_text was added to query_store_stats in migration v13
(2026-02-23). A reporter's archive contains both pre-v13 files (no
column) and post-v13 files (column present). The global DESCRIBE saw
the column in the newer files, so every merge step ran with
"* EXCLUDE (query_plan_text)" — including the steps that merged two
pre-v13 files, which fail with:

  Binder Error: Column "query_plan_text" in EXCLUDE list not found
  in FROM clause

Extract the schema detection into BuildSelectClause(table, paths) and
call it per merge-set instead of once globally — with the actual pair
in the pairwise path, and with all sources in the small-group path. A
pair that doesn't carry an exclude-column now merges with a plain "*".

Verified against DuckDB CLI v1.5.2: DESCRIBE of an [old, old] pair
correctly omits the column, and "* EXCLUDE (query_plan_text)" on that
pair reproduces the reporter's exact Binder Error. Cost is one extra
DESCRIBE per merge step — parquet footer reads, not data.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
#933's titled complaint is "Memory usage on client": Lite holds ~2.7-2.9 GB
after 10 minutes with 4 servers. The compaction OOMs everyone has been
chasing in this thread are a downstream symptom — by the time compaction
runs the app already holds 2.7 GB, leaving little headroom on the reporter's
16 GB / ~1.6 GB-free machine.

Root cause: the main DuckDB ConnectionString set no memory_limit, so the
buffer pool ran at the DuckDB default of 80% of system RAM (~12.8 GB on a
16 GB box). With archive parquet files accumulating on disk, every UI query
over an archive view caches pages and the buffer pool grows freely.

The fix has to navigate one wrinkle: parquet COPY in DuckDB v1.5.2 hits a
buffer-manager-bypass pre-reservation that needs ~2-4 GB headroom. Capping
the main connection at 1 GB statically would break ExportToParquet and the
two COPY paths in ArchiveAllAndResetAsync. So:

- ConnectionString: memory_limit=1GB (caps resting buffer pool — addresses
  the actual complaint by stopping the archive-page cache from growing
  unbounded).
- Around each parquet COPY on the main connection: SET memory_limit='4GB',
  run the COPY, SET back to '1GB'. Factored into a WithRaisedCopyMemoryLimit
  helper so the three call sites stay consistent (ExportToParquet, and the
  two COPYs in ArchiveAllAndResetAsync).
- Compaction connections (separate :memory: instances) keep their 4 GB cap
  from #952.

Verified against DuckDB CLI v1.5.2 with synthetic query_snapshots-shaped
data:
- COPY table→parquet at 256MB/512MB/1GB: OOMs (pre-reservation, matches the
  read_parquet→parquet path we saw in #952 testing).
- COPY table→parquet at 2GB/4GB: succeeds, peak RSS well under cap.
- INSERT (Appender) and SELECT (including GROUP BY across 11k rows) work
  fine at 256MB cap — confirms collectors and UI queries don't have the
  pre-reservation behavior and aren't affected by the resting cap.

Tradeoff: the resting cap forces buffer-pool eviction of cached archive
parquet pages. Long-range historical UI queries that re-scan many parquet
files will do more disk I/O. Live/recent-data queries against the hot DB
are unaffected (hot DB is small enough to fit in 1 GB easily).

Plus the per-merge-step BuildSelectClause from the previous commit fixes
the separate query_store_stats Binder Error on archives that span the
v13 schema change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@erikdarlingdata erikdarlingdata merged commit b534d89 into dev May 15, 2026
2 checks passed
@erikdarlingdata erikdarlingdata deleted the feature/933-compaction-binder-and-adaptive-memory branch May 15, 2026 14:48