
Fix Iceberg read optimization returning NULLs for stats-less manifests (#1545) #1764

Open

il9ue wants to merge 1 commit into Altinity:antalya-26.1 from il9ue:fix/iceberg-empty-stats-26.1


@il9ue il9ue commented May 8, 2026

Changelog category (leave one):

  • Bug Fix

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Fix Iceberg read optimization returning NULL for every column when reading from manifests written without per-file column statistics (typical of non-Spark writers like pyiceberg with default settings). Affects icebergLocal, icebergS3, icebergAzure, icebergHDFS, and all *Cluster variants. Antalya 26.1 fix for Altinity/ClickHouse#1545.

Description

Antalya-specific bug fix on antalya-26.1 (base tag v26.1.x.altinityantalya). No upstream cherry-pick — this bug exists only on Antalya, introduced by Altinity/ClickHouse#1069 ("Read optimization using Iceberg metadata"). Same fix is being applied to antalya-25.8 and antalya-26.3 as separate PRs.

Why this fires

When reading an Iceberg table written by a non-Spark writer that omits per-file column statistics from the manifest's Avro schema (pyiceberg with default settings, format v1 writers, and others), the allow_experimental_iceberg_read_optimization path produces silent data loss: correct row counts, every column value NULL. The reporter confirmed it on icebergLocal; investigation showed the same code path fires for icebergS3, icebergAzure, icebergHDFS, and all *Cluster variants, so any query through these engines against a table from a non-Spark Iceberg writer silently returns NULLs.

Root cause

IcebergIterator always populates file_meta_info before yielding objects, so the file_meta_data.has_value() check in the optimization passes. The issue is what's inside the populated DataFileMetaInfo: when the manifest's data_file.value_counts / column_sizes / null_value_counts Avro fields are all absent (per the Iceberg spec, all three are optional), DataFileMetaInfo::columns_info stays empty.

The optimization's second loop in StorageObjectStorageSource::createReader then iterates every requested column, finds none of them in the empty columns_info map, and adds them all to constant_columns_with_values with Field() (NULL). requested_columns_copy is cleared, need_only_count = true, the Parquet reader returns row count only, and generate() injects every column as a constant-NULL column at the correct row count.

The optimization conflates "no stats were written" with "all columns are absent." Absent stats tell us nothing about which columns are physically present in the file.
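The pre-fix behavior can be illustrated with a minimal sketch. The types below (ColumnStats, Field, planConstantColumns) are hypothetical simplifications, not the actual DataFileMetaInfo or createReader code; the point is how an empty columns_info map turns every requested column into a constant NULL:

```cpp
#include <map>
#include <optional>
#include <string>
#include <vector>

// Hypothetical stand-ins for the real types.
struct ColumnStats { long value_count = 0; };
using Field = std::optional<long>;  // std::nullopt models NULL

struct DataFileMetaInfo
{
    // Per-column stats parsed from the manifest; empty when the writer
    // omitted value_counts / column_sizes / null_value_counts.
    std::map<std::string, ColumnStats> columns_info;
};

// Sketch of the pre-fix absent-NULL loop: any requested column missing
// from columns_info is treated as "absent from the file" and replaced
// by a constant NULL; an emptied requested-columns list then implies
// need_only_count = true.
std::map<std::string, Field> planConstantColumns(
    const DataFileMetaInfo & meta, std::vector<std::string> & requested_columns)
{
    std::map<std::string, Field> constant_columns_with_values;
    std::vector<std::string> remaining;
    for (const auto & name : requested_columns)
    {
        if (meta.columns_info.find(name) == meta.columns_info.end())
            constant_columns_with_values[name] = std::nullopt;  // constant NULL
        else
            remaining.push_back(name);
    }
    requested_columns = remaining;
    return constant_columns_with_values;
}
```

With a stats-less manifest, columns_info is empty, so every requested column lands in the constant-NULL map and the reader is asked for a row count only.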

The fix

Add an any_stats_field_present boolean to DataFileMetaInfo. In ManifestFile.cpp, set it to true if any of value_counts, column_sizes, or null_value_counts was emitted by the writer. Gate the optimization's absent-NULL loop on this flag: when no stats were emitted, skip the loop entirely and fall through to the Parquet reader, which correctly handles both physically present columns (read normally) and schema-evolved absent columns (handled upstream by IcebergMetadata::getInitialSchemaByPath setting the file's own schema as initial_header).
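The flag computation amounts to an OR over the three optional Avro fields. A minimal sketch, with AvroDataFile as a hypothetical stand-in for the deserialized manifest entry (per the Iceberg spec, all three stats fields are optional):

```cpp
#include <optional>
#include <vector>

// Hypothetical stand-in for the parsed data_file record; in the real
// code this comes from the manifest's Avro deserialization.
struct AvroDataFile
{
    std::optional<std::vector<long>> value_counts;
    std::optional<std::vector<long>> column_sizes;
    std::optional<std::vector<long>> null_value_counts;
};

// The fix's flag: true iff the writer emitted at least one stats field.
// The optimization's absent-NULL loop only runs when this is true;
// otherwise the read falls through to the Parquet reader.
bool anyStatsFieldPresent(const AvroDataFile & f)
{
    return f.value_counts.has_value()
        || f.column_sizes.has_value()
        || f.null_value_counts.has_value();
}
```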

A per-column presence set was considered but is unnecessary because schema evolution is already handled upstream of the optimization; the boolean is sufficient.

JSON serialization (cluster reads via toJson() / JSON-ptr constructor) is updated to round-trip the new field. Missing-on-deserialization defaults to false, which matches pre-fix behavior.
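The compatibility property being claimed (missing key deserializes to false) can be shown with a deliberately simplified round-trip sketch. The real code uses the existing toJson() / JSON-ptr machinery; the hand-rolled string handling here is only to make the default-false behavior concrete:

```cpp
#include <string>

// Hypothetical sketch: serialize the flag, and default to false when
// the key is missing (e.g. JSON produced by a pre-fix node in a
// mixed-version cluster), matching pre-fix behavior.
std::string toJson(bool any_stats_field_present)
{
    return std::string("{\"any_stats_field_present\":")
        + (any_stats_field_present ? "true" : "false") + "}";
}

bool fromJson(const std::string & json)
{
    // Key absent => false, so old senders keep the old (stats-trusting)
    // behavior rather than wrongly enabling the gate.
    return json.find("\"any_stats_field_present\":true") != std::string::npos;
}
```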

Files changed

  • src/Storages/ObjectStorage/DataLakes/IDataLakeMetadata.h: added any_stats_field_present field to DataFileMetaInfo; constructor signature updated.
  • src/Storages/ObjectStorage/DataLakes/IDataLakeMetadata.cpp: JSON serde round-trips the new field; missing-on-deserialize defaults to false.
  • src/Storages/ObjectStorage/DataLakes/Iceberg/ManifestFile.h / .cpp: tracks whether any stats Avro field was present; passes the bool through ManifestFileEntry to DataFileMetaInfo.
  • src/Storages/ObjectStorage/DataLakes/Iceberg/IcebergIterator.cpp: forwards the new bool when constructing DataFileMetaInfo.
  • src/Storages/ObjectStorage/StorageObjectStorageSource.cpp: the absent-NULL loop now skips when any_stats_field_present is false.

Tested

  • Local build on this branch: PASS (ninja clickhouse, RelWithDebInfo, clang-21).
  • Integration tests (new), all PASS:
    • test_iceberg_local_returns_actual_rows_with_stats_less_manifest — reproducer, fails without the fix, passes with it.
    • test_iceberg_local_returns_correct_rows_when_optimization_disabled — control.
    • test_iceberg_local_partial_stats_manifest_reads_correctly — manifest with value_counts only.
    • test_iceberg_local_full_stats_manifest_reads_correctly — full Spark-style stats regression guard.
  • Existing iceberg integration suite: green.

CI/CD Options

Exclude tests:

  • Fast test
  • Integration Tests
  • Stateless tests
  • Stateful tests
  • Performance tests
  • All with ASAN
  • All with TSAN
  • All with MSAN
  • All with UBSAN
  • All with Coverage
  • All with Aarch64
  • All Regression
  • Disable CI Cache

Regression jobs to run:

  • Fast suites (mostly <1h)
  • Aggregate Functions (2h)
  • Alter (1.5h)
  • Benchmark (30m)
  • ClickHouse Keeper (1h)
  • Iceberg (2h)
  • LDAP (1h)
  • Parquet (1.5h)
  • RBAC (1.5h)
  • SSL Server (1h)
  • S3 (2h)
  • Tiered Storage (2h)

When an Iceberg manifest's per-file column statistics are absent (a
common case for non-Spark writers like pyiceberg with default
settings), DataFileMetaInfo::columns_info is empty. The optimization
in StorageObjectStorageSource::createReader misread this as 'all
columns are absent from the file' and returned constant NULLs for
every row while still returning the correct row count. Result: silent
data loss on icebergLocal, icebergS3, icebergAzure, icebergHDFS, and
all *Cluster variants.

Track whether any per-file stats were emitted via a new
'any_stats_field_present' boolean on DataFileMetaInfo, populated
during manifest parsing. The optimization's absent-NULL loop only
fires when stats are present; when stats are absent entirely, fall
through to the Parquet reader, which correctly handles both
physically-present columns (read normally) and schema-evolved-absent
columns (handled by IcebergMetadata::getInitialSchemaByPath setting
the file's own schema as initial_header).

Closes Altinity#1545.
