
⚡ Bolt: [performance improvement] Optimize PyArrow .as_py() calls#2588

Open
SatoryKono wants to merge 4 commits into main from bolt-optimize-as-py-8558119282180853928

Conversation

@SatoryKono (Owner) commented Mar 30, 2026

💡 What: Optimize PyArrow scalar-to-Python conversion in serialize_column_to_json by caching the .as_py() result with the walrus operator.
🎯 Why: PyArrow's .as_py() method is expensive. The previous implementation called it twice per element (once for the null check, once for serialization); caching the result eliminates the redundant call.
📊 Impact: Expected to significantly improve serialization performance for complex Arrow columns (lists, structs) during export-safe flattening.
🔬 Measurement: To verify, benchmark the flatten_arrow_table_for_export function on a large table with list or struct columns and compare execution times.


PR created automatically by Jules for task 8558119282180853928 started by @SatoryKono

Summary by CodeRabbit

  • Refactor

    • Optimized data serialization to eliminate redundant processing during exports, improving export performance and reducing memory overhead.
  • Chores

    • Removed a stale test artifact from the repository to reduce clutter and avoid misleading test output during local inspection.

Co-authored-by: SatoryKono <13055362+SatoryKono@users.noreply.github.com>
@google-labs-jules (Contributor)

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.

@github-actions github-actions bot added the layer:domain Domain layer label Mar 30, 2026
coderabbitai bot commented Mar 30, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6d6906fd-9a99-4966-82f5-f34d62af9535

📥 Commits

Reviewing files that changed from the base of the PR and between 499191c and 5f4cd0b.

📒 Files selected for processing (1)
  • .pytest-tmp/infra-integ/collect-only.txt
💤 Files with no reviewable changes (1)
  • .pytest-tmp/infra-integ/collect-only.txt

📝 Walkthrough

Cached v.as_py() via the walrus operator in serialize_column_to_json to avoid a redundant call; also removed a generated pytest collect-only artifact file containing test collection errors.

Changes

Cohort / File(s) — Summary

JSON Serialization Optimization — src/bioetl/domain/serialization.py:
Use the walrus operator to store v.as_py() in serialize_column_to_json, removing a duplicate call in the comprehension while preserving existing null handling and output construction.

Test Artifact Removal — .pytest-tmp/infra-integ/collect-only.txt:
Deleted a generated pytest collect-only output file that recorded multiple import errors during test collection; no code behavior changed.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🐰 I munched a call, then cached it fast,

No more double-hops from present to past.
A tiny cleanup, a vanished test page,
My whiskers twitch — the code runs sage. 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Description check — ⚠️ Warning
Explanation: The description provides context (what, why, impact, measurement) but lacks the structured template sections (Summary, Changes, Type, Affected layers, Test plan, Checklist) required by the repository.
Resolution: Use the repository's PR description template with all required sections: Summary, Changes, Type, Affected layers, Test plan, and Checklist. Mark applicable checkboxes and clearly list test verification steps.

✅ Passed checks (2 passed)

Title check — ✅ Passed: The title clearly identifies the main change (optimizing PyArrow .as_py() calls for performance), directly matching the core modification in serialize_column_to_json.
Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which meets the required 80.00% threshold.


coderabbitai bot left a comment


🧹 Nitpick comments (1)
src/bioetl/domain/serialization.py (1)

266-273: Consider using to_pylist() for more efficient bulk conversion instead of per-element .as_py() calls.

The walrus operator approach works correctly, but using col.to_pylist() delegates the entire conversion to PyArrow's C++ layer, avoiding per-scalar Python calls:

♻️ Suggested refactor
 def serialize_column_to_json(col: pa.ChunkedArray) -> pa.Array:
     """Serialize a complex Arrow column into stringified JSON values."""
-    # Use walrus operator to cache expensive .as_py() call
     vals = [
-        serialize_to_json(val) if (val := v.as_py()) is not None else None
-        for v in col
+        serialize_to_json(val) if val is not None else None
+        for val in col.to_pylist()
     ]
     return pa.array(vals, type=pa.string())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bioetl/domain/serialization.py` around lines 266 - 273, The current
serialize_column_to_json uses per-element v.as_py() calls which are slow; change
it to call col.to_pylist() and iterate that Python list, passing each element to
serialize_to_json (preserving None handling) and then build the output
pa.array(..., type=pa.string()); update serialize_column_to_json to use
col.to_pylist() and keep serialize_to_json for element serialization to leverage
PyArrow's C++ bulk conversion.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 3b621fa0-f76e-4a9f-88b8-c2b480ef4ecc

📥 Commits

Reviewing files that changed from the base of the PR and between 83d9d1b and 499191c.

📒 Files selected for processing (1)
  • src/bioetl/domain/serialization.py

google-labs-jules bot and others added 3 commits March 30, 2026 22:23
Co-authored-by: SatoryKono <13055362+SatoryKono@users.noreply.github.com>
Co-authored-by: SatoryKono <13055362+SatoryKono@users.noreply.github.com>
Co-authored-by: SatoryKono <13055362+SatoryKono@users.noreply.github.com>

Labels

layer:domain Domain layer
