
[Scheduler] Add metrics for Cluster round trip#2269

Draft
DiegoTavares wants to merge 11 commits into AcademySoftwareFoundation:master from DiegoTavares:sched_metric_clusters

Conversation

@DiegoTavares
Collaborator

@DiegoTavares DiegoTavares commented May 5, 2026

Add metrics to measure how long a cluster takes to round trip the scheduler loop.

Fix minor issues and refactor a confusing `if let Some` construct.

LLM usage disclosure

Claude Opus was used to investigate panic surfaces that might lead to abandoned clusters and to implement the metric collecting logic.

Summary by CodeRabbit

  • New Features

    • Added cluster round-trip latency metrics for improved observability.
  • Bug Fixes

    • Enhanced error handling and unwind-safety in resource accounting processes.
  • Performance Improvements

    • Optimized tag processing configuration with reduced chunk sizes.
    • Improved permit management in layer processing.

`scc` silently drops inserts where the key already exists.
Also ensure `all_sleeping_rounds` is reset at the end of each full iteration.
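The scc behavior mentioned above can be illustrated with a std-only sketch. `insert_like` and `upsert_like` are hypothetical stand-ins for scc's `insert_sync` and `upsert_sync`; the actual crate API is not reproduced here.

```rust
use std::collections::HashMap;

// Hypothetical std-only stand-ins for scc's semantics: insert_sync fails
// (the write is dropped) when the key already exists, while upsert_sync
// updates-or-inserts.
fn insert_like(map: &mut HashMap<u64, u32>, key: u64, permits: u32) -> bool {
    if map.contains_key(&key) {
        return false; // write silently dropped: stale value kept
    }
    map.insert(key, permits);
    true
}

fn upsert_like(map: &mut HashMap<u64, u32>, key: u64, permits: u32) {
    map.insert(key, permits); // update-or-insert: a re-grant always lands
}

fn main() {
    let mut permits = HashMap::new();
    assert!(insert_like(&mut permits, 7, 2));
    assert!(!insert_like(&mut permits, 7, 5)); // dropped, value stays 2
    assert_eq!(permits[&7], 2);
    upsert_like(&mut permits, 7, 5); // refreshed
    assert_eq!(permits[&7], 5);
}
```

With insert-only semantics, re-granting a permit for an existing layer ID leaves the old value in place, which is the failure mode the commit addresses.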
@coderabbitai
Contributor

coderabbitai Bot commented May 5, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: adf99737-97c1-4790-b583-6d28cc0cbace

📝 Walkthrough

Walkthrough

Adds per-cluster round-trip timing and a histogram metric, refactors cluster feed streaming to use a dedicated control channel and per-cluster timestamps, improves unwind-safety in async resource loops, switches permit grant to upsert behavior, and reduces default tag-chunk sizes in scheduler config.

Changes

Cluster feed + metrics

| Layer | File(s) | Summary |
|---|---|---|
| Metrics Definition | rust/crates/scheduler/src/metrics/mod.rs | Adds `CLUSTER_ROUND_TRIP_SECONDS` histogram and `observe_cluster_round_trip(duration: Duration)` function. |
| Control Channel + API Wiring | rust/crates/scheduler/src/cluster.rs | Replaces the previous cancel channel with a dedicated small `feed_sender`/`feed_receiver` control channel while continuing to return `Sender<FeedMessage>` to callers. |
| Per-cluster Timing State | rust/crates/scheduler/src/cluster.rs | Introduces `last_sent_map: Arc<Mutex<HashMap<Cluster, Instant>>>` and a producer handle to track emission timestamps. |
| Instrumentation on Emit | rust/crates/scheduler/src/cluster.rs | On sending a cluster, records `now` and, if a previous timestamp exists, calls `observe_cluster_round_trip(now.duration_since(prev))`. |
| Imports | rust/crates/scheduler/src/cluster.rs | Adds `Instant`, `FutureExt`, and `metrics::observe_cluster_round_trip` to imports. |
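The emit-side instrumentation described above can be sketched with std types only. `record_emit` is a hypothetical helper (not the PR's code): the `String` key stands in for the `Cluster` type, and returning the gap stands in for the `observe_cluster_round_trip` histogram call.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Sketch of per-cluster round-trip measurement: store `now` on each emit
// and, if a previous timestamp exists, report the elapsed gap.
fn record_emit(
    last_sent: &mut HashMap<String, Instant>,
    cluster: &str,
    now: Instant,
) -> Option<Duration> {
    // HashMap::insert returns the previous value, so one call both stores
    // the new timestamp and yields the prior one for the measurement.
    last_sent
        .insert(cluster.to_string(), now)
        .map(|prev| now.duration_since(prev))
}

fn main() {
    let mut m = HashMap::new();
    let t0 = Instant::now();
    assert!(record_emit(&mut m, "render", t0).is_none()); // first emit: no gap yet
    let t1 = t0 + Duration::from_secs(3);
    assert_eq!(record_emit(&mut m, "render", t1), Some(Duration::from_secs(3)));
}
```

The first emit for a cluster produces no observation; every subsequent emit measures the full loop round trip since the previous one.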

Robustness and behavior tweaks

| Layer | File(s) | Summary |
|---|---|---|
| Async Unwind Safety | rust/crates/scheduler/src/resource_accounting.rs | Wraps resource and subscription recalculation loops with `AssertUnwindSafe` and uses `.catch_unwind()` to prevent unwinding across await points. Adds `panic::AssertUnwindSafe` and `futures::FutureExt` imports. |
| Overflow Handling | rust/crates/scheduler/src/resource_accounting.rs | Replaces the `unwrap_or_else` fallback on `try_into` with an explicit match, warning and preserving existing values on overflow. |
| Permit Upsert | rust/crates/scheduler/src/pipeline/layer_permit.rs | `insert_sync` → `upsert_sync` when granting a new permit, enabling update-or-insert semantics for existing layer IDs. |
| Queue Defaults | rust/config/scheduler.yaml, rust/crates/scheduler/src/config/mod.rs | Reduces `manual_tags_chunk_size` from 100 → 50 and `hostname_tags_chunk_size` from 300 → 50 in defaults. |
| Minor Formatting | rust/crates/scheduler/src/config/mod.rs | `show_names` initialization reformatted to a single-line form (no semantic change). |
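The overflow-handling change described above (explicit match on `try_into`, warn and keep the current value) can be sketched as follows; the function name and message are illustrative, not the PR's actual code.

```rust
// Sketch: instead of a silent unwrap_or_else fallback, match explicitly on
// the conversion, log a warning, and preserve the existing value when the
// incoming count does not fit.
fn apply_count(new_value: i64, current: u32) -> u32 {
    match u32::try_from(new_value) {
        Ok(v) => v,
        Err(_) => {
            eprintln!("count {new_value} does not fit in u32; keeping {current}");
            current // preserve existing value on overflow/underflow
        }
    }
}

fn main() {
    assert_eq!(apply_count(5, 1), 5);        // in range: value applied
    assert_eq!(apply_count(-1, 9), 9);       // negative: kept
    assert_eq!(apply_count(1_i64 << 40, 9), 9); // too large: kept
}
```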

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • lithorus
  • ramonfigueiredo

Poem

🐰 I timed the hops from burrow to tree,
Feeds found a new path, control wings set free,
Panics now loosened, permits softly change,
Chunk sizes trimmed small — the trail's rearranged. 🥕

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title directly matches the main objective of the PR: adding metrics to measure cluster round-trip duration through the scheduler loop. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient (required threshold: 80.00%). |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |



@DiegoTavares DiegoTavares marked this pull request as ready for review May 5, 2026 21:12

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
rust/crates/scheduler/src/resource_accounting.rs (1)

120-157: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

These refresh loops still die on the first panic and need unwind handling restructured.

Lines 133 and 155 log panics from catch_unwind(), but because it wraps the entire async block, the task exits immediately afterward. This permanently disables resource recomputation and subscription cache refresh, leaving the scheduler to make decisions from stale accounting data.

The unwind boundary must move inside each loop iteration so the task logs the panic and continues running rather than terminating on the first error.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@rust/crates/scheduler/src/resource_accounting.rs` around lines 120 - 157,
Both background tasks currently wrap the whole async block with catch_unwind so
a single panic stops the task; instead, move the unwind boundary inside the loop
and catch per-iteration panics so the loop continues. Concretely: in the
resource loop wrap each iteration's call to
dao.recompute_all_from_proc(&target_shows_opt).await with
AssertUnwindSafe(...).catch_unwind().await and on Err(e) log the panic (as
currently done) and continue the loop; likewise in the subscription loop wrap
each iteration's recalculate_and_refresh(&cache, &dao, &target_shows).await with
AssertUnwindSafe(...).catch_unwind().await, log on Err(e) and continue; keep the
outer interval/tick logic and CONFIG.queue.* intervals unchanged.
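The restructuring the prompt describes can be sketched synchronously with std's `catch_unwind`. In the real async code the wrapped unit would be `AssertUnwindSafe(fut).catch_unwind().await`; this stand-alone, hypothetical version shows only the key point, a per-iteration unwind boundary.

```rust
use std::panic::{self, AssertUnwindSafe};

// Per-iteration unwind boundary: one panicking tick is logged and skipped,
// and the loop keeps running instead of the whole task dying on the first
// panic.
fn refresh_loop(ticks: usize, work: impl Fn(usize)) -> usize {
    let mut completed = 0;
    for i in 0..ticks {
        // The boundary sits inside the loop body, not around the whole loop.
        match panic::catch_unwind(AssertUnwindSafe(|| work(i))) {
            Ok(()) => completed += 1,
            Err(_) => eprintln!("Iteration {i} panicked; continuing"),
        }
    }
    completed
}

fn main() {
    // Tick 2 panics, but the loop still completes the remaining ticks.
    let done = refresh_loop(5, |i| {
        if i == 2 {
            panic!("boom");
        }
    });
    assert_eq!(done, 4);
}
```

Wrapping the whole loop instead would return after the first `Err`, which is exactly the stalled-task failure mode the comment flags.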
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@rust/crates/scheduler/src/cluster.rs`:
- Around line 379-520: The panic handling currently wraps the whole producer and
receiver async blocks (the async move tasks that contain the main loops in the
cluster feed), so any panic aborts the entire task; change this by moving the
AssertUnwindSafe(...).catch_unwind().await inside each loop iteration: for the
producer loop (the async block that reads self.clusters, sends via sender, and
updates last_sent_map_producer) wrap the per-iteration work in
AssertUnwindSafe(...).catch_unwind().await, log the error (e.g., "Iteration
panicked: {:?}"), and continue the loop so the feed keeps running; do the same
for the receiver loop that matches on FeedMessage (the async block that awaits
feed_receiver.recv(), updates sleep_map and last_sent_map_receiver, and handles
FeedMessage::Stop), so a single iteration panic is logged and skipped without
terminating the whole spawn.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: fcbc6f53-5ca3-4275-a8ce-20bb44a4c5dd

📥 Commits

Reviewing files that changed from the base of the PR and between 4d98183 and 7f2c512.

📒 Files selected for processing (4)
  • rust/crates/scheduler/src/cluster.rs
  • rust/crates/scheduler/src/metrics/mod.rs
  • rust/crates/scheduler/src/pipeline/layer_permit.rs
  • rust/crates/scheduler/src/resource_accounting.rs

…queries

Phase 1 scheduler quick wins: empty-cluster sleep, LIMIT, refresh guard

- Empty-cluster sleep now configurable (cluster_empty_sleep, default 30s).
- QUERY_PENDING_BY_SHOW_FACILITY_TAG capped via max_jobs_per_cluster_pass
  (default 20). Strict ORDER BY priority DESC; low-priority jobs deferred.
- HostCacheService skips overlapping refresh ticks via an AtomicBool guard.

Add V40 indexes for scheduler pending-job query

GIN on layer.str_tags (array overlap), composite partial on
job(pk_show, pk_facility, str_state, b_paused) WHERE PENDING/not paused,
partial on layer_stat(pk_layer) WHERE int_waiting_count > 0.

Plain CREATE INDEX (Flyway 5.2.0 wraps in a transaction, which Postgres
rejects for CONCURRENTLY); apply with CONCURRENTLY via psql before Flyway
when running against populated production tables.

Drop LOWER(pk_facility) hack and rewrite QUERY_PENDING with EXISTS

Scheduler-side facility id is now String (was Uuid). The dao::helpers
parse_uuid path was lower-casing every facility round-trip, which forced
LOWER() compares in 6 SQL sites. Cuebot writes canonical casing on insert,
so a String swap removes the hack at the source.

QUERY_PENDING_BY_SHOW_FACILITY_TAG rewritten to a single bookable_shows
CTE plus EXISTS subquery, removing the layer ⨝ layer_stat ⨝ DISTINCT
cardinality blowup. Folder cap split into outer early-out and per-layer
fit inside the EXISTS.
@DiegoTavares DiegoTavares marked this pull request as draft May 7, 2026 19:45
