[Scheduler] Add metrics for Cluster round trip #2269
Conversation
`scc` silently drops inserts where the key already exists.
Also ensure `all_sleeping_rounds` is reset at the end of each full iteration.
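For context on that first comment: `scc::HashMap::insert` returns `Err((key, value))` and leaves the stored value untouched when the key is already present, which is how a grant can be silently dropped. A minimal sketch of the upsert-style fix, assuming scc 2.x; the permit map and values are hypothetical, not this PR's actual code:

```rust
use scc::HashMap;

fn main() {
    // Hypothetical permit map: layer id -> granted permits.
    let permits: HashMap<u64, u32> = HashMap::new();

    // A bare `insert` keeps the old value and returns Err((key, value))
    // when the key already exists:
    assert!(permits.insert(7, 1).is_ok());
    assert!(permits.insert(7, 2).is_err()); // silently dropped grant

    // Upsert-style fix: the entry API overwrites the value whether or
    // not the key was already present.
    *permits.entry(7).or_insert(2).get_mut() = 2;
    assert_eq!(permits.read(&7, |_, v| *v), Some(2));
}
```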
📝 Walkthrough
Adds per-cluster round-trip timing and a histogram metric (a sketch follows the change list), refactors cluster feed streaming to use a dedicated control channel and per-cluster timestamps, improves unwind-safety in async resource loops, switches permit grant to upsert behavior, and reduces default tag-chunk sizes in scheduler config.

Changes
- Cluster feed + metrics
- Robustness and behavior tweaks
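A sketch of what the round-trip histogram half of that change could look like, assuming the `prometheus` and `once_cell` crates; the metric name, label, and helper are illustrative, not the PR's actual code:

```rust
use std::time::Instant;

use once_cell::sync::Lazy;
use prometheus::{register_histogram_vec, HistogramVec};

// Hypothetical metric: one histogram, labeled per cluster.
static CLUSTER_ROUND_TRIP_SECONDS: Lazy<HistogramVec> = Lazy::new(|| {
    register_histogram_vec!(
        "scheduler_cluster_round_trip_seconds",
        "Time for a cluster to complete one scheduler loop round trip",
        &["cluster"]
    )
    .expect("failed to register histogram")
});

// Record a send timestamp per cluster, then observe the elapsed time when
// the cluster comes back around the scheduler loop.
fn observe_round_trip(cluster: &str, sent_at: Instant) {
    CLUSTER_ROUND_TRIP_SECONDS
        .with_label_values(&[cluster])
        .observe(sent_at.elapsed().as_secs_f64());
}
```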
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 1
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
rust/crates/scheduler/src/resource_accounting.rs (1)
120-157: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
These refresh loops still die on the first panic and need their unwind handling restructured.
Lines 133 and 155 log panics from `catch_unwind()`, but because it wraps the entire async block, the task exits immediately afterward. This permanently disables resource recomputation and subscription cache refresh, leaving the scheduler to make decisions from stale accounting data. The unwind boundary must move inside each loop iteration so the task logs the panic and continues running rather than terminating on the first error.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@rust/crates/scheduler/src/resource_accounting.rs` around lines 120-157: Both background tasks currently wrap the whole async block with catch_unwind so a single panic stops the task; instead, move the unwind boundary inside the loop and catch per-iteration panics so the loop continues. Concretely: in the resource loop wrap each iteration's call to dao.recompute_all_from_proc(&target_shows_opt).await with AssertUnwindSafe(...).catch_unwind().await and on Err(e) log the panic (as currently done) and continue the loop; likewise in the subscription loop wrap each iteration's recalculate_and_refresh(&cache, &dao, &target_shows).await with AssertUnwindSafe(...).catch_unwind().await, log on Err(e) and continue; keep the outer interval/tick logic and CONFIG.queue.* intervals unchanged.
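Roughly what that prompt asks for, as a hedged sketch: the per-iteration unwind boundary, with a stub DAO standing in for the real one and a fixed interval in place of the CONFIG.queue.* fields.

```rust
use std::panic::AssertUnwindSafe;
use std::time::Duration;

use futures::FutureExt;

// Stub standing in for the real DAO; only the shape matters here.
struct Dao;
impl Dao {
    async fn recompute_all_from_proc(&self, _shows: &Option<Vec<String>>) {
        // real recompute work would live here
    }
}

#[tokio::main]
async fn main() {
    let dao = Dao;
    let target_shows_opt: Option<Vec<String>> = None;
    let mut ticker = tokio::time::interval(Duration::from_secs(30));
    loop {
        ticker.tick().await;
        // Unwind boundary per iteration: a panic in one recompute pass is
        // logged and the loop keeps ticking instead of killing the task.
        if let Err(panic) =
            AssertUnwindSafe(dao.recompute_all_from_proc(&target_shows_opt))
                .catch_unwind()
                .await
        {
            eprintln!("resource recompute panicked: {:?}", panic);
        }
    }
}
```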
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@rust/crates/scheduler/src/cluster.rs`:
- Around line 379-520: The panic handling currently wraps the whole producer and
receiver async blocks (the async move tasks that contain the main loops in the
cluster feed), so any panic aborts the entire task; change this by moving the
AssertUnwindSafe(...).catch_unwind().await inside each loop iteration: for the
producer loop (the async block that reads self.clusters, sends via sender, and
updates last_sent_map_producer) wrap the per-iteration work in
AssertUnwindSafe(...).catch_unwind().await, log the error (e.g., "Iteration
panicked: {:?}"), and continue the loop so the feed keeps running; do the same
for the receiver loop that matches on FeedMessage (the async block that awaits
feed_receiver.recv(), updates sleep_map and last_sent_map_receiver, and handles
FeedMessage::Stop), so a single iteration panic is logged and skipped without
terminating the whole spawn.
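The same pattern applied to the receiver side of that feed, as a sketch; `FeedMessage::Stop` and the map names come from the review text, while the `Data` variant and channel setup are placeholders:

```rust
use std::panic::AssertUnwindSafe;

use futures::FutureExt;
use tokio::sync::mpsc;

// Placeholder for the real feed message type.
enum FeedMessage {
    Data(String),
    Stop,
}

#[tokio::main]
async fn main() {
    let (_sender, mut feed_receiver) = mpsc::channel::<FeedMessage>(64);
    while let Some(msg) = feed_receiver.recv().await {
        // Per-iteration boundary: report whether to stop so FeedMessage::Stop
        // still works, and treat a panicked iteration as "keep going".
        let stop = AssertUnwindSafe(async {
            match msg {
                FeedMessage::Stop => true,
                FeedMessage::Data(_update) => {
                    // ... update sleep_map / last_sent_map_receiver here ...
                    false
                }
            }
        })
        .catch_unwind()
        .await
        .unwrap_or_else(|panic| {
            eprintln!("Iteration panicked: {:?}", panic);
            false // skip the failed message instead of ending the task
        });
        if stop {
            break;
        }
    }
}
```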
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fcbc6f53-5ca3-4275-a8ce-20bb44a4c5dd
📒 Files selected for processing (4)
- rust/crates/scheduler/src/cluster.rs
- rust/crates/scheduler/src/metrics/mod.rs
- rust/crates/scheduler/src/pipeline/layer_permit.rs
- rust/crates/scheduler/src/resource_accounting.rs
…queries Phase 1 scheduler quick wins: empty-cluster sleep, LIMIT, refresh guard
- Empty-cluster sleep now configurable (cluster_empty_sleep, default 30s).
- QUERY_PENDING_BY_SHOW_FACILITY_TAG capped via max_jobs_per_cluster_pass (default 20). Strict ORDER BY priority DESC; low-priority jobs deferred.
- HostCacheService skips overlapping refresh ticks via an AtomicBool guard (sketched below, after the sign-off).

Add V40 indexes for scheduler pending-job query: GIN on layer.str_tags (array overlap), composite partial on job(pk_show, pk_facility, str_state, b_paused) WHERE PENDING/not paused, partial on layer_stat(pk_layer) WHERE int_waiting_count > 0. Plain CREATE INDEX (Flyway 5.2.0 wraps in a transaction, which Postgres rejects for CONCURRENTLY); apply with CONCURRENTLY via psql before Flyway when running against populated production tables.

Drop LOWER(pk_facility) hack and rewrite QUERY_PENDING with EXISTS. Scheduler-side facility id is now String (was Uuid). The dao::helpers parse_uuid path was lower-casing every facility round-trip, which forced LOWER() compares in 6 SQL sites. Cuebot writes canonical casing on insert, so a String swap removes the hack at the source. QUERY_PENDING_BY_SHOW_FACILITY_TAG rewritten to a single bookable_shows CTE plus EXISTS subquery, removing the layer ⨝ layer_stat ⨝ DISTINCT cardinality blowup. Folder cap split into outer early-out and per-layer fit inside the EXISTS.
Signed-off-by: Diego Tavares <dtavares@imageworks.com>
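The refresh-guard idea from the commit above, sketched with a hypothetical tick handler; only the AtomicBool skip-if-busy pattern itself is from the source:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static REFRESH_IN_FLIGHT: AtomicBool = AtomicBool::new(false);

async fn on_refresh_tick() {
    // `swap` returns the previous value: if a refresh is already running,
    // skip this tick rather than stacking overlapping refreshes.
    if REFRESH_IN_FLIGHT.swap(true, Ordering::Acquire) {
        return;
    }
    // ... perform the host cache refresh here ...
    REFRESH_IN_FLIGHT.store(false, Ordering::Release);
}
```

One caveat worth noting with this shape: if the refresh body can panic, the flag is never cleared and all later ticks are skipped, so the reset should live in a guard (or the body be kept panic-free).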
Add metrics to measure how long a cluster takes to round trip the scheduler loop.
Fix minor issues and refactor a confusing `if let Some` construct.

LLM usage disclosure
Claude Opus was used to investigate panic surfaces that might lead to abandoned clusters and to implement the metric collecting logic.
Summary by CodeRabbit
- New Features
- Bug Fixes
- Performance Improvements