# Pipelines

This document describes how the `dstack` server implements background processing via so-called "pipelines".

*Historical context: `dstack` used to do all background processing via scheduled tasks. A scheduled task would process a specific resource type, such as volumes or runs, by keeping a DB transaction open for the entire processing duration and holding the resource lock with `SELECT FOR UPDATE` (or an in-memory lock on SQLite). This approach didn't scale well because the number of DB connections was a huge bottleneck. Pipelines replaced scheduled tasks: they do all the heavy processing outside of DB transactions and write locks to DB columns.*

## Overview

* Resources are continuously processed in the background by pipelines. A pipeline consists of a fetcher, workers, and a heartbeater.
* A fetcher selects rows to be processed from the DB, marks them as locked in the DB, and puts them into an in-memory queue.
* Workers consume rows from the in-memory queue, process the rows, and unlock them.
* The locking (unlocking) is done by setting (unsetting) `lock_expires_at`, `lock_token`, and `lock_owner` (see the model sketch after this list).
* If the replica/pipeline dies, the rows stay locked in the DB. Another replica picks up the rows after `lock_expires_at`.
* `lock_token` prevents a stale replica/pipeline from updating rows already picked up by a new replica.
* `lock_owner` stores the pipeline that locked the row so that only that pipeline can recover it if it's stale.
* A heartbeater tracks all rows in the pipeline (in the queue or in processing) and updates the lock expiration. This allows setting a short `lock_expires_at` and picking up stale rows quickly.
* A fetcher performs the fetch when the queue size drops below a configured lower limit. It uses exponential retry delays between empty fetches, thus reducing load on the DB.
* There is a fetch hint mechanism that services can use to notify the pipelines within the replica; in that case the fetcher stops sleeping and fetches immediately.
* Each pipeline locks one main resource but may lock related resources as well. It's not necessary to heartbeat related resources if the pipeline ensures no one else can re-lock them. This is typically done by setting and respecting `lock_owner`.
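
As a reference, here is a minimal sketch of the lock columns on a processed model, assuming SQLAlchemy 2.0-style declarative models. The column names match the ones above; `VolumeModel` and everything else is illustrative:

```python
from datetime import datetime
from typing import Optional

from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class BaseModel(DeclarativeBase):
    pass


class VolumeModel(BaseModel):  # hypothetical resource model
    __tablename__ = "volumes"

    id: Mapped[int] = mapped_column(primary_key=True)
    status: Mapped[str] = mapped_column()
    # The row is unlocked if lock_expires_at is NULL or in the past.
    lock_expires_at: Mapped[Optional[datetime]] = mapped_column()
    # Random token set on each lock acquisition; guards the final apply.
    lock_token: Mapped[Optional[str]] = mapped_column()
    # Name of the pipeline that locked the row; only it may recover a stale lock.
    lock_owner: Mapped[Optional[str]] = mapped_column()
    # Used for stable fetch ordering and retry pacing.
    last_processed_at: Mapped[Optional[datetime]] = mapped_column()
```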

Related notes:

* All write APIs must respect DB-level locks. An endpoint can either try to acquire the lock with a timeout and return an error on failure, or provide an async API by storing the request in the DB.

## Implementation checklist

Brief checklist for implementing a new pipeline:

1. Fetcher locks only rows that are ready for processing: `status/time` filters, `lock_expires_at` is empty or expired, and `lock_owner` is empty or equal to the pipeline name. Keep the fetch order stable with `last_processed_at`.
2. Fetcher takes row locks with `skip_locked` and updates `lock_expires_at`, `lock_token`, `lock_owner` before enqueueing items (see the fetcher sketch after this checklist).
3. Worker keeps heavy work outside DB sessions. DB sessions should be short and used only for refetch/locking and the final apply.
4. Apply stage updates rows using update maps/update rows, not by relying on mutating detached ORM models.
5. Main apply update is guarded by `id + lock_token`. If the update affects `0` rows, the item is stale and processing results must not be applied.
6. Successful apply updates `last_processed_at` and unlocks resources that were locked by this item.
7. If a related lock is unavailable, reset the main lock for retry: keep `lock_owner`, clear `lock_token` and `lock_expires_at`, and set `last_processed_at` to now.
8. Register the pipeline in `PipelineManager` and hint fetch from services after commit via `pipeline_hinter.hint_fetch(Model.__name__)`.
9. Add minimal tests: fetch eligibility/order, successful unlock path, stale lock token path, and related lock contention retry path.
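
A sketch of a fetch covering items 1 and 2, assuming SQLAlchemy async sessions and the hypothetical `VolumeModel` from the overview sketch (the pipeline name, status value, and constants are illustrative):

```python
import secrets
from datetime import datetime, timedelta, timezone

from sqlalchemy import select, update
from sqlalchemy.ext.asyncio import AsyncSession

PIPELINE_NAME = "VolumePipeline"  # hypothetical pipeline name
LOCK_TTL = timedelta(seconds=30)  # short TTL; the heartbeater extends it
BATCH_SIZE = 10


async def fetch_batch(session: AsyncSession) -> list[dict]:
    now = datetime.now(timezone.utc)
    # Item 1: only rows ready for processing, in stable order.
    res = await session.execute(
        select(VolumeModel.id)
        .where(
            VolumeModel.status == "submitted",
            VolumeModel.lock_expires_at.is_(None) | (VolumeModel.lock_expires_at < now),
            VolumeModel.lock_owner.is_(None) | (VolumeModel.lock_owner == PIPELINE_NAME),
        )
        .order_by(VolumeModel.last_processed_at)
        .limit(BATCH_SIZE)
        # Item 2: take row locks, skipping rows locked by concurrent fetchers.
        .with_for_update(skip_locked=True)
    )
    items = [{"id": id_, "lock_token": secrets.token_hex(16)} for id_ in res.scalars()]
    for item in items:
        await session.execute(
            update(VolumeModel)
            .where(VolumeModel.id == item["id"])
            .values(
                lock_expires_at=now + LOCK_TTL,
                lock_token=item["lock_token"],
                lock_owner=PIPELINE_NAME,
            )
        )
    await session.commit()
    return items  # to be put into the in-memory queue
```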

## Implementation patterns

**Guarded apply by lock token**

When writing processing results, update the main row with a filter on both `id` and `lock_token`. This guarantees that only the worker that still owns the lock can apply its results. If the update affects no rows, treat the item as stale and skip applying other changes (status changes, related updates, events). A stale item means another worker or replica has already continued processing.
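
A minimal sketch of this guard, assuming the hypothetical `VolumeModel` from the overview and an item dict carrying the `id` and `lock_token` taken at fetch time:

```python
from datetime import datetime, timezone

from sqlalchemy import update
from sqlalchemy.ext.asyncio import AsyncSession


async def apply_result(session: AsyncSession, item: dict, new_status: str) -> bool:
    result = await session.execute(
        update(VolumeModel)
        # Filter by both id and lock_token: only the lock owner can apply.
        .where(
            VolumeModel.id == item["id"],
            VolumeModel.lock_token == item["lock_token"],
        )
        .values(
            status=new_status,
            last_processed_at=datetime.now(timezone.utc),
            # Successful apply also unlocks the row.
            lock_expires_at=None,
            lock_token=None,
            lock_owner=None,
        )
    )
    if result.rowcount == 0:
        # Stale item: another replica re-locked the row. Skip all other writes.
        await session.rollback()
        return False
    # ...apply related updates and events here, then commit...
    await session.commit()
    return True
```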

**Locking many related resources**

A pipeline may need to lock a potentially big set of related resources, e.g. the fleet pipeline locking all of the fleet's instances. For this, do one `SELECT FOR UPDATE` of non-locked instances and one `SELECT` to see how many instances there are in total, and check whether you managed to lock all of them. If not, release the main lock and retry processing on another fetch iteration. You may keep `lock_owner` on the main resource or set `lock_owner` on the locked related resources and make other pipelines respect it, which guarantees eventual locking of all related resources and avoids lock starvation.
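
A sketch under the same assumptions, with a hypothetical `InstanceModel` carrying the lock columns and a `fleet_id` foreign key:

```python
from datetime import datetime, timezone

from sqlalchemy import func, select
from sqlalchemy.ext.asyncio import AsyncSession


async def try_lock_fleet_instances(session: AsyncSession, fleet_id: int) -> bool:
    now = datetime.now(timezone.utc)
    # One SELECT FOR UPDATE of the fleet's instances that are not locked.
    res = await session.execute(
        select(InstanceModel.id)
        .where(
            InstanceModel.fleet_id == fleet_id,
            InstanceModel.lock_expires_at.is_(None) | (InstanceModel.lock_expires_at < now),
        )
        .with_for_update(skip_locked=True)
    )
    locked_count = len(list(res.scalars()))
    # One SELECT to count all of the fleet's instances.
    total = await session.scalar(
        select(func.count(InstanceModel.id)).where(InstanceModel.fleet_id == fleet_id)
    )
    # Success only if every instance was available for locking; on success,
    # set the lock columns on the locked rows as in the fetcher sketch.
    return locked_count == total
```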

**Locking a shared related resource**

Multiple main resources may need to lock the same related resource, e.g. multiple jobs may need to change the shared instance. In this case it's not sufficient to set `lock_owner` on the related resource to the pipeline name because workers processing different main resources can still race with each other. To avoid heartbeating the related resource, you may include the main resource id in `lock_owner`, e.g. set `lock_owner = f"{Pipeline.__name__}:{item.id}"`.
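
A tiny sketch of composing such per-item owner values, with a hypothetical `JobPipeline` class:

```python
class JobPipeline:  # hypothetical pipeline class
    ...


def make_lock_owner(item_id: str) -> str:
    # Including the main resource id distinguishes workers of the same
    # pipeline that process different jobs sharing one instance.
    return f"{JobPipeline.__name__}:{item_id}"


assert make_lock_owner("42") == "JobPipeline:42"
```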

**Reset-and-retry when related lock is unavailable**

If a worker cannot lock a required related resource, it should release only the main lock state needed for fast retry: unset `lock_token` and `lock_expires_at`, keep `lock_owner`, and set `last_processed_at` to now. This avoids long waiting and lets the same pipeline retry quickly on the next fetch iteration while other pipelines can still respect ownership intent.
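
A sketch of this reset, again assuming the hypothetical `VolumeModel` and the fetcher's item dict:

```python
from datetime import datetime, timezone

from sqlalchemy import update
from sqlalchemy.ext.asyncio import AsyncSession


async def reset_for_retry(session: AsyncSession, item: dict) -> None:
    await session.execute(
        update(VolumeModel)
        .where(
            VolumeModel.id == item["id"],
            VolumeModel.lock_token == item["lock_token"],
        )
        .values(
            lock_token=None,       # release the lock for the next fetch...
            lock_expires_at=None,
            # ...but keep lock_owner so other pipelines respect ownership.
            last_processed_at=datetime.now(timezone.utc),  # pace the retry
        )
    )
    await session.commit()
```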

**Dealing with side effects**

If processing has side effects and the apply phase fails due to a lock mismatch, there are several options: a) revert the side effects; b) make processing idempotent, i.e. the next processing iteration detects the side effects and does not perform duplicate actions; c) as a temporary solution, log the side effects as errors and warn the user about possible issues such as orphaned instances.

**Bulk apply with one consistent current time**

When apply needs to update multiple rows (main + related resources), build update maps/update rows first and resolve current-time placeholders once in the apply transaction using `NOW_PLACEHOLDER` + `resolve_now_placeholders()`. This keeps timestamps consistent across all rows and avoids subtle ordering bugs when the same processing pass writes several `*_at` fields.
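
A sketch of the idea; the actual `NOW_PLACEHOLDER` and `resolve_now_placeholders()` in the codebase may differ in shape and signature:

```python
from datetime import datetime, timezone

NOW_PLACEHOLDER = object()  # sentinel put into update maps during processing


def resolve_now_placeholders(update_maps: list[dict]) -> list[dict]:
    # One timestamp for the whole apply, so all *_at fields agree.
    now = datetime.now(timezone.utc)
    return [
        {k: (now if v is NOW_PLACEHOLDER else v) for k, v in m.items()}
        for m in update_maps
    ]


# Usage: build maps during processing, resolve once inside the apply transaction.
updates = [
    {"status": "active", "last_processed_at": NOW_PLACEHOLDER},
    {"status": "idle", "terminated_at": NOW_PLACEHOLDER},
]
updates = resolve_now_placeholders(updates)
```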

## Performance analysis

* Pipeline throughput = workers_num / worker_processing_time, so quick tasks easily give high-throughput pipelines: e.g. a 1-second task with 20 workers yields 1200 tasks/min, while a slow 30-second task yields only 40 tasks/min with the same number of workers (see the quick calculation after this list). We can increase the number of workers, but peak memory usage will grow proportionally. In general, workers should be optimized to be as quick as possible to improve throughput.
* Processing latency (wait) is close to 0 due to fetch hints if the pipeline is not saturated. In general, latency = queue_size / throughput.
* The in-memory queue's maxsize caps both memory usage and recovery time after crashes (the number of locked items to retry).
* Fetchers' DB load is proportional to the number of pipelines and is expected to be negligible. Workers can put considerable read/write load on the DB, as it's proportional to the number of workers. This can be optimized by batching workers' writes. Workers do processing outside of transactions, so DB connections won't be a bottleneck.
* There is a risk of lock starvation if a worker needs to lock all related resources. This is mitigated by 1) related pipelines checking `lock_owner` and skipping locking to let the parent pipeline eventually acquire all the locks, and 2) doing the related-resource locking only on paths that require it.
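
A quick back-of-envelope calculation of the formulas above (all numbers illustrative):

```python
workers_num = 20

# Throughput = workers_num / worker_processing_time.
quick_task_s = 1.0
slow_task_s = 30.0
print(workers_num / quick_task_s * 60)  # 1200 tasks/min
print(workers_num / slow_task_s * 60)   # 40 tasks/min

# Latency = queue_size / throughput (when the pipeline is saturated).
queue_size = 100
throughput = workers_num / quick_task_s  # 20 tasks/s
print(queue_size / throughput)           # 5 seconds of wait
```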