
Mass transfer scaling#349

Open
Lucius1274 wants to merge 4 commits into
mainfrom
mass_transfer_scaling

Conversation

Collaborator

@Lucius1274 Lucius1274 commented May 11, 2026

Summary by CodeRabbit

  • New Features

    • Added mass transfer worker autoscaling capability with cron-based scheduling for production environments.
    • Introduced three new CLI commands: scale-mass-transfer-worker, configure-mass-transfer-worker-cron, and remove-mass-transfer-worker-cron.
    • Added configuration environment variables for autoscaling replicas and cron schedules.
  • Tests

    • Added comprehensive test coverage for mass transfer worker scaling behavior with queued tasks.
  • Chores

    • Updated development environment configuration.


Contributor

coderabbitai Bot commented May 11, 2026

Warning

Rate limit exceeded

@Lucius1274 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 36 minutes and 34 seconds before requesting another review.

You’ve run out of usage credits. Purchase more in the billing tab.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c76ebf2e-1dfd-4fea-a830-62460a129fc3

📥 Commits

Reviewing files that changed from the base of the PR and between 237d346 and 9a1a83e.

📒 Files selected for processing (3)
  • adit/mass_transfer/tests/test_scale_mass_transfer_worker.py
  • cli.py
  • example.env
📝 Walkthrough

Walkthrough

This PR adds automated mass transfer worker scaling infrastructure for production environments. It introduces three new CLI commands for scaling Docker Swarm replicas, configuring cron-based schedules, and cleaning up scheduled jobs. Environment variables control scaling parameters. Development configuration is adjusted, and comprehensive integration tests validate scaling behavior with existing queued tasks and graceful shutdown semantics.

Changes

Mass Transfer Worker Autoscaling

  • Environment Configuration (example.env): New autoscaling variables: MASS_TRANSFER_WORKER_REPLICAS_DOWNSCALED, MASS_TRANSFER_WORKER_SCALE_UP_CRON, MASS_TRANSFER_WORKER_SCALE_DOWN_CRON, and MASS_TRANSFER_WORKER_STOP_GRACE_PERIOD.
  • CLI Imports (cli.py): Adds shlex and Path imports for command construction and file operations.
  • CLI: Scale Command (cli.py): scale_mass_transfer_worker validates the replica count, enforces production-only execution, derives the grace period from config, and runs Docker update/scale commands.
  • CLI: Cron Management (cli.py): configure_mass_transfer_worker_cron parses and validates environment variables, generates marked crontab entries for scheduled scaling, and installs them via crontab/sed. remove_mass_transfer_worker_cron removes the marked block.
  • Development Configuration (docker-compose.dev.yml): Removes --autoreload from the mass transfer worker; updates the orthanc1 port from 7501 to 6501; adds orthanc2 port 6502.
  • Test Infrastructure (adit/mass_transfer/tests/test_scale_mass_transfer_worker.py): Adds a test module with imports, constants, a long-running graceful_mass_transfer_test_task, and helpers to build database URLs and poll job statuses.
  • Integration Tests (adit/mass_transfer/tests/test_scale_mass_transfer_worker.py): Two tests validate that scaling preserves queued task state and that graceful shutdown completes current jobs while blocking subsequent tasks; they verify exact Docker command sequences.
  • Build Artifacts (.gitignore): Ignores the application logs/ directory.
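The marked-crontab pattern described for configure_mass_transfer_worker_cron and remove_mass_transfer_worker_cron can be sketched roughly as follows; the marker strings, schedules, and commands here are illustrative stand-ins, not the actual values used in cli.py:

```python
# Sketch of installing/removing a marker-delimited crontab block.
# Marker text and entry format are hypothetical.
BEGIN = "# BEGIN mass-transfer-worker autoscaling"
END = "# END mass-transfer-worker autoscaling"


def build_cron_block(scale_up_cron: str, scale_down_cron: str,
                     up_cmd: str, down_cmd: str) -> str:
    """Assemble a crontab fragment delimited by marker comments."""
    return "\n".join([
        BEGIN,
        f"{scale_up_cron} {up_cmd}",
        f"{scale_down_cron} {down_cmd}",
        END,
    ])


def remove_cron_block(crontab_text: str) -> str:
    """Drop a previously installed marked block, keeping all other entries."""
    out, skipping = [], False
    for line in crontab_text.splitlines():
        if line == BEGIN:
            skipping = True
        elif line == END:
            skipping = False
        elif not skipping:
            out.append(line)
    return "\n".join(out)
```

Delimiting the block with markers lets the remove command strip exactly what the configure command installed, without disturbing unrelated crontab entries.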

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 A worker scales up with the crone's gentle call,
Down comes the dusk, gracefully stopping it all,
Jobs complete their dance before shutting down slow,
With crontab's precision and logs all aglow! 📋✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 36.36%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Description Check ✅ Passed: Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title check ✅ Passed: The title accurately reflects the main objective of the pull request, which is to implement mass transfer worker scaling through new CLI commands, cron configuration, and related infrastructure changes.
  • Linked Issues check ✅ Passed: Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check ✅ Passed: Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@Lucius1274 Lucius1274 marked this pull request as ready for review May 11, 2026 15:17

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces an autoscaling mechanism for the mass transfer worker, featuring new CLI commands to configure cron-based scaling of Docker Swarm services and comprehensive tests for graceful shutdowns. Feedback suggests optimizing test execution speed by reducing sleep intervals, enhancing shell command security through improved quoting and consolidation, and restoring the --autoreload flag in the development environment. Additionally, corrections were requested for a misleading log message and a syntax error in the example environment configuration.

Comment on lines +128 to +129

    running_task_seconds = 30
    graceful_timeout_seconds = 50

medium

The test uses a 30-second sleep, which significantly slows down the test suite. Consider reducing these values to a few seconds (e.g., 2s and 5s) to maintain test validity while improving performance.

Suggested change

-    running_task_seconds = 30
-    graceful_timeout_seconds = 50
+    running_task_seconds = 2
+    graceful_timeout_seconds = 5

Comment thread cli.py Outdated
Comment on lines +76 to +80

    helper.execute_cmd(
        f"docker service update --stop-grace-period {shlex.quote(grace_period)} {service_name}"
    )

    helper.execute_cmd(f"docker service scale {service_name}={replicas}")

security-medium medium

The docker service update and docker service scale commands can be combined into a single docker service update call. This is more efficient because it triggers only one service reconciliation in Docker Swarm. Additionally, it is safer to quote the service name.

Suggested change

-    helper.execute_cmd(
-        f"docker service update --stop-grace-period {shlex.quote(grace_period)} {service_name}"
-    )
-    helper.execute_cmd(f"docker service scale {service_name}={replicas}")
+    helper.execute_cmd(
+        f"docker service update --stop-grace-period {shlex.quote(grace_period)} "
+        f"--replicas {replicas} "
+        f"{shlex.quote(service_name)}"
+    )
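For illustration, the combined form can be built and inspected in isolation; build_update_cmd is a hypothetical helper, not code from cli.py:

```python
import shlex


def build_update_cmd(service_name: str, grace_period: str, replicas: int) -> str:
    # A single `docker service update` applies the grace period and replica
    # count in one spec version; shlex.quote guards against shell
    # metacharacters in the user-supplied values.
    return (
        "docker service update "
        f"--stop-grace-period {shlex.quote(grace_period)} "
        f"--replicas {replicas} "
        f"{shlex.quote(service_name)}"
    )
```

Note that shlex.quote leaves plain tokens like "2h" untouched but wraps anything containing spaces or shell metacharacters in single quotes.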

Comment thread cli.py
Comment on lines +121 to +126

    logs_dir = shlex.quote(str(Path(helper.root_path) / "logs"))

    # Ensure logs directory exists
    Path(helper.root_path).joinpath("logs").mkdir(exist_ok=True)

    log_file = f"{logs_dir}/mass_transfer_worker_cron.log"

security-medium medium

Quoting only the directory part of the log file path can produce awkward shell commands. It is better to quote the entire path to the log file.

Suggested change

-    logs_dir = shlex.quote(str(Path(helper.root_path) / "logs"))
-    # Ensure logs directory exists
-    Path(helper.root_path).joinpath("logs").mkdir(exist_ok=True)
-    log_file = f"{logs_dir}/mass_transfer_worker_cron.log"
+    logs_dir = str(Path(helper.root_path) / "logs")
+    # Ensure logs directory exists
+    Path(logs_dir).mkdir(exist_ok=True)
+    log_file = shlex.quote(f"{logs_dir}/mass_transfer_worker_cron.log")

Comment thread cli.py Outdated
cron_marker_end,
]
)
typer.echo("Executed: {}".format(cron_block))
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The message 'Executed' is misleading because the cron block hasn't been installed yet at this point in the code. 'Installing cron block:' would be more accurate.

Suggested change
typer.echo("Executed: {}".format(cron_block))
typer.echo("Installing cron block:\n{}".format(cron_block))

Comment thread docker-compose.dev.yml

    bash -c "
      wait-for-it -s postgres.local:5432 -t 60 &&
-     ./manage.py bg_worker -l debug -q mass_transfer --autoreload
+     ./manage.py bg_worker -l debug -q mass_transfer
    "

medium

The --autoreload flag was removed from the mass_transfer_worker command. This looks like a regression for the local development environment, as other workers still have it enabled.

Suggested change

-        ./manage.py bg_worker -l debug -q mass_transfer
+        ./manage.py bg_worker -l debug -q mass_transfer --autoreload

Comment thread example.env Outdated

    MASS_TRANSFER_WORKER_REPLICAS_DOWNSCALED=1
    MASS_TRANSFER_WORKER_SCALE_UP_CRON="0 18 * * 1-5"
    MASS_TRANSFER_WORKER_SCALE_DOWN_CRON="0 8 * * 1-5"
    MASS_TRANSFER_WORKER_STOP_GRACE_PERIOD ="2h"

medium

There is a typo in the environment variable assignment (a space before the = sign). This can cause issues with some .env parsers or lead to the variable being incorrectly named with a trailing space.

Suggested change

-MASS_TRANSFER_WORKER_STOP_GRACE_PERIOD ="2h"
+MASS_TRANSFER_WORKER_STOP_GRACE_PERIOD="2h"
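The failure mode can be reproduced with a naive split-on-= parser of the kind some minimal .env loaders use (parse_env_line is a toy example, not any particular library):

```python
def parse_env_line(line: str) -> tuple[str, str]:
    # Naive .env parsing: split on the first "=" without trimming
    # whitespace around the key, as some minimal parsers do.
    key, _, value = line.partition("=")
    return key, value.strip('"')


key, value = parse_env_line('MASS_TRANSFER_WORKER_STOP_GRACE_PERIOD ="2h"')
# The key retains a trailing space, so a lookup for the intended
# variable name silently misses it.
```

Parsers that do trim whitespace hide the typo instead, which is why fixing it at the source is the safer option.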


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
cli.py (1)

147-147: ⚡ Quick win

Use structured logging for cron install/update events.

This operational state change is currently emitted as plain stdout. Structured fields here will make cron-triggered scale changes much easier to query and alert on. As per coding guidelines, "Use structured logging for agent actions and state transitions".

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@cli.py` at line 147, Replace the plain stdout call that prints cron_block
with a structured log entry: instead of typer.echo("Executed:
{}".format(cron_block)) emit a structured log (e.g., logger.info or
processLogger.info) including explicit fields such as event="cron_executed" (or
event="cron_install" depending on context), action/cron_block=cron_block, and
any relevant metadata (timestamp, source) so the cron install/update is
queryable; update the caller in cli.py to use the logger symbol used elsewhere
in the module and keep the human-facing typer.echo only if needed but ensure
operational state changes are recorded via structured logging.
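A minimal sketch of what such structured logging could look like, assuming a plain stdlib logging setup (the logger name, event names, and field names are illustrative, not mandated by the guidelines quoted above):

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render log records as JSON so fields are machine-queryable."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "message": record.getMessage(),
            # Extra fields attached via logging's `extra` mechanism.
            "event": getattr(record, "event", None),
            "cron_block": getattr(record, "cron_block", None),
        }
        return json.dumps(payload)


logger = logging.getLogger("cli")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# A state transition emitted with explicit, queryable fields.
logger.info(
    "Installing cron block",
    extra={"event": "cron_install", "cron_block": "0 18 * * 1-5 ..."},
)
```

The human-facing typer.echo can stay alongside this; the point is that the operational state change also lands in a log stream where event="cron_install" can be filtered and alerted on.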

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 948e7d57-cfdd-4749-b8f8-91f55d75b000

📥 Commits

Reviewing files that changed from the base of the PR and between cb521ce and 237d346.

📒 Files selected for processing (5)
  • .gitignore
  • adit/mass_transfer/tests/test_scale_mass_transfer_worker.py
  • cli.py
  • docker-compose.dev.yml
  • example.env

Comment on lines +40 to +47

    def _build_database_url_from_connection() -> str:
        db_settings = connection.settings_dict
        user = db_settings.get("USER") or ""
        password = db_settings.get("PASSWORD") or ""
        host = db_settings.get("HOST") or "localhost"
        port = db_settings.get("PORT") or "5432"
        name = db_settings["NAME"]
        return f"postgres://{user}:{password}@{host}:{port}/{name}"

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

URL-encode the database credentials in DATABASE_URL.

Any reserved character in the username, password, or database name will produce an invalid URI and make this test fail on CI environments with non-trivial credentials.

Suggested fix

+from urllib.parse import quote
+
 ...
 def _build_database_url_from_connection() -> str:
     db_settings = connection.settings_dict
     user = db_settings.get("USER") or ""
     password = db_settings.get("PASSWORD") or ""
     host = db_settings.get("HOST") or "localhost"
     port = db_settings.get("PORT") or "5432"
     name = db_settings["NAME"]
-    return f"postgres://{user}:{password}@{host}:{port}/{name}"
+    credentials = ""
+    if user:
+        credentials = quote(user, safe="")
+        if password:
+            credentials += f":{quote(password, safe='')}"
+        credentials += "@"
+    return f"postgres://{credentials}{host}:{port}/{quote(name, safe='')}"

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate.

In adit/mass_transfer/tests/test_scale_mass_transfer_worker.py around lines 40-47, the helper _build_database_url_from_connection builds DATABASE_URL with raw credentials, which breaks when USER, PASSWORD, or NAME contain reserved URI characters. Update it to URL-encode user, password, and database name before interpolating into the connection string (use urllib.parse.quote_plus or urllib.parse.quote and import it) so the returned string is a valid postgres URI.
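The encoding behavior of the suggested fix can be demonstrated standalone; build_database_url is a simplified stand-in for the test helper, detached from Django's connection object:

```python
from urllib.parse import quote


def build_database_url(user: str, password: str, host: str,
                       port: str, name: str) -> str:
    # Percent-encode each component so reserved characters such as
    # "@", ":" or "/" inside credentials cannot corrupt the URI structure.
    credentials = ""
    if user:
        credentials = quote(user, safe="")
        if password:
            credentials += f":{quote(password, safe='')}"
        credentials += "@"
    return f"postgres://{credentials}{host}:{port}/{quote(name, safe='')}"


url = build_database_url("adit", "p@ss:word/1", "localhost", "5432", "adit_db")
# → postgres://adit:p%40ss%3Aword%2F1@localhost:5432/adit_db
```

With safe="" even "/" is escaped, so a password containing URI delimiters round-trips cleanly through any spec-compliant URL parser.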

Comment thread cli.py Outdated
Comment on lines +76 to +80

    helper.execute_cmd(
        f"docker service update --stop-grace-period {shlex.quote(grace_period)} {service_name}"
    )

    helper.execute_cmd(f"docker service scale {service_name}={replicas}")

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Combine grace-period and replica updates into a single atomic operation.

These are two separate Swarm updates. While Docker Swarm makes each update atomic via Raft consensus, combining them into one operation ensures the grace-period and replica changes are applied in the same spec version. This prevents intermediate states where the grace period has been updated but scaling has not yet occurred, which could cause in-flight transfers to be interrupted if scaling proceeds before the new grace period is fully reconciled.

Suggested fix

-    helper.execute_cmd(
-        f"docker service update --stop-grace-period {shlex.quote(grace_period)} {service_name}"
-    )
-
-    helper.execute_cmd(f"docker service scale {service_name}={replicas}")
+    helper.execute_cmd(
+        "docker service update "
+        f"--detach=false "
+        f"--stop-grace-period {shlex.quote(grace_period)} "
+        f"--replicas {replicas} "
+        f"{service_name}"
+    )

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate.

In cli.py around lines 76-80, the current code calls helper.execute_cmd twice (once for --stop-grace-period, once for scaling), causing two separate Swarm updates. Change it to a single atomic update by invoking helper.execute_cmd once with docker service update including both --stop-grace-period and --replicas flags (use shlex.quote on grace_period and include service_name) so the stop grace period and replica count are applied in the same spec update.

Comment thread cli.py
Comment on lines +128 to +134

    scale_up_cmd = (
        f"cd {project_root} && /usr/local/bin/uv run cli scale-mass-transfer-worker {up_replicas}"
        f" >> {log_file} 2>&1"
    )
    scale_down_cmd = (
        f"cd {project_root} && /usr/local/bin/uv run cli scale-mass-transfer-worker {down_replicas}"
        f" >> {log_file} 2>&1"
    )

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Don't hardcode /usr/local/bin/uv into the crontab.

uv is not guaranteed to be at that location on every production host. When it is installed elsewhere, the cron entries are written successfully but autoscaling never executes. Use shutil.which("uv") to locate the executable and shlex.quote() to safely pass it to the shell command. The necessary imports (shutil and shlex) are already available in this file.

Suggested fix

+    uv_bin = shutil.which("uv")
+    if not uv_bin:
+        typer.echo("uv executable not found in PATH")
+        raise typer.Exit(code=1)
+
     scale_up_cmd = (
-        f"cd {project_root} && /usr/local/bin/uv run cli scale-mass-transfer-worker {up_replicas}"
+        f"cd {project_root} && {shlex.quote(uv_bin)} run cli scale-mass-transfer-worker {up_replicas}"
         f" >> {log_file} 2>&1"
     )
     scale_down_cmd = (
-        f"cd {project_root} && /usr/local/bin/uv run cli scale-mass-transfer-worker {down_replicas}"
+        f"cd {project_root} && {shlex.quote(uv_bin)} run cli scale-mass-transfer-worker {down_replicas}"
         f" >> {log_file} 2>&1"
     )

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate.

In cli.py around lines 128-134, replace the hardcoded "/usr/local/bin/uv" in the cron command strings by resolving the uv executable with shutil.which("uv") and passing it through shlex.quote() before building the command; then use the quoted path in the scale_up_cmd and scale_down_cmd f-strings so the cron entries call the actual installed uv binary safely. If shutil.which returns None, decide on a fallback (e.g., raise or use "uv") before quoting.
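A standalone sketch of the PATH-resolution approach; build_scale_cmd and its silent fallback to a bare "uv" are illustrative choices, not the exact code proposed above:

```python
import shlex
import shutil


def build_scale_cmd(project_root: str, replicas: int, log_file: str) -> str:
    # Resolve the uv binary from PATH instead of hardcoding its location.
    # Falling back to plain "uv" is one option; the suggested fix above
    # raises instead when the binary is missing.
    uv_bin = shutil.which("uv") or "uv"
    return (
        f"cd {shlex.quote(project_root)} && "
        f"{shlex.quote(uv_bin)} run cli scale-mass-transfer-worker {replicas}"
        f" >> {shlex.quote(log_file)} 2>&1"
    )
```

Because the resolved path is quoted, the generated cron entry stays valid even if uv lives under a directory containing spaces.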

Comment thread docker-compose.dev.yml
Comment on lines 72 to 76

    command: >
      bash -c "
        wait-for-it -s postgres.local:5432 -t 60 &&
-       ./manage.py bg_worker -l debug -q mass_transfer --autoreload
+       ./manage.py bg_worker -l debug -q mass_transfer
      "

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Restore --autoreload for the dev mass transfer worker.

develop.watch only syncs files into the container. Without autoreload, this worker keeps running stale code until the container is restarted, unlike the other dev workers.

Suggested fix

   command: >
     bash -c "
       wait-for-it -s postgres.local:5432 -t 60 &&
-        ./manage.py bg_worker -l debug -q mass_transfer
+        ./manage.py bg_worker -l debug -q mass_transfer --autoreload
     "

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate.

In docker-compose.dev.yml around lines 72-76, the dev command for the mass_transfer worker should include the autoreload flag so it reloads updated code synced by develop.watch; update the command that runs "./manage.py bg_worker -l debug -q mass_transfer" to add "--autoreload" (preserving the existing wait-for-it usage and options) so the bg_worker process restarts on file changes during development.
