Conversation
Walkthrough

Shared reusable Bash libraries are added and sourced by the main scripts. Backup and restore modules are introduced with Docker-aware DB dump/restore flows, Telegram notifications, cron-based backup service management, and temp/env/GitHub helpers. README install examples switch to inline commands.

Changes

- PasarGuard scripts (shared libs, backup/restore, wiring, README)
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant Backup as "lib/pasarguard-backup.sh"
    participant Docker as "Docker (DB containers)"
    participant FS as "Filesystem (/tmp, backup dir)"
    participant Telegram as "Telegram API"
    User->>Backup: invoke backup_command or cron job
    Backup->>Docker: detect DB type and run DB dump
    Docker-->>Backup: return DB dump artifact
    Backup->>FS: collect config/data, create archive (split if large)
    Backup->>Telegram: upload archive parts and captions
    Telegram-->>Backup: upload responses
    Backup->>User: send summary or error notification
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks: 3 passed, 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/docker.sh`:
- Around line 3-6: The install_docker function can report success even if curl
failed because the pipeline lacks pipefail; modify install_docker (and related
shell context) to fail when the download fails by enabling pipeline failure
detection (e.g., set -o pipefail) or by separating the download and install
steps: download the script with curl and verify curl's exit code (or that the
file is non-empty) before piping to sh, and only call colorized_echo green
"Docker installed successfully" after the curl and sh steps both return success;
reference install_docker and colorized_echo when applying the change.
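For illustration, a minimal sketch of the separated download-and-install flow described above; the temp-file handling is an assumption, and colorized_echo is taken from the shared libs under review:

```bash
# Sketch only: separates download from execution so curl failures are caught.
install_docker() {
    local tmp_script
    tmp_script=$(mktemp) || return 1
    trap 'rm -f "$tmp_script"' RETURN

    # -f makes curl fail on HTTP errors; also reject an empty download.
    if ! curl -fsSL https://get.docker.com -o "$tmp_script" || [ ! -s "$tmp_script" ]; then
        colorized_echo red "Failed to download the Docker install script"
        return 1
    fi

    # Report success only after the installer itself succeeds.
    if sh "$tmp_script"; then
        colorized_echo green "Docker installed successfully"
    else
        colorized_echo red "Docker installation failed"
        return 1
    fi
}
```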
In `@lib/github.sh`:
- Around line 19-25: The github_install_script_from_repo function currently
pipes curl into install which can create a truncated/zero-byte binary on partial
downloads; change it to download to a temporary file first (use mktemp), run
curl -fSL to write into that temp file while checking curl's exit status, set
the executable mode (chmod 755) on the temp file, then atomically move/install
it into /usr/local/bin/$install_name (using install or mv) and cleanup the temp
file via trap; reference github_install_script_from_repo and mirror the pattern
used by install_shared_libs_from_repo/github_download_file for robust error
handling.
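A minimal sketch of that download-to-temp pattern; $script_url and $install_name are assumed to be resolved by the surrounding function:

```bash
# Sketch only: never pipe curl straight into an install path.
github_install_script_from_repo() {
    local tmp_file
    tmp_file=$(mktemp) || return 1
    trap 'rm -f "$tmp_file"' RETURN

    # -f fails on HTTP errors so a partial/zero-byte body is never installed.
    if ! curl -fSL "$script_url" -o "$tmp_file"; then
        echo "Download failed: $script_url" >&2
        return 1
    fi

    # Copy into place with the right mode only after a complete download.
    install -m 755 "$tmp_file" "/usr/local/bin/$install_name"
}
```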
In `@lib/system.sh`:
- Around line 47-52: install_package() may run before OS is set which causes
downstream callers like ensure_acme_dependencies() to hit "Unsupported operating
system"; update install_package (and any helper
detect_and_update_package_manager) to ensure detect_os() is invoked when the OS
variable is empty/undefined so OS is initialized before PKG_MANAGER logic runs —
check for an empty "${OS:-}" and call detect_os() early in install_package (or
at start of detect_and_update_package_manager) so package installation paths
have a valid OS value.
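A minimal sketch of the guard; detect_os, PKG_MANAGER, and detect_and_update_package_manager are the symbols named above, and the final install line is illustrative:

```bash
install_package() {
    # Self-heal: make sure OS is detected before any package-manager branching.
    if [ -z "${OS:-}" ]; then
        detect_os
    fi
    detect_and_update_package_manager
    # Illustrative install step; the real function may branch per manager.
    $PKG_MANAGER install -y "$1"
}
```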
In `@pasarguard.sh`:
- Around line 4-26: The script exits if shared libs are missing, preventing the
self-hosting installer from ever fetching them; update pasarguard.sh so that
instead of immediately exiting when the for-loop (which checks
SHARED_LIB_DIR/$shared_lib) detects missing files, it invokes the bootstrap
installer (e.g., call install_pasarguard_script() or equivalent) to
download/install the shared libs into SHARED_LIB_DIR (or a temp dir), then
re-check and only error if the install fails; ensure the subsequent source lines
(source "$SHARED_LIB_DIR/common.sh", system.sh, docker.sh, github.sh, env.sh)
run after a successful install so curl | bash installs work on fresh machines.
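A minimal sketch of the deferred-exit flow; bootstrap_shared_libs is a hypothetical helper that fetches the lib files with plain curl before anything is sourced:

```bash
need_install=false
for shared_lib in common.sh system.sh docker.sh github.sh env.sh; do
    [ -f "$SHARED_LIB_DIR/$shared_lib" ] || { need_install=true; break; }
done

if [ "$need_install" = true ]; then
    # Hypothetical bootstrap: downloads the libs without any sourced helpers.
    bootstrap_shared_libs || { echo "Failed to install shared libraries" >&2; exit 1; }
fi

for shared_lib in common.sh system.sh docker.sh github.sh env.sh; do
    # Re-check, and only now treat a missing file as fatal.
    [ -f "$SHARED_LIB_DIR/$shared_lib" ] || { echo "Missing: $SHARED_LIB_DIR/$shared_lib" >&2; exit 1; }
    source "$SHARED_LIB_DIR/$shared_lib"
done
```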
In `@pg-node.sh`:
- Around line 4-24: The loader aborts when the shared libs aren't present
(SCRIPT_DIR/SHARED_LIB_DIR check and the for loop that exits), preventing
first-time installs; change the logic so that if required files under
SHARED_LIB_DIR are missing you do not exit immediately but set a flag (e.g.,
need_install=true), skip sourcing, perform or invoke a bootstrap installer
(download/copy the shared libs) and only then source the four files (common.sh,
system.sh, docker.sh, github.sh); specifically modify the SHARED_LIB_DIR
resolution and the existence check loop (the for shared_lib in ... block and
subsequent source calls) to defer exit and call the bootstrap/installation
routine when need_install is true, then re-check and source the libraries once
installed.
📒 Files selected for processing (8)
README.md, lib/common.sh, lib/docker.sh, lib/env.sh, lib/github.sh, lib/system.sh, pasarguard.sh, pg-node.sh
Actionable comments posted: 9
♻️ Duplicate comments (1)
pg-node.sh (1)
4-24: ⚠️ Potential issue | 🔴 Critical | 🏗️ Heavy lift

Shared-lib loader still blocks first-time remote installs.

This still exits before any bootstrap path when lib/*.sh is missing (fresh hosts running bash -c "$(curl ...)"). The script cannot self-heal because it never gets far enough to source/install shared libs.

🧩 Suggested flow change

```diff
-SHARED_LIB_DIR="${SCRIPT_DIR}/lib"
-if [ ! -f "$SHARED_LIB_DIR/common.sh" ]; then
-    SHARED_LIB_DIR="/usr/local/lib/pasarguard-scripts/lib"
-fi
-
-for shared_lib in common.sh system.sh docker.sh github.sh; do
-    if [ ! -f "$SHARED_LIB_DIR/$shared_lib" ]; then
-        printf 'Missing shared library: %s\n' "$SHARED_LIB_DIR/$shared_lib" >&2
-        exit 1
-    fi
-done
+SHARED_LIB_DIR="${SCRIPT_DIR}/lib"
+[ -f "$SHARED_LIB_DIR/common.sh" ] || SHARED_LIB_DIR="/usr/local/lib/pasarguard-scripts/lib"
+
+need_install=false
+for shared_lib in common.sh system.sh docker.sh github.sh; do
+    if [ ! -f "$SHARED_LIB_DIR/$shared_lib" ]; then
+        need_install=true
+        break
+    fi
+done
+
+if [ "$need_install" = true ]; then
+    bootstrap_shared_libs # download lib files without requiring sourced helpers
+fi
+
+# re-check and then source
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pg-node.sh` around lines 4 - 24, The current loader exits immediately if shared libs are missing (the for loop checking "$SHARED_LIB_DIR/$shared_lib"), which prevents first-time bootstrap; update the logic in the block around SCRIPT_DIR/SHARED_LIB_DIR and the for shared_lib loop so that on missing libs it attempts a self-heal bootstrap (e.g., invoke an installer/bootstrap routine or download the missing lib files) and then re-checks the files before sourcing; only call exit 1 if the second check still fails. Refer to SCRIPT_DIR, SHARED_LIB_DIR, the for shared_lib in common.sh/system.sh/docker.sh/github.sh existence checks, and the source "$SHARED_LIB_DIR/..." lines to place the bootstrap attempt and re-validation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/pasarguard-backup.sh`:
- Around line 693-695: In remove_backup_service, crontab -l can exit non‑zero
when no crontab exists and with set -e will abort; change the logic around the
temporary file creation (temp_cron / mktemp and the crontab -l invocation) to
tolerate that failure by capturing crontab -l’s output into the temp file but
treating a non‑zero exit as "no crontab" (e.g., use crontab -l || true or an if
check) so the function proceeds to remove the job and restore crontab safely
even when no existing crontab is present.
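A minimal sketch of the tolerant pattern, assuming the marker comment used elsewhere in the script:

```bash
remove_backup_service() {
    local temp_cron
    temp_cron=$(mktemp)
    # crontab -l exits non-zero when no crontab exists; treat that as empty.
    crontab -l 2>/dev/null >"$temp_cron" || true
    # Drop the backup job by its marker line, then reinstall what remains.
    grep -vF '# pasarguard-backup-service' "$temp_cron" | crontab -
    rm -f "$temp_cron"
}
```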
- Around line 142-170: The summary block is unreachable because the array
uploaded_files is never initialized or populated; initialize uploaded_files as
an empty array (e.g., uploaded_files=() or local uploaded_files=() before the
loop that sends parts) and, inside the success branch where the script detects
'"ok":true' (the block that currently prints "Backup part $custom_filename
successfully sent to Telegram."), append the sent filename to the array (e.g.,
uploaded_files+=("$custom_filename")); this ensures the later conditional if
[ ${#uploaded_files[@]} -gt 0 ] will be true and the info_message summary will
include the uploaded files.
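A minimal sketch of the fix; send_to_telegram and backup_parts are stand-ins for the script's existing upload loop, while uploaded_files and info_message come from the comment above:

```bash
uploaded_files=()   # initialize before the upload loop

for part in "${backup_parts[@]}"; do
    custom_filename=$(basename "$part")
    response=$(send_to_telegram "$part" "$custom_filename")
    if [[ "$response" == *'"ok":true'* ]]; then
        echo "Backup part $custom_filename successfully sent to Telegram."
        uploaded_files+=("$custom_filename")   # record each success
    fi
done

# The summary branch is now reachable.
if [ ${#uploaded_files[@]} -gt 0 ]; then
    info_message "Uploaded: ${uploaded_files[*]}"
fi
```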
- Around line 785-797: The script currently writes SQLALCHEMY_DATABASE_URL
(which may contain credentials) to the log_file; instead create a redacted copy
and log that. In the block using SQLALCHEMY_DATABASE_URL and log_file, add a
step to produce a safe variable (e.g. safe_sqlalchemy_url) which strips or
replaces the userinfo portion (the username:password between "://" and "@") with
a fixed token like "REDACTED" (you can use shell parameter expansion or sed to
remove/replace the segment), then write safe_sqlalchemy_url to "$log_file" and
use it in the "Parsing ..." echo; leave the original SQLALCHEMY_DATABASE_URL for
actual DB operations but never log it. Ensure references to
SQLALCHEMY_DATABASE_URL in echo/log statements are replaced with
safe_sqlalchemy_url.
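A minimal sketch of the redaction step; the sed expression is illustrative and only masks the userinfo segment between "://" and "@":

```bash
# Strip user:password from the URL before it ever reaches the log.
safe_sqlalchemy_url=$(printf '%s' "$SQLALCHEMY_DATABASE_URL" \
    | sed -E 's#://[^@/]+@#://REDACTED@#')

echo "Parsing $safe_sqlalchemy_url" >>"$log_file"
# $SQLALCHEMY_DATABASE_URL itself is still used, unredacted, for DB operations.
```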
- Line 1: The file starts with a UTF-8 BOM which breaks shebang recognition;
remove the BOM so the file begins exactly with the shebang "#!/usr/bin/env bash"
(save the file as UTF-8 without BOM or strip the BOM), and apply the same fix to
the other affected files pasarguard-restore.sh and pasarguard.sh.
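One way to strip the BOM in place, assuming GNU sed is available:

```bash
# Remove a leading UTF-8 BOM (EF BB BF) from the first line of each file.
for f in lib/pasarguard-backup.sh lib/pasarguard-restore.sh pasarguard.sh; do
    sed -i '1s/^\xEF\xBB\xBF//' "$f"
done
```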
In `@lib/pasarguard-restore.sh`:
- Around line 835-867: The backups currently copy from the extracted payload
($temp_restore_dir) instead of the live files, so change the cp source to the
existing live files under $APP_DIR when creating backups (e.g., replace cp
"$temp_restore_dir/.env" "$APP_DIR/.env.backup.$(...)" with cp "$APP_DIR/.env"
"$APP_DIR/.env.backup.$(...)" and similarly for docker-compose.yml), and wrap
each backup cp in an existence check (if [ -f "$APP_DIR/.env" ] / if [ -f
"$APP_DIR/docker-compose.yml" ]) so you only back up files that actually exist
before overwriting with the payload.
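A minimal sketch of the corrected backup step; the timestamp suffix mirrors the pattern described above:

```bash
timestamp=$(date +%Y%m%d_%H%M%S)

# Back up the live files (not the extracted payload), and only if present.
if [ -f "$APP_DIR/.env" ]; then
    cp "$APP_DIR/.env" "$APP_DIR/.env.backup.$timestamp"
fi
if [ -f "$APP_DIR/docker-compose.yml" ]; then
    cp "$APP_DIR/docker-compose.yml" "$APP_DIR/docker-compose.yml.backup.$timestamp"
fi
```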
- Line 1: The file contains a UTF-8 BOM before the shebang which can break
interpreter recognition; remove the BOM so the file begins exactly with
"#!/usr/bin/env bash" (ensure the first bytes are 23 21 2F ... for the shebang)
by saving the file without BOM (e.g., re-encode as UTF-8 without BOM) so scripts
like pasarguard-restore.sh execute and source correctly.
- Around line 723-728: The three psql -c commands currently interpolating
$target_db_name and $target_db_owner are vulnerable to injection and broken by
special characters; update those SQL strings to use PostgreSQL's quote_ident()
for identifiers (and quote_literal() if you ever need literals) instead of raw
bash expansion — e.g., in the SELECT pg_terminate_backend(...), DROP DATABASE IF
EXISTS ..., and CREATE DATABASE ... OWNER ... replace datname =
'$target_db_name' / "$target_db_name" / "$target_db_owner" with datname =
quote_ident('$target_db_name') / quote_ident('$target_db_name') /
quote_ident('$target_db_owner') respectively; likewise adjust subsequent psql
invocations that use "$target_db_name" as a connection target (the later docker
exec psql -d "$target_db_name" calls referenced) to instead connect to postgres
and run SQL that references the target via quote_ident('$target_db_name') so
identifiers are safely escaped (references: the SELECT pg_terminate_backend
call, DROP DATABASE IF EXISTS, CREATE DATABASE ... OWNER, and later psql -d
uses).
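As an alternative to embedding quote_ident() calls, psql's own variable interpolation gives the same identifier safety — :"var" quotes an identifier and :'var' quotes a literal. A minimal sketch, reusing the container and variable names from the comment:

```bash
docker exec -i "$container_name" psql -v ON_ERROR_STOP=1 -U postgres -d postgres \
    -v dbname="$target_db_name" -v owner="$target_db_owner" <<'SQL'
-- :'dbname' is a safely quoted literal; :"dbname" a safely quoted identifier.
SELECT pg_terminate_backend(pid)
  FROM pg_stat_activity
 WHERE datname = :'dbname' AND pid <> pg_backend_pid();
DROP DATABASE IF EXISTS :"dbname";
CREATE DATABASE :"dbname" OWNER :"owner";
SQL
```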
In `@lib/system.sh`:
- Around line 10-13: The check for /etc/lsb-release calls the lsb_release binary
unguarded which can fail on minimal images; change the condition around the OS
assignment so you only run lsb_release when the command exists (e.g. use command
-v or type to test lsb_release) and fall back to the /etc/os-release awk branch
otherwise; update the block that sets OS (the lsb_release -si usage and the
/etc/os-release awk fallback) to first verify lsb_release is available before
invoking it.
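A minimal sketch of the guarded branch; the awk fallback mirrors the /etc/os-release path referenced above:

```bash
if [ -f /etc/lsb-release ] && command -v lsb_release >/dev/null 2>&1; then
    OS=$(lsb_release -si)
elif [ -f /etc/os-release ]; then
    # Fallback for minimal images that ship os-release but not lsb_release.
    OS=$(awk -F= '$1 == "ID" { gsub(/"/, "", $2); print $2 }' /etc/os-release)
fi
```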
- Around line 187-194: The install currently writes the remote yq binary
directly to /usr/local/bin/yq (using the yq_url variable) without integrity
checks; update the flow to download the corresponding checksum file (prefer the
checksums-bsd or the tarball + its checksum), save downloads to temporary files,
compute the expected digest (using shasum/sha256sum or the appropriate algorithm
matching the checksums file), compare the computed checksum against the
published value and call die on mismatch, and only then move the verified binary
to /usr/local/bin/yq and run chmod +x and colorized_echo; ensure both curl and
wget branches download the checksum and binary to temp paths and verify before
replacing the target.
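A minimal sketch of the download-then-verify flow; the version pin is illustrative, and the two-column parse of the checksums asset is an assumption that a later review comment flags for verification against the real upstream format:

```bash
yq_version="v4.44.3"   # illustrative pin
base_url="https://github.com/mikefarah/yq/releases/download/${yq_version}"

bin_tmp=$(mktemp) && sum_tmp=$(mktemp)
trap 'rm -f "$bin_tmp" "$sum_tmp"' EXIT

curl -fsSL "${base_url}/yq_linux_amd64" -o "$bin_tmp" || die "yq download failed"
curl -fsSL "${base_url}/checksums"      -o "$sum_tmp" || die "checksum download failed"

# Assumes a sha256sum-style "<hash>  <file>" layout; verify against upstream.
expected=$(awk -v f="yq_linux_amd64" '$2 == f { print $1 }' "$sum_tmp")
actual=$(sha256sum "$bin_tmp" | awk '{ print $1 }')
[ -n "$expected" ] && [ "$expected" = "$actual" ] || die "yq checksum mismatch"

install -m 755 "$bin_tmp" /usr/local/bin/yq
colorized_echo green "yq installed and verified"
```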
---
Duplicate comments:
In `@pg-node.sh`:
- Around line 4-24: The current loader exits immediately if shared libs are
missing (the for loop checking "$SHARED_LIB_DIR/$shared_lib"), which prevents
first-time bootstrap; update the logic in the block around
SCRIPT_DIR/SHARED_LIB_DIR and the for shared_lib loop so that on missing libs it
attempts a self-heal bootstrap (e.g., invoke an installer/bootstrap routine or
download the missing lib files) and then re-checks the files before sourcing;
only call exit 1 if the second check still fails. Refer to SCRIPT_DIR,
SHARED_LIB_DIR, the for shared_lib in common.sh/system.sh/docker.sh/github.sh
existence checks, and the source "$SHARED_LIB_DIR/..." lines to place the
bootstrap attempt and re-validation.
📒 Files selected for processing (12)
docker-compose/node.yml, docker-compose/pasarguard-mariadb.yml, docker-compose/pasarguard-mysql.yml, docker-compose/pasarguard-postgresql.yml, docker-compose/pasarguard-timescaledb.yml, lib/docker.sh, lib/github.sh, lib/pasarguard-backup.sh, lib/pasarguard-restore.sh, lib/system.sh, pasarguard.sh, pg-node.sh
✅ Files skipped from review due to trivial changes (1)
- lib/docker.sh
🚧 Files skipped from review as they are similar to previous changes (1)
- lib/github.sh
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 7
♻️ Duplicate comments (1)
lib/system.sh (1)
203-204: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Verify the yq checksum parser against the upstream release format.

awk '$2 == name { print $1 }' assumes a two-column checksum file. If the checksums asset is not in that format, expected_checksum stays empty or resolves the wrong field and install_yq() fails on clean machines. Prefer the release asset meant for standard SHA-256 verification, or parse the upstream format explicitly instead of assuming $1/$2.

How is the mikefarah/yq GitHub release `checksums` asset formatted, and which release asset (`checksums` vs `checksums-bsd`) should a shell installer use to verify `yq_linux_amd64` with `sha256sum`?

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@lib/system.sh` around lines 203 - 204, The current awk lookup in install_yq assumes a two-column file and can fail for upstream formats; instead fetch and use the release "checksums" asset (the one compatible with sha256sum for Linux) when verifying yq_linux_amd64, and parse checksum_tmp explicitly by matching the yq_binary filename and extracting the hex checksum field (e.g., find the line containing yq_binary in checksum_tmp and pull the SHA-256 token) before assigning expected_checksum; update references in install_yq, expected_checksum, and checksum_tmp to use this robust lookup so checksums-bsd or other formats don't break verification.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@lib/pasarguard-backup.sh`:
- Around line 721-729: The script currently uses global shared paths
(temp_dir="/tmp/pasarguard_backup" and
log_file="/var/log/pasarguard_backup_error.log") which allows concurrent runs to
clobber each other; change to create per-run temp and log files (use mktemp -d
to set temp_dir and mktemp for log_file or derive one from timestamp/$$) and
update uses of temp_dir, log_file, and final_backup_paths accordingly, and add a
simple mutual-exclusion guard (e.g., a lock using flock or a PID lockfile)
around the main backup routine so concurrent invocations are serialized and each
run cleans up only its own temp/log artifacts.
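A minimal sketch of per-run paths plus an flock guard; the lock-file path is illustrative:

```bash
# Serialize concurrent invocations with a lock held on FD 9.
lock_file="/var/lock/pasarguard-backup.lock"
exec 9>"$lock_file"
flock -n 9 || { echo "Another backup is already running" >&2; exit 1; }

# Per-run temp/log paths instead of fixed global ones.
temp_dir=$(mktemp -d /tmp/pasarguard_backup.XXXXXX)
log_file=$(mktemp /tmp/pasarguard_backup_error.XXXXXX)
trap 'rm -rf "$temp_dir"' EXIT   # each run cleans only its own artifacts
```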
- Around line 744-748: Currently the script removes all previous backups (rm -f
"$backup_dir"/backup_*.tar.gz, zip, z[0-9][0-9]) before the
dump/archive/Telegram steps, risking no recoverable backup if later steps fail;
change the flow so that deletion/retention happens only after a successful
archive creation: move the rm -f retention cleanup (and any mkdir -p if needed)
to run after the archive creation step completes successfully (check the archive
command exit status or wrap with a success conditional), or instead implement a
retention routine that, after successful archive creation, lists existing
"backup_*.tar.gz" and "backup_*.zip" files in "$backup_dir" and deletes only the
oldest files beyond the desired N recent backups.
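A minimal sketch of keep-newest-N retention that runs only after a successful archive; keep_count and the tar invocation are illustrative:

```bash
keep_count=3
if tar -czf "$backup_dir/backup_$timestamp.tar.gz" -C "$temp_dir" .; then
    # List backups newest-first and delete everything past the first N.
    ls -1t "$backup_dir"/backup_*.tar.gz 2>/dev/null \
        | tail -n +$((keep_count + 1)) \
        | xargs -r rm -f
fi
```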
- Around line 270-272: The script currently prints the full secret in
colorized_echo calls (e.g., where telegram_bot_key is echoed); change those to
print a masked version instead by transforming telegram_bot_key into a short
visible fragment (for example show first 4 and last 4 chars with the middle
replaced by asterisks, or just show "****<last6>") before passing to
colorized_echo. Update every occurrence that prints the token (the
colorized_echo call that outputs "Telegram Bot API Key: $telegram_bot_key" and
the similar prints mentioned around the same areas) so they use a helper/masked
variable (e.g., masked_telegram_bot_key) created by slicing and replacing the
middle with asterisks, then pass that masked value to colorized_echo.
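A minimal sketch of the masking helper; the visible-fragment lengths are illustrative:

```bash
mask_secret() {
    local s="$1"
    if [ ${#s} -le 8 ]; then
        printf '****\n'                             # too short to reveal anything
    else
        printf '%s****%s\n' "${s:0:4}" "${s: -4}"   # first 4 + last 4 chars
    fi
}

masked_telegram_bot_key=$(mask_secret "$telegram_bot_key")
colorized_echo cyan "Telegram Bot API Key: $masked_telegram_bot_key"
```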
- Around line 460-462: The crontab removal currently uses grep -v "$command"
which can accidentally remove unrelated lines; change the dedupe to filter by
the stable marker instead (the comment "# pasarguard-backup-service") and use a
literal/fgrep style match to avoid regex surprises: read crontab into the temp
file (temp_cron), write a filtered version by removing any line containing the
marker, then mv the temp back and append the new schedule/command (variables
schedule and command) with the marker; update the grep invocation that produces
"${temp_cron}.tmp" to target the marker (and use a literal match option like -F)
rather than "$command".
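A minimal sketch of the marker-based dedupe and re-add; schedule and command are the variables named above:

```bash
marker='# pasarguard-backup-service'
temp_cron=$(mktemp)
crontab -l 2>/dev/null >"$temp_cron" || true
# -F matches the marker literally, so regex metacharacters can't misfire.
grep -vF "$marker" "$temp_cron" >"${temp_cron}.tmp" || true
# A trailing "# marker" is a shell comment in the cron command, so it is inert.
printf '%s %s %s\n' "$schedule" "$command" "$marker" >>"${temp_cron}.tmp"
crontab "${temp_cron}.tmp"
rm -f "$temp_cron" "${temp_cron}.tmp"
```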
In `@lib/pasarguard-restore.sh`:
- Around line 691-694: The restore logic sets restore_user/restore_password from
the backup metadata but should prefer the current installation identity when
available; update the restore_user, restore_password (and restore_db/name if
applicable) assignments to use current_db_user, current_db_password, and
current_db_name as the preferred values (falling back to db_user/DB_USER and
db_password/DB_PASSWORD from the backup) so non-Timescale PostgreSQL restores
run against the local/rotated credentials; apply the same change in the other
restore block referenced around the 772-786 region so both restore paths behave
consistently.
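A minimal sketch of the precedence described above, using bash default-expansion fallbacks:

```bash
# Prefer the current installation's identity, then the backup's values.
restore_user="${current_db_user:-${db_user:-${DB_USER:-}}}"
restore_password="${current_db_password:-${db_password:-${DB_PASSWORD:-}}}"
restore_db_name="${current_db_name:-${db_name:-${DB_NAME:-}}}"
```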
- Around line 393-401: The script currently prints the first 50 chars of
SQLALCHEMY_DATABASE_URL which can leak credentials; update the success message
that uses colorized_echo to print a redacted version of SQLALCHEMY_DATABASE_URL
instead of ${SQLALCHEMY_DATABASE_URL:0:50}—reuse the existing redaction helper
used elsewhere in this script/logging (instead of slicing) to strip or mask
credentials (e.g., replace password/user or show only scheme+host) before
passing into colorized_echo, keeping the checks around extracted_env and
temp_restore_dir unchanged.
- Around line 843-867: The restore logic sets preserve_db_credentials and copies
current_db_user/current_db_name/current_db_password/current_sqlalchemy_url back
into the ENV_FILE but omits preserving the captured MYSQL_ROOT_PASSWORD; add
handling to write the preserved root password when preserve_db_credentials is
true: after checking current_db_password (and before/with
current_sqlalchemy_url) call replace_or_append_env_var for "MYSQL_ROOT_PASSWORD"
using the captured variable (e.g., current_mysql_root_password or the name used
earlier) with the appropriate sensitive flag and ENV_FILE so the restored .env
retains the active DB root password.
---
Duplicate comments:
In `@lib/system.sh`:
- Around line 203-204: The current awk lookup in install_yq assumes a two-column
file and can fail for upstream formats; instead fetch and use the release
"checksums" asset (the one compatible with sha256sum for Linux) when verifying
yq_linux_amd64, and parse checksum_tmp explicitly by matching the yq_binary
filename and extracting the hex checksum field (e.g., find the line containing
yq_binary in checksum_tmp and pull the SHA-256 token) before assigning
expected_checksum; update references in install_yq, expected_checksum, and
checksum_tmp to use this robust lookup so checksums-bsd or other formats don't
break verification.
📒 Files selected for processing (4)
lib/pasarguard-backup.sh, lib/pasarguard-restore.sh, lib/system.sh, pasarguard.sh
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@lib/pasarguard-backup.sh`:
- Around line 1322-1334: The rsync exclude list currently only omits "mysql"
causing raw DB files for other engines to be copied; update the rsync call that
references "$DATA_DIR/" and "$temp_dir/pasarguard_data/" to exclude all DB
engine directories (e.g. "mysql", "mariadb", "postgresql", "timescaledb") or
dynamically build the exclude list from the deployment DB type (the variable
that identifies DB engine), so raw on-disk DB folders are never copied alongside
the SQL dump and the archive stays consistent.
In `@lib/pasarguard-restore.sh`:
- Around line 828-845: The restore currently uses rsync -a which merges instead
of overwriting; update the restore logic around extracted_data_dir/DATA_DIR in
the pasarguard-restore flow to (1) create a timestamped backup of the current
$DATA_DIR (similar to how .env and docker-compose.yml are backed up) into
$temp_restore_dir or a backups dir, and then (2) either run rsync with deletion
semantics (add --delete and any necessary --delete-excluded) when invoking
rsync, or remove the destination contents (e.g., rm -rf "$DATA_DIR"/*) before
rsync to ensure stale files are removed; keep existing logging to $log_file and
use colorized_echo for status messages, and retain detect_os/install_package
checks for rsync as-is.
- Around line 855-882: The preserve_db_credentials flag logic fails to detect
when only the MySQL root password changed; update the conditional that sets
preserve_db_credentials (the block referencing current_db_user, current_db_name,
current_db_password) to also check current_mysql_root_password vs
${MYSQL_ROOT_PASSWORD:-} so the flag is true when the rotated root password
differs; ensure the same variable name current_mysql_root_password is used and
that when preserve_db_credentials is true the existing code still calls
replace_or_append_env_var "MYSQL_ROOT_PASSWORD" "$current_mysql_root_password"
to keep the live MYSQL_ROOT_PASSWORD intact in the ENV_FILE.
- Around line 786-803: The postgres-superuser fallback currently connects to -d
"$db_name" (parsed from the backup) which can target the wrong DB; change the
superuser attempt in the docker exec command that uses -U postgres to connect to
-d "$restore_db_name" (the resolved target that prefers current_db_name) instead
of "$db_name", and ensure any success/failure messages still reference
$restore_db_name (symbols: $restore_user, $restore_db_name, $db_name,
temp_restore_dir/db_backup.sql, the docker exec psql lines).
📒 Files selected for processing (2)
lib/pasarguard-backup.sh, lib/pasarguard-restore.sh
```bash
colorized_echo blue "Copying data directory..."
# Ensure destination directory exists and is empty (already cleaned above, but be explicit)
if [ -d "$DATA_DIR" ]; then
    if ! rsync -av --exclude 'xray-core' --exclude 'mysql' "$DATA_DIR/" "$temp_dir/pasarguard_data/" >>"$log_file" 2>&1; then
        error_messages+=("Failed to copy data directory.")
        echo "Failed to copy data directory" >>"$log_file"
    fi
else
    colorized_echo yellow "Data directory $DATA_DIR does not exist. Skipping data directory backup."
    echo "Data directory $DATA_DIR does not exist. Skipping." >>"$log_file"
    # Create empty directory structure so tar doesn't fail
    mkdir -p "$temp_dir/pasarguard_data"
fi
```
Data-directory exclude list misses non-MySQL DB engines.
rsync only excludes mysql/, but the script also supports mariadb, postgresql, and timescaledb deployments. If those engines mount their data under $DATA_DIR/<engine>/, those raw on-disk files will be silently copied into pasarguard_data/ alongside the proper SQL dump, bloating the archive and potentially producing inconsistent state during restore.
Consider broadening the exclude list (or driving it from $db_type) so the raw DB files are never included regardless of engine.
♻️ Suggested change

```diff
-    if ! rsync -av --exclude 'xray-core' --exclude 'mysql' "$DATA_DIR/" "$temp_dir/pasarguard_data/" >>"$log_file" 2>&1; then
+    if ! rsync -av \
+        --exclude 'xray-core' \
+        --exclude 'mysql' \
+        --exclude 'mariadb' \
+        --exclude 'postgresql' \
+        --exclude 'timescaledb' \
+        "$DATA_DIR/" "$temp_dir/pasarguard_data/" >>"$log_file" 2>&1; then
```
+ "$DATA_DIR/" "$temp_dir/pasarguard_data/" >>"$log_file" 2>&1; then📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| colorized_echo blue "Copying data directory..." | |
| # Ensure destination directory exists and is empty (already cleaned above, but be explicit) | |
| if [ -d "$DATA_DIR" ]; then | |
| if ! rsync -av --exclude 'xray-core' --exclude 'mysql' "$DATA_DIR/" "$temp_dir/pasarguard_data/" >>"$log_file" 2>&1; then | |
| error_messages+=("Failed to copy data directory.") | |
| echo "Failed to copy data directory" >>"$log_file" | |
| fi | |
| else | |
| colorized_echo yellow "Data directory $DATA_DIR does not exist. Skipping data directory backup." | |
| echo "Data directory $DATA_DIR does not exist. Skipping." >>"$log_file" | |
| # Create empty directory structure so tar doesn't fail | |
| mkdir -p "$temp_dir/pasarguard_data" | |
| fi | |
| colorized_echo blue "Copying data directory..." | |
| # Ensure destination directory exists and is empty (already cleaned above, but be explicit) | |
| if [ -d "$DATA_DIR" ]; then | |
| if ! rsync -av \ | |
| --exclude 'xray-core' \ | |
| --exclude 'mysql' \ | |
| --exclude 'mariadb' \ | |
| --exclude 'postgresql' \ | |
| --exclude 'timescaledb' \ | |
| "$DATA_DIR/" "$temp_dir/pasarguard_data/" >>"$log_file" 2>&1; then | |
| error_messages+=("Failed to copy data directory.") | |
| echo "Failed to copy data directory" >>"$log_file" | |
| fi | |
| else | |
| colorized_echo yellow "Data directory $DATA_DIR does not exist. Skipping data directory backup." | |
| echo "Data directory $DATA_DIR does not exist. Skipping." >>"$log_file" | |
| # Create empty directory structure so tar doesn't fail | |
| mkdir -p "$temp_dir/pasarguard_data" | |
| fi |
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@lib/pasarguard-backup.sh` around lines 1322 - 1334, The rsync exclude list
currently only omits "mysql" causing raw DB files for other engines to be
copied; update the rsync call that references "$DATA_DIR/" and
"$temp_dir/pasarguard_data/" to exclude all DB engine directories (e.g. "mysql",
"mariadb", "postgresql", "timescaledb") or dynamically build the exclude list
from the deployment DB type (the variable that identifies DB engine), so raw
on-disk DB folders are never copied alongside the SQL dump and the archive stays
consistent.
```bash
colorized_echo blue "Attempting restore using app user '$restore_user' to database '$restore_db_name'..."
if docker exec -i "$container_name" psql -v ON_ERROR_STOP=1 -U "$restore_user" -d "$restore_db_name" < "$temp_restore_dir/db_backup.sql" 2>>"$log_file"; then
    colorized_echo green "$db_type database restored successfully."
    restore_success=true
else
    # If that fails, try using postgres superuser
    colorized_echo yellow "Trying with postgres superuser..."
    if docker exec -i "$container_name" psql -v ON_ERROR_STOP=1 -U postgres -d "$db_name" < "$temp_restore_dir/db_backup.sql" 2>>"$log_file"; then
        colorized_echo green "$db_type database restored successfully."
        restore_success=true
    else
        # Try restoring to postgres database (for pg_dumpall backups)
        if docker exec -i "$container_name" psql -v ON_ERROR_STOP=1 -U postgres -d postgres < "$temp_restore_dir/db_backup.sql" 2>>"$log_file"; then
            colorized_echo green "$db_type database restored successfully."
            restore_success=true
        fi
    fi
fi
```
Postgres superuser fallback connects to the backup's DB name instead of the resolved target.
Line 787 correctly targets $restore_db_name (which prefers current_db_name), but the fallback at Line 793 still uses $db_name (parsed from the backup's SQLALCHEMY_DATABASE_URL). On cross-server restores or when the local DB has been renamed, the superuser fallback will hit the wrong database (or one that doesn't exist on this host) and fail or — worse — silently restore into an unintended DB.
🛠️ Suggested fix

```diff
-        if docker exec -i "$container_name" psql -v ON_ERROR_STOP=1 -U postgres -d "$db_name" < "$temp_restore_dir/db_backup.sql" 2>>"$log_file"; then
+        if docker exec -i "$container_name" psql -v ON_ERROR_STOP=1 -U postgres -d "$restore_db_name" < "$temp_restore_dir/db_backup.sql" 2>>"$log_file"; then
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@lib/pasarguard-restore.sh` around lines 786 - 803, The postgres-superuser
fallback currently connects to -d "$db_name" (parsed from the backup) which can
target the wrong DB; change the superuser attempt in the docker exec command
that uses -U postgres to connect to -d "$restore_db_name" (the resolved target
that prefers current_db_name) instead of "$db_name", and ensure any
success/failure messages still reference $restore_db_name (symbols:
$restore_user, $restore_db_name, $db_name, temp_restore_dir/db_backup.sql, the
docker exec psql lines).
```bash
colorized_echo blue "Restoring data directory..."
local extracted_data_dir="$temp_restore_dir/pasarguard_data"
if [ -d "$extracted_data_dir" ]; then
    if ! command -v rsync >/dev/null 2>&1; then
        detect_os
        install_package rsync
    fi
    mkdir -p "$DATA_DIR"
    if ! rsync -a "$extracted_data_dir/" "$DATA_DIR/" 2>>"$log_file"; then
        colorized_echo red "Failed to restore data directory."
        echo "Failed to restore data directory from $extracted_data_dir to $DATA_DIR" >>"$log_file"
        rm -rf "$temp_restore_dir"
        exit 1
    fi
    colorized_echo green "Data directory restored to $DATA_DIR."
else
    colorized_echo yellow "No pasarguard_data directory found in backup. Skipping data restore."
fi
```
Data-directory restore merges instead of overwriting — stale files survive.
rsync -a without --delete only adds/updates files from the backup; anything currently in $DATA_DIR that isn't in the backup stays behind. After a restore the data directory can end up mixing pre- and post-restore state (stale certs, orphaned configs, removed user assets), which defeats the purpose of "restore" and can cause subtle runtime inconsistencies.
Consider either using rsync -a --delete or wiping the destination before the rsync — and either way, take a timestamped backup of $DATA_DIR first (mirroring what's already done for .env and docker-compose.yml).
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@lib/pasarguard-restore.sh` around lines 828 - 845, The restore currently uses
rsync -a which merges instead of overwriting; update the restore logic around
extracted_data_dir/DATA_DIR in the pasarguard-restore flow to (1) create a
timestamped backup of the current $DATA_DIR (similar to how .env and
docker-compose.yml are backed up) into $temp_restore_dir or a backups dir, and
then (2) either run rsync with deletion semantics (add --delete and any
necessary --delete-excluded) when invoking rsync, or remove the destination
contents (e.g., rm -rf "$DATA_DIR"/*) before rsync to ensure stale files are
removed; keep existing logging to $log_file and use colorized_echo for status
messages, and retain detect_os/install_package checks for rsync as-is.
```bash
local preserve_db_credentials=false
if [[ "$db_type" != "sqlite" ]]; then
    if [ -n "$current_db_user" ] && [ -n "${DB_USER:-}" ] && [ "$current_db_user" != "$DB_USER" ]; then
        preserve_db_credentials=true
    elif [ -n "$current_db_name" ] && [ -n "${DB_NAME:-}" ] && [ "$current_db_name" != "$DB_NAME" ]; then
        preserve_db_credentials=true
    elif [ -n "$current_db_password" ] && [ -n "${DB_PASSWORD:-}" ] && [ "$current_db_password" != "$DB_PASSWORD" ]; then
        preserve_db_credentials=true
    fi
fi

if [ "$preserve_db_credentials" = true ]; then
    colorized_echo yellow "Database credentials in backup differ from current installation; preserving current database credentials."
    if [ -n "$current_mysql_root_password" ]; then
        replace_or_append_env_var "MYSQL_ROOT_PASSWORD" "$current_mysql_root_password" true "$ENV_FILE"
    fi
    if [ -n "$current_db_user" ]; then
        replace_or_append_env_var "DB_USER" "$current_db_user" false "$ENV_FILE"
    fi
    if [ -n "$current_db_name" ]; then
        replace_or_append_env_var "DB_NAME" "$current_db_name" false "$ENV_FILE"
    fi
    if [ -n "$current_db_password" ]; then
        replace_or_append_env_var "DB_PASSWORD" "$current_db_password" false "$ENV_FILE"
    fi
    if [ -n "$current_sqlalchemy_url" ]; then
        replace_or_append_env_var "SQLALCHEMY_DATABASE_URL" "$current_sqlalchemy_url" true "$ENV_FILE"
    fi
fi
```
preserve_db_credentials doesn't trigger when only MYSQL_ROOT_PASSWORD differs.
The trigger conditions on Lines 857-863 only check DB_USER, DB_NAME, and DB_PASSWORD. If a deployment rotated only MYSQL_ROOT_PASSWORD (app credentials unchanged), this block is skipped and the restored .env silently reverts to the backup's stale root password — leaving the live container with a root credential that no longer matches what's on disk.
Add MYSQL_ROOT_PASSWORD to the comparison so preservation kicks in when only the root password was rotated.
🛠️ Suggested fix

```diff
     if [[ "$db_type" != "sqlite" ]]; then
         if [ -n "$current_db_user" ] && [ -n "${DB_USER:-}" ] && [ "$current_db_user" != "$DB_USER" ]; then
             preserve_db_credentials=true
         elif [ -n "$current_db_name" ] && [ -n "${DB_NAME:-}" ] && [ "$current_db_name" != "$DB_NAME" ]; then
             preserve_db_credentials=true
         elif [ -n "$current_db_password" ] && [ -n "${DB_PASSWORD:-}" ] && [ "$current_db_password" != "$DB_PASSWORD" ]; then
             preserve_db_credentials=true
+        elif [ -n "$current_mysql_root_password" ] && [ -n "${MYSQL_ROOT_PASSWORD:-}" ] && [ "$current_mysql_root_password" != "$MYSQL_ROOT_PASSWORD" ]; then
+            preserve_db_credentials=true
         fi
     fi
```
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@lib/pasarguard-restore.sh` around lines 855 - 882, The
preserve_db_credentials flag logic fails to detect when only the MySQL root
password changed; update the conditional that sets preserve_db_credentials (the
block referencing current_db_user, current_db_name, current_db_password) to also
check current_mysql_root_password vs ${MYSQL_ROOT_PASSWORD:-} so the flag is
true when the rotated root password differs; ensure the same variable name
current_mysql_root_password is used and that when preserve_db_credentials is
true the existing code still calls replace_or_append_env_var
"MYSQL_ROOT_PASSWORD" "$current_mysql_root_password" to keep the live
MYSQL_ROOT_PASSWORD intact in the ENV_FILE.
Summary by CodeRabbit

- Documentation
- New Features
- Refactor