Fix: Price caching now expires instead of lasting forever #11
Conversation
- Changed permanent=True to permanent=False for game_details_cache in all populate scripts
- Changed permanent=True to permanent=False in steam_helpers, plugin_admin_actions, and admin_commands
- Added configurable ITAD cache TTL (itad_prices: 336 hours default) to config-template.yml
- Added ITAD_CACHE_TTL variable to config.py
- Updated utils.py to use configurable ITAD_CACHE_TTL instead of hardcoded 336 hours
- Standardizes ITAD caching TTL across all code paths

Resolves issue #10: Inconsistent price caching TTLs and permanent storage of current prices
📝 Walkthrough

Added configurable hour-based TTLs for Steam-family caches and switched multiple cache writes (Steam game details and ITAD prices) from permanent storage to TTL-bound caching; introduced an ITAD cache TTL constant, updated DB timestamp formatting, and added tooling to inspect the cache tables.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant PopScript as Populate Script
    participant SteamAPI as Steam API
    participant ITADAPI as ITAD API
    participant DB as Cache DB
    PopScript->>SteamAPI: fetch appdetails(app_id)
    SteamAPI-->>PopScript: game_details (includes price_overview)
    PopScript->>DB: cache_game_details(app_id, game_details, permanent=False, cache_hours=GAME_DETAILS_CACHE_TTL)
    PopScript->>ITADAPI: fetch historyLow(app_id/name)
    ITADAPI-->>PopScript: historyLow (price data)
    PopScript->>DB: cache_itad_price_enhanced(app_id, itad_data, permanent=False, cache_hours=ITAD_CACHE_TTL)
```
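The TTL-bound write pattern described in the walkthrough can be sketched in a few lines. This is a minimal, self-contained illustration: the `price_cache` schema and the `cache_write`/`cache_read` helpers are hypothetical stand-ins, not the repository's actual `cache_game_details`/`cache_itad_price_enhanced` API.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

def cache_write(conn, appid, data, cache_hours=336):
    # Instead of permanent=True, derive expires_at from a configurable TTL
    now = datetime.now(timezone.utc)
    expires_at = (now + timedelta(hours=cache_hours)).isoformat()
    conn.execute(
        "INSERT OR REPLACE INTO price_cache (appid, data, permanent, cached_at, expires_at) "
        "VALUES (?, ?, 0, ?, ?)",
        (appid, data, now.isoformat(), expires_at),
    )

def cache_read(conn, appid):
    # A row is served only while it is permanent or not yet expired
    now = datetime.now(timezone.utc).isoformat()
    row = conn.execute(
        "SELECT data FROM price_cache WHERE appid = ? AND (permanent = 1 OR expires_at > ?)",
        (appid, now),
    ).fetchone()
    return row[0] if row else None

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE price_cache (appid INTEGER PRIMARY KEY, data TEXT, "
    "permanent INTEGER, cached_at TEXT, expires_at TEXT)"
)
cache_write(conn, 440, '{"price": "9.99"}', cache_hours=336)
print(cache_read(conn, 440))  # → {"price": "9.99"} while the 336-hour TTL holds
```

Because `expires_at` and the lookup timestamp share the same ISO-8601 format, the string comparison in SQL orders chronologically — the same trick the `VALID`/`EXPIRED` CASE expression in `check_price_cache` relies on.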
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
scripts/populate_prices_optimized.py (1)
552-559: ⚠️ Potential issue | 🟠 Major

Use `ITAD_CACHE_TTL` in the optimized ITAD writer.

This call still inherits `cache_itad_price_enhanced(..., cache_hours=6)`, and the fallback block below still writes permanently. The optimized path therefore remains out of sync with the new configurable ITAD TTL.

💡 Proposed fix

```diff
-from familybot.config import ITAD_API_KEY, STEAMWORKS_API_KEY  # pylint: disable=wrong-import-position
+from familybot.config import (  # pylint: disable=wrong-import-position
+    ITAD_API_KEY,
+    ITAD_CACHE_TTL,
+    STEAMWORKS_API_KEY,
+)
@@
 cache_itad_price_enhanced(
     app_id,
     price_data,
     lookup_method=lookup_method,
     steam_game_name=game_name,
     permanent=False,
+    cache_hours=ITAD_CACHE_TTL,
     conn=conn,
 )
@@
 cache_itad_price_enhanced(
     app_id,
     price_data,
     lookup_method=lookup_method,
     steam_game_name=game_name,
-    permanent=True,
+    permanent=False,
+    cache_hours=ITAD_CACHE_TTL,
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/populate_prices_optimized.py` around lines 552 - 559, The optimized ITAD writer currently calls cache_itad_price_enhanced(app_id, price_data, lookup_method=lookup_method, steam_game_name=game_name, permanent=False, conn=conn) but still relies on the old 6-hour default and the fallback branch writes permanently; update both the optimized call and the fallback write to use the configured TTL by passing cache_hours=ITAD_CACHE_TTL to cache_itad_price_enhanced (instead of relying on permanent/defaults) so the optimized path and fallback honor the ITAD_CACHE_TTL constant; ensure you reference cache_itad_price_enhanced and ITAD_CACHE_TTL when making the changes.

scripts/populate_prices.py (1)
443-452: ⚠️ Potential issue | 🟠 Major

Pass `ITAD_CACHE_TTL` through `cache_itad_price_enhanced`.

`src/familybot/lib/database.py` defaults `cache_itad_price_enhanced(..., cache_hours=6)`, so flipping only `permanent=False` here makes populate runs expire historical lows after 6 hours instead of the new configurable TTL.

💡 Proposed fix

```diff
-from familybot.config import ITAD_API_KEY, STEAMWORKS_API_KEY  # pylint: disable=wrong-import-position
+from familybot.config import (  # pylint: disable=wrong-import-position
+    ITAD_API_KEY,
+    ITAD_CACHE_TTL,
+    STEAMWORKS_API_KEY,
+)
@@
 cache_itad_price_enhanced(
     app_id,
     {
         "lowest_price": str(history_low["amount"]),
         "lowest_price_formatted": f"${history_low['amount']}",
         "shop_name": "Historical Low (All Stores)",
     },
     lookup_method="appid",
     permanent=False,
+    cache_hours=ITAD_CACHE_TTL,
 )
@@
 cache_itad_price_enhanced(
     app_id,
     {
         "lowest_price": str(history_low["amount"]),
         "lowest_price_formatted": f"${history_low['amount']}",
         "shop_name": "Historical Low (All Stores)",
     },
     lookup_method="name_search",
     steam_game_name=game_name,
     permanent=False,
+    cache_hours=ITAD_CACHE_TTL,
 )
```

Also applies to: 505-515
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/populate_prices.py` around lines 443 - 452, The call to cache_itad_price_enhanced is leaving cache_hours at the default 6 hours, causing historical lows to expire incorrectly; update the calls to pass the configured ITAD_CACHE_TTL value (e.g., cache_hours=ITAD_CACHE_TTL) into cache_itad_price_enhanced for the historical-low caches (the call shown and the similar block around the 505-515 range) so the TTL uses the configurable setting rather than the hardcoded default.

scripts/populate_prices_async.py (2)
462-489: ⚠️ Potential issue | 🟠 Major

Mirror the TTL behavior in the Steam recovery path.

The transactional write now uses `permanent=False`, but the salvage path still writes recovered records with `permanent=True` on line 488. If a batch rolls back, those `price_overview` entries become non-expiring again, which undercuts the stale-price fix for exactly the rows you recover.

💡 Suggested fix

```diff
 if source == "steam_library":
     cache_game_details_with_source(app_id, game_data, source)
 else:
-    cache_game_details(app_id, game_data, permanent=True)
+    cache_game_details(app_id, game_data, permanent=False)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/populate_prices_async.py` around lines 462 - 489, The recovery loop is writing recovered records with permanent=True and without the DB connection, which contradicts the transactional path that uses permanent=False and conn=conn; update the salvage branch so calls to cache_game_details_with_source(app_id, game_data, source) and cache_game_details(app_id, game_data, permanent=True) mirror the transactional behavior by passing the same conn and using permanent=False (i.e., call cache_game_details_with_source(app_id, game_data, source, conn=conn) and cache_game_details(app_id, game_data, permanent=False, conn=conn)).
526-555: ⚠️ Potential issue | 🟠 Major

Keep ITAD recovery writes non-permanent too.

This has the same inconsistency as the Steam path: the transaction uses `permanent=False`, but the per-record fallback still uses `permanent=True` on line 554. A batch failure will therefore leave recovered ITAD rows immune to `ITAD_CACHE_TTL`.

💡 Suggested fix

```diff
 cache_itad_price_enhanced(
     app_id,
     price_data,
     lookup_method=lookup_method,
     steam_game_name=game_name,
-    permanent=True,
+    permanent=False,
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/populate_prices_async.py` around lines 526 - 555, The per-record fallback in the ITAD batch recovery is incorrectly marking recovered rows permanent whereas the batch path uses temporary writes; in the exception handler for the batch (around the cache_itad_price_enhanced calls inside the for app_id, price_info in batch loop) change the call to cache_itad_price_enhanced to pass permanent=False (same as the batch path) and preserve the other args (app_id, price_data, lookup_method, steam_game_name) so recovered records remain subject to ITAD_CACHE_TTL.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/populate_prices_optimized.py`:
- Around line 504-506: The per-record recovery block still calls
cache_game_details(..., permanent=True) which reintroduces never-expiring Steam
entries; change that call to use permanent=False (and keep conn parameter usage
consistent with the batch path) so recovered writes remain TTL-bound like the
batch transaction and do not create permanent rows.
---
Outside diff comments:
In `@scripts/populate_prices_async.py`:
- Around line 462-489: The recovery loop is writing recovered records with
permanent=True and without the DB connection, which contradicts the
transactional path that uses permanent=False and conn=conn; update the salvage
branch so calls to cache_game_details_with_source(app_id, game_data, source) and
cache_game_details(app_id, game_data, permanent=True) mirror the transactional
behavior by passing the same conn and using permanent=False (i.e., call
cache_game_details_with_source(app_id, game_data, source, conn=conn) and
cache_game_details(app_id, game_data, permanent=False, conn=conn)).
- Around line 526-555: The per-record fallback in the ITAD batch recovery is
incorrectly marking recovered rows permanent whereas the batch path uses
temporary writes; in the exception handler for the batch (around the
cache_itad_price_enhanced calls inside the for app_id, price_info in batch loop)
change the call to cache_itad_price_enhanced to pass permanent=False (same as
the batch path) and preserve the other args (app_id, price_data, lookup_method,
steam_game_name) so recovered records remain subject to ITAD_CACHE_TTL.
In `@scripts/populate_prices_optimized.py`:
- Around line 552-559: The optimized ITAD writer currently calls
cache_itad_price_enhanced(app_id, price_data, lookup_method=lookup_method,
steam_game_name=game_name, permanent=False, conn=conn) but still relies on the
old 6-hour default and the fallback branch writes permanently; update both the
optimized call and the fallback write to use the configured TTL by passing
cache_hours=ITAD_CACHE_TTL to cache_itad_price_enhanced (instead of relying on
permanent/defaults) so the optimized path and fallback honor the ITAD_CACHE_TTL
constant; ensure you reference cache_itad_price_enhanced and ITAD_CACHE_TTL when
making the changes.
In `@scripts/populate_prices.py`:
- Around line 443-452: The call to cache_itad_price_enhanced is leaving
cache_hours at the default 6 hours, causing historical lows to expire
incorrectly; update the calls to pass the configured ITAD_CACHE_TTL value (e.g.,
cache_hours=ITAD_CACHE_TTL) into cache_itad_price_enhanced for the
historical-low caches (the call shown and the similar block around the 505-515
range) so the TTL uses the configurable setting rather than the hardcoded
default.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: cf06c2b9-4097-478c-9868-b09984d08796
📒 Files selected for processing (9)
- config-template.yml
- scripts/populate_prices.py
- scripts/populate_prices_async.py
- scripts/populate_prices_optimized.py
- src/familybot/config.py
- src/familybot/lib/admin_commands.py
- src/familybot/lib/plugin_admin_actions.py
- src/familybot/lib/steam_helpers.py
- src/familybot/lib/utils.py
…ITAD_CACHE_TTL

- Added ITAD_CACHE_TTL import to all three populate scripts
- Fixed recovery blocks in populate_prices_optimized.py to use permanent=False and conn parameter
- Fixed recovery blocks in populate_prices_async.py to use permanent=False, cache_hours=ITAD_CACHE_TTL, and conn parameter
- Added cache_hours=ITAD_CACHE_TTL to cache_itad_price_enhanced calls in populate_prices.py
- Added cache_hours=ITAD_CACHE_TTL to batch_write_itad_data in populate_prices_optimized.py

This ensures:
1. All cache entries use TTL-based expiration consistently
2. Recovery paths match the main transactional paths
3. ITAD cache uses configurable ITAD_CACHE_TTL instead of default 6 hours
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
scripts/populate_prices_optimized.py (1)
556-563: ⚠️ Potential issue | 🟠 Major

Use `ITAD_CACHE_TTL` on the main ITAD write path too.

Lines 556-563 set `permanent=False`, but without `cache_hours` the helper falls back to its 6-hour default. The optimized path will therefore churn ITAD entries far more aggressively than configured, while the salvage path below uses the correct TTL.

💡 Proposed fix

```diff
 cache_itad_price_enhanced(
     app_id,
     price_data,
     lookup_method=lookup_method,
     steam_game_name=game_name,
     permanent=False,
+    cache_hours=ITAD_CACHE_TTL,
     conn=conn,
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/populate_prices_optimized.py` around lines 556 - 563, The call to cache_itad_price_enhanced in the optimized write path currently sets permanent=False but omits cache_hours, causing it to use the 6-hour default; change the call to pass cache_hours=ITAD_CACHE_TTL (the same TTL used by the salvage path) so cache_itad_price_enhanced(app_id, price_data, lookup_method=lookup_method, steam_game_name=game_name, permanent=False, cache_hours=ITAD_CACHE_TTL, conn=conn) to ensure consistent TTL handling.

scripts/populate_prices_async.py (1)
530-537: ⚠️ Potential issue | 🟠 Major

Pass `ITAD_CACHE_TTL` in the normal ITAD batch write.

Lines 530-537 still rely on `cache_itad_price_enhanced`'s default `cache_hours=6`, so the primary async path expires historical-low data after 6 hours while the recovery path below uses the configured TTL. That reintroduces the inconsistency this PR is trying to remove.

💡 Proposed fix

```diff
 cache_itad_price_enhanced(
     app_id,
     price_data,
     lookup_method=lookup_method,
     steam_game_name=game_name,
     permanent=False,
+    cache_hours=ITAD_CACHE_TTL,
     conn=conn,
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/populate_prices_async.py` around lines 530 - 537, The normal async batch call to cache_itad_price_enhanced is relying on its default cache_hours=6; change that call to pass the configured TTL by adding cache_hours=ITAD_CACHE_TTL (the same TTL used in the recovery path) so both paths use the same expiry; update the call site that currently invokes cache_itad_price_enhanced(app_id, price_data, lookup_method=..., steam_game_name=..., permanent=False, conn=...) to include cache_hours=ITAD_CACHE_TTL and ensure ITAD_CACHE_TTL is imported/available in the module.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/check_price_cache.py`:
- Around line 14-138: The check_price_cache function is too long; extract the
three logical blocks into helpers and ensure the DB connection is closed
robustly. Create _print_game_details_cache(cursor, now) containing the
game_details_cache SELECT/print loop, _print_itad_price_cache(cursor, now) for
the itad_price_cache SELECT/print loop, and _print_summary(cursor) for the
summary COUNT queries/prints; then simplify check_price_cache to open conn via
get_db_connection(), obtain cursor, compute now, call those three helpers, and
wrap the cursor/connection usage in a try/finally (or context manager) to
guarantee conn.close() even on error. Ensure helper names match exactly
(_print_game_details_cache, _print_itad_price_cache, _print_summary) so callers
remain clear.
- Around line 14-138: The function check_price_cache currently opens a DB
connection and cursor but calls conn.close() only at the end, which leaks the
connection if any query raises; wrap the main body of check_price_cache in a
try/finally (or use a context manager) so conn.close() always runs, and also
ensure the cursor is closed (cursor.close()) in the finally block; locate the
code in check_price_cache (references: conn, cursor, conn.close()) and
move/ensure cleanup into the finally to guarantee resources are released on
exceptions.
- Line 19: Replace use of timezone.utc with the Python 3.11+ alias datetime.UTC:
change the expression datetime.now(timezone.utc).isoformat() + "Z" to
datetime.now(datetime.UTC).isoformat() + "Z" (and remove any unused timezone
import if applicable) so the code follows the Ruff UP017 recommendation; ensure
the module still imports datetime in the form used (i.e., from datetime import
datetime) or adjust imports accordingly.
In `@scripts/populate_prices_async.py`:
- Around line 486-492: The salvage writes using cache_game_details_with_source
and cache_game_details are being executed with an explicit conn passed in so
they are not auto-committed; after each call you must explicitly commit the
transaction (e.g., call conn.commit()) or update those helper functions to
commit when given a connection; add a conn.commit() (or equivalent commit call)
immediately after each call to cache_game_details_with_source(...) and
cache_game_details(...) in the salvage/rollback recovery paths so each
per-record insert is actually persisted.
In `@scripts/populate_prices_optimized.py`:
- Around line 520-526: The recovery path uses the shared conn and calls
cache_game_details_with_source and cache_game_details after a prior rollback but
never commits, so salvaged rows are lost; after each call that writes via conn
in the rollback recovery flow (e.g., calls to
cache_game_details_with_source(app_id, game_data, source, conn=conn) and
cache_game_details(app_id, game_data, permanent=False, conn=conn)), ensure you
call conn.commit() (or the existing transaction-commit helper) once the salvage
write succeeds (and only rollback on errors) so the salvaged rows are persisted
before closing the connection.
---
Outside diff comments:
In `@scripts/populate_prices_async.py`:
- Around line 530-537: The normal async batch call to cache_itad_price_enhanced
is relying on its default cache_hours=6; change that call to pass the configured
TTL by adding cache_hours=ITAD_CACHE_TTL (the same TTL used in the recovery
path) so both paths use the same expiry; update the call site that currently
invokes cache_itad_price_enhanced(app_id, price_data, lookup_method=...,
steam_game_name=..., permanent=False, conn=...) to include
cache_hours=ITAD_CACHE_TTL and ensure ITAD_CACHE_TTL is imported/available in
the module.
In `@scripts/populate_prices_optimized.py`:
- Around line 556-563: The call to cache_itad_price_enhanced in the optimized
write path currently sets permanent=False but omits cache_hours, causing it to
use the 6-hour default; change the call to pass cache_hours=ITAD_CACHE_TTL (the
same TTL used by the salvage path) so cache_itad_price_enhanced(app_id,
price_data, lookup_method=lookup_method, steam_game_name=game_name,
permanent=False, cache_hours=ITAD_CACHE_TTL, conn=conn) to ensure consistent TTL
handling.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 99718368-568d-476b-985f-279b57a53ac0
📒 Files selected for processing (5)
- justfile
- scripts/check_price_cache.py
- scripts/populate_prices.py
- scripts/populate_prices_async.py
- scripts/populate_prices_optimized.py
scripts/check_price_cache.py
Outdated
```python
def check_price_cache():
    """Check game_details_cache and itad_price_cache tables."""
    conn = get_db_connection()
    cursor = conn.cursor()

    now = datetime.now(timezone.utc).isoformat() + "Z"

    print("=" * 80)
    print("GAME DETAILS CACHE (Steam prices)")
    print("=" * 80)

    cursor.execute(
        """
        SELECT appid, name, permanent, cached_at, expires_at,
            CASE
                WHEN permanent = 1 THEN 'PERMANENT (never expires)'
                WHEN expires_at > ? THEN 'VALID'
                ELSE 'EXPIRED'
            END as status
        FROM game_details_cache
        ORDER BY cached_at DESC
        LIMIT 20
        """,
        (now,),
    )

    rows = cursor.fetchall()
    if not rows:
        print(" No entries found in game_details_cache")
    else:
        print(f" Showing {len(rows)} most recent entries:\n")
        print(
            f" {'AppID':<12} {'Name':<30} {'Permanent':<10} {'Status':<20} {'Expires At'}"
        )
        print(" " + "-" * 95)
        for row in rows:
            appid = row[0]
            name = (row[1] or "Unknown")[:28]
            permanent = "YES" if row[2] else "NO"
            status = row[5]
            expires = row[4] or "Never"
            if expires != "Never":
                expires = expires[:19]
            print(f" {appid:<12} {name:<30} {permanent:<10} {status:<20} {expires}")

    print("\n" + "=" * 80)
    print("ITAD PRICE CACHE (Historical lows)")
    print("=" * 80)

    cursor.execute(
        """
        SELECT appid, lowest_price_formatted, permanent, cached_at, expires_at,
            CASE
                WHEN permanent = 1 THEN 'PERMANENT (never expires)'
                WHEN expires_at > ? THEN 'VALID'
                ELSE 'EXPIRED'
            END as status
        FROM itad_price_cache
        ORDER BY cached_at DESC
        LIMIT 20
        """,
        (now,),
    )

    rows = cursor.fetchall()
    if not rows:
        print(" No entries found in itad_price_cache")
    else:
        print(f" Showing {len(rows)} most recent entries:\n")
        print(
            f" {'AppID':<12} {'Price':<15} {'Permanent':<10} {'Status':<20} {'Expires At'}"
        )
        print(" " + "-" * 80)
        for row in rows:
            appid = row[0]
            price = row[1] or "N/A"
            permanent = "YES" if row[2] else "NO"
            status = row[5]
            expires = row[4] or "Never"
            if expires != "Never":
                expires = expires[:19]
            print(f" {appid:<12} {price:<15} {permanent:<10} {status:<20} {expires}")

    # Summary statistics
    print("\n" + "=" * 80)
    print("SUMMARY")
    print("=" * 80)

    cursor.execute("SELECT COUNT(*) FROM game_details_cache")
    total_game = cursor.fetchone()[0]

    cursor.execute("SELECT COUNT(*) FROM game_details_cache WHERE permanent = 1")
    permanent_game = cursor.fetchone()[0]

    cursor.execute("SELECT COUNT(*) FROM game_details_cache WHERE permanent = 0")
    ttl_game = cursor.fetchone()[0]

    cursor.execute("SELECT COUNT(*) FROM itad_price_cache")
    total_itad = cursor.fetchone()[0]

    cursor.execute("SELECT COUNT(*) FROM itad_price_cache WHERE permanent = 1")
    permanent_itad = cursor.fetchone()[0]

    cursor.execute("SELECT COUNT(*) FROM itad_price_cache WHERE permanent = 0")
    ttl_itad = cursor.fetchone()[0]

    print("\n Game Details Cache:")
    print(f" Total entries: {total_game}")
    print(f" Permanent (never expires): {permanent_game}")
    print(f" TTL-based (will expire): {ttl_game}")

    print("\n ITAD Price Cache:")
    print(f" Total entries: {total_itad}")
    print(f" Permanent (never expires): {permanent_itad}")
    print(f" TTL-based (will expire): {ttl_itad}")

    if permanent_game > 0:
        print(
            f"\n ⚠️ WARNING: {permanent_game} game details entries are still marked as permanent!"
        )
        print(" These will need to be refreshed or will show stale prices.")
    else:
        print("\n ✅ All game details entries use TTL-based expiration.")

    conn.close()
```
🧹 Nitpick | 🔵 Trivial
Consider refactoring to reduce function length.
The function has 71 statements, exceeding the typical 50-statement guideline (flagged by Ruff PLR0915). For a diagnostic script this is acceptable, but extracting helper functions would improve maintainability.
♻️ Suggested extraction pattern

```python
def _print_game_details_cache(cursor, now):
    """Display game_details_cache entries."""
    # Lines 21-57 extracted here

def _print_itad_price_cache(cursor, now):
    """Display itad_price_cache entries."""
    # Lines 59-95 extracted here

def _print_summary(cursor):
    """Display summary statistics."""
    # Lines 97-136 extracted here

def check_price_cache():
    """Check game_details_cache and itad_price_cache tables."""
    conn = get_db_connection()
    try:
        cursor = conn.cursor()
        now = datetime.now(timezone.utc).isoformat() + "Z"
        _print_game_details_cache(cursor, now)
        _print_itad_price_cache(cursor, now)
        _print_summary(cursor)
    finally:
        conn.close()
```

🧰 Tools
conn.close()🧰 Tools
🪛 Ruff (0.15.6)
[warning] 14-14: Too many statements (71 > 50)
(PLR0915)
[warning] 19-19: Use datetime.UTC alias
Convert to datetime.UTC alias
(UP017)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/check_price_cache.py` around lines 14 - 138, The check_price_cache
function is too long; extract the three logical blocks into helpers and ensure
the DB connection is closed robustly. Create _print_game_details_cache(cursor,
now) containing the game_details_cache SELECT/print loop,
_print_itad_price_cache(cursor, now) for the itad_price_cache SELECT/print loop,
and _print_summary(cursor) for the summary COUNT queries/prints; then simplify
check_price_cache to open conn via get_db_connection(), obtain cursor, compute
now, call those three helpers, and wrap the cursor/connection usage in a
try/finally (or context manager) to guarantee conn.close() even on error. Ensure
helper names match exactly (_print_game_details_cache, _print_itad_price_cache,
_print_summary) so callers remain clear.
Wrap database operations in try/finally to prevent connection leaks.
If any query or operation raises an exception, conn.close() on line 138 will be skipped, causing a resource leak. This is especially important for a diagnostic script that might encounter unexpected database states.
🛡️ Proposed fix using try/finally

```diff
 def check_price_cache():
     """Check game_details_cache and itad_price_cache tables."""
     conn = get_db_connection()
-    cursor = conn.cursor()
-
-    now = datetime.now(timezone.utc).isoformat() + "Z"
+    try:
+        cursor = conn.cursor()
+        now = datetime.now(timezone.utc).isoformat() + "Z"
         # ... rest of function body indented ...
-    conn.close()
+    finally:
+        conn.close()
```

🧰 Tools
🪛 Ruff (0.15.6)
[warning] 14-14: Too many statements (71 > 50)
(PLR0915)
[warning] 19-19: Use datetime.UTC alias
Convert to datetime.UTC alias
(UP017)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/check_price_cache.py` around lines 14 - 138, The function
check_price_cache currently opens a DB connection and cursor but calls
conn.close() only at the end, which leaks the connection if any query raises;
wrap the main body of check_price_cache in a try/finally (or use a context
manager) so conn.close() always runs, and also ensure the cursor is closed
(cursor.close()) in the finally block; locate the code in check_price_cache
(references: conn, cursor, conn.close()) and move/ensure cleanup into the
finally to guarantee resources are released on exceptions.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
scripts/populate_prices_async.py (2)
455-472: ⚠️ Potential issue | 🟠 Major

Keep the happy path atomic at the batch level.

These per-record commits defeat the surrounding batch transaction. If a later record fails, the fallback replays rows that were already persisted and `written_count` can drift upward.

💡 Suggested fix

```diff
 try:
     conn.execute("BEGIN TRANSACTION")
     for app_id, game_info in batch:
         game_data = game_info["data"]
         source = game_info["source"]
         if source == "steam_library":
             cache_game_details_with_source(
                 app_id, game_data, source, conn=conn
             )
         else:
             cache_game_details(
                 app_id, game_data, permanent=False, conn=conn
             )
-        conn.commit()
         written_count += 1
+    conn.commit()
```

```diff
 try:
     conn.execute("BEGIN TRANSACTION")
     for app_id, price_info in batch:
         price_data = price_info["data"]
         lookup_method = price_info["method"]
         game_name = price_info.get("game_name")
         cache_itad_price_enhanced(
             app_id,
             price_data,
             lookup_method=lookup_method,
             steam_game_name=game_name,
             permanent=False,
             cache_hours=ITAD_CACHE_TTL,
             conn=conn,
         )
-        conn.commit()
         written_count += 1
+    conn.commit()
```

Also applies to: 523-541
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/populate_prices_async.py` around lines 455 - 472, The loop is committing per-record which breaks the surrounding batch transaction and allows partial persistence and incorrect written_count; move the commit out of the for loop so the transaction begun with conn.execute("BEGIN TRANSACTION") is committed once after the entire batch loop completes, and ensure you rollback the transaction on exception; update the blocks that call cache_game_details_with_source(...) and cache_game_details(...) (where conn.commit() is currently inside the loop) to call conn.commit() after the loop finishes and add proper error handling to conn.rollback() in the except path; apply the same change to the other identical block around the cache_game_details(...) calls at lines 523-541.
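The batch-atomic shape the reviewer describes — writes accumulate in one transaction, a single commit lands after the loop, and any failure rolls the whole batch back — can be sketched against an in-memory SQLite database. The `cache` table and `write_batch` helper are illustrative stand-ins, not the populate scripts' real API:

```python
import sqlite3

def write_batch(conn, batch):
    """Persist a batch atomically: commit once after the loop, roll back on failure."""
    written = 0
    try:
        for app_id, payload in batch:
            conn.execute(
                "INSERT OR REPLACE INTO cache (appid, data) VALUES (?, ?)",
                (app_id, payload),
            )
            written += 1
        conn.commit()  # single commit: either every row lands or none does
        return written
    except Exception:
        conn.rollback()  # partial batch is discarded, so no rows get replayed later
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache (appid INTEGER PRIMARY KEY, data TEXT)")
print(write_batch(conn, [(10, "a"), (20, "b")]))  # → 2
```

With this shape, `written_count` only advances on rows that will actually be committed, so a failure mid-batch cannot leave the counter out of sync with the database.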
495-500: ⚠️ Potential issue | 🟡 Minor

Remove the compensating decrement on salvage failures.

A failed salvage write was never added to `written_count` in this path, so subtracting here can make the final totals negative or underreport successful writes from earlier batches.

💡 Suggested fix

```diff
 except Exception as individual_error:
     conn.rollback()
     logger.error(
         f"Failed to write individual Steam record {app_id}: {individual_error}"
     )
-    written_count -= 1  # Adjust count for failed individual writes
```

```diff
 except Exception as individual_error:
     conn.rollback()
     logger.error(
         f"Failed to write individual ITAD record {app_id}: {individual_error}"
     )
-    written_count -= 1  # Adjust count for failed individual writes
```

Also applies to: 566-571
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/populate_prices_async.py` around lines 495 - 500, The except block that handles individual salvage write failures should not decrement written_count because the write was never incremented on failure; remove the line "written_count -= 1" from the except handling around conn.rollback() and logger.error (the block that catches Exception as individual_error), and similarly remove the identical compensating decrement in the other except block around lines 566-571 so written_count only reflects successful increments performed when writes actually succeed.

scripts/populate_prices_optimized.py (1)
492-508: ⚠️ Potential issue | 🟠 Major

Restore batch-level commits on the happy path.

Committing inside these loops makes a later row failure partially persist the batch. The fallback then replays already-written rows, and `written_count` can overreport the actual number of newly persisted entries.

💡 Suggested fix

```diff
 try:
     conn.execute("BEGIN TRANSACTION")
     for app_id, game_info in batch:
         game_data = game_info["data"]
         source = game_info["source"]
         if source == "steam_library":
             cache_game_details_with_source(
                 app_id, game_data, source, conn=conn
             )
         else:
             cache_game_details(
                 app_id, game_data, permanent=False, conn=conn
             )
-        conn.commit()
         written_count += 1
+    conn.commit()
```

```diff
 try:
     conn.execute("BEGIN TRANSACTION")
     for app_id, price_info in batch:
         price_data = price_info["data"]
         lookup_method = price_info["method"]
         game_name = price_info.get("game_name")
         cache_itad_price_enhanced(
             app_id,
             price_data,
             lookup_method=lookup_method,
             steam_game_name=game_name,
             permanent=False,
             cache_hours=ITAD_CACHE_TTL,
             conn=conn,
         )
-        conn.commit()
         written_count += 1
+    conn.commit()
```

Also applies to: 549-567
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/populate_prices_optimized.py` around lines 492 - 508, The code currently commits inside the per-row loop (using conn.commit() while iterating over batch items), causing partial persistence if a later row fails; change to commit once per batch instead: begin the transaction with get_db_connection() and conn.execute("BEGIN TRANSACTION"), call cache_game_details_with_source or cache_game_details for each app_id without committing inside the loop, then after the for loop call conn.commit() once and increment written_count only after a successful commit; also ensure exceptions trigger conn.rollback() and that the same change is applied for the other similar block (the block handling lines 549-567) to avoid double-counting and replayed writes.
♻️ Duplicate comments (1)
scripts/populate_prices_optimized.py (1)
518-526: ⚠️ Potential issue | 🟠 Major

Commit the recovered `steam_library` rows too.

Line 526 is still inside the `else`, so the `cache_game_details_with_source(..., conn=conn)` salvage branch never commits after the earlier rollback. Those recovered rows are dropped when the connection closes.

💡 Suggested fix
```diff
 if source == "steam_library":
     cache_game_details_with_source(
         app_id, game_data, source, conn=conn
     )
 else:
     cache_game_details(
         app_id, game_data, permanent=False, conn=conn
     )
-    conn.commit()
+conn.commit()
 written_count += 1
```
Verify each finding against the current code and only fix it if needed. In `@scripts/populate_prices_optimized.py` around lines 518 - 526, The commit currently only runs in the else branch so rows recovered via cache_game_details_with_source(app_id, game_data, source, conn=conn) are not persisted; after calling cache_game_details_with_source(...) you must call conn.commit() (or move the existing conn.commit() out of the else so both branches commit) to ensure recovered steam_library rows are saved—modify the block containing cache_game_details_with_source and cache_game_details so that conn.commit() executes for both paths.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/populate_prices_optimized.py`:
- Around line 528-530: The except block that currently does "except Exception:
conn.rollback(); pass" must not silently swallow errors; change it to "except
Exception as e" where you call conn.rollback(), log the exception and include
identifying context (e.g., the current row id/app_id/price input variables
available in this scope) and the traceback, and record the failed row to a
persistent audit (append to a failures list or write to a CSV/log file) so
partial population is auditable; apply the same change to the other occurrence
around the conn.rollback() at the later block (lines referenced in the review)
instead of using pass.
---
Outside diff comments:
In `@scripts/populate_prices_async.py`:
- Around line 455-472: The loop is committing per-record which breaks the
surrounding batch transaction and allows partial persistence and incorrect
written_count; move the commit out of the for loop so the transaction begun with
conn.execute("BEGIN TRANSACTION") is committed once after the entire batch loop
completes, and ensure you rollback the transaction on exception; update the
blocks that call cache_game_details_with_source(...) and cache_game_details(...)
(where conn.commit() is currently inside the loop) to call conn.commit() after
the loop finishes and add proper error handling to conn.rollback() in the except
path; apply the same change to the other identical block around the
cache_game_details(...) calls at lines 523-541.
- Around line 495-500: The except block that handles individual salvage write
failures should not decrement written_count because the write was never
incremented on failure; remove the line "written_count -= 1" from the except
handling around conn.rollback() and logger.error (the block that catches
Exception as individual_error), and similarly remove the identical compensating
decrement in the other except block around lines 566-571 so written_count only
reflects successful increments performed when writes actually succeed.
In `@scripts/populate_prices_optimized.py`:
- Around line 492-508: The code currently commits inside the per-row loop (using
conn.commit() while iterating over batch items), causing partial persistence if
a later row fails; change to commit once per batch instead: begin the
transaction with get_db_connection() and conn.execute("BEGIN TRANSACTION"), call
cache_game_details_with_source or cache_game_details for each app_id without
committing inside the loop, then after the for loop call conn.commit() once and
increment written_count only after a successful commit; also ensure exceptions
trigger conn.rollback() and that the same change is applied for the other
similar block (the block handling lines 549-567) to avoid double-counting and
replayed writes.
---
Duplicate comments:
In `@scripts/populate_prices_optimized.py`:
- Around line 518-526: The commit currently only runs in the else branch so rows
recovered via cache_game_details_with_source(app_id, game_data, source,
conn=conn) are not persisted; after calling cache_game_details_with_source(...)
you must call conn.commit() (or move the existing conn.commit() out of the else
so both branches commit) to ensure recovered steam_library rows are saved—modify
the block containing cache_game_details_with_source and cache_game_details so
that conn.commit() executes for both paths.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 22af45f7-460e-4129-91f5-7a7154b9c156
📒 Files selected for processing (4)
- scripts/check_price_cache.py
- scripts/populate_prices_async.py
- scripts/populate_prices_optimized.py
- src/familybot/lib/database.py
```python
except Exception:
    conn.rollback()
    pass
```
Don't drop salvage failures silently.

These recovery paths can skip rows without leaving the app id or the error in the logs, so partial cache population becomes impossible to audit.
💡 Suggested fix
```diff
-except Exception:
+except Exception as individual_error:
     conn.rollback()
-    pass
+    logger.error(
+        "Failed to salvage Steam record %s: %s",
+        app_id,
+        individual_error,
+    )
```

```diff
-except Exception:
+except Exception as individual_error:
     conn.rollback()
-    pass
+    logger.error(
+        "Failed to salvage ITAD record %s: %s",
+        app_id,
+        individual_error,
+    )
```

Also applies to: 589-591
🧰 Tools
🪛 Ruff (0.15.6)
[warning] 528-528: Do not catch blind exception: Exception
(BLE001)
[warning] 530-530: Unnecessary pass statement
Remove unnecessary pass
(PIE790)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/populate_prices_optimized.py` around lines 528 - 530, The except
block that currently does "except Exception: conn.rollback(); pass" must not
silently swallow errors; change it to "except Exception as e" where you call
conn.rollback(), log the exception and include identifying context (e.g., the
current row id/app_id/price input variables available in this scope) and the
traceback, and record the failed row to a persistent audit (append to a failures
list or write to a CSV/log file) so partial population is auditable; apply the
same change to the other occurrence around the conn.rollback() at the later
block (lines referenced in the review) instead of using pass.
Current Steam prices were cached permanently, which meant the bot would show stale prices until someone manually ran `--force-refresh`. That's a problem during Steam sales, when prices change daily.

ITAD historical low caching was also inconsistent: the populate scripts used permanent caching while `utils.py` used 14 days. Running `just populate-prices-turbo` gave you a permanent cache, but `!deals` only cached for 14 days.

What changed

- Current Steam prices (`price_overview` in `game_details_cache`): `permanent=False`, so they expire after `GAME_DETAILS_CACHE_TTL` (default: 1 week)
- ITAD historical lows: `itad_prices` config option (default: 336 hours / 14 days)
- Config changes: a `cache_ttl_hours` section in config-template.yml with all cache TTLs documented, and `ITAD_CACHE_TTL` in config.py

Why this approach

Went with Option A from the issue: using `permanent=False` with the existing TTL config. It's the simplest fix, and the 1-week default is reasonable for most cases. During Steam sales, run `--refresh-current` more often.

The alternative (a separate price cache table) would be cleaner but more work for marginal benefit.
Fixes #10
Summary by CodeRabbit
New Features
Improvements