
Fix: Price caching now expires instead of lasting forever#11

Open
SuaveIV wants to merge 4 commits into main from fix/issue-10-cache-ttls

Conversation

@SuaveIV SuaveIV (Owner) commented Mar 25, 2026

Current Steam prices were cached permanently, which meant the bot would show stale prices until someone manually ran --force-refresh. That's a problem during Steam sales when prices change daily.

ITAD historical low caching was also inconsistent - the populate scripts used permanent caching while utils.py used 14 days. Running just populate-prices-turbo gave you permanent cache, but !deals only cached for 14 days.

What changed

Current Steam prices (price_overview in game_details_cache):

  • Now use permanent=False so they expire after GAME_DETAILS_CACHE_TTL (default: 1 week)
  • This affects all populate scripts, steam_helpers, plugin_admin_actions, and admin_commands

ITAD historical lows:

  • Added itad_prices config option (default: 336 hours / 14 days)
  • Standardized across all code paths - no more inconsistency between scripts and runtime
  • Historical lows only go down over time, so longer TTL is fine
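The distinction between permanent and TTL-bound entries comes down to whether an expires_at timestamp is written alongside the payload. A minimal sketch of that idea, using a hypothetical helper (not the project's actual cache_game_details / cache_itad_price_enhanced implementation):

```python
from datetime import datetime, timedelta, timezone

def build_cache_row(payload: dict, permanent: bool = False, cache_hours: int = 336) -> dict:
    """Illustrative: compose a cache row with either no expiry or a TTL-based one."""
    now = datetime.now(timezone.utc)
    return {
        "data": payload,
        "permanent": permanent,
        "cached_at": now.isoformat(),
        # Permanent rows never expire; TTL rows expire cache_hours after caching.
        "expires_at": None if permanent else (now + timedelta(hours=cache_hours)).isoformat(),
    }

# A TTL-bound ITAD row: 336 hours = 14 days
row = build_cache_row({"lowest_price": "4.99"}, permanent=False, cache_hours=336)
```

With permanent=True the expires_at stays None, which is exactly why the old writes never went stale-invalid: readers treated a missing expiry as "always valid".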

Config changes:

  • Added cache_ttl_hours section to config-template.yml with all cache TTLs documented
  • Added ITAD_CACHE_TTL to config.py
  • Updated utils.py to use the configurable value instead of hardcoded 336
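As a sketch, the new section in config-template.yml might look like the following. The itad_prices: 336 default and the one-week game-details default come from this PR; the family-library and wishlist key names and values are placeholders inferred from the summary, not confirmed:

```yaml
# Hypothetical shape of the cache_ttl_hours section (key names partly assumed)
steam_family:
  cache_ttl_hours:
    family_library: 24    # placeholder default
    wishlist: 24          # placeholder default
    game_details: 168     # 1 week (GAME_DETAILS_CACHE_TTL default)
    itad_prices: 336      # 14 days (ITAD_CACHE_TTL default)
```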

Why this approach

Went with Option A from the issue - using permanent=False with existing TTL config. It's the simplest fix and the 1-week default is reasonable for most cases. During Steam sales, run --refresh-current more often.

The alternative (separate price cache table) would be cleaner but more work for marginal benefit.

Fixes #10

Summary by CodeRabbit

  • New Features

    • Configurable cache TTLs for family library, wishlist, game details, and ITAD prices (defaults provided).
    • New cache-inspection script and a convenience task to run it.
  • Improvements

    • Cache semantics updated so game details and price data are stored with time-based TTLs (not permanent) and honor configured TTLs for expiration.

- Changed permanent=True to permanent=False for game_details_cache in all populate scripts
- Changed permanent=True to permanent=False in steam_helpers, plugin_admin_actions, and admin_commands
- Added configurable ITAD cache TTL (itad_prices: 336 hours default) to config-template.yml
- Added ITAD_CACHE_TTL variable to config.py
- Updated utils.py to use configurable ITAD_CACHE_TTL instead of hardcoded 336 hours
- Standardized ITAD caching TTL across all code paths

Resolves issue #10: Inconsistent price caching TTLs and permanent storage of current prices

coderabbitai bot commented Mar 25, 2026

📝 Walkthrough

Walkthrough

Added configurable hour-based TTLs for Steam-family caches and switched multiple cache writes (Steam game details and ITAD prices) from permanent storage to TTL-bound caching; introduced ITAD cache TTL constant, updated DB timestamp formatting, and added tooling to inspect cache tables.

Changes

Cohort / File(s) — Summary

  • Configuration (config-template.yml, src/familybot/config.py): Added steam_family.cache_ttl_hours keys and a new ITAD_CACHE_TTL constant (default 336h) read from steam_family.cache_ttl_hours.itad_prices.
  • Populate scripts (scripts/populate_prices.py, scripts/populate_prices_async.py, scripts/populate_prices_optimized.py): Switched Steam game-details and ITAD enhanced price cache writes from permanent=True to permanent=False; ITAD writes now pass cache_hours=ITAD_CACHE_TTL; added per-record conn.commit()/conn.rollback() adjustments and ensured conn is forwarded in salvage paths.
  • Runtime cache callers (src/familybot/lib/steam_helpers.py, src/familybot/lib/admin_commands.py, src/familybot/lib/plugin_admin_actions.py): Changed cache_game_details(..., permanent=True) to permanent=False in the fetch, wishlist, new-games, and admin caching paths.
  • ITAD utilities (src/familybot/lib/utils.py): Replaced the hardcoded 336h with ITAD_CACHE_TTL for get_lowest_price and prefetch_itad_prices; updated log messages to reflect the configured TTL.
  • Database formatting (src/familybot/lib/database.py): Normalized UTC timestamp serialization to use datetime.isoformat().replace("+00:00", "Z") for cached_at/expires_at across bulk and single cache writers.
  • Inspection tooling & recipes (scripts/check_price_cache.py, justfile): Added scripts/check_price_cache.py to inspect recent rows and counts for game_details_cache and itad_price_cache; added just recipe check-price-cache.
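The timestamp normalization described for database.py can be sketched in isolation; this is an illustrative standalone function, not the project's code:

```python
from datetime import datetime, timezone

def to_utc_z(dt: datetime) -> str:
    """Serialize a datetime as UTC ISO-8601 with a 'Z' suffix instead of '+00:00'."""
    return dt.astimezone(timezone.utc).isoformat().replace("+00:00", "Z")

stamp = to_utc_z(datetime(2026, 3, 25, 12, 0, tzinfo=timezone.utc))
# stamp == "2026-03-25T12:00:00Z"
```

Normalizing to a single suffix matters for the cache because expires_at is compared lexicographically in SQL (expires_at > ?), so mixed "+00:00" and "Z" suffixes would make string comparisons unreliable.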

Sequence Diagram(s)

sequenceDiagram
    participant PopScript as Populate Script
    participant SteamAPI as Steam API
    participant ITADAPI as ITAD API
    participant DB as Cache DB

    PopScript->>SteamAPI: fetch appdetails(app_id)
    SteamAPI-->>PopScript: game_details (includes price_overview)
    PopScript->>DB: cache_game_details(app_id, game_details, permanent=False, cache_hours=GAME_DETAILS_CACHE_TTL)
    PopScript->>ITADAPI: fetch historyLow(app_id/name)
    ITADAPI-->>PopScript: historyLow (price data)
    PopScript->>DB: cache_itad_price_enhanced(app_id, itad_data, permanent=False, cache_hours=ITAD_CACHE_TTL)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I hopped through configs, nudged a TTL rhyme,
No stale discounts trapped in time.
Cache now breathes, expires with grace,
Fresh deals hop in, a lively chase.
🥕✨

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The title 'Fix: Price caching now expires instead of lasting forever' accurately and clearly summarizes the main change: converting permanent cache entries to TTL-based expiration for game details and ITAD prices.
  • Linked Issues Check — ✅ Passed: The PR successfully addresses all coding objectives from issue #10: stopping permanent caching of Steam prices by changing populate scripts and utilities to use permanent=False, standardizing ITAD caching with a configurable TTL (336 hours default), and ensuring consistency across all callers.
  • Out of Scope Changes Check — ✅ Passed: All changes are directly scoped to issue #10 requirements: price cache expiration configuration (config-template.yml, config.py), caching behavior updates across populate/utility scripts, and a new cache inspection script for verification.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 95.83%, which is sufficient; the required threshold is 80.00%.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
scripts/populate_prices_optimized.py (1)

552-559: ⚠️ Potential issue | 🟠 Major

Use ITAD_CACHE_TTL in the optimized ITAD writer.

This call still inherits cache_itad_price_enhanced(..., cache_hours=6), and the fallback block below still writes permanently. The optimized path therefore remains out of sync with the new configurable ITAD TTL.

💡 Proposed fix
-from familybot.config import ITAD_API_KEY, STEAMWORKS_API_KEY  # pylint: disable=wrong-import-position
+from familybot.config import (  # pylint: disable=wrong-import-position
+    ITAD_API_KEY,
+    ITAD_CACHE_TTL,
+    STEAMWORKS_API_KEY,
+)
@@
                     cache_itad_price_enhanced(
                         app_id,
                         price_data,
                         lookup_method=lookup_method,
                         steam_game_name=game_name,
                         permanent=False,
+                        cache_hours=ITAD_CACHE_TTL,
                         conn=conn,
                     )
@@
                         cache_itad_price_enhanced(
                             app_id,
                             price_data,
                             lookup_method=lookup_method,
                             steam_game_name=game_name,
-                            permanent=True,
+                            permanent=False,
+                            cache_hours=ITAD_CACHE_TTL,
                         )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/populate_prices_optimized.py` around lines 552 - 559, The optimized
ITAD writer currently calls cache_itad_price_enhanced(app_id, price_data,
lookup_method=lookup_method, steam_game_name=game_name, permanent=False,
conn=conn) but still relies on the old 6-hour default and the fallback branch
writes permanently; update both the optimized call and the fallback write to use
the configured TTL by passing cache_hours=ITAD_CACHE_TTL to
cache_itad_price_enhanced (instead of relying on permanent/defaults) so the
optimized path and fallback honor the ITAD_CACHE_TTL constant; ensure you
reference cache_itad_price_enhanced and ITAD_CACHE_TTL when making the changes.
scripts/populate_prices.py (1)

443-452: ⚠️ Potential issue | 🟠 Major

Pass ITAD_CACHE_TTL through cache_itad_price_enhanced.

src/familybot/lib/database.py defaults cache_itad_price_enhanced(..., cache_hours=6), so flipping only permanent=False here makes populate runs expire historical lows after 6 hours instead of the new configurable TTL.

💡 Proposed fix
-from familybot.config import ITAD_API_KEY, STEAMWORKS_API_KEY  # pylint: disable=wrong-import-position
+from familybot.config import (  # pylint: disable=wrong-import-position
+    ITAD_API_KEY,
+    ITAD_CACHE_TTL,
+    STEAMWORKS_API_KEY,
+)
@@
                 cache_itad_price_enhanced(
                     app_id,
                     {
                         "lowest_price": str(history_low["amount"]),
                         "lowest_price_formatted": f"${history_low['amount']}",
                         "shop_name": "Historical Low (All Stores)",
                     },
                     lookup_method="appid",
                     permanent=False,
+                    cache_hours=ITAD_CACHE_TTL,
                 )
@@
             cache_itad_price_enhanced(
                 app_id,
                 {
                     "lowest_price": str(history_low["amount"]),
                     "lowest_price_formatted": f"${history_low['amount']}",
                     "shop_name": "Historical Low (All Stores)",
                 },
                 lookup_method="name_search",
                 steam_game_name=game_name,
                 permanent=False,
+                cache_hours=ITAD_CACHE_TTL,
             )

Also applies to: 505-515

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/populate_prices.py` around lines 443 - 452, The call to
cache_itad_price_enhanced is leaving cache_hours at the default 6 hours, causing
historical lows to expire incorrectly; update the calls to pass the configured
ITAD_CACHE_TTL value (e.g., cache_hours=ITAD_CACHE_TTL) into
cache_itad_price_enhanced for the historical-low caches (the call shown and the
similar block around the 505-515 range) so the TTL uses the configurable setting
rather than the hardcoded default.
scripts/populate_prices_async.py (2)

462-489: ⚠️ Potential issue | 🟠 Major

Mirror the TTL behavior in the Steam recovery path.

The transactional write now uses permanent=False, but the salvage path still writes recovered records with permanent=True on Line 488. If a batch rolls back, those price_overview entries become non-expiring again, which undercuts the stale-price fix for exactly the rows you recover.

Suggested fix
                     if source == "steam_library":
                         cache_game_details_with_source(app_id, game_data, source)
                     else:
-                        cache_game_details(app_id, game_data, permanent=True)
+                        cache_game_details(app_id, game_data, permanent=False)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/populate_prices_async.py` around lines 462 - 489, The recovery loop
is writing recovered records with permanent=True and without the DB connection,
which contradicts the transactional path that uses permanent=False and
conn=conn; update the salvage branch so calls to
cache_game_details_with_source(app_id, game_data, source) and
cache_game_details(app_id, game_data, permanent=True) mirror the transactional
behavior by passing the same conn and using permanent=False (i.e., call
cache_game_details_with_source(app_id, game_data, source, conn=conn) and
cache_game_details(app_id, game_data, permanent=False, conn=conn)).

526-555: ⚠️ Potential issue | 🟠 Major

Keep ITAD recovery writes non-permanent too.

This has the same inconsistency as the Steam path: the transaction uses permanent=False, but the per-record fallback still uses permanent=True on Line 554. A batch failure will therefore leave recovered ITAD rows immune to ITAD_CACHE_TTL.

Suggested fix
                         cache_itad_price_enhanced(
                             app_id,
                             price_data,
                             lookup_method=lookup_method,
                             steam_game_name=game_name,
-                            permanent=True,
+                            permanent=False,
                         )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/populate_prices_async.py` around lines 526 - 555, The per-record
fallback in the ITAD batch recovery is incorrectly marking recovered rows
permanent whereas the batch path uses temporary writes; in the exception handler
for the batch (around the cache_itad_price_enhanced calls inside the for app_id,
price_info in batch loop) change the call to cache_itad_price_enhanced to pass
permanent=False (same as the batch path) and preserve the other args (app_id,
price_data, lookup_method, steam_game_name) so recovered records remain subject
to ITAD_CACHE_TTL.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/populate_prices_optimized.py`:
- Around line 504-506: The per-record recovery block still calls
cache_game_details(..., permanent=True) which reintroduces never-expiring Steam
entries; change that call to use permanent=False (and keep conn parameter usage
consistent with the batch path) so recovered writes remain TTL-bound like the
batch transaction and do not create permanent rows.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: cf06c2b9-4097-478c-9868-b09984d08796

📥 Commits

Reviewing files that changed from the base of the PR and between e7c8a04 and 504066e.

📒 Files selected for processing (9)
  • config-template.yml
  • scripts/populate_prices.py
  • scripts/populate_prices_async.py
  • scripts/populate_prices_optimized.py
  • src/familybot/config.py
  • src/familybot/lib/admin_commands.py
  • src/familybot/lib/plugin_admin_actions.py
  • src/familybot/lib/steam_helpers.py
  • src/familybot/lib/utils.py

SuaveIV added 2 commits March 24, 2026 21:53
…ITAD_CACHE_TTL

- Added ITAD_CACHE_TTL import to all three populate scripts
- Fixed recovery blocks in populate_prices_optimized.py to use permanent=False and conn parameter
- Fixed recovery blocks in populate_prices_async.py to use permanent=False, cache_hours=ITAD_CACHE_TTL, and conn parameter
- Added cache_hours=ITAD_CACHE_TTL to cache_itad_price_enhanced calls in populate_prices.py
- Added cache_hours=ITAD_CACHE_TTL to batch_write_itad_data in populate_prices_optimized.py

This ensures:
1. All cache entries use TTL-based expiration consistently
2. Recovery paths match the main transactional paths
3. ITAD cache uses configurable ITAD_CACHE_TTL instead of default 6 hours
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
scripts/populate_prices_optimized.py (1)

556-563: ⚠️ Potential issue | 🟠 Major

Use ITAD_CACHE_TTL on the main ITAD write path too.

Lines 556-563 set permanent=False, but without cache_hours the helper falls back to its 6-hour default. The optimized path will therefore churn ITAD entries far more aggressively than configured, while the salvage path below uses the correct TTL.

💡 Proposed fix
                     cache_itad_price_enhanced(
                         app_id,
                         price_data,
                         lookup_method=lookup_method,
                         steam_game_name=game_name,
                         permanent=False,
+                        cache_hours=ITAD_CACHE_TTL,
                         conn=conn,
                     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/populate_prices_optimized.py` around lines 556 - 563, The call to
cache_itad_price_enhanced in the optimized write path currently sets
permanent=False but omits cache_hours, causing it to use the 6-hour default;
change the call to pass cache_hours=ITAD_CACHE_TTL (the same TTL used by the
salvage path) so cache_itad_price_enhanced(app_id, price_data,
lookup_method=lookup_method, steam_game_name=game_name, permanent=False,
cache_hours=ITAD_CACHE_TTL, conn=conn) to ensure consistent TTL handling.
scripts/populate_prices_async.py (1)

530-537: ⚠️ Potential issue | 🟠 Major

Pass ITAD_CACHE_TTL in the normal ITAD batch write.

Lines 530-537 still rely on cache_itad_price_enhanced's default cache_hours=6, so the primary async path expires historical-low data after 6 hours while the recovery path below uses the configured TTL. That reintroduces the inconsistency this PR is trying to remove.

💡 Proposed fix
                     cache_itad_price_enhanced(
                         app_id,
                         price_data,
                         lookup_method=lookup_method,
                         steam_game_name=game_name,
                         permanent=False,
+                        cache_hours=ITAD_CACHE_TTL,
                         conn=conn,
                     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/populate_prices_async.py` around lines 530 - 537, The normal async
batch call to cache_itad_price_enhanced is relying on its default cache_hours=6;
change that call to pass the configured TTL by adding cache_hours=ITAD_CACHE_TTL
(the same TTL used in the recovery path) so both paths use the same expiry;
update the call site that currently invokes cache_itad_price_enhanced(app_id,
price_data, lookup_method=..., steam_game_name=..., permanent=False, conn=...)
to include cache_hours=ITAD_CACHE_TTL and ensure ITAD_CACHE_TTL is
imported/available in the module.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/check_price_cache.py`:
- Around line 14-138: The check_price_cache function is too long; extract the
three logical blocks into helpers and ensure the DB connection is closed
robustly. Create _print_game_details_cache(cursor, now) containing the
game_details_cache SELECT/print loop, _print_itad_price_cache(cursor, now) for
the itad_price_cache SELECT/print loop, and _print_summary(cursor) for the
summary COUNT queries/prints; then simplify check_price_cache to open conn via
get_db_connection(), obtain cursor, compute now, call those three helpers, and
wrap the cursor/connection usage in a try/finally (or context manager) to
guarantee conn.close() even on error. Ensure helper names match exactly
(_print_game_details_cache, _print_itad_price_cache, _print_summary) so callers
remain clear.
- Around line 14-138: The function check_price_cache currently opens a DB
connection and cursor but calls conn.close() only at the end, which leaks the
connection if any query raises; wrap the main body of check_price_cache in a
try/finally (or use a context manager) so conn.close() always runs, and also
ensure the cursor is closed (cursor.close()) in the finally block; locate the
code in check_price_cache (references: conn, cursor, conn.close()) and
move/ensure cleanup into the finally to guarantee resources are released on
exceptions.
- Line 19: Replace use of timezone.utc with the Python 3.11+ alias datetime.UTC:
change the expression datetime.now(timezone.utc).isoformat() + "Z" to
datetime.now(datetime.UTC).isoformat() + "Z" (and remove any unused timezone
import if applicable) so the code follows the Ruff UP017 recommendation; ensure
the module still imports datetime in the form used (i.e., from datetime import
datetime) or adjust imports accordingly.

In `@scripts/populate_prices_async.py`:
- Around line 486-492: The salvage writes using cache_game_details_with_source
and cache_game_details are being executed with an explicit conn passed in so
they are not auto-committed; after each call you must explicitly commit the
transaction (e.g., call conn.commit()) or update those helper functions to
commit when given a connection; add a conn.commit() (or equivalent commit call)
immediately after each call to cache_game_details_with_source(...) and
cache_game_details(...) in the salvage/rollback recovery paths so each
per-record insert is actually persisted.

In `@scripts/populate_prices_optimized.py`:
- Around line 520-526: The recovery path uses the shared conn and calls
cache_game_details_with_source and cache_game_details after a prior rollback but
never commits, so salvaged rows are lost; after each call that writes via conn
in the rollback recovery flow (e.g., calls to
cache_game_details_with_source(app_id, game_data, source, conn=conn) and
cache_game_details(app_id, game_data, permanent=False, conn=conn)), ensure you
call conn.commit() (or the existing transaction-commit helper) once the salvage
write succeeds (and only rollback on errors) so the salvaged rows are persisted
before closing the connection.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 99718368-568d-476b-985f-279b57a53ac0

📥 Commits

Reviewing files that changed from the base of the PR and between 504066e and 7554395.

📒 Files selected for processing (5)
  • justfile
  • scripts/check_price_cache.py
  • scripts/populate_prices.py
  • scripts/populate_prices_async.py
  • scripts/populate_prices_optimized.py

Comment on lines +14 to +138
def check_price_cache():
    """Check game_details_cache and itad_price_cache tables."""
    conn = get_db_connection()
    cursor = conn.cursor()

    now = datetime.now(timezone.utc).isoformat() + "Z"

    print("=" * 80)
    print("GAME DETAILS CACHE (Steam prices)")
    print("=" * 80)

    cursor.execute(
        """
        SELECT appid, name, permanent, cached_at, expires_at,
               CASE
                   WHEN permanent = 1 THEN 'PERMANENT (never expires)'
                   WHEN expires_at > ? THEN 'VALID'
                   ELSE 'EXPIRED'
               END as status
        FROM game_details_cache
        ORDER BY cached_at DESC
        LIMIT 20
        """,
        (now,),
    )

    rows = cursor.fetchall()
    if not rows:
        print(" No entries found in game_details_cache")
    else:
        print(f" Showing {len(rows)} most recent entries:\n")
        print(
            f" {'AppID':<12} {'Name':<30} {'Permanent':<10} {'Status':<20} {'Expires At'}"
        )
        print(" " + "-" * 95)
        for row in rows:
            appid = row[0]
            name = (row[1] or "Unknown")[:28]
            permanent = "YES" if row[2] else "NO"
            status = row[5]
            expires = row[4] or "Never"
            if expires != "Never":
                expires = expires[:19]
            print(f" {appid:<12} {name:<30} {permanent:<10} {status:<20} {expires}")

    print("\n" + "=" * 80)
    print("ITAD PRICE CACHE (Historical lows)")
    print("=" * 80)

    cursor.execute(
        """
        SELECT appid, lowest_price_formatted, permanent, cached_at, expires_at,
               CASE
                   WHEN permanent = 1 THEN 'PERMANENT (never expires)'
                   WHEN expires_at > ? THEN 'VALID'
                   ELSE 'EXPIRED'
               END as status
        FROM itad_price_cache
        ORDER BY cached_at DESC
        LIMIT 20
        """,
        (now,),
    )

    rows = cursor.fetchall()
    if not rows:
        print(" No entries found in itad_price_cache")
    else:
        print(f" Showing {len(rows)} most recent entries:\n")
        print(
            f" {'AppID':<12} {'Price':<15} {'Permanent':<10} {'Status':<20} {'Expires At'}"
        )
        print(" " + "-" * 80)
        for row in rows:
            appid = row[0]
            price = row[1] or "N/A"
            permanent = "YES" if row[2] else "NO"
            status = row[5]
            expires = row[4] or "Never"
            if expires != "Never":
                expires = expires[:19]
            print(f" {appid:<12} {price:<15} {permanent:<10} {status:<20} {expires}")

    # Summary statistics
    print("\n" + "=" * 80)
    print("SUMMARY")
    print("=" * 80)

    cursor.execute("SELECT COUNT(*) FROM game_details_cache")
    total_game = cursor.fetchone()[0]

    cursor.execute("SELECT COUNT(*) FROM game_details_cache WHERE permanent = 1")
    permanent_game = cursor.fetchone()[0]

    cursor.execute("SELECT COUNT(*) FROM game_details_cache WHERE permanent = 0")
    ttl_game = cursor.fetchone()[0]

    cursor.execute("SELECT COUNT(*) FROM itad_price_cache")
    total_itad = cursor.fetchone()[0]

    cursor.execute("SELECT COUNT(*) FROM itad_price_cache WHERE permanent = 1")
    permanent_itad = cursor.fetchone()[0]

    cursor.execute("SELECT COUNT(*) FROM itad_price_cache WHERE permanent = 0")
    ttl_itad = cursor.fetchone()[0]

    print("\n Game Details Cache:")
    print(f" Total entries: {total_game}")
    print(f" Permanent (never expires): {permanent_game}")
    print(f" TTL-based (will expire): {ttl_game}")

    print("\n ITAD Price Cache:")
    print(f" Total entries: {total_itad}")
    print(f" Permanent (never expires): {permanent_itad}")
    print(f" TTL-based (will expire): {ttl_itad}")

    if permanent_game > 0:
        print(
            f"\n ⚠️ WARNING: {permanent_game} game details entries are still marked as permanent!"
        )
        print(" These will need to be refreshed or will show stale prices.")
    else:
        print("\n ✅ All game details entries use TTL-based expiration.")

    conn.close()

🧹 Nitpick | 🔵 Trivial

Consider refactoring to reduce function length.

The function has 71 statements, exceeding the typical 50-statement guideline (flagged by Ruff PLR0915). For a diagnostic script this is acceptable, but extracting helper functions would improve maintainability.

♻️ Suggested extraction pattern
def _print_game_details_cache(cursor, now):
    """Display game_details_cache entries."""
    # Lines 21-57 extracted here

def _print_itad_price_cache(cursor, now):
    """Display itad_price_cache entries."""
    # Lines 59-95 extracted here

def _print_summary(cursor):
    """Display summary statistics."""
    # Lines 97-136 extracted here

def check_price_cache():
    """Check game_details_cache and itad_price_cache tables."""
    conn = get_db_connection()
    try:
        cursor = conn.cursor()
        now = datetime.now(timezone.utc).isoformat() + "Z"
        _print_game_details_cache(cursor, now)
        _print_itad_price_cache(cursor, now)
        _print_summary(cursor)
    finally:
        conn.close()
🧰 Tools
🪛 Ruff (0.15.6)

[warning] 14-14: Too many statements (71 > 50)

(PLR0915)


[warning] 19-19: Use datetime.UTC alias

Convert to datetime.UTC alias

(UP017)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/check_price_cache.py` around lines 14 - 138, The check_price_cache
function is too long; extract the three logical blocks into helpers and ensure
the DB connection is closed robustly. Create _print_game_details_cache(cursor,
now) containing the game_details_cache SELECT/print loop,
_print_itad_price_cache(cursor, now) for the itad_price_cache SELECT/print loop,
and _print_summary(cursor) for the summary COUNT queries/prints; then simplify
check_price_cache to open conn via get_db_connection(), obtain cursor, compute
now, call those three helpers, and wrap the cursor/connection usage in a
try/finally (or context manager) to guarantee conn.close() even on error. Ensure
helper names match exactly (_print_game_details_cache, _print_itad_price_cache,
_print_summary) so callers remain clear.

⚠️ Potential issue | 🟡 Minor

Wrap database operations in try/finally to prevent connection leaks.

If any query or operation raises an exception, conn.close() on line 138 will be skipped, causing a resource leak. This is especially important for a diagnostic script that might encounter unexpected database states.

🛡️ Proposed fix using try/finally
 def check_price_cache():
     """Check game_details_cache and itad_price_cache tables."""
     conn = get_db_connection()
-    cursor = conn.cursor()
-
-    now = datetime.now(timezone.utc).isoformat() + "Z"
+    try:
+        cursor = conn.cursor()
+        now = datetime.now(timezone.utc).isoformat() + "Z"
         # ... rest of function body indented ...
-    conn.close()
+    finally:
+        conn.close()
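Equivalently, `contextlib.closing` from the standard library gives the same guarantee with less manual nesting. A minimal sketch, assuming a plain sqlite3 connection in place of the project's `get_db_connection()` and an illustrative query:

```python
import sqlite3
from contextlib import closing


def count_cached_game_details(db_path: str) -> int:
    """Query the cache; the connection closes even if the query raises."""
    with closing(sqlite3.connect(db_path)) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS game_details_cache (app_id INTEGER)"
        )
        row = conn.execute("SELECT COUNT(*) FROM game_details_cache").fetchone()
        return row[0]
```

Unlike a bare `with sqlite3.connect(...)` block, which only manages the transaction (commit/rollback), `closing(...)` actually calls `conn.close()` on exit.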

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/check_price_cache.py` around lines 14 - 138, The function
check_price_cache currently opens a DB connection and cursor but calls
conn.close() only at the end, which leaks the connection if any query raises;
wrap the main body of check_price_cache in a try/finally (or use a context
manager) so conn.close() always runs, and also ensure the cursor is closed
(cursor.close()) in the finally block; locate the code in check_price_cache
(references: conn, cursor, conn.close()) and move/ensure cleanup into the
finally to guarantee resources are released on exceptions.

@coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
scripts/populate_prices_async.py (2)

455-472: ⚠️ Potential issue | 🟠 Major

Keep the happy path atomic at the batch level.

These per-record commits defeat the surrounding batch transaction. If a later record fails, the fallback replays rows that were already persisted and written_count can drift upward.

💡 Suggested fix
             try:
                 conn.execute("BEGIN TRANSACTION")
                 for app_id, game_info in batch:
                     game_data = game_info["data"]
                     source = game_info["source"]

                     if source == "steam_library":
                         cache_game_details_with_source(
                             app_id, game_data, source, conn=conn
                         )
                     else:
                         cache_game_details(
                             app_id, game_data, permanent=False, conn=conn
                         )
-                    conn.commit()

                     written_count += 1
+                conn.commit()
             try:
                 conn.execute("BEGIN TRANSACTION")
                 for app_id, price_info in batch:
                     price_data = price_info["data"]
                     lookup_method = price_info["method"]
                     game_name = price_info.get("game_name")

                     cache_itad_price_enhanced(
                         app_id,
                         price_data,
                         lookup_method=lookup_method,
                         steam_game_name=game_name,
                         permanent=False,
                         cache_hours=ITAD_CACHE_TTL,
                         conn=conn,
                     )
-                    conn.commit()
                     written_count += 1
+                conn.commit()

Also applies to: 523-541
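Standalone, the batch-level pattern looks like the sketch below. The table name and schema are illustrative, and the connection is assumed to be opened in autocommit mode (`isolation_level=None`) so the explicit BEGIN/COMMIT pair controls the transaction:

```python
import sqlite3


def write_batch(conn: sqlite3.Connection, batch: list[tuple[int, int]]) -> int:
    """Persist a batch atomically: one commit on success, full rollback on failure."""
    written = 0
    try:
        conn.execute("BEGIN")
        for app_id, price in batch:
            conn.execute(
                "INSERT INTO price_cache (app_id, price) VALUES (?, ?)",
                (app_id, price),
            )
            written += 1
        conn.commit()  # single commit after the whole batch
        return written
    except Exception:
        conn.rollback()  # nothing from this batch persists
        raise
```

A failed row then leaves the table exactly as it was before the batch started, so a row-by-row fallback never replays already-persisted rows and `written_count` never overreports.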

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/populate_prices_async.py` around lines 455 - 472, The loop is
committing per-record which breaks the surrounding batch transaction and allows
partial persistence and incorrect written_count; move the commit out of the for
loop so the transaction begun with conn.execute("BEGIN TRANSACTION") is
committed once after the entire batch loop completes, and ensure you rollback
the transaction on exception; update the blocks that call
cache_game_details_with_source(...) and cache_game_details(...) (where
conn.commit() is currently inside the loop) to call conn.commit() after the loop
finishes and add proper error handling to conn.rollback() in the except path;
apply the same change to the other identical block around the
cache_game_details(...) calls at lines 523-541.

495-500: ⚠️ Potential issue | 🟡 Minor

Remove the compensating decrement on salvage failures.

A failed salvage write was never added to written_count in this path, so subtracting here can make the final totals negative or underreport successful writes from earlier batches.

💡 Suggested fix
                     except Exception as individual_error:
                         conn.rollback()
                         logger.error(
                             f"Failed to write individual Steam record {app_id}: {individual_error}"
                         )
-                        written_count -= 1  # Adjust count for failed individual writes

                     except Exception as individual_error:
                         conn.rollback()
                         logger.error(
                             f"Failed to write individual ITAD record {app_id}: {individual_error}"
                         )
-                        written_count -= 1  # Adjust count for failed individual writes

Also applies to: 566-571
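The underlying rule: increment the counter only after a write succeeds, so a failure needs no compensating arithmetic. A sketch with a hypothetical `write_row` callback standing in for the project's cache helpers:

```python
import logging
import sqlite3

logger = logging.getLogger(__name__)


def salvage_batch(conn, batch, write_row) -> int:
    """Replay a failed batch row by row, counting only confirmed writes."""
    written = 0
    for app_id, payload in batch:
        try:
            write_row(conn, app_id, payload)
            conn.commit()
            written += 1  # only reached when the write succeeded
        except Exception as exc:
            conn.rollback()  # no decrement: this row was never counted
            logger.error("Failed to write individual record %s: %s", app_id, exc)
    return written
```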

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/populate_prices_async.py` around lines 495 - 500, The except block
that handles individual salvage write failures should not decrement
written_count because the write was never incremented on failure; remove the
line "written_count -= 1" from the except handling around conn.rollback() and
logger.error (the block that catches Exception as individual_error), and
similarly remove the identical compensating decrement in the other except block
around lines 566-571 so written_count only reflects successful increments
performed when writes actually succeed.
scripts/populate_prices_optimized.py (1)

492-508: ⚠️ Potential issue | 🟠 Major

Restore batch-level commits on the happy path.

Committing inside these loops makes a later row failure partially persist the batch. The fallback then replays already-written rows, and written_count can overreport the actual number of newly persisted entries.

💡 Suggested fix
             try:
                 conn.execute("BEGIN TRANSACTION")
                 for app_id, game_info in batch:
                     game_data = game_info["data"]
                     source = game_info["source"]

                     if source == "steam_library":
                         cache_game_details_with_source(
                             app_id, game_data, source, conn=conn
                         )
                     else:
                         cache_game_details(
                             app_id, game_data, permanent=False, conn=conn
                         )
-                    conn.commit()
                     written_count += 1
+                conn.commit()
             try:
                 conn.execute("BEGIN TRANSACTION")
                 for app_id, price_info in batch:
                     price_data = price_info["data"]
                     lookup_method = price_info["method"]
                     game_name = price_info.get("game_name")

                     cache_itad_price_enhanced(
                         app_id,
                         price_data,
                         lookup_method=lookup_method,
                         steam_game_name=game_name,
                         permanent=False,
                         cache_hours=ITAD_CACHE_TTL,
                         conn=conn,
                     )
-                    conn.commit()
                     written_count += 1
+                conn.commit()

Also applies to: 549-567

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/populate_prices_optimized.py` around lines 492 - 508, The code
currently commits inside the per-row loop (using conn.commit() while iterating
over batch items), causing partial persistence if a later row fails; change to
commit once per batch instead: begin the transaction with get_db_connection()
and conn.execute("BEGIN TRANSACTION"), call cache_game_details_with_source or
cache_game_details for each app_id without committing inside the loop, then
after the for loop call conn.commit() once and increment written_count only
after a successful commit; also ensure exceptions trigger conn.rollback() and
that the same change is applied for the other similar block (the block handling
lines 549-567) to avoid double-counting and replayed writes.
♻️ Duplicate comments (1)
scripts/populate_prices_optimized.py (1)

518-526: ⚠️ Potential issue | 🟠 Major

Commit the recovered steam_library rows too.

Line 526 is still inside the else, so the cache_game_details_with_source(..., conn=conn) salvage branch never commits after the earlier rollback. Those recovered rows are dropped when the connection closes.

💡 Suggested fix
                         if source == "steam_library":
                             cache_game_details_with_source(
                                 app_id, game_data, source, conn=conn
                             )
                         else:
                             cache_game_details(
                                 app_id, game_data, permanent=False, conn=conn
                             )
-                            conn.commit()
+                        conn.commit()
                         written_count += 1
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/populate_prices_optimized.py` around lines 518 - 526, The commit
currently only runs in the else branch so rows recovered via
cache_game_details_with_source(app_id, game_data, source, conn=conn) are not
persisted; after calling cache_game_details_with_source(...) you must call
conn.commit() (or move the existing conn.commit() out of the else so both
branches commit) to ensure recovered steam_library rows are saved—modify the
block containing cache_game_details_with_source and cache_game_details so that
conn.commit() executes for both paths.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 22af45f7-460e-4129-91f5-7a7154b9c156

📥 Commits

Reviewing files that changed from the base of the PR and between 7554395 and 53ef8dd.

📒 Files selected for processing (4)
  • scripts/check_price_cache.py
  • scripts/populate_prices_async.py
  • scripts/populate_prices_optimized.py
  • src/familybot/lib/database.py

Comment on lines 528 to 530
except Exception:
conn.rollback()
pass

⚠️ Potential issue | 🟠 Major

Don't drop salvage failures silently.

These recovery paths can drop rows without recording the app id or the error in the logs, so a partially populated cache becomes impossible to audit.

💡 Suggested fix
-                    except Exception:
+                    except Exception as individual_error:
                         conn.rollback()
-                        pass
+                        logger.error(
+                            "Failed to salvage Steam record %s: %s",
+                            app_id,
+                            individual_error,
+                        )

-                    except Exception:
+                    except Exception as individual_error:
                         conn.rollback()
-                        pass
+                        logger.error(
+                            "Failed to salvage ITAD record %s: %s",
+                            app_id,
+                            individual_error,
+                        )

Also applies to: 589-591

🧰 Tools
🪛 Ruff (0.15.6)

[warning] 528-528: Do not catch blind exception: Exception

(BLE001)


[warning] 530-530: Unnecessary pass statement

Remove unnecessary pass

(PIE790)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/populate_prices_optimized.py` around lines 528 - 530, The except
block that currently does "except Exception: conn.rollback(); pass" must not
silently swallow errors; change it to "except Exception as e" where you call
conn.rollback(), log the exception and include identifying context (e.g., the
current row id/app_id/price input variables available in this scope) and the
traceback, and record the failed row to a persistent audit (append to a failures
list or write to a CSV/log file) so partial population is auditable; apply the
same change to the other occurrence around the conn.rollback() at the later
block (lines referenced in the review) instead of using pass.


Development

Successfully merging this pull request may close these issues.

Inconsistent price caching TTLs and permanent storage of current prices
