⚡ Bolt: Optimized Blockchain for Field Officer Visits#590
⚡ Bolt: Optimized Blockchain for Field Officer Visits#590RohanExploit wants to merge 2 commits intomainfrom
Conversation
Implemented cryptographically chained blockchain for Field Officer Visit records to ensure immutability and accountability. - Added previous_visit_hash to FieldOfficerVisit model and schema. - Updated generate_visit_hash to support HMAC-SHA256 chaining. - Optimized check-in path using visit_last_hash_cache to eliminate redundant database lookups for the previous hash. - Implemented O(1) single-record blockchain verification endpoint. - Added database migrations and comprehensive integrity tests.
|
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task. |
✅ Deploy Preview for fixmybharat canceled.
|
🙏 Thank you for your contribution, @RohanExploit!PR Details:
Quality Checklist:
Review Process:
Note: The maintainers will monitor code quality and ensure the overall project flow isn't broken. |
📝 WalkthroughWalkthroughThis PR introduces blockchain-style record chaining for field officer visits by adding a Changes
Sequence Diagram(s)sequenceDiagram
participant Client
participant Router as API Router
participant Cache as ThreadSafeCache
participant DB as Database
participant GeofenceService as Geofence Service
Client->>Router: POST /check-in (new visit)
Router->>Cache: Get previous visit hash
alt Cache Hit
Cache-->>Router: Return cached hash + ID
else Cache Miss
Router->>DB: Query latest visit (project id, visit_hash)
DB-->>Router: Return previous hash or empty
end
Router->>GeofenceService: generate_visit_hash(visit_data, prev_hash)
GeofenceService-->>Router: new_visit_hash
Router->>DB: Insert visit with visit_hash, previous_visit_hash
DB-->>Router: Committed visit record
Router->>Cache: Update cache with new_visit_hash, id
Router-->>Client: Check-in response with hashes
sequenceDiagram
participant Client
participant Router as API Router
participant DB as Database
participant GeofenceService as Geofence Service
Client->>Router: GET /field-officer/visit/{id}/blockchain-verify
Router->>DB: Fetch visit (with previous_visit_hash)
DB-->>Router: Visit record
Router->>GeofenceService: verify_visit_integrity(visit_data, stored_hash, prev_hash)
GeofenceService->>GeofenceService: Recompute hash with prev_hash
GeofenceService-->>Router: Computed hash == stored hash?
Router-->>Client: BlockchainVerificationResponse {is_valid, current_hash, computed_hash}
Estimated code review effort🎯 3 (Moderate) | ⏱️ ~25 minutes Possibly related PRs
Suggested labels
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing Touches📝 Generate docstrings
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
There was a problem hiding this comment.
3 issues found across 8 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="backend/routers/field_officer.py">
<violation number="1" location="backend/routers/field_officer.py:104">
P1: Race condition: the read → hash → commit → cache-update sequence is not atomic, so concurrent check-ins (or requests across multiple worker processes) will read the same `prev_hash` and fork the blockchain chain, silently breaking the tamper-evidence guarantee this feature is designed to provide.
To fix this properly, either use a database-level advisory lock / `SELECT ... FOR UPDATE` on the last row, or serialize check-ins through an application-level mutex (noting that an in-process lock only works with a single worker).</violation>
</file>
<file name="tests/test_field_officer_blockchain.py">
<violation number="1" location="tests/test_field_officer_blockchain.py:12">
P1: This fixture uses the shared application engine and drops all tables, which can destroy non-test data when tests run against a real DATABASE_URL. Use an isolated test engine/session instead of `backend.database.engine`.</violation>
<violation number="2" location="tests/test_field_officer_blockchain.py:29">
P2: The test fixture recreates tables but does not clear `visit_last_hash_cache`, so blockchain tests can inherit stale previous hashes from earlier tests and become order-dependent.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
|
|
||
| # Blockchain Chaining Logic | ||
| # Use thread-safe cache to eliminate database lookups for the previous hash | ||
| prev_hash = visit_last_hash_cache.get("last_hash") |
There was a problem hiding this comment.
P1: Race condition: the read → hash → commit → cache-update sequence is not atomic, so concurrent check-ins (or requests across multiple worker processes) will read the same prev_hash and fork the blockchain chain, silently breaking the tamper-evidence guarantee this feature is designed to provide.
To fix this properly, either use a database-level advisory lock / SELECT ... FOR UPDATE on the last row, or serialize check-ins through an application-level mutex (noting that an in-process lock only works with a single worker).
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At backend/routers/field_officer.py, line 104:
<comment>Race condition: the read → hash → commit → cache-update sequence is not atomic, so concurrent check-ins (or requests across multiple worker processes) will read the same `prev_hash` and fork the blockchain chain, silently breaking the tamper-evidence guarantee this feature is designed to provide.
To fix this properly, either use a database-level advisory lock / `SELECT ... FOR UPDATE` on the last row, or serialize check-ins through an application-level mutex (noting that an in-process lock only works with a single worker).</comment>
<file context>
@@ -95,6 +99,22 @@ def officer_check_in(request: OfficerCheckInRequest, db: Session = Depends(get_d
+ # Blockchain Chaining Logic
+ # Use thread-safe cache to eliminate database lookups for the previous hash
+ prev_hash = visit_last_hash_cache.get("last_hash")
+
+ if prev_hash is None:
</file context>
|
|
||
| @pytest.fixture | ||
| def db_session(): | ||
| Base.metadata.create_all(bind=engine) |
There was a problem hiding this comment.
P1: This fixture uses the shared application engine and drops all tables, which can destroy non-test data when tests run against a real DATABASE_URL. Use an isolated test engine/session instead of backend.database.engine.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At tests/test_field_officer_blockchain.py, line 12:
<comment>This fixture uses the shared application engine and drops all tables, which can destroy non-test data when tests run against a real DATABASE_URL. Use an isolated test engine/session instead of `backend.database.engine`.</comment>
<file context>
@@ -0,0 +1,160 @@
+
+@pytest.fixture
+def db_session():
+ Base.metadata.create_all(bind=engine)
+ session = Session(bind=engine)
+
</file context>
|
|
||
| yield session | ||
| session.close() | ||
| Base.metadata.drop_all(bind=engine) |
There was a problem hiding this comment.
P2: The test fixture recreates tables but does not clear visit_last_hash_cache, so blockchain tests can inherit stale previous hashes from earlier tests and become order-dependent.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At tests/test_field_officer_blockchain.py, line 29:
<comment>The test fixture recreates tables but does not clear `visit_last_hash_cache`, so blockchain tests can inherit stale previous hashes from earlier tests and become order-dependent.</comment>
<file context>
@@ -0,0 +1,160 @@
+
+ yield session
+ session.close()
+ Base.metadata.drop_all(bind=engine)
+
+@pytest.fixture
</file context>
There was a problem hiding this comment.
Pull request overview
This PR adds a tamper-evident, cryptographically chained integrity mechanism for FieldOfficerVisit records, extending the existing “blockchain-like” integrity approach used elsewhere in the backend.
Changes:
- Added
previous_visit_hashtoFieldOfficerVisit(model + migration) and exposedvisit_hash/previous_visit_hashin API responses. - Implemented chained hash generation during field-officer check-in, using
ThreadSafeCacheto reduce DB lookups. - Added a new visit integrity verification endpoint plus a dedicated test suite covering chaining and tamper detection.
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 7 comments.
Show a summary per file
| File | Description |
|---|---|
tests/test_field_officer_blockchain.py |
Adds tests for visit chaining and tamper detection via the new API endpoint. |
backend/schemas.py |
Extends FieldOfficerVisitResponse to return visit_hash and previous_visit_hash. |
backend/routers/field_officer.py |
Implements chaining on check-in and adds GET /field-officer/visit/{id}/blockchain-verify. |
backend/models.py |
Adds previous_visit_hash column to FieldOfficerVisit. |
backend/init_db.py |
Adds migration logic to create the new column and index if missing. |
backend/geofencing_service.py |
Updates generate_visit_hash / verify_visit_integrity to include previous-hash chaining. |
backend/cache.py |
Adds visit_last_hash_cache global cache instance. |
.jules/bolt.md |
Documents the rationale/learning around O(1) chaining and cache sizing. |
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
| from backend.main import app | ||
| from backend.database import get_db, Base, engine | ||
| from backend.models import FieldOfficerVisit, Issue | ||
| from datetime import datetime, timezone |
There was a problem hiding this comment.
Unused imports: datetime and timezone are imported but never used in this test module. Removing them will keep the test file clean and avoid lint warnings.
| from datetime import datetime, timezone |
| # Use thread-safe cache to eliminate database lookups for the previous hash | ||
| prev_hash = visit_last_hash_cache.get("last_hash") | ||
|
|
||
| if prev_hash is None: | ||
| # Cache miss: fetch only the last hash and ID from DB | ||
| # Optimization: Use column projection to avoid full model loading | ||
| last_visit = db.query(FieldOfficerVisit.id, FieldOfficerVisit.visit_hash).order_by(FieldOfficerVisit.id.desc()).first() | ||
| if last_visit: | ||
| prev_hash = last_visit[1] or "" | ||
| visit_last_hash_cache.set(data=prev_hash, key="last_hash") | ||
| visit_last_hash_cache.set(data=last_visit[0], key="last_id") | ||
| else: | ||
| prev_hash = "" | ||
| visit_last_hash_cache.set(data=prev_hash, key="last_hash") |
There was a problem hiding this comment.
Chaining via a process-local cache isn’t atomic across concurrent requests: two check-ins can read the same prev_hash before either commits, producing a fork (multiple visits with the same previous_visit_hash). If a strictly linear chain is required, consider determining prev_hash from the DB within the same transaction as the insert (or using a DB-level lock/sequence) rather than relying on the in-memory cache alone.
| # Use thread-safe cache to eliminate database lookups for the previous hash | |
| prev_hash = visit_last_hash_cache.get("last_hash") | |
| if prev_hash is None: | |
| # Cache miss: fetch only the last hash and ID from DB | |
| # Optimization: Use column projection to avoid full model loading | |
| last_visit = db.query(FieldOfficerVisit.id, FieldOfficerVisit.visit_hash).order_by(FieldOfficerVisit.id.desc()).first() | |
| if last_visit: | |
| prev_hash = last_visit[1] or "" | |
| visit_last_hash_cache.set(data=prev_hash, key="last_hash") | |
| visit_last_hash_cache.set(data=last_visit[0], key="last_id") | |
| else: | |
| prev_hash = "" | |
| visit_last_hash_cache.set(data=prev_hash, key="last_hash") | |
| # Determine previous hash from the database within the same transaction | |
| last_visit = ( | |
| db.query(FieldOfficerVisit) | |
| .order_by(FieldOfficerVisit.id.desc()) | |
| .with_for_update() | |
| .first() | |
| ) | |
| if last_visit: | |
| prev_hash = last_visit.visit_hash or "" | |
| else: | |
| prev_hash = "" |
| data_string = ( | ||
| f"{visit_data.get('issue_id')}" | ||
| f"{visit_data.get('officer_email')}" | ||
| f"{visit_data.get('check_in_latitude')}" | ||
| f"{visit_data.get('check_in_longitude')}" |
There was a problem hiding this comment.
generate_visit_hash() constructs data_string by concatenating fields without any delimiter/structured encoding. This is ambiguous (different field combinations can produce the same string), which undermines the integrity guarantee even with HMAC. Use deterministic structured serialization (e.g., JSON with sorted keys) or add unambiguous separators and normalized numeric formatting before hashing.
| if last_visit: | ||
| prev_hash = last_visit[1] or "" | ||
| visit_last_hash_cache.set(data=prev_hash, key="last_hash") | ||
| visit_last_hash_cache.set(data=last_visit[0], key="last_id") |
There was a problem hiding this comment.
last_id is cached here but never read anywhere, so it’s currently dead state. Either remove it (and reduce cache max_size), or use it to validate cache freshness against the DB before chaining (see the pattern in backend/grievance_service.py where cached last_id/hash are compared to the DB tail to avoid chaining to stale values).
| visit_last_hash_cache.set(data=last_visit[0], key="last_id") |
| from backend.cache import visit_last_hash_cache | ||
| from backend.schemas import BlockchainVerificationResponse | ||
|
|
There was a problem hiding this comment.
Duplicate import: BlockchainVerificationResponse is imported separately from backend.schemas even though this file already has a from backend.schemas import (...) block above. Please merge into a single import to avoid duplication.
| """ | ||
| Verify the cryptographic integrity of a field officer visit using O(1) blockchain verification. | ||
|
|
||
| Checks if the stored visit_hash correctly seals the record data and links to the previous record. | ||
| """ |
There was a problem hiding this comment.
The endpoint docstring claims it verifies that the visit "links to the previous record", but the implementation only recomputes the hash using the stored previous_visit_hash value; it does not validate that previous_visit_hash matches the actual previous visit’s visit_hash. If link validation is intended, fetch the previous visit (by ID/order) and compare its visit_hash to previous_visit_hash (or adjust the docstring/message to describe what is actually verified).
| @router.get("/field-officer/visit/{visit_id}/blockchain-verify", response_model=BlockchainVerificationResponse) | ||
| def verify_visit_blockchain(visit_id: int, db: Session = Depends(get_db)): |
There was a problem hiding this comment.
BlockchainVerificationResponse is reused here for visit verification, but its field descriptions are issue-specific (e.g., "issue integrity" / "previous issue's hash"), which will make the OpenAPI schema misleading for this route. Consider adding a visit-specific response model or making the existing response model descriptions generic.
There was a problem hiding this comment.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
backend/routers/field_officer.py (1)
102-156:⚠️ Potential issue | 🔴 CriticalA worker-local cache will fork the chain under load.
This code treats
visit_last_hash_cacheas the source of truth forprev_hash, but the cache is per process and the read → insert → cache-update sequence is not atomic. Two concurrent check-ins, or just sequential requests routed to different workers with stale caches, can both link to the same predecessor and create sibling branches. After that, the visit chain is no longer linear even though each row still verifies in isolation. The chain head needs to be advanced inside a DB-serialized transaction; the cache can only be a hint.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routers/field_officer.py` around lines 102 - 156, The chain can fork because you read prev_hash from the worker-local visit_last_hash_cache and then insert later without any DB-level serialization; fix by moving the authoritative prev_hash read and the insert into a DB-serialized transaction/row-lock so only one worker advances the chain head: inside a transaction (use your DB session transaction context) query the latest FieldOfficerVisit row with a SELECT ... FOR UPDATE (or equivalent SQLAlchemy locking option) to obtain the canonical prev_hash, compute visit_hash via generate_visit_hash(prev_hash=...), create and add the new FieldOfficerVisit (new_visit) and commit the transaction, then update the worker cache (visit_last_hash_cache.set) only after successful commit so the cache stays a hint rather than the source of truth.backend/geofencing_service.py (1)
93-125:⚠️ Potential issue | 🟠 MajorSeal a stable, complete visit payload.
visit_notesis part of the HMAC input here, butbackend/routers/field_officer.pylater appends check-out notes into that same column during a normal check-out. That makes a legitimate checked-out visit fail verification. Meanwhile fields likegrievance_id,officer_name, andofficer_departmentare outside the hash entirely, so those edits would not be detected. Hash one canonical immutable payload, or move mutable check-out data out of the sealed fields / finalize the seal after check-out.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/geofencing_service.py` around lines 93 - 125, The current generate_visit_hash includes mutable fields like visit_notes (which field_officer.py later appends to on check-out) and omits important identifiers (grievance_id, officer_name, officer_department), causing valid updates to break verification; fix by defining and hashing a single canonical, immutable payload inside generate_visit_hash (include grievance_id, officer_email, officer_name, officer_department, check_in_time, check_in_latitude, check_in_longitude and prev_hash) and exclude/omit mutable check-out fields (visit_notes/checkout_notes) or alternatively provide a separate finalize_visit_hash/on_checkout function that re-seals with final check_out_time/location if you intend to seal after checkout; update references in backend/routers/field_officer.py to use the new canonical sealing behavior or call the finalize routine at check-out.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/init_db.py`:
- Around line 202-207: The migration that adds previous_visit_hash is not being
run at startup, causing FieldOfficerVisit.previous_visit_hash access in
routers/field_officer.py to fail on upgraded DBs; wire the migration runner into
app startup by ensuring migrate_db() (from backend/init_db.py) is invoked during
application bootstrap in backend/main.py (e.g., in the FastAPI startup event or
where DB/init functions are called) so the ALTER TABLE and index creation run
before the app serves requests that import/read
FieldOfficerVisit.previous_visit_hash.
In `@tests/test_field_officer_blockchain.py`:
- Around line 10-36: The test fixtures recreate the DB but do not clear the
singleton cache visit_last_hash_cache, causing cross-test pollution; update the
db_session or client fixture to reset visit_last_hash_cache at setup and
teardown (e.g., clear or reinitialize the cache object) so each test starts with
a fresh visit_last_hash_cache; target the db_session fixture (and/or client
teardown) where Base.metadata.create_all/drop_all and session creation/close
occur and add a call to clear the visit_last_hash_cache before yielding and
after cleanup.
---
Outside diff comments:
In `@backend/geofencing_service.py`:
- Around line 93-125: The current generate_visit_hash includes mutable fields
like visit_notes (which field_officer.py later appends to on check-out) and
omits important identifiers (grievance_id, officer_name, officer_department),
causing valid updates to break verification; fix by defining and hashing a
single canonical, immutable payload inside generate_visit_hash (include
grievance_id, officer_email, officer_name, officer_department, check_in_time,
check_in_latitude, check_in_longitude and prev_hash) and exclude/omit mutable
check-out fields (visit_notes/checkout_notes) or alternatively provide a
separate finalize_visit_hash/on_checkout function that re-seals with final
check_out_time/location if you intend to seal after checkout; update references
in backend/routers/field_officer.py to use the new canonical sealing behavior or
call the finalize routine at check-out.
In `@backend/routers/field_officer.py`:
- Around line 102-156: The chain can fork because you read prev_hash from the
worker-local visit_last_hash_cache and then insert later without any DB-level
serialization; fix by moving the authoritative prev_hash read and the insert
into a DB-serialized transaction/row-lock so only one worker advances the chain
head: inside a transaction (use your DB session transaction context) query the
latest FieldOfficerVisit row with a SELECT ... FOR UPDATE (or equivalent
SQLAlchemy locking option) to obtain the canonical prev_hash, compute visit_hash
via generate_visit_hash(prev_hash=...), create and add the new FieldOfficerVisit
(new_visit) and commit the transaction, then update the worker cache
(visit_last_hash_cache.set) only after successful commit so the cache stays a
hint rather than the source of truth.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2206a3e7-898f-44c9-bb33-79928331668a
📒 Files selected for processing (8)
.jules/bolt.mdbackend/cache.pybackend/geofencing_service.pybackend/init_db.pybackend/models.pybackend/routers/field_officer.pybackend/schemas.pytests/test_field_officer_blockchain.py
| if not column_exists("field_officer_visits", "previous_visit_hash"): | ||
| conn.execute(text("ALTER TABLE field_officer_visits ADD COLUMN previous_visit_hash VARCHAR")) | ||
| logger.info("Added previous_visit_hash column to field_officer_visits") | ||
|
|
||
| if not index_exists("field_officer_visits", "ix_field_officer_visits_previous_visit_hash"): | ||
| conn.execute(text("CREATE INDEX IF NOT EXISTS ix_field_officer_visits_previous_visit_hash ON field_officer_visits (previous_visit_hash)")) |
There was a problem hiding this comment.
This migration is not wired into the running app.
backend/main.py currently skips migrate_db() during startup, while backend/routers/field_officer.py already reads FieldOfficerVisit.previous_visit_hash on live requests. On an upgraded database, the column added here never gets created and the first check-in/verify call will fail with a missing-column error. The current tests will not catch this because they create the latest schema directly with Base.metadata.create_all().
Minimal startup fix outside this file
- # await run_in_threadpool(migrate_db)
+ await run_in_threadpool(migrate_db)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/init_db.py` around lines 202 - 207, The migration that adds
previous_visit_hash is not being run at startup, causing
FieldOfficerVisit.previous_visit_hash access in routers/field_officer.py to fail
on upgraded DBs; wire the migration runner into app startup by ensuring
migrate_db() (from backend/init_db.py) is invoked during application bootstrap
in backend/main.py (e.g., in the FastAPI startup event or where DB/init
functions are called) so the ALTER TABLE and index creation run before the app
serves requests that import/read FieldOfficerVisit.previous_visit_hash.
| @pytest.fixture | ||
| def db_session(): | ||
| Base.metadata.create_all(bind=engine) | ||
| session = Session(bind=engine) | ||
|
|
||
| # Create a dummy issue for testing | ||
| issue = Issue( | ||
| reference_id=str(uuid.uuid4()), | ||
| description="Test issue for field officer visit", | ||
| category="Road", | ||
| status="open", | ||
| latitude=19.0760, | ||
| longitude=72.8777 | ||
| ) | ||
| session.add(issue) | ||
| session.commit() | ||
|
|
||
| yield session | ||
| session.close() | ||
| Base.metadata.drop_all(bind=engine) | ||
|
|
||
| @pytest.fixture | ||
| def client(db_session): | ||
| app.dependency_overrides[get_db] = lambda: db_session | ||
| with TestClient(app) as c: | ||
| yield c | ||
| app.dependency_overrides = {} |
There was a problem hiding this comment.
Reset the singleton cache in the test fixture.
The schema is recreated per test, but visit_last_hash_cache survives across test functions. That lets a “first” visit in one test chain to a hash created in an earlier test, which makes the suite order-dependent and hides root-record behavior. Clear the cache in setup/teardown alongside the DB reset.
Suggested fixture hardening
from backend.database import get_db, Base, engine
+from backend.cache import visit_last_hash_cache
from backend.models import FieldOfficerVisit, Issue
`@pytest.fixture`
def db_session():
+ visit_last_hash_cache.clear()
Base.metadata.create_all(bind=engine)
session = Session(bind=engine)
@@
yield session
session.close()
+ visit_last_hash_cache.clear()
Base.metadata.drop_all(bind=engine)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_field_officer_blockchain.py` around lines 10 - 36, The test
fixtures recreate the DB but do not clear the singleton cache
visit_last_hash_cache, causing cross-test pollution; update the db_session or
client fixture to reset visit_last_hash_cache at setup and teardown (e.g., clear
or reinitialize the cache object) so each test starts with a fresh
visit_last_hash_cache; target the db_session fixture (and/or client teardown)
where Base.metadata.create_all/drop_all and session creation/close occur and add
a call to clear the visit_last_hash_cache before yielding and after cleanup.
Implemented a cryptographically chained blockchain for field officer visit records with performance optimizations. - Added `previous_visit_hash` to `FieldOfficerVisit` model and schemas. - Updated hashing logic in `geofencing_service.py` to support chaining. - Optimized `officer_check_in` path using `visit_last_hash_cache` (max_size=2) to achieve O(1) creation by eliminating redundant database lookups. - Refactored `blockchain-verify` endpoint to use shared verification logic and support O(1) single-record integrity checks. - Added database migrations and comprehensive tests in `tests/test_field_officer_blockchain.py`.
💡 What: Implemented a cryptographically chained blockchain system for
FieldOfficerVisitrecords.🎯 Why: Ensures that all field officer visits are tamper-evident and linked in a secure chain, enhancing government accountability.
📊 Impact: Leverages
ThreadSafeCacheto perform O(1) chaining without redundant database lookups, ensuring zero performance regression during high-frequency check-ins.🔬 Measurement: Verified with new test suite
tests/test_field_officer_blockchain.pyand existing blockchain tests.PR created automatically by Jules for task 1185988817342450186 started by @RohanExploit
Summary by cubic
Adds a cryptographically chained blockchain to
FieldOfficerVisitfor tamper-evident logs with O(1) creation and single-record verification. Adds an integrity endpoint and usesvisit_last_hash_cacheto avoid DB roundtrips.New Features
visit_hashandprevious_visit_hash.visit_last_hash_cache(cache miss uses a projected last-id/hash query); responses include both hashes.Migration
previous_visit_hashcolumn with index; applied automatically byinit_dbon startup.Written for commit 1200676. Summary will update on new commits.
Summary by CodeRabbit
New Features
GET /field-officer/visit/{visit_id}/blockchain-verifyto verify visit integrity.Documentation
Tests