Small security flags and linter feedback #2
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
📝 Walkthrough

This pull request includes refactoring and structural updates across multiple modules: database migration styling and assertions, configuration settings instantiation, conditional migration execution within the FastAPI lifespan context, teams repository pattern updates, and workspaces authentication parameter changes alongside import reorganization.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
api/main.py (3)
96-102: ⚠️ Potential issue | 🟡 Minor

`return` after `raise` is unreachable dead code (lines 102 and 111). `raise` unconditionally transfers control; the subsequent `return` statements are never executed.

🐛 Proposed fix

```diff
 if not current_user.isWorkspaceContributor(workspace_id):
     raise HTTPException(
         status_code=status.HTTP_401_UNAUTHORIZED,
         detail="Invalid authentication credentials",
         headers={"WWW-Authenticate": "Bearer"},
     )
-    return
 raise HTTPException(
     status_code=status.HTTP_401_UNAUTHORIZED,
     detail="You must set your workspace in the X-Workspace header to access OSM methods.",
 )
-return
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/main.py` around lines 96 - 102, Remove the unreachable `return` statements that follow `raise HTTPException` in the authentication checks: locate the blocks that call `current_user.isWorkspaceContributor(workspace_id)` (and any similar checks using `current_user.isWorkspaceOwner`/contributors) and delete the `return` lines immediately after the `raise HTTPException(...)` so control flow is correct and there is no dead code left after the exception is thrown.
116-131: ⚠️ Potential issue | 🔴 Critical

`httpx.AsyncClient` is never closed — resource leak. The client is constructed with `httpx.AsyncClient(...)` but is never closed; on each proxy request a new client is created and leaked. Use it as an async context manager.

Additionally, `request.headers.get("Authorization")` returns `str | None`. If the header is absent, `None` is appended as the header value, which will raise a `TypeError` inside httpx when it tries to encode the header.

🐛 Proposed fix

```diff
-    url = httpx.URL(
-        path=request.url.path.strip(), query=request.url.query.encode("utf-8")
-    )
-    client = httpx.AsyncClient(base_url=settings.WS_OSM_HOST)
-
-    new_headers = list()
-    new_headers.append(
-        (bytes("Authorization", "utf-8"), request.headers.get("Authorization"))
-    )
-
-    if authorizedWorkspace is not None:
-        new_headers.append(
-            (bytes("X-Workspace", "utf-8"), bytes(str(authorizedWorkspace.id), "utf-8"))
-        )
-    new_headers.append((bytes("Host", "utf-8"), bytes(client.base_url.host, "utf-8")))
-
-    rp_req = client.build_request(
-        request.method, url, headers=new_headers, content=await request.body()
-    )
-
-    rp_resp = await client.send(rp_req, stream=True)
+    url = httpx.URL(
+        path=request.url.path.strip(), query=request.url.query.encode("utf-8")
+    )
+    auth_header = request.headers.get("Authorization")
+    new_headers = []
+    if auth_header is not None:
+        new_headers.append((b"Authorization", auth_header.encode("utf-8")))
+
+    async with httpx.AsyncClient(base_url=settings.WS_OSM_HOST) as client:
+        if authorizedWorkspace is not None:
+            new_headers.append(
+                (b"X-Workspace", str(authorizedWorkspace.id).encode("utf-8"))
+            )
+        new_headers.append((b"Host", client.base_url.host.encode("utf-8")))
+
+        rp_req = client.build_request(
+            request.method, url, headers=new_headers, content=await request.body()
+        )
+        rp_resp = await client.send(rp_req, stream=True)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/main.py` around lines 116 - 131, Replace the unclosed httpx.AsyncClient usage with an async context manager (use "async with httpx.AsyncClient(...) as client:"), move client.build_request and rp_req creation inside that block so the client is closed after use; when building new_headers only append the Authorization header if request.headers.get("Authorization") is not None (avoid appending None), keep the X-Workspace logic for authorizedWorkspace and ensure header values are converted to bytes consistently (e.g., bytes(str(...), "utf-8")) and retain Host as bytes(client.base_url.host, "utf-8").
91-91: ⚠️ Potential issue | 🟠 Major

Remove dead code at lines 123-126 or implement the missing `authorizedWorkspace` assignment. `authorizedWorkspace` is initialized to `None` at line 91 and never assigned elsewhere in the function, so the conditional block checking `if authorizedWorkspace is not None:` at lines 123-126 is unreachable dead code.

While the code validates the workspace ID from the `X-Workspace` header (line 94), it never retrieves the corresponding workspace object from the repository to populate `authorizedWorkspace`. Either remove the dead code block, or implement the missing assignment using the repository (e.g., `authorizedWorkspace = repository.get_workspace(workspace_id)`) after the validation check.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/main.py` at line 91, The variable authorizedWorkspace is set to None and never assigned, leaving the subsequent conditional checking if authorizedWorkspace is not None as dead code; fix by either removing that unreachable block or assign the workspace after validating workspace_id—e.g., after you parse/validate the X-Workspace header, call repository.get_workspace(workspace_id) and set authorizedWorkspace, then handle the case where repository returns None (deny/return 403 or raise as appropriate); update any logic that depended on authorizedWorkspace accordingly and ensure the authorization branch is reachable.

alembic_task/env.py (1)
19-29: ⚠️ Potential issue | 🟠 Major

Pipeline failure: cross-DB model auto-import causes a `workspaces_long_quests` table mismatch. The env.py auto-import loop (lines 19-29) imports all models from `api/src/`, including `WorkspaceLongQuest` (`__tablename__ = "workspaces_long_quests"`) and other models that belong exclusively to the OSM database. When Alembic runs online migrations against the task database, it encounters models it has no migration for, producing the pipeline error:

relation "workspaces_long_quests" does not exist

The same auto-import pattern exists in `alembic_osm/env.py`, so the same risk applies in reverse (task-only tables being inspected in the OSM context). Consider one of these mitigations:

- Place OSM-only and task-only models in separate subpackages and scope each env.py's glob accordingly.
- Use explicit `import` statements per env instead of the catch-all glob.
- Annotate models with a `__bind_key__` or equivalent flag and filter during auto-import.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@alembic_task/env.py` around lines 19-29, The auto-import loop using src_path.rglob(...) and importlib.import_module(module_path) pulls in all models (including WorkspaceLongQuest with __tablename__ = "workspaces_long_quests") causing cross-DB inspection; restrict imports so alembic_task only loads task-bound models by scoping the glob to a task-only subpackage or replacing the catch-all loop with explicit imports, or filter modules after import by a marker (e.g., check for a __bind_key__ or module/package name) and skip modules that belong to the OSM DB; update the import loop (module_path / importlib.import_module usage) to only import or register task-bound models so workspaces_long_quests is not seen by the task migration run.
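The marker-based mitigation can be sketched as a small helper. This is a sketch only, assuming a `__bind_key__` convention that does not exist in the current codebase; to avoid registering OSM tables in the shared metadata at all, it checks the marker textually before importing rather than after:

```python
import importlib
from pathlib import Path


def import_task_models(src_path: Path) -> list[str]:
    """Import model modules under src_path, keeping only those that declare
    a hypothetical __bind_key__ = "task" marker. Modules are screened with a
    cheap textual check *before* import, so OSM-only tables never get
    registered against the task metadata."""
    imported = []
    for py_file in sorted(src_path.rglob("*.py")):
        if py_file.name == "__init__.py":
            continue
        if '__bind_key__ = "task"' not in py_file.read_text():
            continue  # skip OSM-only (or unmarked) modules entirely
        # Build a dotted module path relative to the project root,
        # mirroring the glob-based loop the review describes.
        rel = py_file.relative_to(src_path.parent.parent)
        module_path = ".".join(rel.with_suffix("").parts)
        importlib.import_module(module_path)
        imported.append(module_path)
    return imported
```

`alembic_osm/env.py` would use the same helper with an `"osm"` marker; the textual pre-import check is deliberately crude — a shared registry or per-package layout is the more robust variant the review also suggests.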
api/core/security.py (2)

190-194: ⚠️ Potential issue | 🟠 Major

Synchronous `requests.get()` blocks the async event loop. This is a blocking HTTP call on the FastAPI event loop thread. Under concurrent load, a slow TDEI backend response will stall all other requests. The token cache reduces frequency but doesn't eliminate the risk — cache misses (new tokens, cache evictions) will still block.

Consider using `httpx.AsyncClient` or wrapping the call with `asyncio.to_thread()`.

Option A: asyncio.to_thread (minimal change)

```diff
+import asyncio
 ...
-    response = requests.get(authorizationUrl, headers=headers)
+    response = await asyncio.to_thread(requests.get, authorizationUrl, headers=headers)
```

Option B: httpx.AsyncClient (preferred)

```diff
+import httpx
 ...
-    response = requests.get(authorizationUrl, headers=headers)
+    async with httpx.AsyncClient() as client:
+        response = await client.get(authorizationUrl, headers=headers)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/core/security.py` around lines 190 - 194, The code uses blocking requests.get(authorizationUrl, headers=headers) which will block the FastAPI event loop; replace it with an async call: either (preferred) import httpx, make the surrounding function async and use "async with httpx.AsyncClient() as client: response = await client.get(authorizationUrl, headers=headers)" and keep the same status_code check/raise of credentials_exception, or (minimal) wrap the existing call with asyncio.to_thread: "response = await asyncio.to_thread(requests.get, authorizationUrl, headers=headers)". Also add the necessary imports (httpx or asyncio) and ensure the caller is awaited if you convert the function to async.
196-200: ⚠️ Potential issue | 🟡 Minor

Missing error handling for non-JSON or unexpected response shapes from TDEI. The `json.loads` is guarded by a `try/except JSONDecodeError`, but subsequent code (lines 209-216) assumes the response is a list of dicts with specific keys (`tdei_project_group_id`, `project_group_name`, `roles`). A malformed but valid-JSON response (e.g., `{}` or `null`) would raise an unhandled `TypeError`/`KeyError`, surfacing as a 500 to the caller instead of a 401.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/core/security.py` around lines 196 - 200, Validate the parsed JSON before using it: after loading response.text into j, ensure j is a list and each item is a dict containing the keys 'tdei_project_group_id', 'project_group_name', and 'roles' with the expected types (e.g., str for IDs/names and iterable for roles); if j is not a list or any item is malformed, raise credentials_exception. Wrap the access/iteration logic that assumes these keys (the code that consumes j) in a simple guard or try/except catching TypeError/KeyError/ValueError and convert those to credentials_exception so malformed-but-valid-JSON responses (e.g., null or {}) produce a 401 instead of a 500; reference the variables response, content, j and credentials_exception when making the changes.
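The shape check the prompt describes can be isolated in a small pure function. A minimal sketch — `ValueError` stands in for `credentials_exception`, and the response schema is assumed from the key names the review lists:

```python
REQUIRED_KEYS = {"tdei_project_group_id", "project_group_name", "roles"}


def validate_project_groups(parsed):
    """Return parsed unchanged if it is a list of well-formed project-group
    dicts; otherwise raise ValueError (mapped to a 401 at the call site)."""
    if not isinstance(parsed, list):
        raise ValueError("expected a JSON array of project groups")
    for item in parsed:
        if not isinstance(item, dict) or not REQUIRED_KEYS <= item.keys():
            raise ValueError("malformed project group entry")
        if not isinstance(item["roles"], list):
            raise ValueError("roles must be a list")
    return parsed
```

Calling this right after `json.loads` turns `null`, `{}`, or a list of malformed entries into the same credentials failure as a JSON decode error, instead of a 500.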
🧹 Nitpick comments (4)
api/main.py (1)
73-76: Consider compiling the whitelist patterns once for efficiency. The patterns are re-compiled on every request inside the `any(re.search(...) for ...)` call. Pre-compile them with `re.compile` at module level to avoid repeated compilation overhead on the hot proxy path.

♻️ Proposed refactor

```diff
-AUTH_WHITELIST_PATHS = [
-    r"^/api/0\.6/user/",  # used during authentication
-    r"^/api/0\.6/workspaces/[0-9]+/bbox\.json$",  # used to get workspace bbox without workspace header, to be removed
-]
+AUTH_WHITELIST_PATTERNS = [
+    re.compile(r"^/api/0\.6/user/"),  # used during authentication
+    re.compile(r"^/api/0\.6/workspaces/[0-9]+/bbox\.json$"),  # used to get workspace bbox without workspace header, to be removed
+]
```

Then at the call site:

```diff
-    if not any(
-        re.search(pattern, request.url.path) for pattern in AUTH_WHITELIST_PATHS
-    ):
+    if not any(
+        pattern.search(request.url.path) for pattern in AUTH_WHITELIST_PATTERNS
+    ):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/main.py` around lines 73-76, AUTH_WHITELIST_PATHS is a list of raw regex strings that are being recompiled on every request; fix by creating compiled regex objects at module import (e.g. define AUTH_WHITELIST_REGEXES = [re.compile(p) for p in AUTH_WHITELIST_PATHS]) and update the request-check site (where any(re.search(... for ...)) is used) to iterate over AUTH_WHITELIST_REGEXES and call regex.search(path) instead of re.search with strings so the patterns are compiled once and reused.

api/core/security.py (2)
157-161: `PyJWKClient` is re-instantiated on every uncached call, defeating its internal JWKS key cache. `PyJWKClient` maintains an internal cache of fetched signing keys, but since a new instance is created per invocation, that cache is never reused. Additionally, `get_signing_key_from_jwt` performs a synchronous HTTP fetch to the OIDC certs endpoint, which blocks the async event loop.

Proposed fix: hoist to module-level singleton

```diff
+# Module-level JWKS client (reuses internal key cache across calls)
+_jwks_client = jwt.PyJWKClient(
+    f"{settings.TDEI_OIDC_URL}realms/{settings.TDEI_OIDC_REALM}/protocol/openid-connect/certs"
+)
+
 async def _validate_token_uncached(
     token: str,
     osm_db_session: AsyncSession,
     task_db_session: AsyncSession,
 ) -> UserInfo:
     logger.info("Starting validation of token")
     ...
-    jwks_client = jwt.PyJWKClient(
-        f"{settings.TDEI_OIDC_URL}realms/{settings.TDEI_OIDC_REALM}/protocol/openid-connect/certs"
-    )
-    signing_key = jwks_client.get_signing_key_from_jwt(token)
+    signing_key = _jwks_client.get_signing_key_from_jwt(token)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/core/security.py` around lines 157 - 161, The code instantiates jwt.PyJWKClient inside the request flow (jwks_client) causing its internal JWKS cache to be lost and get_signing_key_from_jwt(token) to trigger synchronous HTTP calls on each invocation; fix by creating a module-level singleton PyJWKClient (e.g., JWKS_CLIENT or jwks_client_singleton) initialized once at import time using f"{settings.TDEI_OIDC_URL}realms/{settings.TDEI_OIDC_REALM}/protocol/openid-connect/certs" and update the function that calls get_signing_key_from_jwt(token) to use that singleton instead of re-instantiating, so the client’s cache is reused and blocking fetches are minimized.
122-141: Token caching approach looks good overall. The cache-then-validate pattern is sound for reducing repeated validation overhead. A couple of minor observations:

- Log level: Line 135 logs at `INFO` on every cache hit. Under steady-state traffic this will be very noisy. Consider `DEBUG`.
- Revoked tokens: Cached tokens remain valid for up to 1 hour even if revoked server-side. This is a known trade-off documented in the docstring and is acceptable, but worth keeping in mind for security-sensitive flows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/core/security.py` around lines 122-141, The validate_token function currently logs cache hits with logger.info which will be noisy; change the cache-hit log in validate_token (where it checks _token_cache and currently calls logger.info("Token validation cache hit")) to logger.debug("Token validation cache hit") so routine traffic doesn't spam INFO logs; keep the existing _token_cache behavior (not changing the caching semantics for revoked tokens) but you may add a short inline comment near _token_cache usage to note the accepted one-hour revocation window if desired.
api/src/teams/repository.py (1)

68-69: `assert` is stripped in optimized mode (`python -O`); consider a proper guard. If the application is ever run with `-O`, this assertion silently disappears and `team.id` could be `None`, propagating an invalid value to callers. A minor risk since most deployments don't use `-O`, but worth noting.

Safer alternative

```diff
-    assert team.id is not None
-    return team.id
+    if team.id is None:
+        raise RuntimeError("Team ID was not assigned after commit")
+    return team.id
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/src/teams/repository.py` around lines 68 - 69, Replace the runtime-only assert with an explicit guard: check if team.id is None and raise a clear exception (e.g., ValueError or RuntimeError) with a descriptive message instead of relying on assert; update the code around the existing references to team.id (the lines with "assert team.id is not None" and "return team.id") so callers never receive a None id even when Python is run with -O.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@alembic_osm/versions/9221408912dd_add_user_role_table.py`:
- Around line 47-61: The migration and ORM disagree: the Alembic migration
creates a column named user_auth_uid of type sa.Uuid(), while the SQLModel
WorkspaceUserRole declares auth_user_uid: str (no sa_column override), causing a
name/type mismatch and runtime errors; fix by making the physical table match
the model or vice‑versa — either (A) update the migration to rename
user_auth_uid -> auth_user_uid and change sa.Uuid() to sa.String() (and keep the
ForeignKeyConstraint(["auth_user_uid"], ["users.auth_uid"])) to match
WorkspaceUserRole, or (B) update the WorkspaceUserRole model to use a UUID type
and set sa_column or Field(..., foreign_key="users.auth_uid",
sa_column=Column("user_auth_uid", Uuid())) so the field name, DB column name and
type (Uuid) align with the migration; ensure the PK/foreign key definitions
reference the corrected column name.
In `@api/src/teams/repository.py`:
- Line 27: The code currently iterates result.all() and passes Row tuples into
WorkspaceTeamItem.from_team, which expects WorkspaceTeam instances; change the
call site to extract scalar model instances by using result.scalars().all() (the
same pattern used in get_members) so WorkspaceTeamItem.from_team receives
WorkspaceTeam objects from the async session.exec(select(...)) result.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- LICENSE.template
- alembic_osm/env.py
- alembic_osm/versions/9221408912dd_add_user_role_table.py
- alembic_task/env.py
- alembic_task/versions/add6266277c7_.py
- api/core/config.py
- api/core/security.py
- api/main.py
- api/src/teams/repository.py
- api/src/teams/routes.py
- api/src/teams/schemas.py
- api/src/workspaces/repository.py
- api/src/workspaces/schemas.py
💤 Files with no reviewable changes (1)
- alembic_task/versions/add6266277c7_.py
♻️ Duplicate comments (1)
api/src/teams/repository.py (1)
27-27: `result.scalars().all()` fix correctly applied — LGTM. Aligns with the pattern on line 85 and the retrieved learning about `ChunkedIteratorResult`. Previous issue fully resolved.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/src/teams/repository.py` at line 27, The list comprehension in repository.py that returns [WorkspaceTeamItem.from_team(x) for x in result.scalars().all()] is already corrected to use result.scalars().all() and matches the pattern used elsewhere (see the similar usage around line 85); no further changes are required—leave WorkspaceTeamItem.from_team and the result.scalars().all() call as-is.
🧹 Nitpick comments (2)
api/src/teams/repository.py (2)
79-80: Silent delete on a non-existent team — consider checking `rowcount`. `session.execute` returns a result with a `rowcount` attribute. Not checking it means a caller requesting deletion of a non-existent team ID gets a silent no-op success, whereas the analogous `workspaces/repository.py` delete (lines 196-199) raises `NotFoundException` when `rowcount != 1`. Depending on whether `DELETE` is meant to be idempotent here, this inconsistency may be intentional or a gap.

♻️ Proposed fix (if non-idempotent behaviour is desired)

```diff
-    await self.session.execute(delete(WorkspaceTeam).where(WorkspaceTeam.id == id))  # type: ignore[arg-type]
+    result = await self.session.execute(delete(WorkspaceTeam).where(WorkspaceTeam.id == id))  # type: ignore[arg-type]
+    if result.rowcount != 1:  # type: ignore[attr-defined]
+        raise NotFoundException(f"Team delete failed for id {id}")
     await self.session.commit()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/src/teams/repository.py` around lines 79 - 80, The delete call for WorkspaceTeam uses await self.session.execute(delete(WorkspaceTeam).where(WorkspaceTeam.id == id)) but does not check the returned result's rowcount, so deleting a non-existent team silently succeeds; update the method (the WorkspaceTeam delete path) to capture the result from session.execute, check result.rowcount and raise NotFoundException if rowcount != 1 (mirroring the behavior in workspaces/repository.py's delete), then commit only after the check to maintain consistent non-idempotent behavior.
68-68: Replace `assert` with an explicit `raise` for a production-safe guard. `assert` statements are silently stripped when Python is run with `-O` or `-OO`, meaning `team.id` would be returned as `None` in an optimised deployment without any error.

♻️ Proposed fix

```diff
-    assert team.id is not None
+    if team.id is None:
+        raise RuntimeError("Team ID is unexpectedly None after commit and refresh")
     return team.id
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/src/teams/repository.py` at line 68, Replace the runtime-unsafe assertion "assert team.id is not None" with an explicit check that raises a clear exception (e.g., ValueError or RuntimeError) when team.id is None; locate the check around the code handling the Team instance (the occurrence of "team" and "team.id" in repository.py) and change it to an if-statement that raises with a descriptive message like "team.id must not be None" so the guard remains active in optimized Python runs.
@cyrossignol can you review or merge this one into yours as applicable?

@jeffmaki Will do!
Force-pushed 84352a2 to 7fd871c (Compare)
Actionable comments posted: 2
♻️ Duplicate comments (1)
alembic_osm/versions/9221408912dd_add_user_role_table.py (1)
47-63: ⚠️ Potential issue | 🔴 Critical

The migration still doesn't match `WorkspaceUserRole`. This revision creates `user_auth_uid` as `UUID`, while the ORM/repository surface uses `auth_user_uid`. That mismatch will break ORM reads/writes against `user_workspace_roles` until the physical column name/type and model mapping agree.

```bash
#!/bin/bash
set -euo pipefail
migration="$(fd '9221408912dd_add_user_role_table.py$' . | head -n1)"
schema="$(fd 'schemas.py$' . | rg 'api/src/workspaces/schemas.py' -N || true)"
echo "== Migration definition =="
sed -n '46,64p' "$migration"
echo
echo "== WorkspaceUserRole model =="
if [ -n "$schema" ]; then
  sed -n '124,136p' "$schema"
fi
echo
echo "== All references =="
rg -n -C2 'auth_user_uid|user_auth_uid' --type py
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@alembic_osm/versions/9221408912dd_add_user_role_table.py` around lines 47 - 63, The migration creates the table column as "user_auth_uid" with sa.Uuid(), but the ORM/Repository model (WorkspaceUserRole) expects a column named "auth_user_uid" with the model's type; update the op.create_table call for "user_workspace_roles" to rename the column from user_auth_uid to auth_user_uid and change its SQLAlchemy type to match the WorkspaceUserRole.auth_user_uid field, update the ForeignKeyConstraint reference if necessary (still pointing to users.auth_uid), and keep the PrimaryKeyConstraint("auth_user_uid", "workspace_id") so the physical schema matches the model mapping.
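The corrected physical shape (option A in the prompt) can be illustrated in plain SQL. SQLite is used only to make the sketch runnable; the `role` column name and the TEXT types are assumptions standing in for the real enum and string types:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (auth_uid TEXT PRIMARY KEY);

-- Column renamed user_auth_uid -> auth_user_uid and typed as a string to
-- match WorkspaceUserRole, with the FK still pointing at users.auth_uid
-- and the composite PK (auth_user_uid, workspace_id) retained.
CREATE TABLE user_workspace_roles (
    auth_user_uid TEXT NOT NULL REFERENCES users (auth_uid),
    workspace_id  INTEGER NOT NULL,
    role          TEXT NOT NULL,
    PRIMARY KEY (auth_user_uid, workspace_id)
);
""")
```

With this shape, ORM reads and writes that reference `auth_user_uid` resolve against a real column; the composite primary key still rejects duplicate (user, workspace) role rows.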
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@alembic_osm/versions/9221408912dd_add_user_role_table.py`:
- Around line 29-49: The downgrade path must mirror the upgrade
deterministically: remove the conditional probing logic and drop only the
objects this migration created unconditionally (or alternatively track creation
in a migration-local flag), so edit the migration to eliminate existence checks
around creating/dropping auth_uid_unique, workspace_role, and
user_workspace_roles; specifically remove use of constraint_exists and
insp.has_table gating in upgrade()/downgrade() and make downgrade() always drop
the "auth_uid_unique" constraint, the "workspace_role" enum type, and the
"user_workspace_roles" table (or add explicit creation flags if you prefer
tracking) so the upgrade/downgrade pair is reversible and deterministic.
In `@api/main.py`:
- Around line 56-60: The startup sequence initializes _osm_client and calls
init_tdei_client() before run_migrations(), and if run_migrations() raises the
lifespan never yields so resources remain open; wrap the startup block that
calls init_tdei_client() and any _osm_client initialization in a try/finally (or
try/except + cleanup) so that on any exception you call the appropriate
cleanup/close routines for _osm_client and the TDEI client (mirror whatever
shutdown logic exists) to ensure resources are released when startup fails.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f577d5aa-8b35-4d1c-ba38-f1af44786cc9
📒 Files selected for processing (9)
- LICENSE.template
- alembic_osm/versions/9221408912dd_add_user_role_table.py
- api/core/config.py
- api/main.py
- api/src/teams/repository.py
- api/src/teams/routes.py
- api/src/teams/schemas.py
- api/src/workspaces/repository.py
- api/src/workspaces/schemas.py
🚧 Files skipped from review as they are similar to previous changes (5)
- api/src/workspaces/schemas.py
- api/src/teams/repository.py
- api/src/teams/routes.py
- api/core/config.py
- api/src/teams/schemas.py
Force-pushed 7fd871c to 4961e4b (Compare)
Summary by CodeRabbit
New Features
- Exposed the `settings` configuration object as a public variable for external access.

Refactor

Chores