Conversation
Warning: Rate limit exceeded

You've run out of usage credits. Purchase more in the billing tab.

⌛ How to resolve this issue?
After the wait time has elapsed, a review can be triggered using the
We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?
CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source, and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (3)
📝 Walkthrough

This PR introduces a new authenticated bulk endpoint at

Changes: Bulk Forecast & Comment Endpoint

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Poem
🚥 Pre-merge checks | ✅ 4 — Passed checks (4 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
…sue/4552/bulk-forecast-comment
🚀 Preview Environment
Your preview environment is ready!

ℹ️ Preview Environment Info
Isolation:
Limitations:
Cleanup:
…/bulk-forecast-comment
@claude please scan this PR for security concerns
Claude finished @lsabor's task in 2m 44s — View job

Security Review: bulk-forecast-comment endpoint

I reviewed

🟠 HIGH — User enumeration via response-code differential
File:
The endpoint discloses the existence of arbitrary
Because
Recommended fix: unify the responses to a single 403 (

🟡 MEDIUM — No bulk-size limit (DoS vector)
File:
There is no upper bound on
Recommended fix: validate

🟡 MEDIUM — Side effects run outside the atomic transaction
File:
Recommended fix: use

🟡 MEDIUM — Missing audit log for
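The fix pattern for the HIGH finding can be sketched without the framework. This is an illustrative sketch only: `PermissionDenied` below is a local stand-in for DRF's exception (which renders as HTTP 403), and `resolve_target_user` plus the dict-shaped user rows are assumed names, not the project's actual code.

```python
# Illustrative sketch of the recommended fix: collapse "user not found" and
# "not permitted" into one opaque error so the response code/message cannot
# be used to probe which user IDs exist. All names here are assumptions.

class PermissionDenied(Exception):
    """Stand-in for rest_framework.exceptions.PermissionDenied (HTTP 403)."""

OPAQUE_MESSAGE = "You cannot submit forecasts or comments for this user."

def resolve_target_user(users_by_id, user_id, request_user_id):
    user = users_by_id.get(user_id)
    allowed = user is not None and (
        user["id"] == request_user_id
        or user.get("bot_owner_id") == request_user_id
    )
    if not allowed:
        # Same exception and message whether the user is missing or merely
        # not ours -- an enumeration attempt learns nothing either way.
        raise PermissionDenied(OPAQUE_MESSAGE)
    return user
```

An attacker probing IDs then sees an identical 403 for nonexistent and existing-but-forbidden users.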
…/bulk-forecast-comment
    if is_staff_override and not request_user.is_staff:
        raise PermissionDenied("Non-staff users cannot use the is_staff_override flag.")
Why do we need an explicit is_staff_override flag? Can we simplify it by checking bot-belonging/user permissions when the user explicitly passes user_id or username params?
I could see a scenario where I'm a staff user and want to have my personal bot forecast, in which case I'd be forecasting without the "is_staff_override" flag. This way, if I mistype the bot ID or username, I get bounced appropriately. Since being able to forecast on behalf of anyone is a "risky" feature to add, I want to make sure the one submitting the forecast is being really explicit in their knowledge that they're using a staff override.
Do you feel like the additional parameter would be hard to maintain or confusing? I think it makes it easier!
    username = serializers.CharField(required=False, allow_null=True)
    is_staff_override = serializers.BooleanField(required=False, default=False)
    forecasts = ForecastWriteSerializer(many=True, required=False, default=list)
    comments = CommentWriteSerializer(many=True, required=False, default=list)
A small nit: I don’t like that we have forecasts and comments objects acting independently from their target post, so you could create a forecast for post A and a comment on unrelated post B at the same time.
I’m wondering if there’s a more sophisticated way to shape the schema so it explicitly says “this forecast comes together with this question,” instead of looking almost the same as just making two separate requests.
On the other hand, we still need to keep these things isolated and avoid mixing responsibilities. We currently have a similar case with Key Factors: comments.views.common.comment_create_api_view creates both a comment and a key factor if present.
I think it should be acceptable for this to be a bulk endpoint for forecasts and comments without restricting it such that you always need to pair comments with forecasts in the submission.
I'd be willing to change it if you insist, but I think it's simpler to allow for independent sets. I guess the risk here is that the forecast needs to attach to a Question while the comment attaches to a Post. I'll make sure that we validate all forecast and comment attachment points before saving any of them, but I still think it's good this way.
    def validate(self, attrs):
        if not attrs.get("user_id") and not attrs.get("username"):
            raise serializers.ValidationError(
                "Either user_id or username must be provided."
            )
        return attrs
Hm, I’m confused. What if the user wants to forecast on behalf of themselves? Should they also pass their own user ID?
If so, maybe we can simplify it to this flow:
- No user params → assume current authenticated user
- User param → check whether:
- it’s the current user
- the bot belongs to the current user
- the user is an admin and can impersonate another user
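The suggested flow above can be sketched in a framework-free way. This is a hedged illustration under assumed names: `resolve_actor`, the dict-shaped users, and the `is_staff`/`bot_owner_id` fields are illustrative, not the project's actual models or API.

```python
# Sketch of the suggested resolution order: no param -> current user;
# param -> self, own bot, or admin impersonation. All names are assumptions.

class PermissionDenied(Exception):
    pass

def resolve_actor(request_user, users_by_id, user_id=None):
    if user_id is None:
        return request_user  # no user params -> current authenticated user
    target = users_by_id.get(user_id)
    if target is not None:
        if target["id"] == request_user["id"]:
            return target  # it's the current user
        if target.get("bot_owner_id") == request_user["id"]:
            return target  # the bot belongs to the current user
    if request_user.get("is_staff"):
        if target is None:
            raise PermissionDenied("No such user.")
        return target  # admin may act on behalf of another user
    raise PermissionDenied("Cannot act on behalf of this user.")
```

The order matters: self/own-bot checks come first, so staff acting on their own bots never hit the impersonation branch.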
I think the adage of "explicit over implicit" is worth it here. What's the downside to making sure you are definitely trying to forecast for yourself vs the bot? It just forces the user to be explicit and I think that's a value add.
…enting, allow pre-registering forecasts
…/bulk-forecast-comment
ee56c6b to ad12105 (Compare)
Actionable comments posted: 1
🧹 Nitpick comments (5)
questions/views.py (3)
291-298: ⚡ Quick win — Empty payload silently returns 201.

If forecasts and comments are both empty (or both omitted, since they default to list), the endpoint completes with no work done and returns 201. That is potentially confusing for bot authors and obscures buggy clients. Consider rejecting payloads with no forecasts and no comments.

♻️ Require at least one item

     def validate(self, attrs):
         if not attrs.get("user_id") and not attrs.get("username"):
             raise serializers.ValidationError(
                 "Either user_id or username must be provided."
             )
    +    if not attrs.get("forecasts") and not attrs.get("comments"):
    +        raise serializers.ValidationError(
    +            "At least one forecast or comment must be provided."
    +        )
         return attrs

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@questions/views.py` around lines 291 - 298, The view currently accepts empty payloads because when forecasts_data and comments_data are both empty it proceeds and returns 201; add a validation step that rejects such requests: after deserializing with BulkForecastAndCommentSerializer and extracting forecasts_data and comments_data, if both are empty (e.g., not forecasts_data and not comments_data) raise a serializers.ValidationError (or return Response(..., status=400)) with a clear message like "payload must include at least one forecast or comment"; alternatively implement the same check in BulkForecastAndCommentSerializer.validate(...) to enforce at-least-one-item and surface a 400 to callers.
305-315: ⚡ Quick win — Silent preference of user_id when both user_id and username are provided.

The serializer requires at least one of user_id/username, but if a caller provides both with mismatched values, username is silently ignored (both in the staff-override branch and in the non-override branch). Given the explicit-by-design philosophy from earlier discussion, consider rejecting payloads where both are present, or validating that they refer to the same User row.

♻️ Reject ambiguous payloads at the serializer level

     def validate(self, attrs):
    -    if not attrs.get("user_id") and not attrs.get("username"):
    +    user_id = attrs.get("user_id")
    +    username = attrs.get("username")
    +    if not user_id and not username:
             raise serializers.ValidationError(
                 "Either user_id or username must be provided."
             )
    +    if user_id and username:
    +        raise serializers.ValidationError(
    +            "Provide either user_id or username, not both."
    +        )
         return attrs

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@questions/views.py` around lines 305 - 315, The code silently prefers user_id over username (in both the is_staff_override branch using get_object_or_404(User, id=user_id) / get_object_or_404(User, username=username) and the non-override branch using User.objects.filter(...).first()), which allows ambiguous or conflicting payloads; update validation to reject or reconcile dual fields: modify the serializer (or add a clean/validate method) to error if both user_id and username are provided but refer to different users, or to reject payloads that include both, and then adjust the view to assume exactly one identifier is present (remove the silent preference logic) so the view uses the serializer-validated single user identifier instead of choosing between get_object_or_404(User, id=...) and get_object_or_404(User, username=...).
280-282: 💤 Low value — Remove the redundant IsAuthenticated decorator on line 281.

IsAuthenticated is configured as the default permission class in Django REST Framework settings, making the explicit @permission_classes([IsAuthenticated]) decorator unnecessary here and inconsistent with neighboring bulk endpoints.

Suggested change

     @api_view(["POST"])
    -@permission_classes([IsAuthenticated])
     def bulk_forecast_and_comment_api_view(request):

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@questions/views.py` around lines 280-282, remove the redundant `@permission_classes`([IsAuthenticated]) decorator from the bulk_forecast_and_comment_api_view function: delete the decorator line that directly precedes def bulk_forecast_and_comment_api_view(request) so the view relies on the global DEFAULT_PERMISSION_CLASSES setting (and matches neighboring bulk endpoints) while leaving the `@api_view`(["POST"]) decorator and function body unchanged.

tests/unit/test_questions/test_bulk_forecast_and_comment.py (2)
53-272: ⚡ Quick win — Add tests for the public-comment ban and the comment happy path.

The PR explicitly "blocks public commenting" and supports an included_forecast flag, but the only comment-related coverage here is the key_factors → 400 case. Consider adding:

- A test that submitting a comment with is_private=False (or omitted) returns 400.
- A successful path that creates a private comment (with and without forecasts) and asserts the Comment row exists.
- An atomicity test where a comment validation/creation failure also prevents the forecasts from being persisted (confirms the transaction.atomic() block actually behaves as intended).
- A test for the included_forecast=True branch in bulk_forecast_and_comment_api_view (lines 393-399 of questions/views.py), since that is the most non-trivial code path.

These would substantially strengthen the contract this endpoint advertises.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/unit/test_questions/test_bulk_forecast_and_comment.py` around lines 53 - 272, Add tests inside TestBulkForecastAndComment to cover public-comment ban and happy paths: (1) submit a comment with is_private=False (or omit is_private) and assert the bulk endpoint (bulk_forecast_and_comment_api_view) returns 400 to enforce the public-comment ban; (2) create a successful private comment request (with and without forecasts) and assert a Comment row exists and associated Forecast rows are created (use Comment and Forecast model checks); (3) add an atomicity test where you force comment validation/creation to fail (e.g., invalid comment payload) while supplying valid forecasts and assert that no Forecast rows persist, verifying transaction.atomic behavior; and (4) add a test exercising the included_forecast=True branch in bulk_forecast_and_comment_api_view to assert it follows the alternate code path and persists expected Forecast/Comment results.
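The atomicity case in the third suggested test reduces to one assertion: a failure after the forecast inserts must leave no rows behind. A framework-free sketch of that pattern, using sqlite3 in place of Django's ORM — bulk_create here is a hypothetical stand-in for the endpoint's transaction block, not the project's code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # autocommit mode; we manage BEGIN/COMMIT explicitly
conn.execute("CREATE TABLE forecast (id INTEGER PRIMARY KEY)")

def bulk_create(forecast_ids, fail_comment=False):
    # Hypothetical stand-in for the endpoint's transaction.atomic() block:
    # insert all forecasts, then "create the comment", all-or-nothing.
    cur = conn.cursor()
    cur.execute("BEGIN")
    try:
        for fid in forecast_ids:
            cur.execute("INSERT INTO forecast (id) VALUES (?)", (fid,))
        if fail_comment:
            raise ValueError("comment validation failed")
        cur.execute("COMMIT")
    except Exception:
        cur.execute("ROLLBACK")
        raise

def forecast_count():
    return conn.execute("SELECT COUNT(*) FROM forecast").fetchone()[0]
```

A test then forces the comment failure and asserts the forecast table is still empty, which is exactly what the transaction.atomic() suggestion is meant to guarantee.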
14-14: 💤 Low value — Module-level reverse() runs at import time.

This works under pytest-django because ROOT_URLCONF is loaded when the test module is imported, but it ties test collection to URL resolution. If a future test ever wants to override ROOT_URLCONF (e.g., via @pytest.mark.urls), this constant won't reflect that. A simple fixture or inline call inside each test avoids the foot-gun.

♻️ Optional move into a fixture

    -URL = reverse("bulk-forecast-comment")
    +@pytest.fixture()
    +def url():
    +    return reverse("bulk-forecast-comment")

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/unit/test_questions/test_bulk_forecast_and_comment.py` at line 14, The module-level URL = reverse("bulk-forecast-comment") call runs at import time and prevents tests from seeing runtime URL overrides; change it so reverse("bulk-forecast-comment") is invoked at test runtime instead—either replace the module-level URL with a small fixture (e.g., def bulk_forecast_comment_url(): return reverse("bulk-forecast-comment")) and use that fixture in tests, or call reverse("bulk-forecast-comment") directly inside each test that needs it; update references to the module-level URL variable in test_bulk_forecast_and_comment.py accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@questions/views.py`:
- Around line 305-327: The view enqueues Dramatiq messages inside a transaction
(see create_forecast_bulk and after_forecast_actions) which can be rolled back,
producing orphaned async jobs; change all places that call
run_build_question_forecasts.send(...) and
run_on_post_forecast.send_with_options(...) so they are invoked inside
transaction.on_commit(...) lambdas instead of directly, including the callsite
in after_forecast_actions and the send at line 183 of create_forecast_bulk;
ensure any other callers from non-bulk paths (e.g., the service-layer uses
around lines 155 and 241) use the same transaction.on_commit wrapping so
messages are only published after the DB transaction successfully commits.
---
Nitpick comments:
In `@questions/views.py`:
- Around line 291-298: The view currently accepts empty payloads because when
forecasts_data and comments_data are both empty it proceeds and returns 201; add
a validation step that rejects such requests: after deserializing with
BulkForecastAndCommentSerializer and extracting forecasts_data and
comments_data, if both are empty (e.g., not forecasts_data and not
comments_data) raise a serializers.ValidationError (or return Response(...,
status=400)) with a clear message like "payload must include at least one
forecast or comment"; alternatively implement the same check in
BulkForecastAndCommentSerializer.validate(...) to enforce at-least-one-item and
surface a 400 to callers.
- Around line 305-315: The code silently prefers user_id over username (in both
the is_staff_override branch using get_object_or_404(User, id=user_id) /
get_object_or_404(User, username=username) and the non-override branch using
User.objects.filter(...).first()), which allows ambiguous or conflicting
payloads; update validation to reject or reconcile dual fields: modify the
serializer (or add a clean/validate method) to error if both user_id and
username are provided but refer to different users, or to reject payloads that
include both, and then adjust the view to assume exactly one identifier is
present (remove the silent preference logic) so the view uses the
serializer-validated single user identifier instead of choosing between
get_object_or_404(User, id=...) and get_object_or_404(User, username=...).
- Around line 280-282: Remove the redundant
`@permission_classes`([IsAuthenticated]) decorator from the
bulk_forecast_and_comment_api_view function: delete the decorator line that
directly precedes the def bulk_forecast_and_comment_api_view(request) so the
view relies on the global DEFAULT_PERMISSION_CLASSES setting (and matches
neighboring bulk endpoints) while leaving the `@api_view`(["POST"]) decorator and
function body unchanged.
In `@tests/unit/test_questions/test_bulk_forecast_and_comment.py`:
- Around line 53-272: Add tests inside TestBulkForecastAndComment to cover
public-comment ban and happy paths: (1) submit a comment with is_private=False
(or omit is_private) and assert the bulk endpoint
(bulk_forecast_and_comment_api_view) returns 400 to enforce the public-comment
ban; (2) create a successful private comment request (with and without
forecasts) and assert a Comment row exists and associated Forecast rows are
created (use Comment and Forecast model checks); (3) add an atomicity test where
you force comment validation/creation to fail (e.g., invalid comment payload)
while supplying valid forecasts and assert that no Forecast rows persist,
verifying transaction.atomic behavior; and (4) add a test exercising the
included_forecast=True branch in bulk_forecast_and_comment_api_view to assert it
follows the alternate code path and persists expected Forecast/Comment results.
- Line 14: The module-level URL = reverse("bulk-forecast-comment") call runs at
import time and prevents tests from seeing runtime URL overrides; change it so
reverse("bulk-forecast-comment") is invoked at test runtime instead—either
replace the module-level URL with a small fixture (e.g., def
bulk_forecast_comment_url(): return reverse("bulk-forecast-comment")) and use
that fixture in tests, or call reverse("bulk-forecast-comment") directly inside
each test that needs it; update references to the module-level URL variable in
test_bulk_forecast_and_comment.py accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c153829e-0036-41df-b417-0faf08f48c42
📒 Files selected for processing (4)
- questions/services/forecasts.py
- questions/urls.py
- questions/views.py
- tests/unit/test_questions/test_bulk_forecast_and_comment.py
    if is_staff_override:
        if user_id:
            user = get_object_or_404(User, id=user_id)
        else:
            user = get_object_or_404(User, username=username)
    else:
        user = (
            User.objects.filter(id=user_id).first()
            if user_id
            else User.objects.filter(username=username).first()
        )
        is_self = user is not None and user.id == request_user.id
        is_own_bot = (
            user is not None
            and user.is_bot
            and user.bot_owner_id is not None
            and user.bot_owner_id == request_user.id
        )
        if not is_self and not is_own_bot:
            raise PermissionDenied(
                "Non-superusers can only submit forecasts and comments as themselves "
                "or their bots."
            )
🧩 Analysis chain
🏁 Script executed:

    #!/bin/bash
    # Confirm task.send invocations on the forecast hot path are not already wrapped with transaction.on_commit.
    rg -nP -C3 "run_build_question_forecasts\.send|run_on_post_forecast\.send" --type=py
    rg -nP -C2 "transaction\.on_commit" --type=py

Repository: Metaculus/metaculus
Length of output: 1597
🏁 Script executed:

    # Find the create_forecast_bulk function and its surrounding context
    sed -n '160,190p' questions/services/forecasts.py
    # Check the context around line 385 in views.py
    sed -n '380,395p' questions/views.py
    # Look for the after_forecast_actions function mentioned in the comment
    rg -n "after_forecast_actions" --type=py -B2 -A10

Repository: Metaculus/metaculus
Length of output: 4794
🏁 Script executed:

    # Get the full after_forecast_actions function
    sed -n '137,160p' questions/services/forecasts.py
    # Check line 385 in views.py and surrounding context
    sed -n '375,395p' questions/views.py
    # Check where create_forecast_bulk is called
    rg -n "create_forecast_bulk" --type=py -B3 -A3

Repository: Metaculus/metaculus
Length of output: 6866
Defer task enqueueing until transaction commit to avoid orphaned async jobs.
create_forecast_bulk enqueues run_build_question_forecasts.send(...) (via after_forecast_actions at line 160) and run_on_post_forecast.send_with_options(...) (at line 183) inside transaction.atomic() on lines 404-405 in views.py. Dramatiq messages are published immediately, not bound to the transaction. If any subsequent create_comment call raises (line 407+), the forecasts and comments get rolled back but the workers will still pick up those messages and try to rebuild aggregates / fire post-forecast actions for objects that no longer exist, leading to noisy errors and potentially stale aggregate state.
Wrap each task send in transaction.on_commit(lambda: task.send(...)) in after_forecast_actions and line 183 of create_forecast_bulk. Doing this in the service layer also benefits the existing non-bulk callers (lines 155, 241).
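The contract behind that fix can be illustrated without Django. The sketch below simulates transaction.on_commit semantics — callbacks registered during a transaction fire only after a successful commit and are discarded on rollback — so a send wrapped as transaction.on_commit(lambda: task.send(...)) never publishes for a rolled-back transaction. FakeTransaction, create_forecasts, and the sent_messages queue are stand-ins for illustration, not the project's actual code:

```python
# Minimal simulation of Django's transaction.on_commit contract. In the real
# fix the registered callback would be something like
#   transaction.on_commit(lambda: run_build_question_forecasts.send(question_id))
# All names below are illustrative assumptions.

class FakeTransaction:
    def __init__(self):
        self._on_commit = []

    def on_commit(self, callback):
        self._on_commit.append(callback)

    def commit(self):
        callbacks, self._on_commit = self._on_commit, []
        for cb in callbacks:
            cb()  # messages are published only now, after the commit

    def rollback(self):
        self._on_commit = []  # nothing is published for a failed transaction

sent_messages = []

def create_forecasts(txn, ok=True):
    # ... DB writes would happen here ...
    txn.on_commit(lambda: sent_messages.append("run_build_question_forecasts"))
    if ok:
        txn.commit()
    else:
        txn.rollback()  # e.g. a later create_comment raised
```

Because the publish is deferred to the commit hook, a rollback triggered by a later comment failure leaves no orphaned Dramatiq job referencing rows that no longer exist.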
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@questions/views.py` around lines 305 - 327, The view enqueues Dramatiq
messages inside a transaction (see create_forecast_bulk and
after_forecast_actions) which can be rolled back, producing orphaned async jobs;
change all places that call run_build_question_forecasts.send(...) and
run_on_post_forecast.send_with_options(...) so they are invoked inside
transaction.on_commit(...) lambdas instead of directly, including the callsite
in after_forecast_actions and the send at line 183 of create_forecast_bulk;
ensure any other callers from non-bulk paths (e.g., the service-layer uses
around lines 155 and 241) use the same transaction.on_commit wrapping so
messages are only published after the DB transaction successfully commits.
closes #4552
Summary by CodeRabbit