fix: send SIGHUP to MCP container after initial deploy #378
Conversation
📝 Walkthrough

Added a post-deployment hook mechanism that allows service resources to execute custom logic after their containers are confirmed running. The MCP service uses this hook to trigger a best-effort configuration reload via SIGHUP after deployment completes.

Changes: post-deployment hook system.
Up to standards ✅

| Metric | Results |
|---|---|
| Issues | 🟢 |
| Duplication | 0 |
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@server/internal/orchestrator/swarm/mcp_config_resource.go`:
- Around lines 260-262: In PostDeploy, do not ignore the return value of
  r.signalConfigReload(ctx, rc). Check the error and emit a structured zerolog
  error log that includes at least the error (err), the operation ("PostDeploy"
  or "signalConfigReload"), and the relevant resource/context identifiers
  (e.g., rc or the resource name). Use the existing logger (e.g., r.logger or
  rc.Logger), and where appropriate apply domain-error-to-HTTP-code mapping
  (via Goa) when handling the error downstream, so reload failures are visible
  in logs and can be translated to proper status codes.
In `@server/internal/orchestrator/swarm/service_instance_spec.go`:
- Around lines 98-100: The current code silently ignores errors from
  resource.FromContext; log failures with structured zerolog and retain the
  successful path. Call resource.FromContext[*MCPConfigResource](rc,
  MCPConfigResourceIdentifier(s.ServiceInstanceID)); if err != nil, emit a
  structured error log (with fields such as "service_instance_id"
  s.ServiceInstanceID, "resource" "MCPConfigResource", and the error value)
  using the existing zerolog logger available in the request context or rc,
  and only call mcpCfg.PostDeploy(ctx, rc) when err == nil. Keep the same
  identifiers (resource.FromContext, MCPConfigResource,
  MCPConfigResourceIdentifier, PostDeploy, mcpCfg) so reviewers can locate
  the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f2e8d827-e5f3-41fd-ab8f-50136e4a876d
📒 Files selected for processing (3)
- server/internal/orchestrator/swarm/mcp_config_resource.go
- server/internal/orchestrator/swarm/service_instance.go
- server/internal/orchestrator/swarm/service_instance_spec.go
rshoemaker left a comment
LGTM.
One minor comment: cluster update will cause a double SIGHUP on the container (once in MCPConfigResource.Update() and again in ServiceInstanceResource.deploy()), while create will only cause a single SIGHUP. The double signal is benign, though - not worth changing IMO.
Will address the complete solution as part of PLAT-589.
Summary
This PR ensures that once the MCP container is confirmed running during initial provisioning, a SIGHUP signal is sent to trigger a config reload.

Changes

- `service_instance.go`: call `spec.PostDeploy()` after `WaitForService()`
- `service_instance_spec.go`: add `PostDeploy()` with a service-type switch
- `mcp_config_resource.go`: add a public `PostDeploy()` that calls the existing `signalConfigReload()`

Testing
Verification: confirmed from the logs that the config reloaded.
Checklist
PLAT-588