Conversation
- Updated deployment tier terminology from "Tier 1", "Tier 2", and "Tier 3" to "Freemium", "Personal", and "Enterprise" for clarity.
- Implemented rate-limiting checks in multiple actions to prevent per-user burst abuse.
- Adjusted comments and documentation to reflect new tier names and functionality.
- Enhanced integration tests to align with the new tier structure and ensure proper functionality.
- Modified architecture preview components to display updated tier information.
- Updated BuiltInMcpServer and DbMcpServer interfaces to make command, args, and env optional.
- Added url and transportType fields to support different transport methods.
- Modified addMCPServer and updateMCPServer mutations to validate transport-specific fields.
- Adjusted built-in MCP server configurations to reflect new transport types and disable unnecessary servers.
- Updated platformValue calculation to reflect new infrastructure items.
- Refined realAgentTesting comments to align with AgentCore implementation.
- Adjusted schema definitions for mcpServers to accommodate new optional fields.
- Updated strandsAgentExecution to ensure correct type handling for skill references.
- Removed unused AWS SDK dependencies related to ECS and ECR.
- Revised AWSAuthModal and AWSRoleSetup components to reflect new permissions for Bedrock.
- Updated ArchitecturePreview component to reflect changes in deployment tiers and resource estimates.
- Adjusted CodePreview component to clarify deployment instructions for Bedrock AgentCore.
Warning: Rate limit exceeded

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered again. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source, and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

📝 Walkthrough

The pull request migrates the application infrastructure from ECS/Fargate-based containerized deployments to AWS Bedrock AgentCore-managed agents. It introduces per-user rate limiting across workflow execution paths, extends MCP server configuration to support multiple transport types (stdio, HTTP/SSE, direct), refactors deployment tier naming from numerical (tier1/tier2/tier3) to descriptive (freemium/personal/enterprise), and updates related infra, deployment logic, and tests.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant App as Agent Builder App
    participant Router as Deployment Router
    participant RateLimiter as Rate Limiter
    participant AgentCore as Bedrock AgentCore
    participant S3 as S3 Artifacts Bucket
    participant Logs as CloudWatch
    User->>App: request deploy
    App->>Router: deployAgent(agentId, tier, userId)
    Router->>RateLimiter: checkRateLimit(userId, "agentExecution")
    alt rate limit exceeded
        RateLimiter-->>Router: denied
        Router-->>App: error (rate limit)
        App-->>User: rate limit error
    else allowed
        Router->>Router: gather deps & env
        Router->>AgentCore: deployToAgentCore(code, deps, env)
        AgentCore->>S3: store artifacts
        S3-->>AgentCore: ack
        AgentCore->>Logs: write logs
        Logs-->>AgentCore: ack
        AgentCore-->>Router: {success, runtimeId}
        Router-->>App: deployment success
        App-->>User: agent ready
    end
```
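The `checkRateLimit` step in the flow above is not shown in this excerpt; a minimal in-memory, fixed-window sketch of per-user, per-action limiting (hypothetical names — the PR's actual implementation persists counters in Convex) could look like:

```typescript
// Hypothetical fixed-window rate limiter keyed by "userId:action".
// Illustrative only: the real implementation stores window state in the
// database so limits survive across serverless invocations.
type WindowState = { windowStart: number; count: number };

class RateLimiter {
  private windows = new Map<string, WindowState>();

  constructor(
    private maxPerWindow: number,
    private windowMs: number,
  ) {}

  // Returns true if the call is allowed, false if the limit is exceeded.
  check(userId: string, action: string, now: number = Date.now()): boolean {
    const key = `${userId}:${action}`;
    const state = this.windows.get(key);
    // Start a fresh window if none exists or the previous one expired.
    if (!state || now - state.windowStart >= this.windowMs) {
      this.windows.set(key, { windowStart: now, count: 1 });
      return true;
    }
    if (state.count >= this.maxPerWindow) return false;
    state.count += 1;
    return true;
  }
}
```

Because the key includes both the user and the action name, one user hitting the limit on `agentExecution` does not affect other users or other actions, matching the "per user" behavior described in the walkthrough.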
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 1 | ❌ 3

❌ Failed checks (2 warnings, 1 inconclusive)
✅ Passed checks (1 passed)
Pull request overview
This pull request refactors the deployment architecture and tier system for the agent builder application, moving from ECS/Fargate-based deployments to Bedrock AgentCore, and renaming deployment tiers from "tier1/tier2/tier3" to more descriptive "freemium/personal/enterprise" names.
Changes:
- Renamed deployment tiers: tier1→freemium, tier2→personal, tier3→enterprise
- Refactored MCP server configuration to support multiple transport types (stdio/sse/http/direct)
- Removed all ECS/Fargate/ECR infrastructure in favor of Bedrock AgentCore
- Added rate limiting to multiple agent execution endpoints
- Updated billing/usage reset logic for freemium tier via cron jobs
Reviewed changes
Copilot reviewed 43 out of 43 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| src/components/ArchitecturePreview.tsx | Updated tier display names and resource estimates from Fargate to AgentCore |
| src/components/ArchitecturePreview.test.tsx | Updated all test cases for new tier naming convention |
| src/components/CodePreview.tsx | Changed deployment instructions from Lambda/ECS to Bedrock AgentCore |
| src/components/AgentBuilder.tsx | Added workflowResult to useEffect dependencies, improved error handling |
| src/components/AWSRoleSetup.tsx | Replaced ECS/ECR permissions with Bedrock permissions |
| src/components/AWSAuthModal.tsx | Updated permission requirements from ECS/ECR to Bedrock |
| package.json | Removed unused AWS SDK clients (EC2, ECR, ECS) |
| convex/schema.ts | Made MCP server command/args optional, added url and transportType fields |
| convex/mcpConfig.ts | Added transport-specific validation, updated built-in servers to use "direct" transport |
| convex/mcpClient.ts | Implemented multi-transport routing (stdio/sse/http/direct), added invokeDirectAPI function |
| convex/deploymentRouter.ts | Renamed tier functions, replaced Fargate deployment with AgentCore |
| convex/stripeMutations.ts | Added resetFreemiumMonthlyUsage for cron-based billing resets |
| convex/crons.ts | Added monthly cron job for freemium user billing resets |
| convex/workflowExecutor.ts | Added rate limiting to prevent burst abuse |
| convex/unifiedAgentExecution.ts | Added rate limiting check |
| convex/promptChainExecutor.ts | Added rate limiting to both chain and parallel execution |
| convex/interleavedReasoning.ts | Added rate limiting |
| convex/strandsAgentExecution.ts | Improved type safety for agent skills and thinkingLevel |
| convex/realAgentTesting.ts | Updated comments from ECS/Fargate to AgentCore |
| convex/platformValue.ts | Updated infrastructure value items from VPC/ECS to Bedrock/S3 |
| convex/lib/cloudFormationGenerator.ts | Replaced ECS/ECR resources with S3 bucket and AgentCore configuration |
| convex/lib/aws/s3Client.ts | Updated comments from ECS to agent deployment |
| convex/lib/aws/cloudwatchClient.ts | Changed log group from /ecs/ to /agentcore/ |
| convex/lib/toolDispatch.ts | Updated comments from ECS to general execution |
| convex/lib/fileGenerators.ts | Removed Fargate deployment option from README |
| convex/guardrails.ts | Added clarifying comment about rate limit usage |
| convex/diagramGenerator.ts | Updated Docker diagram from Fargate to local Docker |
| convex/deploymentPackageGenerator.ts | Updated deployment target from Fargate to AgentCore |
| convex/constants.ts | Removed ECS/ECR resource constants, renamed isContainerDeployment to isLocalDeployment |
| convex/codeGenerator.ts | Updated SAM template description |
| convex/cloudFormationGenerator.ts | Massive refactoring from ECS/Fargate to Bedrock AgentCore infrastructure |
| convex/cdkGenerator.ts | Replaced ECS/Fargate CDK resources with AgentCore S3 and IAM |
| convex/awsDiagramGenerator.ts | Updated resource list from Fargate to AgentCore |
| convex/awsDeploymentFlow.ts | Removed Fargate deployment path |
| convex/awsDeployment.ts | Removed ECS/ECR deployment code, renamed tier functions |
| convex/awsCrossAccount.ts | Replaced Fargate deployment with AgentCore |
| convex/automatedAgentBuilder.ts | Added rate limiting |
| convex/agents.ts | Updated MCP server schema with optional command/args and new transport fields |
| convex/agentExecution.test.ts | Updated test descriptions and removed ECS-specific test code |
| convex/agentBuilderWorkflow.ts | Added rate limiting to all workflow execution actions |
| convex/http.ts | Removed ECS RunTask endpoint |
| convex/integration.test.ts | Updated tests for new tier names and in-line billing reset logic |
| cloudformation/user-onboarding-template.yaml | Replaced VPC/ECS/ECR resources with S3 and Bedrock permissions |
```ts
} else if (transport === "stdio") {
  if (!args.command) {
    throw new Error("Transport \"stdio\" requires a command field");
  }
```
The validation for MCP transport types only checks for "sse", "http", and "stdio", but the built-in servers use "direct" as a transportType. The validation logic should either include "direct" as a valid transport type or exclude it from validation. Currently, if a user tries to create a server with transportType "direct", it would fall through the validation without checking required fields.
```ts
    }
  }
} else if (transport === "direct") {
  // "direct" transport is used for in-process/built-in servers and
  // currently has no additional required fields.
} else {
  throw new Error(`Unsupported transport type: "${transport}"`);
```
Actionable comments posted: 17
Caution

Some comments are outside the diff and can't be posted inline due to platform limitations.

⚠️ Outside diff range comments (24)
convex/lib/fileGenerators.ts (1)
113-130: ⚠️ Potential issue | 🟠 Major

Fix `generateMCPConfig` to handle HTTP/SSE and direct transport types.

The function currently only outputs `command`/`args`/`env`/`disabled` (stdio transport), but the schema and mcpClient.ts both support `"stdio" | "sse" | "http" | "direct"` transport types. Servers configured with HTTP/SSE or direct will have incomplete mcp.json in their deployment package — missing `url` and `transportType` fields entirely.

Additionally, there's a duplicate `generateMCPConfig` function at convex/codeGenerator.ts:969 with the same issue.

Apply the suggested conditional logic to emit transport-specific fields:
- For `sse`/`http`: include `url` and `transportType`
- For `stdio` (default): include `command`, `args`, `env`
- Always include `disabled`

Fix both locations.
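A standalone, runnable sketch of the conditional logic the reviewer describes (the interface and defaults here are assumptions for illustration; the actual Convex schema types are omitted):

```typescript
// Sketch of a transport-aware generateMCPConfig. Field names mirror the
// schema described in this review (url, transportType, command, args, env,
// disabled); everything else is simplified for the example.
type TransportType = "stdio" | "sse" | "http" | "direct";

interface MCPServer {
  name: string;
  transportType?: TransportType;
  url?: string;
  command?: string;
  args?: string[];
  env?: Record<string, string>;
  disabled?: boolean;
}

function generateMCPConfig(mcpServers: MCPServer[]) {
  const config: { mcpServers: Record<string, unknown> } = { mcpServers: {} };
  for (const server of mcpServers) {
    const transport = server.transportType ?? "stdio";
    const entry: Record<string, unknown> = { disabled: server.disabled ?? false };
    if (transport === "sse" || transport === "http") {
      // Remote transports only need a URL and the transport type.
      entry.transportType = transport;
      entry.url = server.url;
    } else if (transport === "stdio") {
      // Local subprocess transport: command, args, env.
      entry.command = server.command;
      entry.args = server.args ?? [];
      entry.env = server.env ?? {};
    } else {
      // "direct" servers run in-process; only the transport type is recorded.
      entry.transportType = transport;
    }
    config.mcpServers[server.name] = entry;
  }
  return config;
}
```

With this shape, an HTTP server entry carries `url`/`transportType` instead of an empty `command`, which is the gap the review flags in both copies of the function.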
convex/platformValue.ts (1)
52-55: ⚠️ Potential issue | 🟡 Minor

Stale "ECR management" reference in deployment items.

The `infrastructure` items on line 30 were updated to reflect the Bedrock AgentCore migration, but the `deployment` items still reference "ECR management" and "Docker automation." These should be updated for consistency (e.g., "AgentCore deployment", "S3 artifact upload").

convex/mcpClient.ts (1)
626-645: ⚠️ Potential issue | 🟡 Minor

`server.command` may be `undefined` for stdio transport if the server was created before the validation was added.

`server.command` is now optional in the schema. While `addMCPServer` validates that `stdio` transport requires a command, pre-existing DB records could have `command: undefined`. `StdioClientTransport` would receive `undefined` as the command and likely throw an unhelpful error.

Suggested guard:

```diff
+ if (!server.command) {
+   throw new Error(`MCP server "${server.name}" is configured for stdio transport but has no command. Please update its configuration.`);
+ }
+
  const transport = new StdioClientTransport({
    command: server.command,
```

convex/automatedAgentBuilder.ts (1)
363-372: ⚠️ Potential issue | 🟡 Minor

`anthropic_version` is unconditionally included — inconsistent with the conditional fix in `mcpClient.ts`.

Line 797 of `convex/mcpClient.ts` conditionally includes `anthropic_version` only for Claude models. Here, `analyzeAndAskNext` always includes it (line 366). If `AGENT_BUILDER_MODEL_ID` is set to a non-Claude model (e.g., DeepSeek), this would cause a Bedrock invocation failure.

Suggested fix (matching the mcpClient.ts pattern):

```diff
+ const isClaudeModel = modelId.toLowerCase().startsWith("anthropic.");
  const payload = {
-   anthropic_version: "bedrock-2023-05-31",
    max_tokens: 4096,
    temperature: 0.5,
    system: systemPrompt,
    messages,
+   ...(isClaudeModel ? { anthropic_version: "bedrock-2023-05-31" } : {}),
  };
```

convex/interleavedReasoning.ts (1)
99-99: ⚠️ Potential issue | 🟡 Minor

Stale comment contradicts the newly added rate-limiting logic.

Line 99 says "NO GUARDRAILS - removed all validation/rate-limiting queries", but the code now enforces rate limiting on lines 141-150 (and a Bedrock gate on lines 130-139). This comment is misleading and should be updated or removed.

convex/agentExecution.test.ts (1)
890-892: ⚠️ Potential issue | 🟡 Minor

Remove stale ECS assertion that will fail in tests.

Line 892 asserts `expect(test.ecsTaskArn).toBeDefined()`, but the `updateStatus` mutation for the "RUNNING" status does not populate this field. The schema marks `ecsTaskArn` as `@deprecated`, kept only for backward compatibility. This assertion is a leftover from the ECS removal and will fail since the field is never set. Remove the expectation or replace it with a relevant assertion for the new local Docker/Bedrock execution model (e.g., checking for a container reference or related runtime identifier).

convex/codeGenerator.ts (1)
969-984: ⚠️ Potential issue | 🟠 Major

`generateMCPConfig` does not emit the new `url` and `transportType` fields.

The input schema (lines 52–59) now accepts `url` and `transportType` for HTTP/SSE/direct MCP transports, but `generateMCPConfig` only emits `command`, `args`, `env`, and `disabled`. MCP servers configured with non-stdio transports will produce an incomplete `mcp.json` missing connection details.

Proposed fix:

```diff
  for (const server of mcpServers) {
-   config.mcpServers[server.name] = {
-     command: server.command,
-     args: server.args,
-     env: server.env || {},
-     disabled: server.disabled || false,
-   };
+   const entry: Record<string, any> = {
+     disabled: server.disabled || false,
+   };
+
+   if (server.transportType) {
+     entry.transportType = server.transportType;
+   }
+
+   if (server.url) {
+     entry.url = server.url;
+   }
+
+   if (server.command) {
+     entry.command = server.command;
+   }
+
+   if (server.args) {
+     entry.args = server.args;
+   }
+
+   if (server.env && Object.keys(server.env).length > 0) {
+     entry.env = server.env;
+   }
+
+   config.mcpServers[server.name] = entry;
  }
```

convex/cloudFormationGenerator.ts (2)
417-437: ⚠️ Potential issue | 🟠 Major

`AgentCoreSecurityGroup` ingress still references `LoadBalancerSecurityGroup` — but there is no ALB resource.

The ECS resources (cluster, service, task def, ALB) were removed, yet the `AgentCoreSecurityGroup` ingress at line 427 still allows traffic from `!Ref LoadBalancerSecurityGroup`. While the `LoadBalancerSecurityGroup` resource still exists in the template (lines 439–458), there is no actual Application Load Balancer or Target Group to route traffic. This leaves a security group setup with no functional purpose for Bedrock AgentCore, which doesn't use ALB-based traffic ingress.

Consider removing both `LoadBalancerSecurityGroup` and the ALB-related ingress rule from `AgentCoreSecurityGroup`, or replacing them with appropriate access controls for the AgentCore invocation pattern.

577-607: ⚠️ Potential issue | 🟡 Minor

Remove orphaned ECR repository resource.

The `AgentCoreECRRepository` resource (lines 580–607) is no longer needed. The codebase has migrated to Bedrock AgentCore with Lambda-based deployment (no container images), as evidenced by `requiresECR: false` in awsDeploymentFlow.ts line 68 and the deployment logic that bypasses ECR entirely. The ECR resource is not referenced anywhere and will be created unnecessarily in every deployment.

convex/awsDeploymentFlow.ts (5)
369-408: ⚠️ Potential issue | 🟠 Major

Unescaped `systemPrompt` interpolated into a Python triple-quoted string.

Line 383 embeds `agent.systemPrompt` directly into a Python `"""..."""` block without escaping. If the prompt contains `"""`, it will break the generated Python code. Other generators in this codebase (e.g., `codeGenerator.ts`) use `escapePythonTripleQuote()` for this purpose.

Proposed fix — import and use the existing escape utility:

```diff
+import { escapePythonTripleQuote } from "./constants";
 // ...
-    system_prompt="""${agent.systemPrompt}""",
+    system_prompt="""${escapePythonTripleQuote(agent.systemPrompt || "")}""",
```
119-178: ⚠️ Potential issue | 🔴 Critical

`Federated` principal uses a managed policy ARN instead of an OIDC provider ARN.

Line 133 sets the trust policy's `Federated` principal to `arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole`, which is a managed policy — not an identity provider. For `sts:AssumeRoleWithWebIdentity`, this should be an OIDC identity provider ARN (e.g., `arn:aws:iam::<account>:oidc-provider/<provider>`). The resulting role cannot be assumed via web identity federation.

Additionally, the `identityProvider` parameter is accepted but never used in the template.
184-187: ⚠️ Potential issue | 🟠 Major

`encodedTemplate` is computed but not used in the generated URL.

`generateCloudFormationURL` encodes the template (line 185) but the returned URL has `templateURL=&stackName=...` — an empty `templateURL` parameter. The user would land on the CloudFormation console with no template loaded.

Possible fix — upload the template to S3 and reference it. Note: the CloudFormation console's Quick Create URL doesn't support inline template bodies. The template needs to be hosted at a public S3 URL and referenced via `templateURL`. Consider uploading the template to an S3 bucket first, or using the console's direct stack creation flow with a different approach.
140-163: ⚠️ Potential issue | 🟠 Major

Overly permissive IAM policy — `iam:PassRole` on `Resource: "*"` is a privilege escalation risk.

The inline policy grants `bedrock:*`, `lambda:*`, `iam:PassRole`, and `cloudformation:*` on all resources. `iam:PassRole` combined with `lambda:*` allows creating Lambda functions with any IAM role, enabling privilege escalation. Consider constraining:
- `iam:PassRole` to specific role ARN patterns (e.g., `arn:aws:iam::*:role/AgentBuilder*`)
- `lambda:*` to namespaced function names
- `cloudformation:*` to specific stack name patterns
320-324: ⚠️ Potential issue | 🔴 Critical

`packageForLambda` is called but not defined or imported.

Line 323 calls `packageForLambda(agentCode)`, but this function is neither defined in this file nor imported. A search of the entire codebase found no definition of this function. This will produce a `ReferenceError` at runtime when `deployToBedrockAgentCore` is invoked.

convex/awsCrossAccount.ts (1)
78-107: ⚠️ Potential issue | 🔴 Critical

The `deployToUserAccount` function obtains credentials from the user's AWS account but never passes them to the AgentCore deployment.

Lines 78–82 obtain temporary credentials via `assumeUserRole`. The comment at line 88 explicitly states "Deploy to user's AgentCore using their credentials," yet the call to `api.agentcoreDeployment.deployToAgentCore` (lines 102–107) does not include a credentials parameter. The `deployToAgentCore` action signature accepts only `agentId`, `code`, `dependencies`, and `environmentVariables` — no credentials field.

This means the cross-account credentials obtained through STS AssumeRole are unused, and the deployment likely executes with the platform's own AWS credentials rather than the user's temporary credentials. This defeats the intent of the Tier 2 (personal/cross-account) deployment flow and may result in resources being provisioned in the wrong AWS account or with incorrect permissions.

Fix: Either (a) add a `credentials` parameter to the `deployToAgentCore` action and thread the credentials through the MCP sandbox creation call, or (b) if AgentCore sandboxes are meant to be platform-managed regardless of tier, remove the credential-obtention code to clarify the architecture.

convex/integration.test.ts (4)
836-878: ⚠️ Potential issue | 🟠 Major

`beforeEach` inserts data into a `convexTest` instance that is discarded by each test.

Each `test` block creates its own `t = convexTest(schema, modules)` with a fresh, isolated database. The `testUserId`, `testAgentId`, and `testDeploymentId` populated in `beforeEach` live in a separate `t` instance and will not exist in any test's database.

For example, lines 885–889 do `ctx.db.patch(testDeploymentId, ...)` on a document ID that doesn't exist in that test's `t`, which will throw at runtime.

Either share the same `t` instance across the `describe` block, or move the setup into each individual test (as done correctly in the "Deployment Integration Tests" section).
1112-1147: ⚠️ Potential issue | 🟠 Major

Same broken `beforeEach` pattern — `testAgentId` won't be found in test databases.

Lines 1150–1163 try to find `testAgentId` in the result of `api.agents.listExposableAgents`, but `testAgentId` was inserted via a different `convexTest` instance in `beforeEach`. The `find` will always return `undefined`, making the assertion on line 1161 always fail.
880-906: ⚠️ Potential issue | 🟡 Minor

Stale test descriptions still reference "Tier 1" and "Tier 2".

Line 880: `"should generate architecture diagram for Tier 1 deployment"` — but the tier value used is `"freemium"`.
Line 906: `"should generate architecture diagram for Tier 2 deployment"` — but the tier value used is `"personal"`.

These should be updated to match the new naming convention used everywhere else in the file.
1350-1351: ⚠️ Potential issue | 🟡 Minor

Outer describe block still references "Tier 1" in the nested describe.

Line 1351: `"Tier 1 (Freemium) Deployment Workflow"` — consider updating to just `"Freemium Deployment Workflow"` for consistency with the rest of the file.

convex/awsDeployment.ts (2)
655-659: ⚠️ Potential issue | 🟠 Major

`config.enableMonitoring || true` always evaluates to `true`.

The `||` operator short-circuits to `true` whenever `config.enableMonitoring` is falsy (including `false`), so monitoring can never be disabled. Use `??` (nullish coalescing) to default only when the value is `undefined`/`null`.

Proposed fix:

```diff
  monitoring: {
-   enabled: config.enableMonitoring || true,
+   enabled: config.enableMonitoring ?? true,
    logLevel: "INFO"
  }
```
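The behavioral difference between `||` and `??` is easy to demonstrate with a standalone snippet (not the project's code): `||` falls through on any falsy value, while `??` falls through only on `null`/`undefined`:

```typescript
// `enableMonitoring || true` can never yield false; `?? true` respects an
// explicit false while still defaulting a missing value.
function withOr(enableMonitoring?: boolean): boolean {
  return enableMonitoring || true; // bug: treats explicit false as missing
}

function withNullish(enableMonitoring?: boolean): boolean {
  return enableMonitoring ?? true; // fix: only defaults undefined/null
}
```

Calling both with `false` shows the bug: `withOr(false)` yields `true`, while `withNullish(false)` preserves the caller's `false`.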
22-22: ⚠️ Potential issue | 🟡 Minor

Stale comment still references "Tier 1/2/3".

The comment on line 22 says "Routes to correct tier (Tier 1/2/3)" but the tiers have been renamed to freemium/personal/enterprise.
convex/cdkGenerator.ts (1)
285-331: ⚠️ Potential issue | 🟠 Major

Generated CDK app includes `vpcConfig` and `scaling` props that `AgentCoreStack` no longer uses.

`AgentCoreStackProps` (lines 197–207) no longer includes `vpcConfig` or `scaling`, but the generated bin/app code (lines 309–322) still passes these properties. This will cause a TypeScript compilation error in the generated project if strict type checking is enabled.

Proposed fix — remove the orphaned props from the generated app:

```diff
  const stackProps = {
    agentName: '${agentName}',
    environment: '${environment}',
    modelId: '${model}',
    tools: ${JSON.stringify(tools, null, 2)},
-   vpcConfig: {
-     createVpc: ${vpcConfig.createVpc},
-     vpcCidr: '${vpcConfig.vpcCidr || '10.0.0.0/16'}',
-   },
    monitoring: {
      enableXRay: ${monitoring.enableXRay},
      enableCloudWatch: ${monitoring.enableCloudWatch},
      logRetentionDays: ${monitoring.logRetentionDays || 30},
    },
-   scaling: {
-     minCapacity: ${scaling.minCapacity},
-     maxCapacity: ${scaling.maxCapacity},
-     targetCpuUtilization: ${scaling.targetCpuUtilization},
-   },
    env: {
      account: process.env.CDK_DEFAULT_ACCOUNT,
      region: '${region}',
    },
  };
```

cloudformation/user-onboarding-template.yaml (1)
2-2: ⚠️ Potential issue | 🟡 Minor

Stale description still references "Tier 2".

```diff
-Description: 'Agent Builder Platform - User Account Onboarding (Tier 2)'
+Description: 'Agent Builder Platform - User Account Onboarding (Personal Tier)'
```

convex/deploymentRouter.ts (1)
270-278: ⚠️ Potential issue | 🟠 Major

`getDeploymentHistory` with `agentId` drops the `userId` filter — potential data leak.

When `args.agentId` is provided, the query switches to the `by_agent` index without also filtering by `userId`. This could return deployments belonging to other users who deployed the same agent.

Proposed fix:

```diff
  if (args.agentId) {
-   query = ctx.db
+   // Filter by agent but still scope to current user
+   const agentDeployments = await ctx.db
      .query("deployments")
-     .withIndex("by_agent", (q) => q.eq("agentId", args.agentId!));
+     .withIndex("by_agent", (q) => q.eq("agentId", args.agentId!))
+     .filter((q) => q.eq(q.field("userId"), userId))
+     .order("desc")
+     .take(args.limit || 20);
+   return agentDeployments;
  }
```
🤖 Fix all issues with AI agents
In `@convex/agentBuilderWorkflow.ts`:
- Around line 476-485: executeCompleteWorkflow performs a global rate-check but
then calls executeWorkflowStage 9 times, causing double counting; modify
executeWorkflowStage to accept an options flag (e.g., internal:boolean) that,
when true, skips its own checkRateLimit call, and then change
executeCompleteWorkflow to pass internal=true for each stage invocation; keep
the existing buildTierRateLimitConfig/getTierConfig/checkRateLimit usage in
executeCompleteWorkflow (so pre-flight remains) and only short-circuit the
per-stage rate-limit logic inside executeWorkflowStage by checking the new
internal flag before calling checkRateLimit.
In `@convex/agents.ts`:
- Around line 82-90: createAgent and updateAgent currently persist mcpServers
without enforcing transport-specific requirements, allowing entries with
transportType "sse" or "http" to lack a url; before inserting/patching in the
createAgent and updateAgent mutation handlers, iterate mcpServers and validate
transport-specific fields (e.g., require url when transportType === "sse" ||
"http", and enforce any other transport-specific fields you already check for in
mcpConfig.ts); if validation fails, throw/return a clear validation error and
avoid persisting; prefer reusing or calling the existing validation logic from
mcpConfig.ts (the same checks around url) to keep behavior consistent.
In `@convex/awsCrossAccount.ts`:
- Around line 102-107: Ensure we check agent.generatedCode before calling
ctx.runAction(api.agentcoreDeployment.deployToAgentCore): if agent.generatedCode
is null/undefined, abort with a clear error or trigger a build step rather than
passing undefined as the code parameter; update the caller around the
ctx.runAction(...) invocation to validate agent.generatedCode and either
throw/log a descriptive message (including agentId) or call the routine that
generates the code first so deployToAgentCore always receives a valid string for
the code argument.
In `@convex/awsDeployment.ts`:
- Around line 170-171: The deployment status strings are inconsistent between
executeDeployment and executeDeploymentInternal (and the other occurrences
around the same area) — some use "Deploying to Bedrock AgentCore..." and others
use "Deploying to AWS AgentCore...". Pick the canonical label (e.g., "Bedrock
AgentCore") and update all status/message/currentStep string literals in
executeDeployment, executeDeploymentInternal, and the nearby status emissions to
use that exact phrase so logs and UI messages are consistent.
- Line 966: Change the type of the userId parameter on deployPersonal from
string to Id<"users"> so it matches deployFreemium and the value returned by
getAuthUserId(); update any local references/annotations inside deployPersonal
that assume a string to use the Id<"users"> type and ensure calls to
internal.awsDeployment.createDeploymentInternal continue to pass userId as
v.id("users") without casting.
- Line 618: The code currently uses String.raw in
Array.from(packages).join(String.raw`\n`) which emits the two-character sequence
"\n" instead of real newlines; update the join call to use a real newline
separator (e.g., '\n') so requirements.txt entries are on separate lines—locate
the join on Array.from(packages) in convex/awsDeployment.ts and replace the
String.raw usage with a plain newline string.
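The `String.raw` pitfall flagged for `awsDeployment.ts` can be reproduced in a standalone snippet: `` String.raw`\n` `` is the two characters backslash and `n`, not a line break:

```typescript
// String.raw suppresses escape processing, so joining with String.raw`\n`
// produces a single line containing a literal backslash-n between entries.
const packages = ["boto3", "requests"];

const broken = packages.join(String.raw`\n`); // one line, literal "\n" in the middle
const fixed = packages.join("\n");            // two lines, real newline separator
```

In a generated `requirements.txt`, the `broken` variant would put both packages on one line, which pip would reject; the `fixed` variant produces one package per line.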
In `@convex/cdkGenerator.ts`:
- Around line 117-118: The generated package.json contains duplicate keywords
("agentcore", "bedrock"); update the keyword-generation logic in
convex/cdkGenerator.ts (where the keywords array is assembled—e.g., the function
or variable that builds the keywords list in the package JSON generation
routine) to deduplicate entries before writing JSON (use a Set or array filter
to remove duplicates) so the final keywords array contains unique values only.
In `@convex/cloudFormationGenerator.ts`:
- Around line 640-643: The CloudFormation output AgentCoreEndpoint is using a
non-existent Bedrock URL; update the AgentCoreEndpoint value to use the real
Bedrock runtime base and InvokeAgent path pattern
(https://bedrock-agent-runtime.<region>.amazonaws.com with
/agents/{agentId}/agentAliases/{agentAliasId}/sessions/{sessionId}/text) and
replace the current substitutions (AgentName/Environment) with the actual
runtime identifiers (AgentId, AgentAliasId, SessionId) or their CloudFormation
parameters/exports so callers get the correct runtime API URL; modify the
AgentCoreEndpoint resource in convex/cloudFormationGenerator.ts accordingly.
In `@convex/lib/cloudFormationGenerator.ts`:
- Around line 89-121: The inline IAM policy in addIAMRoles
(AgentCoreExecutionRole) is overly permissive because it sets Resource: '*' for
S3 and CloudWatch Logs; update the policy to split statements so Bedrock actions
(bedrock:InvokeModel, bedrock:InvokeModelWithResponseStream) can remain
Resource: '*' if needed, but scope s3:GetObject and s3:PutObject to the template
bucket ARN (e.g., reference the S3 bucket resource via !GetAtt or !Ref like the
bucket logical name present in the template) and scope
logs:CreateLogGroup/CreateLogStream/PutLogEvents to the specific log group(s)
created by this template (use !Ref or a proper log-group ARN !Sub using the log
group logical name and account/region), replacing the global Resource: '*' entry
for those actions and keeping only the Bedrock statement as global if required.
In `@convex/mcpClient.ts`:
- Around line 258-276: The Bedrock-routing check is too fragile because
isBedrockServer currently uses args.toolName === "execute_agent" in addition to
server.name; update the logic so detection relies only on server.name ===
"bedrock-agentcore-mcp-server" (remove the args.toolName check) and keep the
existing gating flow that imports requireBedrockAccessForUser, fetches userDoc
via ctx.runQuery(internal.users.getInternal, { id: billingUserId }), calls
requireBedrockAccessForUser(userDoc, undefined), and returns the same error
shapes when gateResult.allowed is false or when billingUserId is missing; ensure
you update the isBedrockServer variable and both branches that reference it to
avoid enforcing the gate for arbitrary tools named "execute_agent".
- Around line 550-564: The fetch in the document-fetcher-mcp-server branch (when
toolName === "fetch_url" using parameters.url and controller.signal) accepts
arbitrary user URLs and must be validated before calling fetch: enforce allowed
schemes (only https), parse parameters.url with the URL constructor, resolve the
hostname to IP(s) and reject any resolved address in
private/loopback/link-local/multicast ranges (RFC1918, 127.0.0.0/8, ::1,
fe80::/10, 169.254.0.0/16, etc.), and optionally require an explicit domain
allowlist; if validation fails throw an error (e.g., "Blocked URL"), otherwise
proceed to fetch and keep the existing HTML-to-text logic and size cap. Use a
helper (e.g., validateFetchUrl or isAllowedHost) and perform DNS resolution/IP
checks before the fetch call.
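The private/loopback range check asked for above can be sketched as a pure helper. This is an IPv4-only illustration with a hypothetical name (`isBlockedIPv4`); as the prompt notes, a real guard would also need DNS resolution before the fetch and IPv6 handling:

```typescript
// Returns true when an IPv4 address falls in a range that should be blocked
// for server-side fetches (RFC1918, loopback, link-local, CGNAT, multicast).
// Malformed input fails closed.
function isBlockedIPv4(ip: string): boolean {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((p) => !Number.isInteger(p) || p < 0 || p > 255)) {
    return true; // not a well-formed IPv4 address: fail closed
  }
  const [a, b] = parts;
  return (
    a === 10 ||                           // 10.0.0.0/8 (RFC1918)
    (a === 172 && b >= 16 && b <= 31) ||  // 172.16.0.0/12 (RFC1918)
    (a === 192 && b === 168) ||           // 192.168.0.0/16 (RFC1918)
    a === 127 ||                          // 127.0.0.0/8 loopback
    (a === 169 && b === 254) ||           // 169.254.0.0/16 link-local
    (a === 100 && b >= 64 && b <= 127) || // 100.64.0.0/10 CGNAT
    a >= 224                              // multicast and reserved
  );
}
```

The SSRF guard would resolve the hostname first and call this on every resolved address, rejecting the fetch if any address is blocked, since a hostname can resolve to multiple records.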
In `@convex/mcpConfig.ts`:
- Around line 317-327: In updateMCPServer, mirror the transport-specific
validation used in addMCPServer by first loading the existing server record (the
current MCP server object), merging its fields with the incoming patch data,
then validate the resolved transportType: require url when transportType is
"sse" or "http" and require command when transportType is "stdio"; if validation
fails, throw an Error (or return a validation error) before calling ctx.db.patch
so you never persist an incomplete state via updateMCPServer. Ensure you
reference the same symbols: updateMCPServer handler, ctx.db.patch, and follow
the same validation logic used in addMCPServer.
- Around line 282-288: The env validator in addMCPServer and updateMCPServer
currently uses v.optional(v.object({})) which only allows an empty object;
change the validator for the env field in both addMCPServer and updateMCPServer
to v.optional(v.record(v.string(), v.string())) so it matches the schema used
elsewhere and accepts non-empty environment maps like { "PATH": "/usr/bin" }.
In `@src/components/AgentBuilder.tsx`:
- Line 252: The useEffect in AgentBuilder.tsx that depends on [automationData,
executeWorkflow, lastAutomationStamp, setWorkflowResult, workflowResult] should
not include workflowResult because the effect calls setWorkflowResult(result)
and that causes an unnecessary re-run; update the dependency array to remove
workflowResult (leave automationData, executeWorkflow, lastAutomationStamp,
setWorkflowResult) so the effect only runs when inputs change, or if the linter
complains add a single-line eslint-disable comment with an explanation
referencing this intentional omission; locate the effect by the useEffect block
that calls setWorkflowResult and checks lastAutomationStamp to apply the change.
In `@src/components/ArchitecturePreview.tsx`:
- Around line 236-249: determineDeploymentTier currently only returns "freemium"
or "personal", making the "enterprise" UI paths in buildResourceEstimates,
calculateEstimatedCost, the tier badge, cost description, and security-feature
rendering unreachable; update determineDeploymentTier to actually return
"enterprise" when appropriate (for example detect deploymentType ===
"enterprise" or accept/inspect a subscription/account-flag instead of ignoring
the _model param), or if enterprise support is not yet implemented add a clear
TODO and comment in determineDeploymentTier explaining intended future behavior
and why it currently falls back to "freemium" so reviewers know the enterprise
code paths are intentionally inactive.
In `@src/components/AWSRoleSetup.tsx`:
- Around line 87-93: The Bedrock IAM policy in AWSRoleSetup.tsx currently lists
agent actions ("bedrock:CreateAgent", "bedrock:CreateAgentActionGroup",
"bedrock:InvokeAgent", "bedrock:InvokeModel",
"bedrock:InvokeModelWithResponseStream", "bedrock:GetAgent",
"bedrock:ListAgents") but is missing management actions required for full agent
lifecycle; update the policy to include the additional Bedrock permissions such
as "bedrock:UpdateAgent", "bedrock:DeleteAgent", "bedrock:PrepareAgent",
"bedrock:UpdateAgentActionGroup", "bedrock:DeleteAgentActionGroup", and (if
using aliases) "bedrock:CreateAgentAlias", or alternatively document that users
must extend the policy for update/delete/prepare/alias workflows so agent
updates and deletions won’t fail due to missing permissions.
🧹 Nitpick comments (13)
convex/lib/fileGenerators.ts (1)
427-431: Dead code: `hasAlias` is always `false`.
`pythonFileName` is hardcoded to `"agent.py"` on line 427, so the `hasAlias` check on line 428 is always `false` and the ternary on lines 429-431 always takes the else branch. If this is intentional scaffolding for future use, a brief comment would help; otherwise it can be simplified.
convex/agents.ts (1)
88-89: Consider using a union literal for `transportType` instead of an open `v.string()`.
The comment documents `"stdio" | "sse" | "http" | "direct"` but the schema accepts any string. Using `v.union(v.literal("stdio"), v.literal("sse"), v.literal("http"), v.literal("direct"))` would enforce valid values at the Convex layer.
Proposed change
```diff
- transportType: v.optional(v.string()), // "stdio" | "sse" | "http" | "direct"
+ transportType: v.optional(v.union(
+   v.literal("stdio"),
+   v.literal("sse"),
+   v.literal("http"),
+   v.literal("direct"),
+ )),
```

Apply in both the `create` and `update` mutations.
convex/strandsAgentExecution.ts (1)
209-209: Double type assertion for `thinkingLevel` is fragile — consider centralizing these schema-extension casts.
The `as unknown as { thinkingLevel?: string }` pattern here and the similar cast on line 184 suggest the `AgentDoc` type doesn't reflect all persisted fields. If more fields are accessed this way over time, consider extending the agent schema or creating a helper type (e.g., `AgentDocExtended`) to avoid scattered unsafe casts.
convex/schema.ts (1)
210-215: Consider using a union validator for `transportType` to enforce valid values at the schema level.
Both `agents.mcpServers` (here) and the top-level `mcpServers` table (Line 472) use `v.optional(v.string())` for `transportType`. While runtime validation exists in `mcpConfig.ts`, a schema-level union (`v.optional(v.union(v.literal("stdio"), v.literal("sse"), v.literal("http"), v.literal("direct")))`) would catch invalid values at write time across all mutation paths, not just `addMCPServer`.
Suggested change for agents.mcpServers
```diff
- transportType: v.optional( v.string() ), // "stdio" | "sse" | "http" | "direct" — defaults to "stdio"
+ transportType: v.optional( v.union(
+   v.literal( "stdio" ),
+   v.literal( "sse" ),
+   v.literal( "http" ),
+   v.literal( "direct" ),
+ ) ), // defaults to "stdio" at runtime
```

The same change should be applied at Line 472 for the top-level `mcpServers` table.
src/components/AgentBuilder.tsx (1)
240-241: Inconsistent error typing: `error: any` here vs. `error: unknown` in the other refactored catch blocks.
Lines 328 and 394 were tightened to `catch (error: unknown)` with safe extraction. This catch block still uses `error: any`. Consider aligning for consistency.
Suggested fix

```diff
- } catch ( error: any ) {
-   const message = error?.message || "Automation workflow failed";
+ } catch ( error: unknown ) {
+   const message = error instanceof Error ? error.message : "Automation workflow failed";
```

convex/stripeMutations.ts (1)
250-285: Full table scan is fine for a monthly cron, but consider indexing `tier` if the user base grows.
`ctx.db.query("users").filter(…).collect()` performs a table scan. For a monthly job this is acceptable, but if the user table grows large, consider adding a `by_tier` index to scope the query. Sequential `db.patch` calls inside the loop are also fine for now but could hit Convex mutation time limits with a very large number of freemium users.
convex/promptChainExecutor.ts (1)
466-480: `testPrompt` has a bedrock gate but no rate-limit check.
Unlike `executePromptChain` and `executeParallelPrompts`, `testPrompt` (line 453) only enforces the bedrock gate without a subsequent rate-limit check. If this is intentional (single-shot test invocations are considered low-risk), consider adding a brief comment to that effect. Otherwise, it's an inconsistency with the other two entry points in this file.
convex/cloudFormationGenerator.ts (1)
163-188: Dead parameters: `InstanceType`, `MinCapacity`, `MaxCapacity`, `TargetCpuUtilization` are unreferenced.
After removing ECS/Fargate and autoscaling resources, these parameters aren't consumed by any resource in the template. The `InstanceType` description was even updated to "reserved for future use" (line 167), but dead parameters add confusion and noise to the CloudFormation console. Consider removing them.
convex/deploymentPackageGenerator.ts (1)
69-79: Local deployment still generates a CloudWatch Logs resource.
The comment says "Docker/Container deployment (local only)" but the resource list includes a `cloudwatch-logs` resource named `/agentcore/${agent.name}`. Local Docker deployments won't actually provision CloudWatch. If this is intentional for architecture diagram purposes only, a brief comment would clarify. Otherwise it's misleading in the resource list.
convex/awsCrossAccount.ts (1)
113-121: Semantic mismatch: `taskArn` field stores a `runtimeId`.
The `deployments.create` mutation field is still named `taskArn` (line 119) but now stores an AgentCore `runtimeId` or the literal string `"agentcore"`. This could confuse future maintainers and any code that parses `taskArn` as an actual ECS task ARN.
Consider renaming the field in the deployments schema to something transport-agnostic like `resourceId` or `runtimeId` when feasible.
convex/cdkGenerator.ts (1)
250-257: Generated CDK stack uses the overly broad `AmazonBedrockFullAccess` managed policy.
This grants full Bedrock permissions including model management, agent deletion, etc. Consider generating a scoped inline policy limited to the actions the agent runtime actually needs (e.g., `bedrock:InvokeModel`, `bedrock:InvokeModelWithResponseStream`).
cloudformation/user-onboarding-template.yaml (1)
40-54: S3 bucket uses S3-managed encryption (AES256) instead of a Customer Managed Key.
Trivy flags this (AWS-0132). For a user-onboarding template, AES256 is a practical default — CMK adds cost and operational overhead. However, if agent artifacts may contain sensitive data (code, credentials, environment variables), consider documenting the trade-off or offering a KMS key option as a parameter.
convex/deploymentRouter.ts (1)
96-103: Model validation heuristic (`includes(".")`) may be too brittle.
The check `!agent.model?.includes(".")` gates non-Bedrock models, but this relies on a naming convention (e.g., `anthropic.claude-...`). If AWS changes their model ID format or new models don't contain a dot, this will incorrectly block them. Consider using a more explicit allowlist or prefix check (e.g., checking against known Bedrock provider prefixes).
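A prefix-based allowlist in that spirit could look like the following sketch. The prefix list is illustrative, not exhaustive; the `us.`/`eu.`/`apac.` entries account for cross-region inference-profile IDs, and the list would need maintenance as AWS adds providers.

```typescript
// Known Bedrock model-ID prefixes (illustrative; extend as AWS adds providers).
const BEDROCK_PREFIXES = [
  "anthropic.", "amazon.", "meta.", "cohere.", "ai21.", "mistral.",
  "us.", "eu.", "apac.", // cross-region inference profile prefixes
];

function isBedrockModelId(model: string | undefined): boolean {
  if (!model) return false;
  return BEDROCK_PREFIXES.some((prefix) => model.startsWith(prefix));
}
```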
```ts
// Rate limit: prevent burst abuse per user
{
  const { checkRateLimit, buildTierRateLimitConfig } = await import( "./rateLimiter" );
  const { getTierConfig } = await import( "./lib/tierConfig" );
  const rlCfg = buildTierRateLimitConfig( getTierConfig( gateResult.tier ).maxConcurrentTests, "agentExecution" );
  const rlResult = await checkRateLimit( ctx, String( gateResult.userId ), "agentExecution", rlCfg );
  if ( !rlResult.allowed ) {
    throw new Error( rlResult.reason ?? "Rate limit exceeded. Please try again later." );
  }
}
```
Rate-limit double-counting will break multi-stage workflows for lower tiers.
executeCompleteWorkflow checks the rate limit once here (lines 476–485), then calls executeWorkflowStage for each of the 9 stages (line 507), and each of those calls runs its own rate-limit check (lines 281–288). That's 10 rate-limit entries recorded for a single logical workflow execution. With freemium's maxConcurrentTests = 1 yielding maxPerMinute = 2, the workflow will fail at stage 2.
Consider either:
- Skipping the per-stage rate-limit check when called from `executeCompleteWorkflow` (e.g., pass an `internal` flag or use an internal action variant).
- Removing the rate-limit check from `executeCompleteWorkflow` itself and relying on the per-stage checks alone (accepting the 9-entry cost).
- Performing a single pre-flight check here and bypassing the per-stage checks.
🤖 Prompt for AI Agents
In `@convex/agentBuilderWorkflow.ts` around lines 476 - 485,
executeCompleteWorkflow performs a global rate-check but then calls
executeWorkflowStage 9 times, causing double counting; modify
executeWorkflowStage to accept an options flag (e.g., internal:boolean) that,
when true, skips its own checkRateLimit call, and then change
executeCompleteWorkflow to pass internal=true for each stage invocation; keep
the existing buildTierRateLimitConfig/getTierConfig/checkRateLimit usage in
executeCompleteWorkflow (so pre-flight remains) and only short-circuit the
per-stage rate-limit logic inside executeWorkflowStage by checking the new
internal flag before calling checkRateLimit.
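The first option (an `internal` flag that lets the orchestrator bypass per-stage checks) can be sketched with a simplified in-memory limiter. All names here are stand-ins for the real Convex functions, not the project's actual implementations:

```typescript
// Minimal in-memory stand-in for checkRateLimit, keyed per user.
const counts = new Map<string, number>();
function checkRateLimit(userId: string, maxPerMinute: number): { allowed: boolean } {
  const used = counts.get(userId) ?? 0;
  if (used >= maxPerMinute) return { allowed: false };
  counts.set(userId, used + 1);
  return { allowed: true };
}

// Per-stage entry point: skips its own check when invoked internally.
function executeWorkflowStage(userId: string, opts: { internal?: boolean } = {}): boolean {
  if (!opts.internal && !checkRateLimit(userId, 2).allowed) {
    throw new Error("Rate limit exceeded");
  }
  return true; // ...stage work...
}

// Orchestrator: one pre-flight check, then stages bypass their own checks.
function executeCompleteWorkflow(userId: string): number {
  if (!checkRateLimit(userId, 2).allowed) throw new Error("Rate limit exceeded");
  let completed = 0;
  for (let stage = 0; stage < 9; stage++) {
    executeWorkflowStage(userId, { internal: true });
    completed++;
  }
  return completed;
}
```

With this shape, one logical workflow records exactly one rate-limit entry, so a freemium cap of 2 per minute no longer fails at stage 2.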
```diff
 mcpServers: v.optional(v.array(v.object({
   name: v.string(),
-  command: v.string(),
-  args: v.array(v.string()),
+  command: v.optional(v.string()),
+  args: v.optional(v.array(v.string())),
   env: v.optional(v.record(v.string(), v.string())), // MCP server environment variables
   disabled: v.optional(v.boolean()),
+  url: v.optional(v.string()),
+  transportType: v.optional(v.string()), // "stdio" | "sse" | "http" | "direct"
 }))),
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Search for transportType references and MCP validation logic
echo "=== Searching for transportType usage ==="
rg -n "transportType" --type=ts --type=js -C 3
echo ""
echo "=== Searching for MCP validation functions ==="
rg -n "mcp|MCP" --type=ts --type=js | grep -i "validat\|enforce\|check" | head -20
echo ""
echo "=== Checking for mcpConfig or MCP handler files ==="
fd -t f -e ts -e js | grep -i "mcp" | head -20
```

Repository: MikePfunk28/agent_builder_application
Length of output: 11229
🏁 Script executed:

```shell
#!/bin/bash
# Verify if agents.ts mutation handlers (createAgent, updateAgent) include validation
echo "=== Check agents.ts mutation handlers ==="
sed -n '92,115p' convex/agents.ts
echo ""
echo "=== Check agents.ts updateAgent handler ==="
sed -n '140,165p' convex/agents.ts
```

Repository: MikePfunk28/agent_builder_application
Length of output: 1423
Add transport-specific validation to agents.ts mutation handlers.
The createAgent and updateAgent mutation handlers (lines 93–104 and 148–157) insert/patch MCP server configurations directly without validating transport-specific required fields. This allows an agent to be created with mcpServers entries where transportType: "sse" or "http" lack a required url. While validation exists in mcpConfig.ts for direct MCP server mutations (lines 318–321), the agent mutations bypass this check. Add the same transport-specific validation to both handlers before persisting.
🤖 Prompt for AI Agents
In `@convex/agents.ts` around lines 82 - 90, createAgent and updateAgent currently
persist mcpServers without enforcing transport-specific requirements, allowing
entries with transportType "sse" or "http" to lack a url; before
inserting/patching in the createAgent and updateAgent mutation handlers, iterate
mcpServers and validate transport-specific fields (e.g., require url when
transportType === "sse" || transportType === "http", and enforce any other transport-specific
fields you already check for in mcpConfig.ts); if validation fails, throw/return
a clear validation error and avoid persisting; prefer reusing or calling the
existing validation logic from mcpConfig.ts (the same checks around url) to keep
behavior consistent.
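A shared validation helper in that spirit might look like this sketch. `validateMcpServers` is a hypothetical name; the rules mirror the transport checks quoted from `addMCPServer`:

```typescript
type McpServerInput = {
  name: string;
  command?: string;
  args?: string[];
  url?: string;
  transportType?: string; // "stdio" | "sse" | "http" | "direct"
};

// Mirrors the addMCPServer rules so agent mutations can reuse them:
// sse/http need a url, stdio needs a command; "direct" needs neither.
function validateMcpServers(servers: McpServerInput[]): void {
  for (const server of servers) {
    const transport = server.transportType ?? "stdio";
    if ((transport === "sse" || transport === "http") && !server.url) {
      throw new Error(`MCP server "${server.name}": transport "${transport}" requires a url field`);
    }
    if (transport === "stdio" && !server.command) {
      throw new Error(`MCP server "${server.name}": transport "stdio" requires a command field`);
    }
  }
}
```

Both `createAgent` and `updateAgent` could call this on `args.mcpServers` before inserting or patching.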
```ts
const deployment: any = await ctx.runAction(api.agentcoreDeployment.deployToAgentCore, {
  agentId: args.agentId,
  code: agent.generatedCode,
  dependencies,
  environmentVariables,
});
```
No null check on agent.generatedCode before deployment.
If the agent hasn't been built yet or generatedCode is undefined, this will pass undefined as the code parameter to deployToAgentCore, likely causing a downstream error with an unclear message.
Proposed fix
```diff
+ if (!agent.generatedCode) {
+   throw new Error("Agent code has not been generated. Please build the agent first.");
+ }
+
  const deployment: any = await ctx.runAction(api.agentcoreDeployment.deployToAgentCore, {
    agentId: args.agentId,
    code: agent.generatedCode,
```

🤖 Prompt for AI Agents
In `@convex/awsCrossAccount.ts` around lines 102 - 107, Ensure we check
agent.generatedCode before calling
ctx.runAction(api.agentcoreDeployment.deployToAgentCore): if agent.generatedCode
is null/undefined, abort with a clear error or trigger a build step rather than
passing undefined as the code parameter; update the caller around the
ctx.runAction(...) invocation to validate agent.generatedCode and either
throw/log a descriptive message (including agentId) or call the routine that
generates the code first so deployToAgentCore always receives a valid string for
the code argument.
```ts
message: "Deploying to Bedrock AgentCore...",
currentStep: "Deploying to AgentCore",
```
Deployment status messages are inconsistent between functions.
- `executeDeployment` (Line 170): "Deploying to Bedrock AgentCore..."
- `executeDeploymentInternal` (Line 880): "Deploying to AWS AgentCore..."
These refer to the same service but use different naming. Consider standardizing on one term (e.g., "Bedrock AgentCore") throughout.
Also applies to: 863-865, 880-881
🤖 Prompt for AI Agents
In `@convex/awsDeployment.ts` around lines 170 - 171, The deployment status
strings are inconsistent between executeDeployment and executeDeploymentInternal
(and the other occurrences around the same area) — some use "Deploying to
Bedrock AgentCore..." and others use "Deploying to AWS AgentCore...". Pick the
canonical label (e.g., "Bedrock AgentCore") and update all
status/message/currentStep string literals in executeDeployment,
executeDeploymentInternal, and the nearby status emissions to use that exact
phrase so logs and UI messages are consistent.
```diff
@@ -619,29 +618,7 @@ function generateAgentCoreRequirements( tools: any[] ): string {
   return Array.from( packages ).join( String.raw`\n` );
```
Bug: `String.raw` produces literal `\n` characters, not newlines.
`` String.raw`\n` `` preserves the raw escape sequence, so the output is the two-character sequence `\n` instead of an actual newline. The resulting `requirements.txt` content will be a single line like `strands-agents>=1.0.0\nbedrock-agentcore>=0.1.6\n...` instead of one package per line.
Proposed fix
```diff
- return Array.from( packages ).join( String.raw`\n` );
+ return Array.from( packages ).join( "\n" );
```
+ return Array.from( packages ).join( "\n" );📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| return Array.from( packages ).join( String.raw`\n` ); | |
| return Array.from( packages ).join( "\n" ); |
🤖 Prompt for AI Agents
In `@convex/awsDeployment.ts` at line 618, The code currently uses String.raw in
Array.from(packages).join(String.raw`\n`) which emits the two-character sequence
"\n" instead of real newlines; update the join call to use a real newline
separator (e.g., '\n') so requirements.txt entries are on separate lines—locate
the join on Array.from(packages) in convex/awsDeployment.ts and replace the
String.raw usage with a plain newline string.
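The difference is easy to demonstrate:

```typescript
// String.raw keeps escape sequences as literal characters.
const rawSep = String.raw`\n`; // two characters: backslash + "n"
const realSep = "\n";          // one character: an actual newline

const packages = ["strands-agents>=1.0.0", "bedrock-agentcore>=0.1.6"];
const broken = packages.join(rawSep); // single line containing a literal "\n"
const fixed = packages.join(realSep); // one package per line
```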
```ts
command: v.optional( v.string() ), // Required for stdio transport
args: v.optional( v.array( v.string() ) ), // Required for stdio transport
env: v.optional( v.object( {} ) ),
disabled: v.optional( v.boolean() ),
timeout: v.optional( v.number() ),
url: v.optional( v.string() ), // Required for sse/http transport
transportType: v.optional( v.string() ), // "stdio" | "sse" | "http" — defaults to "stdio"
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Verify that env uses v.object({}) in mutations and check schema
echo "=== Checking env field in mcpConfig.ts ==="
rg -n 'env:.*v\.object\(\s*\{\s*\}\s*\)' convex/mcpConfig.ts
echo ""
echo "=== Checking env field context in mcpConfig.ts (addMCPServer) ==="
rg -B2 -A2 'env:' convex/mcpConfig.ts | head -40
echo ""
echo "=== Checking schema.ts for env schema ==="
rg -n 'env:' convex/schema.ts | head -20
```

Repository: MikePfunk28/agent_builder_application
Length of output: 1335
Fix env validator mismatch in addMCPServer and updateMCPServer — currently rejects non-empty environment objects.
The env field at lines 284 and 359 is declared as v.optional(v.object({})), which only accepts an empty object. The schema at lines 212 and 468 correctly defines it as v.optional(v.record(v.string(), v.string())). This mismatch prevents users from passing environment variables like { "PATH": "/usr/bin" }, breaking core functionality. Replace both occurrences with the schema-matching validator.
Fix

```diff
- env: v.optional( v.object( {} ) ),
+ env: v.optional( v.record( v.string(), v.string() ) ),
```

Apply at line 284 (addMCPServer) and line 359 (updateMCPServer).
🤖 Prompt for AI Agents
In `@convex/mcpConfig.ts` around lines 282 - 288, The env validator in
addMCPServer and updateMCPServer currently uses v.optional(v.object({})) which
only allows an empty object; change the validator for the env field in both
addMCPServer and updateMCPServer to v.optional(v.record(v.string(), v.string()))
so it matches the schema used elsewhere and accepts non-empty environment maps
like { "PATH": "/usr/bin" }.
```ts
// Validate transport-specific required fields
const transport = args.transportType || "stdio";
if ( transport === "sse" || transport === "http" ) {
  if ( !args.url ) {
    throw new Error( `Transport "${transport}" requires a url field` );
  }
} else if ( transport === "stdio" ) {
  if ( !args.command ) {
    throw new Error( "Transport \"stdio\" requires a command field" );
  }
}
```
Transport-specific validation is missing from updateMCPServer.
addMCPServer correctly validates that sse/http transports require a url and stdio requires a command (Lines 317–327). However, updateMCPServer (Lines 352–415) applies updates via ctx.db.patch without any transport-specific validation. A user could change transportType to "sse" without providing a url, or clear command while transportType is "stdio", leaving the server in an invalid state.
Consider adding similar validation in the updateMCPServer handler, merging the existing server fields with the incoming updates before checking constraints.
🤖 Prompt for AI Agents
In `@convex/mcpConfig.ts` around lines 317 - 327, In updateMCPServer, mirror the
transport-specific validation used in addMCPServer by first loading the existing
server record (the current MCP server object), merging its fields with the
incoming patch data, then validate the resolved transportType: require url when
transportType is "sse" or "http" and require command when transportType is
"stdio"; if validation fails, throw an Error (or return a validation error)
before calling ctx.db.patch so you never persist an incomplete state via
updateMCPServer. Ensure you reference the same symbols: updateMCPServer handler,
ctx.db.patch, and follow the same validation logic used in addMCPServer.
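The merge-then-validate pattern can be sketched as follows, with plain types and functions standing in for the Convex document and mutation context:

```typescript
type McpServerRecord = {
  name: string;
  command?: string;
  url?: string;
  transportType?: string; // "stdio" | "sse" | "http" | "direct"
};

// Merge the stored record with the incoming patch, then validate the result,
// so a patch can never leave the server in an invalid state.
function validateMergedUpdate(
  existing: McpServerRecord,
  patch: Partial<McpServerRecord>,
): McpServerRecord {
  const merged = { ...existing, ...patch };
  const transport = merged.transportType ?? "stdio";
  if ((transport === "sse" || transport === "http") && !merged.url) {
    throw new Error(`Transport "${transport}" requires a url field`);
  }
  if (transport === "stdio" && !merged.command) {
    throw new Error('Transport "stdio" requires a command field');
  }
  return merged; // safe to pass to ctx.db.patch
}
```

The key design point is validating the merged state rather than the patch alone: a patch that only changes `transportType` is valid or invalid depending on fields already stored on the record.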
```diff
   cancelled = true;
 };
-}, [automationData, executeWorkflow, lastAutomationStamp, setWorkflowResult] );
+}, [automationData, executeWorkflow, lastAutomationStamp, setWorkflowResult, workflowResult] );
```
workflowResult in the dependency array can trigger an unnecessary re-run of this effect.
This effect calls setWorkflowResult(result) (Line 237), so adding workflowResult to its dependency array means the effect re-fires after it completes. The lastAutomationStamp guard prevents an infinite loop, but the effect body still executes up to the early-return check on every workflowResult change. If this dep was added to satisfy a linter warning, consider using an // eslint-disable-next-line with a comment explaining the intentional omission, or move the setWorkflowResult call outside this effect.
🤖 Prompt for AI Agents
In `@src/components/AgentBuilder.tsx` at line 252, The useEffect in
AgentBuilder.tsx that depends on [automationData, executeWorkflow,
lastAutomationStamp, setWorkflowResult, workflowResult] should not include
workflowResult because the effect calls setWorkflowResult(result) and that
causes an unnecessary re-run; update the dependency array to remove
workflowResult (leave automationData, executeWorkflow, lastAutomationStamp,
setWorkflowResult) so the effect only runs when inputs change, or if the linter
complains add a single-line eslint-disable comment with an explanation
referencing this intentional omission; locate the effect by the useEffect block
that calls setWorkflowResult and checks lastAutomationStamp to apply the change.
```diff
 function determineDeploymentTier(deploymentType: string, _model: string): string {
+  // Freemium: Local (Docker/Ollama) or limited AgentCore via our account
+  if (deploymentType === "docker" || deploymentType === "ollama" || deploymentType === "local") {
+    return "freemium";
+  }

-  // Tier 1: Freemium (AgentCore) - Bedrock models only
-  // Bedrock models can be identified by:
-  // 1. Containing "bedrock" in the name
-  // 2. Using AWS Bedrock provider prefixes (anthropic., amazon., meta., cohere., ai21., mistral.)
-  if (deploymentType === "aws") {
-    const bedrockProviders = ["anthropic.", "amazon.", "meta.", "cohere.", "ai21.", "mistral."];
-    const isBedrockModel = model.includes("bedrock") ||
-      bedrockProviders.some(provider => model.startsWith(provider));
-
-    if (isBedrockModel) {
-      return "tier1";
-    }
-
-    // Non-Bedrock AWS deployments use Fargate (tier2)
-    return "tier2";
+  // Personal: AgentCore in user's own AWS account (via assume-role)
+  if (deploymentType === "aws" || deploymentType === "agentcore") {
+    return "personal";
   }

-  // Default to tier1 for local development
-  return "tier1";
+  // Default to freemium for local development
+  return "freemium";
 }
```
determineDeploymentTier never returns "enterprise" — all enterprise UI code is unreachable.
The function returns either "freemium" or "personal", meaning the enterprise branch in buildResourceEstimates (lines 315–364), calculateEstimatedCost (lines 376–378), the tier badge (line 73), cost description (lines 170–171), and security features (lines 206–216) will never render.
If enterprise tier determination is planned for a future iteration (e.g., via subscription status), consider adding a // TODO to make that intent explicit. Otherwise this is dead code.
🤖 Prompt for AI Agents
In `@src/components/ArchitecturePreview.tsx` around lines 236 - 249,
determineDeploymentTier currently only returns "freemium" or "personal", making
the "enterprise" UI paths in buildResourceEstimates, calculateEstimatedCost, the
tier badge, cost description, and security-feature rendering unreachable; update
determineDeploymentTier to actually return "enterprise" when appropriate (for
example detect deploymentType === "enterprise" or accept/inspect a
subscription/account-flag instead of ignoring the _model param), or if
enterprise support is not yet implemented add a clear TODO and comment in
determineDeploymentTier explaining intended future behavior and why it currently
falls back to "freemium" so reviewers know the enterprise code paths are
intentionally inactive.
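If enterprise detection ends up being driven by an explicit deployment type or a subscription flag, the function could look like this sketch. The `subscriptionTier` parameter is hypothetical, standing in for whatever account flag the app eventually exposes:

```typescript
function determineDeploymentTier(
  deploymentType: string,
  subscriptionTier?: string, // hypothetical: from the user's subscription record
): string {
  // Enterprise: explicit deployment type or an enterprise subscription flag
  if (deploymentType === "enterprise" || subscriptionTier === "enterprise") {
    return "enterprise";
  }
  // Freemium: local Docker/Ollama deployments
  if (deploymentType === "docker" || deploymentType === "ollama" || deploymentType === "local") {
    return "freemium";
  }
  // Personal: AgentCore in the user's own AWS account
  if (deploymentType === "aws" || deploymentType === "agentcore") {
    return "personal";
  }
  return "freemium";
}
```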
```json
"bedrock:CreateAgent",
"bedrock:CreateAgentActionGroup",
"bedrock:InvokeAgent",
"bedrock:InvokeModel",
"bedrock:InvokeModelWithResponseStream",
"bedrock:GetAgent",
"bedrock:ListAgents",
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n src/components/AWSRoleSetup.tsx | sed -n '80,100p'
```

Repository: MikePfunk28/agent_builder_application
Length of output: 872
🌐 Web query:
AWS Bedrock Agent IAM permissions required actions API documentation
💡 Result:
Where AWS documents the required IAM actions
- Complete IAM action list (authoritative): Service Authorization Reference → “Actions, resources, and condition keys for Amazon Bedrock” (service prefix `bedrock`). This is the master list you use to build IAM policies. [1]
- Agents control-plane API operations (what you can manage): Amazon Bedrock API Reference → “Agents for Amazon Bedrock” operations list. [2]
- Agents runtime API operations (what you can invoke): Amazon Bedrock API Reference → “Agents for Amazon Bedrock Runtime” operations list. [3]
- InvokeAgent API details (incl. streaming note): InvokeAgent (agent-runtime) API page. [4]
Typical IAM actions you grant (mapped to the Agents APIs)
A) Manage/build agents (control plane; “Agents for Amazon Bedrock”):
- `bedrock:CreateAgent`, `bedrock:UpdateAgent`, `bedrock:GetAgent`, `bedrock:ListAgents`, `bedrock:DeleteAgent` [2]
- If you use action groups: `bedrock:CreateAgentActionGroup`, `bedrock:UpdateAgentActionGroup`, `bedrock:GetAgentActionGroup`, `bedrock:ListAgentActionGroups`, `bedrock:DeleteAgentActionGroup` [2]
- To version/ready an agent: `bedrock:PrepareAgent` [2]
- To deploy via alias: `bedrock:CreateAgentAlias`, `bedrock:UpdateAgentAlias`, `bedrock:GetAgentAlias`, `bedrock:ListAgentAliases`, `bedrock:DeleteAgentAlias` [2]
- If you attach knowledge bases: `bedrock:AssociateAgentKnowledgeBase` / `bedrock:DisassociateAgentKnowledgeBase` (and the relevant KB actions you use, e.g., `bedrock:CreateKnowledgeBase`, ingestion job APIs, etc.). [1][2]
B) Invoke agents (runtime; “Agents for Amazon Bedrock Runtime”):
- `bedrock:InvokeAgent` (core runtime call) [3]
- Common session/memory runtime ops as needed: `bedrock:CreateSession`, `bedrock:GetSession`, `bedrock:UpdateSession`, `bedrock:EndSession`, `bedrock:GetAgentMemory`, `bedrock:DeleteAgentMemory`, etc. [3]
C) If you stream agent responses:
- Your permissions must include `bedrock:InvokeModelWithResponseStream` (required for streaming). [4]
If you tell me whether you’re writing a builder/admin policy (create/update agents) or a runtime policy (invoke only), plus whether you use knowledge bases and/or streaming, I can output a minimal IAM policy statement set.
Sources: [1] [2] [3] [4]
Bedrock permissions are incomplete for full agent lifecycle management.
The current policy allows creating and invoking agents but omits actions needed for ongoing management. Users will encounter permission errors when attempting to update existing agents, delete agents, prepare/version agents, or use agent aliases. AWS Bedrock Agent management requires: bedrock:UpdateAgent, bedrock:DeleteAgent, bedrock:PrepareAgent, bedrock:UpdateAgentActionGroup, bedrock:DeleteAgentActionGroup, and bedrock:CreateAgentAlias (if using alias-based deployment).
Consider the actual use case and add the necessary management actions, or document that users must customize the policy for their specific workflow.
🤖 Prompt for AI Agents
In `@src/components/AWSRoleSetup.tsx` around lines 87 - 93, The Bedrock IAM policy
in AWSRoleSetup.tsx currently lists agent actions ("bedrock:CreateAgent",
"bedrock:CreateAgentActionGroup", "bedrock:InvokeAgent", "bedrock:InvokeModel",
"bedrock:InvokeModelWithResponseStream", "bedrock:GetAgent",
"bedrock:ListAgents") but is missing management actions required for full agent
lifecycle; update the policy to include the additional Bedrock permissions such
as "bedrock:UpdateAgent", "bedrock:DeleteAgent", "bedrock:PrepareAgent",
"bedrock:UpdateAgentActionGroup", "bedrock:DeleteAgentActionGroup", and (if
using aliases) "bedrock:CreateAgentAlias", or alternatively document that users
must extend the policy for update/delete/prepare/alias workflows so agent
updates and deletions won’t fail due to missing permissions.
Deploying agent-builder-application with Cloudflare Pages

| Latest commit: | bf06249 |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://ff931655.agent-builder-application.pages.dev |
| Branch Preview URL: | https://012-visual-builder-update.agent-builder-application.pages.dev |
🧹 Nitpick comments (1)
.github/workflows/setup-github-projects_Version3.yml (1)
41-41: Consider using `github.repository_owner` instead of a hardcoded owner.
All three `gh project item-add` calls hardcode `--owner MikePfunk28`. If this repo is ever transferred or forked, these will silently target the wrong project. Using the context variable keeps it portable.
♻️ Suggested diff (line 41 shown; apply same pattern to lines 61 and 64)

```diff
- err=$(gh project item-add "$PROJECT_ID" --owner MikePfunk28 --url "$ISSUE_URL" 2>&1) || {
+ err=$(gh project item-add "$PROJECT_ID" --owner "${{ github.repository_owner }}" --url "$ISSUE_URL" 2>&1) || {
```

Also applies to: 61-61, 64-64
… return type in MCP tool invocation
Refactor deployment tiers and rate limiting logic (MikePfunk28 committed 2 days ago)
Refactor MCP server configuration and transport handling (MikePfunk28 committed 2 days ago)
Summary by CodeRabbit
New Features
Refactor
Documentation