ci(bonk): update Bonk review routing #1120
Conversation
Bonk and BigBonk still routed review runs to Claude Opus 4.6 with the max variant. That leaves review commands and local opencode review agents on the old Anthropic model instead of the intended OpenAI xhigh setup. The workflows now route through Cloudflare AI Gateway to openai/gpt-5.5, request the xhigh variant, and pin opencode to 1.14.40 so the cf-ai-gateway reasoning_effort propagation fix from anomalyco/opencode#25573 is present. The viguy and reviewer agent frontmatter plus contributor docs now describe the same GPT-5.5 review setup.
commit: |
The review model update should split regular Bonk and BigBonk. Regular Bonk is meant to use Claude Opus 4.7, while BigBonk should use GPT-5.5 with xhigh reasoning. Point the /bonk workflow back to the Anthropic Opus 4.7 model through Cloudflare AI Gateway and keep its max variant. Leave /bigbonk on GPT-5.5 xhigh, and update contributor docs to describe the two review paths.
The requested review model split is regular Bonk on GPT-5.5 xhigh and BigBonk on Claude Opus 4.6 max. Update only the Bonk workflow model and variant inputs. Leave the local opencode agent frontmatter and contributor docs unchanged from the existing PR branch.
Bonk should run GPT-5.5 xhigh through the reviewer agent, while BigBonk keeps the viguy agent on Claude Opus 4.6 max. Route the Bonk workflow to the reviewer agent, set viguy's default model to Claude Opus 4.6, and use the same 0.2 temperature for both opencode agent definitions.
OpenCode agent frontmatter should use the normal provider/model ids. The workflow inputs still need Cloudflare AI Gateway-prefixed model ids because the GitHub action routes through AI Gateway credentials. Set viguy back to anthropic/claude-opus-4-6 and reviewer to openai/gpt-5.5 while leaving the Bonk and BigBonk workflow model overrides unchanged.
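The prefix relationship described here can be sketched as a tiny helper. This is illustrative only: `GATEWAY_PREFIX` and the function names are not from the repo or the opencode codebase, just a model of the mapping between workflow model inputs and agent frontmatter ids.

```python
# Sketch of the assumed id mapping: workflow inputs carry a Cloudflare
# AI Gateway prefix, agent frontmatter uses the bare provider/model id.
GATEWAY_PREFIX = "cloudflare-ai-gateway/"

def to_workflow_id(provider_model: str) -> str:
    """Prefix a direct provider/model id for the AI Gateway route."""
    return GATEWAY_PREFIX + provider_model

def to_agent_id(workflow_id: str) -> str:
    """Strip the gateway prefix to recover the frontmatter id."""
    if workflow_id.startswith(GATEWAY_PREFIX):
        return workflow_id[len(GATEWAY_PREFIX):]
    return workflow_id

assert to_workflow_id("openai/gpt-5.5") == "cloudflare-ai-gateway/openai/gpt-5.5"
assert to_agent_id("cloudflare-ai-gateway/anthropic/claude-opus-4-6") == "anthropic/claude-opus-4-6"
```

Under this model, the same underlying id appears twice per agent: once prefixed in the workflow override, once bare in the frontmatter.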
/bigbonk review
The workflow files look correct and internally consistent. The version bump and model/agent routing changes in bonk.yml, bigbonk.yml, and reviewer.md all line up.
However, CONTRIBUTING.md has the model names swapped and references a model that doesn't exist in any workflow.
Review posted. Here's the summary: the workflow files (`bonk.yml`, `bigbonk.yml`, `reviewer.md`) look correct and internally consistent. One blocking issue:
Both lines 7 and 18 of CONTRIBUTING.md need to be corrected. I posted suggestions with the fix.
Co-authored-by: ask-bonk[bot] <249159057+ask-bonk[bot]@users.noreply.github.com>
What this changes
Updates the review workflow routing:
- Regular Bonk (`/bonk`, `@ask-bonk`) uses GPT-5.5 xhigh through Cloudflare AI Gateway and the `reviewer` agent.
- BigBonk (`/bigbonk`) uses Claude Opus 4.6 max through Cloudflare AI Gateway and the `viguy` agent.
- `reviewer` defaults to `openai/gpt-5.5`, `viguy` defaults to `anthropic/claude-opus-4-6`, and both use `temperature: 0.2`.
- Bumps `opencode_version` to `1.14.40`.

Why
The intended split is regular Bonk on GPT-5.5 xhigh with the reviewer agent, and BigBonk on Opus 4.6 max with the viguy agent.
The workflow model inputs keep Cloudflare AI Gateway-prefixed ids because the GitHub action provides AI Gateway credentials. The agent frontmatter uses direct OpenCode provider/model ids, matching upstream's original agent format.
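The intended routing split can be stated as a small consistency check. A minimal sketch: the expected model/variant/agent values are taken from this PR, while the table structure and `check` helper are illustrative, not code from the workflows.

```python
# Expected review routing per command, as described in this PR.
EXPECTED = {
    "/bonk": {
        "model": "cloudflare-ai-gateway/openai/gpt-5.5",
        "variant": "xhigh",
        "agent": "reviewer",
    },
    "/bigbonk": {
        "model": "cloudflare-ai-gateway/anthropic/claude-opus-4-6",
        "variant": "max",
        "agent": "viguy",
    },
}

def check(command: str, model: str, variant: str, agent: str) -> bool:
    """Return True when a workflow's inputs match the intended routing."""
    want = EXPECTED[command]
    return (model, variant, agent) == (want["model"], want["variant"], want["agent"])

assert check("/bonk", "cloudflare-ai-gateway/openai/gpt-5.5", "xhigh", "reviewer")
assert check("/bigbonk", "cloudflare-ai-gateway/anthropic/claude-opus-4-6", "max", "viguy")
```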
The OpenCode bump includes the cf-ai-gateway reasoning_effort propagation fix from anomalyco/opencode#25573.
Validation
- Parsed `.github/workflows/bonk.yml` and `.github/workflows/bigbonk.yml` with Ruby YAML.
- `rg` checks: Bonk = `cloudflare-ai-gateway/openai/gpt-5.5`, `variant: xhigh`, `agent: reviewer`; BigBonk = `cloudflare-ai-gateway/anthropic/claude-opus-4-6`, `variant: max`, `agent: viguy`.
- `reviewer.md` = `openai/gpt-5.5`, `viguy.md` = `anthropic/claude-opus-4-6`, both at `temperature: 0.2`.
- Confirmed the action passes `MODEL`, `AGENT`, and `VARIANT` into `opencode github run`, and that OpenCode documents `CLOUDFLARE_ACCOUNT_ID`, `CLOUDFLARE_GATEWAY_ID`, and `CLOUDFLARE_API_TOKEN` as the Cloudflare AI Gateway env vars.
- Ran `opencode-ai@1.14.40 run -m cloudflare-ai-gateway/openai/gpt-5.5 --variant xhigh` with fake gateway credentials. OpenCode completed provider init, selected `modelID=openai/gpt-5.5`, and failed only at expected AI Gateway auth.
- Ran `opencode-ai@1.14.40 run -m cloudflare-ai-gateway/anthropic/claude-opus-4-6` with fake gateway credentials. OpenCode completed provider init, selected `modelID=anthropic/claude-opus-4-6`, and failed only at expected AI Gateway auth.
- `openai/gpt-5.5` works locally; `opencode/gpt-5.5` does not. Upstream originally used `anthropic/claude-opus-4-6` in agent frontmatter.

Risks / follow-ups
- Full `vp check` was not usable in this local checkout because it traversed the local `.refs/astro` reference clone and Vite panicked while printing output. The staged-file hook also fails on this YAML/Markdown-only diff with `No files found to lint`, so commits were made with hooks bypassed after the focused checks above.
- I could not perform a real upstream AI Gateway completion locally because this shell does not have valid Cloudflare AI Gateway credentials. The smoke tests cover the previous failure classes before that boundary: provider initialization and model/variant routing.