feat: add MiniMax as alternative LLM provider for chat widget#780
octo-patch wants to merge 2 commits into StructuredLabs:main
Conversation
Add multi-provider LLM support to the chat component, enabling users to choose between OpenAI and MiniMax as their LLM backend. MiniMax uses an OpenAI-compatible API, making integration seamless.

Changes:
- Create `frontend/src/services/llm.js` with a provider registry, per-provider API key storage, model selection, and temperature clamping
- Update `ChatWidget.jsx` with provider/model selector dropdowns in the settings panel
- Refactor `openai.js` to re-export from `llm.js` for backward compatibility
- Add 26 unit tests and 3 integration tests (vitest)
- Mention the multi-provider chat feature in the README
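The provider registry itself is not shown in this thread; a minimal sketch of how it might be structured (the identifiers `PROVIDERS` and `getProviderConfig`, and the model lists, are assumptions, not the PR's actual names — only the MiniMax endpoint URL comes from the description):

```javascript
// Hypothetical sketch of the registry inside frontend/src/services/llm.js.
// The MiniMax endpoint URL is from the PR description; model lists are assumed.
const PROVIDERS = {
  openai: {
    label: 'OpenAI',
    endpoint: 'https://api.openai.com/v1/chat/completions',
    models: ['gpt-4o-mini', 'gpt-4o'], // assumed model list
  },
  minimax: {
    label: 'MiniMax',
    endpoint: 'https://api.minimax.io/v1/chat/completions',
    models: ['MiniMax-Text-01'], // assumed model list
  },
};

// Look up a provider's config, failing loudly on unknown ids.
function getProviderConfig(id) {
  const config = PROVIDERS[id];
  if (!config) throw new Error(`Unknown LLM provider: ${id}`);
  return config;
}
```

Because both providers speak the same chat-completions dialect, request dispatch can resolve the endpoint from this registry and keep a single request/response code path.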
Add 9 ChatWidget component tests covering provider selection UI, API key state management, and settings panel rendering. Move test infrastructure (vitest.config.js, vitest.setup.js) to frontend root for proper discovery from both test directories.
Cursor Bugbot has reviewed your changes and found 2 potential issues.
```javascript
    it('propagates error', async () => {
      setApiKey('minimax', 'k');
      setSelectedProvider('minimax');
      vi.spyOn(globalThis, 'fetch').mockResolvedValue({
        ok: false,
        json: () => Promise.resolve({ error: { message: 'bad key' } }),
      });
      await expect(createChatCompletion([{ role: 'user', content: 't' }])).rejects.toThrow('bad key');
    });

    it('handles network error', async () => {
      setApiKey('openai', 'sk');
      setSelectedProvider('openai');
      vi.spyOn(globalThis, 'fetch').mockRejectedValue(new Error('net'));
      await expect(createChatCompletion([{ role: 'user', content: 't' }])).rejects.toThrow('net');
    });
  });
});
```
Duplicate test suites run twice in CI
Low Severity
The LLM unit and integration tests exist in two locations: src/__tests__/llm.test.js and src/__tests__/llm.integration.test.js are compact duplicates of the fuller versions in src/services/__tests__/llm.test.js and src/services/__tests__/llm.integration.test.js. The vitest config has no include filter, so both sets run on every npm test, doubling test execution and creating a maintenance burden where changes need to be applied in two places.
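One way to address this (a sketch, assuming the canonical suites are the fuller ones under `src/services/__tests__/` and the component tests live under `src/components/`; the exact globs and setup-file path are assumptions about this repo's layout) is to add an `include` filter to `vitest.config.js` after deleting the compact duplicates:

```javascript
// vitest.config.js — hypothetical include filter so only one copy of each
// suite runs; adjust the globs and setup path to match the actual repo layout.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'jsdom',
    setupFiles: ['./vitest.setup.js'],
    include: [
      'src/services/__tests__/**/*.test.js',
      'src/components/**/__tests__/**/*.test.jsx',
    ],
  },
});
```

With an explicit `include`, a stray duplicate file outside these directories would simply not be collected, so the suite cannot silently double again.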
```javascript
    return Math.max(0, Math.min(1, temperature));
  }
  return temperature;
};
```
clampTemperature exported but never called in production
Low Severity
clampTemperature is defined, exported, and tested, but never invoked in any production code. The createChatCompletion function doesn't call it, nor does ChatWidget.jsx. No temperature parameter is included in the API request body at all. If MiniMax requires temperatures in [0, 1], the clamping needs to actually be applied somewhere; as-is, the function is dead code.
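A sketch of how the clamp could actually be applied when building the request body (`buildRequestBody` and its signature are invented for illustration; only `clampTemperature` and the missing `temperature` field are taken from the review comment):

```javascript
// The exported helper, reconstructed from the diff excerpt above.
const clampTemperature = (temperature) => {
  if (typeof temperature === 'number' && Number.isFinite(temperature)) {
    return Math.max(0, Math.min(1, temperature));
  }
  return temperature;
};

// Hypothetical request-body builder inside createChatCompletion: include a
// temperature field only when one is supplied, clamped to [0, 1].
function buildRequestBody(model, messages, temperature) {
  const body = { model, messages };
  if (temperature !== undefined) {
    body.temperature = clampTemperature(temperature);
  }
  return body;
}
```

Wiring the clamp into the builder would resolve the dead-code finding and enforce the [0, 1] range on every outgoing request.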


Summary
- New `frontend/src/services/llm.js` with provider registry, per-provider API key storage, model selection, and temperature clamping
- `ChatWidget.jsx` with provider/model selector dropdowns in the settings panel

Changes
- `frontend/src/services/llm.js`
- `frontend/src/services/openai.js` (re-exports from `llm.js` for backward compatibility)
- `frontend/src/components/widgets/ChatWidget.jsx`
- `frontend/src/services/__tests__/llm.test.js`
- `frontend/src/services/__tests__/llm.integration.test.js`
- `frontend/package.json` (`npm test` script)
- `README.md`

How it works
MiniMax uses an OpenAI-compatible chat completions API (`https://api.minimax.io/v1/chat/completions`), so the same request/response format works for both providers.

Test plan
- `npm test`
- `npm run build`

Note
Medium Risk
Changes core chat request flow to route through a new multi-provider LLM service and adds provider/model persistence in session storage, which could affect chat availability and API calls. Risk is mitigated by extensive unit/integration tests but still touches user-facing chat and network behavior.
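The session-storage persistence the note refers to is not shown in this thread; a minimal sketch of per-provider key storage (the `llm.apiKey.<provider>` naming scheme and the in-memory fallback are assumptions for illustration, not the PR's actual code):

```javascript
// Hypothetical per-provider API key persistence backed by sessionStorage,
// with an in-memory fallback for non-browser environments (e.g. Node tests).
const memoryStore = new Map();

function storage() {
  try {
    if (typeof sessionStorage !== 'undefined') return sessionStorage;
  } catch (e) {
    // Accessing sessionStorage can throw (e.g. sandboxed iframes); fall through.
  }
  return {
    getItem: (k) => (memoryStore.has(k) ? memoryStore.get(k) : null),
    setItem: (k, v) => memoryStore.set(k, String(v)),
  };
}

// Store and retrieve the key for one provider without touching the others'.
function setApiKey(provider, key) {
  storage().setItem(`llm.apiKey.${provider}`, key);
}

function getApiKey(provider) {
  return storage().getItem(`llm.apiKey.${provider}`);
}
```

Keeping keys namespaced per provider is what lets the widget switch backends without the user re-entering credentials; sessionStorage also means keys are scoped to the tab and dropped when it closes.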
Overview
Adds multi-provider LLM support for the chat widget (OpenAI + MiniMax). A new `frontend/src/services/llm.js` centralizes provider config, per-provider API key storage, model selection, and request dispatch to the provider's `chat/completions` endpoint.

Updates `ChatWidget` settings and behavior to let users pick an LLM provider and model, store provider-specific keys, and reflect provider-specific "online/API key required" UI states.

Introduces a Vitest-based test setup (new `vitest` scripts/config plus `@testing-library/*` + jsdom) and adds coverage for the LLM service and `ChatWidget`, while keeping backward compatibility by turning `frontend/src/services/openai.js` into a re-export of `createChatCompletion` from `llm.js`.

Written by Cursor Bugbot for commit 434219b. This will update automatically on new commits.