
feat: add MiniMax as alternative LLM provider for chat widget#780

Open
octo-patch wants to merge 2 commits into StructuredLabs:main from octo-patch:feature/add-minimax-provider

Conversation

octo-patch commented Mar 22, 2026

Summary

  • Add multi-provider LLM support to the chat component with OpenAI and MiniMax as selectable backends
  • Create frontend/src/services/llm.js with provider registry, per-provider API key storage, model selection, and temperature clamping
  • Update ChatWidget.jsx with provider/model selector dropdowns in settings panel
  • MiniMax provides OpenAI-compatible API with models: M2.7, M2.7-highspeed, M2.5, M2.5-highspeed
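The provider registry described above might be sketched as follows. The MiniMax endpoint and model names come from this PR; the OpenAI model list and all field names are illustrative assumptions, not the actual contents of llm.js:

```javascript
// Sketch of a provider registry. The MiniMax endpoint and model names are
// taken from this PR; everything else (field names, OpenAI model list) is
// an illustrative assumption.
const PROVIDERS = {
  openai: {
    label: 'OpenAI',
    endpoint: 'https://api.openai.com/v1/chat/completions',
    models: ['gpt-4o', 'gpt-4o-mini'], // hypothetical list
  },
  minimax: {
    label: 'MiniMax',
    endpoint: 'https://api.minimax.io/v1/chat/completions',
    models: ['M2.7', 'M2.7-highspeed', 'M2.5', 'M2.5-highspeed'],
  },
};

// Look up a provider config, failing loudly on unknown IDs.
function getProvider(id) {
  const provider = PROVIDERS[id];
  if (!provider) throw new Error(`Unknown LLM provider: ${id}`);
  return provider;
}
```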

Changes

| File | Description |
| --- | --- |
| frontend/src/services/llm.js | New multi-provider LLM service with provider configs, API key management, and model selection |
| frontend/src/services/openai.js | Refactored to re-export from llm.js for backward compatibility |
| frontend/src/components/widgets/ChatWidget.jsx | Added provider and model selector dropdowns in settings |
| frontend/src/services/__tests__/llm.test.js | 26 unit tests |
| frontend/src/services/__tests__/llm.integration.test.js | 3 integration tests |
| frontend/package.json | Added vitest test runner and npm test script |
| README.md | Added mention of the multi-provider chat feature |

How it works

  1. Users click the Settings gear icon in the chat widget
  2. Select their preferred LLM provider (OpenAI or MiniMax) from a dropdown
  3. Choose the model they want to use
  4. Enter their API key for the selected provider
  5. API keys are stored independently per provider in browser sessionStorage
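Per-provider key storage (step 5) can be sketched as namespaced sessionStorage entries. The `llm:apiKey:<provider>` key format is an assumption, not the PR's actual storage scheme; a Map-backed fallback keeps the sketch runnable outside a browser:

```javascript
// Sketch: each provider's API key lives under its own sessionStorage entry,
// so switching providers never overwrites another provider's key.
// The "llm:apiKey:<provider>" key format is an assumption.
const mem = new Map();
const storage = typeof sessionStorage !== 'undefined' ? sessionStorage : {
  // Map-backed fallback so the sketch also runs outside a browser.
  setItem: (k, v) => mem.set(k, v),
  getItem: (k) => (mem.has(k) ? mem.get(k) : null),
};

const storageKeyFor = (provider) => `llm:apiKey:${provider}`;

function setApiKey(provider, key) {
  storage.setItem(storageKeyFor(provider), key);
}

function getApiKey(provider) {
  return storage.getItem(storageKeyFor(provider));
}
```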

MiniMax uses an OpenAI-compatible chat completions API (https://api.minimax.io/v1/chat/completions), so the same request/response format works for both providers.
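Because the wire format is shared, request construction can be provider-agnostic. A sketch (the function name and return shape are assumptions, not the PR's actual code):

```javascript
// Sketch of a provider-agnostic request builder: only the endpoint URL and
// API key differ between OpenAI and MiniMax; the payload shape is identical.
function buildChatRequest(endpoint, apiKey, model, messages) {
  return {
    url: endpoint,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}
```

The caller would then run `fetch(req.url, req.options)` and read `data.choices[0].message.content` from the parsed response, as with any OpenAI-compatible chat completions endpoint.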

Test plan

  • 26 unit tests covering provider config, API key management, model selection, temperature clamping, and API calls
  • 3 integration tests covering provider switching workflow, multi-turn conversations, and model changes
  • All 29 tests pass with npm test
  • Frontend build succeeds with npm run build
  • Manual test: select MiniMax provider, enter API key, verify chat works
  • Manual test: switch between providers, verify API keys persist independently

Note

Medium Risk
Changes core chat request flow to route through a new multi-provider LLM service and adds provider/model persistence in session storage, which could affect chat availability and API calls. Risk is mitigated by extensive unit/integration tests but still touches user-facing chat and network behavior.

Overview
Adds multi-provider LLM support for the chat widget (OpenAI + MiniMax). A new frontend/src/services/llm.js centralizes provider config, per-provider API key storage, model selection, and request dispatch to the provider’s chat/completions endpoint.

Updates ChatWidget settings and behavior to let users pick an LLM provider and model, store provider-specific keys, and reflect provider-specific “online/API key required” UI states.

Introduces a Vitest-based test setup (new vitest scripts/config plus @testing-library/* and jsdom) and adds coverage for the LLM service and ChatWidget. Backward compatibility is preserved by turning frontend/src/services/openai.js into a re-export of createChatCompletion from llm.js.

Written by Cursor Bugbot for commit 434219b. This will update automatically on new commits.

PR Bot added 2 commits March 22, 2026 20:29
Add multi-provider LLM support to the chat component, enabling users to
choose between OpenAI and MiniMax as their LLM backend. MiniMax uses an
OpenAI-compatible API, making integration seamless.

Changes:
- Create frontend/src/services/llm.js with provider registry, per-provider
  API key storage, model selection, and temperature clamping
- Update ChatWidget.jsx with provider/model selector dropdowns in settings
- Refactor openai.js to re-export from llm.js for backward compatibility
- Add 26 unit tests and 3 integration tests (vitest)
- Update README with multi-provider chat feature mention

Add 9 ChatWidget component tests covering provider selection UI,
API key state management, and settings panel rendering.
Move test infrastructure (vitest.config.js, vitest.setup.js) to
frontend root for proper discovery from both test directories.

cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.


it('propagates error', async () => {
  setApiKey('minimax', 'k');
  setSelectedProvider('minimax');
  vi.spyOn(globalThis, 'fetch').mockResolvedValue({
    ok: false,
    json: () => Promise.resolve({ error: { message: 'bad key' } }),
  });
  await expect(createChatCompletion([{ role: 'user', content: 't' }])).rejects.toThrow('bad key');
});
it('handles network error', async () => {
  setApiKey('openai', 'sk');
  setSelectedProvider('openai');
  vi.spyOn(globalThis, 'fetch').mockRejectedValue(new Error('net'));
  await expect(createChatCompletion([{ role: 'user', content: 't' }])).rejects.toThrow('net');
});
});
});


Duplicate test suites run twice in CI

Low Severity

The LLM unit and integration tests exist in two locations: src/__tests__/llm.test.js and src/__tests__/llm.integration.test.js are compact duplicates of the fuller versions in src/services/__tests__/llm.test.js and src/services/__tests__/llm.integration.test.js. The vitest config has no include filter, so both sets run on every npm test, doubling test execution and creating a maintenance burden where changes need to be applied in two places.
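One possible fix (a sketch, not the PR's actual config) is an explicit `include` glob in the Vitest config so only the canonical suites are discovered; deleting the duplicate files under src/__tests__ would work equally well:

```javascript
// vitest.config.js — sketch: restrict discovery to the canonical test
// directory so the compact duplicates under src/__tests__ stop running.
// The jsdom environment setting is an assumption based on the PR's
// @testing-library/* + jsdom dependencies.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'jsdom',
    include: ['src/services/__tests__/**/*.test.js'],
  },
});
```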


return Math.max(0, Math.min(1, temperature));
}
return temperature;
};


clampTemperature exported but never called in production

Low Severity

clampTemperature is defined, exported, and tested, but never invoked in any production code. The createChatCompletion function doesn't call it, nor does ChatWidget.jsx. No temperature parameter is included in the API request body at all. If MiniMax requires temperatures in [0, 1], the clamping needs to actually be applied somewhere; as-is, the function is dead code.
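One way to make the clamp live (a sketch; `buildRequestBody` and the body shape are assumptions, and the guard condition mirrors the diff excerpt above):

```javascript
// Sketch: apply clampTemperature while assembling the request body, so
// out-of-range values are coerced into [0, 1] before reaching either API.
// The guard condition follows the diff excerpt; the body shape is assumed.
const clampTemperature = (temperature) => {
  if (typeof temperature === 'number') {
    return Math.max(0, Math.min(1, temperature));
  }
  return temperature;
};

function buildRequestBody(model, messages, temperature) {
  const body = { model, messages };
  const clamped = clampTemperature(temperature);
  if (typeof clamped === 'number') {
    // Only send temperature when the caller actually provided a number.
    body.temperature = clamped;
  }
  return body;
}
```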


