
Add MiniMax as first-class LLM provider#363

Open
octo-patch wants to merge 1 commit into enoch3712:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as a first-class LLM provider for document extraction, supporting the MiniMax-M2.7 and MiniMax-M2.7-highspeed models (context windows of up to 1M tokens).

Users simply use the minimax/ model prefix. The integration auto-detects MiniMax models, rewrites the model string for litellm's OpenAI-compatible provider, and configures the API base URL and key automatically from environment variables.
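The detection-and-rewrite logic described above can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: the helper names match the `is_minimax_model()` / `resolve_minimax_model()` additions listed under Changes, but the endpoint URL and the exact `openai/` rewrite target are assumptions.

```python
import os

# Assumed value of the MINIMAX_API_BASE constant from utils.py.
MINIMAX_API_BASE = "https://api.minimax.io/v1"

def is_minimax_model(model: str) -> bool:
    # A model is routed to MiniMax when it carries the minimax/ prefix.
    return model.lower().startswith("minimax/")

def resolve_minimax_model(model: str) -> dict:
    # Rewrite minimax/<name> so litellm treats it as an OpenAI-compatible
    # endpoint, and pull the base URL and key from environment variables.
    name = model.split("/", 1)[1]
    return {
        "model": f"openai/{name}",
        "api_base": os.environ.get("MINIMAX_API_BASE", MINIMAX_API_BASE),
        "api_key": os.environ.get("MINIMAX_API_KEY"),
    }
```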

Usage

```python
import os
from extract_thinker import Extractor, DocumentLoaderPyPdf, Contract

os.environ["MINIMAX_API_KEY"] = "your-key"

class InvoiceContract(Contract):
    invoice_number: str
    invoice_date: str

extractor = Extractor()
extractor.load_document_loader(DocumentLoaderPyPdf())
extractor.load_llm("minimax/MiniMax-M2.7")

result = extractor.extract("invoice.pdf", InvoiceContract)
```
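Instead of hard-coding the model string, the `get_minimax_model()` / `get_minimax_highspeed_model()` convenience helpers added in global_models.py can supply it. A plausible sketch (the returned strings are assumptions based on the model names above):

```python
def get_minimax_model() -> str:
    # Default MiniMax model string, ready to pass to load_llm().
    return "minimax/MiniMax-M2.7"

def get_minimax_highspeed_model() -> str:
    # Faster variant for latency-sensitive extraction.
    return "minimax/MiniMax-M2.7-highspeed"
```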

Changes

  • extract_thinker/llm.py - Add api_base/api_key constructor params, MiniMax auto-detection via minimax/ prefix, temperature clamping to [0, 1], extra params passed through all request paths
  • extract_thinker/utils.py - Add is_minimax_model(), resolve_minimax_model() helpers and MINIMAX_API_BASE/MINIMAX_MODELS constants
  • extract_thinker/global_models.py - Add get_minimax_model() and get_minimax_highspeed_model() convenience functions
  • README.md - Add MiniMax to supported providers list with usage example
  • tests/test_minimax.py - 29 unit tests + 3 integration tests covering model detection, temperature clamping, parameter wiring, and real API calls
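The temperature clamping mentioned in the llm.py changes can be sketched as a simple bound to MiniMax's accepted [0, 1] range. This is an illustrative stand-in for the real implementation:

```python
def clamp_temperature(temperature: float) -> float:
    # MiniMax accepts temperatures in [0, 1]; out-of-range values are
    # clamped rather than rejected, per the PR description.
    return max(0.0, min(1.0, temperature))
```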

Test plan

  • 29 unit tests pass (model detection, resolve, temperature clamping, extra params, global models)
  • 3 integration tests pass with real MiniMax API (raw completion, structured extraction via Extractor, highspeed variant)
  • Existing test_llm_backends.py tests unaffected (its pre-existing failures are unrelated to this change)
  • CI pipeline passes
