
Support non-Gemini LLM providers (OpenAI-compatible, Anthropic, etc.) #7

@cosgroveb

Description

Problem

AsyncReview is hardcoded to Gemini: virtual_runner.py force-prepends gemini/ to model names, GEMINI_API_KEY is the only supported auth path, and there is no way to point at a different API endpoint. The hardcoding is unnecessary, since DSPy/LiteLLM already handle multi-provider routing via model prefixes.
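
For context, the coupling presumably boils down to something like this (a hypothetical reconstruction; the actual virtual_runner.py code may differ):

import os

# Hypothetical reconstruction of the current hardcoding (not actual
# AsyncReview code):
model = "gemini/" + os.environ["MAIN_MODEL"]   # gemini/ is force-prepended
api_key = os.environ["GEMINI_API_KEY"]         # the only supported auth path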

Proposed solution

Trust the MAIN_MODEL/SUB_MODEL strings as-is and let LiteLLM do the routing. Add optional LLM_API_BASE/LLM_API_KEY env vars for proxy endpoints.
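
A minimal sketch of what that could look like, assuming DSPy's dspy.LM wrapper (which forwards api_base/api_key kwargs to LiteLLM); build_lm is a hypothetical helper name, not existing AsyncReview code:

import os

import dspy

# Trust the model string as-is; LiteLLM routes on the provider prefix
# ("gemini/", "openai/", "anthropic/", ...). Provider SDK vars such as
# GEMINI_API_KEY or ANTHROPIC_API_KEY keep working because LiteLLM reads
# them itself when no explicit key is passed.
def build_lm(model_env: str = "MAIN_MODEL") -> dspy.LM:
    kwargs = {}
    if api_base := os.environ.get("LLM_API_BASE"):
        kwargs["api_base"] = api_base
    if api_key := os.environ.get("LLM_API_KEY"):
        kwargs["api_key"] = api_key
    return dspy.LM(os.environ[model_env], **kwargs)

dspy.configure(lm=build_lm("MAIN_MODEL"))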

Usage

# Gemini (unchanged)
MAIN_MODEL=gemini/gemini-2.5-pro
GEMINI_API_KEY=...

# OpenAI-compatible proxy
MAIN_MODEL=openai/claude-sonnet-4-5
SUB_MODEL=openai/gemini-2.5-flash
LLM_API_BASE=https://proxy.example.com/v1
LLM_API_KEY=...

# Anthropic direct
MAIN_MODEL=anthropic/claude-sonnet-4-5
ANTHROPIC_API_KEY=...
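
As a sanity check that routing really is prefix-driven, a direct LiteLLM call behaves the same way (illustrative snippet; LLM_API_BASE/LLM_API_KEY are the proposed optional overrides):

import os

import litellm

# LiteLLM picks the provider from the "gemini/"/"openai/"/"anthropic/"
# prefix; api_base/api_key are only needed for proxy-style endpoints and
# default to None otherwise.
response = litellm.completion(
    model=os.environ["MAIN_MODEL"],
    messages=[{"role": "user", "content": "ping"}],
    api_base=os.environ.get("LLM_API_BASE"),
    api_key=os.environ.get("LLM_API_KEY"),
)
print(response.choices[0].message.content)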

Alternatives considered

N/A

Labels: enhancement