This guide explains how to get API keys for every provider supported by ModelMesh. You only need one key to get started — add more providers later for failover and free-tier aggregation.
| Provider | Env Variable | Free Tier | Sign-Up Link |
|---|---|---|---|
| OpenAI | OPENAI_API_KEY | $5 credit (new accounts) | platform.openai.com |
| Anthropic | ANTHROPIC_API_KEY | $5 credit (new accounts) | console.anthropic.com |
| Google Gemini | GOOGLE_API_KEY | Generous free tier | aistudio.google.com |
| xAI (Grok) | XAI_API_KEY | $25 free credit | console.x.ai |
| DeepSeek | DEEPSEEK_API_KEY | Free trial credits | platform.deepseek.com |
| Mistral AI | MISTRAL_API_KEY | Free tier available | console.mistral.ai |
| Cohere | COHERE_API_KEY | Free trial tier | dashboard.cohere.com |
| Groq | GROQ_API_KEY | Free tier (rate-limited) | console.groq.com |
| Perplexity | PERPLEXITY_API_KEY | Free credits on signup | perplexity.ai/settings/api |
| OpenRouter | OPENROUTER_API_KEY | Free models available | openrouter.ai/keys |
| Together AI | TOGETHER_API_KEY | $5 free credit | api.together.ai |
| ElevenLabs | ELEVENLABS_API_KEY | 10,000 chars/month free | elevenlabs.io |
| AssemblyAI | ASSEMBLYAI_API_KEY | Free tier available | assemblyai.com |
| Azure Speech | AZURE_SPEECH_KEY | 5M chars/month free | portal.azure.com |
| Tavily | TAVILY_API_KEY | 1,000 searches/month free | tavily.com |
| Serper | SERPER_API_KEY | 2,500 searches free | serper.dev |
| Jina AI | JINA_API_KEY | 1M tokens/month free | jina.ai |
| Firecrawl | FIRECRAWL_API_KEY | 500 pages/month free | firecrawl.dev |
| Ollama | OLLAMA_HOST | Free (local) | ollama.com |
| LM Studio | LMSTUDIO_HOST | Free (local) | lmstudio.ai |
| vLLM | VLLM_HOST | Free (local) | docs.vllm.ai |
| LocalAI | LOCALAI_HOST | Free (local) | localai.io |
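The environment variables in the table are all ModelMesh needs to discover a provider. As a quick sanity check before launching, you can scan your environment for them. A minimal sketch, using variable names from the table (the helper itself is illustrative, not part of ModelMesh, and the mapping is abbreviated):

```python
import os

# Env variable per provider, as listed in the table above (abbreviated;
# extend with the rest of the table as needed).
PROVIDER_ENV_VARS = {
    "OpenAI": "OPENAI_API_KEY",
    "Anthropic": "ANTHROPIC_API_KEY",
    "Google Gemini": "GOOGLE_API_KEY",
    "Groq": "GROQ_API_KEY",
    "Ollama": "OLLAMA_HOST",
}

def configured_providers(env=os.environ):
    """Return the providers whose env variable is set and non-empty."""
    return [name for name, var in PROVIDER_ENV_VARS.items() if env.get(var)]
```

Running `configured_providers()` before startup gives a fast answer to "which providers will ModelMesh see?" without making any network calls.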
Models: GPT-4o, GPT-4o-mini, GPT-4 Turbo, o1, o3-mini, DALL-E 3, Whisper, TTS
- Go to platform.openai.com/signup
- Create an account (email or Google/Microsoft SSO)
- Navigate to API Keys in the left sidebar
- Click Create new secret key, give it a name
- Copy the key (starts with sk-)

export OPENAI_API_KEY="sk-..."

Connector ID: provider.openai.llm.v1
Models: Claude Opus 4, Claude Sonnet 4, Claude 3.5 Haiku
- Go to console.anthropic.com
- Sign up with email or Google SSO
- Navigate to API Keys in Settings
- Click Create Key
- Copy the key (starts with sk-ant-)

export ANTHROPIC_API_KEY="sk-ant-..."

Connector ID: anthropic.claude.v1
Models: Gemini 2.0 Flash, Gemini 2.0 Pro, Gemini 1.5 Pro, Gemini 1.5 Flash
- Go to aistudio.google.com/apikey
- Sign in with your Google account
- Click Create API Key
- Select or create a Google Cloud project
- Copy the key (starts with AI)

export GOOGLE_API_KEY="AI..."

Connector ID: provider.google.gemini.v1
Tip: Gemini has one of the most generous free tiers — great for development and free-tier aggregation.
Models: Grok-2, Grok-2 Mini, Grok-3, Grok-3 Mini
- Go to console.x.ai
- Sign up with your X (Twitter) account or email
- Navigate to API Keys
- Click Create API Key
- Copy the key
export XAI_API_KEY="xai-..."

Connector ID: provider.xai.grok.v1
Models: DeepSeek-V3, DeepSeek-R1, DeepSeek-Coder
- Go to platform.deepseek.com
- Create an account
- Navigate to API Keys
- Click Create new API key
- Copy the key
export DEEPSEEK_API_KEY="sk-..."

Connector ID: provider.deepseek.api.v1
Tip: DeepSeek offers very competitive pricing — one of the lowest cost-per-token providers.
Models: Mistral Large, Mistral Small, Mistral Nemo, Codestral, Pixtral
- Go to console.mistral.ai
- Create an account
- Navigate to API Keys in the sidebar
- Click Create new key
- Copy the key
export MISTRAL_API_KEY="..."

Connector ID: provider.mistral.api.v1
Models: Command R+, Command R, Embed, Rerank
- Go to dashboard.cohere.com
- Sign up with email or Google SSO
- Navigate to API Keys
- Your trial key is shown automatically
- Copy the key
export COHERE_API_KEY="..."

Connector ID: provider.cohere.nlp.v1
Models: LLaMA 3.3, Gemma 2, Mixtral (ultra-fast inference)
- Go to console.groq.com
- Sign up with email or Google SSO
- Navigate to API Keys
- Click Create API Key
- Copy the key (starts with gsk_)

export GROQ_API_KEY="gsk_..."

Connector ID: provider.groq.api.v1
Tip: Groq provides extremely fast inference using LPU hardware. Great for low-latency use cases.
Models: Sonar, Sonar Pro (search-augmented generation)
- Go to perplexity.ai/settings/api
- Sign up or log in
- Navigate to API settings
- Generate an API key
- Copy the key (starts with pplx-)

export PERPLEXITY_API_KEY="pplx-..."

Connector ID: provider.perplexity.search.v1
Access to 200+ models from multiple providers through a single API key.
- Go to openrouter.ai/keys
- Sign up with email or Google SSO
- Click Create Key
- Copy the key (starts with sk-or-)

export OPENROUTER_API_KEY="sk-or-..."

Connector ID: provider.openrouter.gateway.v1
Tip: OpenRouter includes many free models. Useful for accessing models from providers without direct API access.
Models: LLaMA, Mistral, Qwen, and other open-source models at scale.
- Go to api.together.ai
- Create an account
- Navigate to Settings → API Keys
- Copy your key
export TOGETHER_API_KEY="..."

Connector ID: provider.together.api.v1
Capabilities: Text-to-Speech (high-quality voices)
- Go to elevenlabs.io and sign up
- Navigate to your Profile (bottom-left)
- Click API Key section
- Copy your API key
export ELEVENLABS_API_KEY="..."

Connector ID: provider.elevenlabs.tts.v1
Capabilities: Speech-to-Text, transcription, audio intelligence
- Go to assemblyai.com and sign up
- Navigate to your Dashboard
- Your API key is displayed on the dashboard home page
- Copy the key
export ASSEMBLYAI_API_KEY="..."

Connector ID: provider.assemblyai.stt.v1
Capabilities: Text-to-Speech, Speech-to-Text (Microsoft Neural Voices)
- Go to portal.azure.com
- Create an Azure account (free tier available)
- Create a Speech Services resource
- Navigate to Keys and Endpoint
- Copy Key 1 and note the Region
export AZURE_SPEECH_KEY="..."
export AZURE_SPEECH_REGION="eastus" # your region

Connector ID: provider.azure.tts.v1
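Unlike the other providers, Azure Speech is addressed per region, which is why both AZURE_SPEECH_KEY and AZURE_SPEECH_REGION must be set. A small sketch of how the region feeds into the service endpoint (the URL pattern follows Azure's region-scoped hostname convention and the standard subscription-key header; verify both against your resource's Keys and Endpoint page):

```python
def azure_speech_token_url(region: str) -> str:
    """Build the token-issuing endpoint for an Azure Speech resource.

    Azure Speech hostnames are region-scoped, so the region is part of
    the URL rather than a query parameter.
    """
    return f"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken"

def auth_headers(key: str) -> dict:
    """The key travels in a header, never in the URL."""
    return {"Ocp-Apim-Subscription-Key": key}
```

If requests fail with 401/403 despite a valid key, a region mismatch between the key and the URL is the first thing to check.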
Capabilities: AI-optimized web search
- Go to tavily.com and sign up
- Navigate to your Dashboard
- Your API key is shown on the overview page
- Copy the key (starts with tvly-)

export TAVILY_API_KEY="tvly-..."

Connector ID: provider.tavily.search.v1
Capabilities: Google Search API (SERP results)
- Go to serper.dev and sign up
- Navigate to API Key in your dashboard
- Copy your key
export SERPER_API_KEY="..."

Connector ID: provider.serper.search.v1
Capabilities: Embeddings, reranking, web content extraction
- Go to jina.ai and sign up
- Navigate to API Keys in your account settings
- Create and copy your key
export JINA_API_KEY="..."

Connector ID: provider.jina.ai.v1
Capabilities: Web scraping, content extraction, markdown conversion
- Go to firecrawl.dev and sign up
- Navigate to API Keys in your dashboard
- Create and copy your key (starts with fc-)

export FIRECRAWL_API_KEY="fc-..."

Connector ID: provider.firecrawl.scrape.v1
These providers run on your own machine — no API key required, no usage costs.
Models: LLaMA 3, Mistral, Gemma, Phi, CodeLLaMA, and 100+ more
- Download from ollama.com
- Install and run: ollama serve
- Pull a model: ollama pull llama3.3
- The API is available at http://localhost:11434

export OLLAMA_HOST="http://localhost:11434"

Connector ID: ollama.local.v1
Models: Any GGUF model from Hugging Face
- Download from lmstudio.ai
- Install and launch LM Studio
- Download a model from the Discover tab
- Start the local server (Developer tab)
- The API is available at http://localhost:1234

export LMSTUDIO_HOST="http://localhost:1234"

Connector ID: lmstudio.local.v1
Models: Any Hugging Face model with high-performance serving
- Install: pip install vllm
- Start the server: python -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-3.3-70B-Instruct
- The API is available at http://localhost:8000

export VLLM_HOST="http://localhost:8000"

Connector ID: vllm.local.v1
Models: LLaMA, Whisper, Stable Diffusion, and more via OpenAI-compatible API
- Install from localai.io
- Or run via Docker: docker run -p 8080:8080 localai/localai
- The API is available at http://localhost:8080

export LOCALAI_HOST="http://localhost:8080"

Connector ID: localai.local.v1
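All four local backends expose a plain HTTP endpoint on localhost, so before pointing ModelMesh at one you can confirm the server is actually listening. A minimal TCP probe, using the host URLs from the sections above (the helper is illustrative, not part of ModelMesh):

```python
import socket
from urllib.parse import urlparse

def is_listening(host_url: str, timeout: float = 1.0) -> bool:
    """True if something accepts TCP connections at the URL's host:port,
    e.g. http://localhost:11434 for Ollama or :1234 for LM Studio."""
    parsed = urlparse(host_url)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        with socket.create_connection((parsed.hostname, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable
        return False
```

A False result usually means the server isn't running yet (e.g. ollama serve was never started) or the port in the HOST variable doesn't match the one the server bound.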
Set one or more keys and call create():
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."

import modelmesh

client = modelmesh.create("chat-completion")
# ModelMesh auto-detects all configured providers

Create a .env file in your project root:
cp .env.example .env
# Edit .env and add your keys

Reference keys via secret store in modelmesh.yaml:
providers:
  openai.llm.v1:
    connector: openai.llm.v1
    config:
      api_key: "${secrets:OPENAI_API_KEY}"

Pass keys directly in code:
import modelmesh

client = modelmesh.create(
    "chat-completion",
    providers={"openai": {"api_key": "sk-..."}},
)

- Start with one provider — you can always add more later
- Use free tiers — Gemini, Groq, and OpenRouter offer generous free tiers
- Never commit keys — use .env files (gitignored) or secret stores
- Rotate keys regularly — especially for production deployments
- Set budget limits — prevent surprise bills with ModelMesh budget enforcement
- Use local providers for development — Ollama and LM Studio are free and fast
For maximum free usage, set up these providers:
export GOOGLE_API_KEY="AI..." # Gemini: generous free tier
export GROQ_API_KEY="gsk_..." # Groq: fast free inference
export OPENROUTER_API_KEY="sk-or-..." # OpenRouter: access to free models

import modelmesh

client = modelmesh.create("chat-completion")
# Chains all three providers automatically

ModelMesh rotates between them when quotas are exhausted.
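The rotation described above can be pictured as a simple try-in-order loop. This is an illustrative sketch of the failover pattern, not ModelMesh's actual internals (QuotaExceeded and the provider callables are stand-ins):

```python
class QuotaExceeded(Exception):
    """Stand-in for a provider's rate-limit/quota error."""

def complete_with_failover(prompt, providers):
    """Try each provider in order; fall through on quota errors.

    `providers` is an ordered list of (name, callable) pairs, each
    callable taking a prompt and returning a completion string.
    """
    errors = {}
    for name, call in providers:
        try:
            return call(prompt)
        except QuotaExceeded as exc:
            errors[name] = exc  # record the failure, try the next provider
    raise RuntimeError(f"all providers exhausted: {sorted(errors)}")
```

The ordering is what makes free-tier aggregation work: list the most generous free tier first, and paid providers last as a backstop.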
See also: System Configuration | Connector Catalogue | FAQ | Troubleshooting