Merged
28 changes: 26 additions & 2 deletions docs/docs/waveai-modes.mdx
@@ -35,6 +35,7 @@ Wave AI now supports provider-based configuration which automatically applies se
- **`openai`** - OpenAI API (automatically configures endpoint and secret name) [[see example](#openai)]
- **`openrouter`** - OpenRouter API (automatically configures endpoint and secret name) [[see example](#openrouter)]
- **`nanogpt`** - NanoGPT API (automatically configures endpoint and secret name) [[see example](#nanogpt)]
- **`groq`** - Groq API (automatically configures endpoint and secret name) [[see example](#groq)]
- **`google`** - Google AI (Gemini) [[see example](#google-ai-gemini)]
- **`azure`** - Azure OpenAI Service (modern API) [[see example](#azure-openai-modern-api)]
- **`azure-legacy`** - Azure OpenAI Service (legacy deployment API) [[see example](#azure-openai-legacy-deployment-api)]
@@ -295,6 +296,29 @@ NanoGPT is a proxy service that provides access to multiple AI models. You must
For vision-capable models like `openai/gpt-5`, add `"images"` to capabilities.
:::

### Groq

[Groq](https://groq.com) provides fast inference for open models through an OpenAI-compatible API. Using the `groq` provider simplifies configuration:

```json
{
"groq-kimi-k2": {
"display:name": "Groq - Kimi K2",
"ai:provider": "groq",
"ai:model": "moonshotai/kimi-k2-instruct"
}
}
```

The provider automatically sets:
- `ai:endpoint` to `https://api.groq.com/openai/v1/chat/completions`
- `ai:apitype` to `openai-chat`
- `ai:apitokensecretname` to `GROQ_KEY` (store your Groq API key with this name)

:::note
For Groq, you must manually specify `ai:capabilities` based on your model's features.
:::
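Building on the example above, a mode that opts into tool use would add the capabilities array explicitly (whether a given model actually supports tools is for you to verify against Groq's model documentation — the value below is illustrative):

```json
{
  "groq-kimi-k2": {
    "display:name": "Groq - Kimi K2",
    "ai:provider": "groq",
    "ai:model": "moonshotai/kimi-k2-instruct",
    "ai:capabilities": ["tools"]
  }
}
```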

### Google AI (Gemini)

[Google AI](https://ai.google.dev) provides the Gemini family of models. Using the `google` provider simplifies configuration:
@@ -508,7 +532,7 @@ If you get "model not found" errors:
| `display:order` | No | Sort order in the selector (lower numbers first) |
| `display:icon` | No | Icon identifier for the mode (can use any [FontAwesome icon](https://fontawesome.com/search), use the name without the "fa-" prefix). Default is "sparkles" |
| `display:description` | No | Full description of the mode |
-| `ai:provider` | No | Provider preset: `openai`, `openrouter`, `google`, `azure`, `azure-legacy`, `custom` |
+| `ai:provider` | No | Provider preset: `openai`, `openrouter`, `nanogpt`, `groq`, `google`, `azure`, `azure-legacy`, `custom` |
| `ai:apitype` | No | API type: `openai-chat`, `openai-responses`, or `google-gemini` (defaults to `openai-chat` if not specified) |
| `ai:model` | No | Model identifier (required for most providers) |
| `ai:thinkinglevel` | No | Thinking level: `low`, `medium`, or `high` |
@@ -532,7 +556,7 @@ The `ai:capabilities` field specifies what features the AI mode supports:

**Provider-specific behavior:**
- **OpenAI and Google providers**: Capabilities are automatically configured based on the model. You don't need to specify them.
-- **OpenRouter, Azure, Azure-Legacy, and Custom providers**: You must manually specify capabilities based on your model's features.
+- **OpenRouter, NanoGPT, Groq, Azure, Azure-Legacy, and Custom providers**: You must manually specify capabilities based on your model's features.

:::warning
If you don't include `"tools"` in the `ai:capabilities` array, the AI model will not be able to interact with your Wave terminal widgets, read/write files, or execute commands. Most AI modes should include `"tools"` for the best Wave experience.
1 change: 1 addition & 0 deletions pkg/aiusechat/uctypes/uctypes.go
@@ -26,6 +26,7 @@ const (
const (
AIProvider_Wave = "wave"
AIProvider_Google = "google"
AIProvider_Groq = "groq"
AIProvider_OpenRouter = "openrouter"
AIProvider_NanoGPT = "nanogpt"
AIProvider_OpenAI = "openai"
13 changes: 13 additions & 0 deletions pkg/aiusechat/usechat-mode.go
@@ -22,6 +22,7 @@ const (
OpenAIChatEndpoint = "https://api.openai.com/v1/chat/completions"
OpenRouterChatEndpoint = "https://openrouter.ai/api/v1/chat/completions"
NanoGPTChatEndpoint = "https://nano-gpt.com/api/v1/chat/completions"
GroqChatEndpoint = "https://api.groq.com/openai/v1/chat/completions"
AzureLegacyEndpointTemplate = "https://%s.openai.azure.com/openai/deployments/%s/chat/completions?api-version=%s"
AzureResponsesEndpointTemplate = "https://%s.openai.azure.com/openai/v1/responses"
AzureChatEndpointTemplate = "https://%s.openai.azure.com/openai/v1/chat/completions"
@@ -32,6 +33,7 @@ const (
OpenAIAPITokenSecretName = "OPENAI_KEY"
OpenRouterAPITokenSecretName = "OPENROUTER_KEY"
NanoGPTAPITokenSecretName = "NANOGPT_KEY"
GroqAPITokenSecretName = "GROQ_KEY"
AzureOpenAIAPITokenSecretName = "AZURE_OPENAI_KEY"
GoogleAIAPITokenSecretName = "GOOGLE_AI_KEY"
)
@@ -112,6 +114,17 @@ func applyProviderDefaults(config *wconfig.AIModeConfigType) {
config.APITokenSecretName = NanoGPTAPITokenSecretName
}
}
if config.Provider == uctypes.AIProvider_Groq {
if config.APIType == "" {
config.APIType = uctypes.APIType_OpenAIChat
}
if config.Endpoint == "" {
config.Endpoint = GroqChatEndpoint
}
if config.APITokenSecretName == "" {
config.APITokenSecretName = GroqAPITokenSecretName
}
}
if config.Provider == uctypes.AIProvider_AzureLegacy {
if config.AzureAPIVersion == "" {
config.AzureAPIVersion = AzureLegacyDefaultAPIVersion
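The new Groq block follows the same precedence rule as the other providers in `applyProviderDefaults`: a default is applied only when the field is still empty, so values set explicitly in the mode config always win. A minimal standalone sketch of that rule (`fillIfEmpty` is a hypothetical helper, not part of the waveterm codebase):

```go
package main

import "fmt"

// fillIfEmpty mirrors the if-empty checks above: the fallback is used
// only when the caller left the field blank, so explicit config wins.
func fillIfEmpty(current, fallback string) string {
	if current == "" {
		return fallback
	}
	return current
}

func main() {
	// Endpoint left empty -> provider default is used.
	fmt.Println(fillIfEmpty("", "https://api.groq.com/openai/v1/chat/completions"))
	// Secret name set explicitly -> kept as-is.
	fmt.Println(fillIfEmpty("MY_GROQ_SECRET", "GROQ_KEY"))
}
```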
27 changes: 27 additions & 0 deletions pkg/aiusechat/usechat_mode_test.go
@@ -0,0 +1,27 @@
// Copyright 2026, Command Line Inc.
// SPDX-License-Identifier: Apache-2.0

package aiusechat

import (
"testing"

"github.com/wavetermdev/waveterm/pkg/aiusechat/uctypes"
"github.com/wavetermdev/waveterm/pkg/wconfig"
)

func TestApplyProviderDefaultsGroq(t *testing.T) {
config := wconfig.AIModeConfigType{
Provider: uctypes.AIProvider_Groq,
}
applyProviderDefaults(&config)
if config.APIType != uctypes.APIType_OpenAIChat {
t.Fatalf("expected API type %q, got %q", uctypes.APIType_OpenAIChat, config.APIType)
}
if config.Endpoint != GroqChatEndpoint {
t.Fatalf("expected endpoint %q, got %q", GroqChatEndpoint, config.Endpoint)
}
if config.APITokenSecretName != GroqAPITokenSecretName {
t.Fatalf("expected API token secret name %q, got %q", GroqAPITokenSecretName, config.APITokenSecretName)
}
}
2 changes: 1 addition & 1 deletion pkg/wconfig/settingsconfig.go
@@ -278,7 +278,7 @@ type AIModeConfigType struct {
DisplayOrder float64 `json:"display:order,omitempty"`
DisplayIcon string `json:"display:icon,omitempty"`
DisplayDescription string `json:"display:description,omitempty"`
Provider string `json:"ai:provider,omitempty" jsonschema:"enum=wave,enum=google,enum=openrouter,enum=nanogpt,enum=openai,enum=azure,enum=azure-legacy,enum=custom"`
Provider string `json:"ai:provider,omitempty" jsonschema:"enum=wave,enum=google,enum=groq,enum=openrouter,enum=nanogpt,enum=openai,enum=azure,enum=azure-legacy,enum=custom"`
APIType string `json:"ai:apitype,omitempty" jsonschema:"enum=google-gemini,enum=openai-responses,enum=openai-chat"`
Model string `json:"ai:model,omitempty"`
ThinkingLevel string `json:"ai:thinkinglevel,omitempty" jsonschema:"enum=low,enum=medium,enum=high"`