Merged
84 changes: 73 additions & 11 deletions agent-schema.json
Original file line number Diff line number Diff line change
@@ -77,11 +77,21 @@
"definitions": {
"ProviderConfig": {
"type": "object",
"description": "Configuration for a custom model provider. Can be used for custom gateways",
"description": "Configuration for a model provider. Defines reusable defaults that models can inherit by referencing the provider name. Supports any provider type (openai, anthropic, google, amazon-bedrock, etc.).",
"properties": {
"provider": {
"type": "string",
"description": "The underlying provider type. Defaults to \"openai\" when not set. Supported values: openai, anthropic, google, amazon-bedrock, dmr, and any built-in alias (requesty, azure, xai, ollama, mistral, etc.).",
"examples": [
"openai",
"anthropic",
"google",
"amazon-bedrock"
]
},
"api_type": {
"type": "string",
"description": "The API schema type to use. Determines which API schema to use.",
"description": "The API schema type to use. Only applicable for OpenAI-compatible providers.",
"enum": [
"openai_chatcompletions",
"openai_responses"
@@ -94,23 +104,75 @@
},
"base_url": {
"type": "string",
"description": "Base URL for the provider's API endpoint (required)",
"description": "Base URL for the provider's API endpoint. Required for OpenAI-compatible providers, optional for native providers.",
"format": "uri",
"examples": [
"https://router.example.com/v1"
]
},
"token_key": {
"type": "string",
"description": "Environment variable name containing the API token. If not set, requests will be sent without authentication.",
"description": "Environment variable name containing the API token. If not set, requests will use the default token for the provider type.",
"examples": [
"CUSTOM_PROVIDER_API_KEY"
"CUSTOM_PROVIDER_API_KEY",
"ANTHROPIC_API_KEY"
]
},
"temperature": {
"type": "number",
"description": "Default sampling temperature for models using this provider.",
"minimum": 0,
"maximum": 2
},
"max_tokens": {
"type": "integer",
"description": "Default maximum number of tokens for models using this provider."
},
"top_p": {
"type": "number",
"description": "Default top-p (nucleus) sampling parameter.",
"minimum": 0,
"maximum": 1
},
"frequency_penalty": {
"type": "number",
"description": "Default frequency penalty.",
"minimum": -2,
"maximum": 2
},
"presence_penalty": {
"type": "number",
"description": "Default presence penalty.",
"minimum": -2,
"maximum": 2
},
"parallel_tool_calls": {
"type": "boolean",
"description": "Whether to enable parallel tool calls by default."
},
"provider_opts": {
"type": "object",
"description": "Provider-specific options passed through to the underlying client.",
"additionalProperties": true
},
"track_usage": {
"type": "boolean",
"description": "Whether to track token usage by default."
},
"thinking_budget": {
"description": "Default reasoning effort/budget for models using this provider. Can be an integer token count or a string effort level.",
"oneOf": [
{
"type": "integer",
"description": "Token budget for reasoning"
},
{
"type": "string",
"description": "Effort level (e.g., \"low\", \"medium\", \"high\", \"none\", \"adaptive\")"
}
]
}
},
"required": [
"base_url"
],
"additionalProperties": false
},
"AgentConfig": {
@@ -359,7 +421,7 @@
"cooldown": {
"type": "string",
"description": "Duration to stick with a successful fallback model before retrying the primary. Only applies after a non-retryable error (e.g., 429 rate limit). Use Go duration format (e.g., '1m', '30s', '2m30s'). Default is '1m'.",
"pattern": "^([0-9]+(ns|us|µs|ms|s|m|h))+$",
"pattern": "^([0-9]+(ns|us|\u00b5s|ms|s|m|h))+$",
"default": "1m",
"examples": [
"1m",
@@ -758,7 +820,7 @@
},
"instruction": {
"type": "string",
"description": "Custom instruction for this MCP server's tools. By default, setting this field replaces the toolset's built-in instructions entirely. To enrich (rather than replace) the original instructions, include the placeholder {ORIGINAL_INSTRUCTIONS} in your text it will be substituted with the toolset's built-in instructions at runtime. For example: '{ORIGINAL_INSTRUCTIONS}\nAlways prefer JSON output.' will prepend the original instructions and append your extra guidance."
"description": "Custom instruction for this MCP server's tools. By default, setting this field replaces the toolset's built-in instructions entirely. To enrich (rather than replace) the original instructions, include the placeholder {ORIGINAL_INSTRUCTIONS} in your text \u2014 it will be substituted with the toolset's built-in instructions at runtime. For example: '{ORIGINAL_INSTRUCTIONS}\nAlways prefer JSON output.' will prepend the original instructions and append your extra guidance."
},
"name": {
"type": "string",
@@ -874,7 +936,7 @@
},
"instruction": {
"type": "string",
"description": "Custom instruction for this toolset. By default, setting this field replaces the toolset's built-in instructions entirely. To enrich (rather than replace) the original instructions, include the placeholder {ORIGINAL_INSTRUCTIONS} in your text it will be substituted with the toolset's built-in instructions at runtime. For example: '{ORIGINAL_INSTRUCTIONS}\nAlways prefer JSON output.' will prepend the original instructions and append your extra guidance."
"description": "Custom instruction for this toolset. By default, setting this field replaces the toolset's built-in instructions entirely. To enrich (rather than replace) the original instructions, include the placeholder {ORIGINAL_INSTRUCTIONS} in your text \u2014 it will be substituted with the toolset's built-in instructions at runtime. For example: '{ORIGINAL_INSTRUCTIONS}\nAlways prefer JSON output.' will prepend the original instructions and append your extra guidance."
},
"toon": {
"type": "string",
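The expanded `ProviderConfig` can be exercised outside cagent with a plain JSON Schema validator. The fragment below reproduces only the fields visible in the hunk above (the `required: [base_url]` clause is omitted here, since the new descriptions make `base_url` optional for native providers); `jsonschema` is the standard Python validator package, not part of cagent:

```python
# Sketch: validating a provider block against the ProviderConfig fields
# added in this PR. The schema fragment is hand-copied from the diff above.
from jsonschema import Draft7Validator

provider_config_schema = {
    "type": "object",
    "properties": {
        "provider": {"type": "string"},
        "api_type": {
            "type": "string",
            "enum": ["openai_chatcompletions", "openai_responses"],
        },
        "base_url": {"type": "string", "format": "uri"},
        "token_key": {"type": "string"},
        "temperature": {"type": "number", "minimum": 0, "maximum": 2},
        "max_tokens": {"type": "integer"},
        "top_p": {"type": "number", "minimum": 0, "maximum": 1},
        "frequency_penalty": {"type": "number", "minimum": -2, "maximum": 2},
        "presence_penalty": {"type": "number", "minimum": -2, "maximum": 2},
        "parallel_tool_calls": {"type": "boolean"},
        "provider_opts": {"type": "object", "additionalProperties": True},
        "track_usage": {"type": "boolean"},
        # thinking_budget accepts either an integer token count
        # or a string effort level, per the oneOf in the diff.
        "thinking_budget": {"oneOf": [{"type": "integer"}, {"type": "string"}]},
    },
    "additionalProperties": False,
}

validator = Draft7Validator(provider_config_schema)

ok = {
    "provider": "anthropic",
    "token_key": "TEAM_ANTHROPIC_KEY",
    "max_tokens": 32768,
    "thinking_budget": "high",
}
bad = {"provider": "anthropic", "temperature": 3.5}  # above the 0-2 range

print(validator.is_valid(ok))   # True
print(validator.is_valid(bad))  # False
```

Note that `thinking_budget: 4096` and `thinking_budget: "adaptive"` both validate, while any property not listed in the schema is rejected by `additionalProperties: false`.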
2 changes: 1 addition & 1 deletion docs/_data/nav.yml
@@ -119,7 +119,7 @@
url: /providers/minimax/
- title: Local Models
url: /providers/local/
- title: Custom Providers
- title: Provider Definitions
url: /providers/custom/

- section: Guides
27 changes: 27 additions & 0 deletions docs/configuration/models/index.md
@@ -189,3 +189,30 @@ models:
```

See [Local Models]({{ '/providers/local/' | relative_url }}) for more examples of custom endpoints.

## Inheriting from Provider Definitions

Models can reference a named provider to inherit shared defaults. Model-level settings always take precedence:

```yaml
providers:
my_anthropic:
provider: anthropic
token_key: MY_ANTHROPIC_KEY
max_tokens: 16384
thinking_budget: high
temperature: 0.5

models:
claude:
provider: my_anthropic
model: claude-sonnet-4-5
# Inherits max_tokens, thinking_budget, temperature from provider

claude_fast:
provider: my_anthropic
model: claude-haiku-4-5
thinking_budget: low # Overrides provider default
```

See [Provider Definitions]({{ '/providers/custom/' | relative_url }}) for the full list of inheritable properties.
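The precedence rule documented above amounts to a shallow merge where model-level settings win. A sketch of that rule (an illustration only, not cagent's actual merge code; the field values are taken from the example above):

```python
# Model-level settings override provider-level defaults;
# anything the model leaves unset is inherited from the provider.
def resolve_model(provider_defaults: dict, model_settings: dict) -> dict:
    merged = dict(provider_defaults)
    merged.update({k: v for k, v in model_settings.items() if v is not None})
    return merged

provider = {"max_tokens": 16384, "thinking_budget": "high", "temperature": 0.5}
model = {"model": "claude-haiku-4-5", "thinking_budget": "low"}

resolved = resolve_model(provider, model)
print(resolved["thinking_budget"])  # low  (model override wins)
print(resolved["max_tokens"])       # 16384  (inherited from provider)
```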
52 changes: 35 additions & 17 deletions docs/configuration/overview/index.md
@@ -46,12 +46,12 @@ rag:
- type: chunked-embeddings
model: openai/text-embedding-3-small

# 6. Providers — optional custom provider definitions
# 6. Providers — optional reusable provider definitions
providers:
my_provider:
api_type: openai_chatcompletions
base_url: https://api.example.com/v1
provider: anthropic # or openai (default), google, amazon-bedrock, etc.
token_key: MY_API_KEY
max_tokens: 16384

# 7. Permissions — agent-level tool permission rules (optional)
# For user-wide global permissions, see ~/.config/cagent/config.yaml
@@ -220,34 +220,52 @@ See [Agent Distribution]({{ '/concepts/distribution/' | relative_url }}) for pub

## Custom Providers Section

Define reusable provider configurations for custom or self-hosted endpoints:
Define reusable provider configurations with shared defaults. Providers can wrap any provider type — not just OpenAI-compatible endpoints:

```yaml
providers:
# OpenAI-compatible custom endpoint
azure:
api_type: openai_chatcompletions
base_url: https://my-resource.openai.azure.com/openai/deployments/gpt-4o
token_key: AZURE_OPENAI_API_KEY

internal_llm:
api_type: openai_chatcompletions
base_url: https://llm.internal.company.com/v1
token_key: INTERNAL_API_KEY
# Anthropic with shared model defaults
team_anthropic:
provider: anthropic
token_key: TEAM_ANTHROPIC_KEY
max_tokens: 32768
thinking_budget: high

models:
azure_gpt:
provider: azure # References the custom provider
provider: azure
model: gpt-4o

claude:
provider: team_anthropic
model: claude-sonnet-4-5
# Inherits max_tokens, thinking_budget from provider

agents:
root:
model: azure_gpt
model: claude
```

| Field | Description |
| ----------- | -------------------------------------------------------------------- |
| `api_type` | API schema: `openai_chatcompletions` (default) or `openai_responses` |
| `base_url` | Base URL for the API endpoint |
| `token_key` | Environment variable name for the API token |

See [Custom Providers]({{ '/providers/custom/' | relative_url }}) for more details.
| Field | Description |
| --------------------- | ---------------------------------------------------------------------------------------- |
| `provider` | Underlying provider type: `openai` (default), `anthropic`, `google`, `amazon-bedrock`, etc. |
| `api_type` | API schema: `openai_chatcompletions` (default) or `openai_responses`. OpenAI-only. |
| `base_url` | Base URL for the API endpoint. Required for OpenAI-compatible providers. |
| `token_key` | Environment variable name for the API token. |
| `temperature` | Default sampling temperature. |
| `max_tokens` | Default maximum response tokens. |
| `thinking_budget` | Default reasoning effort/budget. |
| `top_p` | Default top-p sampling parameter. |
| `frequency_penalty` | Default frequency penalty. |
| `presence_penalty` | Default presence penalty. |
| `parallel_tool_calls` | Enable parallel tool calls by default. |
| `track_usage` | Track token usage by default. |
| `provider_opts` | Provider-specific options. |

See [Provider Definitions]({{ '/providers/custom/' | relative_url }}) for more details.
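Most of the table's fields can be combined in one definition. A sketch of a provider using the tuning defaults and `provider_opts` (the provider name and the keys under `provider_opts` are hypothetical — what a given backend accepts is up to its client, per the "passed through to the underlying client" wording in the schema):

```yaml
providers:
  tuned_openai:
    provider: openai
    token_key: OPENAI_API_KEY
    temperature: 0.2
    top_p: 0.9
    parallel_tool_calls: true
    track_usage: true
    provider_opts:
      organization: my-org   # hypothetical pass-through option
```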
2 changes: 1 addition & 1 deletion docs/getting-started/introduction/index.md
@@ -30,7 +30,7 @@ their model, personality, tools, and how they collaborate — and docker-agent h
<div class="feature">
<div class="feature-icon">🧠</div>
<h3>Multi-Model Support</h3>
<p>OpenAI, Anthropic, Google Gemini, AWS Bedrock, Docker Model Runner, and custom OpenAI-compatible providers.</p>
<p>OpenAI, Anthropic, Google Gemini, AWS Bedrock, Docker Model Runner, and reusable provider definitions with shared defaults.</p>

</div>
<div class="feature">