1 change: 1 addition & 0 deletions README.md
@@ -32,6 +32,7 @@
- [x] Added custom Sentry error-reporting support [docs](https://ccb.agent-aura.top/docs/internals/sentry-setup)
- [x] Added custom GrowthBook support (GrowthBook is also open source, so you can now configure your own remote feature-flag platform) [docs](https://ccb.agent-aura.top/docs/internals/growthbook-adapter)
- [x] Custom login mode, which you can use to configure Claude's models!
- [x] OpenAI API mode support [docs](https://ccb.agent-aura.top/docs/features/openai)
- [ ] V6: large-scale refactor of the legacy spaghetti code, with full modular repackaging
- [ ] V6 will live on a brand-new branch; the main branch will then be archived as the historical version

61 changes: 32 additions & 29 deletions bun.lock

Large diffs are not rendered by default.

132 changes: 132 additions & 0 deletions docs/features/openai.mdx
@@ -0,0 +1,132 @@
---
title: "OpenAI API Mode Support"
description: "Switch to an OpenAI-compatible API via environment variables; supports the GPT series, DeepSeek, Qwen, and any other OpenAI-format provider"
keywords: ["OpenAI", "API", "environment variables", "model switching"]
---

Claude Code ships with built-in support for OpenAI-compatible providers. Switching is done entirely through environment variables, with no code changes required.

---

## Method 1: Config File (Recommended)

Edit `~/.claude/settings.json`. Settings placed here take effect only inside Claude Code and do not pollute your system environment variables.

```json
{
"env": {
"CLAUDE_CODE_USE_OPENAI": "1",
"OPENAI_API_KEY": "sk-xxx",
"OPENAI_BASE_URL": "https://api.openai.com/v1",
"OPENAI_DEFAULT_MODEL": "gpt-4o",
"OPENAI_MAX_TOKENS": "128000"
}
}
```

---

## Method 2: Environment Variables

### macOS / Linux

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-xxx
export OPENAI_BASE_URL=https://api.openai.com/v1
export OPENAI_DEFAULT_MODEL=gpt-4o

claude
```

To make this permanent, write it into your shell config:

```bash
# bash
echo 'export CLAUDE_CODE_USE_OPENAI=1' >> ~/.bashrc

# zsh
echo 'export CLAUDE_CODE_USE_OPENAI=1' >> ~/.zshrc
```

### Windows

**Command Prompt (CMD)**

```bat
set CLAUDE_CODE_USE_OPENAI=1
set OPENAI_API_KEY=sk-xxx
set OPENAI_BASE_URL=https://api.openai.com/v1
set OPENAI_DEFAULT_MODEL=gpt-4o

claude
```

**PowerShell**

```powershell
$env:CLAUDE_CODE_USE_OPENAI = "1"
$env:OPENAI_API_KEY = "sk-xxx"
$env:OPENAI_BASE_URL = "https://api.openai.com/v1"
$env:OPENAI_DEFAULT_MODEL = "gpt-4o"

claude
```

To make this permanent (user scope; the snippet below uses the `"User"` target):

```powershell
[System.Environment]::SetEnvironmentVariable("CLAUDE_CODE_USE_OPENAI", "1", "User")
[System.Environment]::SetEnvironmentVariable("OPENAI_API_KEY", "sk-xxx", "User")
```

---

## Environment Variable Reference

### Required

| Variable | Description |
|------|------|
| `CLAUDE_CODE_USE_OPENAI` | Set to `1` to enable the OpenAI provider |
| `OPENAI_API_KEY` | API key |
| `OPENAI_BASE_URL` | API base URL; defaults to `https://api.openai.com/v1` and can be replaced with any third-party compatible service (DeepSeek, Qwen, Azure OpenAI, etc.) |
| `OPENAI_DEFAULT_MODEL` | Model name, e.g. `gpt-4o`, `deepseek-chat` |

### Optional

| Variable | Default | Description |
|------|--------|------|
| `OPENAI_MAX_TOKENS` | `min(requested, 16384)` | Overrides the max_tokens cap; for large-output models such as o3, set `65536` or higher |
| `OPENAI_DEBUG_PROXY` | none | HTTP proxy address for traffic capture and debugging, e.g. `http://127.0.0.1:9005` |
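The precedence described in the tables above can be sketched as follows. This is an illustrative sketch only, not the project's actual code; `resolveMaxTokens` is a hypothetical name, and the `16384` default mirrors the documented behavior:

```typescript
// Sketch: how OPENAI_MAX_TOKENS overrides the default max_tokens cap.
// Hypothetical helper for illustration; not the real implementation.
function resolveMaxTokens(requested: number): number {
  const raw = process.env.OPENAI_MAX_TOKENS
  if (raw !== undefined) {
    const parsed = Number(raw)
    // A positive override wins outright (e.g. 65536 for o3).
    if (parsed > 0) return parsed
  }
  // Otherwise cap the requested value at the documented 16384 default.
  return Math.min(requested, 16384)
}
```

In other words, an explicit `OPENAI_MAX_TOKENS` is authoritative, while unset or invalid values fall back to the capped default.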

---

## Common Scenarios

### DeepSeek

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-xxx
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_DEFAULT_MODEL=deepseek-chat
```

### Azure OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=<azure-key>
export OPENAI_BASE_URL=https://<resource>.openai.azure.com/openai/deployments/<deployment>/
export OPENAI_DEFAULT_MODEL=gpt-4o
```

### Large-Output Models (o1 / o3)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-xxx
export OPENAI_DEFAULT_MODEL=o3
export OPENAI_MAX_TOKENS=65536
```
1 change: 1 addition & 0 deletions package.json
@@ -93,6 +93,7 @@
"@sentry/node": "^10.47.0",
"@smithy/core": "^3.23.13",
"@smithy/node-http-handler": "^4.5.1",
"openai": "^6.0.0",
"@types/bun": "^1.3.11",
"@types/cacache": "^20.0.1",
"@types/plist": "^3.0.5",
9 changes: 8 additions & 1 deletion src/services/api/claude.ts
@@ -3384,8 +3384,15 @@ function isMaxTokensCapEnabled(): boolean {
}

export function getMaxOutputTokensForModel(model: string): number {
const maxOutputTokens = getModelMaxOutputTokens(model)
// In OpenAI mode, OPENAI_MAX_TOKENS is the authoritative source.
// Apply early so the correct value propagates through the entire pipeline
// (logging, context window calculations, compaction, etc.).
if (isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) && !!process.env.OPENAI_MAX_TOKENS) {
const openaiMaxTokens = Number(process.env.OPENAI_MAX_TOKENS)
if (openaiMaxTokens > 0) return openaiMaxTokens
}

const maxOutputTokens = getModelMaxOutputTokens(model)
// Slot-reservation cap: drop default to 8k for all models. BQ p99 output
// = 4,911 tokens; 32k/64k defaults over-reserve 8-16× slot capacity.
// Requests hitting the cap get one clean retry at 64k (query.ts
Expand Down
11 changes: 11 additions & 0 deletions src/services/api/client.ts
@@ -296,6 +296,17 @@ export async function getAnthropicClient({
// we have always been lying about the return type - this doesn't support batching or models
return new AnthropicVertex(vertexArgs) as unknown as Anthropic
}
if (isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) {
const { createOpenAIAdapter } = await import('./openaiAdapter.js')
const openaiApiKey = process.env.OPENAI_API_KEY
const openaiBaseURL = process.env.OPENAI_BASE_URL
// we have always been lying about the return type - OpenAI adapter mimics beta.messages.create
return createOpenAIAdapter({
apiKey: openaiApiKey,
...(openaiBaseURL ? { baseURL: openaiBaseURL } : {}),
defaultModel: model,
}) as unknown as Anthropic
}

// Determine authentication method based on available tokens
const clientConfig: ConstructorParameters<typeof Anthropic>[0] = {