Releases: VladoIvankovic/Codeep

v1.3.31 — Claude Opus 4.7 + VS Code Extension

17 Apr 13:11

What's new

Claude Opus 4.7 support

  • Added claude-opus-4-7 as the new default model for Anthropic
  • claude-opus-4-6 remains available as a selectable option
  • Updated context window and pricing tables

API key UX

  • Skip API key prompt for providers that don't require one

VS Code Extension v0.1.7

  • Collapsible tool call list — "▸ Working... (14)" header collapsed by default, click to expand
  • Input placeholder shows "Working..." with red border while agent runs
  • Tool item overflow fix — long file paths truncated with ellipsis
  • Stop button recovery — correctly reappears if CLI times out and retries
  • Permission handler leak fix — orphan permission cards auto-cancelled on Stop
  • clearChat resets UI state correctly (streaming indicator, buttons)
  • Session load restores manual mode
  • Process exit rejects all pending requests — no more infinite hangs
  • cancelAndSend state fix — suppressNextResponseEnd correctly reset on error
  • initClient() is now idempotent — prevents duplicate event listener stacking
  • Settings panel no longer clears API key input while user is typing
  • Copy button shows "Failed" on clipboard error instead of silently failing
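
Several of these fixes guard against re-entrancy. A minimal sketch of what an idempotent init guard looks like, with illustrative names (`ClientLike` and the module-level `client` are assumptions, not the extension's actual internals):

```typescript
// Sketch: repeated initClient() calls must not stack event listeners.
type Listener = () => void;

class ClientLike {
  listeners: Listener[] = [];
  on(fn: Listener) { this.listeners.push(fn); }
}

let client: ClientLike | null = null;

function initClient(): ClientLike {
  // Guard: reuse the existing client instead of constructing a new
  // one and re-registering the same listeners a second time.
  if (client) return client;
  client = new ClientLike();
  client.on(() => { /* handle response events */ });
  return client;
}
```

Calling `initClient()` twice returns the same instance with a single registered listener, which is the property the fix above restores.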

v1.3.30

17 Apr 13:11

1.3.30

v1.3.29

17 Apr 13:12

VS Code Extension — Interrupt & Reply + Navigation Fixes

15 Apr 22:17

VS Code Extension v0.1.4

The VS Code extension now supports interrupt-and-reply — type a message while the agent is working and press Enter to cancel and send your reply immediately, without stopping the agent first.

What's new

VS Code Extension (v0.1.4)

  • Interrupt & reply — type a message while agent is running and press Enter to cancel + send in one action
  • Published on VS Code Marketplace: VladoIvankovic.codeep

CLI

  • Interrupt & reply — same feature in the terminal: type while agent works, Enter cancels and sends
  • Arrow key fix — left/right arrows now correctly navigate the Allow/Deny/Always Allow permission dialog (left = previous, right = next)
  • Demo GIF added to README

v1.3.0 — Ollama Local AI Support

08 Apr 13:29

v1.3.0 — Ollama Local AI Support

New: Ollama Provider

Run AI models fully locally — or on a remote server — without an API key.

  • Select provider with /provider ollama
  • Configure URL via /settings → Ollama URL (default: http://localhost:11434)
  • Pick from installed models dynamically with /model
  • For remote Ollama: set OLLAMA_HOST=0.0.0.0 on the server
  • Node v24 compatibility: uses node:http transport to avoid undici AggregateError

New: /memory command

Annotate project context from the terminal — stored in .codeep/intelligence.json.

  • /memory <note> — add a note
  • /memory list — show all notes
  • /memory remove <n> — remove by index
  • /memory clear — remove all notes

Agent Confirmation/Permission (ACP)

Granular per-tool confirmation controls in /settings:

  • Confirm: delete_file — ON by default
  • Confirm: execute_command — ON by default
  • Confirm: write_file / edit_file — OFF by default (opt-in)
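
A sketch of the per-tool lookup these defaults imply (the map and function names are illustrative, not Codeep's actual settings keys):

```typescript
// Default confirmation map mirroring the settings listed above.
const confirmDefaults: Record<string, boolean> = {
  delete_file: true,
  execute_command: true,
  write_file: false,
  edit_file: false,
};

function needsConfirmation(
  tool: string,
  overrides: Record<string, boolean> = {},
): boolean {
  // User overrides from /settings win; unknown tools confirm by default
  // (a conservative assumption, not stated in the release notes).
  return overrides[tool] ?? confirmDefaults[tool] ?? true;
}
```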

API Endpoint Detection

Automatic detection of API endpoints on project scan (Next.js App/Pages Router, Express, Laravel, Django). Results are included in the agent system prompt.

Dashboard improvements

  • Archive confirmation dialog before archiving projects
  • "View all N →" link when more than 10 projects exist
  • Pending / Done task tabs in the tasks section

v1.2.160

07 Apr 20:54

What's new

/memory command

Add custom notes to project intelligence directly from the CLI:

/memory Always use pnpm, never npm
/memory Main entry point is src/renderer/main.ts

Notes are included in every AI and agent conversation for this project.

Agent now uses project intelligence

The agent system prompt now includes the full intelligence.json context — frameworks, architecture, entry points, API endpoints, and custom notes. Previously only chat mode had access to this data.

Configurable tool confirmations

In dangerous confirmation mode, you can now choose exactly which tools require approval via /settings:

  • Confirm: delete_file — ON by default
  • Confirm: execute_command — ON by default
  • Confirm: write_file / edit_file — OFF by default

API endpoint detection

/scan now detects API routes in your project:

  • Next.js App Router (app/**/route.ts)
  • Next.js Pages Router (pages/api/**)
  • Express / Fastify (app.get('/path'))
  • Laravel (Route::get('/path'))
  • Django (urls.py)
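
For the framework-call patterns (Express/Fastify, Laravel), detection can be as simple as a per-line regex; a sketch under that assumption (the patterns are illustrative, not Codeep's actual matchers):

```typescript
// Rough per-line matchers for API route declarations.
const routePatterns: RegExp[] = [
  /\b(?:app|router|fastify)\.(get|post|put|patch|delete)\(\s*['"`]([^'"`]+)/,
  /\bRoute::(get|post|put|patch|delete)\(\s*['"]([^'"]+)/, // Laravel
];

function detectRoute(line: string): { method: string; path: string } | null {
  for (const re of routePatterns) {
    const m = line.match(re);
    if (m) return { method: m[1].toUpperCase(), path: m[2] };
  }
  return null;
}
```

File-location patterns (App Router `route.ts`, `pages/api/**`, `urls.py`) would be matched on paths rather than file contents.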

Dashboard improvements

  • Confirmation dialog before archiving a project
  • "View all N →" link when you have more than 10 projects
  • Pending / Done tabs for tasks with counts

v1.2.152

04 Apr 14:52

What's new

Security

  • Fixed unauthenticated access to /api/tasks — now requires x-sync-token header
  • Added rate limiting to all API endpoints (stats, tasks, progress, sync, keys, cleanup)
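
The token check amounts to comparing the `x-sync-token` header against the configured secret before any handler runs, and the rate limit to a per-key counter; a minimal sketch of both (not the server's actual middleware, limits are illustrative):

```typescript
// Header-based auth check for /api/tasks-style endpoints.
interface RequestLike { headers: Record<string, string | undefined>; }

function isAuthorized(req: RequestLike, expectedToken: string): boolean {
  return req.headers["x-sync-token"] === expectedToken;
}

// Fixed-window rate limiter: allow `limit` hits per key per window.
function makeRateLimiter(limit: number, windowMs: number) {
  const hits = new Map<string, { count: number; resetAt: number }>();
  return (key: string, now: number): boolean => {
    const entry = hits.get(key);
    if (!entry || now >= entry.resetAt) {
      hits.set(key, { count: 1, resetAt: now + windowMs });
      return true;
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}
```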

New features

  • Token budget warning — agent warns at 80% and 95% of model's context window, using accurate per-model context sizes
  • /sync command — sync learning preferences and profiles across machines
  • Auto-sync on startup — learning preferences are automatically pulled from cloud if newer than local
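
The budget warning reduces to comparing used tokens against fractions of the per-model context window; a sketch of the threshold logic (the window sizes here are examples, not the shipped table):

```typescript
// Example per-model context windows (illustrative values only).
const contextWindow: Record<string, number> = {
  "claude-opus": 1_000_000,
  "deepseek-chat": 128_000,
};

type BudgetLevel = "ok" | "warn80" | "warn95";

function budgetLevel(model: string, usedTokens: number): BudgetLevel {
  const window = contextWindow[model] ?? 128_000; // conservative fallback
  const ratio = usedTokens / window;
  if (ratio >= 0.95) return "warn95";
  if (ratio >= 0.8) return "warn80";
  return "ok";
}
```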

Reliability

  • Retry logic for all cloud sync calls (exponential backoff, up to 2 retries on network errors and 5xx)
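
The backoff schedule can be expressed as a pure function of the attempt number; a sketch assuming a 500 ms base (the actual base delay and any jitter are not specified in these notes):

```typescript
const BASE_DELAY_MS = 500; // assumed base; not taken from the release notes

// Exponential backoff: attempt 0 -> 500 ms, attempt 1 -> 1000 ms, ...
function backoffDelay(attempt: number): number {
  return BASE_DELAY_MS * 2 ** attempt;
}

// Retry a failing async call up to `retries` extra times, sleeping
// the backoff delay between attempts; the final error is rethrown.
async function withRetry<T>(fn: () => Promise<T>, retries = 2): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // out of retries: surface the error
      await new Promise((r) => setTimeout(r, backoffDelay(attempt)));
    }
  }
}
```

A real implementation would also inspect the error (retry only on network failures and 5xx, as described above) before sleeping.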

Developer experience

  • Debug logging now writes to ~/.codeep/logs/ — use CODEEP_DEBUG=1 to enable, tail -f to follow without breaking the UI
  • Updated TypeScript 5.3 → 6.0 and minimum Node.js 18 → 20

Data & accuracy

  • Fixed model context window sizes (Claude Opus/Sonnet: 200k → 1M, DeepSeek: 64k → 128k, MiniMax corrected)
  • Updated model pricing across all providers

Bug fixes

  • Fixed 23 failing tests

v1.2.135

01 Apr 20:45

What's new

OpenAI GPT-5.4 support

  • Added GPT-5.4, GPT-5.4 Mini, and GPT-5.4 Nano models
  • Fixed max_completion_tokens compatibility (GPT-5.4+ requirement)

Updated provider model lists

  • Z.AI — GLM-5.1 (default), GLM-5 Turbo, GLM-5
  • MiniMax — MiniMax M2.7
  • OpenAI — GPT-5.4 (default), GPT-5.4 Mini, GPT-5.4 Nano
  • Anthropic — Claude Opus (default), Claude Sonnet, Claude Haiku
  • Google — Gemini 3.1 Pro (default), Gemini 3 Flash

Higher default limits

  • Agent iterations: raised to 10,000 (was 100)
  • Agent duration: raised to 480 min / 8h (was 20 min)
  • Max tokens: raised to 32,768 (was 8,192) — better for Opus and large responses
  • Progress bar no longer capped at 500 iterations

Bug fixes

  • API error messages now show actual error details instead of generic "API error"
  • Fixed project type detection (PHP/HTML projects no longer show "Unknown")

v1.2.130

01 Apr 12:58

Fix project type detection using scanned file extensions

- Run extension fallback for both Unknown and generic types
- Skip non-code extensions (svg, png, etc.) when finding dominant type
- Add HTML/CSS, TypeScript, JavaScript to extension type map

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

v1.2.129

01 Apr 12:45

Use scanned file extensions as fallback for project type detection

If config-file detection returns Unknown, count file extensions from
the directory scan (depth 3) and use the dominant extension to determine type.
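
The fallback can be sketched as a frequency count over scanned extensions, skipping non-code ones, then mapping the winner to a type (the skip list and extension map here are illustrative, not the actual tables):

```typescript
// Non-code extensions ignored when picking the dominant type (illustrative).
const SKIP = new Set(["svg", "png", "jpg", "ico", "lock"]);

// Extension -> project type map (partial, illustrative).
const typeByExt: Record<string, string> = {
  ts: "TypeScript",
  js: "JavaScript",
  php: "PHP",
  html: "HTML/CSS",
  css: "HTML/CSS",
};

function dominantType(extensions: string[]): string {
  const counts = new Map<string, number>();
  for (const ext of extensions) {
    if (SKIP.has(ext)) continue;
    counts.set(ext, (counts.get(ext) ?? 0) + 1);
  }
  let best = "";
  let bestCount = 0;
  for (const [ext, n] of counts) {
    if (n > bestCount) { best = ext; bestCount = n; }
  }
  return typeByExt[best] ?? "Unknown";
}
```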

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>