A single binary that gives you a complete AI agent runtime — chat with your agents, give them tasks, and let them work autonomously. Built-in memory, budget controls, scheduling, and a polished web UI.
Website • Hub • Contribute • Discord
```
xpressclaw init
xpressclaw up
```

That's it. Open http://localhost:8935 and start chatting with your agents.
Most agent frameworks give you a library. xpressclaw gives you a running system — a ~12MB binary with everything included: server, web UI, LLM router, and agent management. No Python environment to configure, no Docker Compose sprawl, no YAML templating engines.
- Chat-first interface — Talk to your agents in a messaging UI, not a terminal. `@mention` agents in conversations, just like Slack.
- Single binary, zero dependencies — Download one file, run it. The server, API, and web frontend are all embedded.
- Native desktop app — Tauri-based `.app`/`.dmg` with system tray. Runs in the background, always available.
- Production-tested architecture — Built on the same agent orchestration patterns that power Xpress AI's enterprise platform, deployed at regulated financial institutions.
- Local-first, cloud-optional — Works with Ollama out of the box. Add OpenAI or Anthropic keys when you need them.
- Secure by default — Agents run in Docker containers. Budget controls prevent runaway costs. No exceptions.
The primary interface is a messaging UI. Create conversations, add agents, and talk to them — individually or in groups. Agents respond via the configured LLM (local or cloud).
Agents pick up tasks from a queue and work through them. Schedule recurring work with cron expressions. Define SOPs (Standard Operating Procedures) so agents perform consistently.
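As a sketch, a recurring task might be scheduled like this. The `tasks create` subcommand and `--agent` flag appear elsewhere in this README; the `--cron` flag is an illustrative assumption for the cron scheduling mentioned above, so check `xpressclaw tasks create --help` for the actual option name:

```
# Hypothetical: run a summary task every day at 09:00.
# --cron is an assumption; the cron expression itself is standard five-field syntax.
xpressclaw tasks create "Summarize top 10 HN stories" \
  --agent atlas \
  --cron "0 9 * * *"
```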
Zettelkasten-style knowledge base with vector search (sqlite-vec). Agents remember context across sessions and retrieve relevant information automatically.
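Memory can also be inspected from the CLI. A minimal sketch using the `memory` subcommands listed in the CLI reference (`list`, `search`, `add`); exact output format is not specified here:

```
# Store a note, then retrieve it later via vector search.
xpressclaw memory add "Standup moved to 10:00 on Mondays"
xpressclaw memory search "when is standup?"
```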
- Local: Qwen 3.5, Llama 3, Mistral, and more via Ollama
- Cloud: Claude (Anthropic), GPT-4o (OpenAI), and 100+ models via OpenRouter
- Framework agnostic: Agent harnesses for Claude SDK, LangChain, Xaibo, and a generic harness
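Provider selection lives in `xpressclaw.yaml`. A sketch of the `llm` section, assuming the keys shown in the generated sample config (`default_provider`, `local_model`); any other key names would be assumptions:

```yaml
llm:
  default_provider: local        # route to Ollama by default
  # local_model: qwen3.5:latest
  # For cloud providers, export OPENAI_API_KEY or ANTHROPIC_API_KEY
  # and switch the provider, e.g.:
  # default_provider: anthropic
```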
- Container isolation — each agent runs in its own Docker container
- Budget controls — daily/monthly spending limits per agent and globally
- Tool permissions — explicit allow-list; agents only access what you grant
- Everything local — your data never leaves your machine unless you choose a cloud LLM
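A budget section consistent with the generated sample config might look like this. Only `daily` and `on_exceeded` appear in the sample config; the `monthly` key is an illustrative assumption based on the monthly limits mentioned above:

```yaml
system:
  budget:
    daily: $20.00        # from the sample config
    on_exceeded: pause   # pause agents instead of continuing to spend
    # monthly: $200.00   # assumption: key name for the monthly limit
  isolation: docker
```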
Activity logs, budget dashboards, agent status monitoring. Know what your agents did at 3am.
Grab the latest release from GitHub Releases.
```
xpressclaw init
xpressclaw up
# Open http://localhost:8935
```

Download `xpressclaw.dmg` from Releases — double-click to install. The app runs in the system tray.
See Building below.
- Docker or Podman (required for agent container isolation)
- Ollama (optional, for local LLM — `ollama pull qwen3.5:latest`)
- Or an API key for Claude / OpenAI / OpenRouter
Chat with agents from the web UI:
Create a conversation, add an agent, and start talking. Use @atlas to mention a specific agent in a multi-agent conversation.
Schedule recurring tasks:
```
xpressclaw tasks create "Summarize top 10 HN stories" --agent atlas
```

Review what happened while you were away:

```
xpressclaw logs
xpressclaw status
xpressclaw budget
```

Define SOPs for consistent behavior:
```yaml
name: weekly-report
steps:
  - Check JIRA for completed tickets this week
  - Summarize key accomplishments
  - Identify blockers and risks
  - Draft report and send to team channel
```

Interactive CLI chat:

```
xpressclaw chat atlas
```

`xpressclaw init` creates a `xpressclaw.yaml` in your project:
```yaml
system:
  budget:
    daily: $20.00
    on_exceeded: pause
  isolation: docker

agents:
  - name: atlas
    backend: generic
    role: |
      You are a helpful assistant.
    memory:
      near_term_slots: 8
      eviction: least-recently-relevant

llm:
  default_provider: local
  # local_model: qwen3.5:latest
  # Set OPENAI_API_KEY or ANTHROPIC_API_KEY env vars for cloud providers
```

- Bazel 8.2+ (via Bazelisk)
- Rust (stable toolchain, used by Bazel and for fmt/clippy)
- LLVM (provides `libclang`, required by llama.cpp bindings)
- CMake (required by llama.cpp build)
- Node.js 18+ (for the frontend)
- Docker (for running agents)
```
git clone https://github.com/XpressAI/xpressclaw.git
cd xpressclaw

# Build CLI, core, and server (includes frontend)
./build.sh

# Or with a clean build
./build.sh --clean
```

```
# CLI only
bazel build //crates/xpressclaw-cli:xpressclaw

# Core library
bazel build //crates/xpressclaw-core:xpressclaw-core

# Server
bazel build //crates/xpressclaw-server:xpressclaw-server

# The CLI binary is at bazel-bin/crates/xpressclaw-cli/xpressclaw
```

```
# Build everything including the Tauri desktop app
./build.sh

# For signed/notarized macOS builds
./build-signed.sh
```

Agent harnesses are Docker images that run your agents in isolation:
```
cd harnesses

# Build all harness images
docker buildx bake

# Or build individually
docker build -t xpressclaw-harness-base ./base
docker build -t xpressclaw-harness-generic ./generic
docker build -t xpressclaw-harness-claude-sdk ./claude-sdk
```

```
# Via Bazel
bazel test //crates/xpressclaw-core:core_test //crates/xpressclaw-server:server_test

# Frontend type check
cd frontend && npm run check

# Formatting and linting (still via Cargo)
cargo fmt -p xpressclaw-core -p xpressclaw-server -p xpressclaw-cli -p xpressclaw-tauri -- --check
cargo clippy -p xpressclaw-core -p xpressclaw-server -p xpressclaw-cli -p xpressclaw-tauri --all-targets -- -D warnings
```

```
# Terminal 1: Run the Rust server with auto-reload
cargo run -- up

# Terminal 2: Run the frontend dev server with hot reload
cd frontend && npm run dev

# The frontend dev server proxies API calls to localhost:8935
```

xpressclaw is a Cargo workspace with four crates:
| Crate | Purpose |
|---|---|
| `xpressclaw-core` | Business logic: config, SQLite + sqlite-vec, agents, memory, tasks, budget, LLM router, Docker management, MCP tools |
| `xpressclaw-server` | Axum REST API, SSE streaming, embedded SvelteKit frontend (rust-embed) |
| `xpressclaw-cli` | 10 CLI commands via clap: init, up, down, status, chat, tasks, memory, budget, sop, logs |
| `xpressclaw-tauri` | Native desktop app with system tray (Tauri v2) |
```
xpressclaw (single ~12MB binary)
 +-- Axum server (REST API + embedded SvelteKit frontend)
 +-- LLM Router (Ollama / OpenAI / Anthropic)
 +-- SQLite + sqlite-vec (tasks, memory, conversations, budget)
 +-- Docker Manager (agent container lifecycle)
 +-- Agent Harnesses (isolated Python containers per backend)
```
Key design decisions:
- Single binary — server, API, frontend, and CLI in one executable
- Docker required — agent isolation is not optional
- SQLite for everything — tasks, memory, embeddings, conversations, budget
- OpenAI-compatible protocol — harnesses expose `/v1/chat/completions`
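Because harnesses speak the OpenAI wire protocol, any OpenAI-compatible client can talk to one directly. A minimal sketch with curl; the host, port, and model name are illustrative (harnesses normally listen inside their containers), only the `/v1/chat/completions` path and request shape follow the standard protocol:

```
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "atlas",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```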
```
xpressclaw init           Initialize workspace with config + data dir
xpressclaw up [--detach]  Start the server and agents
xpressclaw down           Stop all running agents
xpressclaw status         Show agent status and budget summary
xpressclaw chat <agent>   Interactive chat in the terminal
xpressclaw tasks          Task management (list, create, update, delete)
xpressclaw memory         Memory inspection (list, search, add)
xpressclaw budget         Budget report and usage history
xpressclaw sop            SOP management (list, create, run)
xpressclaw logs           Activity log viewer
```

Default port: 8935 (override with `--port`).
xpressclaw is the open-source foundation. When your team needs collaboration, visual workflows, compliance, and enterprise support — Xpress AI has you covered.
| | xpressclaw (Free) | Xpress AI (Enterprise) |
|---|---|---|
| Autonomous AI agents | ✅ | ✅ |
| Chat-first web UI | ✅ | ✅ |
| SOPs & scheduling | ✅ | ✅ |
| Local model support | ✅ | ✅ |
| Budget controls | ✅ | ✅ |
| Team collaboration | | ✅ |
| Visual workflow builder (Xircuits) | | ✅ |
| iOS & Android apps | | ✅ |
| On-premise deployment | | ✅ |
| Role-based access control | | ✅ |
| Audit logging & compliance | | ✅ |
| Dedicated support & SLA | | ✅ |
We welcome contributions! See CONTRIBUTING.md for guidelines.
```
git clone https://github.com/XpressAI/xpressclaw.git
cd xpressclaw
./build.sh
```

- Website: xpressclaw.ai
- Hub: hub.xpressclaw.ai
- Discord: discord.com/invite/vgEg2ZtxCw
- Twitter/X: @xpressclaw
- Enterprise: xpress.ai
Built by Xpress AI — the team behind enterprise agent platforms for regulated industries.

