Build AI bot teams for company-like work. Define goals and assemble teams (programmers, artists, SFX, etc.); OpenClaw workers execute tasks via LangGraph. An optional multi-run mode learns from failed runs via RAG (ChromaDB, with a JSON fallback).
```
pnpm install
pnpm run build

# Web UI (http://localhost:8000)
pnpm run web

# Work sessions (starts web dashboard automatically)
pnpm run work
teamclaw work --runs 5   # Multi-run with lesson learning
teamclaw work            # CLI-only (no dashboard)
```

Requirements: Node.js >= 20, pnpm. For local dev without OpenClaw: `ollama pull qwen2.5-coder:7b`.
- Coordinator — Decomposes user goals into subtasks, assigns by role
- WorkerBots — Execute tasks via OpenClaw (RealSparki) or Ollama (MockSparki for local dev)
- Team Templates — Game Dev, Startup, Content
- Vector Memory — RAG over lessons (ChromaDB or JSON fallback)
- Web UI — Real-time workflow at http://localhost:8000
```
src/
├── core/           # State, config, orchestration, knowledge-base
├── agents/         # Coordinator, WorkerBot, Analyst
├── interfaces/     # Sparki SDK (RealSparki / MockSparki)
├── web/            # Fastify server + static terminal.html
├── cli.ts          # CLI entry
└── work-runner.ts  # Work session logic
```
- Onboarding (recommended): Run `teamclaw onboard` for an interactive wizard that sets up the worker URL, team template, goal, and optionally the LiteLLM gateway. If you enable LiteLLM, it writes `GATEWAY_URL`, `TEAM_MODEL`, and `LITELLM_CONFIG_PATH` to `.env` and generates `llm-config.yaml` if missing. Then run `teamclaw gateway start` (or `docker compose --profile gateway up`) and ensure those env vars are loaded. Re-running onboarding is idempotent and preserves existing gateway settings.
- Web UI: Run `pnpm run web` and set the OpenClaw Worker URL in the splash screen before starting. Leave it empty for local Ollama (MockSparki).
- Config file: Copy `teamclaw.config.example.json` to `teamclaw.config.json` and set `workers`, or use a single `OPENCLAW_WORKER_URL` in `.env`.
- .env (advanced): Copy `.env.example` to `.env` for env-only overrides.
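For illustration, a minimal `teamclaw.config.json` might look like the sketch below. The field names (`template`, `workers`, `goal`, `creativity`) come from the configuration options this README lists, but the exact shape — especially the role-keyed `workers` map and the example values — is an assumption; `teamclaw.config.example.json` is the authoritative reference.

```json
{
  "template": "game_dev",
  "goal": "Prototype a small 2D platformer",
  "creativity": 0.7,
  "workers": {
    "programmer": "http://localhost:3000"
  }
}
```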
- Templates: `game_dev`, `startup`, `content` (set via the Web UI or `teamclaw.config.json`).
- Config file (`teamclaw.config.json`): `template`, `workers` (per-bot URLs), `goal`, `creativity` (0–1).
- Precedence: Web UI worker URL > `teamclaw.config.json` `workers` > `OPENCLAW_WORKER_URL` env.
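The precedence above can be sketched as a small resolution function. This is an illustration of the documented order only — the function name and signature are hypothetical, not TeamClaw's actual API:

```typescript
// Illustrative sketch of the documented worker-URL precedence:
// Web UI URL > teamclaw.config.json "workers" > OPENCLAW_WORKER_URL env.
// Names and signature are hypothetical, not TeamClaw's real implementation.
function resolveWorkerUrl(
  webUiUrl: string | undefined,                       // set in the Web UI splash screen
  configWorkers: Record<string, string> | undefined,  // "workers" from teamclaw.config.json
  role: string,                                       // bot role, e.g. "programmer"
  envUrl: string | undefined,                         // OPENCLAW_WORKER_URL
): string | undefined {
  if (webUiUrl) return webUiUrl;            // highest priority: Web UI
  const fromConfig = configWorkers?.[role]; // then the per-bot config entry
  if (fromConfig) return fromConfig;
  return envUrl;                            // env fallback; undefined → MockSparki
}
```

An `undefined` result corresponds to the local-Ollama (MockSparki) fallback described above.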
Copy `.env.example` to `.env`. Key variables:

- `OLLAMA_MODEL`, `OLLAMA_BASE_URL` — LLM (MockSparki / Coordinator)
- `CREATIVITY`, `MAX_CYCLES`, `MAX_RUNS` — Session defaults
- `OPENCLAW_WORKER_URL` — Fallback worker URL (prefer the Web UI or config file)
- `CHROMADB_PERSIST_DIR` — Vector store path
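A sketch of what such a `.env` might contain — the variable names are from the list above, but the values here are illustrative assumptions (the model name is the one suggested for local dev; `11434` is Ollama's default port; the persist path is a guess — compare against `.env.example`):

```
OLLAMA_MODEL=qwen2.5-coder:7b
OLLAMA_BASE_URL=http://localhost:11434
CREATIVITY=0.7
MAX_RUNS=5
CHROMADB_PERSIST_DIR=./chroma-data
```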
All services use the shared network `teamclaw-net`. Internal hostnames: `chromadb`, `ollama`, `openclaw`.
```
docker compose up                                      # Web + ChromaDB
docker compose --profile ollama up                     # + Ollama (LLM)
docker compose --profile openclaw up                   # + OpenClaw worker
docker compose --profile ollama --profile openclaw up  # Full stack
```

When using the `openclaw` profile, set in `.env`:

```
OPENCLAW_WORKER_URL=http://openclaw:3000
```

so TeamClaw can reach OpenClaw inside the network.
Optionally set `OPENCLAW_IMAGE` to your OpenClaw worker image (default: `openclaw/worker:latest`). The OpenClaw container may need GUI-related options (e.g. `shm_size: '2g'`, a VNC port for debugging); see the comments in `docker-compose.yml` and `docs/OPENCLAW_PROVISIONING.md`.
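As a sketch of those GUI-related options, a `docker-compose.override.yml` fragment could look like the following. The `shm_size` value and image variable are taken from the notes above, but the VNC port mapping and overall shape are illustrative assumptions — the comments in the project's own `docker-compose.yml` are authoritative:

```yaml
services:
  openclaw:
    image: ${OPENCLAW_IMAGE:-openclaw/worker:latest}
    shm_size: '2g'       # larger shared memory for GUI/browser workloads
    ports:
      - "5900:5900"      # VNC port for debugging (assumed mapping)
    networks:
      - teamclaw-net
```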
TeamClaw and OpenClaw can share one LLM proxy so API keys and models are configured in a single place. If you used `teamclaw onboard` and enabled LiteLLM, run `teamclaw gateway start` (or `docker compose --profile gateway up`) and ensure `GATEWAY_URL` / `TEAM_MODEL` are exported in `.env`.
Terminal-first: start the gateway in one terminal, then run TeamClaw against it:

```
teamclaw gateway start

# In another terminal:
export GATEWAY_URL=http://localhost:4000
export TEAM_MODEL=team-default
pnpm run web
# or: pnpm run work
```

Edit `llm-config.yaml` to add models (Ollama, OpenAI, Claude, Gemini, etc.). Set `TEAM_MODEL` to match a `model_name` in that file. Override the config path with `LITELLM_CONFIG_PATH` or `teamclaw gateway start --config /path/to/config.yaml`.
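For reference, a minimal `llm-config.yaml` entry in LiteLLM's standard `model_list` format might look like this — the Ollama model and base URL are illustrative assumptions; adapt them to your setup:

```yaml
model_list:
  - model_name: team-default          # matched by TEAM_MODEL
    litellm_params:
      model: ollama/qwen2.5-coder:7b  # provider/model as LiteLLM expects
      api_base: http://localhost:11434
```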
Docker: `docker compose --profile gateway up` runs the LiteLLM container. In `.env`, set `GATEWAY_URL=http://litellm:4000` and `TEAM_MODEL=team-default` so TeamClaw and OpenClaw use the gateway inside the network.
MIT