- Services: `database_postgres` (internal DB), `worker_plan` (internal pipeline API), `frontend_multi_user` (UI on `${PLANEXE_FRONTEND_MULTIUSER_PORT:-5001}`), plus DB workers (`worker_plan_database_1/2/3` by default; `worker_plan_database` in the `manual` profile), and `mcp_cloud` (MCP interface on `${PLANEXE_MCP_HTTP_PORT:-8001}`); `frontend_multi_user` waits for Postgres and worker health.
- Shared host files: `.env` and `./llm_config/` are mounted read-only; `.env` is also loaded via `env_file`.
- Postgres defaults to user/db/password `planexe`; override via env or `.env`; data lives in the `database_postgres_data` volume.
- Env defaults live in `docker-compose.yml` but can be overridden in `.env` or your shell (URLs, timeouts, run dirs, optional auth).
- Only `frontend_multi_user` and `mcp_cloud` publish ports to the host (bound to `127.0.0.1` by default; override via `PLANEXE_BIND_HOST`). `database_postgres` and `worker_plan` are docker-network-only; see Published ports / bind host.
- `develop.watch` syncs code/config for `worker_plan`; rebuild with `--no-cache` after big moves or dependency changes; the restart policy is `unless-stopped`.
- Up (everything): `docker compose up frontend_multi_user database_postgres worker_plan worker_plan_database_1 worker_plan_database_2 worker_plan_database_3`.
- Up (MCP server): `docker compose up mcp_cloud` (requires `database_postgres` to be running).
- Down: `docker compose down` (add `--remove-orphans` if stray containers linger).
- Rebuild clean: `docker compose build --no-cache database_postgres worker_plan frontend_multi_user worker_plan_database worker_plan_database_1 worker_plan_database_2 worker_plan_database_3 mcp_cloud`.
- UI: http://localhost:5001 after the stack is up.
- MCP: point your MCP client at the `mcp_cloud` HTTP endpoint, `http://localhost:${PLANEXE_MCP_HTTP_PORT:-8001}/mcp` (the server speaks Streamable HTTP, not stdio; see the mcp_cloud notes below).
- Logs: `docker compose logs -f worker_plan`, or the same with `frontend_multi_user` or `mcp_cloud`.
- One-off inside a container: `docker compose run --rm worker_plan python -m worker_plan_internal.fiction.fiction_writer` (use `exec` if the service is already running).
- Ensure `.env` and `llm_config/` exist; copy `.env.docker-example` to `.env` if you need a starter.
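The two prerequisites above can be scripted; a minimal bootstrap sketch (file and directory names taken from the notes above):

```shell
# Create .env from the starter if it is missing, and make sure llm_config/ exists.
if [ ! -f .env ] && [ -f .env.docker-example ]; then
  cp .env.docker-example .env
fi
mkdir -p llm_config   # mounted read-only into the containers
```

Run it from the repo root before the first `docker compose up`.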
- Dependency hell: one Python package requires version A of a dependency while another requires version B (or a different Python), so `pip` cannot satisfy everything in one environment; the resolver loops, pins conflict, or installs a set that breaks another part of the app. System-level deps (e.g. libssl) can also clash, and "fixes" often mean uninstalling or downgrading unrelated packages.
- I want to experiment with the `uv` package manager; to try it, install `uv` during the image build and replace the `pip install ...` lines with `uv pip install ...`. Compose keeps that change isolated per service, so it doesn't spill onto the other containers or the host Python.
- Compose solves this by isolating environments per service: each image pins its own base Python, OS libs, and `requirements.txt`, so the frontend and worker no longer fight over versions.
- Builds are reproducible: the `Dockerfile` installs a clean env from scratch, so you avoid ghosts from previous virtualenvs or globally installed wheels.
- If a dependency change fails, you can rebuild from zero or switch base images without nuking your host Python setup.
- Reusable local stack with consistent env/paths under `/app` in each container.
- Postgres data volume: `database_postgres_data` keeps the database files outside the repo tree.
- Purpose: storage in a Postgres database for future queue + event-logging work. Not published to the host; other containers reach it via the docker network at `database_postgres:5432`. To poke at it from your machine, use `docker compose exec database_postgres psql -U planexe` or a one-off override (see Published ports / bind host).
- Build: `database_postgres/Dockerfile` (uses the official Postgres image).
- Env defaults: `PLANEXE_POSTGRES_USER=planexe`, `PLANEXE_POSTGRES_PASSWORD=planexe`, `PLANEXE_POSTGRES_DB=planexe`, `PLANEXE_POSTGRES_PORT=5432` (override with env/`.env`).
- Data/health: data lives in the named volume `database_postgres_data`; the healthcheck uses `pg_isready`.
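For scripts that need a connection string, the defaults above compose into an in-network URL using the same `${VAR:-default}` fallbacks; the URL shape here is an assumption about the client, not something the stack mandates:

```shell
# Build a Postgres URL from the PLANEXE_POSTGRES_* defaults documented above.
PGUSER="${PLANEXE_POSTGRES_USER:-planexe}"
PGPASS="${PLANEXE_POSTGRES_PASSWORD:-planexe}"
PGDB="${PLANEXE_POSTGRES_DB:-planexe}"
PGPORT="${PLANEXE_POSTGRES_PORT:-5432}"
DB_URL="postgresql://${PGUSER}:${PGPASS}@database_postgres:${PGPORT}/${PGDB}"
echo "$DB_URL"
```

With nothing overridden, this prints `postgresql://planexe:planexe@database_postgres:5432/planexe`, which only resolves from inside the docker network.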
Several services have permissive defaults that are fine for localhost-only development but would be a foot-gun on a shared network:

- `frontend_multi_user` defaults to `admin`/`admin`.
- `mcp_cloud` defaults to `PLANEXE_MCP_REQUIRE_AUTH=false` with an empty `PLANEXE_MCP_API_KEY`.
- `database_postgres` defaults to user/password `planexe`/`planexe`.
- `worker_plan` has no auth at all.
Ports policy:

- `database_postgres` and `worker_plan` are not published to the host. Other containers reach them via the docker network as `database_postgres:5432` and `worker_plan:8000`. This also sidesteps the common "port 5432 already in use" conflict with a local Postgres on dev machines.
- `frontend_multi_user` (5001) and `mcp_cloud` (8001) are published, bound to `PLANEXE_BIND_HOST` (default `127.0.0.1`).
To opt back into LAN access for the published services (e.g., testing from your phone, or Claude Desktop on another machine):

```shell
export PLANEXE_BIND_HOST=0.0.0.0
docker compose up
```

Before doing this, set strong values for at least:

- `PLANEXE_FRONTEND_MULTIUSER_ADMIN_PASSWORD`
- `PLANEXE_MCP_REQUIRE_AUTH=true` and `PLANEXE_MCP_API_KEY`
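A wrapper script can enforce that ordering before the stack comes up; a minimal sketch (the variable names come from this document, the guard itself is hypothetical):

```shell
# Refuse to bind to all interfaces unless MCP auth has been configured.
BIND="${PLANEXE_BIND_HOST:-127.0.0.1}"
if [ "$BIND" = "0.0.0.0" ] && [ "${PLANEXE_MCP_REQUIRE_AUTH:-false}" != "true" ]; then
  echo "Refusing LAN bind: set PLANEXE_MCP_REQUIRE_AUTH=true and PLANEXE_MCP_API_KEY first." >&2
  exit 1
fi
echo "bind host: $BIND"
```

With the defaults untouched, the guard is a no-op and the stack stays loopback-only.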
When you actually want a psql shell or curl to one of the unpublished services:

```shell
# Postgres shell inside the DB container:
docker compose exec database_postgres psql -U planexe

# Hit worker_plan from inside the frontend container:
docker compose exec frontend_multi_user curl -fsS http://worker_plan:8000/healthcheck
```

Or, drop a tiny override file (gitignored) that adds host port mappings for a debugging session:

```yaml
# docker-compose.override.yml
services:
  database_postgres:
    ports:
      - "127.0.0.1:5433:5432"  # avoid colliding with a local Postgres on 5432
  worker_plan:
    ports:
      - "127.0.0.1:8000:8000"
```

`docker compose up` automatically merges `docker-compose.override.yml` if present.
- Purpose: multi-user Flask UI with admin views (tasks/events/nonce/workers) backed by Postgres.
- Build: `frontend_multi_user/Dockerfile`.
- Env defaults: DB host `database_postgres`, port `5432`, db/user/password `planexe` (follows `PLANEXE_POSTGRES_*`); admin credentials must be provided via `PLANEXE_FRONTEND_MULTIUSER_ADMIN_USERNAME`/`PLANEXE_FRONTEND_MULTIUSER_ADMIN_PASSWORD` (compose will fail if they are missing); the container listens on fixed port `5000`, and the host maps `${PLANEXE_BIND_HOST:-127.0.0.1}:${PLANEXE_FRONTEND_MULTIUSER_PORT:-5001}`.
- Health: depends on `database_postgres` health; its own healthcheck hits `/healthcheck` on port 5000.
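The health wiring described above corresponds to compose stanzas shaped roughly like this sketch (the check command, interval, and retry values are illustrative, not copied from the repo):

```yaml
services:
  frontend_multi_user:
    depends_on:
      database_postgres:
        condition: service_healthy    # wait for Postgres before starting
    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://localhost:5000/healthcheck"]
      interval: 10s
      timeout: 5s
      retries: 5
```

The `condition: service_healthy` dependency is what makes `docker compose up frontend_multi_user` pull up a healthy Postgres first.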
- Purpose: runs the PlanExe pipeline. Listens on port 8000 inside the container; not published to the host, so `frontend_multi_user` reaches it via the docker network at `worker_plan:8000`. The frontend depends on its health.
- Build: `worker_plan/Dockerfile`.
- Env: `PLANEXE_CONFIG_PATH=/app`, `PLANEXE_WORKER_RELAY_PROCESS_OUTPUT=true`.
- Health: `http://localhost:8000/healthcheck`, checked via the compose healthcheck.
- Volumes: `.env` (ro), `llm_config/` (ro).
- Watch: sync `worker_plan/` into `/app/worker_plan`, rebuild on `worker_plan/pyproject.toml`, restart on compose edits.
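In compose `develop.watch` syntax, the watch rules above look roughly like this (a sketch of the shape, not a verbatim copy of the repo's file):

```yaml
services:
  worker_plan:
    develop:
      watch:
        - action: sync            # copy worker_plan/ into the container on save
          path: ./worker_plan
          target: /app/worker_plan
        - action: rebuild         # full image rebuild when dependencies change
          path: ./worker_plan/pyproject.toml
```

Run it with `docker compose watch worker_plan` (or `docker compose up --watch`).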
- Purpose: polls `PlanItem` rows in Postgres, marks them processing, runs the PlanExe pipeline, and writes progress/events back to the DB; no HTTP port is exposed.
- Build: `worker_plan_database/Dockerfile` (ships the `worker_plan` code, the shared `database_api` models, and this worker subclass).
- Depends on: `database_postgres` health.
- Env defaults: derives `SQLALCHEMY_DATABASE_URI` from `PLANEXE_POSTGRES_HOST|PORT|DB|USER|PASSWORD` (falls back to `database_postgres` + `planexe`/`planexe` on 5432); `PLANEXE_CONFIG_PATH=/app`; MachAI confirmation URLs default to `https://example.com/iframe_generator_confirmation` for both `PLANEXE_IFRAME_GENERATOR_CONFIRMATION_PRODUCTION_URL` and `PLANEXE_IFRAME_GENERATOR_CONFIRMATION_DEVELOPMENT_URL` (override with real endpoints).
- Volumes: `.env` (ro), `llm_config/` (ro). Pipeline output stays inside the container; the worker persists final artifacts via the DB.
- Entrypoint: `python -m worker_plan_database.app` (runs the long-lived poller loop).
- Multiple workers: compose defines `worker_plan_database_1/2/3` with `PLANEXE_WORKER_ID` set to `1`/`2`/`3`. Start the trio with `docker compose up -d worker_plan_database_1 worker_plan_database_2 worker_plan_database_3`.
- Use `worker_plan_database` alone only via its profile: `docker compose --profile manual up worker_plan_database`.
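The numbered workers plus the manual profile map onto service definitions shaped roughly like this sketch (only the fields mentioned above; everything else is elided and assumed):

```yaml
services:
  worker_plan_database_1:
    build: worker_plan_database
    environment:
      PLANEXE_WORKER_ID: "1"
    depends_on:
      database_postgres:
        condition: service_healthy
  # worker_plan_database_2 / _3 repeat the same shape with IDs "2" and "3".
  worker_plan_database:
    profiles: ["manual"]   # excluded from plain `docker compose up`
```

Because the unnumbered service carries a profile, it never starts accidentally alongside the trio.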
- Purpose: Model Context Protocol (MCP) server that provides a standardized interface for AI agents and developer tools to interact with PlanExe. Communicates with `worker_plan_database` via the shared Postgres database.
- Build: `mcp_cloud/Dockerfile` (ships the shared `database_api` models and the MCP server implementation).
- Depends on: `database_postgres` and `worker_plan` health.
- Env defaults: derives `SQLALCHEMY_DATABASE_URI` from `PLANEXE_POSTGRES_HOST|PORT|DB|USER|PASSWORD` (falls back to `database_postgres` + `planexe`/`planexe` on 5432); `PLANEXE_CONFIG_PATH=/app`; `PLANEXE_MCP_HTTP_HOST=0.0.0.0`, `PLANEXE_MCP_HTTP_PORT=8001`; `PLANEXE_MCP_PUBLIC_BASE_URL=http://localhost:8001` for report download URLs; `PLANEXE_MCP_REQUIRE_AUTH=false` by default.
- Ports: host `${PLANEXE_BIND_HOST:-127.0.0.1}:${PLANEXE_MCP_HTTP_PORT:-8001}` -> container `8001`.
- Volumes: `llm_config/` (ro, for provider configs).
- Health: `http://localhost:8001/healthcheck`, checked via the compose healthcheck.
- Communication: Streamable HTTP (`/mcp`) plus helper endpoints (`/download/...`, `/sse/...`). Point your MCP client at `http://localhost:${PLANEXE_MCP_HTTP_PORT:-8001}/mcp`.
- MCP tools: implements the specification in `docs/mcp/planexe_mcp_interface.md`, including session management, artifact operations, and event streaming.
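Client-side, the endpoint URL derives from the same defaults; a small sketch for scripts (variable names from above; the derivation itself is an assumption about how you wire up a client):

```shell
# Compute the MCP endpoint an MCP client should be pointed at.
MCP_PORT="${PLANEXE_MCP_HTTP_PORT:-8001}"
MCP_BASE="${PLANEXE_MCP_PUBLIC_BASE_URL:-http://localhost:${MCP_PORT}}"
MCP_URL="${MCP_BASE}/mcp"
echo "$MCP_URL"
```

With the defaults, this prints `http://localhost:8001/mcp`; override `PLANEXE_MCP_PUBLIC_BASE_URL` when the server sits behind a different hostname.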
- Published ports (host-reachable, both bound to `127.0.0.1` by default): `${PLANEXE_FRONTEND_MULTIUSER_PORT:-5001}` -> `frontend_multi_user`, `${PLANEXE_MCP_HTTP_PORT:-8001}` -> `mcp_cloud`. Set `PLANEXE_BIND_HOST=0.0.0.0` for LAN access (read Published ports / bind host first).
- Internal-only services (no host port): `database_postgres` (`:5432`) and `worker_plan` (`:8000`). Reach them via the docker network from another container, or use `docker compose exec` / a `docker-compose.override.yml` for ad-hoc host access.
- `.env` must exist before `docker compose up`; it is both loaded and mounted read-only. The same goes for `llm_config/`. If `.env` is missing, start from `.env.docker-example`.
- Database: connect from inside any container as `database_postgres:5432` with `planexe`/`planexe` by default; data persists via the `database_postgres_data` volume. Direct host access is opt-in via an override file or `docker compose exec`.
Snapshot from `docker compose ps` on a live stack with two numbered DB workers; your timestamps, ports, and container names may differ:

```
PROMPT> docker compose ps
NAME                     IMAGE                            COMMAND                  SERVICE                  CREATED          STATUS                   PORTS
database_postgres        planexe-database_postgres        "docker-entrypoint.s…"   database_postgres        8 hours ago      Up 8 hours (healthy)
frontend_multi_user      planexe-frontend_multi_user      "python /app/fronten…"   frontend_multi_user      8 hours ago      Up 2 minutes (healthy)   127.0.0.1:5001->5000/tcp
worker_plan              planexe-worker_plan              "uvicorn worker_plan…"   worker_plan              2 minutes ago    Up 2 minutes (healthy)
worker_plan_database_1   planexe-worker_plan_database_1   "python -m worker_pl…"   worker_plan_database_1   15 seconds ago   Up 13 seconds
worker_plan_database_2   planexe-worker_plan_database_2   "python -m worker_pl…"   worker_plan_database_2   15 seconds ago   Up 13 seconds
```