Run the diagnostics dashboard to observe a pile:
```shell
cargo run --manifest-path playground/Cargo.toml -- diagnostics
```

The dashboard defaults to `./personas/<instance>/pile/self.pile` (instance defaults to `playground`) and branch `cognition`; change the path or branch in the UI if your VM writes elsewhere.
Prefill the dashboard with a pile path:
```shell
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/self.pile diagnostics
```

Run the conceptual compaction notebook to explore carry merges as new messages are added:
```shell
cargo run --manifest-path playground/Cargo.toml --example compaction_lsm
```

Run core + LLM + Lima exec. Defaults to `./personas/<instance>/pile/self.pile`:
```shell
cargo run --manifest-path playground/Cargo.toml -- run
```

Point at a specific pile path:
```shell
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile run
```

Run core + LLM and start the exec worker inside a Lima VM (macOS):
```shell
cargo run --manifest-path playground/Cargo.toml --bin playground -- \
  --pile /path/to/pile/self.pile run
```

Run the core loop only (no LLM/exec workers):
```shell
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile core
```

Run the LLM worker only (split-host setups or local testing):
```shell
cargo run --manifest-path playground/Cargo.toml --bin playground -- --pile /path/to/pile/self.pile llm
```

Run the exec worker only (VM/split-host setups):
```shell
cargo run --manifest-path playground/Cargo.toml --bin playground -- --pile /path/to/pile/self.pile exec
```

Estimate pending compaction work (archive by default):
```shell
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile memory estimate
```

Include pending exec leaves in the estimate:
```shell
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile memory estimate --include-exec
```

Optionally provide pricing to get a rough cost estimate:
```shell
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile \
  memory estimate \
  --input-cost-per-1m-tokens 2.0 \
  --output-cost-per-1m-tokens 6.0 \
  --cost-currency EUR
```

Backfill context memory chunks without creating LLM requests:
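The per-1M-token flags presumably feed a simple linear formula. As a sanity check, a sketch of the arithmetic at the rates above, with purely illustrative token counts (not output of `memory estimate`):

```shell
# cost = input_tokens/1e6 * input_rate + output_tokens/1e6 * output_rate
# Hypothetical workload: 1.2M input tokens, 0.3M output tokens.
awk 'BEGIN { printf "%.2f EUR\n", 1200000/1e6 * 2.0 + 300000/1e6 * 6.0 }'
# prints 4.20 EUR
```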
```shell
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile memory build
```

Cap archive ingestion per run (useful for staged backfills):
```shell
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile \
  memory build --max-archive-leaves 500
```

Playground stores its configuration inside the pile. Use the `config` subcommand to inspect or update it:
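A staged backfill is just the capped command repeated; a minimal sketch (pass count and pile path are placeholders, and `|| true` keeps the loop going if one pass fails):

```shell
# Three capped passes of 500 archive leaves each (up to 1500 leaves total).
for pass in 1 2 3; do
  cargo run --manifest-path playground/Cargo.toml -- \
    --pile /path/to/pile/self.pile memory build --max-archive-leaves 500 || true
done
```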
```shell
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile config show
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile config set poll-ms 100
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile config set memory-compaction-arity 8
```

Prompts can also be loaded from files:
```shell
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile config set system-prompt @./system_prompt.txt
./playground/faculties/headspace --pile /path/to/pile/self.pile lens set factual prompt @./memory_lens_factual.txt
./playground/faculties/headspace --pile /path/to/pile/self.pile lens set factual compaction-prompt @./memory_lens_factual_compaction.txt
./playground/faculties/headspace --pile /path/to/pile/self.pile lens add reflective --prompt @./memory_lens_reflective.txt --compaction-prompt @./memory_lens_reflective_compaction.txt --max-output-tokens 160
./playground/faculties/headspace --pile /path/to/pile/self.pile lens list
```

Use `@-` to read a value from stdin (for both playground `config set` and headspace value fields).
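For example, `@-` lets you pipe a prompt in instead of staging it as a flag value. A sketch (the prompt text and pile path are placeholders; `|| true` lets the sketch continue if the project is not built here):

```shell
# Stage a prompt file, then feed its contents via stdin with @-.
printf 'You are the playground persona.' > /tmp/system_prompt.txt
cat /tmp/system_prompt.txt | cargo run --manifest-path playground/Cargo.toml -- \
  --pile /path/to/pile/self.pile config set system-prompt @- || true
```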
Prompt files in `playground/prompts/*.md` are generated from templates in `playground/prompts/templates/*.tmpl.md`. Re-render after editing templates or shared fragments:

```shell
python3 playground/scripts/render_prompts.py
python3 playground/scripts/render_prompts.py --check
```

You can pin branch ids in config (recommended) so faculties resolve stable branch identities:
```shell
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile config set compass-branch-id <hex-id>
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile config set local-messages-branch-id <hex-id>
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile config set relations-branch-id <hex-id>
```

Clear an optional config field:
```shell
./playground/faculties/headspace --pile /path/to/pile/self.pile lens reset factual prompt
./playground/faculties/headspace --pile /path/to/pile/self.pile lens remove reflective
```

Manage LLM profiles (headspaces):
```shell
./playground/faculties/headspace --pile /path/to/pile/self.pile list
./playground/faculties/headspace --pile /path/to/pile/self.pile add "oss-120" --model gpt-oss:120b --base-url http://localhost:11434/v1/responses
./playground/faculties/headspace --pile /path/to/pile/self.pile use oss-120
./playground/faculties/headspace --pile /path/to/pile/self.pile set reasoning-effort medium
./playground/faculties/headspace --pile /path/to/pile/self.pile set api-key sk-...
```

LLM/headspace settings (model, base URL, reasoning effort, API key, the compaction profile, and memory lenses) are managed by the headspace faculty. Compaction merge arity is runtime config.
Capture a curated snapshot of the workspace into the pile (branch workspace by default):
```shell
./playground/faculties/workspace.rs --pile /path/to/pile/self.pile capture \
  playground/faculties /workspace/faculties \
  --label "seed:faculties"
```

List snapshots:
```shell
./playground/faculties/workspace.rs --pile /path/to/pile/self.pile list
```

Restore the latest snapshot into a target directory:
```shell
./playground/faculties/workspace.rs --pile /path/to/pile/self.pile restore /tmp/workspace
```

Snapshots are used by the exec worker to bootstrap `/workspace` on startup. Bootstrap now performs a non-destructive merge of the latest snapshot lineage:
- missing files/dirs/symlinks are created,
- existing matching entries are kept as-is,
- conflicting existing paths are left untouched.
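The rules above amount to "create only what is missing". A rough shell equivalent, purely illustrative (the real bootstrap lives in the exec worker, not in this function):

```shell
# Copy each snapshot entry into the target only if that path does not exist yet;
# anything already present (matching or conflicting) is left untouched.
merge_snapshot() {
  src=$1 dst=$2
  (cd "$src" && find . -mindepth 1) | while read -r p; do
    if [ ! -e "$dst/$p" ] && [ ! -L "$dst/$p" ]; then
      mkdir -p "$(dirname "$dst/$p")"
      cp -R -P "$src/$p" "$dst/$p"   # -P keeps symlinks as symlinks
    fi
  done
}
```

Note that when a missing directory is copied wholesale, its children show up as already existing on later loop iterations and are skipped, which is exactly the intended behavior.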
On macOS, use Lima to run the exec worker in a VM while the core loop + LLM worker run on the host:
```shell
cargo run --manifest-path playground/Cargo.toml -- --pile /path/to/pile/self.pile run
```

This command:

- Creates/starts a Lima instance (default name `playground`).
- Runs the core loop + LLM worker on the host.
- Runs the exec worker inside the VM, pointed at the same pile.
Commands executed by the exec worker receive the following environment variables:

- `PILE` — active pile path
- `CONFIG_BRANCH_ID` — config branch id (e.g. `6069A136254E1B87E4C0D2E0295DB382`)
- `WORKER_ID` — exec worker id
- `TURN_ID` — current exec request id
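A trivial exec command that reads the injected environment; the `:-unset` defaults are only there so the line also runs outside the worker:

```shell
# Inside the worker these variables are injected; default them for a standalone demo.
echo "pile=${PILE:-unset} worker=${WORKER_ID:-unset} turn=${TURN_ID:-unset}"
```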
Reason notes (useful when a model/provider does not expose reasoning output):
```shell
./playground/faculties/reason "Why this action makes sense"
./playground/faculties/reason "Why this command now" -- git status
```

`reason` logs a structured rationale event into the active exec/cognition branch and then (when a command is provided) runs it.
Pass a pile path as the first argument if you want a non-default location.

The Lima VM is recreated on every run to ensure the exec environment matches the host.