diff --git a/.cargo/config.toml b/.cargo/config.toml
new file mode 100644
index 0000000..a6b014e
--- /dev/null
+++ b/.cargo/config.toml
@@ -0,0 +1,2 @@
+[build]
+target-dir = "target"
diff --git a/.cursor/rules/antipatterns.mdc b/.cursor/rules/antipatterns.mdc
new file mode 100644
index 0000000..d421002
--- /dev/null
+++ b/.cursor/rules/antipatterns.mdc
@@ -0,0 +1,20 @@
+---
+description: Common LoopForge mistakes to avoid during edits
+alwaysApply: false
+---
+
+# LoopForge Antipatterns
+
+Avoid these mistakes:
+
+- Editing plan or story data only in Zustand without persisting to artifact files
+- Saving empty wizard snapshots like `"{}"` instead of full draft state
+- Passing partial configure inputs that never reach `build_ralph_config`
+- Leaving process cleanup stubs without child termination and flush behavior
+- Creating oversized mixed concern modules instead of splitting commands, services, and storage
+
+Preferred patterns:
+
+- Persist `plan.md`, `prd.json`, `config.json`, and `draft.json` through backend commands
+- Keep command handlers thin and move logic into services
+- Validate artifact reads and writes with tests when changing flow contracts
diff --git a/.cursor/rules/frozen-modules.mdc b/.cursor/rules/frozen-modules.mdc
new file mode 100644
index 0000000..c765be7
--- /dev/null
+++ b/.cursor/rules/frozen-modules.mdc
@@ -0,0 +1,16 @@
+---
+description: Apply when a task touches frozen or unconnected modules
+alwaysApply: false
+---
+
+# Frozen Modules
+
+Do not modify, extend, or wire these modules into new code unless the task explicitly includes a planned vertical slice for them.
+
+- `connections.rs` and `Connections.tsx`
+- `plugin_registry.rs` and `Plugins.tsx`
+- `scm_watcher.rs`
+- `ephemeral_query.rs` and `EphemeralOverlay.tsx`
+- `summary_generator.rs`
+
+If work must touch one of these files, pause and confirm the intended slice boundary before coding.
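The empty-snapshot antipattern the new rule file warns about can be made concrete with a small guard. A minimal TypeScript sketch, where `WizardDraft` and `serializeDraft` are hypothetical names not taken from the codebase, showing a serializer that refuses to persist a `"{}"`-style stub before any backend command is invoked:

```typescript
type WizardDraft = {
  step: string;
  fields: Record<string, unknown>;
};

function serializeDraft(draft: WizardDraft): string {
  // An empty field map would persist as a useless "{}"-style stub on resume
  if (Object.keys(draft.fields).length === 0) {
    throw new Error("refusing to persist an empty wizard snapshot");
  }
  return JSON.stringify(draft, null, 2);
}
```

The point is that the check lives next to serialization, so every persistence path rejects hollow snapshots instead of relying on each caller to remember.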
diff --git a/.cursor/rules/project.mdc b/.cursor/rules/project.mdc
deleted file mode 100644
index bded838..0000000
--- a/.cursor/rules/project.mdc
+++ /dev/null
@@ -1,22 +0,0 @@
----
-description: Global project conventions for LoopForge
-globs: **/*
----
-
-# LoopForge Project Rules
-
-## Stack
-- Tauri v2 desktop app: Rust backend + React 19 / TypeScript frontend
-- Cargo workspace: `crates/ralph-core` (lib) + `src-tauri` (bin)
-- Tailwind CSS 4 with OKLCH design tokens
-- SQLite for metadata, filesystem for artifacts
-- Zustand for frontend state
-
-## Absolute Rules
-- No comments or JSDoc in code
-- No single-character variable names
-- No eslint-disable directives
-- No raw color values (hex, rgb, oklch literals)
-- All colors via design token utility classes
-- One atomic commit per sub-task
-- Commit format: `type(scope): description`
diff --git a/.cursor/rules/react.mdc b/.cursor/rules/react.mdc
index d5cbfd7..4f30f53 100644
--- a/.cursor/rules/react.mdc
+++ b/.cursor/rules/react.mdc
@@ -8,12 +8,13 @@ globs: "src/**/*.{ts,tsx}"
 
 ## Components
 - Functional components with named exports
 - PascalCase for component names and files
-- Props as inline type parameters, not separate interface files
+- Keep components focused and extract early before large mixed concern files form
 
 ## State
 - Zustand stores in `src/stores/`
 - One store per domain: projectStore, agentStore, loopStore, wizardStore
 - Use selectors to minimize re-renders
+- Never treat Zustand as source of truth for `plan.md`, `prd.json`, `config.json`, or `draft.json`
 
 ## Styling
 - Tailwind CSS 4 utility classes exclusively
@@ -32,3 +33,14 @@
 - Bindings in `src/lib/tauri.ts`
 - Use `@tauri-apps/api/core` for invoke
 - Use `@tauri-apps/api/event` for listen
+- Persist user edits through backend commands before advancing workflow stages
+
+## Data Contracts
+- Plan step edits must land in `plan.md`
+- Atomize edits must land in `prd.json`
+- Configure edits must land in `config.json`
+- Wizard resume data must land in `draft.json`
+
+## Testing Expectations
+- Run `bun run typecheck` after TypeScript edits
+- For flow changes, verify frontend action reaches backend command and artifact update
diff --git a/.cursor/rules/rust.mdc b/.cursor/rules/rust.mdc
index 0ed16ec..fb3a1c6 100644
--- a/.cursor/rules/rust.mdc
+++ b/.cursor/rules/rust.mdc
@@ -10,6 +10,7 @@ globs: "**/*.rs"
 - `anyhow::Result` acceptable at application boundary (src-tauri commands)
 - Never `unwrap()` in `crates/ralph-core/`; acceptable in tests
 - Use `?` propagation, not `match` chains for simple forwarding
+- Keep command errors serializable for IPC boundaries
 
 ## Async
 - Runtime: tokio (multi-thread)
@@ -21,10 +22,16 @@
 - New optional fields use `#[serde(default)]`
 - JSON output: `serde_json` with `to_string_pretty` for artifact files
 
+## Persistence Contracts
+- Keep metadata in SQLite through `rusqlite` and managed `DbState`
+- Keep project workflow artifacts on filesystem under project directories
+- Do not use `tauri-plugin-sql` for project persistence
+
 ## Testing
 - Unit tests in same file (`#[cfg(test)]` module)
 - Integration tests in `tests/` directory
 - `#[tokio::test]` for async tests
+- For flow changes, test storage, service, and command layers together where possible
 
 ## Dependencies (ralph-core)
 - serde, serde_json, tokio, anyhow, thiserror, regex, chrono
diff --git a/.cursor/rules/tauri.mdc b/.cursor/rules/tauri.mdc
index 7115d3f..40994ca 100644
--- a/.cursor/rules/tauri.mdc
+++ b/.cursor/rules/tauri.mdc
@@ -9,12 +9,12 @@ globs: "src-tauri/**/*.rs"
 - Define in `src-tauri/src/commands/` module
 - Use `#[tauri::command]` macro
 - Return `Result` for IPC error serialization
-- Group by domain: project_commands.rs, agent_commands.rs, loop_commands.rs
+- Keep command handlers thin and delegate logic to services
 
 ## Events
 - Emit from backend: `app_handle.emit("event-name", payload)`
 - Listen in frontend: `listen("event-name", callback)`
-- Event names: kebab-case (plan-activity, agent-output-stream, iteration-started)
+- Event names: kebab-case (`plan-activity`, `agent-output-stream`, `iteration-started`)
 
 ## State Management
 - Use `tauri::State<T>` or `tauri::State<Mutex<T>>`
@@ -28,11 +28,10 @@
 - Track PIDs for graceful termination (SIGTERM, then SIGKILL after timeout)
 
 ## SQLite
-- Use `tauri-plugin-sql` with migrations defined in Rust
-- Migrations registered via `tauri_plugin_sql::Builder::default().add_migrations()`
-- Database at `sqlite:loopforge.db` (resolves to AppConfig directory)
-- Frontend queries via `@tauri-apps/plugin-sql` (`Database.load()`, `db.select()`, `db.execute()`)
-- Permissions in `capabilities/default.json`: `sql:default`, `sql:allow-execute`, `sql:allow-select`
+- Use `rusqlite` directly via `DbState`
+- Do not use `tauri-plugin-sql` or `@tauri-apps/plugin-sql`
+- Keep SQLite access in backend services and expose data through Tauri commands
+- Store metadata in SQLite and keep artifact files on filesystem per project directory
 
 ## Shell (Agent CLI)
 - Use `tauri-plugin-shell` for spawning agent CLIs
@@ -40,3 +39,7 @@
 - Returns `(Receiver<CommandEvent>, Child)` for async streaming
 - `child.write(bytes)` for stdin piping
 - Permissions in `capabilities/default.json`: `shell:default`
+
+## Vertical Slice Reminder
+- Add features in this order: storage, model, service, command, event, IPC binding, UI, test
+- Do not ship scaffolding steps without wiring the end to end path
diff --git a/.cursor/rules/templates.mdc b/.cursor/rules/templates.mdc
new file mode 100644
index 0000000..0303c61
--- /dev/null
+++ b/.cursor/rules/templates.mdc
@@ -0,0 +1,15 @@
+---
+description: MiniJinja template editing conventions
+globs: "src-tauri/templates/**/*.j2"
+alwaysApply: false
+---
+
+# MiniJinja Template Rules
+
+- Keep templates deterministic and data driven
+- Prefer explicit variable names that match Rust struct fields
+- Avoid hidden control flow and nested branching when simple conditional blocks work
+- Keep prompt and output sections easy to diff across iterations
+- Preserve whitespace behavior expected by downstream parsers
+
+When template fields change, update the corresponding Rust serializer or model and verify the full atomizer pipeline.
diff --git a/.cursor/rules/vertical-slice.mdc b/.cursor/rules/vertical-slice.mdc
new file mode 100644
index 0000000..15713be
--- /dev/null
+++ b/.cursor/rules/vertical-slice.mdc
@@ -0,0 +1,19 @@
+---
+description: Apply when adding new features or extending IPC flows
+alwaysApply: false
+---
+
+# Vertical Slice Workflow
+
+Implement feature changes in this exact order:
+
+1. Storage schema or artifact contract
+2. Rust models with serde support
+3. Service logic in backend
+4. Tauri command wrapper
+5. Event and stream wiring if needed
+6. Frontend IPC binding in `src/lib/tauri.ts`
+7. UI integration
+8. Tests proving end to end behavior
+
+Do not ship partial scaffolding. Each step depends on the previous step being wired and verified.
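Step 6 of the slice above, the frontend IPC binding, can be sketched in TypeScript. This is a hedged illustration: the `save_prd` command name and `makeSavePrd` factory are hypothetical, not registered commands, and the invoke function is injected so the pattern is testable without a Tauri runtime:

```typescript
type InvokeFn = (command: string, args?: Record<string, unknown>) => Promise<unknown>;

function makeSavePrd(invoke: InvokeFn) {
  // Validate before crossing the IPC boundary so the backend never receives an empty artifact
  return async function savePrd(projectId: string, prdJson: string): Promise<void> {
    if (prdJson.trim().length === 0) {
      throw new Error("refusing to write an empty prd.json");
    }
    await invoke("save_prd", { projectId, prdJson });
  };
}
```

Injecting `invoke` keeps the binding a thin, typed wrapper: production code passes the real Tauri invoke, while tests pass a fake that records which command and arguments crossed the boundary.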
diff --git a/.editorconfig b/.editorconfig
new file mode 100644
index 0000000..6732c0f
--- /dev/null
+++ b/.editorconfig
@@ -0,0 +1,22 @@
+root = true
+
+[*]
+end_of_line = lf
+insert_final_newline = true
+trim_trailing_whitespace = true
+charset = utf-8
+
+[*.{ts,tsx,js,json,css,html}]
+indent_style = space
+indent_size = 2
+
+[*.rs]
+indent_style = space
+indent_size = 4
+
+[*.toml]
+indent_style = space
+indent_size = 2
+
+[*.md]
+trim_trailing_whitespace = false
diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS
new file mode 100644
index 0000000..b620ac3
--- /dev/null
+++ b/.github/CODEOWNERS
@@ -0,0 +1,11 @@
+* @taberoajorge
+
+/.github/ @taberoajorge
+/src-tauri/ @taberoajorge
+/crates/ @taberoajorge
+/src/ @taberoajorge
+
+SECURITY.md @taberoajorge
+LICENSE @taberoajorge
+README.md @taberoajorge
+CONTRIBUTING.md @taberoajorge
diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml
new file mode 100644
index 0000000..8c7a03e
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/bug_report.yml
@@ -0,0 +1,96 @@
+name: Bug report
+description: Report a reproducible defect in LoopForge
+labels:
+  - bug
+  - triage
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Thanks for taking the time to report a defect. Please fill every section so the maintainers can reproduce the issue fast.
+
+  - type: checkboxes
+    id: preflight
+    attributes:
+      label: Preflight checks
+      options:
+        - label: I searched existing issues and did not find a duplicate.
+          required: true
+        - label: I am running the latest release or a recent `main` build.
+          required: true
+        - label: I reviewed the troubleshooting section of the README.
+          required: true
+
+  - type: input
+    id: loopforge_version
+    attributes:
+      label: LoopForge version
+      description: Output of the `About` dialog or the release tag you are running.
+      placeholder: "v0.1.0"
+    validations:
+      required: true
+
+  - type: dropdown
+    id: platform
+    attributes:
+      label: Operating system
+      options:
+        - macOS
+        - Linux
+        - Windows
+        - Other
+    validations:
+      required: true
+
+  - type: input
+    id: os_version
+    attributes:
+      label: Operating system version
+      placeholder: "macOS 15.2, Ubuntu 22.04, Windows 11 23H2"
+    validations:
+      required: true
+
+  - type: input
+    id: agent_cli
+    attributes:
+      label: Agent CLI and version
+      placeholder: "claude-code 1.2.3, codex 0.4.0, cursor-agent 0.9.1"
+    validations:
+      required: true
+
+  - type: textarea
+    id: summary
+    attributes:
+      label: Summary of the defect
+      description: What did you expect to happen and what actually happened.
+    validations:
+      required: true
+
+  - type: textarea
+    id: reproduction
+    attributes:
+      label: Reproduction steps
+      description: Minimal steps that reliably reproduce the defect.
+      placeholder: |
+        1. Open LoopForge
+        2. Start the wizard and load `plan.md`
+        3. Launch the loop and observe the failure
+    validations:
+      required: true
+
+  - type: textarea
+    id: logs
+    attributes:
+      label: Relevant logs
+      description: Paste the contents of `.loopforge/activity.log` and `.loopforge/error.log` from the project directory.
+      render: shell
+    validations:
+      required: false
+
+  - type: textarea
+    id: extra
+    attributes:
+      label: Additional context
+      description: Screenshots, configuration snippets, or anything else that helps.
+    validations:
+      required: false
diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml
new file mode 100644
index 0000000..f698c3d
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,8 @@
+blank_issues_enabled: false
+contact_links:
+  - name: Security vulnerability
+    url: https://github.com/taberoajorge/loopforge/security/advisories/new
+    about: Report a security vulnerability through a private advisory, never in a public issue.
+  - name: Question or discussion
+    url: https://github.com/taberoajorge/loopforge/discussions
+    about: Ask questions or share ideas in Discussions once they are enabled.
diff --git a/.github/ISSUE_TEMPLATE/feature_request.yml b/.github/ISSUE_TEMPLATE/feature_request.yml
new file mode 100644
index 0000000..9d1d8dd
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/feature_request.yml
@@ -0,0 +1,52 @@
+name: Feature request
+description: Propose a new capability or an enhancement for LoopForge
+labels:
+  - enhancement
+  - triage
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Thanks for suggesting an improvement. Please describe the outcome first, then the implementation ideas.
+
+  - type: checkboxes
+    id: preflight
+    attributes:
+      label: Preflight checks
+      options:
+        - label: I searched existing issues and discussions and did not find a duplicate proposal.
+          required: true
+        - label: The feature aligns with the scope described in the README.
+          required: true
+
+  - type: textarea
+    id: problem
+    attributes:
+      label: Problem statement
+      description: What situation or pain point motivates the request.
+    validations:
+      required: true
+
+  - type: textarea
+    id: proposal
+    attributes:
+      label: Proposal
+      description: A short description of the desired behaviour or workflow.
+    validations:
+      required: true
+
+  - type: textarea
+    id: alternatives
+    attributes:
+      label: Alternatives considered
+      description: Any other approaches you weighed and why they fall short.
+    validations:
+      required: false
+
+  - type: textarea
+    id: context
+    attributes:
+      label: Additional context
+      description: Mockups, command examples, related prior art.
+    validations:
+      required: false
diff --git a/.github/dependabot.yml b/.github/dependabot.yml
new file mode 100644
index 0000000..ee0ac38
--- /dev/null
+++ b/.github/dependabot.yml
@@ -0,0 +1,64 @@
+version: 2
+updates:
+  - package-ecosystem: "github-actions"
+    directory: "/"
+    schedule:
+      interval: "weekly"
+      day: "monday"
+      time: "08:00"
+      timezone: "America/Bogota"
+    open-pull-requests-limit: 5
+    labels:
+      - "dependencies"
+      - "ci"
+    commit-message:
+      prefix: "ci"
+      include: "scope"
+
+  - package-ecosystem: "cargo"
+    directory: "/"
+    schedule:
+      interval: "weekly"
+      day: "monday"
+      time: "08:00"
+      timezone: "America/Bogota"
+    open-pull-requests-limit: 10
+    labels:
+      - "dependencies"
+      - "rust"
+    commit-message:
+      prefix: "deps"
+      include: "scope"
+    groups:
+      rust-patch:
+        applies-to: version-updates
+        update-types:
+          - "patch"
+      rust-minor:
+        applies-to: version-updates
+        update-types:
+          - "minor"
+
+  - package-ecosystem: "npm"
+    directory: "/"
+    schedule:
+      interval: "weekly"
+      day: "monday"
+      time: "08:00"
+      timezone: "America/Bogota"
+    open-pull-requests-limit: 10
+    labels:
+      - "dependencies"
+      - "javascript"
+    commit-message:
+      prefix: "deps"
+      include: "scope"
+    groups:
+      js-patch:
+        applies-to: version-updates
+        update-types:
+          - "patch"
+      js-minor:
+        applies-to: version-updates
+        update-types:
+          - "minor"
diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml
index bd67625..dc1794d 100644
--- a/.github/workflows/release.yml
+++ b/.github/workflows/release.yml
@@ -29,10 +29,10 @@ jobs:
     runs-on: ${{ matrix.platform }}
 
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
 
       - name: Install Rust stable
-        uses: dtolnay/rust-toolchain@stable
+        uses: dtolnay/rust-toolchain@29eef336d9b2848a0b548edc03f92a220660cdb8 # stable
         with:
           targets: ${{ matrix.rust_targets }}
 
@@ -43,18 +43,18 @@
           sudo apt-get install -y libwebkit2gtk-4.1-dev libappindicator3-dev librsvg2-dev patchelf
 
       - name: Setup Node
-        uses: actions/setup-node@v4
+        uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
         with:
           node-version: lts/*
 
       - name: Setup Bun
-        uses: oven-sh/setup-bun@v2
+        uses: oven-sh/setup-bun@0c5077e51419868618aeaa5fe8019c62421857d6 # v2.2.0
 
       - name: Install frontend dependencies
         run: bun install
 
       - name: Build and publish
-        uses: tauri-apps/tauri-action@v0
+        uses: tauri-apps/tauri-action@84b9d35b5fc46c1e45415bdb6144030364f7ebc5 # v0.6.2
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           TAURI_SIGNING_PRIVATE_KEY: ${{ secrets.TAURI_SIGNING_PRIVATE_KEY }}
diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml
index 919fb9e..732650f 100644
--- a/.github/workflows/test.yml
+++ b/.github/workflows/test.yml
@@ -14,10 +14,10 @@ jobs:
   typecheck:
     runs-on: ubuntu-22.04
    steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
 
       - name: Setup Bun
-        uses: oven-sh/setup-bun@v2
+        run: npm install -g bun@1.3.10
 
       - name: Install frontend dependencies
         run: bun install
 
@@ -25,6 +25,20 @@
       - name: TypeScript check
         run: bun run typecheck
 
+  lint:
+    runs-on: ubuntu-22.04
+    steps:
+      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+
+      - name: Setup Bun
+        run: npm install -g bun@1.3.10
+
+      - name: Install frontend dependencies
+        run: bun install
+
+      - name: Biome lint
+        run: bun run lint
+
   build:
     strategy:
       fail-fast: false
@@ -46,15 +60,15 @@
     runs-on: ${{ matrix.platform }}
 
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
 
       - name: Install Rust stable
-        uses: dtolnay/rust-toolchain@stable
+        uses: dtolnay/rust-toolchain@29eef336d9b2848a0b548edc03f92a220660cdb8 # stable
         with:
           targets: ${{ matrix.rust_targets }}
 
       - name: Rust cache
-        uses: swatinem/rust-cache@v2
+        uses: swatinem/rust-cache@e18b497796c12c097a38f9edb9d0641fb99eee32 # v2.9.1
         with:
           workspaces: src-tauri
 
@@ -65,34 +79,32 @@
           sudo apt-get install -y libwebkit2gtk-4.1-dev libappindicator3-dev librsvg2-dev patchelf
 
       - name: Setup Node
-        uses: actions/setup-node@v4
+        uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
         with:
           node-version: lts/*
 
       - name: Setup Bun
-        uses: oven-sh/setup-bun@v2
+        run: npm install -g bun@1.3.10
 
       - name: Install frontend dependencies
         run: bun install
 
       - name: Build Tauri app
-        uses: tauri-apps/tauri-action@v0
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        with:
-          tauriScript: bunx tauri
-          args: ${{ matrix.args }}
+        shell: bash
+        run: bunx tauri build ${{ matrix.args }} --config '{"bundle":{"createUpdaterArtifacts":false}}'
 
   rust-tests:
     runs-on: ubuntu-22.04
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
 
       - name: Install Rust stable
-        uses: dtolnay/rust-toolchain@stable
+        uses: dtolnay/rust-toolchain@29eef336d9b2848a0b548edc03f92a220660cdb8 # stable
+        with:
+          components: clippy, rustfmt
 
       - name: Rust cache
-        uses: swatinem/rust-cache@v2
+        uses: swatinem/rust-cache@e18b497796c12c097a38f9edb9d0641fb99eee32 # v2.9.1
 
       - name: Install Linux dependencies
         run: |
@@ -101,3 +113,9 @@
       - name: Run tests
         run: cargo test
+
+      - name: Rust format check
+        run: cargo fmt --all -- --check
+
+      - name: Rust clippy
+        run: cargo clippy --workspace -- -D warnings
diff --git a/.gitignore b/.gitignore
index 7729132..5605e7a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -23,8 +23,16 @@
 ralph/
 ralph 2/
 openspec/
-.claude/prompts/
-.claude/worktrees/
+.loopforge/
+
+.claude/
+.codex/
+.gemini/
+.opencode/
+.cursor/commands/
+.cursor/skills/
+.cursor/agents/
+.cursor/mcp.json
 
 plan.md
 prd.json
diff --git a/AGENTS.md b/AGENTS.md
index 4ae8682..8c9c285 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -1,143 +1,140 @@
 # LoopForge
 
-Desktop AI loop orchestrator. Tauri v2 (Rust + React). Plan → Atomize → Execute → Monitor.
+Desktop AI loop orchestrator. Tauri v2 with Rust and React. Workflow: Plan, Atomize, Execute, Monitor.
 
 ## Stack
 
-- Backend: Rust (Cargo workspace)
-  - `crates/ralph-core` — library crate: loop engine, PRD, prompt builder, providers, detection, verification
-  - `src-tauri/` — Tauri v2 app binary, consumes ralph-core via Cargo path dependency
+- Backend: Rust Cargo workspace
+  - `crates/ralph-core` for pure loop logic
+  - `src-tauri/` for Tauri binary and adapters
 - Frontend: React 19, TypeScript 5, Tailwind CSS 4, Zustand 5
-- Storage: rusqlite (SQLite, bundled) + filesystem artifacts
-- Desktop: Tauri v2 (system webview, tray icon, native notifications)
-- Templates: MiniJinja (.j2) for atomization pipeline
-- Package manager: Bun (frontend), Cargo (Rust)
+- Storage: `rusqlite` for metadata and filesystem artifacts for project state
+- Templates: MiniJinja `.j2`
+- Package managers: Bun and Cargo
 
 ## Commands
 
-```
-bun run tauri dev    # full app development (backend + frontend)
-bun run tauri build  # production build
-cargo test           # Rust tests (from workspace root)
-bun run dev          # frontend only (Vite dev server on :1420)
-bun run build        # frontend production build
-bun run typecheck    # TypeScript type check (tsc --noEmit)
+```bash
+bun run tauri dev
+bun run tauri build
+cargo test
+bun run dev
+bun run build
+bun run typecheck
 ```
 
 ## Architecture
 
-Three-layer hexagonal architecture:
+Three layer hexagonal architecture with explicit boundaries.
 
 | Layer | Directory | Responsibility |
-|-------|-----------|----------------|
-| Domain | `crates/ralph-core/src/` | Pure loop logic, no UI, no Tauri deps |
-| Shell | `src-tauri/src/` | Tauri commands, events, SQLite, filesystem, tray |
-| UI | `src/` | React pages, Zustand stores, Tailwind styling |
+|------|------|------|
+| Domain | `crates/ralph-core/src/` | Pure loop logic without UI or Tauri dependencies |
+| Shell | `src-tauri/src/` | Tauri commands, events, SQLite, filesystem integration |
+| UI | `src/` | React pages, components, stores, and rendering |
+
+IPC contract: `#[tauri::command]` for request response and `app.emit()` for streaming events.
 
-IPC: `#[tauri::command]` for request/response, `app.emit()` for streaming events.
+## Architecture Rationale
+
+- Keep loop logic testable and independent from desktop runtime
+- Keep persistence in Rust services through `DbState` and `rusqlite`
+- Keep project artifacts as durable handoff files between wizard and runtime
+- Keep UI stores as rendering state, never as canonical persisted state
+- Keep atomization deterministic through MiniJinja templates
 
 ## Conventions
 
-- No comments, no JSDoc in code
-- No single-character variables
-- No eslint-disable
-- Max 200 lines per file
+- No comments and no JSDoc in code
+- No single character variable names
+- No `eslint-disable`
+- Max 200 lines per Rust and TypeScript file
 - Commit format: `type(scope): description`
-- Types: feat, fix, refactor, test, chore
-- Scopes: core, tauri, frontend, tokens, docs
-- One atomic commit per logical unit of work
+- Atomic commits by logical unit
 
-## Rust
+## Rust Rules
 
-- `thiserror` for error types in ralph-core, `anyhow` for application errors in src-tauri
-- No `unwrap()` in library code (ralph-core); acceptable in tests only
-- No `println!` in library code; use `tracing` or `log`
-- All new struct fields use `#[serde(default)]` for backward compatibility
-- Async runtime: tokio (full features)
-- Child process management: `tokio::process::Command`
-- Template engine: minijinja (NOT Handlebars)
+- Use `thiserror` in `ralph-core` and `anyhow` in `src-tauri`
+- No `unwrap()` in `ralph-core` non test code
+- No `println!` in library code, use `tracing` or `log`
+- New struct fields must use `#[serde(default)]`
+- Child processes use `tokio::process::Command`
 
-## React / TypeScript
+## React and TypeScript Rules
 
 - Functional components only
-- State management: Zustand (stores for app-level state only, NOT as source of truth for artifacts)
-- Styling: Tailwind CSS 4 utility classes using design tokens from `src/tokens.css`
-- NEVER use raw hex, rgb, or arbitrary color values
-- ALWAYS use token classes: `bg-void`, `bg-surface`, `text-primary`, `border-border`, etc.
-- Routing: React Router v7
-- Terminal rendering: xterm.js (@xterm/xterm)
-- Markdown: react-markdown + remark-gfm
+- Zustand for app state only, not artifact source of truth
+- Tailwind utility classes using design tokens from `src/tokens.css`
+- Never use raw hex, rgb, or oklch color values
+- Keep bindings in `src/lib/tauri.ts` for invoke and event APIs
+
+## Testing
+
+- Run `cargo test` after backend or core updates
+- Run `bun run typecheck` after frontend TypeScript updates
+- Add integration tests for new vertical slices touching storage to UI
+- Keep Rust unit tests in file `#[cfg(test)]` modules
+- Keep integration tests in `tests/` directories where applicable
+- Validate artifact read and write behavior when changing project flow
+
+## Safety
+
+- Ask for confirmation before destructive git commands or release actions
+- Do not commit secrets or local credential files
+- Do not introduce `tauri-plugin-sql` or frontend SQL plugins
+- Keep frozen modules untouched unless a planned vertical slice requires work
 
 ## File Boundaries
 
 | Directory | Responsibility |
-|-----------|----------------|
-| `crates/ralph-core/src/` | Pure loop logic: engine, PRD, prompt, providers, detection, verification, config |
-| `src-tauri/src/` | Tauri commands, events, SQLite (rusqlite), filesystem artifacts, tray, agent registry |
-| `src-tauri/templates/` | MiniJinja (.j2) templates for atomization pipeline |
-| `src/pages/` | Page-level components: Home, wizard steps, Monitor |
-| `src/components/` | Reusable React components |
-| `src/stores/` | Zustand stores (app-level state only) |
-| `src/lib/` | Tauri IPC bindings, utility functions |
-| `docs/` | Architecture, schema, and token documentation |
-
-## Tauri IPC — Registered Commands
-
-Commands actually registered in `src-tauri/src/lib.rs`:
-
-### Agents
-- `detect_agents` — scan for installed CLI agents
-- `refresh_agents` — re-scan on demand
-
-### Planning
-- `start_plan` — spawn agent CLI in plan mode, stream via Channel
-- `write_to_plan` — forward user input to agent stdin
-- `stop_plan` — terminate plan session
-
-### Projects
-- `create_project`, `finalize_draft`, `discard_draft`
-- `save_wizard_state`, `resume_wizard`
-- `list_projects`, `pause_project`, `resume_project`, `archive_project`
-- `get_project_detail`, `get_project_stories`, `get_guardrails`, `get_project_config`
-- `get_notification_prefs`, `save_notification_prefs`
-- `load_existing_plan`, `load_existing_prd`
-
-### Atomization
-- `run_atomizer` — 4-stage MiniJinja pipeline
-
-### Execution
-- `start_loop`, `stop_loop`, `session_stats`
-
-### Events (app.emit)
-- `agent-output-stream` — raw agent stdout/stderr
-- `iteration_started`, `iteration_completed`
-- `story_blocked`, `rate_limit_detected`, `session_ended`
-
-### Channels
-- `plan-activity` — scoped to `start_plan`, carries `PlanEvent`
-- `atomization-progress` — stage progress during atomization
+|------|------|
+| `crates/ralph-core/src/` | Loop engine, PRD, prompt, providers, detection, verification, config |
+| `src-tauri/src/` | Commands, services, storage, events, process lifecycle |
+| `src-tauri/templates/` | MiniJinja templates for atomizer |
+| `src/pages/` | Home, wizard flow, monitor pages |
+| `src/components/` | Reusable UI components |
+| `src/stores/` | Zustand stores for UI state only |
+| `src/lib/` | IPC wrappers and utilities |
+| `docs/` | Architecture and schema references |
+
+## Tauri IPC Registered Command Groups
+
+- Agents: `detect_agents`, `refresh_agents`
+- Planning: `start_plan`, `write_to_plan`, `stop_plan`
+- Projects: create, finalize, discard, save and resume wizard, detail and config reads
+- Atomization: `run_atomizer`
+- Execution: `start_loop`, `stop_loop`, `session_stats`
+- Streams: `agent-output-stream`, `iteration_started`, `iteration_completed`, `story_blocked`, `rate_limit_detected`, `session_ended`
+- Channels: `plan-activity`, `atomization-progress`
 
 ## Prohibitions
 
-- NEVER create files larger than 200 lines
-- NEVER use Zustand as source of truth for filesystem artifacts (plan.md, prd.json, config.json)
-- NEVER add `unwrap()` to ralph-core non-test code
-- NEVER use `println!` in library code
-- NEVER use raw color values (hex, rgb, oklch literals) — use design token classes
-- NEVER wire frozen modules into new code (connections, plugins, scm_watcher, ephemeral_query, summary_generator)
-- NEVER skip the vertical slice workflow (storage → command → service → event → UI)
-- NEVER mark a feature as done without end-to-end testing
-- NEVER use `tauri-plugin-sql` — the app uses `rusqlite` directly via `DbState`
+- Never create files above 200 lines in Rust or TypeScript
+- Never use Zustand as source of truth for `plan.md`, `prd.json`, `config.json`, `draft.json`
+- Never wire frozen modules into new code without planned vertical slice
+- Never skip full vertical slice ordering for new features
+- Never mark work complete without end to end validation
+- Never use `tauri-plugin-sql`, the app uses `rusqlite` through `DbState`
 
 ## Artifact Contract
 
-Per-project directory at `~/.config/loopforge/projects/<id>/`:
+Per project directory: `~/.config/loopforge/projects/<id>/`
 
 | File | Purpose | Written by | Read by |
-|------|---------|-----------|---------|
-| `draft.json` | Wizard snapshot (all steps) | Wizard step transitions | Wizard resume |
-| `plan.md` | Generated research plan | Plan engine | Atomizer stage 1 |
-| `prd.json` | Atomic user stories | Atomizer stage 4 + UI edits | Loop engine |
+|------|------|------|------|
+| `draft.json` | Wizard snapshot | Wizard transitions | Wizard resume |
+| `plan.md` | Generated plan | Plan engine | Atomizer stage 1 |
+| `prd.json` | Atomic user stories | Atomizer and Atomize UI edits | Loop engine |
 | `config.json` | Execution configuration | Configure step | Loop manager |
-| `prompt.md` | Agent execution instructions | Atomizer stage 4 | Loop engine |
-| `guardrails.md` | Dynamic guardrails | Atomizer + loop engine | Loop engine |
+| `prompt.md` | Execution prompt | Atomizer stage 4 | Loop engine |
+| `guardrails.md` | Dynamic guardrails | Atomizer and loop engine | Loop engine |
+
+## Scoped Cursor Rules
+
+- `.cursor/rules/rust.mdc`
+- `.cursor/rules/react.mdc`
+- `.cursor/rules/tauri.mdc`
+- `.cursor/rules/frozen-modules.mdc`
+- `.cursor/rules/vertical-slice.mdc`
+- `.cursor/rules/antipatterns.mdc`
+- `.cursor/rules/templates.mdc`
diff --git a/CLAUDE.md b/CLAUDE.md
index 1183815..d466d45 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -1,8 +1,8 @@
 # LoopForge
 
-Desktop AI loop orchestrator. Tauri v2 (Rust backend) + React 19 frontend. Three-phase workflow: Plan → Atomize → Execute with real-time Monitor.
+Desktop AI loop orchestrator. Tauri v2 with Rust backend and React frontend.
 
-**Status**: Reboot in progress. Core flow works structurally but has broken contracts between layers. See "Known Broken Contracts" below.
+This file is a Claude session primer. See `AGENTS.md` for full canonical conventions.
 
 ## Quick Start
 
@@ -13,19 +13,6 @@
 bun run dev        # frontend only (Vite on :1420)
 bun run typecheck  # TypeScript check
 ```
 
-## Architecture
-
-Three-layer hexagonal: domain → shell → presentation.
-
-```
-ralph-core (library, zero Tauri deps)
-  └── loop_engine, prd, prompt, providers, detection, verification, config
-src-tauri (Tauri v2 binary, consumes ralph-core)
-  └── 16 modules in src-tauri/src/ (flat, being restructured to commands/services/storage)
-src/ (React 19 + TypeScript + Tailwind CSS 4 + Zustand)
-  └── pages/ (Home, wizard/[Describe,Plan,Atomize,Configure,Launch], monitor/Monitor)
-```
-
 ## Critical Rules
 
 - Max 200 lines per file (Rust and TypeScript)
@@ -38,6 +25,7 @@
 - No `println!` in library code; use `tracing` or `log`
 - All new struct fields use `#[serde(default)]`
 - Atomic commits: `type(scope): description`
+- Do not use `tauri-plugin-sql`; use backend `rusqlite` via `DbState`
 
 ## Source of Truth
 
@@ -50,73 +38,15 @@
 | Wizard state | `draft.json` on filesystem | NOT `wizard_state_json = "{}"` |
 | Loop runtime state | Backend `LoopManagerState` | Frontend only renders snapshots |
 
-## Known Broken Contracts (being fixed)
-
-1. **Plan edits don't persist** — `Plan.tsx` edits stay in local state, atomizer reads `plan.md` from disk
-2. **PRD edits don't reach runtime** — Zustand edits never saved to `prd.json`, loop runs stale version
-3. **Configure is decorative** — UI captures 9 fields, `StartLoopArgs` only carries 4, `build_ralph_config` only applies `max_iterations`
-4. **Wizard persistence is nominal** — saves `"{}"` on exit, rehidration ignores `wizard_state_json`
-5. **Plan cleanup is a stub** — `cleanup_session_by_id` is empty, child processes leak
-6. **V2 features are facades** — ephemeral_query returns "future update", plugin_registry is static, scm_watcher not integrated
-7. **Summary uses wrong directory** — git diff runs against artifacts dir, not working directory
-
-## Antipatterns
-
-BAD: Edit stories in Zustand without persisting → loop runs stale PRD
-GOOD: Persist edits to `prd.json` via IPC command, UI reads back from backend
-
-BAD: Store wizard state as `"{}"` in SQLite
-GOOD: Write `draft.json` with full wizard snapshot to filesystem
-
-BAD: `StartLoopArgs` with 4 fields when Configure captures 9
-GOOD: `config.json` as mandatory artifact before launch, all fields flow to `build_ralph_config`
-
-BAD: `fn cleanup_session_by_id(_id: &str) {}`
-GOOD: Kill child process, remove session from state, flush partial `plan.md`
-
-BAD: Create 800-line module with mixed concerns
-GOOD: Split into commands/ (thin handlers), services/ (logic), storage/ (data access)
-
-## Frozen Modules (DO NOT TOUCH)
-
-These modules exist but are not connected end-to-end. Do not modify, extend, or wire them into new code until their vertical slice is planned:
-
-- `connections.rs` + `Connections.tsx` — multi-repo workspace
-- `plugin_registry.rs` + `Plugins.tsx` — plugin system
-- `scm_watcher.rs` — review comment routing
-- `ephemeral_query.rs` + `EphemeralOverlay.tsx` — query during execution
-- `summary_generator.rs` — proof-of-work reports
-
-## Vertical Slice Workflow
-
-When adding a new feature, implement in this order:
-
-1. **Storage** — SQLite migration or filesystem artifact schema
-2. **Models** — Rust structs with `Serialize`/`Deserialize`
-3. **Service** — Business logic in `src-tauri/src/`
-4. **Command** — `#[tauri::command]` thin wrapper
-5. **Event** — Add to typed event catalog if streaming is needed
-6. **IPC binding** — TypeScript invoke wrapper in `src/lib/tauri.ts`
-7. **UI** — React component consuming the IPC binding
-8. **Test** — Integration test proving the full slice works
-
-Never ship a step without the ones before it. No "scaffolding" without wiring.
-
-## Artifact Contract
-
-Per-project directory at `~/.config/loopforge/projects/<id>/`:
+## Safety
 
-| File | Written by | Read by |
-|------|-----------|---------|
-| `draft.json` | Wizard (each step transition) | Wizard (resume on reopen) |
-| `plan.md` | Plan engine (streaming + flush) | Atomizer (stage 1 input) |
-| `prd.json` | Atomizer (stage 4 output), Atomize UI (edits) | Loop engine (story iteration) |
-| `config.json` | Configure step | Loop manager (`build_ralph_config`) |
-| `prompt.md` | Atomizer (stage 4) | Loop engine (prompt builder) |
-| `guardrails.md` | Atomizer (stage 4), loop engine (appends) | Loop engine (context) |
+- Ask before running release, deploy, or destructive git operations
+- Prefer read and verify first when touching contracts across frontend and backend
+- Keep `AGENTS.md` as the canonical full instruction set
 
 ## Key References
 
+- @AGENTS.md
 - @docs/ARCHITECTURE.md — module boundaries, data flow
 - @docs/STORY_SCHEMA.md — UserStory JSON schema
 - @docs/DESIGN_TOKENS.md — OKLCH colors, fonts, token classes
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 0000000..9aecb08
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,81 @@
+# Contributing to LoopForge
+
+Thank you for taking the time to contribute. LoopForge is a focused desktop product with strict conventions so that agent driven iteration on the codebase stays predictable. Please read this document before opening a pull request.
+
+## Before you start
+
+1. Read [AGENTS.md](AGENTS.md). It is the canonical spec for layer boundaries, the artifact contract and the conventions every contribution follows.
+2. Skim [CLAUDE.md](CLAUDE.md) for a shorter Claude oriented primer.
+3. Review the scoped rules in `.cursor/rules/` that apply to the area you plan to touch (`rust.mdc`, `react.mdc`, `tauri.mdc`, `vertical-slice.mdc`, `templates.mdc`, `antipatterns.mdc`, `frozen-modules.mdc`).
+ +## Development setup + +```bash +git clone https://github.com/taberoajorge/loopforge.git +cd loopforge +bun install +bun run tauri dev +``` + +Prerequisites: Rust 1.77 or newer, Bun 1.3 or newer, and the platform toolchain required by Tauri. See the [Tauri prerequisites guide](https://tauri.app/start/prerequisites/). + +## Conventions + +- Max 200 lines per Rust or TypeScript file. +- No comments, no JSDoc, no single character variables, no `eslint-disable`. +- Use design tokens from `src/tokens.css` for colour. Never use raw hex, rgb or oklch values in components. +- Use `thiserror` in `crates/ralph-core` and `anyhow` in `src-tauri`. +- Child processes always go through `tokio::process::Command`. +- New struct fields must use `#[serde(default)]`. +- Frontend state lives in Zustand stores. The filesystem and SQLite remain the source of truth for `plan.md`, `prd.json`, `config.json` and `draft.json`. + +## Commit format + +``` +type(scope): description +``` + +Where `type` is one of `feat`, `fix`, `refactor`, `chore`, `docs`, `test`, `style`, `perf`, `ci` and `scope` is the touched module. + +Example: `feat(loop-engine): propagate rate limit notices to the monitor`. + +Commits are atomic by logical unit. If a change touches the loop engine and the UI, split it into two commits. + +## Testing gate + +Run the relevant checks before opening a pull request: + +```bash +cargo test --workspace +bun run typecheck +bun run test +bun run lint +``` + +For user facing changes also run at least one smoke e2e: + +```bash +bun run e2e:smoke +``` + +## Opening a pull request + +1. Branch off `main` with a descriptive name (`feat/monitor-cost-tab`, `fix/atomizer-json-parse`). +2. Keep the PR focused. Unrelated refactors belong in a separate PR. +3. Fill in the PR description with what changed, why it changed, and how you verified it. +4. Link any related issue or discussion. +5. Wait for CI. Green is required to merge. 
+ +## Proposing larger changes + +For anything that touches a layer boundary, adds a new crate, changes the artifact contract or introduces a new provider, open an issue labelled `proposal` first. Describe the motivation, the proposed vertical slice and the testing plan. A short design exchange there saves a long review cycle later. + +## Reporting bugs + +- Include your operating system, LoopForge version, agent CLI name and version, and the relevant slice of `activity.log` and `error.log`. +- Redact anything that looks like an API key or an email address. +- If the bug is reproducible, include the minimal PRD that triggers it. + +## Security disclosures + +Do not open a public issue for security problems. Email the maintainer listed in `AGENTS.md` with the details. diff --git a/Cargo.lock b/Cargo.lock index 85c9299..e916761 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -87,6 +87,15 @@ version = "1.0.102" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7f202df86484c868dbad7eaa557ef785d5c66295e41b460ef922eca0723b842c" +[[package]] +name = "app-services" +version = "0.1.0" +dependencies = [ + "serde", + "serde_json", + "thiserror 2.0.18", +] + [[package]] name = "arbitrary" version = "1.4.2" @@ -301,6 +310,26 @@ version = "0.22.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6" +[[package]] +name = "bindgen" +version = "0.72.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "993776b509cfb49c750f11b8f07a46fa23e0a1386ffc01fb1e7d343efc387895" +dependencies = [ + "bitflags 2.11.0", + "cexpr", + "clang-sys", + "itertools", + "log", + "prettyplease", + "proc-macro2", + "quote", + "regex", + "rustc-hash", + "shlex", + "syn 2.0.117", +] + [[package]] name = "bit-set" version = "0.8.0" @@ -569,6 +598,15 @@ version = "1.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"6d43a04d8753f35258c91f8ec639f792891f748a1edbd759cf1dcea3382ad83c" +[[package]] +name = "cexpr" +version = "0.6.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6fac387a98bb7c37292057cffc56d62ecb629900026402633ae9160df93a8766" +dependencies = [ + "nom 7.1.3", +] + [[package]] name = "cfb" version = "0.7.3" @@ -616,6 +654,17 @@ dependencies = [ "windows-link 0.2.1", ] +[[package]] +name = "clang-sys" +version = "1.8.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0b023947811758c97c59bf9d1c188fd619ad4718dcaa767947df1cadb14f39f4" +dependencies = [ + "glob", + "libc", + "libloading 0.8.9", +] + [[package]] name = "clipboard-win" version = "5.4.1" @@ -660,16 +709,6 @@ dependencies = [ "version_check", ] -[[package]] -name = "core-foundation" -version = "0.9.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "91e195e091a93c46f7102ec7818a2aa394e1e1771c3ab4825963fa03e45afb8f" -dependencies = [ - "core-foundation-sys", - "libc", -] - [[package]] name = "core-foundation" version = "0.10.1" @@ -693,9 +732,9 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "064badf302c3194842cf2c5d61f56cc88e54a759313879cdf03abdd27d0c3b97" dependencies = [ "bitflags 2.11.0", - "core-foundation 0.10.1", + "core-foundation", "core-graphics-types", - "foreign-types 0.5.0", + "foreign-types", "libc", ] @@ -706,7 +745,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3d44a101f213f6c4cdc1853d4b78aef6db6bdfa3468798cc1d9912f4735013eb" dependencies = [ "bitflags 2.11.0", - "core-foundation 0.10.1", + "core-foundation", "libc", ] @@ -1032,6 +1071,12 @@ version = "1.0.20" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d0881ea181b1df73ff77ffaaf9c7544ecc11e82fba9b5f27b262a3c73a332555" +[[package]] +name = "either" +version = "1.15.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"48c757948c5ede0e46177b7add2e67155f70e33c07fea8284df6576da70b3719" + [[package]] name = "embed-resource" version = "3.0.7" @@ -1269,15 +1314,6 @@ version = "0.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "77ce24cb58228fbb8aa041425bb1050850ac19177686ea6e0f41a70416f56fdb" -[[package]] -name = "foreign-types" -version = "0.3.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f6f339eb8adc052cd2ca78910fda869aefa38d22d5cb648e6485e4d3fc06f3b1" -dependencies = [ - "foreign-types-shared 0.1.1", -] - [[package]] name = "foreign-types" version = "0.5.0" @@ -1285,7 +1321,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d737d9aa519fb7b749cbc3b962edcf310a8dd1f4b67c91c4f83975dbdd17d965" dependencies = [ "foreign-types-macros", - "foreign-types-shared 0.3.1", + "foreign-types-shared", ] [[package]] @@ -1299,12 +1335,6 @@ dependencies = [ "syn 2.0.117", ] -[[package]] -name = "foreign-types-shared" -version = "0.1.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "00b0228411908ca8685dba7fc2cdd70ec9990a6e753e89b6ac91a84c40fbaf4b" - [[package]] name = "foreign-types-shared" version = "0.3.1" @@ -1566,8 +1596,10 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ff2abc00be7fca6ebc474524697ae276ad847ad0a6b3faa4bcb027e9a4614ad0" dependencies = [ "cfg-if", + "js-sys", "libc", "wasi 0.11.1+wasi-snapshot-preview1", + "wasm-bindgen", ] [[package]] @@ -1577,9 +1609,11 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "899def5c37c4fd7b2664648c28120ecec138e4d395b459e5ca34f9cce2dd77fd" dependencies = [ "cfg-if", + "js-sys", "libc", "r-efi 5.3.0", "wasip2", + "wasm-bindgen", ] [[package]] @@ -1743,25 +1777,6 @@ dependencies = [ "syn 2.0.117", ] -[[package]] -name = "h2" -version = "0.4.13" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"2f44da3a8150a6703ed5d34e164b875fd14c2cdab9af1252a9a1020bde2bdc54" -dependencies = [ - "atomic-waker", - "bytes", - "fnv", - "futures-core", - "futures-sink", - "http", - "indexmap 2.13.0", - "slab", - "tokio", - "tokio-util", - "tracing", -] - [[package]] name = "half" version = "2.7.1" @@ -1910,7 +1925,6 @@ dependencies = [ "bytes", "futures-channel", "futures-core", - "h2", "http", "http-body", "httparse", @@ -1936,22 +1950,7 @@ dependencies = [ "tokio", "tokio-rustls", "tower-service", -] - -[[package]] -name = "hyper-tls" -version = "0.6.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "70206fc6890eaca9fde8a0bf71caa2ddfc9fe045ac9e5c70df101a7dbde866e0" -dependencies = [ - "bytes", - "http-body-util", - "hyper", - "hyper-util", - "native-tls", - "tokio", - "tokio-native-tls", - "tower-service", + "webpki-roots", ] [[package]] @@ -1972,11 +1971,9 @@ dependencies = [ "percent-encoding", "pin-project-lite", "socket2", - "system-configuration", "tokio", "tower-service", "tracing", - "windows-registry", ] [[package]] @@ -2208,6 +2205,15 @@ dependencies = [ "once_cell", ] +[[package]] +name = "itertools" +version = "0.13.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "413ee7dfc52ee1a4949ceeb7dbc8a33f2d6c088194d9f922fb8318faf1f01186" +dependencies = [ + "either", +] + [[package]] name = "itoa" version = "1.0.18" @@ -2372,7 +2378,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6e9ec52138abedcc58dc17a7c6c0c00a2bdb4f3427c7f63fa97fd0d859155caf" dependencies = [ "gtk-sys", - "libloading", + "libloading 0.7.4", "once_cell", ] @@ -2392,6 +2398,16 @@ dependencies = [ "winapi", ] +[[package]] +name = "libloading" +version = "0.8.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d7c4b02199fee7c5d21a5ae7d8cfa79a6ef5bb2fc834d6e9058e89c825efdc55" +dependencies = [ + "cfg-if", + "windows-link 0.2.1", +] + [[package]] name = "libredox" version = "0.1.14" 
@@ -2450,6 +2466,7 @@ name = "loopforge" version = "0.1.0" dependencies = [ "anyhow", + "app-services", "chrono", "junction", "log", @@ -2472,6 +2489,19 @@ dependencies = [ "uuid", ] +[[package]] +name = "loopforge-app-core" +version = "0.1.0" +dependencies = [ + "app-services", +] + +[[package]] +name = "lru-slab" +version = "0.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "112b39cec0b298b6c1999fee3e31427f74f676e4cb9879ed1a121b43661a4154" + [[package]] name = "mac" version = "0.1.1" @@ -2569,6 +2599,12 @@ dependencies = [ "serde", ] +[[package]] +name = "minimal-lexical" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "68354c5c6bd36d73ff3feceb05efa59b6acb7626617f4962be322a825e61f79a" + [[package]] name = "minisign-verify" version = "0.2.5" @@ -2628,20 +2664,11 @@ dependencies = [ ] [[package]] -name = "native-tls" -version = "0.2.18" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "465500e14ea162429d264d44189adc38b199b62b1c21eea9f69e4b73cb03bbf2" +name = "native-shell" +version = "0.1.0" dependencies = [ - "libc", - "log", - "openssl", - "openssl-probe", - "openssl-sys", - "schannel", - "security-framework", - "security-framework-sys", - "tempfile", + "app-services", + "loopforge-app-core", ] [[package]] @@ -2674,6 +2701,73 @@ dependencies = [ "jni-sys 0.3.1", ] +[[package]] +name = "netlink-packet-core" +version = "0.7.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "72724faf704479d67b388da142b186f916188505e7e0b26719019c525882eda4" +dependencies = [ + "anyhow", + "byteorder", + "netlink-packet-utils", +] + +[[package]] +name = "netlink-packet-sock-diag" +version = "0.4.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a495cb1de50560a7cd12fdcf023db70eec00e340df81be31cedbbfd4aadd6b76" +dependencies = [ + "anyhow", + "bitflags 1.3.2", + "byteorder", + "libc", + "netlink-packet-core", + 
"netlink-packet-utils", + "smallvec", +] + +[[package]] +name = "netlink-packet-utils" +version = "0.5.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0ede8a08c71ad5a95cdd0e4e52facd37190977039a4704eb82a283f713747d34" +dependencies = [ + "anyhow", + "byteorder", + "paste", + "thiserror 1.0.69", +] + +[[package]] +name = "netlink-sys" +version = "0.8.8" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cd6c30ed10fa69cc491d491b85cc971f6bdeb8e7367b7cde2ee6cc878d583fae" +dependencies = [ + "bytes", + "libc", + "log", +] + +[[package]] +name = "netstat2" +version = "0.11.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "496f264d3ead4870d6b366deb9d20597592d64aac2a907f3e7d07c2325ba4663" +dependencies = [ + "bindgen", + "bitflags 2.11.0", + "byteorder", + "netlink-packet-core", + "netlink-packet-sock-diag", + "netlink-packet-utils", + "netlink-sys", + "num-derive", + "num-traits", + "thiserror 2.0.18", +] + [[package]] name = "new_debug_unreachable" version = "1.0.6" @@ -2686,6 +2780,16 @@ version = "0.1.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "72ef4a56884ca558e5ddb05a1d1e7e1bfd9a68d9ed024c21704cc98872dae1bb" +[[package]] +name = "nom" +version = "7.1.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d273983c5a657a70a3e8f2a01329822f3b8c8172b73826411a55751e404a0a4a" +dependencies = [ + "memchr", + "minimal-lexical", +] + [[package]] name = "nom" version = "8.0.0" @@ -2709,12 +2813,32 @@ dependencies = [ "zbus", ] +[[package]] +name = "ntapi" +version = "0.4.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c3b335231dfd352ffb0f8017f3b6027a4917f7df785ea2143d8af2adc66980ae" +dependencies = [ + "winapi", +] + [[package]] name = "num-conv" version = "0.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"cf97ec579c3c42f953ef76dbf8d55ac91fb219dde70e49aa4a6b7d74e9919050" +[[package]] +name = "num-derive" +version = "0.3.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "876a53fff98e03a936a674b29568b0e605f06b29372c2489ff4de23f1949743d" +dependencies = [ + "proc-macro2", + "quote", + "syn 1.0.109", +] + [[package]] name = "num-traits" version = "0.2.19" @@ -2831,6 +2955,16 @@ dependencies = [ "objc2-core-foundation", ] +[[package]] +name = "objc2-io-kit" +version = "0.3.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "33fafba39597d6dc1fb709123dfa8289d39406734be322956a69f0931c73bb15" +dependencies = [ + "libc", + "objc2-core-foundation", +] + [[package]] name = "objc2-io-surface" version = "0.3.2" @@ -2910,50 +3044,12 @@ dependencies = [ "pathdiff", ] -[[package]] -name = "openssl" -version = "0.10.76" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "951c002c75e16ea2c65b8c7e4d3d51d5530d8dfa7d060b4776828c88cfb18ecf" -dependencies = [ - "bitflags 2.11.0", - "cfg-if", - "foreign-types 0.3.2", - "libc", - "once_cell", - "openssl-macros", - "openssl-sys", -] - -[[package]] -name = "openssl-macros" -version = "0.1.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a948666b637a0f465e8564c73e89d4dde00d72d4d473cc972f390fc3dcee7d9c" -dependencies = [ - "proc-macro2", - "quote", - "syn 2.0.117", -] - [[package]] name = "openssl-probe" version = "0.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7c87def4c32ab89d880effc9e097653c8da5d6ef28e6b539d313baaacfbafcbe" -[[package]] -name = "openssl-sys" -version = "0.9.112" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "57d55af3b3e226502be1526dfdba67ab0e9c96fc293004e79576b2b9edb0dbdb" -dependencies = [ - "cc", - "libc", - "pkg-config", - "vcpkg", -] - [[package]] name = "option-ext" version = "0.2.0" @@ -3048,6 +3144,12 @@ dependencies = [ 
"windows-link 0.2.1", ] +[[package]] +name = "paste" +version = "1.0.15" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "57c0d7b74b563b49d38dae00a0c37d4d6de9b432382b2892f0574ddcae73fd0a" + [[package]] name = "pathdiff" version = "0.2.3" @@ -3513,6 +3615,61 @@ dependencies = [ "memchr", ] +[[package]] +name = "quinn" +version = "0.11.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b9e20a958963c291dc322d98411f541009df2ced7b5a4f2bd52337638cfccf20" +dependencies = [ + "bytes", + "cfg_aliases", + "pin-project-lite", + "quinn-proto", + "quinn-udp", + "rustc-hash", + "rustls", + "socket2", + "thiserror 2.0.18", + "tokio", + "tracing", + "web-time", +] + +[[package]] +name = "quinn-proto" +version = "0.11.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "434b42fec591c96ef50e21e886936e66d3cc3f737104fdb9b737c40ffb94c098" +dependencies = [ + "bytes", + "getrandom 0.3.4", + "lru-slab", + "rand 0.9.2", + "ring", + "rustc-hash", + "rustls", + "rustls-pki-types", + "slab", + "thiserror 2.0.18", + "tinyvec", + "tracing", + "web-time", +] + +[[package]] +name = "quinn-udp" +version = "0.5.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "addec6a0dcad8a8d96a771f815f0eaf55f9d1805756410b39f5fa81332574cbd" +dependencies = [ + "cfg_aliases", + "libc", + "once_cell", + "socket2", + "tracing", + "windows-sys 0.60.2", +] + [[package]] name = "quote" version = "1.0.45" @@ -3546,10 +3703,12 @@ version = "0.1.0" dependencies = [ "anyhow", "chrono", + "netstat2", "regex", "reqwest 0.12.28", "serde", "serde_json", + "sysinfo", "tempfile", "thiserror 2.0.18", "tokio", @@ -3768,29 +3927,26 @@ checksum = "eddd3ca559203180a307f12d114c268abf583f59b03cb906fd0b3ff8646c1147" dependencies = [ "base64 0.22.1", "bytes", - "encoding_rs", "futures-core", - "h2", "http", "http-body", "http-body-util", "hyper", "hyper-rustls", - "hyper-tls", "hyper-util", "js-sys", "log", - 
"mime", - "native-tls", "percent-encoding", "pin-project-lite", + "quinn", + "rustls", "rustls-pki-types", "serde", "serde_json", "serde_urlencoded", "sync_wrapper", "tokio", - "tokio-native-tls", + "tokio-rustls", "tower", "tower-http", "tower-service", @@ -3798,6 +3954,7 @@ dependencies = [ "wasm-bindgen", "wasm-bindgen-futures", "web-sys", + "webpki-roots", ] [[package]] @@ -3996,6 +4153,7 @@ version = "1.14.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "be040f8b0a225e40375822a563fa9524378b9d63112f53e19ffff34df5d33fdd" dependencies = [ + "web-time", "zeroize", ] @@ -4005,7 +4163,7 @@ version = "0.6.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1d99feebc72bae7ab76ba994bb5e121b8d83d910ca40b36e0921f53becc41784" dependencies = [ - "core-foundation 0.10.1", + "core-foundation", "core-foundation-sys", "jni", "log", @@ -4137,7 +4295,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b7f4bc775c73d9a02cde8bf7b2ec4c9d12743edf609006c7facc23998404cd1d" dependencies = [ "bitflags 2.11.0", - "core-foundation 0.10.1", + "core-foundation", "core-foundation-sys", "libc", "security-framework-sys", @@ -4653,24 +4811,17 @@ dependencies = [ ] [[package]] -name = "system-configuration" -version = "0.7.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a13f3d0daba03132c0aa9767f98351b3488edc2c100cda2d2ec2b04f3d8d3c8b" -dependencies = [ - "bitflags 2.11.0", - "core-foundation 0.9.4", - "system-configuration-sys", -] - -[[package]] -name = "system-configuration-sys" -version = "0.6.0" +name = "sysinfo" +version = "0.37.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8e1d1b10ced5ca923a1fcb8d03e96b8d3268065d724548c0211415ff6ac6bac4" +checksum = "16607d5caffd1c07ce073528f9ed972d88db15dd44023fa57142963be3feb11f" dependencies = [ - "core-foundation-sys", "libc", + "memchr", + "ntapi", + "objc2-core-foundation", + "objc2-io-kit", + 
"windows", ] [[package]] @@ -4694,7 +4845,7 @@ checksum = "9103edf55f2da3c82aea4c7fab7c4241032bfeea0e71fa557d98e00e7ce7cc20" dependencies = [ "bitflags 2.11.0", "block2", - "core-foundation 0.10.1", + "core-foundation", "core-graphics", "crossbeam-channel", "dispatch2", @@ -5326,16 +5477,6 @@ dependencies = [ "syn 2.0.117", ] -[[package]] -name = "tokio-native-tls" -version = "0.3.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bbae76ab933c85776efabc971569dd6119c580d8f5d448769dec1764bf796ef2" -dependencies = [ - "native-tls", - "tokio", -] - [[package]] name = "tokio-rustls" version = "0.26.4" @@ -5569,7 +5710,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b8765b90061cba6c22b5831f675da109ae5561588290f9fa2317adab2714d5a6" dependencies = [ "memchr", - "nom", + "nom 8.0.0", "petgraph", ] @@ -6001,6 +6142,16 @@ dependencies = [ "wasm-bindgen", ] +[[package]] +name = "web-time" +version = "1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5a6580f308b1fad9207618087a65c04e7a10bc77e02c8e84e9b00dd4b12fa0bb" +dependencies = [ + "js-sys", + "wasm-bindgen", +] + [[package]] name = "web_atoms" version = "0.2.3" @@ -6066,6 +6217,15 @@ dependencies = [ "rustls-pki-types", ] +[[package]] +name = "webpki-roots" +version = "1.0.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "22cfaf3c063993ff62e73cb4311efde4db1efb31ab78a3e5c457939ad5cc0bed" +dependencies = [ + "rustls-pki-types", +] + [[package]] name = "webview2-com" version = "0.38.2" @@ -6257,17 +6417,6 @@ dependencies = [ "windows-link 0.1.3", ] -[[package]] -name = "windows-registry" -version = "0.6.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "02752bf7fbdcce7f2a27a742f798510f3e5ad88dbe84871e5168e2120c3d5720" -dependencies = [ - "windows-link 0.2.1", - "windows-result 0.4.1", - "windows-strings 0.5.1", -] - [[package]] name = "windows-result" version = 
"0.3.4" diff --git a/Cargo.toml b/Cargo.toml index ab32a14..df01094 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -2,7 +2,45 @@ members = [ "crates/app-services", "crates/loopforge-app-core", + "crates/native-shell", "crates/ralph-core", "src-tauri", ] resolver = "2" + +[workspace.lints.clippy] +all = { level = "deny", priority = -1 } +pedantic = { level = "deny", priority = -1 } +unused_async = "allow" +format_push_string = "allow" +needless_pass_by_value = "allow" +cast_possible_truncation = "allow" +cast_precision_loss = "allow" +cast_sign_loss = "allow" +cast_possible_wrap = "allow" +too_many_lines = "allow" +too_many_arguments = "allow" +assigning_clones = "allow" +match_same_arms = "allow" +manual_let_else = "allow" +ref_option = "allow" +similar_names = "allow" +items_after_statements = "allow" +unused_self = "allow" +trivially_copy_pass_by_ref = "allow" +struct_field_names = "allow" +struct_excessive_bools = "allow" +should_implement_trait = "allow" +option_map_unit_fn = "allow" +missing_fields_in_debug = "allow" +let_underscore_future = "allow" +missing_errors_doc = "allow" +missing_panics_doc = "allow" +module_name_repetitions = "allow" +must_use_candidate = "allow" +doc_markdown = "allow" + +[workspace.lints.rust] +unsafe_code = "deny" +unused_qualifications = "warn" +dead_code = "allow" diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000..c87d6e3 --- /dev/null +++ b/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2026 Jorge Alexander Taberoa Jimenez + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission 
notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/README.md b/README.md new file mode 100644 index 0000000..64049e8 --- /dev/null +++ b/README.md @@ -0,0 +1,372 @@ +
+<div align="center">
+
+# LoopForge
+
+A desktop orchestrator that turns any AI coding CLI into a long running, self verifying engineer.
+
+Plan. Atomize. Execute. Monitor.
+
+</div>
+ +> [!NOTE] +> LoopForge operationalises the Ralph loop pattern popularised by [Geoffrey Huntley](https://ghuntley.com/ralph) and gives it a visual control plane. Bring your own CLI agent, describe the work as atomic stories, then let the loop grind until every story passes verification. + +
+Table of contents + +1. [Why LoopForge](#why-loopforge) +2. [How the loop works](#how-the-loop-works) +3. [Features](#features) +4. [Demo](#demo) +5. [Quick start](#quick-start) +6. [Your first loop](#your-first-loop) +7. [Use cases](#use-cases) +8. [Supported providers](#supported-providers) +9. [Artifacts and file layout](#artifacts-and-file-layout) +10. [Configuration](#configuration) +11. [Build from source](#build-from-source) +12. [Architecture](#architecture) +13. [Roadmap](#roadmap) +14. [Troubleshooting](#troubleshooting) +15. [Contributing](#contributing) +16. [Security](#security) +17. [License and credits](#license-and-credits) + +
+
+## Why LoopForge
+
+Single shot prompts are fine for five minute changes. They fall over on the work that actually consumes engineering time: porting a legacy service from one language to another, regenerating a test suite, filling in months of missing documentation, or hardening a module until every boundary has coverage. Running a CLI agent once rarely finishes that work, and running it by hand in a terminal loop is tedious and opaque.
+
+LoopForge takes Ralph, the bash loop that Huntley described as "deterministically bad in an undeterministic world", and turns it into a desktop product:
+
+- A wizard captures the plan and atomises it into small, verifiable stories.
+- A loop engine runs one story per iteration, reloads the PRD every time, and keeps the filesystem plus git as the source of truth.
+- A monitor streams the live output of the chosen agent, together with iteration count, commits, failures and rate limit events.
+- Parallel work is isolated in git worktrees so multiple stories can land without stepping on each other.
+
+If Cursor and Claude Code are the cockpit for interactive coding, LoopForge is the autopilot for the long haul.
+
+## How the loop works
+
+```mermaid
+flowchart LR
+    A[Describe goal] --> B[Plan<br/>interactive session]
+    B --> C[Atomize<br/>one shot per section]
+    C --> D[Configure<br/>agent, model, tuning]
+    D --> E{Loop engine}
+    E -->|pending story| F[Build prompt<br/>PRD + guardrails + failure memory]
+    F --> G[Run agent in worktree]
+    G --> H{Verify}
+    H -->|pass| I[Mark story passed<br/>commit + merge]
+    H -->|fail| J[Append guardrail<br/>bump failure memory]
+    I --> E
+    J --> E
+    E -->|all stories passed<br/>or gutter threshold| K[Session ends]
+```
+
+At any iteration the loop honours three control files: `.ralph-pause` freezes work, `.ralph-done` ends the session cleanly, and `failure_memory.json` records how many times each story has failed so that a story hitting the gutter threshold is skipped and flagged for human review.
+
+## Features
+
+| Area | What you get |
+|---|---|
+| Wizard | Five step flow: Describe, Plan, Atomize, Configure, Launch. Each step persists to disk so a draft can be resumed after a reboot. |
+| Plan engine | Drives an interactive planning session with any supported agent and streams the agent output into `plan.md`. |
+| Atomizer | MiniJinja pipeline that chunks `plan.md`, asks the agent to emit strict JSON user stories, merges batches and writes `prd.json`. |
+| Loop engine | Runs the Ralph loop with configurable iteration cap, cooldown, stall timeout, verification retries and gutter threshold. |
+| Parallel stories | Ready stories that have no unresolved dependencies can run concurrently in isolated git worktrees. Successful branches are fast forward merged and torn down automatically. |
+| Failure memory | Keeps a rolling record of story failures; the next prompt carries the previous failure signal so the agent stops repeating the same mistake. |
+| Rate limit detection | Scans agent output for common throttle signatures (Claude, OpenAI, OpenRouter, Gemini) and pauses before retrying. |
+| Guardrails | `guardrails.md` accumulates durable "do not do this again" notes across iterations of a project. |
+| Monitor | Live terminal with xterm.js, progress tab with story state, activity log, raw output log and an Ask tab for quick side questions. |
+| Desktop integration | Tray icon, system notifications, dark mode, transparent window effects, auto updater wired to GitHub Releases. |
+| Bring your own agent | Works with any CLI binary that exits non zero on failure. No vendor lock in, no managed credentials.
| + +## Demo + +> [!TIP] +> A one minute video walkthrough is on the [Releases page](https://github.com/taberoajorge/loopforge/releases). For a text walkthrough of a real session, see [Your first loop](#your-first-loop). + +## Quick start + +### 1. Install an agent CLI + +LoopForge does not ship with a language model. Install at least one of the following and make sure it is on your `PATH`: + +| Agent | Install command | Docs | +|---|---|---| +| Claude Code | `npm i -g @anthropic-ai/claude-code` | [claude.com](https://www.claude.com/product/claude-code) | +| OpenAI Codex | `npm i -g @openai/codex` | [openai.com/codex](https://developers.openai.com/codex) | +| Cursor Agent | Installed with the [Cursor app](https://cursor.com/) | [docs.cursor.com](https://docs.cursor.com/) | +| Gemini CLI | `npm i -g @google/gemini-cli` | [geminicli.com](https://geminicli.com/) | +| OpenCode | `curl -fsSL https://opencode.ai/install \| bash` | [opencode.ai](https://opencode.ai/) | + +LoopForge detects installed agents at launch and shows them in the system readiness panel. + +### 2. Install LoopForge + +
+Download a release (recommended) + +Grab the latest installer for your platform from the [Releases page](https://github.com/taberoajorge/loopforge/releases). + +| Platform | Asset | +|---|---| +| macOS (Apple Silicon) | `LoopForge__aarch64.dmg` | +| macOS (Intel) | `LoopForge__x64.dmg` | +| Linux (Debian, Ubuntu) | `loopforge__amd64.deb` | +| Linux (AppImage) | `loopforge__amd64.AppImage` | +| Windows | `LoopForge__x64-setup.exe` | + +Open the installer and follow the prompts. Auto updates land through GitHub Releases. + +
+ +
+Run from source + +Prerequisites: + +- Rust 1.77 or newer (`rustup default stable`) +- Bun 1.3 or newer (`curl -fsSL https://bun.sh/install | bash`) +- Platform toolchain for Tauri (see the [Tauri prerequisites guide](https://tauri.app/start/prerequisites/)) + +Clone and launch: + +```bash +git clone https://github.com/taberoajorge/loopforge.git +cd loopforge +bun install +bun run tauri dev +``` + +The app opens at the splash window after the first compile. Subsequent launches are seconds. + +
+ +### 3. Verify the install + +Open LoopForge. The launch screen shows a System Readiness panel. Confirm: + +- Git is available. +- At least one agent is listed as available. +- The shell is detected. + +Dismiss the panel or refresh after installing a missing agent. + +## Your first loop + +1. **Describe**. Name the project, pick the working directory and write a short description of the outcome. LoopForge saves a draft immediately. +2. **Plan**. Choose an agent and start the planning session. The agent interviews you through the terminal pane and writes the conversation to `plan.md` until you accept the plan. +3. **Atomize**. LoopForge splits `plan.md` into sections, asks the agent to generate strict JSON stories per section, merges the batches and dedupes. Review the list, edit titles, acceptance criteria or dependencies, and confirm. +4. **Configure**. Pick the execution agent, model, effort level, iteration cap, cooldown, stall timeout, gutter threshold and optional test command. +5. **Launch**. LoopForge creates the canonical artifacts, starts the loop and switches to the Monitor. +6. **Monitor**. Watch stories change from pending to passed. Pause, resume or stop the loop. When every story passes, the session ends and the project moves to the Finished list. + +> [!IMPORTANT] +> Every iteration starts from a fresh context window. If a story depends on work from a previous iteration, encode that dependency in `dependsOn` in `prd.json`. The loop never carries conversational state across iterations, only durable artifacts. + +## Use cases + +- **Legacy migrations**. Port a Java backend to TypeScript one module at a time, with every story gated by the new test suite. This is how LoopForge was stress tested during development. +- **Large refactors**. Replace a homegrown module with a library, split a service, or migrate a test runner (for example Jest to Vitest), iterating until the build and the tests go green. +- **Test coverage backfill**. 
Feed LoopForge a coverage report and let it write missing tests until the line coverage threshold is met. +- **Documentation sweeps**. Regenerate or modernise the docs folder, one file per story, with a verification that runs `bun run typecheck` on code samples. +- **Greenfield spikes**. Take a PRD for a new CLI, service or app and let LoopForge scaffold, wire and verify each vertical slice. +- **Scripted dependency upgrades**. Codify the upgrade recipe, let the loop run through every touched file, and merge when CI passes. + +> [!WARNING] +> Ralph-style loops are powerful precisely because they will keep going. Always run them inside a sandbox, a container or a disposable branch. Set an iteration cap, a gutter threshold and a test command before you hit Launch. + +## Supported providers + +LoopForge detects provider binaries at launch through the `detect_agents` command. The loop engine also ships an `OpenRouter` provider for routing through a single account. + +| Provider | Binary detected | Supports plan | Supports execute | Notes | +|---|---|---|---|---| +| Claude Code | `claude` | yes | yes | Runs with `--dangerously-skip-permissions` for unattended sessions. | +| OpenAI Codex | `codex` | yes | yes | Honours Codex rate card limits. | +| Cursor Agent | `cursor-agent` | yes | yes | Requires a Cursor subscription. | +| Gemini CLI | `gemini` | yes | yes | Uses the authenticated Google account. | +| OpenCode | `opencode` | yes | yes | Best for cost-sensitive long runs. | +| OpenRouter | via API | no | yes (programmatic) | Routes through any model OpenRouter exposes. | + +Each provider uses the same `AgentResult` contract, so swapping one for another in `config.json` does not require code changes.
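The same artifact-driven approach applies to story dependencies: the `dependsOn` field mentioned in Your first loop lives in `prd.json`. A minimal sketch of two stories follows; `dependsOn` is the only field name this README confirms, and the other keys are illustrative:

```json
{
  "stories": [
    { "id": "story-001", "title": "Scaffold the settings page", "dependsOn": [] },
    {
      "id": "story-002",
      "title": "Wire the settings page to the backend",
      "acceptanceCriteria": ["bun run typecheck passes"],
      "dependsOn": ["story-001"]
    }
  ]
}
```

A story with an empty `dependsOn` is immediately ready, including for parallel worktree execution; `story-002` stays pending until `story-001` passes.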
+ +## Artifacts and file layout + +Per project directory: + +``` +/ + draft.json wizard snapshot, written during every wizard transition + plan.md plan conversation persisted by the plan engine + prd.json canonical PRD emitted by the atomizer, edited in the UI + config.json execution configuration written by the Configure step + prompt.md execution prompt written by the atomizer stage 4 + guardrails.md living document of constraints accumulated across iterations + .ralph_state runtime state, iteration counters and health metadata + .ralph-pause sentinel file, presence pauses the loop + .ralph-done sentinel file, presence ends the loop cleanly + failure_memory.json rolling log of recent failures per story + activity.log human readable activity feed consumed by the Monitor + error.log error channel for the loop engine + codex_output.log raw agent output for the Raw Output tab +``` + +LoopForge also stores project metadata and session state in a local SQLite database under the platform data directory (for example `~/Library/Application Support/com.loopforge.desktop/loopforge.db` on macOS). + +## Configuration + +The loop engine reads tuning values from `config.json`, with environment variable overrides available for advanced users. + +| Variable | Default | Purpose | +|---|---|---| +| `MAX_ITERATIONS` | `9999` | Hard cap on iterations before the session stops. | +| `RATE_LIMIT_WAIT` | `120` | Seconds to wait after a rate limit is detected. | +| `GUTTER_THRESHOLD` | `3` | Consecutive failures before a story is blocked. | +| `CODEX_STALL_TIMEOUT` | `300` | Seconds without output before the agent is treated as stalled. | +| `MAX_VERIFICATION_RETRIES` | `3` | Retries for the verification command after a story claims completion. | + +A minimal `config.json`: + +```json +{ + "project": { "working_directory": "." 
}, + "prd": { + "path": "prd.json", + "prompt": "prompt.md", + "guardrails": "guardrails.md", + "progress": "activity.log" + }, + "test": { "command": "bun run typecheck" } +} +``` + +## Build from source + +```bash +git clone https://github.com/taberoajorge/loopforge.git +cd loopforge +bun install +bun run tauri dev # dev with hot reload +bun run tauri build # release installer for the host platform +cargo test # Rust unit and integration tests +bun run test # Vitest unit tests +bun run typecheck # TypeScript type check +bun run e2e:smoke # WebdriverIO smoke test +``` + +
+Useful scripts + +| Command | Purpose | +|---|---| +| `bun run dev` | Vite dev server on port 1420, frontend only. | +| `bun run dev:tauri` | Vite dev server prepared for Tauri. | +| `bun run build` | Production build of the frontend. | +| `bun run lint` / `lint:fix` | Biome lint. | +| `bun run format` / `format:fix` | Biome formatter. | +| `bun run e2e` | Full WebdriverIO suite. | +| `cargo test --workspace` | Every Rust crate in the workspace. | + +
+ +## Architecture + +LoopForge uses a three layer hexagonal architecture with explicit boundaries. + +| Layer | Directory | Responsibility | +|---|---|---| +| Domain | `crates/ralph-core/src/` | Pure loop logic, providers, PRD, prompt builder, detection, verification, configuration. | +| Shell | `src-tauri/src/` | Tauri commands, event streams, SQLite storage, process lifecycle and filesystem adapters. | +| UI | `src/` | React 19 pages, features, stores and components built on Tailwind and Radix. | + +IPC uses `#[tauri::command]` for request and response, and `app.emit()` for streaming events such as agent output, iteration updates, story completions, story blocks and rate limit notices. + +For deeper references see [AGENTS.md](AGENTS.md) and [CLAUDE.md](CLAUDE.md), which document module boundaries, the artifact contract and the conventions every contribution follows. + +## Roadmap + +- [x] Plan, Atomize, Execute, Monitor vertical slice +- [x] Parallel stories through git worktrees with automatic merge and teardown +- [x] Failure memory and gutter threshold +- [x] Rate limit detection across major providers +- [x] Signed release workflow for macOS, Linux, Windows +- [ ] Cost and token usage tracking in the Monitor +- [ ] Shareable PRD bundles that can be imported into a new project +- [ ] Native MCP server so other tools can read and write LoopForge projects +- [ ] Plugin API for custom verification hooks +- [ ] Remote execution mode targeting a headless daemon + +## Troubleshooting + +
+The loop stops saying "services unavailable" + +The health checker could not reach an external service the project depends on. Update `config.json` with the right health URL or mark the service as `optional = true`. + +
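As a sketch only — the health schema is not documented in this README, so every key below is an assumption — a health entry in `config.json` might take a shape like:

```json
{
  "health": {
    "services": [
      { "name": "billing-api", "url": "http://localhost:8080/health", "optional": true }
    ]
  }
}
```

Marking a flaky dependency as optional keeps the loop running when that service is unreachable.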
+ +
+A story fails the same way three times and is blocked + +That is the gutter threshold doing its job. Open the story in the Atomize tab, tighten the acceptance criteria or split it into smaller stories, then relaunch. You can also raise `GUTTER_THRESHOLD` but only after understanding why the agent keeps failing. + +
+ +
+Parallel stories leave orphan branches + +A failed or rate-limited worker keeps its worktree for manual inspection. Inspect `/.loopforge/worktrees`, review the branch, then run `git worktree remove` if you do not need it.
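The cleanup is plain git. This sketch creates and then tears down a throwaway story worktree the same way you would after an inspection; the repository, branch name, and paths are illustrative, not LoopForge's actual layout:

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "init"
# simulate a leftover story worktree on its own branch
git worktree add -q ../story-wt -b story-001
git worktree list
# after reviewing the branch, remove the worktree and delete the branch
git worktree remove ../story-wt
git branch -D story-001
git worktree list
```

`git worktree remove` refuses to delete a dirty worktree unless you pass `--force`, which is a useful safety net when deciding whether a failed story's changes are worth salvaging.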
+ +
+The agent hangs for a long time with no output + +LoopForge kills stalled processes after `CODEX_STALL_TIMEOUT` seconds by default. If your model legitimately takes longer, bump the value in the Configure step. + +
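Most recovery paths above ultimately come down to the control files described at the top of this README. They are plain files in the project directory, so a stuck session can also be steered from any shell; this sketch uses a throwaway directory to show the file operations:

```shell
cd "$(mktemp -d)"
touch .ralph-pause   # presence pauses the loop
rm .ralph-pause      # removing it lets the loop continue
touch .ralph-done    # presence ends the session cleanly
ls -a
```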
+ +## Contributing + +Pull requests are welcome. Start with [CONTRIBUTING.md](CONTRIBUTING.md). The short version: + +- Follow the conventions in [AGENTS.md](AGENTS.md) and the scoped rules under `.cursor/rules/`. +- Keep every Rust and TypeScript file under 200 lines. +- No comments, no JSDoc, no single-character variables, no `eslint-disable`. +- Commit format is `type(scope): description` and commits are atomic. +- Run `cargo test`, `bun run typecheck` and `bun run test` before opening a PR. + +If you want to discuss a larger change first, open an issue labelled `proposal`. + +## Security + +LoopForge runs agent CLIs with your credentials on your machine. Treat the output as untrusted code until it is reviewed. + +> [!CAUTION] +> Always run LoopForge against a disposable branch or a sandboxed checkout. The loop commits, merges and deletes worktrees by design. + +To report a vulnerability, email the maintainer listed in [AGENTS.md](AGENTS.md) rather than opening a public issue. + +## License and credits + +LoopForge is released under the [MIT License](LICENSE). + +- The Ralph loop pattern was described by [Geoffrey Huntley](https://ghuntley.com/ralph) in 2025 and popularised under the "Ralph Wiggum" nickname. +- The three-layer architecture borrows from the Tauri community and the hexagonal pattern commonly used in Rust services. +- UI typography uses [Inter Variable](https://rsms.me/inter/) and the terminal is rendered by [xterm.js](https://xtermjs.org/). + +Stand on the shoulders of giants, then iterate. diff --git a/SECURITY.md b/SECURITY.md new file mode 100644 index 0000000..5d7761a --- /dev/null +++ b/SECURITY.md @@ -0,0 +1,43 @@ +# Security Policy + +## Supported Versions + +LoopForge is under active development. Only the latest release on the `main` branch receives security fixes.
+ +| Version | Supported | +| ------- | :-------: | +| `main` | yes | +| `< v0.1.0` | no | + +## Reporting a Vulnerability + +Please do not open a public issue for security problems. Report them privately so the fix can ship before the exploit spreads. + +1. Preferred channel is GitHub private vulnerability reporting. Open a report through the `Security` tab of this repository once the feature is enabled. +2. Alternative channel: send a short description, reproduction steps, and impact analysis to the maintainers through a direct message on GitHub. +3. You will receive an acknowledgment within 72 hours. A remediation plan is shared within 7 days once the issue is validated. + +Please avoid testing vulnerabilities against production deployments that you do not own. Never run exploits against third party services through LoopForge. + +## Scope + +The following areas receive priority: + +- Local execution of shell commands by the Ralph loop engine. +- IPC boundary between the Tauri backend and the frontend renderer. +- Storage of Ralph credentials, environment variables, and project metadata in SQLite and the operating system keychain. +- Supply chain integrity of Rust crates, npm packages, and GitHub Actions referenced in workflows. +- Agent providers that execute generated code on the developer machine. + +Out of scope: + +- Social engineering of maintainers or users. +- Attacks that require physical access to the developer machine. +- Vulnerabilities already tracked in public advisories for upstream dependencies when a fix is pending upstream. + +## Handling and disclosure + +- Fixes are developed on a private branch or a security advisory draft. +- A coordinated release with release notes is published once the patch lands on `main`. +- CVEs are requested through GitHub when the impact warrants it. +- Reporters are credited in the advisory unless they request anonymity. 
diff --git a/biome.json b/biome.json new file mode 100644 index 0000000..3310dfb --- /dev/null +++ b/biome.json @@ -0,0 +1,73 @@ +{ + "$schema": "https://biomejs.dev/schemas/2.4.11/schema.json", + "vcs": { + "enabled": true, + "clientKind": "git", + "useIgnoreFile": true + }, + "assist": { "actions": { "source": { "organizeImports": "on" } } }, + "formatter": { + "enabled": true, + "indentStyle": "space", + "indentWidth": 2, + "lineWidth": 100 + }, + "linter": { + "enabled": true, + "rules": { + "recommended": true, + "correctness": { + "noUnusedImports": "error", + "useExhaustiveDependencies": "warn", + "noUnusedFunctionParameters": "warn", + "noUnusedVariables": "warn" + }, + "complexity": { + "useOptionalChain": "warn" + }, + "style": { + "noNonNullAssertion": "warn", + "useConst": "error", + "useImportType": "error" + }, + "suspicious": { + "noConfusingVoidType": "warn", + "noArrayIndexKey": "warn", + "noExplicitAny": "warn", + "noConsole": "warn" + }, + "a11y": { + "useButtonType": "warn", + "noSvgWithoutTitle": "warn", + "useSemanticElements": "warn", + "useAriaPropsForRole": "warn", + "noStaticElementInteractions": "warn", + "noNoninteractiveTabindex": "warn", + "noLabelWithoutControl": "warn" + }, + "nursery": { + "useSortedClasses": { + "level": "warn", + "options": { + "attributes": ["class", "className"], + "functions": ["clsx", "cva", "cn", "twMerge"] + } + } + } + } + }, + "javascript": { + "formatter": { + "quoteStyle": "double", + "semicolons": "always" + } + }, + "css": { + "parser": { + "tailwindDirectives": true + } + }, + "files": { + "includes": ["**", "!**/dist/", "!**/node_modules/", "!**/src-tauri/", "!**/e2e/"] + } +} diff --git a/bun.lock b/bun.lock index b2bab4e..ee31a25 100644 --- a/bun.lock +++ b/bun.lock @@ -13,7 +13,7 @@ "@radix-ui/react-select": "^2.2.6", "@radix-ui/react-tooltip": "^1.2.8", "@tauri-apps/api": "^2", - "@tauri-apps/plugin-dialog": "^2.6.0", + "@tauri-apps/plugin-dialog": "2.6.0", "@tauri-apps/plugin-notification": 
"^2.3.3", "@xterm/addon-fit": "^0.11.0", "@xterm/xterm": "^6.0.0", @@ -31,6 +31,7 @@ "zustand": "^5", }, "devDependencies": { + "@biomejs/biome": "2.4.11", "@tailwindcss/vite": "^4", "@tauri-apps/cli": "^2", "@testing-library/jest-dom": "^6.8.0", @@ -48,6 +49,7 @@ "@wdio/spec-reporter": "^9.27.0", "expect-webdriverio": "^5.6.5", "jsdom": "^26.1.0", + "lefthook": "^2.1.5", "shadcn": "^4.1.2", "tailwindcss": "^4", "typescript": "^5", @@ -123,6 +125,24 @@ "@babel/types": ["@babel/types@7.29.0", "", { "dependencies": { "@babel/helper-string-parser": "^7.27.1", "@babel/helper-validator-identifier": "^7.28.5" } }, "sha512-LwdZHpScM4Qz8Xw2iKSzS+cfglZzJGvofQICy7W7v4caru4EaAmyUuO6BGrbyQ2mYV11W0U8j5mBhd14dd3B0A=="], + "@biomejs/biome": ["@biomejs/biome@2.4.11", "", { "optionalDependencies": { "@biomejs/cli-darwin-arm64": "2.4.11", "@biomejs/cli-darwin-x64": "2.4.11", "@biomejs/cli-linux-arm64": "2.4.11", "@biomejs/cli-linux-arm64-musl": "2.4.11", "@biomejs/cli-linux-x64": "2.4.11", "@biomejs/cli-linux-x64-musl": "2.4.11", "@biomejs/cli-win32-arm64": "2.4.11", "@biomejs/cli-win32-x64": "2.4.11" }, "bin": { "biome": "bin/biome" } }, "sha512-nWxHX8tf3Opb/qRgZpBbsTOqOodkbrkJ7S+JxJAruxOReaDPPmPuLBAGQ8vigyUgo0QBB+oQltNEAvalLcjggA=="], + + "@biomejs/cli-darwin-arm64": ["@biomejs/cli-darwin-arm64@2.4.11", "", { "os": "darwin", "cpu": "arm64" }, "sha512-wOt+ed+L2dgZanWyL6i29qlXMc088N11optzpo10peayObBaAshbTcxKUchzEMp9QSY8rh5h6VfAFE3WTS1rqg=="], + + "@biomejs/cli-darwin-x64": ["@biomejs/cli-darwin-x64@2.4.11", "", { "os": "darwin", "cpu": "x64" }, "sha512-gZ6zR8XmZlExfi/Pz/PffmdpWOQ8Qhy7oBztgkR8/ylSRyLwfRPSadmiVCV8WQ8PoJ2MWUy2fgID9zmtgUUJmw=="], + + "@biomejs/cli-linux-arm64": ["@biomejs/cli-linux-arm64@2.4.11", "", { "os": "linux", "cpu": "arm64" }, "sha512-avdJaEElXrKceK0va9FkJ4P5ci3N01TGkc6ni3P8l3BElqbOz42Wg2IyX3gbh0ZLEd4HVKEIrmuVu/AMuSeFFA=="], + + "@biomejs/cli-linux-arm64-musl": ["@biomejs/cli-linux-arm64-musl@2.4.11", "", { "os": "linux", "cpu": "arm64" }, 
"sha512-+Sbo1OAmlegtdwqFE8iOxFIWLh1B3OEgsuZfBpyyN/kWuqZ8dx9ZEes6zVnDMo+zRHF2wLynRVhoQmV7ohxl2Q=="], + + "@biomejs/cli-linux-x64": ["@biomejs/cli-linux-x64@2.4.11", "", { "os": "linux", "cpu": "x64" }, "sha512-TagWV0iomp5LnEnxWFg4nQO+e52Fow349vaX0Q/PIcX6Zhk4GGBgp3qqZ8PVkpC+cuehRctMf3+6+FgQ8jCEFQ=="], + + "@biomejs/cli-linux-x64-musl": ["@biomejs/cli-linux-x64-musl@2.4.11", "", { "os": "linux", "cpu": "x64" }, "sha512-bexd2IklK7ZgPhrz6jXzpIL6dEAH9MlJU1xGTrypx+FICxrXUp4CqtwfiuoDKse+UlgAlWtzML3jrMqeEAHEhA=="], + + "@biomejs/cli-win32-arm64": ["@biomejs/cli-win32-arm64@2.4.11", "", { "os": "win32", "cpu": "arm64" }, "sha512-RJhaTnY8byzxDt4bDVb7AFPHkPcjOPK3xBip4ZRTrN3TEfyhjLRm3r3mqknqydgVTB74XG8l4jMLwEACEeihVg=="], + + "@biomejs/cli-win32-x64": ["@biomejs/cli-win32-x64@2.4.11", "", { "os": "win32", "cpu": "x64" }, "sha512-A8D3JM/00C2KQgUV3oj8Ba15EHEYwebAGCy5Sf9GAjr5Y3+kJIYOiESoqRDeuRZueuMdCsbLZIUqmPhpYXJE9A=="], + "@csstools/color-helpers": ["@csstools/color-helpers@5.1.0", "", {}, "sha512-S11EXWJyy0Mz5SYvRmY8nJYTFFd1LCNV+7cXyAgQtOOuzb4EsgfqDufL+9esx72/eLhsRdGZwaldu/h+E4t4BA=="], "@csstools/css-calc": ["@csstools/css-calc@2.1.4", "", { "peerDependencies": { "@csstools/css-parser-algorithms": "^3.0.5", "@csstools/css-tokenizer": "^3.0.4" } }, "sha512-3N8oaj+0juUw/1H3YwmDDJXCgTB1gKU6Hc/bB502u9zR0q2vd786XJH9QfrKIEgFlZmhZiq6epXl4rHqhzsIgQ=="], @@ -133,7 +153,7 @@ "@csstools/css-tokenizer": ["@csstools/css-tokenizer@3.0.4", "", {}, "sha512-Vd/9EVDiu6PPJt9yAh6roZP6El1xHrdvIVGjyBsHR0RYwNHgL7FJPyIIW4fANJNG6FtyZfvlRPpFI4ZM/lubvw=="], - "@dotenvx/dotenvx": ["@dotenvx/dotenvx@1.59.1", "", { "dependencies": { "commander": "^11.1.0", "dotenv": "^17.2.1", "eciesjs": "^0.4.10", "execa": "^5.1.1", "fdir": "^6.2.0", "ignore": "^5.3.0", "object-treeify": "1.1.33", "picomatch": "^4.0.2", "which": "^4.0.0" }, "bin": { "dotenvx": "src/cli/dotenvx.js" } }, "sha512-Qg+meC+XFxliuVSDlEPkKnaUjdaJKK6FNx/Wwl2UxhQR8pyPIuLhMavsF7ePdB9qFZUWV1jEK3ckbJir/WmF4w=="], + "@dotenvx/dotenvx": 
["@dotenvx/dotenvx@1.61.0", "", { "dependencies": { "commander": "^11.1.0", "dotenv": "^17.2.1", "eciesjs": "^0.4.10", "execa": "^5.1.1", "fdir": "^6.2.0", "ignore": "^5.3.0", "object-treeify": "1.1.33", "picomatch": "^4.0.2", "which": "^4.0.0", "yocto-spinner": "^1.1.0" }, "bin": { "dotenvx": "src/cli/dotenvx.js" } }, "sha512-utL3cpZoFzflyqUkjYbxYujI6STBTmO5LFn4bbin/NZnRWN6wQ7eErhr3/Vpa5h/jicPFC6kTa42r940mQftJQ=="], "@ecies/ciphers": ["@ecies/ciphers@0.2.6", "", { "peerDependencies": { "@noble/ciphers": "^1.0.0" } }, "sha512-patgsRPKGkhhoBjETV4XxD0En4ui5fbX0hzayqI3M8tvNMGUoUvmyYAIWwlxBc1KX5cturfqByYdj5bYGRpN9g=="], @@ -199,7 +219,7 @@ "@fontsource-variable/inter": ["@fontsource-variable/inter@5.2.8", "", {}, "sha512-kOfP2D+ykbcX/P3IFnokOhVRNoTozo5/JxhAIVYLpea/UBmCQ/YWPBfWIDuBImXX/15KH+eKh4xpEUyS2sQQGQ=="], - "@hono/node-server": ["@hono/node-server@1.19.12", "", { "peerDependencies": { "hono": "^4" } }, "sha512-txsUW4SQ1iilgE0l9/e9VQWmELXifEFvmdA1j6WFh/aFPj99hIntrSsq/if0UWyGVkmrRPKA1wCeP+UCr1B9Uw=="], + "@hono/node-server": ["@hono/node-server@1.19.13", "", { "peerDependencies": { "hono": "^4" } }, "sha512-TsQLe4i2gvoTtrHje625ngThGBySOgSK3Xo2XRYOdqGN1teR8+I7vchQC46uLJi8OF62YTYA3AhSpumtkhsaKQ=="], "@inquirer/ansi": ["@inquirer/ansi@1.0.2", "", {}, "sha512-S8qNSZiYzFd0wAcyG5AXCvUHC5Sr7xpZ9wZ2py9XR88jUz8wooStVx5M6dRzczbBWjic9NP7+rY0Xi7qqK/aMQ=="], @@ -355,55 +375,55 @@ "@rolldown/pluginutils": ["@rolldown/pluginutils@1.0.0-beta.27", "", {}, "sha512-+d0F4MKMCbeVUJwG96uQ4SgAznZNSq93I3V+9NHA4OpvqG8mRCpGdKmK8l/dl02h2CCDHwW2FqilnTyDcAnqjA=="], - "@rollup/rollup-android-arm-eabi": ["@rollup/rollup-android-arm-eabi@4.60.0", "", { "os": "android", "cpu": "arm" }, "sha512-WOhNW9K8bR3kf4zLxbfg6Pxu2ybOUbB2AjMDHSQx86LIF4rH4Ft7vmMwNt0loO0eonglSNy4cpD3MKXXKQu0/A=="], + "@rollup/rollup-android-arm-eabi": ["@rollup/rollup-android-arm-eabi@4.60.1", "", { "os": "android", "cpu": "arm" }, 
"sha512-d6FinEBLdIiK+1uACUttJKfgZREXrF0Qc2SmLII7W2AD8FfiZ9Wjd+rD/iRuf5s5dWrr1GgwXCvPqOuDquOowA=="], - "@rollup/rollup-android-arm64": ["@rollup/rollup-android-arm64@4.60.0", "", { "os": "android", "cpu": "arm64" }, "sha512-u6JHLll5QKRvjciE78bQXDmqRqNs5M/3GVqZeMwvmjaNODJih/WIrJlFVEihvV0MiYFmd+ZyPr9wxOVbPAG2Iw=="], + "@rollup/rollup-android-arm64": ["@rollup/rollup-android-arm64@4.60.1", "", { "os": "android", "cpu": "arm64" }, "sha512-YjG/EwIDvvYI1YvYbHvDz/BYHtkY4ygUIXHnTdLhG+hKIQFBiosfWiACWortsKPKU/+dUwQQCKQM3qrDe8c9BA=="], - "@rollup/rollup-darwin-arm64": ["@rollup/rollup-darwin-arm64@4.60.0", "", { "os": "darwin", "cpu": "arm64" }, "sha512-qEF7CsKKzSRc20Ciu2Zw1wRrBz4g56F7r/vRwY430UPp/nt1x21Q/fpJ9N5l47WWvJlkNCPJz3QRVw008fi7yA=="], + "@rollup/rollup-darwin-arm64": ["@rollup/rollup-darwin-arm64@4.60.1", "", { "os": "darwin", "cpu": "arm64" }, "sha512-mjCpF7GmkRtSJwon+Rq1N8+pI+8l7w5g9Z3vWj4T7abguC4Czwi3Yu/pFaLvA3TTeMVjnu3ctigusqWUfjZzvw=="], - "@rollup/rollup-darwin-x64": ["@rollup/rollup-darwin-x64@4.60.0", "", { "os": "darwin", "cpu": "x64" }, "sha512-WADYozJ4QCnXCH4wPB+3FuGmDPoFseVCUrANmA5LWwGmC6FL14BWC7pcq+FstOZv3baGX65tZ378uT6WG8ynTw=="], + "@rollup/rollup-darwin-x64": ["@rollup/rollup-darwin-x64@4.60.1", "", { "os": "darwin", "cpu": "x64" }, "sha512-haZ7hJ1JT4e9hqkoT9R/19XW2QKqjfJVv+i5AGg57S+nLk9lQnJ1F/eZloRO3o9Scy9CM3wQ9l+dkXtcBgN5Ew=="], - "@rollup/rollup-freebsd-arm64": ["@rollup/rollup-freebsd-arm64@4.60.0", "", { "os": "freebsd", "cpu": "arm64" }, "sha512-6b8wGHJlDrGeSE3aH5mGNHBjA0TTkxdoNHik5EkvPHCt351XnigA4pS7Wsj/Eo9Y8RBU6f35cjN9SYmCFBtzxw=="], + "@rollup/rollup-freebsd-arm64": ["@rollup/rollup-freebsd-arm64@4.60.1", "", { "os": "freebsd", "cpu": "arm64" }, "sha512-czw90wpQq3ZsAVBlinZjAYTKduOjTywlG7fEeWKUA7oCmpA8xdTkxZZlwNJKWqILlq0wehoZcJYfBvOyhPTQ6w=="], - "@rollup/rollup-freebsd-x64": ["@rollup/rollup-freebsd-x64@4.60.0", "", { "os": "freebsd", "cpu": "x64" }, 
"sha512-h25Ga0t4jaylMB8M/JKAyrvvfxGRjnPQIR8lnCayyzEjEOx2EJIlIiMbhpWxDRKGKF8jbNH01NnN663dH638mA=="], + "@rollup/rollup-freebsd-x64": ["@rollup/rollup-freebsd-x64@4.60.1", "", { "os": "freebsd", "cpu": "x64" }, "sha512-KVB2rqsxTHuBtfOeySEyzEOB7ltlB/ux38iu2rBQzkjbwRVlkhAGIEDiiYnO2kFOkJp+Z7pUXKyrRRFuFUKt+g=="], - "@rollup/rollup-linux-arm-gnueabihf": ["@rollup/rollup-linux-arm-gnueabihf@4.60.0", "", { "os": "linux", "cpu": "arm" }, "sha512-RzeBwv0B3qtVBWtcuABtSuCzToo2IEAIQrcyB/b2zMvBWVbjo8bZDjACUpnaafaxhTw2W+imQbP2BD1usasK4g=="], + "@rollup/rollup-linux-arm-gnueabihf": ["@rollup/rollup-linux-arm-gnueabihf@4.60.1", "", { "os": "linux", "cpu": "arm" }, "sha512-L+34Qqil+v5uC0zEubW7uByo78WOCIrBvci69E7sFASRl0X7b/MB6Cqd1lky/CtcSVTydWa2WZwFuWexjS5o6g=="], - "@rollup/rollup-linux-arm-musleabihf": ["@rollup/rollup-linux-arm-musleabihf@4.60.0", "", { "os": "linux", "cpu": "arm" }, "sha512-Sf7zusNI2CIU1HLzuu9Tc5YGAHEZs5Lu7N1ssJG4Tkw6e0MEsN7NdjUDDfGNHy2IU+ENyWT+L2obgWiguWibWQ=="], + "@rollup/rollup-linux-arm-musleabihf": ["@rollup/rollup-linux-arm-musleabihf@4.60.1", "", { "os": "linux", "cpu": "arm" }, "sha512-n83O8rt4v34hgFzlkb1ycniJh7IR5RCIqt6mz1VRJD6pmhRi0CXdmfnLu9dIUS6buzh60IvACM842Ffb3xd6Gg=="], - "@rollup/rollup-linux-arm64-gnu": ["@rollup/rollup-linux-arm64-gnu@4.60.0", "", { "os": "linux", "cpu": "arm64" }, "sha512-DX2x7CMcrJzsE91q7/O02IJQ5/aLkVtYFryqCjduJhUfGKG6yJV8hxaw8pZa93lLEpPTP/ohdN4wFz7yp/ry9A=="], + "@rollup/rollup-linux-arm64-gnu": ["@rollup/rollup-linux-arm64-gnu@4.60.1", "", { "os": "linux", "cpu": "arm64" }, "sha512-Nql7sTeAzhTAja3QXeAI48+/+GjBJ+QmAH13snn0AJSNL50JsDqotyudHyMbO2RbJkskbMbFJfIJKWA6R1LCJQ=="], - "@rollup/rollup-linux-arm64-musl": ["@rollup/rollup-linux-arm64-musl@4.60.0", "", { "os": "linux", "cpu": "arm64" }, "sha512-09EL+yFVbJZlhcQfShpswwRZ0Rg+z/CsSELFCnPt3iK+iqwGsI4zht3secj5vLEs957QvFFXnzAT0FFPIxSrkQ=="], + "@rollup/rollup-linux-arm64-musl": ["@rollup/rollup-linux-arm64-musl@4.60.1", "", { "os": "linux", "cpu": "arm64" }, 
"sha512-+pUymDhd0ys9GcKZPPWlFiZ67sTWV5UU6zOJat02M1+PiuSGDziyRuI/pPue3hoUwm2uGfxdL+trT6Z9rxnlMA=="], - "@rollup/rollup-linux-loong64-gnu": ["@rollup/rollup-linux-loong64-gnu@4.60.0", "", { "os": "linux", "cpu": "none" }, "sha512-i9IcCMPr3EXm8EQg5jnja0Zyc1iFxJjZWlb4wr7U2Wx/GrddOuEafxRdMPRYVaXjgbhvqalp6np07hN1w9kAKw=="], + "@rollup/rollup-linux-loong64-gnu": ["@rollup/rollup-linux-loong64-gnu@4.60.1", "", { "os": "linux", "cpu": "none" }, "sha512-VSvgvQeIcsEvY4bKDHEDWcpW4Yw7BtlKG1GUT4FzBUlEKQK0rWHYBqQt6Fm2taXS+1bXvJT6kICu5ZwqKCnvlQ=="], - "@rollup/rollup-linux-loong64-musl": ["@rollup/rollup-linux-loong64-musl@4.60.0", "", { "os": "linux", "cpu": "none" }, "sha512-DGzdJK9kyJ+B78MCkWeGnpXJ91tK/iKA6HwHxF4TAlPIY7GXEvMe8hBFRgdrR9Ly4qebR/7gfUs9y2IoaVEyog=="], + "@rollup/rollup-linux-loong64-musl": ["@rollup/rollup-linux-loong64-musl@4.60.1", "", { "os": "linux", "cpu": "none" }, "sha512-4LqhUomJqwe641gsPp6xLfhqWMbQV04KtPp7/dIp0nzPxAkNY1AbwL5W0MQpcalLYk07vaW9Kp1PBhdpZYYcEw=="], - "@rollup/rollup-linux-ppc64-gnu": ["@rollup/rollup-linux-ppc64-gnu@4.60.0", "", { "os": "linux", "cpu": "ppc64" }, "sha512-RwpnLsqC8qbS8z1H1AxBA1H6qknR4YpPR9w2XX0vo2Sz10miu57PkNcnHVaZkbqyw/kUWfKMI73jhmfi9BRMUQ=="], + "@rollup/rollup-linux-ppc64-gnu": ["@rollup/rollup-linux-ppc64-gnu@4.60.1", "", { "os": "linux", "cpu": "ppc64" }, "sha512-tLQQ9aPvkBxOc/EUT6j3pyeMD6Hb8QF2BTBnCQWP/uu1lhc9AIrIjKnLYMEroIz/JvtGYgI9dF3AxHZNaEH0rw=="], - "@rollup/rollup-linux-ppc64-musl": ["@rollup/rollup-linux-ppc64-musl@4.60.0", "", { "os": "linux", "cpu": "ppc64" }, "sha512-Z8pPf54Ly3aqtdWC3G4rFigZgNvd+qJlOE52fmko3KST9SoGfAdSRCwyoyG05q1HrrAblLbk1/PSIV+80/pxLg=="], + "@rollup/rollup-linux-ppc64-musl": ["@rollup/rollup-linux-ppc64-musl@4.60.1", "", { "os": "linux", "cpu": "ppc64" }, "sha512-RMxFhJwc9fSXP6PqmAz4cbv3kAyvD1etJFjTx4ONqFP9DkTkXsAMU4v3Vyc5BgzC+anz7nS/9tp4obsKfqkDHg=="], - "@rollup/rollup-linux-riscv64-gnu": ["@rollup/rollup-linux-riscv64-gnu@4.60.0", "", { "os": "linux", "cpu": "none" }, 
"sha512-3a3qQustp3COCGvnP4SvrMHnPQ9d1vzCakQVRTliaz8cIp/wULGjiGpbcqrkv0WrHTEp8bQD/B3HBjzujVWLOA=="], + "@rollup/rollup-linux-riscv64-gnu": ["@rollup/rollup-linux-riscv64-gnu@4.60.1", "", { "os": "linux", "cpu": "none" }, "sha512-QKgFl+Yc1eEk6MmOBfRHYF6lTxiiiV3/z/BRrbSiW2I7AFTXoBFvdMEyglohPj//2mZS4hDOqeB0H1ACh3sBbg=="], - "@rollup/rollup-linux-riscv64-musl": ["@rollup/rollup-linux-riscv64-musl@4.60.0", "", { "os": "linux", "cpu": "none" }, "sha512-pjZDsVH/1VsghMJ2/kAaxt6dL0psT6ZexQVrijczOf+PeP2BUqTHYejk3l6TlPRydggINOeNRhvpLa0AYpCWSQ=="], + "@rollup/rollup-linux-riscv64-musl": ["@rollup/rollup-linux-riscv64-musl@4.60.1", "", { "os": "linux", "cpu": "none" }, "sha512-RAjXjP/8c6ZtzatZcA1RaQr6O1TRhzC+adn8YZDnChliZHviqIjmvFwHcxi4JKPSDAt6Uhf/7vqcBzQJy0PDJg=="], - "@rollup/rollup-linux-s390x-gnu": ["@rollup/rollup-linux-s390x-gnu@4.60.0", "", { "os": "linux", "cpu": "s390x" }, "sha512-3ObQs0BhvPgiUVZrN7gqCSvmFuMWvWvsjG5ayJ3Lraqv+2KhOsp+pUbigqbeWqueGIsnn+09HBw27rJ+gYK4VQ=="], + "@rollup/rollup-linux-s390x-gnu": ["@rollup/rollup-linux-s390x-gnu@4.60.1", "", { "os": "linux", "cpu": "s390x" }, "sha512-wcuocpaOlaL1COBYiA89O6yfjlp3RwKDeTIA0hM7OpmhR1Bjo9j31G1uQVpDlTvwxGn2nQs65fBFL5UFd76FcQ=="], - "@rollup/rollup-linux-x64-gnu": ["@rollup/rollup-linux-x64-gnu@4.60.0", "", { "os": "linux", "cpu": "x64" }, "sha512-EtylprDtQPdS5rXvAayrNDYoJhIz1/vzN2fEubo3yLE7tfAw+948dO0g4M0vkTVFhKojnF+n6C8bDNe+gDRdTg=="], + "@rollup/rollup-linux-x64-gnu": ["@rollup/rollup-linux-x64-gnu@4.60.1", "", { "os": "linux", "cpu": "x64" }, "sha512-77PpsFQUCOiZR9+LQEFg9GClyfkNXj1MP6wRnzYs0EeWbPcHs02AXu4xuUbM1zhwn3wqaizle3AEYg5aeoohhg=="], - "@rollup/rollup-linux-x64-musl": ["@rollup/rollup-linux-x64-musl@4.60.0", "", { "os": "linux", "cpu": "x64" }, "sha512-k09oiRCi/bHU9UVFqD17r3eJR9bn03TyKraCrlz5ULFJGdJGi7VOmm9jl44vOJvRJ6P7WuBi/s2A97LxxHGIdw=="], + "@rollup/rollup-linux-x64-musl": ["@rollup/rollup-linux-x64-musl@4.60.1", "", { "os": "linux", "cpu": "x64" }, 
"sha512-5cIATbk5vynAjqqmyBjlciMJl1+R/CwX9oLk/EyiFXDWd95KpHdrOJT//rnUl4cUcskrd0jCCw3wpZnhIHdD9w=="], - "@rollup/rollup-openbsd-x64": ["@rollup/rollup-openbsd-x64@4.60.0", "", { "os": "openbsd", "cpu": "x64" }, "sha512-1o/0/pIhozoSaDJoDcec+IVLbnRtQmHwPV730+AOD29lHEEo4F5BEUB24H0OBdhbBBDwIOSuf7vgg0Ywxdfiiw=="], + "@rollup/rollup-openbsd-x64": ["@rollup/rollup-openbsd-x64@4.60.1", "", { "os": "openbsd", "cpu": "x64" }, "sha512-cl0w09WsCi17mcmWqqglez9Gk8isgeWvoUZ3WiJFYSR3zjBQc2J5/ihSjpl+VLjPqjQ/1hJRcqBfLjssREQILw=="], - "@rollup/rollup-openharmony-arm64": ["@rollup/rollup-openharmony-arm64@4.60.0", "", { "os": "none", "cpu": "arm64" }, "sha512-pESDkos/PDzYwtyzB5p/UoNU/8fJo68vcXM9ZW2V0kjYayj1KaaUfi1NmTUTUpMn4UhU4gTuK8gIaFO4UGuMbA=="], + "@rollup/rollup-openharmony-arm64": ["@rollup/rollup-openharmony-arm64@4.60.1", "", { "os": "none", "cpu": "arm64" }, "sha512-4Cv23ZrONRbNtbZa37mLSueXUCtN7MXccChtKpUnQNgF010rjrjfHx3QxkS2PI7LqGT5xXyYs1a7LbzAwT0iCA=="], - "@rollup/rollup-win32-arm64-msvc": ["@rollup/rollup-win32-arm64-msvc@4.60.0", "", { "os": "win32", "cpu": "arm64" }, "sha512-hj1wFStD7B1YBeYmvY+lWXZ7ey73YGPcViMShYikqKT1GtstIKQAtfUI6yrzPjAy/O7pO0VLXGmUVWXQMaYgTQ=="], + "@rollup/rollup-win32-arm64-msvc": ["@rollup/rollup-win32-arm64-msvc@4.60.1", "", { "os": "win32", "cpu": "arm64" }, "sha512-i1okWYkA4FJICtr7KpYzFpRTHgy5jdDbZiWfvny21iIKky5YExiDXP+zbXzm3dUcFpkEeYNHgQ5fuG236JPq0g=="], - "@rollup/rollup-win32-ia32-msvc": ["@rollup/rollup-win32-ia32-msvc@4.60.0", "", { "os": "win32", "cpu": "ia32" }, "sha512-SyaIPFoxmUPlNDq5EHkTbiKzmSEmq/gOYFI/3HHJ8iS/v1mbugVa7dXUzcJGQfoytp9DJFLhHH4U3/eTy2Bq4w=="], + "@rollup/rollup-win32-ia32-msvc": ["@rollup/rollup-win32-ia32-msvc@4.60.1", "", { "os": "win32", "cpu": "ia32" }, "sha512-u09m3CuwLzShA0EYKMNiFgcjjzwqtUMLmuCJLeZWjjOYA3IT2Di09KaxGBTP9xVztWyIWjVdsB2E9goMjZvTQg=="], - "@rollup/rollup-win32-x64-gnu": ["@rollup/rollup-win32-x64-gnu@4.60.0", "", { "os": "win32", "cpu": "x64" }, 
"sha512-RdcryEfzZr+lAr5kRm2ucN9aVlCCa2QNq4hXelZxb8GG0NJSazq44Z3PCCc8wISRuCVnGs0lQJVX5Vp6fKA+IA=="], + "@rollup/rollup-win32-x64-gnu": ["@rollup/rollup-win32-x64-gnu@4.60.1", "", { "os": "win32", "cpu": "x64" }, "sha512-k+600V9Zl1CM7eZxJgMyTUzmrmhB/0XZnF4pRypKAlAgxmedUA+1v9R+XOFv56W4SlHEzfeMtzujLJD22Uz5zg=="], - "@rollup/rollup-win32-x64-msvc": ["@rollup/rollup-win32-x64-msvc@4.60.0", "", { "os": "win32", "cpu": "x64" }, "sha512-PrsWNQ8BuE00O3Xsx3ALh2Df8fAj9+cvvX9AIA6o4KpATR98c9mud4XtDWVvsEuyia5U4tVSTKygawyJkjm60w=="], + "@rollup/rollup-win32-x64-msvc": ["@rollup/rollup-win32-x64-msvc@4.60.1", "", { "os": "win32", "cpu": "x64" }, "sha512-lWMnixq/QzxyhTV6NjQJ4SFo1J6PvOX8vUx5Wb4bBPsEb+8xZ89Bz6kOXpfXj9ak9AHTQVQzlgzBEc1SyM27xQ=="], "@sec-ant/readable-stream": ["@sec-ant/readable-stream@0.4.1", "", {}, "sha512-831qok9r2t8AlxLko40y2ebgSDhenenCatLVeW/uBtnHPyhHOvG0C7TvfgecV+wHzIm5KUICgzmVpWS+IMEAeg=="], @@ -655,7 +675,7 @@ "base64-js": ["base64-js@1.5.1", "", {}, "sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA=="], - "baseline-browser-mapping": ["baseline-browser-mapping@2.10.10", "", { "bin": { "baseline-browser-mapping": "dist/cli.cjs" } }, "sha512-sUoJ3IMxx4AyRqO4MLeHlnGDkyXRoUG0/AI9fjK+vS72ekpV0yWVY7O0BVjmBcRtkNcsAO2QDZ4tdKKGoI6YaQ=="], + "baseline-browser-mapping": ["baseline-browser-mapping@2.10.18", "", { "bin": { "baseline-browser-mapping": "dist/cli.cjs" } }, "sha512-VSnGQAOLtP5mib/DPyg2/t+Tlv65NTBz83BJBJvmLVHHuKJVaDOBvJJykiT5TR++em5nfAySPccDZDa4oSrn8A=="], "basic-ftp": ["basic-ftp@5.2.2", "", {}, "sha512-1tDrzKsdCg70WGvbFss/ulVAxupNauGnOlgpyjKzeQxzyllBLS0CGLV7tjIXTK3ZQA9/FBEm9qyFFN1bciA6pw=="], @@ -665,13 +685,13 @@ "boolbase": ["boolbase@1.0.0", "", {}, "sha512-JZOSA7Mo9sNGB8+UjSgzdLtokWAky1zbztM3WRLCbZ70/3cTANmQmOdR7y2g+J0e2WXywy1yS468tY+IruqEww=="], - "brace-expansion": ["brace-expansion@2.0.3", "", { "dependencies": { "balanced-match": "^1.0.0" } }, 
"sha512-MCV/fYJEbqx68aE58kv2cA/kiky1G8vux3OR6/jbS+jIMe/6fJWa0DTzJU7dqijOWYwHi1t29FlfYI9uytqlpA=="], + "brace-expansion": ["brace-expansion@2.1.0", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-TN1kCZAgdgweJhWWpgKYrQaMNHcDULHkWwQIspdtjV4Y5aurRdZpjAqn6yX3FPqTA9ngHCc4hJxMAMgGfve85w=="], "braces": ["braces@3.0.3", "", { "dependencies": { "fill-range": "^7.1.1" } }, "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA=="], "browser-stdout": ["browser-stdout@1.3.1", "", {}, "sha512-qhAVI1+Av2X7qelOfAIYwXONood6XlZE/fXaBSmW/T5SzLAmCgzi+eiWE7fUvbHaeNBQH13UftjpXxsfLkMpgw=="], - "browserslist": ["browserslist@4.28.1", "", { "dependencies": { "baseline-browser-mapping": "^2.9.0", "caniuse-lite": "^1.0.30001759", "electron-to-chromium": "^1.5.263", "node-releases": "^2.0.27", "update-browserslist-db": "^1.2.0" }, "bin": { "browserslist": "cli.js" } }, "sha512-ZC5Bd0LgJXgwGqUknZY/vkUQ04r8NXnJZ3yYi4vDmSiZmC/pdSN0NbNRPxZpbtO4uAfDUAFffO8IZoM3Gj8IkA=="], + "browserslist": ["browserslist@4.28.2", "", { "dependencies": { "baseline-browser-mapping": "^2.10.12", "caniuse-lite": "^1.0.30001782", "electron-to-chromium": "^1.5.328", "node-releases": "^2.0.36", "update-browserslist-db": "^1.2.3" }, "bin": { "browserslist": "cli.js" } }, "sha512-48xSriZYYg+8qXna9kwqjIVzuQxi+KYWp2+5nCYnYKPTr0LvD89Jqk2Or5ogxz0NUMfIjhh2lIUX/LyX9B4oIg=="], "buffer": ["buffer@6.0.3", "", { "dependencies": { "base64-js": "^1.3.1", "ieee754": "^1.2.1" } }, "sha512-FTiCpNxtwiZZHEZbcbTIcZjERVICn9yq/pDFkTl95/AxzD1naBctN7YO68riM/gLSDY7sdrMby8hofADYuuqOA=="], @@ -691,7 +711,7 @@ "camelcase": ["camelcase@6.3.0", "", {}, "sha512-Gmy6FhYlCY7uOElZUSbxo2UCDH8owEk996gkbrpsgGtrJLM3J7jGxl9Ic7Qwwj4ivOE5AWZWRMecDdF7hqGjFA=="], - "caniuse-lite": ["caniuse-lite@1.0.30001780", "", {}, "sha512-llngX0E7nQci5BPJDqoZSbuZ5Bcs9F5db7EtgfwBerX9XGtkkiO4NwfDDIRzHTTwcYC8vC7bmeUEPGrKlR/TkQ=="], + "caniuse-lite": ["caniuse-lite@1.0.30001787", "", {}, 
"sha512-mNcrMN9KeI68u7muanUpEejSLghOKlVhRqS/Za2IeyGllJ9I9otGpR9g3nsw7n4W378TE/LyIteA0+/FOZm4Kg=="], "ccount": ["ccount@2.0.1", "", {}, "sha512-eyrF0jiFpY+3drT6383f1qhkbGsLSifNAjA61IUjZjmLCWjItY6LB9ft9YhoDgwfmclB2zhu51Lc7+95b8NRAg=="], @@ -749,7 +769,7 @@ "concat-map": ["concat-map@0.0.1", "", {}, "sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg=="], - "content-disposition": ["content-disposition@1.0.1", "", {}, "sha512-oIXISMynqSqm241k6kcQ5UwttDILMK4BiurCfGEREw6+X9jkkpEe5T9FZaApyLGGOnFuyMWZpdolTXMtvEJ08Q=="], + "content-disposition": ["content-disposition@1.1.0", "", {}, "sha512-5jRCH9Z/+DRP7rkvY83B+yGIGX96OYdJmzngqnw2SBSxqCFPd0w2km3s5iawpGX8krnwSGmF0FW5Nhr0Hfai3g=="], "content-type": ["content-type@1.0.5", "", {}, "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA=="], @@ -841,7 +861,7 @@ "domutils": ["domutils@3.2.2", "", { "dependencies": { "dom-serializer": "^2.0.0", "domelementtype": "^2.3.0", "domhandler": "^5.0.3" } }, "sha512-6kZKyUajlDuqlHKVX1w7gyslj9MPIXzIFiz/rGu35uC1wMi+kMhQwGhl4lt9unC9Vb9INnY9Z3/ZA3+FhASLaw=="], - "dotenv": ["dotenv@17.3.1", "", {}, "sha512-IO8C/dzEb6O3F9/twg6ZLXz164a2fhTnEWb95H23Dm4OuN+92NmEAlTrupP9VW6Jm3sO26tQlqyvyi4CsnY9GA=="], + "dotenv": ["dotenv@17.4.1", "", {}, "sha512-k8DaKGP6r1G30Lx8V4+pCsLzKr8vLmV2paqEj1Y55GdAgJuIqpRp5FfajGF8KtwMxCz9qJc6wUIJnm053d/WCw=="], "dunder-proto": ["dunder-proto@1.0.1", "", { "dependencies": { "call-bind-apply-helpers": "^1.0.1", "es-errors": "^1.3.0", "gopd": "^1.2.0" } }, "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A=="], @@ -859,7 +879,7 @@ "ejs": ["ejs@3.1.10", "", { "dependencies": { "jake": "^10.8.5" }, "bin": { "ejs": "bin/cli.js" } }, "sha512-UeJmFfOrAQS8OJWPZ4qtgHyWExa088/MtK5UEyoJGFH67cDEXkZSviOiKRCZ4Xij0zxI3JECgYs3oKx+AizQBA=="], - "electron-to-chromium": ["electron-to-chromium@1.5.321", "", {}, 
"sha512-L2C7Q279W2D/J4PLZLk7sebOILDSWos7bMsMNN06rK482umHUrh/3lM8G7IlHFOYip2oAg5nha1rCMxr/rs6ZQ=="], + "electron-to-chromium": ["electron-to-chromium@1.5.335", "", {}, "sha512-q9n5T4BR4Xwa2cwbrwcsDJtHD/enpQ5S1xF1IAtdqf5AAgqDFmR/aakqH3ChFdqd/QXJhS3rnnXFtexU7rax6Q=="], "emoji-regex": ["emoji-regex@8.0.0", "", {}, "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="], @@ -1035,7 +1055,7 @@ "headers-polyfill": ["headers-polyfill@4.0.3", "", {}, "sha512-IScLbePpkvO846sIwOtOTDjutRMWdXdJmXdMvk6gCBHxFO8d+QKOQedyZSxFTTFYRSmlgSTDtXqqq4pcenBXLQ=="], - "hono": ["hono@4.12.9", "", {}, "sha512-wy3T8Zm2bsEvxKZM5w21VdHDDcwVS1yUFFY6i8UobSsKfFceT7TOwhbhfKsDyx7tYQlmRM5FLpIuYvNFyjctiA=="], + "hono": ["hono@4.12.12", "", {}, "sha512-p1JfQMKaceuCbpJKAPKVqyqviZdS0eUxH9v82oWo1kb9xjQ5wA6iP3FNVAPDFlz5/p7d45lO+BpSk1tuSZMF4Q=="], "hosted-git-info": ["hosted-git-info@8.1.0", "", { "dependencies": { "lru-cache": "^10.0.1" } }, "sha512-Rw/B2DNQaPBICNXEm8balFz9a6WpZrkCGpcWFpy7nCj+NyhSdqXipmfvtmWt9xGfp0wZnBxB+iVpLmQMYt47Tw=="], @@ -1175,6 +1195,28 @@ "lazystream": ["lazystream@1.0.1", "", { "dependencies": { "readable-stream": "^2.0.5" } }, "sha512-b94GiNHQNy6JNTrt5w6zNyffMrNkXZb3KTkCZJb2V1xaEGCk093vkZ2jk3tpaeP33/OiXC+WvK9AxUebnf5nbw=="], + "lefthook": ["lefthook@2.1.5", "", { "optionalDependencies": { "lefthook-darwin-arm64": "2.1.5", "lefthook-darwin-x64": "2.1.5", "lefthook-freebsd-arm64": "2.1.5", "lefthook-freebsd-x64": "2.1.5", "lefthook-linux-arm64": "2.1.5", "lefthook-linux-x64": "2.1.5", "lefthook-openbsd-arm64": "2.1.5", "lefthook-openbsd-x64": "2.1.5", "lefthook-windows-arm64": "2.1.5", "lefthook-windows-x64": "2.1.5" }, "bin": { "lefthook": "bin/index.js" } }, "sha512-yB9IFWurFllusbPZqvG0EavTmpNXPya2MuO7Li7YT78xAj3uCQ3AgmW9TVUbTTsSMhsegbiAMRpwfEk2TP1P0A=="], + + "lefthook-darwin-arm64": ["lefthook-darwin-arm64@2.1.5", "", { "os": "darwin", "cpu": "arm64" }, 
"sha512-VITTaw8PxxyE26gkZ8UcwIa5ZrWnKNRGLeeSrqri40cQdXvLTEoMq2tjjw7eiL9UcB0waRReDdzydevy9GOPUQ=="], + + "lefthook-darwin-x64": ["lefthook-darwin-x64@2.1.5", "", { "os": "darwin", "cpu": "x64" }, "sha512-AvtjYiW0BSGHBGrdvL313seUymrW9FxI+6JJwJ+ZSaa2sH81etrTB0wAwlH1L9VfFwK9+gWvatZBvLfF3L4fPw=="], + + "lefthook-freebsd-arm64": ["lefthook-freebsd-arm64@2.1.5", "", { "os": "freebsd", "cpu": "arm64" }, "sha512-mXjJwe8jKGWGiBYUxfQY1ab3Nn5NhafqT9q3KJz8m5joGGQj4JD0cbWxF1nVBLBWsDGbWZRZunTCMGcIScT2bQ=="], + + "lefthook-freebsd-x64": ["lefthook-freebsd-x64@2.1.5", "", { "os": "freebsd", "cpu": "x64" }, "sha512-exD69dCjc1K45BxatDPGoH4NmEvgLKPm4kJLOWn1fTeHRKZwWiFPwnjknEoG2OemlCDHmCU++5X40kMEG0WBlA=="], + + "lefthook-linux-arm64": ["lefthook-linux-arm64@2.1.5", "", { "os": "linux", "cpu": "arm64" }, "sha512-57TDKC5ewWpsCLZQKIJMHumFEObYKVundmPpiWhX491hINRZYYOL/26yrnVnNcidThRzTiTC+HLcuplLcaXtbA=="], + + "lefthook-linux-x64": ["lefthook-linux-x64@2.1.5", "", { "os": "linux", "cpu": "x64" }, "sha512-bqK3LrAB5l5YaCaoHk6qRWlITrGWzP4FbwRxA31elbxjd0wgNWZ2Sn3zEfSEcxz442g7/PPkEwqqsTx0kSFzpg=="], + + "lefthook-openbsd-arm64": ["lefthook-openbsd-arm64@2.1.5", "", { "os": "openbsd", "cpu": "arm64" }, "sha512-5aSwK7vV3A6t0w9PnxCMiVjQlcvopBP50BtmnnLnNJyAYHnFbZ0Baq5M0WkE9IsUkWSux0fe6fd0jDkuG711MA=="], + + "lefthook-openbsd-x64": ["lefthook-openbsd-x64@2.1.5", "", { "os": "openbsd", "cpu": "x64" }, "sha512-Y+pPdDuENJ8qWnUgL02xxhpjblc0WnwXvWGfqnl3WZrAgHzQpwx3G6469RID/wlNVdHYAlw3a8UkFSMYsTzXvA=="], + + "lefthook-windows-arm64": ["lefthook-windows-arm64@2.1.5", "", { "os": "win32", "cpu": "arm64" }, "sha512-2PlcFBjTzJaMufw0c28kfhB/0zmaRCU0TRPPsil/HU2YNOExod4upPGLk9qjgsOmb2YVWFz6zq6u7+D1yqmzTQ=="], + + "lefthook-windows-x64": ["lefthook-windows-x64@2.1.5", "", { "os": "win32", "cpu": "x64" }, "sha512-yiAh8qxml6uqy10jDxOdN9fOQpyLxBFY1fgCEAhn7sVJYmJKRhjqSBwZX6LG5MQjzr29KStrIdw7TR3lf3rT7Q=="], + "lie": ["lie@3.3.0", "", { "dependencies": { "immediate": "~3.0.5" } }, 
"sha512-UaiMJzeWRlEujzAuw5LokY1L5ecNQYZKfmyZ9L7wDHb/p5etKaxXhohBcrw0EYby+G/NA52vRSN4N39dxHAIwQ=="], "lightningcss": ["lightningcss@1.32.0", "", { "dependencies": { "detect-libc": "^2.0.3" }, "optionalDependencies": { "lightningcss-android-arm64": "1.32.0", "lightningcss-darwin-arm64": "1.32.0", "lightningcss-darwin-x64": "1.32.0", "lightningcss-freebsd-x64": "1.32.0", "lightningcss-linux-arm-gnueabihf": "1.32.0", "lightningcss-linux-arm64-gnu": "1.32.0", "lightningcss-linux-arm64-musl": "1.32.0", "lightningcss-linux-x64-gnu": "1.32.0", "lightningcss-linux-x64-musl": "1.32.0", "lightningcss-win32-arm64-msvc": "1.32.0", "lightningcss-win32-x64-msvc": "1.32.0" } }, "sha512-NXYBzinNrblfraPGyrbPoD19C1h9lfI/1mzgWYvXUTe414Gz/X1FD2XBZSZM7rRTrMA8JL3OtAaGifrIKhQ5yQ=="], @@ -1231,7 +1273,7 @@ "lru-cache": ["lru-cache@5.1.1", "", { "dependencies": { "yallist": "^3.0.2" } }, "sha512-KpNARQA3Iwv+jTA0utUVVbrh+Jlrr1Fv0e56GGzAFOXN7dk/FviaDW8LHmK52DlcH4WP2n6gI8vN1aesBFgo9w=="], - "lucide-react": ["lucide-react@1.7.0", "", { "peerDependencies": { "react": "^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0" } }, "sha512-yI7BeItCLZJTXikmK4KNUGCKoGzSvbKlfCvw44bU4fXAL6v3gYS4uHD1jzsLkfwODYwI6Drw5Tu9Z5ulDe0TSg=="], + "lucide-react": ["lucide-react@1.8.0", "", { "peerDependencies": { "react": "^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0" } }, "sha512-WuvlsjngSk7TnTBJ1hsCy3ql9V9VOdcPkd3PKcSmM34vJD8KG6molxz7m7zbYFgICwsanQWmJ13JlYs4Zp7Arw=="], "lz-string": ["lz-string@1.5.0", "", { "bin": { "lz-string": "bin/bin.js" } }, "sha512-h5bgJWpxJNswbU7qCrV0tIKQCaS3blPDrqKWx+QxzuzL1zGUzij9XCWLrSLsJPu5t+eWA/ycetzYAO5IOMcWAQ=="], @@ -1361,7 +1403,7 @@ "ms": ["ms@2.1.3", "", {}, "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="], - "msw": ["msw@2.12.14", "", { "dependencies": { "@inquirer/confirm": "^5.0.0", "@mswjs/interceptors": "^0.41.2", "@open-draft/deferred-promise": "^2.2.0", "@types/statuses": "^2.0.6", "cookie": "^1.0.2", "graphql": "^16.12.0", 
"headers-polyfill": "^4.0.2", "is-node-process": "^1.2.0", "outvariant": "^1.4.3", "path-to-regexp": "^6.3.0", "picocolors": "^1.1.1", "rettime": "^0.10.1", "statuses": "^2.0.2", "strict-event-emitter": "^0.5.1", "tough-cookie": "^6.0.0", "type-fest": "^5.2.0", "until-async": "^3.0.2", "yargs": "^17.7.2" }, "peerDependencies": { "typescript": ">= 4.8.x" }, "optionalPeers": ["typescript"], "bin": { "msw": "cli/index.js" } }, "sha512-4KXa4nVBIBjbDbd7vfQNuQ25eFxug0aropCQFoI0JdOBuJWamkT1yLVIWReFI8SiTRc+H1hKzaNk+cLk2N9rtQ=="], + "msw": ["msw@2.13.2", "", { "dependencies": { "@inquirer/confirm": "^5.0.0", "@mswjs/interceptors": "^0.41.2", "@open-draft/deferred-promise": "^2.2.0", "@types/statuses": "^2.0.6", "cookie": "^1.0.2", "graphql": "^16.12.0", "headers-polyfill": "^4.0.2", "is-node-process": "^1.2.0", "outvariant": "^1.4.3", "path-to-regexp": "^6.3.0", "picocolors": "^1.1.1", "rettime": "^0.10.1", "statuses": "^2.0.2", "strict-event-emitter": "^0.5.1", "tough-cookie": "^6.0.0", "type-fest": "^5.2.0", "until-async": "^3.0.2", "yargs": "^17.7.2" }, "peerDependencies": { "typescript": ">= 4.8.x" }, "optionalPeers": ["typescript"], "bin": { "msw": "cli/index.js" } }, "sha512-go2H1TIERKkC48pXiwec5l6sbNqYuvqOk3/vHGo1Zd+pq/H63oFawDQerH+WQdUw/flJFHDG7F+QdWMwhntA/A=="], "mute-stream": ["mute-stream@2.0.0", "", {}, "sha512-WWdIxpyjEn+FhQJQQv9aQAYlHoNVdzIzUySNV1gHUPDSdZJ3yZn7pAAbQcV7B56Mvu881q9FZV+0Vx2xC44VWA=="], @@ -1375,7 +1417,7 @@ "node-fetch": ["node-fetch@3.3.2", "", { "dependencies": { "data-uri-to-buffer": "^4.0.0", "fetch-blob": "^3.1.4", "formdata-polyfill": "^4.0.10" } }, "sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA=="], - "node-releases": ["node-releases@2.0.36", "", {}, "sha512-TdC8FSgHz8Mwtw9g5L4gR/Sh9XhSP/0DEkQxfEFXOpiul5IiHgHan2VhYYb6agDSfp4KuvltmGApc8HMgUrIkA=="], + "node-releases": ["node-releases@2.0.37", "", {}, 
"sha512-1h5gKZCF+pO/o3Iqt5Jp7wc9rH3eJJ0+nh/CIoiRwjRxde/hAHyLPXYN4V3CqKAbiZPSeJFSWHmJsbkicta0Eg=="], "normalize-package-data": ["normalize-package-data@7.0.1", "", { "dependencies": { "hosted-git-info": "^8.0.0", "semver": "^7.3.5", "validate-npm-package-license": "^3.0.4" } }, "sha512-linxNAT6M0ebEYZOx2tO6vBEFsVgnPpv+AVjk0wJHfaUIbq31Jm3T6vvZaarnOeWDh8ShnwXuaAyM7WT3RzErA=="], @@ -1453,11 +1495,11 @@ "picocolors": ["picocolors@1.1.1", "", {}, "sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA=="], - "picomatch": ["picomatch@4.0.3", "", {}, "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q=="], + "picomatch": ["picomatch@4.0.4", "", {}, "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A=="], "pkce-challenge": ["pkce-challenge@5.0.1", "", {}, "sha512-wQ0b/W4Fr01qtpHlqSqspcj3EhBvimsdh0KlHhH8HRZnMsEa0ea2fTULOXOS9ccQr3om+GcGRk4e+isrZWV8qQ=="], - "postcss": ["postcss@8.5.8", "", { "dependencies": { "nanoid": "^3.3.11", "picocolors": "^1.1.1", "source-map-js": "^1.2.1" } }, "sha512-OW/rX8O/jXnm82Ey1k44pObPtdblfiuWnrd8X7GJ7emImCOstunGbXUpp7HdBrFQX6rJzn3sPT397Wp5aCwCHg=="], + "postcss": ["postcss@8.5.9", "", { "dependencies": { "nanoid": "^3.3.11", "picocolors": "^1.1.1", "source-map-js": "^1.2.1" } }, "sha512-7a70Nsot+EMX9fFU3064K/kdHWZqGVY+BADLyXc8Dfv+mTLLVl6JzJpPaCZ2kQL9gIJvKXSLMHhqdRRjwQeFtw=="], "postcss-selector-parser": ["postcss-selector-parser@7.1.1", "", { "dependencies": { "cssesc": "^3.0.0", "util-deprecate": "^1.0.2" } }, "sha512-orRsuYpJVw8LdAwqqLykBj9ecS5/cRHlI5+nvTo8LcCKmzDmqVORXtOIYEEQuL9D4BxtA1lm5isAqzQZCoQ6Eg=="], @@ -1487,7 +1529,7 @@ "punycode": ["punycode@2.3.1", "", {}, "sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg=="], - "qs": ["qs@6.15.0", "", { "dependencies": { "side-channel": "^1.1.0" } }, 
"sha512-mAZTtNCeetKMH+pSjrb76NAM8V9a05I9aBZOHztWy/UqcJdQYNsf59vrRKWnojAT9Y+GbIvoTBC++CPHqpDBhQ=="], + "qs": ["qs@6.15.1", "", { "dependencies": { "side-channel": "^1.1.0" } }, "sha512-6YHEFRL9mfgcAvql/XhwTvf5jKcOiiupt2FiJxHkiX1z4j7WL8J/jRHYLluORvc1XxB5rV20KoeK00gVJamspg=="], "query-selector-shadow-dom": ["query-selector-shadow-dom@1.0.1", "", {}, "sha512-lT5yCqEBgfoMYpf3F2xQRK7zEr1rhIIZuceDK6+xRkJQ4NMbHTwXqk4NkwDwQMNqXgG9r9fyHnzwNVs6zV5KRw=="], @@ -1499,9 +1541,9 @@ "raw-body": ["raw-body@3.0.2", "", { "dependencies": { "bytes": "~3.1.2", "http-errors": "~2.0.1", "iconv-lite": "~0.7.0", "unpipe": "~1.0.0" } }, "sha512-K5zQjDllxWkf7Z5xJdV0/B0WTNqx6vxG70zJE4N0kBs4LovmEYWJzQGxC9bS9RAKu3bgM40lrd5zoLJ12MQ5BA=="], - "react": ["react@19.2.4", "", {}, "sha512-9nfp2hYpCwOjAN+8TZFGhtWEwgvWHXqESH8qT89AT/lWklpLON22Lc8pEtnpsZz7VmawabSU0gCjnj8aC0euHQ=="], + "react": ["react@19.2.5", "", {}, "sha512-llUJLzz1zTUBrskt2pwZgLq59AemifIftw4aB7JxOqf1HY2FDaGDxgwpAPVzHU1kdWabH7FauP4i1oEeer2WCA=="], - "react-dom": ["react-dom@19.2.4", "", { "dependencies": { "scheduler": "^0.27.0" }, "peerDependencies": { "react": "^19.2.4" } }, "sha512-AXJdLo8kgMbimY95O2aKQqsz2iWi9jMgKJhRBAxECE4IFxfcazB2LmzloIoibJI3C12IlY20+KFaLv+71bUJeQ=="], + "react-dom": ["react-dom@19.2.5", "", { "dependencies": { "scheduler": "^0.27.0" }, "peerDependencies": { "react": "^19.2.5" } }, "sha512-J5bAZz+DXMMwW/wV3xzKke59Af6CHY7G4uYLN1OvBcKEsWOs4pQExj86BBKamxl/Ik5bx9whOrvBlSDfWzgSag=="], "react-is": ["react-is@17.0.2", "", {}, "sha512-w2GsyukL62IJnlaff/nRegPQR94C/XXamvMWmSHRJ4y7Ts/4ocGRmTHvOs8PSE6pB3dWOrD/nueuU5sduBsQ4w=="], @@ -1513,7 +1555,7 @@ "react-remove-scroll-bar": ["react-remove-scroll-bar@2.3.8", "", { "dependencies": { "react-style-singleton": "^2.2.2", "tslib": "^2.0.0" }, "peerDependencies": { "@types/react": "*", "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0" }, "optionalPeers": ["@types/react"] }, "sha512-9r+yi9+mgU33AKcj6IbT9oRCO78WriSj6t/cF8DWBZJ9aOGPOTEDvdUDz1FwKim7QXWwmHqtdHnRJfhAxEG46Q=="], - 
"react-router": ["react-router@7.13.1", "", { "dependencies": { "cookie": "^1.0.1", "set-cookie-parser": "^2.6.0" }, "peerDependencies": { "react": ">=18", "react-dom": ">=18" }, "optionalPeers": ["react-dom"] }, "sha512-td+xP4X2/6BJvZoX6xw++A2DdEi++YypA69bJUV5oVvqf6/9/9nNlD70YO1e9d3MyamJEBQFEzk6mbfDYbqrSA=="], + "react-router": ["react-router@7.14.0", "", { "dependencies": { "cookie": "^1.0.1", "set-cookie-parser": "^2.6.0" }, "peerDependencies": { "react": ">=18", "react-dom": ">=18" }, "optionalPeers": ["react-dom"] }, "sha512-m/xR9N4LQLmAS0ZhkY2nkPA1N7gQ5TUVa5n8TgANuDTARbn1gt+zLPXEm7W0XDTbrQ2AJSJKhoa6yx1D8BcpxQ=="], "react-style-singleton": ["react-style-singleton@2.2.3", "", { "dependencies": { "get-nonce": "^1.0.0", "tslib": "^2.0.0" }, "peerDependencies": { "@types/react": "*", "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0 || ^19.0.0-rc" }, "optionalPeers": ["@types/react"] }, "sha512-b6jSvxvVnyptAiLjbkWLE/lOnR4lfTtDAl+eUC7RZy+QQWc6wRzIV2CE6xBuMmDxc2qIihtDCZD5NPOFl7fRBQ=="], @@ -1561,7 +1603,7 @@ "rgb2hex": ["rgb2hex@0.2.5", "", {}, "sha512-22MOP1Rh7sAo1BZpDG6R5RFYzR2lYEgwq7HEmyW2qcsOqR2lQKmn+O//xV3YG/0rrhMC6KVX2hU+ZXuaw9a5bw=="], - "rollup": ["rollup@4.60.0", "", { "dependencies": { "@types/estree": "1.0.8" }, "optionalDependencies": { "@rollup/rollup-android-arm-eabi": "4.60.0", "@rollup/rollup-android-arm64": "4.60.0", "@rollup/rollup-darwin-arm64": "4.60.0", "@rollup/rollup-darwin-x64": "4.60.0", "@rollup/rollup-freebsd-arm64": "4.60.0", "@rollup/rollup-freebsd-x64": "4.60.0", "@rollup/rollup-linux-arm-gnueabihf": "4.60.0", "@rollup/rollup-linux-arm-musleabihf": "4.60.0", "@rollup/rollup-linux-arm64-gnu": "4.60.0", "@rollup/rollup-linux-arm64-musl": "4.60.0", "@rollup/rollup-linux-loong64-gnu": "4.60.0", "@rollup/rollup-linux-loong64-musl": "4.60.0", "@rollup/rollup-linux-ppc64-gnu": "4.60.0", "@rollup/rollup-linux-ppc64-musl": "4.60.0", "@rollup/rollup-linux-riscv64-gnu": "4.60.0", "@rollup/rollup-linux-riscv64-musl": "4.60.0", 
"@rollup/rollup-linux-s390x-gnu": "4.60.0", "@rollup/rollup-linux-x64-gnu": "4.60.0", "@rollup/rollup-linux-x64-musl": "4.60.0", "@rollup/rollup-openbsd-x64": "4.60.0", "@rollup/rollup-openharmony-arm64": "4.60.0", "@rollup/rollup-win32-arm64-msvc": "4.60.0", "@rollup/rollup-win32-ia32-msvc": "4.60.0", "@rollup/rollup-win32-x64-gnu": "4.60.0", "@rollup/rollup-win32-x64-msvc": "4.60.0", "fsevents": "~2.3.2" }, "bin": { "rollup": "dist/bin/rollup" } }, "sha512-yqjxruMGBQJ2gG4HtjZtAfXArHomazDHoFwFFmZZl0r7Pdo7qCIXKqKHZc8yeoMgzJJ+pO6pEEHa+V7uzWlrAQ=="], + "rollup": ["rollup@4.60.1", "", { "dependencies": { "@types/estree": "1.0.8" }, "optionalDependencies": { "@rollup/rollup-android-arm-eabi": "4.60.1", "@rollup/rollup-android-arm64": "4.60.1", "@rollup/rollup-darwin-arm64": "4.60.1", "@rollup/rollup-darwin-x64": "4.60.1", "@rollup/rollup-freebsd-arm64": "4.60.1", "@rollup/rollup-freebsd-x64": "4.60.1", "@rollup/rollup-linux-arm-gnueabihf": "4.60.1", "@rollup/rollup-linux-arm-musleabihf": "4.60.1", "@rollup/rollup-linux-arm64-gnu": "4.60.1", "@rollup/rollup-linux-arm64-musl": "4.60.1", "@rollup/rollup-linux-loong64-gnu": "4.60.1", "@rollup/rollup-linux-loong64-musl": "4.60.1", "@rollup/rollup-linux-ppc64-gnu": "4.60.1", "@rollup/rollup-linux-ppc64-musl": "4.60.1", "@rollup/rollup-linux-riscv64-gnu": "4.60.1", "@rollup/rollup-linux-riscv64-musl": "4.60.1", "@rollup/rollup-linux-s390x-gnu": "4.60.1", "@rollup/rollup-linux-x64-gnu": "4.60.1", "@rollup/rollup-linux-x64-musl": "4.60.1", "@rollup/rollup-openbsd-x64": "4.60.1", "@rollup/rollup-openharmony-arm64": "4.60.1", "@rollup/rollup-win32-arm64-msvc": "4.60.1", "@rollup/rollup-win32-ia32-msvc": "4.60.1", "@rollup/rollup-win32-x64-gnu": "4.60.1", "@rollup/rollup-win32-x64-msvc": "4.60.1", "fsevents": "~2.3.2" }, "bin": { "rollup": "dist/bin/rollup" } }, "sha512-VmtB2rFU/GroZ4oL8+ZqXgSA38O6GR8KSIvWmEFv63pQ0G6KaBH9s07PO8XTXP4vI+3UJUEypOfjkGfmSBBR0w=="], "router": ["router@2.2.0", "", { "dependencies": { "debug": "^4.4.0", 
"depd": "^2.0.0", "is-promise": "^4.0.0", "parseurl": "^1.3.3", "path-to-regexp": "^8.0.0" } }, "sha512-nLTrUKm2UyiL7rlhapu/Zl45FwNgkZGaCpZbIHajDYgwlJCOzLSk+cIPAnsEqV955GjILJnKbdQC1nVPz+gAYQ=="], @@ -1603,7 +1645,7 @@ "setprototypeof": ["setprototypeof@1.2.0", "", {}, "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw=="], - "shadcn": ["shadcn@4.1.2", "", { "dependencies": { "@babel/core": "^7.28.0", "@babel/parser": "^7.28.0", "@babel/plugin-transform-typescript": "^7.28.0", "@babel/preset-typescript": "^7.27.1", "@dotenvx/dotenvx": "^1.48.4", "@modelcontextprotocol/sdk": "^1.26.0", "@types/validate-npm-package-name": "^4.0.2", "browserslist": "^4.26.2", "commander": "^14.0.0", "cosmiconfig": "^9.0.0", "dedent": "^1.6.0", "deepmerge": "^4.3.1", "diff": "^8.0.2", "execa": "^9.6.0", "fast-glob": "^3.3.3", "fs-extra": "^11.3.1", "fuzzysort": "^3.1.0", "https-proxy-agent": "^7.0.6", "kleur": "^4.1.5", "msw": "^2.10.4", "node-fetch": "^3.3.2", "open": "^11.0.0", "ora": "^8.2.0", "postcss": "^8.5.6", "postcss-selector-parser": "^7.1.0", "prompts": "^2.4.2", "recast": "^0.23.11", "stringify-object": "^5.0.0", "tailwind-merge": "^3.0.1", "ts-morph": "^26.0.0", "tsconfig-paths": "^4.2.0", "validate-npm-package-name": "^7.0.1", "zod": "^3.24.1", "zod-to-json-schema": "^3.24.6" }, "bin": { "shadcn": "dist/index.js" } }, "sha512-qNQcCavkbYsgBj+X09tF2bTcwRd8abR880bsFkDU2kMqceMCLAm5c+cLg7kWDhfh1H9g08knpQ5ZEf6y/co16g=="], + "shadcn": ["shadcn@4.2.0", "", { "dependencies": { "@babel/core": "^7.28.0", "@babel/parser": "^7.28.0", "@babel/plugin-transform-typescript": "^7.28.0", "@babel/preset-typescript": "^7.27.1", "@dotenvx/dotenvx": "^1.48.4", "@modelcontextprotocol/sdk": "^1.26.0", "@types/validate-npm-package-name": "^4.0.2", "browserslist": "^4.26.2", "commander": "^14.0.0", "cosmiconfig": "^9.0.0", "dedent": "^1.6.0", "deepmerge": "^4.3.1", "diff": "^8.0.2", "execa": "^9.6.0", "fast-glob": "^3.3.3", "fs-extra": "^11.3.1", 
"fuzzysort": "^3.1.0", "https-proxy-agent": "^7.0.6", "kleur": "^4.1.5", "msw": "^2.10.4", "node-fetch": "^3.3.2", "open": "^11.0.0", "ora": "^8.2.0", "postcss": "^8.5.6", "postcss-selector-parser": "^7.1.0", "prompts": "^2.4.2", "recast": "^0.23.11", "stringify-object": "^5.0.0", "tailwind-merge": "^3.0.1", "ts-morph": "^26.0.0", "tsconfig-paths": "^4.2.0", "validate-npm-package-name": "^7.0.1", "zod": "^3.24.1", "zod-to-json-schema": "^3.24.6" }, "bin": { "shadcn": "dist/index.js" } }, "sha512-ZDuV340itidaUd4Gi1BxQX+Y7Ush6BHp6URZBM2RyxUUBZ6yFtOWIr4nVY+Ro+YRSpo82v7JrsmtcU5xoBCMJQ=="], "shebang-command": ["shebang-command@2.0.0", "", { "dependencies": { "shebang-regex": "^3.0.0" } }, "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA=="], @@ -1611,7 +1653,7 @@ "side-channel": ["side-channel@1.1.0", "", { "dependencies": { "es-errors": "^1.3.0", "object-inspect": "^1.13.3", "side-channel-list": "^1.0.0", "side-channel-map": "^1.0.1", "side-channel-weakmap": "^1.0.2" } }, "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw=="], - "side-channel-list": ["side-channel-list@1.0.0", "", { "dependencies": { "es-errors": "^1.3.0", "object-inspect": "^1.13.3" } }, "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA=="], + "side-channel-list": ["side-channel-list@1.0.1", "", { "dependencies": { "es-errors": "^1.3.0", "object-inspect": "^1.13.4" } }, "sha512-mjn/0bi/oUURjc5Xl7IaWi/OJJJumuoJFQJfDDyO46+hBWsfaVM65TBHq2eoZBhzl9EchxOijpkbRC8SVBQU0w=="], "side-channel-map": ["side-channel-map@1.0.1", "", { "dependencies": { "call-bound": "^1.0.2", "es-errors": "^1.3.0", "get-intrinsic": "^1.2.5", "object-inspect": "^1.13.3" } }, "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA=="], @@ -1705,7 +1747,7 @@ "tailwindcss": ["tailwindcss@4.2.2", "", {}, 
"sha512-KWBIxs1Xb6NoLdMVqhbhgwZf2PGBpPEiwOqgI4pFIYbNTfBXiKYyWoTsXgBQ9WFg/OlhnvHaY+AEpW7wSmFo2Q=="], - "tapable": ["tapable@2.3.0", "", {}, "sha512-g9ljZiwki/LfxmQADO3dEY1CbpmXT5Hm2fJ+QaGKwSXUylMybePR7/67YW7jOrrvjEgL1Fmz5kzyAjWVWLlucg=="], + "tapable": ["tapable@2.3.2", "", {}, "sha512-1MOpMXuhGzGL5TTCZFItxCc0AARf1EZFQkGqMm7ERKj8+Hgr5oLvJOVFcC+lRmR8hCe2S3jC4T5D7Vg/d7/fhA=="], "tar-fs": ["tar-fs@3.1.2", "", { "dependencies": { "pump": "^3.0.0", "tar-stream": "^3.1.5" }, "optionalDependencies": { "bare-fs": "^4.0.1", "bare-path": "^3.0.0" } }, "sha512-QGxxTxxyleAdyM3kpFs14ymbYmNFrfY+pHj7Z8FgtbZ7w2//VAgLMac7sT6nRpIHjppXO2AwwEOg0bPFVRcmXw=="], @@ -1721,7 +1763,7 @@ "tinyexec": ["tinyexec@0.3.2", "", {}, "sha512-KQQR9yN7R5+OSwaK0XQoj22pwHoTlgYqmUscPYoknOoWCWfj/5/ABTMRi69FrKU5ffPVh5QcFikpWJI/P1ocHA=="], - "tinyglobby": ["tinyglobby@0.2.15", "", { "dependencies": { "fdir": "^6.5.0", "picomatch": "^4.0.3" } }, "sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ=="], + "tinyglobby": ["tinyglobby@0.2.16", "", { "dependencies": { "fdir": "^6.5.0", "picomatch": "^4.0.4" } }, "sha512-pn99VhoACYR8nFHhxqix+uvsbXineAasWm5ojXoN8xEwK5Kd3/TrhNn1wByuD52UxWRLy8pu+kRMniEi6Eq9Zg=="], "tinypool": ["tinypool@1.1.1", "", {}, "sha512-Zba82s87IFq9A9XmjiX5uZA/ARWDrB03OHlq+Vw1fSdt0I+4/Kutwy8BP4Y/y/aORMo61FQ0vIb5j44vSo5Pkg=="], @@ -1793,8 +1835,6 @@ "use-sidecar": ["use-sidecar@1.1.3", "", { "dependencies": { "detect-node-es": "^1.1.0", "tslib": "^2.0.0" }, "peerDependencies": { "@types/react": "*", "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0 || ^19.0.0-rc" }, "optionalPeers": ["@types/react"] }, "sha512-Fedw0aZvkhynoPYlA5WXrMCAMm+nSWdZt6lzJQ7Ok8S6Q+VsHmHpRWndVRJ8Be0ZbkfPc5LRYH+5XrzXcEeLRQ=="], - "use-sync-external-store": ["use-sync-external-store@1.6.0", "", { "peerDependencies": { "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0" } }, "sha512-Pp6GSwGP/NrPIrxVFAIkOQeyw8lFenOHijQWkUTrDvrF4ALqylP2C/KCkeS9dpUM3KvYRQhna5vt7IL95+ZQ9w=="], - 
"userhome": ["userhome@1.0.1", "", {}, "sha512-5cnLm4gseXjAclKowC4IjByaGsjtAoV6PrOQOljplNB54ReUYJP8HdAFq2muHinSDAh09PPX/uXDPfdxRHvuSA=="], "util-deprecate": ["util-deprecate@1.0.2", "", {}, "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw=="], @@ -1809,7 +1849,7 @@ "vfile-message": ["vfile-message@4.0.3", "", { "dependencies": { "@types/unist": "^3.0.0", "unist-util-stringify-position": "^4.0.0" } }, "sha512-QTHzsGd1EhbZs4AsQ20JX1rC3cOlt/IWJruk893DfLRr57lcnOeMaWG4K0JrRta4mIJZKth2Au3mM3u03/JWKw=="], - "vite": ["vite@6.4.1", "", { "dependencies": { "esbuild": "^0.25.0", "fdir": "^6.4.4", "picomatch": "^4.0.2", "postcss": "^8.5.3", "rollup": "^4.34.9", "tinyglobby": "^0.2.13" }, "optionalDependencies": { "fsevents": "~2.3.3" }, "peerDependencies": { "@types/node": "^18.0.0 || ^20.0.0 || >=22.0.0", "jiti": ">=1.21.0", "less": "*", "lightningcss": "^1.21.0", "sass": "*", "sass-embedded": "*", "stylus": "*", "sugarss": "*", "terser": "^5.16.0", "tsx": "^4.8.1", "yaml": "^2.4.2" }, "optionalPeers": ["@types/node", "jiti", "less", "lightningcss", "sass", "sass-embedded", "stylus", "sugarss", "terser", "tsx", "yaml"], "bin": { "vite": "bin/vite.js" } }, "sha512-+Oxm7q9hDoLMyJOYfUYBuHQo+dkAloi33apOPP56pzj+vsdJDzr+j1NISE5pyaAuKL4A3UD34qd0lx5+kfKp2g=="], + "vite": ["vite@6.4.2", "", { "dependencies": { "esbuild": "^0.25.0", "fdir": "^6.4.4", "picomatch": "^4.0.2", "postcss": "^8.5.3", "rollup": "^4.34.9", "tinyglobby": "^0.2.13" }, "optionalDependencies": { "fsevents": "~2.3.3" }, "peerDependencies": { "@types/node": "^18.0.0 || ^20.0.0 || >=22.0.0", "jiti": ">=1.21.0", "less": "*", "lightningcss": "^1.21.0", "sass": "*", "sass-embedded": "*", "stylus": "*", "sugarss": "*", "terser": "^5.16.0", "tsx": "^4.8.1", "yaml": "^2.4.2" }, "optionalPeers": ["@types/node", "jiti", "less", "lightningcss", "sass", "sass-embedded", "stylus", "sugarss", "terser", "tsx", "yaml"], "bin": { "vite": "bin/vite.js" } }, 
"sha512-2N/55r4JDJ4gdrCvGgINMy+HH3iRpNIz8K6SFwVsA+JbQScLiC+clmAxBgwiSPgcG9U15QmvqCGWzMbqda5zGQ=="], "vite-node": ["vite-node@3.2.4", "", { "dependencies": { "cac": "^6.7.14", "debug": "^4.4.1", "es-module-lexer": "^1.7.0", "pathe": "^2.0.3", "vite": "^5.0.0 || ^6.0.0 || ^7.0.0-0" }, "bin": { "vite-node": "vite-node.mjs" } }, "sha512-EbKSKh+bh1E1IFxeO0pg1n4dvoOTt0UDiXMd/qn++r98+jPO1xtJilvXldeuQ8giIB5IkpjCgMleHMNEsGH6pg=="], @@ -1869,6 +1909,8 @@ "yocto-queue": ["yocto-queue@1.2.2", "", {}, "sha512-4LCcse/U2MHZ63HAJVE+v71o7yOdIe4cZ70Wpf8D/IyjDKYQLV5GD46B+hSTjJsvV5PztjvHoU580EftxjDZFQ=="], + "yocto-spinner": ["yocto-spinner@1.1.0", "", { "dependencies": { "yoctocolors": "^2.1.1" } }, "sha512-/BY0AUXnS7IKO354uLLA2eRcWiqDifEbd6unXCsOxkFDAkhgUL3PH9X2bFoaU0YchnDXsF+iKleeTLJGckbXfA=="], + "yoctocolors": ["yoctocolors@2.1.2", "", {}, "sha512-CzhO+pFNo8ajLM2d2IW/R93ipy99LWjtwblvC1RsoSUMZgyLbYFr221TnSNT7GjGdYui6P459mw9JH/g/zW2ug=="], "yoctocolors-cjs": ["yoctocolors-cjs@2.1.3", "", {}, "sha512-U/PBtDf35ff0D8X8D0jfdzHYEPFxAI7jJlxZXwCSez5M3190m+QobIfh+sWDWSHMCWWJN2AWamkegn6vr6YBTw=="], @@ -1901,13 +1943,13 @@ "@puppeteer/browsers/semver": ["semver@7.7.4", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA=="], - "@tailwindcss/oxide-wasm32-wasi/@emnapi/core": ["@emnapi/core@1.9.1", "", { "dependencies": { "@emnapi/wasi-threads": "1.2.0", "tslib": "^2.4.0" }, "bundled": true }, "sha512-mukuNALVsoix/w1BJwFzwXBN/dHeejQtuVzcDsfOEsdpCumXb/E9j8w11h5S54tT1xhifGfbbSm/ICrObRb3KA=="], + "@tailwindcss/oxide-wasm32-wasi/@emnapi/core": ["@emnapi/core@1.9.2", "", { "dependencies": { "@emnapi/wasi-threads": "1.2.1", "tslib": "^2.4.0" }, "bundled": true }, "sha512-UC+ZhH3XtczQYfOlu3lNEkdW/p4dsJ1r/bP7H8+rhao3TTTMO1ATq/4DdIi23XuGoFY+Cz0JmCbdVl0hz9jZcA=="], - "@tailwindcss/oxide-wasm32-wasi/@emnapi/runtime": ["@emnapi/runtime@1.9.1", "", { "dependencies": { "tslib": "^2.4.0" }, "bundled": true }, 
"sha512-VYi5+ZVLhpgK4hQ0TAjiQiZ6ol0oe4mBx7mVv7IflsiEp0OWoVsp/+f9Vc1hOhE0TtkORVrI1GvzyreqpgWtkA=="], + "@tailwindcss/oxide-wasm32-wasi/@emnapi/runtime": ["@emnapi/runtime@1.9.2", "", { "dependencies": { "tslib": "^2.4.0" }, "bundled": true }, "sha512-3U4+MIWHImeyu1wnmVygh5WlgfYDtyf0k8AbLhMFxOipihf6nrWC4syIm/SwEeec0mNSafiiNnMJwbza/Is6Lw=="], - "@tailwindcss/oxide-wasm32-wasi/@emnapi/wasi-threads": ["@emnapi/wasi-threads@1.2.0", "", { "dependencies": { "tslib": "^2.4.0" }, "bundled": true }, "sha512-N10dEJNSsUx41Z6pZsXU8FjPjpBEplgH24sfkmITrBED1/U2Esum9F3lfLrMjKHHjmi557zQn7kR9R+XWXu5Rg=="], + "@tailwindcss/oxide-wasm32-wasi/@emnapi/wasi-threads": ["@emnapi/wasi-threads@1.2.1", "", { "dependencies": { "tslib": "^2.4.0" }, "bundled": true }, "sha512-uTII7OYF+/Mes/MrcIOYp5yOtSMLBWSIoLPpcgwipoiKbli6k322tcoFsxoIIxPDqW01SQGAgko4EzZi2BNv2w=="], - "@tailwindcss/oxide-wasm32-wasi/@napi-rs/wasm-runtime": ["@napi-rs/wasm-runtime@1.1.1", "", { "dependencies": { "@emnapi/core": "^1.7.1", "@emnapi/runtime": "^1.7.1", "@tybys/wasm-util": "^0.10.1" }, "bundled": true }, "sha512-p64ah1M1ld8xjWv3qbvFwHiFVWrq1yFvV4f7w+mzaqiR4IlSgkqhcRdHwsGgomwzBH51sRY4NEowLxnaBjcW/A=="], + "@tailwindcss/oxide-wasm32-wasi/@napi-rs/wasm-runtime": ["@napi-rs/wasm-runtime@1.1.3", "", { "dependencies": { "@tybys/wasm-util": "^0.10.1" }, "peerDependencies": { "@emnapi/core": "^1.7.1", "@emnapi/runtime": "^1.7.1" }, "bundled": true }, "sha512-xK9sGVbJWYb08+mTJt3/YV24WxvxpXcXtP6B172paPZ+Ts69Re9dAr7lKwJoeIx8OoeuimEiRZ7umkiUVClmmQ=="], "@tailwindcss/oxide-wasm32-wasi/@tybys/wasm-util": ["@tybys/wasm-util@0.10.1", "", { "dependencies": { "tslib": "^2.4.0" }, "bundled": true }, "sha512-9tTaPJLSiejZKx+Bmog4uSubteqTvFrVrURwkmHixBo0G4seD0zUxp98E1DzUBJxLQ3NPwXrGKDiVjwx/DpPsg=="], @@ -2033,8 +2075,6 @@ "proxy-agent/lru-cache": ["lru-cache@7.18.3", "", {}, "sha512-jumlc0BIUrS3qJGgIkWZsyfAM7NCWiBcCDhnd+3NNM5KbBmLTgHVfWBcg6W+rLUsIpzpERPsvwUP7CckAQSOoA=="], - "randombytes/safe-buffer": ["safe-buffer@5.2.1", "", {}, 
"sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ=="], - "raw-body/iconv-lite": ["iconv-lite@0.7.2", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3.0.0" } }, "sha512-im9DjEDQ55s9fL4EYzOAv0yMqmMBSZp6G0VvFyTMPKWxiSBHUj9NW/qqLmXUwXrrM7AvqSlTCfvqRb0cM8yYqw=="], "read-pkg/normalize-package-data": ["normalize-package-data@6.0.2", "", { "dependencies": { "hosted-git-info": "^7.0.0", "semver": "^7.3.5", "validate-npm-package-license": "^3.0.4" } }, "sha512-V6gygoYb/5EmNI+MEGrWkC+e6+Rr7mTmfHrxDbLzxQogBkgzo76rkok0Am6thgSF7Mv2nLOajAJj5vDJZEFn7g=="], @@ -2045,7 +2085,7 @@ "restore-cursor/onetime": ["onetime@7.0.0", "", { "dependencies": { "mimic-function": "^5.0.0" } }, "sha512-VXJjc87FScF88uafS3JllDgvAm+c/Slfz06lorj2uAY34rlUu0Nt+v8wreiImcrgAjjIHp1rXpTDlLOGw29WwQ=="], - "router/path-to-regexp": ["path-to-regexp@8.4.1", "", {}, "sha512-fvU78fIjZ+SBM9YwCknCvKOUKkLVqtWDVctl0s7xIqfmfb38t2TT4ZU2gHm+Z8xGwgW+QWEU3oQSAzIbo89Ggw=="], + "router/path-to-regexp": ["path-to-regexp@8.4.2", "", {}, "sha512-qRcuIdP69NPm4qbACK+aDogI5CBDMi1jKe0ry5rSQJz8JVLsC7jV8XpiJjGRLLol3N+R5ihGYcrPLTno6pAdBA=="], "stack-utils/escape-string-regexp": ["escape-string-regexp@2.0.0", "", {}, "sha512-UpzcLCXolUWcNu5HtVMHYdXJjArjsF9C0aNnquZYY4uW/Vu0miy5YoWvbV345HauVvcAUnpRuhMMcqTcGOY2+w=="], @@ -2175,7 +2215,7 @@ "mocha/yargs/cliui": ["cliui@7.0.4", "", { "dependencies": { "string-width": "^4.2.0", "strip-ansi": "^6.0.0", "wrap-ansi": "^7.0.0" } }, "sha512-OcRE68cOsVMXp1Yvonl/fzkQOyjLSu/8bhPDfQt0e0/Eb283TKP20Fs2MqoPsr9SwA595rRCA+QMzYc9nBP+JQ=="], - "msw/tough-cookie/tldts": ["tldts@7.0.27", "", { "dependencies": { "tldts-core": "^7.0.27" }, "bin": { "tldts": "bin/cli.js" } }, "sha512-I4FZcVFcqCRuT0ph6dCDpPuO4Xgzvh+spkcTr1gK7peIvxWauoloVO0vuy1FQnijT63ss6AsHB6+OIM4aXHbPg=="], + "msw/tough-cookie/tldts": ["tldts@7.0.28", "", { "dependencies": { "tldts-core": "^7.0.28" }, "bin": { "tldts": "bin/cli.js" } }, 
"sha512-+Zg3vWhRUv8B1maGSTFdev9mjoo8Etn2Ayfs4cnjlD3CsGkxXX4QyW3j2WJ0wdjYcYmy7Lx2RDsZMhgCWafKIw=="], "ora/log-symbols/is-unicode-supported": ["is-unicode-supported@1.3.0", "", {}, "sha512-43r2mRvz+8JRIKnWJ+3j8JtjRKZ6GmjzfaE/qiBJnikNnYv/6bagRJ1kUhNk8R5EX/GkobD+r+sfxCPJsiKBLQ=="], @@ -2191,7 +2231,7 @@ "read-pkg/parse-json/type-fest": ["type-fest@3.13.1", "", {}, "sha512-tLq3bSNx+xSpwvAJnzrK0Ep5CLNWjvFTOp71URMaAEWBfRb9nnJiBoUe0tF8bI4ZFO3omgBR6NvnbzVUT3Ly4g=="], - "recursive-readdir/minimatch/brace-expansion": ["brace-expansion@1.1.13", "", { "dependencies": { "balanced-match": "^1.0.0", "concat-map": "0.0.1" } }, "sha512-9ZLprWS6EENmhEOpjCYW2c8VkmOvckIJZfkr7rBW6dObmfgJ/L1GpSYW5Hpo9lDz4D1+n0Ckz8rU7FwHDQiG/w=="], + "recursive-readdir/minimatch/brace-expansion": ["brace-expansion@1.1.14", "", { "dependencies": { "balanced-match": "^1.0.0", "concat-map": "0.0.1" } }, "sha512-MWPGfDxnyzKU7rNOW9SP/c50vi3xrmrua/+6hfPbCS2ABNWfx24vPidzvC7krjU/RTo235sV776ymlsMtGKj8g=="], "tsx/esbuild/@esbuild/aix-ppc64": ["@esbuild/aix-ppc64@0.27.7", "", { "os": "aix", "cpu": "ppc64" }, "sha512-EKX3Qwmhz1eMdEJokhALr0YiD0lhQNwDqkPYyPhiSwKrh7/4KRjQc04sZ8db+5DVVnZ1LmbNDI1uAMPEUBnQPg=="], @@ -2265,7 +2305,7 @@ "mocha/yargs/cliui/strip-ansi": ["strip-ansi@6.0.1", "", { "dependencies": { "ansi-regex": "^5.0.1" } }, "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A=="], - "msw/tough-cookie/tldts/tldts-core": ["tldts-core@7.0.27", "", {}, "sha512-YQ7uPjgWUibIK6DW5lrKujGwUKhLevU4hcGbP5O6TcIUb+oTjJYJVWPS4nZsIHrEEEG6myk/oqAJUEQmpZrHsg=="], + "msw/tough-cookie/tldts/tldts-core": ["tldts-core@7.0.28", "", {}, "sha512-7W5Efjhsc3chVdFhqtaU0KtK32J37Zcr9RKtID54nG+tIpcY79CQK/veYPODxtD/LJ4Lue66jvrQzIX2Z2/pUQ=="], "read-pkg/normalize-package-data/hosted-git-info/lru-cache": ["lru-cache@10.4.3", "", {}, "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="], diff --git a/crates/app-services/Cargo.toml 
b/crates/app-services/Cargo.toml index b02475..f6b6479 100644 --- a/crates/app-services/Cargo.toml +++ b/crates/app-services/Cargo.toml @@ -9,3 +9,6 @@ thiserror = "2" [dev-dependencies] serde_json = "1" + +[lints] +workspace = true diff --git a/crates/app-services/src/lib.rs b/crates/app-services/src/lib.rs index 1da2d93..489762d 100644 --- a/crates/app-services/src/lib.rs +++ b/crates/app-services/src/lib.rs @@ -1,9 +1,16 @@ pub mod monitor; +pub mod project_queries; pub mod session; +pub mod wizard; pub use monitor::{ MonitorEvent, MonitorMessage, MonitorService, MonitorSnapshot, MonitorStream, OutputEntry, }; +pub use project_queries::{IterationStory, ProjectDetail, ProjectQueryRecord, ProjectsByStatus}; pub use session::{ IterationSummary, ServiceError, ServiceResult, SessionInfo, SessionService, SessionStats, }; +pub use wizard::{ + SaveDraftCommand, SaveWizardStateCommand, WizardAtomizeSnapshot, WizardConfigureSnapshot, + WizardDescribeSnapshot, WizardPlanSnapshot, WizardSnapshot, +}; diff --git a/crates/app-services/src/monitor.rs b/crates/app-services/src/monitor.rs index eee0fe2..fa48a4d 100644 --- a/crates/app-services/src/monitor.rs +++ b/crates/app-services/src/monitor.rs @@ -81,6 +81,23 @@ pub struct MonitorMessage { pub event: Option<MonitorEvent>, } +pub trait MonitorRepository { + fn latest_session( + &self, + project_id: &str, + ) -> crate::session::ServiceResult<Option<SessionInfo>>; + fn recent_output( + &self, + project_id: &str, + limit: usize, + ) -> crate::session::ServiceResult<Vec<OutputEntry>>; + fn recent_events( + &self, + project_id: &str, + limit: usize, + ) -> crate::session::ServiceResult<Vec<MonitorEvent>>; +} + pub trait MonitorService { fn snapshot(&self, project_id: &str) -> crate::session::ServiceResult<MonitorSnapshot>; fn recent_output( @@ -95,30 +112,39 @@ ) -> crate::session::ServiceResult<Vec<MonitorEvent>>; } -#[cfg(test)] -mod tests { - use super::{MonitorEvent, MonitorMessage, MonitorStream, OutputEntry}; +pub struct RuntimeMonitorService<R> { + repository: R, +} - #[test] - fn
serializes_monitor_event_payloads() { - let event = MonitorEvent::Output { - entry: OutputEntry { - project_id: "project-1".into(), - session_id: Some("session-1".into()), - stream: MonitorStream::Stderr, - content: "line".into(), - emitted_at: Some("2026-04-10T00:00:00Z".into()), - }, - }; - let message = serde_json::to_value(MonitorMessage { - project_id: "project-1".into(), - snapshot: None, - event: Some(event), +impl<R> RuntimeMonitorService<R> { + pub fn new(repository: R) -> Self { + Self { repository } + } +} + +impl<R: MonitorRepository> MonitorService for RuntimeMonitorService<R> { + fn snapshot(&self, project_id: &str) -> crate::session::ServiceResult<MonitorSnapshot> { + Ok(MonitorSnapshot { + project_id: project_id.to_string(), + session: self.repository.latest_session(project_id)?, + recent_output: self.repository.recent_output(project_id, 200)?, + events: self.repository.recent_events(project_id, 100)?, }) - .expect("monitor message serializes"); + } + + fn recent_output( + &self, + project_id: &str, + limit: usize, + ) -> crate::session::ServiceResult<Vec<OutputEntry>> { + self.repository.recent_output(project_id, limit) + } - assert_eq!(message["projectId"], "project-1"); - assert_eq!(message["event"]["type"], "output"); - assert_eq!(message["event"]["entry"]["stream"], "stderr"); + fn recent_events( + &self, + project_id: &str, + limit: usize, + ) -> crate::session::ServiceResult<Vec<MonitorEvent>> { + self.repository.recent_events(project_id, limit) + } } diff --git a/crates/app-services/src/project_queries.rs b/crates/app-services/src/project_queries.rs new file mode 100644 index 0000000..150291a --- /dev/null +++ b/crates/app-services/src/project_queries.rs @@ -0,0 +1,136 @@ +use serde::{Deserialize, Serialize}; + +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)] +#[serde(rename_all = "camelCase")] +pub struct ProjectQueryRecord { + #[serde(default)] + pub id: String, + #[serde(default)] + pub name: String, + #[serde(default)] + pub description: String, + #[serde(default)] + pub status: String, +
#[serde(default)] + pub working_directory: String, + #[serde(default)] + pub created_at: String, + #[serde(default)] + pub updated_at: String, + #[serde(default)] + pub wizard_step: Option<String>, +} + +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)] +#[serde(rename_all = "camelCase")] +pub struct ProjectsByStatus { + #[serde(default)] + pub active: Vec<ProjectQueryRecord>, + #[serde(default)] + pub paused: Vec<ProjectQueryRecord>, + #[serde(default)] + pub completed: Vec<ProjectQueryRecord>, + #[serde(default)] + pub draft: Vec<ProjectQueryRecord>, + #[serde(default)] + pub archived: Vec<ProjectQueryRecord>, + #[serde(default)] + pub blocked: Vec<ProjectQueryRecord>, + #[serde(default)] + pub failed: Vec<ProjectQueryRecord>, +} + +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)] +#[serde(rename_all = "camelCase")] +pub struct IterationStory { + #[serde(default)] + pub id: String, + #[serde(default)] + pub title: String, + #[serde(default)] + pub status: String, + #[serde(default)] + pub duration_secs: Option<i64>, + #[serde(default)] + pub duration_label: Option<String>, + #[serde(default)] + pub attempts: i64, +} + +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)] +#[serde(rename_all = "camelCase")] +pub struct ProjectDetail<ProjectData> { + #[serde(default)] + pub project: ProjectData, + #[serde(default)] + pub total_stories: usize, + #[serde(default)] + pub passed_count: usize, + #[serde(default)] + pub blocked_count: usize, + #[serde(default)] + pub pending_count: usize, + #[serde(default)] + pub stories: Vec<IterationStory>, +} + +#[cfg(test)] +mod tests { + use super::{IterationStory, ProjectDetail, ProjectQueryRecord, ProjectsByStatus}; + + #[test] + fn projects_by_status_roundtrips_serialization() { + let grouped = ProjectsByStatus { + active: vec![ProjectQueryRecord { + id: "project-1".to_string(), + name: "LoopForge".to_string(), + status: "active".to_string(), + ..Default::default() + }], + blocked: vec![ProjectQueryRecord { + id: "project-2".to_string(), + name: "Blocked".to_string(), + status: "blocked".to_string(), + wizard_step: Some("configure".to_string()), +
..Default::default() + }], + ..Default::default() + }; + + let json = serde_json::to_string(&grouped).expect("serialize projects by status"); + let restored: ProjectsByStatus = + serde_json::from_str(&json).expect("deserialize projects by status"); + + assert_eq!(restored, grouped); + assert!(json.contains("\"wizardStep\":\"configure\"")); + } + + #[test] + fn project_detail_uses_project_query_contract_shape() { + let detail = ProjectDetail { + project: ProjectQueryRecord { + id: "project-1".to_string(), + name: "LoopForge".to_string(), + status: "draft".to_string(), + ..Default::default() + }, + total_stories: 3, + passed_count: 1, + blocked_count: 1, + pending_count: 1, + stories: vec![IterationStory { + id: "story-1".to_string(), + title: "Extract contracts".to_string(), + status: "passed".to_string(), + attempts: 2, + ..Default::default() + }], + }; + + let value = serde_json::to_value(detail).expect("serialize project detail"); + + assert_eq!(value["project"]["id"], "project-1"); + assert_eq!(value["totalStories"], 3); + assert_eq!(value["stories"][0]["attempts"], 2); + } +} diff --git a/crates/app-services/src/session.rs b/crates/app-services/src/session.rs index 1dafbca..3b4b82e 100644 --- a/crates/app-services/src/session.rs +++ b/crates/app-services/src/session.rs @@ -75,6 +75,46 @@ pub struct SessionStats { pub is_running: bool, } +#[derive(Debug, Clone, PartialEq, Eq, Default)] +pub struct LatestSessionRecord { + pub id: String, + pub started_at: Option<String>, +} + +#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)] +pub struct IterationCounts { + pub total: i64, + pub success: i64, + pub rate_limited: i64, +} + +#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)] +pub struct StoryCounts { + pub total: usize, + pub passed: usize, + pub blocked: usize, + pub pending: usize, +} + +pub trait SessionRepository { + fn is_running(&self, project_id: &str) -> ServiceResult<bool>; + fn latest_session_record(&self, project_id: &str) + -> ServiceResult<Option<LatestSessionRecord>>; + fn
iteration_counts(&self, session_id: &str) -> ServiceResult<IterationCounts>; + fn latest_agent(&self, session_id: &str) -> ServiceResult<Option<String>>; + fn story_counts(&self, project_id: &str) -> ServiceResult<StoryCounts>; + fn stories_per_hour( + &self, + session: Option<&LatestSessionRecord>, + passed_stories: usize, + ) -> ServiceResult<f64>; + fn iteration_history( + &self, + project_id: &str, + limit: usize, + ) -> ServiceResult<Vec<IterationSummary>>; +} + pub trait SessionService { fn latest_session(&self, project_id: &str) -> ServiceResult<Option<SessionInfo>>; fn session_stats(&self, project_id: &str) -> ServiceResult<SessionStats>; @@ -85,45 +125,76 @@ ) -> ServiceResult<Vec<IterationSummary>>; } -#[cfg(test)] -mod tests { - use super::{IterationSummary, SessionInfo, SessionStats}; - - #[test] - fn serializes_session_dtos_with_camel_case() { - let payload = serde_json::to_value(SessionStats { - project_id: "project-1".into(), - session_id: Some("session-1".into()), - total_iterations: 4, - success_count: 3, - failure_count: 1, - rate_limited_count: 0, - success_rate: 0.75, - stories_per_hour: 1.5, - total_stories: 5, - passed_stories: 3, - blocked_stories: 0, - pending_stories: 2, - current_agent: Some("codex".into()), - is_running: true, - }) - .expect("session stats serialize"); +pub struct RuntimeSessionService<R> { + repository: R, +} + +impl<R> RuntimeSessionService<R> { + pub fn new(repository: R) -> Self { + Self { repository } + } +} + +impl<R: SessionRepository> SessionService for RuntimeSessionService<R> { + fn latest_session(&self, project_id: &str) -> ServiceResult<Option<SessionInfo>> { + self.repository + .latest_session_record(project_id) + .map(|session| { + session.map(|record| SessionInfo { + id: record.id, + started_at: record.started_at, + ended_at: None, + }) + }) + } + + fn session_stats(&self, project_id: &str) -> ServiceResult<SessionStats> { + let is_running = self.repository.is_running(project_id)?; + let session = self.repository.latest_session_record(project_id)?; + let counts = session + .as_ref() + .map(|record| self.repository.iteration_counts(&record.id)) + .transpose()?
+ .unwrap_or_default(); + let current_agent = session + .as_ref() + .map(|record| self.repository.latest_agent(&record.id)) + .transpose()? + .flatten(); + let stories = self.repository.story_counts(project_id)?; + let failure_count = counts.total - counts.success - counts.rate_limited; + let success_rate = if counts.total > 0 { + counts.success as f64 / counts.total as f64 + } else { + 0.0 + }; + let stories_per_hour = self + .repository + .stories_per_hour(session.as_ref(), stories.passed)?; - assert_eq!(payload["projectId"], "project-1"); - assert_eq!(payload["sessionId"], "session-1"); - assert_eq!(payload["successRate"], 0.75); + Ok(SessionStats { + project_id: project_id.to_string(), + session_id: self.latest_session(project_id)?.map(|info| info.id), + total_iterations: counts.total, + success_count: counts.success, + failure_count, + rate_limited_count: counts.rate_limited, + success_rate, + stories_per_hour, + total_stories: stories.total, + passed_stories: stories.passed, + blocked_stories: stories.blocked, + pending_stories: stories.pending, + current_agent, + is_running, + }) } - #[test] - fn defaults_allow_backward_compatible_decoding() { - let decoded: SessionInfo = - serde_json::from_str(r#"{"id":"session-1"}"#).expect("session info decodes"); - let iteration: IterationSummary = serde_json::from_str( - r#"{"storyId":"S-1","startedAt":"2026-04-10T00:00:00Z","durationSecs":3,"result":"success","agentUsed":"codex"}"#, - ) - .expect("iteration summary decodes"); - - assert_eq!(decoded.id, "session-1"); - assert_eq!(iteration.story_id, "S-1"); + fn iteration_history( + &self, + project_id: &str, + limit: usize, + ) -> ServiceResult<Vec<IterationSummary>> { + self.repository.iteration_history(project_id, limit) } } diff --git a/crates/app-services/src/wizard.rs b/crates/app-services/src/wizard.rs new file mode 100644 index 0000000..455b53b --- /dev/null +++ b/crates/app-services/src/wizard.rs @@ -0,0 +1,196 @@ +use serde::{Deserialize, Serialize}; + +fn
default_snapshot_version() -> u32 { + 1 +} + +fn default_wizard_step() -> String { + "describe".to_string() +} + +fn default_highest_step() -> u32 { + 1 +} + +fn default_plan_agent() -> String { + "claude".to_string() +} + +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)] +#[serde(rename_all = "camelCase")] +pub struct WizardDescribeSnapshot { + #[serde(default)] + pub name: String, + #[serde(default)] + pub description: String, + #[serde(default)] + pub working_directory: String, + #[serde(default = "default_plan_agent")] + pub plan_agent: String, + #[serde(default)] + pub plan_model: Option<String>, + #[serde(default)] + pub plan_effort: Option<String>, +} + +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)] +#[serde(rename_all = "camelCase")] +pub struct WizardPlanSnapshot { + #[serde(default)] + pub completed: bool, +} + +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)] +#[serde(rename_all = "camelCase")] +pub struct WizardAtomizeSnapshot { + #[serde(default)] + pub stories_count: usize, +} + +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)] +#[serde(rename_all = "camelCase")] +pub struct WizardConfigureSnapshot { + #[serde(default)] + pub execute_agent: String, + #[serde(default)] + pub execute_model: Option<String>, + #[serde(default)] + pub execute_effort: Option<String>, + #[serde(default)] + pub fallback_chain: Vec<String>, + #[serde(default)] + pub gutter_threshold: u32, + #[serde(default)] + pub max_iterations: u32, + #[serde(default)] + pub cooldown_seconds: u32, + #[serde(default)] + pub test_command: String, + #[serde(default)] + pub max_verification_retries: u32, + #[serde(default)] + pub scm_provider: String, + #[serde(default)] + pub review_polling_interval: u32, + #[serde(default)] + pub review_timeout: u32, +} + +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)] +#[serde(rename_all = "camelCase")] +pub struct WizardSnapshot { + #[serde(default = "default_snapshot_version")]
+ pub version: u32, + #[serde(default)] + pub project_id: String, + #[serde(default = "default_wizard_step")] + pub current_step: String, + #[serde(default = "default_highest_step")] + pub highest_step: u32, + #[serde(default)] + pub describe: WizardDescribeSnapshot, + #[serde(default)] + pub plan: WizardPlanSnapshot, + #[serde(default)] + pub atomize: WizardAtomizeSnapshot, + #[serde(default)] + pub configure: WizardConfigureSnapshot, +} + +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)] +#[serde(rename_all = "camelCase")] +pub struct SaveWizardStateCommand { + #[serde(default)] + pub project_id: String, + #[serde(default = "default_wizard_step")] + pub wizard_step: String, + #[serde(default)] + pub wizard_state_json: String, +} + +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)] +#[serde(rename_all = "camelCase")] +pub struct SaveDraftCommand { + #[serde(default)] + pub project_id: String, + #[serde(default)] + pub draft_json: String, +} + +#[cfg(test)] +mod tests { + use super::{ + SaveWizardStateCommand, WizardConfigureSnapshot, WizardDescribeSnapshot, WizardSnapshot, + }; + + #[test] + fn wizard_snapshot_roundtrips_serialization() { + let snapshot = WizardSnapshot { + version: 1, + project_id: "project-1".to_string(), + current_step: "configure".to_string(), + highest_step: 4, + describe: WizardDescribeSnapshot { + name: "LoopForge".to_string(), + description: "Shared contracts".to_string(), + working_directory: "/tmp/loopforge".to_string(), + plan_agent: "codex".to_string(), + plan_model: Some("gpt-5.4".to_string()), + plan_effort: Some("high".to_string()), + }, + plan: Default::default(), + atomize: Default::default(), + configure: WizardConfigureSnapshot { + execute_agent: "codex".to_string(), + fallback_chain: vec!["claude".to_string()], + max_iterations: 12, + ..Default::default() + }, + }; + + let json = serde_json::to_string(&snapshot).expect("serialize wizard snapshot"); + let restored: WizardSnapshot = + 
serde_json::from_str(&json).expect("deserialize wizard snapshot"); + + assert_eq!(restored, snapshot); + assert!(json.contains("\"currentStep\":\"configure\"")); + } + + #[test] + fn legacy_wizard_snapshot_uses_backward_compatible_defaults() { + let legacy = r#"{ + "projectId":"project-1", + "describe":{ + "name":"LoopForge", + "description":"Shared contracts", + "workingDirectory":"/tmp/loopforge" + }, + "plan":{"completed":true}, + "atomize":{"storiesCount":2} + }"#; + + let snapshot: WizardSnapshot = + serde_json::from_str(legacy).expect("deserialize legacy snapshot"); + + assert_eq!(snapshot.version, 1); + assert_eq!(snapshot.current_step, "describe"); + assert_eq!(snapshot.highest_step, 1); + assert_eq!(snapshot.describe.plan_agent, "claude"); + assert_eq!(snapshot.configure.execute_agent, ""); + } + + #[test] + fn save_wizard_state_command_uses_camel_case() { + let command = SaveWizardStateCommand { + project_id: "project-1".to_string(), + wizard_step: "plan".to_string(), + wizard_state_json: "{}".to_string(), + }; + + let value = serde_json::to_value(command).expect("serialize wizard state command"); + + assert_eq!(value["projectId"], "project-1"); + assert_eq!(value["wizardStep"], "plan"); + assert_eq!(value["wizardStateJson"], "{}"); + } +} diff --git a/crates/loopforge-app-core/Cargo.toml b/crates/loopforge-app-core/Cargo.toml index b29e9d7..6127c27 100644 --- a/crates/loopforge-app-core/Cargo.toml +++ b/crates/loopforge-app-core/Cargo.toml @@ -5,3 +5,7 @@ edition = "2021" description = "Application core services for LoopForge backend orchestration" [dependencies] +app-services = { path = "../app-services" } + +[lints] +workspace = true diff --git a/crates/loopforge-app-core/src/atomizer.rs b/crates/loopforge-app-core/src/atomizer.rs index 28e5918..96be84c 100644 --- a/crates/loopforge-app-core/src/atomizer.rs +++ b/crates/loopforge-app-core/src/atomizer.rs @@ -1,3 +1,5 @@ +use std::future::Future; + #[derive(Debug, Clone, PartialEq, Eq)] pub enum 
AtomizerStage { CollectPlan, @@ -6,11 +8,57 @@ pub enum AtomizerStage { WriteStories, } +impl AtomizerStage { + pub fn ordered() -> [Self; 4] { + [ + Self::CollectPlan, + Self::BuildPrompt, + Self::ReviewStories, + Self::WriteStories, + ] + } + + pub fn index(&self) -> u8 { + match self { + Self::CollectPlan => 1, + Self::BuildPrompt => 2, + Self::ReviewStories => 3, + Self::WriteStories => 4, + } + } + + pub fn stage_name(&self) -> String { + match self { + Self::CollectPlan => String::from("summarize"), + Self::BuildPrompt => String::from("chunk"), + Self::ReviewStories => String::from("atomize"), + Self::WriteStories => String::from("merge"), + } + } + + fn start_message(&self) -> String { + match self { + Self::CollectPlan => String::from("Summarizing plan..."), + Self::BuildPrompt => String::from("Splitting into sections..."), + Self::ReviewStories => String::from("Atomizing sections..."), + Self::WriteStories => String::from("Merging and ordering stories..."), + } + } +} + #[derive(Debug, Clone, PartialEq, Eq)] pub struct AtomizerRequest { pub project_id: String, } +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct AtomizerProgress { + pub stage: u8, + pub stage_name: String, + pub message: String, + pub project_id: String, +} + #[derive(Debug, Clone, PartialEq, Eq)] pub enum AtomizerEvent { StageStarted { @@ -20,9 +68,67 @@ StageCompleted { project_id: String, stage: AtomizerStage, + story_count: Option<usize>, }, } +impl AtomizerEvent { + pub fn as_progress_payload(&self) -> AtomizerProgress { + match self { + Self::StageStarted { project_id, stage } => AtomizerProgress { + stage: stage.index(), + stage_name: stage.stage_name(), + message: stage.start_message(), + project_id: project_id.clone(), + }, + Self::StageCompleted { + project_id, + stage, + story_count, + } => AtomizerProgress { + stage: stage.index(), + stage_name: stage.stage_name(), + message: match story_count { + Some(total) => format!("Done — {total} stories"), + None =>
String::from("Stage completed"), + }, + project_id: project_id.clone(), + }, + } + } +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct AtomizerRunResult<OutputData> { + pub output: OutputData, + pub events: Vec<AtomizerEvent>, +} + pub trait AtomizerService { fn stages(&self, request: &AtomizerRequest) -> Vec<AtomizerStage>; } + +pub async fn run_atomizer<OutputData, ErrorData, FutureData>( + request: AtomizerRequest, + execute: impl FnOnce(AtomizerRequest) -> FutureData, + story_count: impl FnOnce(&OutputData) -> usize, +) -> Result<AtomizerRunResult<OutputData>, ErrorData> +where + FutureData: Future<Output = Result<OutputData, ErrorData>>, +{ + let mut events = AtomizerStage::ordered() + .into_iter() + .map(|stage| AtomizerEvent::StageStarted { + project_id: request.project_id.clone(), + stage, + }) + .collect::<Vec<_>>(); + let project_id = request.project_id.clone(); + let output = execute(request).await?; + events.push(AtomizerEvent::StageCompleted { + project_id, + stage: AtomizerStage::WriteStories, + story_count: Some(story_count(&output)), + }); + Ok(AtomizerRunResult { output, events }) +} diff --git a/crates/loopforge-app-core/src/events.rs b/crates/loopforge-app-core/src/events.rs index bbc6f28..f0d7017 100644 --- a/crates/loopforge-app-core/src/events.rs +++ b/crates/loopforge-app-core/src/events.rs @@ -1,6 +1,6 @@ use std::sync::{mpsc, Arc, Mutex}; -use crate::{AtomizerEvent, LoopSessionEvent, PlanSessionEvent}; +use crate::{atomizer::AtomizerProgress, AtomizerEvent, LoopSessionEvent, PlanSessionEvent}; #[derive(Debug, Clone, PartialEq, Eq)] pub enum AppEvent { @@ -23,7 +23,7 @@ impl EventFanout { let (sender, receiver) = mpsc::channel(); self.subscribers .lock() - .expect("event fanout lock poisoned") + .unwrap_or_else(std::sync::PoisonError::into_inner) .push(sender); receiver } @@ -32,11 +32,18 @@ impl EventFanout { let mut subscribers = self .subscribers .lock() - .expect("event fanout lock poisoned"); + .unwrap_or_else(std::sync::PoisonError::into_inner); subscribers.retain(|subscriber| subscriber.send(event.clone()).is_ok()); } } +pub fn atomizer_progress_payloads(events:
&[AtomizerEvent]) -> Vec<AtomizerProgress> { + events + .iter() + .map(AtomizerEvent::as_progress_payload) + .collect::<Vec<_>>() +} + #[cfg(test)] mod tests { use super::*; @@ -53,4 +60,16 @@ assert_eq!(first.recv().unwrap(), event); assert_eq!(second.recv().unwrap(), event); } + + #[test] + fn converts_atomizer_events_into_progress_payloads() { + let events = vec![AtomizerEvent::StageStarted { + project_id: String::from("project-1"), + stage: crate::AtomizerStage::CollectPlan, + }]; + let payloads = atomizer_progress_payloads(&events); + assert_eq!(payloads.len(), 1); + assert_eq!(payloads[0].stage, 1); + assert_eq!(payloads[0].stage_name, "summarize"); + } } diff --git a/crates/loopforge-app-core/src/lib.rs b/crates/loopforge-app-core/src/lib.rs index d596d22..6820b16 100644 --- a/crates/loopforge-app-core/src/lib.rs +++ b/crates/loopforge-app-core/src/lib.rs @@ -3,21 +3,25 @@ pub mod events; pub mod loop_session; pub mod plan; pub mod projects; +pub mod wizard; pub use atomizer::{AtomizerEvent, AtomizerRequest, AtomizerService, AtomizerStage}; pub use events::{AppEvent, EventFanout}; pub use loop_session::{LoopCommand, LoopSessionEvent, LoopSessionHandle, LoopSessionService}; pub use plan::{PlanSessionEvent, PlanSessionHandle, PlanSessionService, PlanSessionStatus}; pub use projects::{ProjectCommand, ProjectService, ProjectSummary}; +pub use wizard::{RuntimeWizardService, WizardRepository, WizardService}; #[cfg(test)] mod tests { use super::*; + use app_services::{SaveDraftCommand, WizardSnapshot}; struct NoopProjectService; struct NoopPlanService; struct NoopAtomizerService; struct NoopLoopSessionService; + struct NoopWizardService; impl ProjectService for NoopProjectService { fn apply(&self, command: ProjectCommand) -> ProjectSummary { @@ -61,12 +65,45 @@ } } + impl WizardService for NoopWizardService { + fn snapshot( + &self, + project_id: &str, + ) -> app_services::ServiceResult<Option<WizardSnapshot>> { + Ok(Some(WizardSnapshot { + project_id: project_id.to_string(), +
..Default::default() + })) + } + + fn save_state( + &self, + command: app_services::SaveWizardStateCommand, + ) -> app_services::ServiceResult<WizardSnapshot> { + Ok(WizardSnapshot { + project_id: command.project_id, + ..Default::default() + }) + } + + fn save_draft( + &self, + command: SaveDraftCommand, + ) -> app_services::ServiceResult<WizardSnapshot> { + Ok(WizardSnapshot { + project_id: command.project_id, + ..Default::default() + }) + } + } + #[test] fn exports_service_boundaries() { let project_service = NoopProjectService; let plan_service = NoopPlanService; let atomizer_service = NoopAtomizerService; let loop_service = NoopLoopSessionService; + let wizard_service = NoopWizardService; let summary = project_service.apply(ProjectCommand::Create { id: String::from("project-1"), name: String::from("LoopForge"), @@ -78,9 +115,16 @@ let loop_handle = loop_service.dispatch(LoopCommand::Start { project_id: summary.id, }); + let wizard_snapshot = wizard_service + .save_draft(SaveDraftCommand { + project_id: String::from("project-1"), + draft_json: String::from("{}"), + }) + .expect("save draft"); assert_eq!(summary.name, "LoopForge"); assert_eq!(plan.status, PlanSessionStatus::Idle); assert_eq!(stages.last(), Some(&AtomizerStage::WriteStories)); assert!(loop_handle.active); + assert_eq!(wizard_snapshot.project_id, "project-1"); } } diff --git a/crates/loopforge-app-core/src/plan.rs b/crates/loopforge-app-core/src/plan.rs index b0aec42..9ebf2fa 100644 --- a/crates/loopforge-app-core/src/plan.rs +++ b/crates/loopforge-app-core/src/plan.rs @@ -1,3 +1,7 @@ +use std::collections::HashMap; +use std::future::Future; +use std::sync::{Arc, Mutex, OnceLock}; + #[derive(Debug, Clone, PartialEq, Eq)] pub enum PlanSessionStatus { Idle, @@ -21,3 +25,87 @@ pub enum PlanSessionEvent { pub trait PlanSessionService { fn open(&self, project_id: impl Into<String>) -> PlanSessionHandle; } + +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub enum PlanCleanupReason { + ExplicitStop, + WindowClose, + RestartRecovery,
+ ProcessExit { exit_code: i32 }, +} + +type CleanupHook = Arc<dyn Fn(PlanCleanupReason) + Send + Sync>; + +fn cleanup_registry() -> &'static Mutex<HashMap<String, CleanupHook>> { + static REGISTRY: OnceLock<Mutex<HashMap<String, CleanupHook>>> = OnceLock::new(); + REGISTRY.get_or_init(|| Mutex::new(HashMap::new())) +} + +pub fn register_cleanup(project_id: &str, hook: CleanupHook) { + if let Ok(mut registry) = cleanup_registry().lock() { + registry.insert(project_id.to_string(), hook); + } +} + +pub fn cleanup_session<SessionData, ErrorData>( + project_id: &str, + reason: PlanCleanupReason, + take_session: impl FnOnce(&str) -> Result<Option<SessionData>, ErrorData>, + stop_session: impl FnOnce(SessionData) -> Result<(), ErrorData>, +) -> Result<bool, ErrorData> { + let session = take_session(project_id)?; + let removed = session.is_some(); + if let Some(session) = session { + stop_session(session)?; + } + let hook = cleanup_registry() + .lock() + .ok() + .and_then(|mut registry| registry.remove(project_id)); + let had_hook = hook.is_some(); + if let Some(hook) = hook { + hook(reason); + } + Ok(removed || had_hook) +} + +pub fn cleanup_sessions<ErrorData>( + project_ids: impl IntoIterator<Item = String>, + reason: PlanCleanupReason, + mut cleanup: impl FnMut(&str, PlanCleanupReason) -> Result<bool, ErrorData>, +) -> Result<(), ErrorData> { + for project_id in project_ids { + cleanup(&project_id, reason)?; + } + Ok(()) +} + +pub async fn start_plan<ErrorData, FutureData>( + project_id: String, + start: impl FnOnce(String) -> FutureData, +) -> Result<(), ErrorData> +where + FutureData: Future<Output = Result<(), ErrorData>>, +{ + start(project_id).await +} + +pub fn write_to_plan<ErrorData>( + project_id: String, + input: String, + write: impl FnOnce(String, Vec<u8>) -> Result<(), ErrorData>, + touch_activity: impl FnOnce(), +) -> Result<(), ErrorData> { + let mut payload = input.into_bytes(); + payload.push(b'\n'); + write(project_id, payload)?; + touch_activity(); + Ok(()) +} + +pub fn stop_plan<ErrorData>( + project_id: String, + stop: impl FnOnce(String, PlanCleanupReason) -> Result<(), ErrorData>, +) -> Result<(), ErrorData> { + stop(project_id, PlanCleanupReason::ExplicitStop) +} diff --git a/crates/loopforge-app-core/src/projects.rs
b/crates/loopforge-app-core/src/projects.rs index 4913ae9..65e48ac 100644 --- a/crates/loopforge-app-core/src/projects.rs +++ b/crates/loopforge-app-core/src/projects.rs @@ -13,3 +13,112 @@ pub enum ProjectCommand { pub trait ProjectService { fn apply(&self, command: ProjectCommand) -> ProjectSummary; } + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct CreateProjectRequest { + pub name: String, + pub description: String, + pub working_directory: String, + pub wizard_step: Option<String>, +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct CreateProjectRecord { + pub id: String, + pub name: String, + pub description: String, + pub status: String, + pub working_directory: String, + pub created_at: String, + pub updated_at: String, + pub wizard_step: Option<String>, +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct ProjectDetail<ProjectData, StoryData> { + pub project: ProjectData, + pub total_stories: usize, + pub passed_count: usize, + pub blocked_count: usize, + pub pending_count: usize, + pub stories: Vec<StoryData>, +} + +pub trait StoryState { + fn passes(&self) -> bool; + fn blocked(&self) -> bool; +} + +pub fn create_project<ProjectData, ErrorData>( + request: CreateProjectRequest, + create_id: impl FnOnce() -> String, + now: impl FnOnce() -> String, + initialize_artifacts: impl FnOnce(&str, &str) -> Result<(), ErrorData>, + persist: impl FnOnce(CreateProjectRecord) -> Result<ProjectData, ErrorData>, +) -> Result<ProjectData, ErrorData> { + let project_id = create_id(); + let timestamp = now(); + initialize_artifacts(&project_id, &request.name)?; + persist(CreateProjectRecord { + id: project_id, + name: request.name, + description: request.description, + status: "draft".to_string(), + working_directory: request.working_directory, + created_at: timestamp.clone(), + updated_at: timestamp, + wizard_step: request.wizard_step, + }) +} + +pub fn list_projects<ProjectsData, ErrorData>( + load: impl FnOnce() -> Result<ProjectsData, ErrorData>, +) -> Result<ProjectsData, ErrorData> { + load() +} + +pub fn finalize_draft<ErrorData>( + project_id: String, + now: impl FnOnce() -> String, + clear_draft_state: impl FnOnce(&str, &str) -> Result<(), ErrorData>, +
remove_draft_artifact: impl FnOnce(&str) -> Result<(), ErrorData>, +) -> Result<(), ErrorData> { + let updated_at = now(); + clear_draft_state(&project_id, &updated_at)?; + remove_draft_artifact(&project_id) +} + +pub fn discard_draft<ErrorData>( + project_id: String, + delete_draft: impl FnOnce(&str) -> Result<bool, ErrorData>, + remove_artifacts: impl FnOnce(&str) -> Result<(), ErrorData>, + not_found: impl FnOnce(String) -> ErrorData, +) -> Result<(), ErrorData> { + if !delete_draft(&project_id)? { + return Err(not_found(project_id)); + } + remove_artifacts(&project_id) +} + +pub fn get_project_detail<ProjectData, StoryData, ErrorData>( + project_id: String, + load_project: impl FnOnce(&str) -> Result<ProjectData, ErrorData>, + load_stories: impl FnOnce(&str) -> Result<Vec<StoryData>, ErrorData>, +) -> Result<ProjectDetail<ProjectData, StoryData>, ErrorData> +where + StoryData: StoryState, +{ + let project = load_project(&project_id)?; + let stories = load_stories(&project_id)?; + let total_stories = stories.len(); + let passed_count = stories.iter().filter(|story| story.passes()).count(); + let blocked_count = stories.iter().filter(|story| story.blocked()).count(); + Ok(ProjectDetail { + project, + total_stories, + passed_count, + blocked_count, + pending_count: total_stories - passed_count - blocked_count, + stories, + }) +} diff --git a/crates/loopforge-app-core/src/wizard.rs b/crates/loopforge-app-core/src/wizard.rs new file mode 100644 index 0000000..869760a --- /dev/null +++ b/crates/loopforge-app-core/src/wizard.rs @@ -0,0 +1,142 @@ +use app_services::{SaveDraftCommand, SaveWizardStateCommand, ServiceResult, WizardSnapshot}; + +pub trait WizardRepository { + fn load_snapshot(&self, project_id: &str) -> ServiceResult<Option<WizardSnapshot>>; + fn save_state(&self, command: SaveWizardStateCommand) -> ServiceResult<WizardSnapshot>; + fn save_draft(&self, command: SaveDraftCommand) -> ServiceResult<WizardSnapshot>; +} + +pub trait WizardService { + fn snapshot(&self, project_id: &str) -> ServiceResult<Option<WizardSnapshot>>; + fn save_state(&self, command: SaveWizardStateCommand) -> ServiceResult<WizardSnapshot>; + fn save_draft(&self, command: SaveDraftCommand) -> ServiceResult<WizardSnapshot>; +} +
+pub struct RuntimeWizardService<R> { + repository: R, +} + +impl<R> RuntimeWizardService<R> { + pub fn new(repository: R) -> Self { + Self { repository } + } +} + +impl<R: WizardRepository> WizardService for RuntimeWizardService<R> { + fn snapshot(&self, project_id: &str) -> ServiceResult<Option<WizardSnapshot>> { + load_wizard_snapshot(project_id, |project_id| { + self.repository.load_snapshot(project_id) + }) + } + + fn save_state(&self, command: SaveWizardStateCommand) -> ServiceResult<WizardSnapshot> { + save_wizard_state(command, |command| self.repository.save_state(command)) + } + + fn save_draft(&self, command: SaveDraftCommand) -> ServiceResult<WizardSnapshot> { + save_wizard_draft(command, |command| self.repository.save_draft(command)) + } +} + +pub fn load_wizard_snapshot( + project_id: &str, + load: impl FnOnce(&str) -> ServiceResult<Option<WizardSnapshot>>, +) -> ServiceResult<Option<WizardSnapshot>> { + load(project_id) +} + +pub fn save_wizard_state( + command: SaveWizardStateCommand, + save: impl FnOnce(SaveWizardStateCommand) -> ServiceResult<WizardSnapshot>, +) -> ServiceResult<WizardSnapshot> { + save(command) +} + +pub fn save_wizard_draft( + command: SaveDraftCommand, + save: impl FnOnce(SaveDraftCommand) -> ServiceResult<WizardSnapshot>, +) -> ServiceResult<WizardSnapshot> { + save(command) +} + +#[cfg(test)] +mod tests { + use super::{ + load_wizard_snapshot, save_wizard_draft, save_wizard_state, RuntimeWizardService, + SaveDraftCommand, SaveWizardStateCommand, WizardRepository, WizardService, WizardSnapshot, + }; + use app_services::{ServiceError, WizardAtomizeSnapshot, WizardConfigureSnapshot}; + + #[test] + fn load_snapshot_uses_shared_snapshot_output() { + let snapshot = snapshot("project-1"); + + let restored = load_wizard_snapshot("project-1", |project_id| { + assert_eq!(project_id, "project-1"); + Ok(Some(snapshot.clone())) + }) + .expect("load snapshot"); + + assert_eq!(restored, Some(snapshot)); + } + + #[test] + fn save_state_uses_shared_command_input() { + let command = SaveWizardStateCommand { + project_id: String::from("project-1"), + wizard_step: String::from("plan"), + wizard_state_json: 
String::from("{\"step\":\"plan\"}"), + }; + let snapshot = snapshot("project-1"); + + let saved = save_wizard_state(command.clone(), |received| { + assert_eq!(received, command); + Ok(snapshot.clone()) + }) + .expect("save wizard state"); + + assert_eq!(saved.project_id, "project-1"); + } + + #[test] + fn runtime_service_delegates_draft_save_through_repository() { + let service = RuntimeWizardService::new(StubWizardRepository); + let command = SaveDraftCommand { + project_id: String::from("project-9"), + draft_json: String::from("{\"draft\":true}"), + }; + + let saved = service.save_draft(command).expect("save draft"); + + assert_eq!(saved.project_id, "project-9"); + assert_eq!(saved.atomize, WizardAtomizeSnapshot { stories_count: 4 }); + assert_eq!(saved.configure, WizardConfigureSnapshot::default()); + } + + struct StubWizardRepository; + + impl WizardRepository for StubWizardRepository { + fn load_snapshot(&self, project_id: &str) -> Result<Option<WizardSnapshot>, ServiceError> { + Ok(Some(snapshot(project_id))) + } + + fn save_state( + &self, + command: SaveWizardStateCommand, + ) -> Result<WizardSnapshot, ServiceError> { + Ok(snapshot(&command.project_id)) + } + + fn save_draft(&self, command: SaveDraftCommand) -> Result<WizardSnapshot, ServiceError> { + save_wizard_draft(command, |received| Ok(snapshot(&received.project_id))) + } + } + + fn snapshot(project_id: &str) -> WizardSnapshot { + WizardSnapshot { + project_id: project_id.to_string(), + atomize: WizardAtomizeSnapshot { stories_count: 4 }, + ..Default::default() + } + } +} diff --git a/crates/loopforge-ui/src/diff_pane.rs b/crates/loopforge-ui/src/diff_pane.rs new file mode 100644 index 0000000..0aa2719 --- /dev/null +++ b/crates/loopforge-ui/src/diff_pane.rs @@ -0,0 +1,197 @@ +use ralph_core::{UnifiedDiffRequest, unified_diff}; +use std::path::PathBuf; +use std::sync::mpsc::{Receiver, TryRecvError, channel}; + +#[derive(Clone, Debug, PartialEq, Eq)] +pub enum DiffLineKind { + Added, + Removed, + Context, + Metadata, + Hunk, +} +#[derive(Clone, Debug, PartialEq, Eq)] +pub struct 
DiffLine { + pub text: String, + pub kind: DiffLineKind, + pub dark_token: &'static str, + pub light_token: &'static str, +} +#[derive(Clone, Debug, PartialEq, Eq)] +pub struct DiffPaneSnapshot { + pub request_key: String, + pub request_count: usize, + pub is_loading: bool, + pub total_lines: usize, + pub scroll_top_line: usize, + pub visible_lines: Vec<DiffLine>, +} +#[derive(Debug)] +struct DiffLoadResult { + request_ticket: usize, + request_key: String, + lines: Vec<DiffLine>, +} +#[derive(Debug)] +pub struct DiffPaneState { + repository_path: PathBuf, + request_key: String, + request_count: usize, + is_loading: bool, + lines: Vec<DiffLine>, + scroll_top_line: usize, + request_ticket: usize, + pending_receiver: Option<Receiver<DiffLoadResult>>, +} +impl DiffPaneState { + pub fn new(repository_path: PathBuf) -> Self { + Self { + repository_path, + request_key: String::new(), + request_count: 0, + is_loading: false, + lines: Vec::new(), + scroll_top_line: 0, + request_ticket: 0, + pending_receiver: None, + } + } + pub fn resolve_repository_path(start_path: PathBuf) -> PathBuf { + let mut candidate_path = start_path; + loop { + if candidate_path.join(".git").exists() { + return candidate_path; + } + let Some(parent_path) = candidate_path.parent() else { + return candidate_path; + }; + candidate_path = parent_path.to_path_buf(); + } + } + pub fn load_for_selection( + &mut self, + request_key: String, + base_ref: Option<&str>, + head_ref: Option<&str>, + ) { + let repository_path = self.repository_path.clone(); + let base_ref_value = base_ref.map(str::to_string); + let head_ref_value = head_ref.map(str::to_string); + self.start_async_load(request_key, move || { + let mut request = match base_ref_value { + Some(base_ref_item) => UnifiedDiffRequest::between_refs( + repository_path, + base_ref_item, + head_ref_value.unwrap_or_else(|| "HEAD".to_string()), + ), + None => UnifiedDiffRequest::working_tree(repository_path), + }; + request = request.with_context_lines(3); + let patch_text = 
unified_diff(&request).unwrap_or_else(|error| error.to_string()); + parse_patch_lines(&patch_text) + }); + } + pub fn load_fixture_async(&mut self, request_key: String, patch_text: String) { + self.start_async_load(request_key, move || parse_patch_lines(&patch_text)); + } + pub fn poll_background_load(&mut self) -> bool { + let Some(receiver) = self.pending_receiver.take() else { + return false; + }; + match receiver.try_recv() { + Ok(result) => { + if result.request_ticket == self.request_ticket { + self.request_key = result.request_key; + self.lines = result.lines; + self.is_loading = false; + self.clamp_scroll_top_line(); + } + true + } + Err(TryRecvError::Empty) => { + self.pending_receiver = Some(receiver); + false + } + Err(TryRecvError::Disconnected) => { + self.is_loading = false; + false + } + } + } + pub fn set_scroll_top_line(&mut self, scroll_top_line: usize) { + self.scroll_top_line = scroll_top_line; + self.clamp_scroll_top_line(); + } + pub fn is_loading(&self) -> bool { + self.is_loading + } + pub fn snapshot(&self, viewport_rows: usize) -> DiffPaneSnapshot { + let rows = viewport_rows.max(1); + let start = self.scroll_top_line.min(self.lines.len()); + let end = (start + rows).min(self.lines.len()); + DiffPaneSnapshot { + request_key: self.request_key.clone(), + request_count: self.request_count, + is_loading: self.is_loading, + total_lines: self.lines.len(), + scroll_top_line: self.scroll_top_line, + visible_lines: self.lines[start..end].to_vec(), + } + } + fn start_async_load(&mut self, request_key: String, build_lines: impl FnOnce() -> Vec<DiffLine> + Send + 'static) { + self.request_count += 1; + self.request_ticket += 1; + self.request_key = request_key.clone(); + self.is_loading = true; + let request_ticket = self.request_ticket; + let (sender, receiver) = channel(); + std::thread::spawn(move || { + let lines = build_lines(); + let _ = sender.send(DiffLoadResult { + request_ticket, + request_key, + lines, + }); + }); + self.pending_receiver = 
Some(receiver); + } + fn clamp_scroll_top_line(&mut self) { + if self.lines.is_empty() { + self.scroll_top_line = 0; + return; + } + self.scroll_top_line = self.scroll_top_line.min(self.lines.len() - 1); + } +} +fn parse_patch_lines(patch_text: &str) -> Vec<DiffLine> { + if patch_text.is_empty() { + return vec![line_from_text("Working tree is clean".to_string())]; + } + patch_text.lines().map(|line| line_from_text(line.to_string())).collect() +} +fn line_from_text(text: String) -> DiffLine { + let kind = classify_line_kind(&text); + let (dark_token, light_token) = match kind { + DiffLineKind::Added => ("diff-added-dark", "diff-added-light"), + DiffLineKind::Removed => ("diff-removed-dark", "diff-removed-light"), + DiffLineKind::Context => ("diff-context-dark", "diff-context-light"), + DiffLineKind::Metadata => ("diff-meta-dark", "diff-meta-light"), + DiffLineKind::Hunk => ("diff-hunk-dark", "diff-hunk-light"), + }; + DiffLine { text, kind, dark_token, light_token } +} +fn classify_line_kind(line: &str) -> DiffLineKind { + if line.starts_with("@@") { + return DiffLineKind::Hunk; + } + if line.starts_with("diff --git") || line.starts_with("index ") || line.starts_with("--- ") || line.starts_with("+++ ") { + return DiffLineKind::Metadata; + } + if line.starts_with('+') { + return DiffLineKind::Added; + } + if line.starts_with('-') { + return DiffLineKind::Removed; + } + DiffLineKind::Context +} diff --git a/crates/loopforge-ui/src/log_stream.rs b/crates/loopforge-ui/src/log_stream.rs new file mode 100644 index 0000000..004182b --- /dev/null +++ b/crates/loopforge-ui/src/log_stream.rs @@ -0,0 +1,132 @@ +use std::sync::mpsc::{Receiver, TryRecvError, channel}; + +#[derive(Clone, Debug, PartialEq, Eq)] +pub struct LogStreamSnapshot { + pub request_key: String, + pub request_count: usize, + pub is_loading: bool, + pub total_lines: usize, + pub scroll_top_line: usize, + pub visible_lines: Vec<String>, +} +#[derive(Debug)] +struct LogChunk { + request_ticket: usize, + request_key: String, + 
lines: Vec<String>, + is_final: bool, +} +#[derive(Debug)] +pub struct LogStreamState { + request_key: String, + request_count: usize, + is_loading: bool, + lines: Vec<String>, + scroll_top_line: usize, + request_ticket: usize, + pending_receiver: Option<Receiver<LogChunk>>, +} +impl LogStreamState { + pub fn new() -> Self { + Self { + request_key: String::new(), + request_count: 0, + is_loading: false, + lines: Vec::new(), + scroll_top_line: 0, + request_ticket: 0, + pending_receiver: None, + } + } + pub fn load_fixture_async(&mut self, request_key: String, fixture_text: String, chunk_size: usize) { + self.request_count += 1; + self.request_ticket += 1; + self.request_key = request_key.clone(); + self.is_loading = true; + self.lines.clear(); + self.scroll_top_line = 0; + let chunk_size_value = chunk_size.max(1); + let request_ticket = self.request_ticket; + let (sender, receiver) = channel(); + std::thread::spawn(move || { + let parsed_lines = if fixture_text.is_empty() { + vec!["No log output".to_string()] + } else { + fixture_text.lines().map(str::to_string).collect::<Vec<_>>() + }; + for line_chunk in parsed_lines.chunks(chunk_size_value) { + let _ = sender.send(LogChunk { + request_ticket, + request_key: request_key.clone(), + lines: line_chunk.to_vec(), + is_final: false, + }); + } + let _ = sender.send(LogChunk { + request_ticket, + request_key, + lines: Vec::new(), + is_final: true, + }); + }); + self.pending_receiver = Some(receiver); + } + pub fn poll_background_load(&mut self) -> bool { + let Some(receiver) = self.pending_receiver.take() else { + return false; + }; + match receiver.try_recv() { + Ok(chunk) => { + if chunk.request_ticket == self.request_ticket { + self.request_key = chunk.request_key; + if !chunk.lines.is_empty() { + self.lines.extend(chunk.lines); + } + if chunk.is_final { + self.is_loading = false; + self.clamp_scroll_top_line(); + } + } + if self.is_loading { + self.pending_receiver = Some(receiver); + } + true + } + Err(TryRecvError::Empty) => { + self.pending_receiver = 
Some(receiver); + false + } + Err(TryRecvError::Disconnected) => { + self.is_loading = false; + false + } + } + } + pub fn set_scroll_top_line(&mut self, scroll_top_line: usize) { + self.scroll_top_line = scroll_top_line; + self.clamp_scroll_top_line(); + } + pub fn is_loading(&self) -> bool { + self.is_loading + } + pub fn snapshot(&self, viewport_rows: usize) -> LogStreamSnapshot { + let rows = viewport_rows.max(1); + let start = self.scroll_top_line.min(self.lines.len()); + let end = (start + rows).min(self.lines.len()); + LogStreamSnapshot { + request_key: self.request_key.clone(), + request_count: self.request_count, + is_loading: self.is_loading, + total_lines: self.lines.len(), + scroll_top_line: self.scroll_top_line, + visible_lines: self.lines[start..end].to_vec(), + } + } + fn clamp_scroll_top_line(&mut self) { + if self.lines.is_empty() { + self.scroll_top_line = 0; + return; + } + self.scroll_top_line = self.scroll_top_line.min(self.lines.len() - 1); + } +} diff --git a/crates/loopforge-ui/src/monitor_view.rs b/crates/loopforge-ui/src/monitor_view.rs new file mode 100644 index 0000000..c79ef48 --- /dev/null +++ b/crates/loopforge-ui/src/monitor_view.rs @@ -0,0 +1,155 @@ +use crate::monitor_state::{MonitorState, MonitorSurface}; +use crate::sidebar::{SidebarEntry, build_sidebar_entries, row_count}; +use std::path::PathBuf; + +#[path = "diff_pane.rs"] +mod diff_pane; +use diff_pane::{DiffPaneSnapshot, DiffPaneState}; +#[path = "log_stream.rs"] +mod log_stream; +#[path = "output_pane.rs"] +mod output_pane; +use output_pane::{OutputPaneSnapshot, OutputPaneState}; + +#[derive(Clone, Debug, PartialEq, Eq)] +pub struct MonitorSnapshot { + pub sidebar: Vec<SidebarEntry>, + pub output_placeholder: String, + pub output: OutputPaneSnapshot, + pub diff: DiffPaneSnapshot, + pub focused_surface: MonitorSurface, +} + +#[derive(Debug)] +pub struct MonitorView { + state: MonitorState, + active_sidebar_row: usize, + output_pane: OutputPaneState, + diff_pane: DiffPaneState, +} + +impl 
MonitorView { + pub fn seeded() -> Self { + let working_directory = std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")); + let repository_path = DiffPaneState::resolve_repository_path(working_directory); + let mut monitor_view = Self { + state: MonitorState::seeded(), + active_sidebar_row: 0, + output_pane: OutputPaneState::new(), + diff_pane: DiffPaneState::new(repository_path), + }; + monitor_view.refresh_output_pane(); + monitor_view.refresh_diff_pane(); + monitor_view + } + + pub fn select_sidebar_row(&mut self, row_index: usize) -> bool { + if row_index >= row_count(&self.state) { + return false; + } + let session_count = self.state.sessions.len(); + let changed = if row_index < session_count { + self.state.set_active_session(row_index) + } else { + self.state.set_active_agent(row_index - session_count) + }; + if changed { + self.active_sidebar_row = row_index; + self.refresh_output_pane(); + self.refresh_diff_pane(); + } + changed + } + + pub fn cycle_focus(&mut self) { + self.state.cycle_focus(); + } + + pub fn set_diff_scroll_top_line(&mut self, scroll_top_line: usize) { + self.diff_pane.set_scroll_top_line(scroll_top_line); + } + + pub fn set_output_scroll_top_line(&mut self, scroll_top_line: usize) { + self.output_pane.set_scroll_top_line(scroll_top_line); + } + + pub fn begin_output_fixture_playback(&mut self, request_key: String, fixture_text: String) { + self.output_pane.load_fixture_async( + self.output_title(), + request_key, + fixture_text, + 200, + ); + } + + pub fn begin_diff_fixture_playback(&mut self, request_key: String, patch_text: String) { + self.diff_pane.load_fixture_async(request_key, patch_text); + } + + pub fn poll_background_tasks(&mut self) -> bool { + let output_changed = self.output_pane.poll_background_load(); + let diff_changed = self.diff_pane.poll_background_load(); + output_changed || diff_changed + } + + pub fn is_loading(&self) -> bool { + self.output_pane.is_loading() || self.diff_pane.is_loading() + } + + pub fn 
drain_background_tasks(&mut self, max_polls: usize) { + for _ in 0..max_polls { + let changed = self.poll_background_tasks(); + if !self.is_loading() { + return; + } + if !changed { + std::thread::yield_now(); + } + } + } + + pub fn render_snapshot(&self) -> MonitorSnapshot { + let output = self.output_pane.snapshot(120); + MonitorSnapshot { + sidebar: build_sidebar_entries(&self.state, self.active_sidebar_row), + output_placeholder: output.title.clone(), + output, + diff: self.diff_pane.snapshot(120), + focused_surface: self.state.active_surface.clone(), + } + } + + fn refresh_output_pane(&mut self) { + self.output_pane.load_fixture_async( + self.output_title(), + self.state.active_selection_key(), + self.output_fixture_text(), + 200, + ); + } + + fn refresh_diff_pane(&mut self) { + let (base_ref, head_ref) = self.state.active_diff_refs(); + self.diff_pane + .load_for_selection(self.state.active_selection_key(), base_ref, head_ref); + } + fn output_title(&self) -> String { + let active_session = self.state.active_session(); + let active_agent = self.state.active_agent(); + format!("Output pane: {} via {}", active_session.id, active_agent.model) + } + + fn output_fixture_text(&self) -> String { + let active_session = self.state.active_session(); + let active_agent = self.state.active_agent(); + (0..600) + .map(|line_index| { + format!( + "{} {} log line {}", + active_session.id, active_agent.model, line_index + ) + }) + .collect::<Vec<_>>() + .join("\n") + } +} diff --git a/crates/loopforge-ui/src/output_pane.rs b/crates/loopforge-ui/src/output_pane.rs new file mode 100644 index 0000000..fe16250 --- /dev/null +++ b/crates/loopforge-ui/src/output_pane.rs @@ -0,0 +1,46 @@ +use super::log_stream::{LogStreamSnapshot, LogStreamState}; + +#[derive(Clone, Debug, PartialEq, Eq)] +pub struct OutputPaneSnapshot { + pub title: String, + pub stream: LogStreamSnapshot, +} +#[derive(Debug)] +pub struct OutputPaneState { + title: String, + stream: LogStreamState, +} +impl 
OutputPaneState { + pub fn new() -> Self { + Self { + title: String::new(), + stream: LogStreamState::new(), + } + } + pub fn load_fixture_async( + &mut self, + title: String, + request_key: String, + fixture_text: String, + chunk_size: usize, + ) { + self.title = title; + self.stream + .load_fixture_async(request_key, fixture_text, chunk_size); + } + pub fn poll_background_load(&mut self) -> bool { + self.stream.poll_background_load() + } + pub fn set_scroll_top_line(&mut self, scroll_top_line: usize) { + self.stream.set_scroll_top_line(scroll_top_line); + } + pub fn is_loading(&self) -> bool { + self.stream.is_loading() + } + pub fn snapshot(&self, viewport_rows: usize) -> OutputPaneSnapshot { + OutputPaneSnapshot { + title: self.title.clone(), + stream: self.stream.snapshot(viewport_rows), + } + } +} diff --git a/crates/loopforge-ui/tests/monitor_poc.rs b/crates/loopforge-ui/tests/monitor_poc.rs new file mode 100644 index 0000000..398b80d --- /dev/null +++ b/crates/loopforge-ui/tests/monitor_poc.rs @@ -0,0 +1,115 @@ +use loopforge_ui::{MonitorSurface, MonitorView}; + +fn wait_until_idle(monitor_view: &mut MonitorView) { + monitor_view.drain_background_tasks(50_000); + assert!(!monitor_view.is_loading()); +} + +fn build_log_fixture(line_count: usize) -> String { + (0..line_count) + .map(|line_index| format!("fixture log line {}", line_index)) + .collect::<Vec<_>>() + .join("\n") +} + +fn build_patch_fixture(line_count: usize) -> String { + let mut lines = vec![ + "diff --git a/file.txt b/file.txt".to_string(), + "--- a/file.txt".to_string(), + "+++ b/file.txt".to_string(), + "@@ -1 +1 @@".to_string(), + ]; + lines.extend( + (0..line_count) + .map(|line_index| format!("+added line {}", line_index)) + .collect::<Vec<_>>(), + ); + lines.join("\n") +} + +#[test] +fn fixture_playback_keeps_window_interactive() { + let mut monitor_view = MonitorView::seeded(); + monitor_view.begin_output_fixture_playback("fixture-output".to_string(), build_log_fixture(8_000)); + 
monitor_view.begin_diff_fixture_playback("fixture-diff".to_string(), build_patch_fixture(12_000)); + let loading_snapshot = monitor_view.render_snapshot(); + assert!(loading_snapshot.output.stream.is_loading || loading_snapshot.diff.is_loading); + monitor_view.cycle_focus(); + assert_eq!(monitor_view.render_snapshot().focused_surface, MonitorSurface::Output); + monitor_view.cycle_focus(); + assert_eq!(monitor_view.render_snapshot().focused_surface, MonitorSurface::Diff); + monitor_view.set_output_scroll_top_line(400); + monitor_view.set_diff_scroll_top_line(400); + wait_until_idle(&mut monitor_view); + let settled_snapshot = monitor_view.render_snapshot(); + assert_eq!(settled_snapshot.output.stream.total_lines, 8_000); + assert!(settled_snapshot.diff.total_lines >= 12_000); +} + +#[test] +fn scrolls_through_five_thousand_output_lines_repeatably() { + let mut monitor_view = MonitorView::seeded(); + monitor_view.begin_output_fixture_playback("scroll-fixture".to_string(), build_log_fixture(6_200)); + monitor_view.begin_diff_fixture_playback("scroll-diff".to_string(), build_patch_fixture(20)); + wait_until_idle(&mut monitor_view); + monitor_view.set_output_scroll_top_line(5_000); + let snapshot = monitor_view.render_snapshot(); + assert_eq!(snapshot.output.stream.visible_lines.len(), 120); + assert_eq!(snapshot.output.stream.visible_lines[0], "fixture log line 5000"); +} + +#[test] +fn renders_large_patch_without_truncating_visible_rows() { + let mut monitor_view = MonitorView::seeded(); + monitor_view.begin_diff_fixture_playback("large-diff".to_string(), build_patch_fixture(7_500)); + wait_until_idle(&mut monitor_view); + monitor_view.set_diff_scroll_top_line(5_000); + let snapshot = monitor_view.render_snapshot(); + assert_eq!(snapshot.diff.visible_lines.len(), 120); + assert!(snapshot.diff.visible_lines[0].text.starts_with("+added line ")); +} + +#[test] +fn keyboard_focus_remains_reliable_after_repeated_cycles() { + let mut monitor_view = MonitorView::seeded(); + 
for cycle_index in 0..300 { + monitor_view.cycle_focus(); + let expected_surface = match cycle_index % 3 { + 0 => MonitorSurface::Output, + 1 => MonitorSurface::Diff, + _ => MonitorSurface::Sidebar, + }; + assert_eq!(monitor_view.render_snapshot().focused_surface, expected_surface); + } +} + +#[test] +fn dark_and_light_tokens_remain_in_parity_for_diff_lines() { + let mut monitor_view = MonitorView::seeded(); + let patch = "diff --git a/file.txt b/file.txt\n@@ -1 +1 @@\n-line\n+line\n line"; + monitor_view.begin_diff_fixture_playback("token-parity".to_string(), patch.to_string()); + wait_until_idle(&mut monitor_view); + let snapshot = monitor_view.render_snapshot(); + assert!(snapshot.diff.visible_lines.iter().all(|line| !line.dark_token.is_empty())); + assert!(snapshot.diff.visible_lines.iter().all(|line| !line.light_token.is_empty())); + assert!(snapshot + .diff + .visible_lines + .iter() + .any(|line| line.text.starts_with("diff --git") && line.dark_token == "diff-meta-dark")); + assert!(snapshot + .diff + .visible_lines + .iter() + .any(|line| line.text.starts_with("@@") && line.light_token == "diff-hunk-light")); + assert!(snapshot + .diff + .visible_lines + .iter() + .any(|line| line.text.starts_with('+') && line.dark_token == "diff-added-dark")); + assert!(snapshot + .diff + .visible_lines + .iter() + .any(|line| line.text.starts_with('-') && line.light_token == "diff-removed-light")); +} diff --git a/crates/native-shell/Cargo.toml b/crates/native-shell/Cargo.toml new file mode 100644 index 0000000..ce7c268 --- /dev/null +++ b/crates/native-shell/Cargo.toml @@ -0,0 +1,9 @@ +[package] +name = "native-shell" +version = "0.1.0" +edition = "2021" + +[dependencies] +app-services = { path = "../app-services" } +loopforge-app-core = { path = "../loopforge-app-core" } + diff --git a/crates/native-shell/src/app.rs b/crates/native-shell/src/app.rs new file mode 100644 index 0000000..9f477ea --- /dev/null +++ b/crates/native-shell/src/app.rs @@ -0,0 +1,200 @@ +use 
std::fs; +use std::io; +use std::path::PathBuf; +#[path = "screens/mod.rs"] +mod screens; +#[path = "services/planning.rs"] +mod planning_service; +#[path = "services/atomization.rs"] +mod atomization_service; +#[path = "services/projects.rs"] +mod projects_service; +#[path = "services/loops.rs"] +mod loops_service; +#[path = "theme/mod.rs"] +pub mod theme; +#[path = "view_models/home.rs"] +mod home_view_model; +use atomization_service::AtomizationService; +use home_view_model::{HomeAction, HomeProjectSummary, HomeSessionSummary, HomeViewModel, ProjectStatus, SessionState}; +use loops_service::LoopService; +use planning_service::PlanningService; +use projects_service::{ProjectLifecycle, ProjectsService}; +use screens::{AtomizationArtifact, AtomizationScreen, AtomizationStageUpdate, AtomizationState, HomeScreen, MonitorScreen, MonitorState, PlanningScreen, PlanningState, ProjectWizardScreen, ProjectWizardState}; +use theme::{ThemeName, ThemeStore}; +#[derive(Debug, Clone)] struct BackendAdapter { projects_root: Option<PathBuf> } +impl BackendAdapter { + fn new(projects_root: Option<PathBuf>) -> Self { Self { projects_root } } + fn projects_service(&self) -> ProjectsService { ProjectsService::new(self.projects_root.clone()) } + fn planning_service(&self, projects: &ProjectsService) -> PlanningService { PlanningService::new(projects.root().to_path_buf()) } + fn atomization_service(&self, projects: &ProjectsService) -> AtomizationService { AtomizationService::new(projects.root().to_path_buf()) } +} +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub enum ScreenId { Dashboard, Wizard, Planning, Atomization, Monitor } +#[derive(Debug, Clone, PartialEq, Eq)] +pub enum ScreenView { + Dashboard(HomeScreen), + Wizard(ProjectWizardScreen), + Planning(PlanningScreen), + Atomization(AtomizationScreen), + Monitor(MonitorScreen), +} +#[derive(Debug, Clone)] +pub struct NativeShellApp { + active_screen: ScreenId, + theme: ThemeName, + theme_store: ThemeStore, + projects: ProjectsService, + 
planning_service: PlanningService, + atomization_service: AtomizationService, + loop_service: LoopService, + wizard_state: ProjectWizardState, + planning_state: PlanningState, + atomization_state: AtomizationState, + monitor_state: MonitorState, + home: HomeViewModel, +} +impl NativeShellApp { + pub fn boot(theme_path: Option<PathBuf>) -> io::Result<Self> { Self::boot_with_paths(theme_path, None) } + pub fn boot_with_paths(theme_path: Option<PathBuf>, projects_root: Option<PathBuf>) -> io::Result<Self> { + let theme_store = ThemeStore::new(theme_path.unwrap_or_else(ThemeStore::default_path)); + let backend_adapter = BackendAdapter::new(projects_root); + let projects = backend_adapter.projects_service(); + let planning_service = backend_adapter.planning_service(&projects); + let atomization_service = backend_adapter.atomization_service(&projects); + let loop_service = LoopService::new(projects.root().to_path_buf()); + let mut app = Self { + active_screen: ScreenId::Dashboard, + theme: theme_store.load()?, + theme_store, + projects, + planning_service, + atomization_service, + loop_service, + wizard_state: ProjectWizardState::idle(), + planning_state: PlanningState::idle(), + atomization_state: AtomizationState::idle(), + monitor_state: MonitorState::idle(), + home: Self::build_home(Vec::new()), + }; + app.refresh_home()?; + Ok(app) + } + pub fn set_screen(&mut self, screen: ScreenId) { self.active_screen = screen; } + pub fn select_theme(&mut self, theme: ThemeName) -> io::Result<()> { + self.theme_store.save(theme)?; + self.theme = theme; + Ok(()) + } + pub fn start_project_wizard(&mut self, project_name: &str, objective: &str) -> io::Result<String> { + let record = self.projects.create_project(project_name, objective)?; + self.wizard_state = ProjectWizardState::completed(record.id.clone(), record.name.clone(), objective.trim().to_owned()); + self.refresh_home()?; + Ok(record.id) + } + pub fn start_planning_session(&mut self, project_id: &str, objective: &str) -> io::Result<()> { + let project_name = 
self.home.active_projects.iter().find(|project| project.id == project_id).map(|project| project.name.clone()).unwrap_or_else(|| project_id.to_owned()); + self.planning_state = PlanningState::start(project_id.to_owned(), project_name, objective.trim().to_owned()); + match self.planning_service.start_session(project_id, objective) { + Ok(activity_batch) => { + for activity in activity_batch { self.planning_state = self.planning_state.clone().push_activity(activity); } + self.active_screen = ScreenId::Planning; + Ok(()) + } + Err(error) => { self.planning_state = self.planning_state.clone().fail(error.to_string()); Err(error) } + } + } + pub fn send_planning_input(&mut self, input: &str) -> io::Result<()> { + match self.planning_service.send_input(input) { + Ok(activity_batch) => { for activity in activity_batch { self.planning_state = self.planning_state.clone().push_activity(activity); } Ok(()) } + Err(error) => { self.planning_state = self.planning_state.clone().fail(error.to_string()); Err(error) } + } + } + pub fn stop_planning_session(&mut self) -> io::Result<()> { + let session = self.planning_service.stop_session()?; + let plan_path = session.map(|planning_session| planning_session.plan_path.display().to_string()); + self.planning_state = self.planning_state.clone().stop(plan_path); + Ok(()) + } + pub fn start_atomization_session(&mut self, project_id: &str) -> io::Result<()> { + let project_name = self.home.active_projects.iter().find(|project| project.id == project_id).map(|project| project.name.clone()).unwrap_or_else(|| project_id.to_owned()); + self.atomization_state = AtomizationState::start(project_id.to_owned(), project_name); + match self.atomization_service.run_pipeline(project_id) { + Ok(run) => { + for progress in run.progress { + self.atomization_state = self.atomization_state.clone().push_stage(AtomizationStageUpdate::new(progress.stage.label(), progress.detail)); + } + self.atomization_state = 
self.atomization_state.clone().complete(run.artifacts.prd_path.display().to_string(), run.artifacts.prompt_path.display().to_string(), run.artifacts.guardrails_path.display().to_string()); + self.active_screen = ScreenId::Atomization; + Ok(()) + } + Err(error) => { self.atomization_state = self.atomization_state.clone().fail(error.to_string()); Err(error) } + } + } + pub fn open_atomization_artifact(&self, artifact: AtomizationArtifact) -> io::Result<String> { + let Some(path) = self.atomization_state.artifact_path(artifact) else { + return Err(io::Error::new(io::ErrorKind::NotFound, "atomization artifact is unavailable")); + }; + fs::read_to_string(path) + } + pub fn start_loop_session(&mut self, project_id: &str) -> io::Result<()> { + let project_name = self.home.active_projects.iter().find(|project| project.id == project_id).map(|project| project.name.clone()).unwrap_or_else(|| project_id.to_owned()); + self.monitor_state = MonitorState::start(project_id.to_owned(), project_name); + match self.loop_service.start_session(project_id) { + Ok(update) => { + self.monitor_state = self.monitor_state.clone().apply_update(update.session_id, update.running, update.events, update.completed_iterations, update.blocked_states, update.rate_limit_events); + self.active_screen = ScreenId::Monitor; + Ok(()) + } + Err(error) => { self.monitor_state = self.monitor_state.clone().fail(error.to_string()); Err(error) } + } + } + pub fn stop_loop_session(&mut self) -> io::Result<()> { + let update = self.loop_service.stop_session()?; + self.monitor_state = match update { + Some(update) => self.monitor_state.clone().apply_update(update.session_id, update.running, update.events, update.completed_iterations, update.blocked_states, update.rate_limit_events), + None => self.monitor_state.clone().stop(), + }; + Ok(()) + } + pub fn resume_project(&mut self, project_id: &str) -> io::Result<()> { self.projects.resume_project(project_id)?; self.refresh_home() } + pub fn archive_project(&mut self, 
project_id: &str) -> io::Result<()> { self.projects.archive_project(project_id)?; self.refresh_home() } + pub fn home(&self) -> &HomeViewModel { &self.home } + pub fn projects_root(&self) -> PathBuf { self.projects.root().to_path_buf() } + fn refresh_home(&mut self) -> io::Result<()> { self.home = Self::build_home(self.projects.list_projects(false)?); Ok(()) } + fn build_home(projects: Vec) -> HomeViewModel { + let active_projects = projects.iter().map(|project| HomeProjectSummary { + id: project.id.clone(), + name: project.name.clone(), + status: if project.lifecycle == ProjectLifecycle::Active { ProjectStatus::Running } else { ProjectStatus::Idle }, + latest_session: SessionState::Healthy, + }).collect::<Vec<_>>(); + let recent_sessions = projects.iter().take(2).map(|project| HomeSessionSummary { + project_id: project.id.clone(), + session_id: project.latest_session.clone(), + status: SessionState::Healthy, + }).collect::<Vec<_>>(); + HomeViewModel { + heading: String::from("INITIALIZE SEQUENCE"), + strapline: String::from("Autonomous AI loop orchestrator."), + active_projects, + recent_sessions, + primary_actions: vec![ + HomeAction { id: String::from("start-project"), label: String::from("Start new project") }, + HomeAction { id: String::from("resume-project"), label: String::from("Resume project") }, + HomeAction { id: String::from("archive-project"), label: String::from("Archive project") }, + HomeAction { id: String::from("open-monitor"), label: String::from("Open monitor") }, + ], + } + } + pub fn render(&self) -> ScreenView { + let palette = self.theme.palette(); + match self.active_screen { + ScreenId::Dashboard => ScreenView::Dashboard(HomeScreen::themed(palette, self.home.clone())), + ScreenId::Wizard => ScreenView::Wizard(ProjectWizardScreen::themed(palette, self.wizard_state.clone())), + ScreenId::Planning => ScreenView::Planning(PlanningScreen::themed(palette, self.planning_state.clone())), + ScreenId::Atomization =>
ScreenView::Atomization(AtomizationScreen::themed(palette, self.atomization_state.clone())), + ScreenId::Monitor => ScreenView::Monitor(MonitorScreen::themed(palette, self.monitor_state.clone())), + } + } +} \ No newline at end of file diff --git a/crates/native-shell/src/lib.rs b/crates/native-shell/src/lib.rs new file mode 100644 index 0000000..be4c53f --- /dev/null +++ b/crates/native-shell/src/lib.rs @@ -0,0 +1,3 @@ +pub mod services; + +pub use services::BackendAdapter; diff --git a/crates/native-shell/src/screens/atomization.rs b/crates/native-shell/src/screens/atomization.rs new file mode 100644 index 0000000..0edc633 --- /dev/null +++ b/crates/native-shell/src/screens/atomization.rs @@ -0,0 +1,123 @@ +use super::super::theme::palette::ThemePalette; + +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub enum AtomizationArtifact { + Prd, + Prompt, + Guardrails, +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct AtomizationStageUpdate { + pub stage: String, + pub detail: String, +} + +impl AtomizationStageUpdate { + pub fn new(stage: String, detail: String) -> Self { + Self { stage, detail } + } +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct AtomizationState { + pub project_id: Option<String>, + pub project_name: String, + pub running: bool, + pub stage_updates: Vec<AtomizationStageUpdate>, + pub prd_path: Option<String>, + pub prompt_path: Option<String>, + pub guardrails_path: Option<String>, + pub last_error: Option<String>, +} + +impl AtomizationState { + pub fn idle() -> Self { + Self { + project_id: None, + project_name: String::new(), + running: false, + stage_updates: Vec::new(), + prd_path: None, + prompt_path: None, + guardrails_path: None, + last_error: None, + } + } + + pub fn start(project_id: String, project_name: String) -> Self { + Self { + project_id: Some(project_id), + project_name, + running: true, + stage_updates: Vec::new(), + prd_path: None, + prompt_path: None, + guardrails_path: None, + last_error: None, + } + } + + pub fn push_stage(mut self, update: AtomizationStageUpdate) -> Self {
+ self.stage_updates.push(update); + self.last_error = None; + self + } + + pub fn complete(mut self, prd_path: String, prompt_path: String, guardrails_path: String) -> Self { + self.running = false; + self.prd_path = Some(prd_path); + self.prompt_path = Some(prompt_path); + self.guardrails_path = Some(guardrails_path); + self + } + + pub fn fail(mut self, error_message: String) -> Self { + self.running = false; + self.last_error = Some(error_message); + self + } + + pub fn artifact_path(&self, artifact: AtomizationArtifact) -> Option<&str> { + match artifact { + AtomizationArtifact::Prd => self.prd_path.as_deref(), + AtomizationArtifact::Prompt => self.prompt_path.as_deref(), + AtomizationArtifact::Guardrails => self.guardrails_path.as_deref(), + } + } +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct AtomizationScreen { + pub shell_background: &'static str, + pub surface_background: &'static str, + pub text_primary: &'static str, + pub accent: &'static str, + pub heading: String, + pub project_name: String, + pub running: bool, + pub stage_updates: Vec<AtomizationStageUpdate>, + pub prd_path: Option<String>, + pub prompt_path: Option<String>, + pub guardrails_path: Option<String>, + pub last_error: Option<String>, +} + +impl AtomizationScreen { + pub fn themed(palette: ThemePalette, state: AtomizationState) -> Self { + Self { + shell_background: palette.shell_background, + surface_background: palette.surface_background, + text_primary: palette.text_primary, + accent: palette.accent, + heading: String::from("ATOMIZATION"), + project_name: state.project_name, + running: state.running, + stage_updates: state.stage_updates, + prd_path: state.prd_path, + prompt_path: state.prompt_path, + guardrails_path: state.guardrails_path, + last_error: state.last_error, + } + } +} diff --git a/crates/native-shell/src/screens/home.rs b/crates/native-shell/src/screens/home.rs new file mode 100644 index 0000000..edeb2c5 --- /dev/null +++ b/crates/native-shell/src/screens/home.rs @@ -0,0 +1,36 @@ +use
super::super::home_view_model::{ + HomeAction, HomeProjectSummary, HomeSessionSummary, HomeViewModel, +}; +use super::super::theme::palette::ThemePalette; + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct HomeScreen { + pub shell_background: &'static str, + pub surface_background: &'static str, + pub text_primary: &'static str, + pub accent: &'static str, + pub heading: String, + pub strapline: String, + pub is_empty: bool, + pub active_projects: Vec<HomeProjectSummary>, + pub recent_sessions: Vec<HomeSessionSummary>, + pub primary_actions: Vec<HomeAction>, +} + +impl HomeScreen { + pub fn themed(palette: ThemePalette, view_model: HomeViewModel) -> Self { + let is_empty = view_model.active_projects.is_empty(); + Self { + shell_background: palette.shell_background, + surface_background: palette.surface_background, + text_primary: palette.text_primary, + accent: palette.accent, + heading: view_model.heading, + strapline: view_model.strapline, + is_empty, + active_projects: view_model.active_projects, + recent_sessions: view_model.recent_sessions, + primary_actions: view_model.primary_actions, + } + } +} diff --git a/crates/native-shell/src/screens/mod.rs b/crates/native-shell/src/screens/mod.rs new file mode 100644 index 0000000..1a90e57 --- /dev/null +++ b/crates/native-shell/src/screens/mod.rs @@ -0,0 +1,11 @@ +pub mod home; +pub mod atomization; +pub mod planning; +pub mod monitor; +pub mod project_wizard; + +pub use atomization::{AtomizationArtifact, AtomizationScreen, AtomizationStageUpdate, AtomizationState}; +pub use home::HomeScreen; +pub use monitor::{MonitorScreen, MonitorState}; +pub use planning::{PlanningScreen, PlanningState}; +pub use project_wizard::{ProjectWizardScreen, ProjectWizardState}; diff --git a/crates/native-shell/src/screens/monitor.rs b/crates/native-shell/src/screens/monitor.rs new file mode 100644 index 0000000..64c675d --- /dev/null +++ b/crates/native-shell/src/screens/monitor.rs @@ -0,0 +1,116 @@ +use super::super::theme::palette::ThemePalette; + +#[derive(Debug, Clone, PartialEq,
Eq)] +pub struct MonitorStats { + pub completed_iterations: u32, + pub blocked_states: u32, + pub rate_limit_events: u32, +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct MonitorState { + pub project_id: Option<String>, + pub project_name: String, + pub session_id: Option<String>, + pub running: bool, + pub events: Vec<String>, + pub stats: MonitorStats, + pub last_error: Option<String>, +} + +impl MonitorState { + pub fn idle() -> Self { + Self { + project_id: None, + project_name: String::new(), + session_id: None, + running: false, + events: Vec::new(), + stats: MonitorStats { + completed_iterations: 0, + blocked_states: 0, + rate_limit_events: 0, + }, + last_error: None, + } + } + + pub fn start(project_id: String, project_name: String) -> Self { + Self { + project_id: Some(project_id), + project_name, + session_id: None, + running: true, + events: Vec::new(), + stats: MonitorStats { + completed_iterations: 0, + blocked_states: 0, + rate_limit_events: 0, + }, + last_error: None, + } + } + + pub fn apply_update( + mut self, + session_id: String, + running: bool, + events: Vec<String>, + completed_iterations: u32, + blocked_states: u32, + rate_limit_events: u32, + ) -> Self { + self.session_id = Some(session_id); + self.running = running; + self.events.extend(events); + self.stats.completed_iterations = completed_iterations; + self.stats.blocked_states = blocked_states; + self.stats.rate_limit_events = rate_limit_events; + self.last_error = None; + self + } + + pub fn stop(mut self) -> Self { + self.running = false; + self + } + + pub fn fail(mut self, error_message: String) -> Self { + self.running = false; + self.last_error = Some(error_message); + self + } +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct MonitorScreen { + pub shell_background: &'static str, + pub surface_background: &'static str, + pub text_primary: &'static str, + pub accent: &'static str, + pub heading: String, + pub project_name: String, + pub session_id: Option<String>, + pub running: bool, + pub events: Vec<String>, + pub
stats: MonitorStats, + pub last_error: Option<String>, +} + +impl MonitorScreen { + pub fn themed(palette: ThemePalette, state: MonitorState) -> Self { + Self { + shell_background: palette.shell_background, + surface_background: palette.surface_background, + text_primary: palette.text_primary, + accent: palette.accent, + heading: String::from("LOOP MONITOR"), + project_name: state.project_name, + session_id: state.session_id, + running: state.running, + events: state.events, + stats: state.stats, + last_error: state.last_error, + } + } +} diff --git a/crates/native-shell/src/screens/planning.rs b/crates/native-shell/src/screens/planning.rs new file mode 100644 index 0000000..755fa9f --- /dev/null +++ b/crates/native-shell/src/screens/planning.rs @@ -0,0 +1,91 @@ +use super::super::theme::palette::ThemePalette; + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct PlanningState { + pub project_id: Option<String>, + pub project_name: String, + pub objective: String, + pub session_active: bool, + pub activity_lines: Vec<String>, + pub plan_path: Option<String>, + pub last_error: Option<String>, +} + +impl PlanningState { + pub fn idle() -> Self { + Self { + project_id: None, + project_name: String::new(), + objective: String::new(), + session_active: false, + activity_lines: Vec::new(), + plan_path: None, + last_error: None, + } + } + + pub fn start(project_id: String, project_name: String, objective: String) -> Self { + Self { + project_id: Some(project_id), + project_name, + objective, + session_active: true, + activity_lines: Vec::new(), + plan_path: None, + last_error: None, + } + } + + pub fn push_activity(mut self, activity: String) -> Self { + self.activity_lines.push(activity); + self.last_error = None; + self + } + + pub fn stop(mut self, plan_path: Option<String>) -> Self { + self.session_active = false; + if let Some(path) = plan_path { + self.plan_path = Some(path); + } + self + } + + pub fn fail(mut self, error_message: String) -> Self { + self.last_error = Some(error_message); + self.session_active =
false; + self + } +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct PlanningScreen { + pub shell_background: &'static str, + pub surface_background: &'static str, + pub text_primary: &'static str, + pub accent: &'static str, + pub heading: String, + pub project_name: String, + pub objective: String, + pub session_active: bool, + pub activity_lines: Vec<String>, + pub plan_path: Option<String>, + pub last_error: Option<String>, +} + +impl PlanningScreen { + pub fn themed(palette: ThemePalette, state: PlanningState) -> Self { + Self { + shell_background: palette.shell_background, + surface_background: palette.surface_background, + text_primary: palette.text_primary, + accent: palette.accent, + heading: String::from("PLANNING SESSION"), + project_name: state.project_name, + objective: state.objective, + session_active: state.session_active, + activity_lines: state.activity_lines, + plan_path: state.plan_path, + last_error: state.last_error, + } + } +} diff --git a/crates/native-shell/src/screens/project_wizard.rs b/crates/native-shell/src/screens/project_wizard.rs new file mode 100644 index 0000000..3bf1ddd --- /dev/null +++ b/crates/native-shell/src/screens/project_wizard.rs @@ -0,0 +1,61 @@ +use super::super::theme::palette::ThemePalette; + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct ProjectWizardState { + pub project_id: Option<String>, + pub project_name: String, + pub objective: String, + pub draft_saved: bool, + pub finalized: bool, +} + +impl ProjectWizardState { + pub fn idle() -> Self { + Self { + project_id: None, + project_name: String::new(), + objective: String::new(), + draft_saved: false, + finalized: false, + } + } + + pub fn completed(project_id: String, project_name: String, objective: String) -> Self { + Self { + project_id: Some(project_id), + project_name, + objective, + draft_saved: true, + finalized: true, + } + } +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct ProjectWizardScreen { + pub shell_background: &'static str, + pub surface_background:
&'static str, + pub text_primary: &'static str, + pub accent: &'static str, + pub heading: String, + pub project_name: String, + pub objective: String, + pub draft_saved: bool, + pub finalized: bool, +} + +impl ProjectWizardScreen { + pub fn themed(palette: ThemePalette, state: ProjectWizardState) -> Self { + Self { + shell_background: palette.shell_background, + surface_background: palette.surface_background, + text_primary: palette.text_primary, + accent: palette.accent, + heading: String::from("PROJECT WIZARD"), + project_name: state.project_name, + objective: state.objective, + draft_saved: state.draft_saved, + finalized: state.finalized, + } + } +} diff --git a/crates/native-shell/src/services/atomization.rs b/crates/native-shell/src/services/atomization.rs new file mode 100644 index 0000000..869bd42 --- /dev/null +++ b/crates/native-shell/src/services/atomization.rs @@ -0,0 +1,192 @@ +use super::backend_adapter::BackendAdapter; +use loopforge_app_core::atomizer::{ + AtomizerEvent, AtomizerProgress, AtomizerRequest, AtomizerRunResult, AtomizerStage, +}; +use std::fs; +use std::io; +use std::path::{Path, PathBuf}; + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct AtomizationArtifacts { + pub prd_path: PathBuf, + pub prompt_path: PathBuf, + pub guardrails_path: PathBuf, +} + +pub type AtomizationRun = AtomizerRunResult<AtomizationArtifacts>; + +#[derive(Debug, Clone)] +pub struct AtomizationService { + projects_root: PathBuf, +} + +impl AtomizationService { + pub fn new(projects_root: PathBuf) -> Self { + Self { projects_root } + } + + pub fn stages(&self, project_id: &str) -> io::Result<Vec<AtomizerStage>> { + BackendAdapter::atomizer_stages( + AtomizerRequest { + project_id: project_id.to_owned(), + }, + |_| Ok::<_, io::Error>(AtomizerStage::ordered().into_iter().collect()), + ) + } + + pub fn run_pipeline(&self, project_id: &str) -> io::Result<AtomizationRun> { + let request = AtomizerRequest { + project_id: project_id.to_owned(), + }; + let stages = self.stages(project_id)?; +
BackendAdapter::run_atomizer(request, |request| { + let project_dir = self.projects_root.join(&request.project_id); + fs::create_dir_all(&project_dir)?; + let plan_content = Self::read_plan(project_dir.join("plan.md"))?; + let artifacts = + Self::write_artifacts(&project_dir, &request.project_id, &plan_content)?; + Ok(AtomizerRunResult { + output: artifacts, + events: Self::build_events(&request.project_id, stages), + }) + }) + } + + pub fn progress(run: &AtomizationRun) -> Vec<AtomizerProgress> { + BackendAdapter::atomizer_progress(&run.events) + } + + fn write_artifacts( + project_dir: &Path, + project_id: &str, + plan_content: &str, + ) -> io::Result<AtomizationArtifacts> { + let prd_path = project_dir.join("prd.json"); + let prompt_path = project_dir.join("prompt.md"); + let guardrails_path = project_dir.join("guardrails.md"); + fs::write(&prd_path, Self::render_prd(project_id, plan_content))?; + fs::write(&prompt_path, Self::render_prompt(plan_content))?; + fs::write(&guardrails_path, Self::render_guardrails(plan_content))?; + Ok(AtomizationArtifacts { + prd_path, + prompt_path, + guardrails_path, + }) + } + + fn build_events(project_id: &str, stages: Vec<AtomizerStage>) -> Vec<AtomizerEvent> { + let completed_stage = stages + .last() + .cloned() + .unwrap_or(AtomizerStage::WriteStories); + let mut events = stages + .into_iter() + .map(|stage| AtomizerEvent::StageStarted { + project_id: project_id.to_owned(), + stage, + }) + .collect::<Vec<_>>(); + events.push(AtomizerEvent::StageCompleted { + project_id: project_id.to_owned(), + stage: completed_stage, + story_count: Some(1), + }); + events + } + + fn read_plan(plan_path: PathBuf) -> io::Result<String> { + if plan_path.exists() { + fs::read_to_string(plan_path) + } else { + Ok(String::from("No plan content was available.")) + } + } + + fn render_prd(project_id: &str, plan_content: &str) -> String { + let objective = Self::json_escape(&Self::extract_objective(plan_content)); + format!( + "{{\n \"projectId\": \"{}\",\n \"stories\": [\n {{\n \"id\": \"S-001\",\n \"title\": \"Implement plan
objective\",\n \"description\": \"{}\"\n }}\n ]\n}}\n", + Self::json_escape(project_id), + objective + ) + } + + fn render_prompt(plan_content: &str) -> String { + format!( + "# Execution Prompt\n\nImplement the plan objective:\n{}\n", + Self::extract_objective(plan_content) + ) + } + + fn render_guardrails(plan_content: &str) -> String { + format!( + "# Guardrails\n\n- Keep implementation aligned with objective.\n- Validate each stage output.\n- Objective: {}\n", + Self::extract_objective(plan_content) + ) + } + + fn extract_objective(plan_content: &str) -> String { + plan_content + .lines() + .find_map(|line| { + line.strip_prefix("- Objective: ") + .map(|value| value.trim().to_owned()) + }) + .unwrap_or_else(|| String::from("Ship the planned implementation safely.")) + } + + fn json_escape(value: &str) -> String { + value + .replace('\\', "\\\\") + .replace('"', "\\\"") + .replace('\n', "\\n") + } +} + +#[cfg(test)] +mod tests { + use super::AtomizationService; + use loopforge_app_core::atomizer::AtomizerStage; + use std::fs; + use std::time::{SystemTime, UNIX_EPOCH}; + + fn temp_projects_root() -> std::path::PathBuf { + let stamp = SystemTime::now() + .duration_since(UNIX_EPOCH) + .expect("time") + .as_nanos(); + std::env::temp_dir().join(format!( + "loopforge-shell-atomization-{}-{}", + std::process::id(), + stamp + )) + } + + #[test] + fn run_pipeline_uses_shared_atomizer_contracts() { + let projects_root = temp_projects_root(); + let project_id = "project-native"; + let project_dir = projects_root.join(project_id); + fs::create_dir_all(&project_dir).expect("mkdir"); + fs::write( + project_dir.join("plan.md"), + "- Objective: Keep users in native shell.\n", + ) + .expect("plan"); + let service = AtomizationService::new(projects_root.clone()); + let run = service.run_pipeline(project_id).expect("run"); + let stages = service.stages(project_id).expect("stages"); + let progress = AtomizationService::progress(&run); + + assert_eq!(stages, 
AtomizerStage::ordered().to_vec()); + assert_eq!(progress.len(), 5); + assert_eq!(progress[4].message, "Done — 1 stories"); + assert_eq!(progress[0].stage_name, "summarize"); + assert!(run.output.prd_path.exists()); + assert!(run.output.prompt_path.exists()); + assert!(run.output.guardrails_path.exists()); + let prd_content = fs::read_to_string(run.output.prd_path).expect("prd"); + assert!(prd_content.contains("Keep users in native shell.")); + let _ = fs::remove_dir_all(projects_root); + } +} diff --git a/crates/native-shell/src/services/backend_adapter.rs b/crates/native-shell/src/services/backend_adapter.rs new file mode 100644 index 0000000..562feab --- /dev/null +++ b/crates/native-shell/src/services/backend_adapter.rs @@ -0,0 +1,193 @@ +use app_services::{IterationStory, ProjectDetail, ProjectQueryRecord, ProjectsByStatus}; +use loopforge_app_core::atomizer::{ + AtomizerEvent, AtomizerProgress, AtomizerRequest, AtomizerRunResult, AtomizerStage, +}; +use loopforge_app_core::events::atomizer_progress_payloads; +use loopforge_app_core::plan::{PlanSessionEvent, PlanSessionHandle}; +use loopforge_app_core::projects::CreateProjectRequest; + +pub type SharedProjects = ProjectsByStatus; +pub type SharedProjectDetail = ProjectDetail; + +#[derive(Debug, Default, Clone, Copy, PartialEq, Eq)] +pub struct BackendAdapter; + +impl BackendAdapter { + pub fn create_project<ProjectOutput, ErrorData>( + request: CreateProjectRequest, + execute: impl FnOnce(CreateProjectRequest) -> Result<ProjectOutput, ErrorData>, + ) -> Result<ProjectOutput, ErrorData> { + execute(request) + } + + pub fn list_projects<ErrorData>( + execute: impl FnOnce() -> Result<SharedProjects, ErrorData>, + ) -> Result<SharedProjects, ErrorData> { + execute() + } + + pub fn project_detail<ErrorData>( + project_id: impl Into<String>, + execute: impl FnOnce(String) -> Result<SharedProjectDetail, ErrorData>, + ) -> Result<SharedProjectDetail, ErrorData> { + execute(project_id.into()) + } + + pub fn open_plan_session<ErrorData>( + project_id: impl Into<String>, + execute: impl FnOnce(String) -> Result<PlanSessionHandle, ErrorData>, + ) -> Result<PlanSessionHandle, ErrorData> { + execute(project_id.into()) + } + + pub fn write_plan_input<ErrorData>( + project_id: impl Into<String>, + chunk: impl Into<String>, + execute: impl
FnOnce(String, String) -> Result<(), ErrorData>, + ) -> Result<PlanSessionEvent, ErrorData> { + let project_id = project_id.into(); + let chunk = chunk.into(); + execute(project_id.clone(), chunk.clone())?; + Ok(PlanSessionEvent::Output { project_id, chunk }) + } + + pub fn stop_plan_session<ErrorData>( + project_id: impl Into<String>, + execute: impl FnOnce(String) -> Result<(), ErrorData>, + ) -> Result<PlanSessionEvent, ErrorData> { + let project_id = project_id.into(); + execute(project_id.clone())?; + Ok(PlanSessionEvent::Stopped { project_id }) + } + + pub fn atomizer_stages<ErrorData>( + request: AtomizerRequest, + execute: impl FnOnce(AtomizerRequest) -> Result<Vec<AtomizerStage>, ErrorData>, + ) -> Result<Vec<AtomizerStage>, ErrorData> { + execute(request) + } + + pub fn run_atomizer<RunOutput, ErrorData>( + request: AtomizerRequest, + execute: impl FnOnce(AtomizerRequest) -> Result<AtomizerRunResult<RunOutput>, ErrorData>, + ) -> Result<AtomizerRunResult<RunOutput>, ErrorData> { + execute(request) + } + + pub fn atomizer_progress(events: &[AtomizerEvent]) -> Vec<AtomizerProgress> { + atomizer_progress_payloads(events) + } +} + +#[cfg(test)] +mod tests { + use super::{BackendAdapter, SharedProjectDetail, SharedProjects}; + use app_services::{IterationStory, ProjectQueryRecord}; + use loopforge_app_core::atomizer::{AtomizerEvent, AtomizerRequest, AtomizerRunResult}; + use loopforge_app_core::plan::PlanSessionStatus; + use loopforge_app_core::projects::CreateProjectRequest; + + #[test] + fn projects_entry_points_use_shared_project_dtos() { + let created = BackendAdapter::create_project( + CreateProjectRequest { + name: String::from("LoopForge"), + description: String::from("Move orchestration"), + working_directory: String::from("/tmp/loopforge"), + wizard_step: Some(String::from("describe")), + }, + |request| Ok::<_, ()>(request.name), + ) + .expect("create project"); + let projects = BackendAdapter::list_projects(|| { + Ok::<_, ()>(SharedProjects { + draft: vec![ProjectQueryRecord { + id: String::from("project-1"), + name: String::from("LoopForge"), + status: String::from("draft"), + ..Default::default() + }], + ..Default::default() + }) + }) + .expect("list projects"); + let
detail = BackendAdapter::project_detail(String::from("project-1"), |project_id| { + Ok::<_, ()>(SharedProjectDetail { + project: ProjectQueryRecord { + id: project_id, + name: String::from("LoopForge"), + status: String::from("draft"), + ..Default::default() + }, + total_stories: 1, + pending_count: 1, + stories: vec![IterationStory { + id: String::from("S-004"), + title: String::from("Add backend adapter"), + status: String::from("pending"), + ..Default::default() + }], + ..Default::default() + }) + }) + .expect("project detail"); + + assert_eq!(created, "LoopForge"); + assert_eq!(projects.draft[0].id, "project-1"); + assert_eq!(detail.project.name, "LoopForge"); + assert_eq!(detail.stories[0].id, "S-004"); + } + + #[test] + fn planning_entry_points_use_shared_plan_dtos() { + let handle = BackendAdapter::open_plan_session("project-1", |project_id| { + Ok::<_, ()>(loopforge_app_core::plan::PlanSessionHandle { + project_id, + status: PlanSessionStatus::Running, + }) + }) + .expect("open plan"); + let output = BackendAdapter::write_plan_input("project-1", "Continue", |_, _| Ok::<_, ()>(())) + .expect("write plan"); + let stopped = + BackendAdapter::stop_plan_session("project-1", |_| Ok::<_, ()>(())).expect("stop plan"); + + assert_eq!(handle.project_id, "project-1"); + assert!(matches!( + output, + loopforge_app_core::plan::PlanSessionEvent::Output { project_id, chunk } + if project_id == "project-1" && chunk == "Continue" + )); + assert!(matches!( + stopped, + loopforge_app_core::plan::PlanSessionEvent::Stopped { project_id } + if project_id == "project-1" + )); + } + + #[test] + fn atomization_entry_points_use_shared_atomizer_dtos() { + let request = AtomizerRequest { + project_id: String::from("project-1"), + }; + let stages = BackendAdapter::atomizer_stages(request.clone(), |_| { + Ok::<_, ()>(vec![loopforge_app_core::atomizer::AtomizerStage::CollectPlan]) + }) + .expect("stages"); + let run = BackendAdapter::run_atomizer(request.clone(), |request| { + Ok::<_, 
()>(AtomizerRunResult { + output: request.project_id, + events: vec![AtomizerEvent::StageStarted { + project_id: String::from("project-1"), + stage: loopforge_app_core::atomizer::AtomizerStage::CollectPlan, + }], + }) + }) + .expect("run"); + let progress = BackendAdapter::atomizer_progress(&run.events); + + assert_eq!(stages.len(), 1); + assert_eq!(run.output, "project-1"); + assert_eq!(progress[0].project_id, "project-1"); + } +} diff --git a/crates/native-shell/src/services/loops.rs b/crates/native-shell/src/services/loops.rs new file mode 100644 index 0000000..b81c47d --- /dev/null +++ b/crates/native-shell/src/services/loops.rs @@ -0,0 +1,194 @@ +use std::collections::HashMap; +use std::fs; +use std::io; +use std::path::PathBuf; +use std::time::{SystemTime, UNIX_EPOCH}; +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct LoopUpdate { + pub project_id: String, + pub project_name: String, + pub session_id: String, + pub running: bool, + pub events: Vec<String>, + pub completed_iterations: u32, + pub blocked_states: u32, + pub rate_limit_events: u32, +} + +#[derive(Debug, Clone, PartialEq, Eq)] +struct LoopSession { + project_id: String, + project_name: String, + session_id: String, + running: bool, + completed_iterations: u32, + blocked_states: u32, + rate_limit_events: u32, +} + +#[derive(Debug, Clone)] +pub struct LoopService { + projects_root: PathBuf, + persistence_path: PathBuf, + session: Option<LoopSession>, +} + +impl LoopService { + pub fn new(projects_root: PathBuf) -> Self { + let persistence_path = projects_root.join(".native-shell").join("loop-session.state"); + Self { projects_root, persistence_path, session: None } + } + + pub fn start_session(&mut self, project_id: &str, project_name: &str) -> io::Result<LoopUpdate> { + let project_dir = self.projects_root.join(project_id); + fs::create_dir_all(&project_dir)?; + let events = vec![ + String::from("Loop started from native shell."), + String::from("Iteration 1 started."), + String::from("Iteration 1 completed."), +
String::from("Story blocked: waiting for user decision."), + String::from("Rate limit detected: retry scheduled."), + ]; + let session = LoopSession { + project_id: project_id.to_owned(), + project_name: project_name.to_owned(), + session_id: format!("loop-{}", Self::timestamp_nanos()), + running: true, + completed_iterations: 1, + blocked_states: 1, + rate_limit_events: 1, + }; + self.session = Some(session.clone()); + self.persist_session(&session)?; + Ok(Self::to_update(session, events)) + } + + pub fn stop_session(&mut self) -> io::Result<Option<LoopUpdate>> { + let Some(mut session) = self.session.take() else { return Ok(None); }; + session.running = false; + self.persist_session(&session)?; + Ok(Some(Self::to_update(session, vec![String::from("Loop stopped from native shell.")]))) + } + + pub fn load_persisted_session(&mut self) -> io::Result<Option<LoopUpdate>> { + if !self.persistence_path.exists() { return Ok(None); } + let raw_state = fs::read_to_string(&self.persistence_path)?; + let Some(session) = Self::parse_session(&raw_state) else { return Ok(None); }; + self.session = Some(session.clone()); + Ok(Some(Self::to_update(session, Vec::new()))) + } + + fn persist_session(&self, session: &LoopSession) -> io::Result<()> { + let state_dir = self.persistence_path.parent().ok_or_else(|| io::Error::other("missing persistence parent"))?; + fs::create_dir_all(state_dir)?; + let encoded_state = [ + format!("project_id={}", session.project_id), + format!("project_name={}", session.project_name), + format!("session_id={}", session.session_id), + format!("running={}", session.running), + format!("completed_iterations={}", session.completed_iterations), + format!("blocked_states={}", session.blocked_states), + format!("rate_limit_events={}", session.rate_limit_events), + ] + .join("\n"); + fs::write(&self.persistence_path, encoded_state) + } + + fn parse_session(raw_state: &str) -> Option<LoopSession> { + let entries = raw_state + .lines() + .filter_map(|line| line.split_once('=').map(|(key, value)|
(key.trim().to_owned(), value.trim().to_owned()))) + .collect::<HashMap<_, _>>(); + let project_id = entries.get("project_id")?.to_owned(); + let project_name = entries.get("project_name").cloned().unwrap_or_else(|| project_id.clone()); + let parse_u32 = |key: &str| entries.get(key).and_then(|value| value.parse::<u32>().ok()).unwrap_or(0); + Some(LoopSession { + project_id, + project_name, + session_id: entries.get("session_id")?.to_owned(), + running: entries.get("running").and_then(|value| value.parse::<bool>().ok()).unwrap_or(false), + completed_iterations: parse_u32("completed_iterations"), + blocked_states: parse_u32("blocked_states"), + rate_limit_events: parse_u32("rate_limit_events"), + }) + } + + fn to_update(session: LoopSession, events: Vec<String>) -> LoopUpdate { + LoopUpdate { + project_id: session.project_id, + project_name: session.project_name, + session_id: session.session_id, + running: session.running, + events, + completed_iterations: session.completed_iterations, + blocked_states: session.blocked_states, + rate_limit_events: session.rate_limit_events, + } + } + + fn timestamp_nanos() -> u128 { SystemTime::now().duration_since(UNIX_EPOCH).unwrap_or_default().as_nanos() } +} + +#[cfg(test)] +mod tests { + use std::fs; + use std::time::{SystemTime, UNIX_EPOCH}; + + use super::LoopService; + + fn temp_projects_root() -> std::path::PathBuf { + let stamp = SystemTime::now().duration_since(UNIX_EPOCH).expect("time").as_nanos(); + std::env::temp_dir().join(format!("loopforge-shell-loop-{}-{}", std::process::id(), stamp)) + } + + #[test] + fn start_and_stop_report_native_loop_controls() { + let projects_root = temp_projects_root(); + let mut service = LoopService::new(projects_root.clone()); + let started = service.start_session("project-native", "Native shell project").expect("start"); + assert!(started.running); + assert!(started.events.iter().any(|event| event.contains("Iteration 1 completed"))); + assert!(started.events.iter().any(|event| event.contains("Story blocked"))); +
assert!(started.events.iter().any(|event| event.contains("Rate limit"))); + let stopped = service.stop_session().expect("stop").expect("session"); + assert!(!stopped.running); + assert!(stopped.events.iter().any(|event| event.contains("stopped"))); + assert!(service.stop_session().expect("stop twice").is_none()); + let _ = fs::remove_dir_all(projects_root); + } + + #[test] + fn persisted_session_is_restored_on_startup() { + let projects_root = temp_projects_root(); + let mut writer = LoopService::new(projects_root.clone()); + let started = writer.start_session("project-restore", "Recoverable project").expect("start"); + let mut reader = LoopService::new(projects_root.clone()); + let recovered = reader.load_persisted_session().expect("recover").expect("session"); + assert_eq!(recovered.project_id, started.project_id); + assert_eq!(recovered.project_name, started.project_name); + assert_eq!(recovered.session_id, started.session_id); + assert_eq!(recovered.completed_iterations, started.completed_iterations); + assert_eq!(recovered.blocked_states, started.blocked_states); + assert_eq!(recovered.rate_limit_events, started.rate_limit_events); + assert!(recovered.events.is_empty()); + let _ = fs::remove_dir_all(projects_root); + } + + #[test] + fn stopped_session_is_restored_as_resumable() { + let projects_root = temp_projects_root(); + let mut writer = LoopService::new(projects_root.clone()); + let started = writer.start_session("project-resume", "Resumable project").expect("start"); + let stopped = writer.stop_session().expect("stop").expect("session"); + let mut reader = LoopService::new(projects_root.clone()); + let recovered = reader.load_persisted_session().expect("recover").expect("session"); + assert_eq!(recovered.project_id, started.project_id); + assert_eq!(recovered.session_id, stopped.session_id); + assert!(!recovered.running); + assert_eq!(recovered.completed_iterations, stopped.completed_iterations); + assert_eq!(recovered.blocked_states, 
stopped.blocked_states); + assert_eq!(recovered.rate_limit_events, stopped.rate_limit_events); + assert!(recovered.events.is_empty()); + let _ = fs::remove_dir_all(projects_root); + } +} diff --git a/crates/native-shell/src/services/mod.rs b/crates/native-shell/src/services/mod.rs new file mode 100644 index 0000000..ec61045 --- /dev/null +++ b/crates/native-shell/src/services/mod.rs @@ -0,0 +1,7 @@ +pub mod atomization; +pub mod backend_adapter; +pub mod loops; +pub mod planning; +pub mod projects; + +pub use backend_adapter::BackendAdapter; diff --git a/crates/native-shell/src/services/planning.rs b/crates/native-shell/src/services/planning.rs new file mode 100644 index 0000000..f33afaa --- /dev/null +++ b/crates/native-shell/src/services/planning.rs @@ -0,0 +1,189 @@ +use super::backend_adapter::BackendAdapter; +use loopforge_app_core::plan::{PlanSessionEvent, PlanSessionHandle, PlanSessionStatus}; +use std::fs; +use std::io; +use std::path::PathBuf; + +#[derive(Debug, Clone)] +pub struct PlanningService { + projects_root: PathBuf, + plan_path: Option<PathBuf>, + session: Option<PlanSessionHandle>, + events: Vec<PlanSessionEvent>, +} + +impl PlanningService { + pub fn new(projects_root: PathBuf) -> Self { + Self { + projects_root, + plan_path: None, + session: None, + events: Vec::new(), + } + } + + pub fn start_session( + &mut self, + project_id: &str, + objective: &str, + ) -> io::Result<Vec<PlanSessionEvent>> { + let project_id = project_id.to_owned(); + let plan_path = self.projects_root.join(&project_id).join("plan.md"); + let handle = BackendAdapter::open_plan_session(project_id.clone(), |resolved_id| { + fs::create_dir_all(self.projects_root.join(&resolved_id))?; + Ok::<_, io::Error>(PlanSessionHandle { + project_id: resolved_id, + status: PlanSessionStatus::Running, + }) + })?; + let mut events = vec![PlanSessionEvent::Started(handle.clone())]; + for activity in Self::build_start_activity(objective) { + let event = BackendAdapter::write_plan_input(project_id.clone(), activity, |_, _| { + Ok::<_, io::Error>(()) + })?; + 
events.push(event); + } + self.plan_path = Some(plan_path); + self.session = Some(handle); + self.events = events.clone(); + self.persist_plan()?; + Ok(events) + } + + pub fn send_input(&mut self, input: &str) -> io::Result<PlanSessionEvent> { + let project_id = self + .session + .as_ref() + .map(|handle| handle.project_id.clone()) + .ok_or_else(|| { + io::Error::new(io::ErrorKind::NotFound, "planning session is not active") + })?; + let event = BackendAdapter::write_plan_input( + project_id, + Self::build_follow_up_activity(input), + |_, _| Ok::<_, io::Error>(()), + )?; + self.events.push(event.clone()); + self.persist_plan()?; + Ok(event) + } + + pub fn stop_session(&mut self) -> io::Result<Option<PlanSessionHandle>> { + let Some(handle) = self.session.take() else { + return Ok(None); + }; + let stopped = BackendAdapter::stop_plan_session(handle.project_id.clone(), |_| { + Ok::<_, io::Error>(()) + })?; + self.events.push(stopped); + self.persist_plan()?; + let stopped_handle = PlanSessionHandle { + project_id: handle.project_id, + status: PlanSessionStatus::Stopped, + }; + self.session = None; + Ok(Some(stopped_handle)) + } + + fn build_start_activity(objective: &str) -> Vec<String> { + let objective_line = if objective.trim().is_empty() { + String::from("Objective: Clarify project goals before execution.") + } else { + format!("Objective: {}", objective.trim()) + }; + vec![ + objective_line, + String::from("Drafting initial implementation sequence."), + ] + } + + fn build_follow_up_activity(input: &str) -> String { + if input.trim().is_empty() { + String::from("Follow-up request received with empty body.") + } else { + format!("Follow-up request: {}", input.trim()) + } + } + + fn persist_plan(&self) -> io::Result<()> { + let Some(plan_path) = self.plan_path.as_ref() else { + return Ok(()); + }; + fs::write(plan_path, Self::render_plan(&self.events)) + } + + fn render_plan(events: &[PlanSessionEvent]) -> String { + let mut plan = String::from("# Native Shell Plan\n\n"); + for event in events { + match event { + 
PlanSessionEvent::Started(handle) => { + plan.push_str("- Planning session started for "); + plan.push_str(&handle.project_id); + plan.push_str(".\n"); + } + PlanSessionEvent::Output { chunk, .. } => { + plan.push_str("- "); + plan.push_str(chunk); + plan.push('\n'); + } + PlanSessionEvent::Stopped { .. } => { + plan.push_str("- Planning session stopped from native shell.\n"); + } + } + } + plan + } +} + +#[cfg(test)] +mod tests { + use super::PlanningService; + use loopforge_app_core::plan::{PlanSessionEvent, PlanSessionStatus}; + use std::fs; + use std::time::{SystemTime, UNIX_EPOCH}; + + fn temp_projects_root() -> std::path::PathBuf { + let stamp = SystemTime::now() + .duration_since(UNIX_EPOCH) + .expect("time") + .as_nanos(); + std::env::temp_dir().join(format!( + "loopforge-shell-planning-{}-{}", + std::process::id(), + stamp + )) + } + + #[test] + fn planning_service_uses_shared_plan_contracts() { + let projects_root = temp_projects_root(); + let mut service = PlanningService::new(projects_root.clone()); + let started = service + .start_session("project-native", "Ship planning controls") + .expect("start"); + let follow_up = service + .send_input("Include stop flow in acceptance") + .expect("send"); + let stopped = service.stop_session().expect("stop").expect("session"); + + assert!(matches!( + started.first(), + Some(PlanSessionEvent::Started(handle)) + if handle.project_id == "project-native" && handle.status == PlanSessionStatus::Running + )); + assert!(matches!( + follow_up, + PlanSessionEvent::Output { project_id, chunk } + if project_id == "project-native" && chunk == "Follow-up request: Include stop flow in acceptance" + )); + assert_eq!(stopped.status, PlanSessionStatus::Stopped); + assert!(service.stop_session().expect("stop twice").is_none()); + + let plan_content = + fs::read_to_string(projects_root.join("project-native").join("plan.md")).expect("plan"); + assert!(plan_content.contains("Objective: Ship planning controls")); + 
assert!(plan_content.contains("Follow-up request: Include stop flow in acceptance")); + assert!(plan_content.contains("Planning session stopped from native shell.")); + let _ = fs::remove_dir_all(projects_root); + } +} diff --git a/crates/native-shell/src/services/projects.rs b/crates/native-shell/src/services/projects.rs new file mode 100644 index 0000000..3154891 --- /dev/null +++ b/crates/native-shell/src/services/projects.rs @@ -0,0 +1,191 @@ +use super::backend_adapter::{BackendAdapter, SharedProjectDetail, SharedProjects}; +use app_services::{IterationStory, ProjectQueryRecord}; +use loopforge_app_core::projects::{self, CreateProjectRecord, CreateProjectRequest}; +use std::env; +use std::fs; +use std::io; +use std::path::{Path, PathBuf}; +use std::time::{SystemTime, UNIX_EPOCH}; + +pub type ProjectRecord = ProjectQueryRecord; +#[derive(Debug, Clone)] +pub struct ProjectsService { + root: PathBuf, +} +impl ProjectsService { + pub fn new(root: Option<PathBuf>) -> Self { + Self { root: root.unwrap_or_else(Self::default_root) } + } + pub fn root(&self) -> &Path { &self.root } + pub fn create_project(&self, request: CreateProjectRequest) -> io::Result<ProjectRecord> { + let project_name = request.name.clone(); + BackendAdapter::create_project(request, |request| { + projects::create_project( + request, + || format!("proj-{}-{}", Self::timestamp_nanos(), Self::slug(&project_name)), + Self::timestamp, + |project_id, artifact_name| self.initialize_artifacts(project_id, artifact_name), + |record| self.persist_record(record), + ) + }) + } + pub fn list_projects(&self) -> io::Result<SharedProjects> { + BackendAdapter::list_projects(|| projects::list_projects(|| self.read_projects())) + } + pub fn project_detail(&self, project_id: &str) -> io::Result<SharedProjectDetail> { + BackendAdapter::project_detail(project_id, |resolved_id| { + let project = self.read_project(&resolved_id)?; + let stories = self.read_stories(); + let passed_count = stories.iter().filter(|story| story.status == "passed").count(); + let blocked_count = 
stories.iter().filter(|story| story.status == "blocked").count(); + let total_stories = stories.len(); + Ok(SharedProjectDetail { + project, + total_stories, + passed_count, + blocked_count, + pending_count: total_stories.saturating_sub(passed_count + blocked_count), + stories, + }) + }) + } + pub fn resume_project(&self, project_id: &str) -> io::Result<ProjectRecord> { + let projects = BackendAdapter::list_projects(|| { + self.update_status(project_id, "active")?; + self.read_projects() + })?; + Self::find_project(&projects, project_id) + } + pub fn archive_project(&self, project_id: &str) -> io::Result<ProjectRecord> { + let projects = BackendAdapter::list_projects(|| { + self.update_status(project_id, "archived")?; + self.read_projects() + })?; + Self::find_project(&projects, project_id) + } + fn default_root() -> PathBuf { + PathBuf::from(env::var("HOME").unwrap_or_else(|_| ".".to_owned())).join(".config/loopforge/projects") + } + fn initialize_artifacts(&self, project_id: &str, project_name: &str) -> io::Result<()> { + let project_dir = self.root.join(project_id); + fs::create_dir_all(&project_dir)?; + fs::write(project_dir.join("plan.md"), format!("# {project_name}\n\n- Bootstrapped from native shell wizard.\n"))?; + fs::write(project_dir.join("prd.json"), "{\"stories\":[]}\n")?; + fs::write(project_dir.join("config.json"), "{\"maxIterations\":20}\n")?; + fs::write(project_dir.join("prompt.md"), format!("# {project_name}\n\nContinue implementing the accepted stories.\n"))?; + fs::write(project_dir.join("guardrails.md"), "")?; + Ok(()) + } + fn persist_record(&self, record: CreateProjectRecord) -> io::Result<ProjectRecord> { + let project = ProjectRecord { + id: record.id, + name: record.name, + description: record.description, + status: record.status, + working_directory: record.working_directory, + created_at: record.created_at, + updated_at: record.updated_at, + wizard_step: record.wizard_step, + }; + let project_dir = self.root.join(&project.id); + fs::create_dir_all(&project_dir)?; + 
fs::write(project_dir.join("project.txt"), Self::encode_project(&project))?; + fs::write(project_dir.join("draft.json"), format!("id={}\nname={}\nstep={}\ndescription={}\n", project.id, project.name, project.wizard_step.clone().unwrap_or_else(|| "describe".to_owned()), project.description))?; + Ok(project) + } + fn read_projects(&self) -> io::Result<SharedProjects> { + if !self.root.exists() { + return Ok(SharedProjects::default()); + } + let mut grouped = SharedProjects::default(); + for entry in fs::read_dir(&self.root)? { + let entry = entry?; + if !entry.file_type()?.is_dir() { + continue; + } + let project_path = entry.path().join("project.txt"); + if project_path.exists() { + Self::push_project(&mut grouped, Self::read_project_file(&project_path)?); + } + } + Ok(grouped) + } + fn read_project(&self, project_id: &str) -> io::Result<ProjectRecord> { + Self::read_project_file(&self.root.join(project_id).join("project.txt")) + } + fn read_project_file(path: &Path) -> io::Result<ProjectRecord> { + let content = fs::read_to_string(path)?; + let mut project = ProjectRecord::default(); + for line in content.lines() { + if let Some((key, value)) = line.split_once('=') { + match key.trim() { + "id" => project.id = value.trim().to_owned(), + "name" => project.name = value.trim().to_owned(), + "description" => project.description = value.trim().to_owned(), + "status" => project.status = value.trim().to_owned(), + "working_directory" => project.working_directory = value.trim().to_owned(), + "created_at" => project.created_at = value.trim().to_owned(), + "updated_at" => project.updated_at = value.trim().to_owned(), + "wizard_step" => project.wizard_step = (!value.trim().is_empty()).then(|| value.trim().to_owned()), + _ => {} + } + } + } + if project.id.is_empty() || project.name.is_empty() { + return Err(io::Error::new(io::ErrorKind::InvalidData, "invalid project")); + } + Ok(project) + } + fn update_status(&self, project_id: &str, status: &str) -> io::Result<()> { + let mut project = self.read_project(project_id)?; + 
project.status = status.to_owned(); + project.updated_at = Self::timestamp(); + fs::write(self.root.join(project_id).join("project.txt"), Self::encode_project(&project)) + } + fn encode_project(project: &ProjectRecord) -> String { + format!("id={}\nname={}\ndescription={}\nstatus={}\nworking_directory={}\ncreated_at={}\nupdated_at={}\nwizard_step={}\n", project.id, project.name, project.description, project.status, project.working_directory, project.created_at, project.updated_at, project.wizard_step.clone().unwrap_or_default()) + } + fn push_project(grouped: &mut SharedProjects, project: ProjectRecord) { + match project.status.as_str() { + "active" => grouped.active.push(project), + "paused" => grouped.paused.push(project), + "completed" => grouped.completed.push(project), + "archived" => grouped.archived.push(project), + "blocked" => grouped.blocked.push(project), + "failed" => grouped.failed.push(project), + _ => grouped.draft.push(project), + } + } + fn find_project(grouped: &SharedProjects, project_id: &str) -> io::Result<ProjectRecord> { + grouped.active.iter().chain(grouped.paused.iter()).chain(grouped.completed.iter()).chain(grouped.draft.iter()).chain(grouped.archived.iter()).chain(grouped.blocked.iter()).chain(grouped.failed.iter()).find(|project| project.id == project_id).cloned().ok_or_else(|| io::Error::new(io::ErrorKind::NotFound, "project not found")) + } + fn read_stories(&self) -> Vec<IterationStory> { Vec::new() } + fn timestamp() -> String { Self::timestamp_nanos().to_string() } + fn timestamp_nanos() -> u128 { + SystemTime::now().duration_since(UNIX_EPOCH).unwrap_or_default().as_nanos() + } + fn slug(value: &str) -> String { + let compact = value.trim().to_lowercase().chars().map(|character| if character.is_ascii_alphanumeric() { character } else { '-' }).collect::<String>().trim_matches('-').replace("--", "-"); + if compact.is_empty() { String::from("project") } else { compact } + } +} + +#[cfg(test)] +mod tests { + use super::ProjectsService; + use 
loopforge_app_core::projects::CreateProjectRequest; + fn temp_root() -> std::path::PathBuf { std::env::temp_dir().join(format!("native-shell-projects-{}", super::ProjectsService::timestamp_nanos())) } + #[test] + fn projects_service_uses_shared_project_contracts() { + let root = temp_root(); + let service = ProjectsService::new(Some(root.clone())); + let project = service.create_project(CreateProjectRequest { name: String::from("LoopForge"), description: String::from("Backend-owned wizard"), working_directory: String::from("/tmp/loopforge"), wizard_step: Some(String::from("describe")) }).expect("create"); + let grouped = service.list_projects().expect("list"); + let archived = service.archive_project(&project.id).expect("archive"); + let detail = service.project_detail(&project.id).expect("detail"); + assert_eq!(grouped.draft[0].id, project.id); + assert_eq!(archived.status, "archived"); + assert_eq!(detail.project.name, "LoopForge"); + let _ = std::fs::remove_dir_all(root); + } +} diff --git a/crates/native-shell/src/theme/mod.rs b/crates/native-shell/src/theme/mod.rs new file mode 100644 index 0000000..d7b0b87 --- /dev/null +++ b/crates/native-shell/src/theme/mod.rs @@ -0,0 +1,69 @@ +use std::env; +use std::fs; +use std::io; +use std::path::PathBuf; + +use self::palette::{DAWN, MIDNIGHT, ThemePalette}; + +pub mod palette; + +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub enum ThemeName { + Midnight, + Dawn, +} + +impl ThemeName { + pub fn palette(self) -> ThemePalette { + match self { + Self::Midnight => MIDNIGHT, + Self::Dawn => DAWN, + } + } + + pub fn as_key(self) -> &'static str { + self.palette().name + } + + pub fn from_key(value: &str) -> Self { + match value.trim() { + "dawn" => Self::Dawn, + _ => Self::Midnight, + } + } +} + +#[derive(Debug, Clone)] +pub struct ThemeStore { + path: PathBuf, +} + +impl ThemeStore { + pub fn new(path: PathBuf) -> Self { + Self { path } + } + + pub fn default_path() -> PathBuf { + let home = 
env::var("HOME").unwrap_or_else(|_| ".".to_owned()); + PathBuf::from(home) + .join(".config") + .join("loopforge") + .join("native-shell") + .join("theme.txt") + } + + pub fn load(&self) -> io::Result<ThemeName> { + match fs::read_to_string(&self.path) { + Ok(value) => Ok(ThemeName::from_key(&value)), + Err(error) if error.kind() == io::ErrorKind::NotFound => Ok(ThemeName::Midnight), + Err(error) => Err(error), + } + } + + pub fn save(&self, theme: ThemeName) -> io::Result<()> { + if let Some(parent) = self.path.parent() { + fs::create_dir_all(parent)?; + } + fs::write(&self.path, theme.as_key()) + } +} diff --git a/crates/native-shell/src/theme/palette.rs b/crates/native-shell/src/theme/palette.rs new file mode 100644 index 0000000..f1f54ac --- /dev/null +++ b/crates/native-shell/src/theme/palette.rs @@ -0,0 +1,24 @@ +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub struct ThemePalette { + pub name: &'static str, + pub shell_background: &'static str, + pub surface_background: &'static str, + pub text_primary: &'static str, + pub accent: &'static str, +} + +pub const MIDNIGHT: ThemePalette = ThemePalette { + name: "midnight", + shell_background: "#090b16", + surface_background: "#14182b", + text_primary: "#edf2ff", + accent: "#6b87ff", +}; + +pub const DAWN: ThemePalette = ThemePalette { + name: "dawn", + shell_background: "#f6f3eb", + surface_background: "#fffdf8", + text_primary: "#312a20", + accent: "#9a5b29", +}; diff --git a/crates/native-shell/src/view_models/home.rs b/crates/native-shell/src/view_models/home.rs new file mode 100644 index 0000000..e799cf9 --- /dev/null +++ b/crates/native-shell/src/view_models/home.rs @@ -0,0 +1,90 @@ +#[derive(Debug, Clone, PartialEq, Eq)] +pub enum ProjectStatus { + Running, + Idle, +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub enum SessionState { + Healthy, + Blocked, +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct HomeProjectSummary { + pub id: String, + pub name: String, + pub status: ProjectStatus, + pub 
latest_session: SessionState, +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct HomeSessionSummary { + pub project_id: String, + pub session_id: String, + pub status: SessionState, +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct HomeAction { + pub id: String, + pub label: String, +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct HomeViewModel { + pub heading: String, + pub strapline: String, + pub active_projects: Vec<HomeProjectSummary>, + pub recent_sessions: Vec<HomeSessionSummary>, + pub primary_actions: Vec<HomeAction>, +} + +impl HomeViewModel { + pub fn seeded() -> Self { + Self { + heading: String::from("INITIALIZE SEQUENCE"), + strapline: String::from("Autonomous AI loop orchestrator."), + active_projects: vec![ + HomeProjectSummary { + id: String::from("proj-alpha"), + name: String::from("Dashboard Parity"), + status: ProjectStatus::Running, + latest_session: SessionState::Healthy, + }, + HomeProjectSummary { + id: String::from("proj-beta"), + name: String::from("Shell Migration"), + status: ProjectStatus::Idle, + latest_session: SessionState::Blocked, + }, + ], + recent_sessions: vec![ + HomeSessionSummary { + project_id: String::from("proj-alpha"), + session_id: String::from("sess-104"), + status: SessionState::Healthy, + }, + HomeSessionSummary { + project_id: String::from("proj-beta"), + session_id: String::from("sess-097"), + status: SessionState::Blocked, + }, + ], + primary_actions: vec![ + HomeAction { + id: String::from("start-project"), + label: String::from("Start new project"), + }, + HomeAction { + id: String::from("resume-project"), + label: String::from("Resume project"), + }, + HomeAction { + id: String::from("open-monitor"), + label: String::from("Open monitor"), + }, + ], + } + } +} diff --git a/crates/ralph-core/Cargo.toml b/crates/ralph-core/Cargo.toml index b2f82af..98df9b3 100644 --- a/crates/ralph-core/Cargo.toml +++ b/crates/ralph-core/Cargo.toml @@ -9,12 +9,17 @@ tokio = { version = "1", features = ["rt-multi-thread", "process", "time", "sign serde = { 
version = "1", features = ["derive"] } serde_json = "1" toml = "0.8" -reqwest = { version = "0.12", features = ["json"] } +reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls"] } tracing = "0.1" chrono = { version = "0.4", features = ["serde"] } anyhow = "1" thiserror = "2" regex = "1" +netstat2 = "0.11" +sysinfo = "0.37" [dev-dependencies] tempfile = "3" + +[lints] +workspace = true diff --git a/crates/ralph-core/src/config.rs b/crates/ralph-core/src/config.rs index 4b6046d..83b75f0 100644 --- a/crates/ralph-core/src/config.rs +++ b/crates/ralph-core/src/config.rs @@ -95,25 +95,22 @@ impl RalphConfig { } pub fn load_toml_overlay(&mut self, config_path: &Path) -> Result<(), ConfigError> { - let content = std::fs::read_to_string(config_path).map_err(|source| { - ConfigError::ReadFailed { + let content = + std::fs::read_to_string(config_path).map_err(|source| ConfigError::ReadFailed { path: config_path.to_path_buf(), source, - } - })?; - let parsed: TomlConfig = toml::from_str(&content).map_err(|source| { - ConfigError::ParseFailed { + })?; + let parsed: TomlConfig = + toml::from_str(&content).map_err(|source| ConfigError::ParseFailed { path: config_path.to_path_buf(), source, - } - })?; + })?; let base_dir = parsed .project .as_ref() .and_then(|project| project.working_directory.as_deref()) - .map(PathBuf::from) - .unwrap_or_else(|| self.paths.ralph_dir.clone()); + .map_or_else(|| self.paths.ralph_dir.clone(), PathBuf::from); if let Some(prd_cfg) = &parsed.prd { if let Some(path) = &prd_cfg.path { @@ -134,9 +131,14 @@ impl RalphConfig { } if let Some(services_cfg) = parsed.services { - let legacy = services_cfg.legacy.map(|svc| to_service_config(svc, "legacy")); + let legacy = services_cfg + .legacy + .map(|svc| to_service_config(svc, "legacy")); let new_service = services_cfg.new.map(|svc| to_service_config(svc, "new")); - self.services = Some(ServiceConfigs { legacy, new_service }); + self.services = Some(ServiceConfigs { + 
legacy, + new_service, + }); } if let Some(test_cfg) = parsed.test { diff --git a/crates/ralph-core/src/detection/failure_memory.rs b/crates/ralph-core/src/detection/failure_memory.rs index 371e467..f6fc3af 100644 --- a/crates/ralph-core/src/detection/failure_memory.rs +++ b/crates/ralph-core/src/detection/failure_memory.rs @@ -39,7 +39,9 @@ impl FailureMemory { } pub fn get_record(&self, story_id: &str) -> Option<&StoryFailureRecord> { - self.stories.iter().find(|record| record.story_id == story_id) + self.stories + .iter() + .find(|record| record.story_id == story_id) } pub fn record_failure( @@ -80,15 +82,18 @@ impl FailureMemory { .map(|attempt| attempt.approach_summary.as_str()) .collect(); let all_same = recent_approaches.len() >= 2 - && recent_approaches.windows(2).all(|window| window[0] == window[1]); + && recent_approaches + .windows(2) + .all(|window| window[0] == window[1]); if all_same { existing.diversity_required = true; if let Some(last_approach) = recent_approaches.first() { - if !existing.banned_approaches.contains(&last_approach.to_string()) { - existing - .banned_approaches - .push(last_approach.to_string()); + if !existing + .banned_approaches + .contains(&last_approach.to_string()) + { + existing.banned_approaches.push(last_approach.to_string()); } } } @@ -107,8 +112,7 @@ impl FailureMemory { pub fn is_in_gutter(&self, story_id: &str, threshold: u32) -> bool { self.get_record(story_id) - .map(|record| record.gutter_score >= threshold) - .unwrap_or(false) + .is_some_and(|record| record.gutter_score >= threshold) } pub fn build_diversity_prompt(&self, story_id: &str) -> Option { @@ -130,10 +134,7 @@ impl FailureMemory { } } - prompt.push_str(&format!( - "\nPrevious attempts: {}\n", - record.attempts.len() - )); + prompt.push_str(&format!("\nPrevious attempts: {}\n", record.attempts.len())); for attempt in record.attempts.iter().rev().take(3) { prompt.push_str(&format!( diff --git a/crates/ralph-core/src/detection/loop_detector.rs 
b/crates/ralph-core/src/detection/loop_detector.rs index 406f0d9..cb85b02 100644 --- a/crates/ralph-core/src/detection/loop_detector.rs +++ b/crates/ralph-core/src/detection/loop_detector.rs @@ -7,6 +7,12 @@ pub struct LoopDetector { recent_lines: VecDeque<String>, } +impl Default for LoopDetector { + fn default() -> Self { + Self::new() + } +} + impl LoopDetector { pub fn new() -> Self { Self { @@ -50,7 +56,13 @@ impl LoopDetector { } if repetitions >= REPETITION_THRESHOLD { - return Some(pattern.iter().map(|line| line.as_str()).collect::<Vec<_>>().join(" | ")); + return Some( + pattern + .iter() + .map(|line| line.as_str()) + .collect::<Vec<_>>() + .join(" | "), + ); } } @@ -66,7 +78,9 @@ fn normalize_line(line: &str) -> String { let trimmed = line.trim(); if trimmed.is_empty() || trimmed.starts_with("---") - || trimmed.chars().all(|character| character == '=' || character == '-') + || trimmed + .chars() + .all(|character| character == '=' || character == '-') { return String::new(); } diff --git a/crates/ralph-core/src/detection/stall.rs b/crates/ralph-core/src/detection/stall.rs index e62c621..c9d2bc7 100644 --- a/crates/ralph-core/src/detection/stall.rs +++ b/crates/ralph-core/src/detection/stall.rs @@ -18,12 +18,11 @@ pub async fn monitor_child_with_stall_detection( shutdown_flag: &Arc<AtomicBool>, output_sender: Option<mpsc::UnboundedSender<String>>, ) -> StallVerdict { - let stdout = match child.stdout.take() { - Some(stdout) => stdout, - None => { - let _ = child.wait().await; - return StallVerdict::Completed; - } + let stdout = if let Some(stdout) = child.stdout.take() { + stdout + } else { + let _ = child.wait().await; + return StallVerdict::Completed; }; let stderr = child.stderr.take(); diff --git a/crates/ralph-core/src/events.rs b/crates/ralph-core/src/events.rs index c69f632..b948dcd 100644 --- a/crates/ralph-core/src/events.rs +++ b/crates/ralph-core/src/events.rs @@ -60,6 +60,12 @@ pub struct RecordingEventSink { events: std::sync::Mutex>, } +impl Default for RecordingEventSink { + fn default() -> Self { + 
Self::new() + } +} + impl RecordingEventSink { pub fn new() -> Self { Self { diff --git a/crates/ralph-core/src/git.rs b/crates/ralph-core/src/git.rs index 098b5b0..4b2d93b 100644 --- a/crates/ralph-core/src/git.rs +++ b/crates/ralph-core/src/git.rs @@ -65,9 +65,7 @@ pub async fn count_commits_between(work_dir: &Path, from_hash: &str, to_hash: &s .current_dir(work_dir) .output() .await?; - let count = String::from_utf8_lossy(&output.stdout) - .lines() - .count() as u32; + let count = String::from_utf8_lossy(&output.stdout).lines().count() as u32; Ok(count) } diff --git a/crates/ralph-core/src/guardrails.rs b/crates/ralph-core/src/guardrails.rs index c8a8fc8..992a844 100644 --- a/crates/ralph-core/src/guardrails.rs +++ b/crates/ralph-core/src/guardrails.rs @@ -1,7 +1,7 @@ use crate::errors::GuardrailError; use std::path::Path; -const GUARDRAILS_TEMPLATE: &str = r#"# Guardrails (Signs) +const GUARDRAILS_TEMPLATE: &str = r"# Guardrails (Signs) Lessons learned from previous iterations. The agent MUST read this FIRST. 
@@ -19,7 +19,7 @@ When something fails repeatedly, add a sign: - **Trigger**: [When it applies] - **Instruction**: [What to do instead] - **Added after**: Iteration N -"#; +"; pub fn ensure_exists(path: &Path) -> Result<(), GuardrailError> { if !path.exists() { diff --git a/crates/ralph-core/src/health/manager.rs b/crates/ralph-core/src/health/manager.rs index 078b148..174e5ce 100644 --- a/crates/ralph-core/src/health/manager.rs +++ b/crates/ralph-core/src/health/manager.rs @@ -1,81 +1,109 @@ +use netstat2::{AddressFamilyFlags, ProtocolFlags, ProtocolSocketInfo}; use std::path::Path; -use tokio::process::Command; +use std::process::Stdio; +use sysinfo::{Pid, ProcessRefreshKind, ProcessesToUpdate, Signal, System}; pub async fn run_shell_command(command: &str, work_dir: Option<&Path>) -> bool { - let mut cmd = Command::new("sh"); - cmd.args(["-c", command]); - if let Some(dir) = work_dir { - cmd.current_dir(dir); + let mut process = crate::platform::shell_command(command); + if let Some(directory) = work_dir { + process.current_dir(directory); } - match cmd.output().await { + match process.output().await { Ok(output) => output.status.success(), Err(_) => false, } } pub async fn run_shell_command_background(command: &str, work_dir: Option<&Path>) { - let nohup_cmd = format!("nohup {command} > /dev/null 2>&1 &"); - let mut cmd = Command::new("sh"); - cmd.args(["-c", &nohup_cmd]); - if let Some(dir) = work_dir { - cmd.current_dir(dir); + let mut process = crate::platform::shell_command(command); + process + .stdin(Stdio::null()) + .stdout(Stdio::null()) + .stderr(Stdio::null()); + if let Some(directory) = work_dir { + process.current_dir(directory); } - let _ = cmd.spawn(); + #[cfg(windows)] + { + use std::os::windows::process::CommandExt; + process.creation_flags(0x08000000); + } + let _ = process.spawn(); } pub async fn kill_process_on_port(port: u16) { - let lsof_cmd = format!("lsof -ti :{port}"); - let output = Command::new("sh") - .args(["-c", &lsof_cmd]) - 
.output() - .await; + let process_ids = tokio::task::spawn_blocking(move || collect_process_ids_on_port(port)) + .await + .unwrap_or_default(); + for process_id in process_ids { + let _ = send_signal(process_id, Signal::Term).await; + } +} - - if let Ok(output) = output { - let pids = String::from_utf8_lossy(&output.stdout); - for pid_str in pids.lines() { - if let Ok(pid) = pid_str.trim().parse::<u32>() { - let _ = Command::new("kill") - .arg(pid.to_string()) - .output() - .await; - } +fn collect_process_ids_on_port(port: u16) -> Vec<u32> { + let family_flags = AddressFamilyFlags::IPV4 | AddressFamilyFlags::IPV6; + let protocol_flags = ProtocolFlags::TCP | ProtocolFlags::UDP; + let mut collected_ids = Vec::new(); + let socket_info_list = match netstat2::get_sockets_info(family_flags, protocol_flags) { + Ok(items) => items, + Err(_) => return collected_ids, + }; + for socket_info in socket_info_list { + let local_port = match socket_info.protocol_socket_info { + ProtocolSocketInfo::Tcp(tcp_socket_info) => tcp_socket_info.local_port, + ProtocolSocketInfo::Udp(udp_socket_info) => udp_socket_info.local_port, + }; + if local_port != port { + continue; + } + for process_id in socket_info.associated_pids { + collected_ids.push(process_id); + } + } + collected_ids.sort_unstable(); + collected_ids.dedup(); + collected_ids } -pub async fn graceful_kill(pid: u32) { - use std::time::Duration; +fn process_exists(process_id: u32) -> bool { + let mut system = System::new(); + system.refresh_processes_specifics(ProcessesToUpdate::All, true, ProcessRefreshKind::nothing()); + system.process(Pid::from_u32(process_id)).is_some() +} - let pid_str = pid.to_string(); +fn signal_process(process_id: u32, signal: Signal) -> bool { + let mut system = System::new(); + system.refresh_processes_specifics(ProcessesToUpdate::All, true, ProcessRefreshKind::nothing()); + match system.process(Pid::from_u32(process_id)) { + Some(process) => process.kill_with(signal).unwrap_or(false), + None => false, + } +} - let _ 
= Command::new("kill") - .args(["-INT", &pid_str]) - .output() - .await; +pub async fn graceful_kill(pid: u32) { + use std::time::Duration; + let _ = send_signal(pid, Signal::Interrupt).await; tokio::time::sleep(Duration::from_secs(10)).await; if is_process_alive(pid).await { - let _ = Command::new("kill") - .args(["-TERM", &pid_str]) - .output() - .await; + let _ = send_signal(pid, Signal::Term).await; tokio::time::sleep(Duration::from_secs(5)).await; } if is_process_alive(pid).await { - let _ = Command::new("kill") - .args(["-9", &pid_str]) - .output() - .await; + let _ = send_signal(pid, Signal::Kill).await; crate::logger::log_warning(&format!("Force killed PID {pid} (SIGKILL)")); } } async fn is_process_alive(pid: u32) -> bool { - Command::new("kill") - .args(["-0", &pid.to_string()]) - .output() + tokio::task::spawn_blocking(move || process_exists(pid)) + .await + .unwrap_or(false) +} + +async fn send_signal(process_id: u32, signal: Signal) -> bool { + tokio::task::spawn_blocking(move || signal_process(process_id, signal)) .await - .map(|output| output.status.success()) .unwrap_or(false) } diff --git a/crates/ralph-core/src/health/mod.rs b/crates/ralph-core/src/health/mod.rs index d023f47..14941b2 100644 --- a/crates/ralph-core/src/health/mod.rs +++ b/crates/ralph-core/src/health/mod.rs @@ -33,7 +33,10 @@ async fn ensure_service_running(svc_config: &ServiceConfig) -> bool { if service::check_health(health_url).await { return true; } - logger::log_warning(&format!("{} is DOWN, attempting auto-start...", svc_config.name)); + logger::log_warning(&format!( + "{} is DOWN, attempting auto-start...", + svc_config.name + )); if let Some(start_cmd) = &svc_config.start_command { let work_dir = svc_config.working_directory.as_deref(); if let Some(stop_cmd) = &svc_config.stop_command { diff --git a/crates/ralph-core/src/health/service.rs b/crates/ralph-core/src/health/service.rs index 6e9e677..58d478e 100644 --- a/crates/ralph-core/src/health/service.rs +++ 
b/crates/ralph-core/src/health/service.rs @@ -6,7 +6,8 @@ pub async fn check_health(url: &str) -> bool { .build() .unwrap_or_default(); - match client.post(url) + match client + .post(url) .header("Content-Type", "application/json") .body("{}") .send() @@ -26,9 +27,7 @@ pub async fn wait_for_health(service_name: &str, url: &str, max_wait_secs: u64) while elapsed < max_wait_secs { if check_health(url).await { - crate::logger::log_success(&format!( - "{service_name} is healthy after {elapsed}s" - )); + crate::logger::log_success(&format!("{service_name} is healthy after {elapsed}s")); return true; } tokio::time::sleep(interval).await; diff --git a/crates/ralph-core/src/lib.rs b/crates/ralph-core/src/lib.rs index 76afa42..0035ffd 100644 --- a/crates/ralph-core/src/lib.rs +++ b/crates/ralph-core/src/lib.rs @@ -8,12 +8,17 @@ pub mod guardrails; pub mod health; pub mod logger; pub mod loop_engine; +pub mod platform; pub mod ports; pub mod prd; pub mod prompt; pub mod providers; +pub mod scheduler; pub mod state; pub mod verification; +pub mod worktree; +pub use loop_engine::scheduler::{CompletionScheduler, MergeAction, WorktreeCompletion}; +pub use loop_engine::worktree::{LoopExecutionState, WorktreeExecutionState, PRIMARY_WORKTREE_ID}; #[allow(dead_code)] pub mod plugin; diff --git a/crates/ralph-core/src/loop_engine/helpers.rs b/crates/ralph-core/src/loop_engine/helpers.rs index f5dec13..c695dce 100644 --- a/crates/ralph-core/src/loop_engine/helpers.rs +++ b/crates/ralph-core/src/loop_engine/helpers.rs @@ -37,11 +37,7 @@ pub async fn interruptible_sleep( false } -pub async fn log_session_summary( - iterations: u32, - config: &RalphConfig, - initial_commit: &str, -) { +pub async fn log_session_summary(iterations: u32, config: &RalphConfig, initial_commit: &str) { let total_commits = git::count_commits_since(&config.paths.work_dir, initial_commit) .await .unwrap_or(0); diff --git a/crates/ralph-core/src/loop_engine/iteration.rs 
b/crates/ralph-core/src/loop_engine/iteration.rs index 69cbcc3..339ecb5 100644 --- a/crates/ralph-core/src/loop_engine/iteration.rs +++ b/crates/ralph-core/src/loop_engine/iteration.rs @@ -87,8 +87,7 @@ pub async fn run_with_verification( break agent_result; } - let error_sig = - verification::classify_error_signature(&verification_result.diagnostics); + let error_sig = verification::classify_error_signature(&verification_result.diagnostics); if last_error_signature.as_ref() == Some(&error_sig) { consecutive_same_error += 1; @@ -99,8 +98,14 @@ pub async fn run_with_verification( if consecutive_same_error >= CIRCUIT_BREAKER_THRESHOLD { handle_circuit_breaker( - config, &story.id, &error_sig, consecutive_same_error, - iteration, event_sink, failure_memory, verification_attempt, + config, + &story.id, + &error_sig, + consecutive_same_error, + iteration, + event_sink, + failure_memory, + verification_attempt, verification_result.diagnostics.len(), ); break agent_result; @@ -121,21 +126,32 @@ pub async fn run_with_verification( if verification_attempt >= max_retries { handle_retry_exhausted( - config, &story.id, max_retries, &verification_result, - iteration, failure_memory, + config, + &story.id, + max_retries, + &verification_result, + iteration, + failure_memory, ); break agent_result; } let retry_prompt = verification::build_retry_prompt( - story, &verification_result.diagnostics, - verification_attempt, max_retries, &config.paths.work_dir, + story, + &verification_result.diagnostics, + verification_attempt, + max_retries, + &config.paths.work_dir, ); let retry_hash = crate::prompt::prompt_content_hash(&retry_prompt); if let Some(prev) = prev_prompt_hash { if retry_hash != prev { - tracing::info!(prev_hash = prev, new_hash = retry_hash, "prompt changed between attempts"); + tracing::info!( + prev_hash = prev, + new_hash = retry_hash, + "prompt changed between attempts" + ); } } prev_prompt_hash = Some(retry_hash); @@ -173,7 +189,9 @@ fn handle_circuit_breaker( 
circuit_breaker: true, }); tracing::warn!( - story_id, signature = error_sig, consecutive = consecutive_same_error, + story_id, + signature = error_sig, + consecutive = consecutive_same_error, "circuit breaker: same error signature repeated" ); logger::log_warning(&format!( @@ -181,13 +199,17 @@ fn handle_circuit_breaker( )); failure_memory.record_failure( - story_id, iteration, "circuit_breaker", vec![], + story_id, + iteration, + "circuit_breaker", + vec![], &format!("Same error signature {error_sig} repeated {consecutive_same_error}x"), ); prd_lifecycle::mark_story_blocked(config, story_id); if let Err(guardrail_err) = crate::guardrails::add_guardrail( - &config.paths.guardrails_file, story_id, + &config.paths.guardrails_file, + story_id, &format!("Circuit breaker: same verification error {consecutive_same_error}x"), iteration, ) { @@ -204,17 +226,26 @@ fn handle_retry_exhausted( failure_memory: &mut FailureMemory, ) { logger::log_error( - &format!("Story {story_id} exceeded max verification retries ({max_retries}), marking blocked"), + &format!( + "Story {story_id} exceeded max verification retries ({max_retries}), marking blocked" + ), Some(&config.paths.error_log), ); failure_memory.record_failure( - story_id, iteration, "verification_exhausted", vec![], - &format!("{} diagnostics after {max_retries} retries", verification_result.diagnostics.len()), + story_id, + iteration, + "verification_exhausted", + vec![], + &format!( + "{} diagnostics after {max_retries} retries", + verification_result.diagnostics.len() + ), ); prd_lifecycle::mark_story_blocked(config, story_id); if let Err(guardrail_err) = crate::guardrails::add_guardrail( - &config.paths.guardrails_file, story_id, + &config.paths.guardrails_file, + story_id, &format!( "Verification failed after {max_retries} retries with {} diagnostics", verification_result.diagnostics.len() diff --git a/crates/ralph-core/src/loop_engine/mod.rs b/crates/ralph-core/src/loop_engine/mod.rs index 86b30ba..01b34a1 100644 
--- a/crates/ralph-core/src/loop_engine/mod.rs +++ b/crates/ralph-core/src/loop_engine/mod.rs @@ -3,21 +3,41 @@ mod iteration; mod outcome; mod prd_lifecycle; mod rate_limiter; +pub mod scheduler; +pub mod worktree; + +pub use crate::scheduler::worktree_runner::{ + provision as provision_worktree, run_in_worktree, ProvisionedWorktree, +}; +pub use worktree::{LoopExecutionState, WorktreeExecutionState, PRIMARY_WORKTREE_ID}; use crate::config::RalphConfig; -use crate::detection::failure_memory::FailureMemory; +use crate::detection::failure_memory::{FailureMemory, StoryFailureRecord}; use crate::detection::progress; use crate::errors::LoopError; use crate::events::{HeartbeatContext, LoopEvent, LoopEventSink}; use crate::git; -use crate::guardrails; use crate::health; use crate::logger; use crate::prompt::PromptBuilder; use crate::providers::Provider; +use crate::scheduler as story_scheduler; +use crate::scheduler::{ArtifactCoordinator, SharedArtifactUpdate}; use crate::state; +use std::future::{poll_fn, Future}; +use std::pin::Pin; use std::sync::atomic::{AtomicBool, Ordering}; use std::sync::Arc; +use std::task::Poll; + +use self::scheduler::WorktreeCompletion; + +type WorkerFuture<'a> = Pin<Box<dyn Future<Output = Result<WorkerReport, LoopError>> + Send + 'a>>; + +struct WorkerReport { + completion: WorktreeCompletion, + failure_record: Option<StoryFailureRecord>, +} pub async fn run<P: Provider>( config: &RalphConfig, @@ -26,37 +46,35 @@ ) -> Result<(), LoopError> { let mut iter_count: u32 = 0; - let initial_commit = git::get_head_hash(&config.paths.work_dir).await.unwrap_or_default(); + let artifact_coordinator = ArtifactCoordinator::for_loop_engine(config.clone()); + let initial_commit = git::get_head_hash(&config.paths.work_dir) + .await + .unwrap_or_default(); let mut failure_memory = FailureMemory::load(&config.paths.failure_memory_file); - let mut consecutive_zero_progress: u32 = 0; - while iter_count < config.tuning.max_iterations { if shutdown_flag.load(Ordering::SeqCst) {
logger::log_warning("Shutdown requested, exiting loop"); break; } - state::wait_while_paused(&config.paths.pause_file).await; if state::check_and_clear_done(&config.paths.done_file) { break; } - if !health::check_all_services_parallel(config).await { - logger::log_error("Services unavailable, retrying in 30s", Some(&config.paths.error_log)); + logger::log_error( + "Services unavailable, retrying in 30s", + Some(&config.paths.error_log), + ); tokio::time::sleep(std::time::Duration::from_secs(30)).await; continue; } - - let mut prd = match prd_lifecycle::load_or_restore(config) { - Some(prd) => prd, - None => { - logger::log_error("PRD unrecoverable, stopping", Some(&config.paths.error_log)); - break; - } + let mut prd = if let Some(prd) = prd_lifecycle::load_or_restore(config) { + prd + } else { + logger::log_error("PRD unrecoverable, stopping", Some(&config.paths.error_log)); + break; }; - - let pending = prd.pending_count(); - if pending == 0 { + if prd.pending_count() == 0 { logger::log_success(&format!( "All stories completed! {} passed, {} blocked.", prd.passed_count(), @@ -64,76 +82,453 @@ pub async fn run( )); break; } - - let next_story = match prd.next_actionable_story() { - Some(story) => story.clone(), - None => { - logger::log_success("No more actionable stories. Done."); - break; - } - }; - + let ready_stories = story_scheduler::ready_story_batch(&prd); + if ready_stories.is_empty() { + logger::log_success("No more actionable stories. 
Done."); + break; + } iter_count += 1; let iter_start = std::time::Instant::now(); logger::log_iteration_header(iter_count); - log_progress(&prd, &next_story.title, config, iter_count); - - if failure_memory.is_in_gutter(&next_story.id, config.tuning.gutter_threshold) { - handle_gutter_skip(config, &mut prd, &next_story.id, iter_count, event_sink)?; - continue; + log_progress(&prd, &ready_stories[0].title, config, iter_count); + if ready_stories.len() == 1 { + run_single_ready_story( + config, + provider, + &artifact_coordinator, + shutdown_flag.clone(), + event_sink, + &mut prd, + ready_stories[0].clone(), + &mut failure_memory, + iter_count, + &initial_commit, + &iter_start, + ) + .await?; + } else { + run_parallel_ready_stories( + config, + provider, + &artifact_coordinator, + shutdown_flag.clone(), + event_sink, + &mut prd, + ready_stories, + &mut failure_memory, + iter_count, + &initial_commit, + &iter_start, + ) + .await?; } - - prd.save(&config.paths.prd_backup)?; - let progress_before = progress::get_progress_file_size(&config.paths.progress_file); - let head_before = git::get_head_hash(&config.paths.work_dir).await.unwrap_or_default(); - - let built = build_prompt(config, iter_count, &failure_memory, &next_story, event_sink); - if let Err(validation_errors) = built.validate(&next_story.id) { - tracing::error!(story_id = %next_story.id, errors = ?validation_errors, "prompt validation failed"); - event_sink.emit(LoopEvent::StorySkipped { - story_id: next_story.id.clone(), - reason: format!("prompt validation failed: {}", validation_errors.join("; ")), - }); - continue; + logger::log_info(&format!( + "Cooldown {}s before next iteration...", + config.tuning.cooldown_secs + )); + if helpers::interruptible_sleep( + config.tuning.cooldown_secs, + &shutdown_flag, + event_sink, + HeartbeatContext::CooldownWait, + ) + .await + { + logger::log_warning("Shutdown requested during cooldown"); + break; } + } + helpers::log_session_summary(iter_count, config, 
&initial_commit).await; + state::clear_state(&config.paths.state_file); + Ok(()) +} - logger::log_info(&format!("Starting {} agent ({})...", provider.name(), provider.model())); +#[allow(clippy::too_many_arguments)] +async fn run_single_ready_story<P: Provider>( + config: &RalphConfig, + provider: &P, + artifact_coordinator: &ArtifactCoordinator, + shutdown_flag: Arc<AtomicBool>, + event_sink: &dyn LoopEventSink, + prd: &mut crate::prd::Prd, + story: crate::prd::UserStory, + failure_memory: &mut FailureMemory, + iteration: u32, + initial_commit: &str, + iter_start: &std::time::Instant, +) -> Result<(), LoopError> { + if failure_memory.is_in_gutter(&story.id, config.tuning.gutter_threshold) { + handle_gutter_skip( + artifact_coordinator, + config, + prd, + &story.id, + iteration, + event_sink, + ) + .await?; + return Ok(()); + } + prd.save(&config.paths.prd_backup)?; + let progress_before = progress::get_progress_file_size(&config.paths.progress_file); + let head_before = git::get_head_hash(&config.paths.work_dir) + .await + .unwrap_or_default(); + let built = build_prompt(config, iteration, failure_memory, &story, event_sink); + if let Err(validation_errors) = built.validate(&story.id) { + tracing::error!(story_id = %story.id, errors = ?validation_errors, "prompt validation failed"); + event_sink.emit(LoopEvent::StorySkipped { + story_id: story.id.clone(), + reason: format!("prompt validation failed: {}", validation_errors.join("; ")), + }); + return Ok(()); + } + logger::log_info(&format!( + "Starting {} agent ({})...", + provider.name(), + provider.model() + )); + let outcome = iteration::run_with_verification( + config, + provider, + &story, + built.text, + built.hash, + &shutdown_flag, + event_sink, + failure_memory, + iteration, + ) + .await; + let final_passed = outcome.story_passed || check_story_passed_in_prd(config, &story.id); + let progress_report = progress::analyze_iteration_progress( + &config.paths.work_dir, + &head_before, + final_passed, + &config.paths.progress_file, +
progress_before, + ) + .await; + let mut consecutive_zero_progress = 0; + let mut iteration_cursor = iteration; + outcome::handle( + config, + &outcome.agent_result, + prd, + &story.id, + final_passed, + &progress_report, + failure_memory, + &mut consecutive_zero_progress, + &mut iteration_cursor, + &shutdown_flag, + event_sink, + ) + .await?; + failure_memory.save(&config.paths.failure_memory_file)?; + log_iteration_complete( + config, + iteration, + initial_commit, + &story.id, + final_passed, + iter_start, + ) + .await; + Ok(()) +} - let outcome = iteration::run_with_verification( - config, provider, &next_story, built.text, built.hash, - &shutdown_flag, event_sink, &mut failure_memory, iter_count, +#[allow(clippy::too_many_arguments)] +async fn run_parallel_ready_stories<P: Provider>( + config: &RalphConfig, + provider: &P, + artifact_coordinator: &ArtifactCoordinator, + shutdown_flag: Arc<AtomicBool>, + event_sink: &dyn LoopEventSink, + prd: &mut crate::prd::Prd, + ready_stories: Vec<crate::prd::UserStory>, + failure_memory: &mut FailureMemory, + iteration: u32, + initial_commit: &str, + iter_start: &std::time::Instant, +) -> Result<(), LoopError> { + prd.save(&config.paths.prd_backup)?; + let shared_failure_memory = failure_memory.clone(); + let mut workers = Vec::new(); + for story in ready_stories { + if failure_memory.is_in_gutter(&story.id, config.tuning.gutter_threshold) { + handle_gutter_skip( + artifact_coordinator, + config, + prd, + &story.id, + iteration, + event_sink, + ) + .await?; + continue; + } + let provisioned = provision_worktree(&config.paths.work_dir, &story.id) + .await + .map_err(|err| LoopError::Other(anyhow::anyhow!(err.to_string())))?; + workers.push(Box::pin(execute_ready_story( + config, + provider, + shutdown_flag.clone(), + event_sink, + story, + provisioned, + shared_failure_memory.clone(), + iteration, + )) as WorkerFuture<'_>); + } + if workers.is_empty() { + return Ok(()); + } + let reports = collect_worker_reports(workers).await?; + for update in
story_scheduler::shared_updates_for( + reports + .iter() + .map(|report| report.completion.clone()) + .collect(), + ) { + artifact_coordinator + .submit(update) + .await + .map_err(|err| LoopError::Other(anyhow::anyhow!(err.to_string())))?; + } + for report in &reports { + merge_failure_record(failure_memory, report.failure_record.clone()); + if let Some(story) = prd + .stories + .iter_mut() + .find(|story| story.id == report.completion.story_id) + { + story.passes = report.completion.passed; + story.blocked = report.completion.blocked; + } + log_iteration_complete( + config, + iteration, + initial_commit, + &report.completion.story_id, + report.completion.passed, + iter_start, ) .await; + } + for report in &reports { + process_worktree_lifecycle(config, report).await; + } + failure_memory.save(&config.paths.failure_memory_file)?; + Ok(()) +} - let final_passed = outcome.story_passed || check_story_passed_in_prd(config, &next_story.id); +async fn process_worktree_lifecycle(config: &RalphConfig, report: &WorkerReport) { + let completion = &report.completion; + if !completion.passed || completion.blocked { + return; + } + let Some(branch) = &completion.branch else { + return; + }; + let work_dir = &config.paths.work_dir; + match crate::worktree::merge_into_main(work_dir, branch).await { + Ok(true) => { + let wt_path = crate::scheduler::worktree_runner::worktree_path_for( + work_dir, + &completion.story_id, + ); + if let Err(err) = crate::worktree::remove(work_dir, &wt_path, Some(branch)).await { + tracing::warn!(story_id = %completion.story_id, err = %err, "worktree teardown failed"); + } + } + Ok(false) => { + tracing::warn!( + story_id = %completion.story_id, + branch = %branch, + "fast-forward merge failed, keeping worktree for manual inspection" + ); + } + Err(err) => { + tracing::error!(story_id = %completion.story_id, err = %err, "merge attempt failed"); + } + } +} - let progress_report = progress::analyze_iteration_progress( - &config.paths.work_dir, 
&head_before, final_passed, - &config.paths.progress_file, progress_before, - ) - .await; +async fn collect_worker_reports( + mut workers: Vec<WorkerFuture<'_>>, +) -> Result<Vec<WorkerReport>, LoopError> { + let mut reports = Vec::with_capacity(workers.len()); + poll_fn(|cx| { + let mut index = 0; + while index < workers.len() { + match workers[index].as_mut().poll(cx) { + Poll::Ready(Ok(report)) => { + reports.push(report); + let _ = workers.swap_remove(index); + } + Poll::Ready(Err(err)) => return Poll::Ready(Err(err)), + Poll::Pending => index += 1, + } + } + if workers.is_empty() { + Poll::Ready(Ok(())) + } else { + Poll::Pending + } + }) + .await?; + Ok(reports) +} - outcome::handle( - config, &outcome.agent_result, &mut prd, &next_story.id, - final_passed, &progress_report, &mut failure_memory, - &mut consecutive_zero_progress, &mut iter_count, - &shutdown_flag, event_sink, - ) - .await?; +#[allow(clippy::too_many_arguments)] +async fn execute_ready_story<P: Provider>( + config: &RalphConfig, + provider: &P, + shutdown_flag: Arc<AtomicBool>, + event_sink: &dyn LoopEventSink, + story: crate::prd::UserStory, + provisioned: ProvisionedWorktree, + mut failure_memory: FailureMemory, + iteration: u32, +) -> Result<WorkerReport, LoopError> { + let worker_config = + crate::scheduler::worktree_runner::prepare_worker_config(config, &provisioned) + .map_err(|err| LoopError::Other(anyhow::anyhow!(err.to_string())))?; + let mut worker_prd = + prd_lifecycle::load_or_restore(&worker_config).ok_or(LoopError::PrdUnrecoverable)?; + let guardrail_seed = + std::fs::read_to_string(&worker_config.paths.guardrails_file).unwrap_or_default(); + let progress_before = progress::get_progress_file_size(&worker_config.paths.progress_file); + let head_before = git::get_head_hash(&worker_config.paths.work_dir) + .await + .unwrap_or_default(); + let built = build_prompt( + &worker_config, + iteration, + &failure_memory, + &story, + event_sink, + ); + if let Err(validation_errors) = built.validate(&story.id) { + tracing::error!(story_id = %story.id, errors =
?validation_errors, "prompt validation failed"); + event_sink.emit(LoopEvent::StorySkipped { + story_id: story.id.clone(), + reason: format!("prompt validation failed: {}", validation_errors.join("; ")), + }); + return Ok(worker_report( + &story.id, + &provisioned.branch, + &worker_prd, + None, + failure_memory.get_record(&story.id).cloned(), + u64::from(iteration), + )); + } + logger::log_info(&format!( + "Starting {} agent ({}) in {}...", + provider.name(), + provider.model(), + provisioned.worktree_path.display() + )); + let outcome = iteration::run_with_verification( + &worker_config, + provider, + &story, + built.text, + built.hash, + &shutdown_flag, + event_sink, + &mut failure_memory, + iteration, + ) + .await; + let final_passed = outcome.story_passed || check_story_passed_in_prd(&worker_config, &story.id); + let progress_report = progress::analyze_iteration_progress( + &worker_config.paths.work_dir, + &head_before, + final_passed, + &worker_config.paths.progress_file, + progress_before, + ) + .await; + let mut consecutive_zero_progress = 0; + let mut iteration_cursor = iteration; + outcome::handle( + &worker_config, + &outcome.agent_result, + &mut worker_prd, + &story.id, + final_passed, + &progress_report, + &mut failure_memory, + &mut consecutive_zero_progress, + &mut iteration_cursor, + &shutdown_flag, + event_sink, + ) + .await?; + Ok(worker_report( + &story.id, + &provisioned.branch, + &worker_prd, + guardrail_append(&guardrail_seed, &worker_config.paths.guardrails_file), + failure_memory.get_record(&story.id).cloned(), + u64::from(iteration), + )) +} - failure_memory.save(&config.paths.failure_memory_file)?; - log_iteration_complete(config, iter_count, &initial_commit, &next_story.id, final_passed, &iter_start).await; +fn worker_report( + story_id: &str, + worktree_id: &str, + prd: &crate::prd::Prd, + guardrail_append: Option<String>, + failure_record: Option<StoryFailureRecord>, + sequence: u64, +) -> WorkerReport { + let state = prd.stories.iter().find(|story| story.id ==
story_id); + WorkerReport { + completion: WorktreeCompletion { + worktree_id: worktree_id.to_string(), + story_id: story_id.to_string(), + passed: state.is_some_and(|story| story.passes), + blocked: state.is_some_and(|story| story.blocked), + head_commit: None, + guardrail_append, + branch: Some(worktree_id.to_string()), + sequence, + }, + failure_record, + } +} - logger::log_info(&format!("Cooldown {}s before next iteration...", config.tuning.cooldown_secs)); - if helpers::interruptible_sleep(config.tuning.cooldown_secs, &shutdown_flag, event_sink, HeartbeatContext::CooldownWait).await { - logger::log_warning("Shutdown requested during cooldown"); - break; - } +fn merge_failure_record(failure_memory: &mut FailureMemory, record: Option<StoryFailureRecord>) { + let Some(record) = record else { + return; + }; + if let Some(existing) = failure_memory + .stories + .iter_mut() + .find(|existing| existing.story_id == record.story_id) + { + *existing = record; + } else { + failure_memory.stories.push(record); } +} - helpers::log_session_summary(iter_count, config, &initial_commit).await; - state::clear_state(&config.paths.state_file); - Ok(()) +fn guardrail_append(seed: &str, path: &std::path::Path) -> Option<String> { + let current = std::fs::read_to_string(path).ok()?; + let appended = current + .strip_prefix(seed) + .unwrap_or(current.as_str()) + .trim(); + if appended.is_empty() { + None + } else { + Some(format!("\n{appended}\n")) + } }
-fn handle_gutter_skip( +async fn handle_gutter_skip( + artifact_coordinator: &ArtifactCoordinator, config: &RalphConfig, prd: &mut crate::prd::Prd, story_id: &str, iteration: u32, event_sink: &dyn LoopEventSink, ) -> Result<(), LoopError> { - logger::log_warning(&format!("GUTTER: {story_id} has failed {}+ times, skipping", config.tuning.gutter_threshold)); + logger::log_warning(&format!( + "GUTTER: {story_id} has failed {}+ times, skipping", + config.tuning.gutter_threshold + )); event_sink.emit(LoopEvent::StorySkipped { story_id: story_id.to_string(), - reason: format!("gutter threshold ({} failures)", config.tuning.gutter_threshold), + reason: format!( + "gutter threshold ({} failures)", + config.tuning.gutter_threshold + ), }); - if let Err(err) = guardrails::add_guardrail(&config.paths.guardrails_file, story_id, "Exceeded gutter threshold", iteration) { - tracing::warn!(error = %err, "failed to write guardrail"); + if let Err(err) = artifact_coordinator + .submit(SharedArtifactUpdate::BlockStoryAndAddGuardrail { + story_id: story_id.to_string(), + error_message: "Exceeded gutter threshold".to_string(), + iteration, + }) + .await + { + tracing::warn!(error = %err, "failed to update shared artifacts"); + } else { + prd.stories + .iter_mut() + .find(|story| story.id == story_id) + .map(|story| story.blocked = true); } - logger::log_activity(&format!("Story {story_id} skipped (gutter threshold)"), &config.paths.activity_log); - prd.stories.iter_mut().find(|story| story.id == story_id).map(|story| story.blocked = true); - prd.save(&config.paths.prd_file)?; + logger::log_activity( + &format!("Story {story_id} skipped (gutter threshold)"), + &config.paths.activity_log, + ); Ok(()) } @@ -175,8 +593,12 @@ fn build_prompt( event_sink: &dyn LoopEventSink, ) -> crate::prompt::BuiltPrompt { let builder = PromptBuilder::new( - &config.paths.prompt_file, &config.paths.guardrails_file, &config.paths.ralph_dir, - &config.paths.work_dir, iteration, failure_memory, + 
&config.paths.prompt_file, + &config.paths.guardrails_file, + &config.paths.ralph_dir, + &config.paths.work_dir, + iteration, + failure_memory, ); let built = builder.build(story, None); event_sink.emit(LoopEvent::PromptBuilt { @@ -185,13 +607,23 @@ fn build_prompt( hash: built.hash, truncated: built.truncated, }); - tracing::debug!(size = built.size_bytes, hash = built.hash, truncated = built.truncated, "prompt built"); + tracing::debug!( + size = built.size_bytes, + hash = built.hash, + truncated = built.truncated, + "prompt built" + ); built } fn check_story_passed_in_prd(config: &RalphConfig, story_id: &str) -> bool { prd_lifecycle::load_or_restore(config) - .and_then(|prd| prd.stories.iter().find(|st| st.id == story_id).map(|st| st.passes)) + .and_then(|prd| { + prd.stories + .iter() + .find(|st| st.id == story_id) + .map(|st| st.passes) + }) .unwrap_or(false) } @@ -204,13 +636,19 @@ async fn log_iteration_complete( start: &std::time::Instant, ) { let elapsed = start.elapsed().as_secs(); - let total_commits = git::count_commits_since(&config.paths.work_dir, initial_commit).await.unwrap_or(0); + let total_commits = git::count_commits_since(&config.paths.work_dir, initial_commit) + .await + .unwrap_or(0); logger::log_activity( &format!( "Iteration {iteration} completed in {} | Commits: {total_commits} | Story: {story_id} | Passed: {passed}", - logger::format_duration(elapsed), + logger::format_duration(elapsed) ), &config.paths.activity_log, ); - tracing::info!(duration = %logger::format_duration(elapsed), commits = total_commits, "Iteration complete"); + tracing::info!( + duration = %logger::format_duration(elapsed), + commits = total_commits, + "Iteration complete" + ); } diff --git a/crates/ralph-core/src/loop_engine/outcome.rs b/crates/ralph-core/src/loop_engine/outcome.rs index f8a099d..5375258 100644 --- a/crates/ralph-core/src/loop_engine/outcome.rs +++ b/crates/ralph-core/src/loop_engine/outcome.rs @@ -31,8 +31,15 @@ pub async fn handle( } Ok(result) if 
result.success() => { handle_success( - config, prd, story_id, story_passed, progress_report, - failure_memory, consecutive_zero_progress, result, *iteration, + config, + prd, + story_id, + story_passed, + progress_report, + failure_memory, + consecutive_zero_progress, + result, + *iteration, ); } Ok(result) => { @@ -60,11 +67,17 @@ async fn handle_rate_limit( wait_secs % 60, )); logger::log_error( - &format!("Iteration {} rate limited (retry at: {retry_hint})", *iteration), + &format!( + "Iteration {} rate limited (retry at: {retry_hint})", + *iteration + ), Some(&config.paths.error_log), ); logger::log_activity( - &format!("Rate limited at iteration {}, sleeping {wait_secs}s", *iteration), + &format!( + "Rate limited at iteration {}, sleeping {wait_secs}s", + *iteration + ), &config.paths.activity_log, ); *iteration = iteration.saturating_sub(1); diff --git a/crates/ralph-core/src/loop_engine/prd_lifecycle.rs b/crates/ralph-core/src/loop_engine/prd_lifecycle.rs index c463317..4d77caa 100644 --- a/crates/ralph-core/src/loop_engine/prd_lifecycle.rs +++ b/crates/ralph-core/src/loop_engine/prd_lifecycle.rs @@ -41,7 +41,7 @@ pub fn summarize_approach(output_lines: &[String]) -> String { !trimmed.is_empty() && trimmed.len() > 10 }) .take(3) - .map(|line| line.as_str()) + .map(String::as_str) .collect(); if meaningful.is_empty() { diff --git a/crates/ralph-core/src/loop_engine/rate_limiter.rs b/crates/ralph-core/src/loop_engine/rate_limiter.rs index 0650587..2c24cca 100644 --- a/crates/ralph-core/src/loop_engine/rate_limiter.rs +++ b/crates/ralph-core/src/loop_engine/rate_limiter.rs @@ -11,7 +11,7 @@ pub fn compute_wait(result: &AgentResult, default_secs: u64) -> u64 { fn parse_retry_time_to_secs(time_str: &str) -> Option<u64> { let now = chrono::Local::now(); - let cleaned = time_str.trim().replace(".", "").to_uppercase(); + let cleaned = time_str.trim().replace('.', "").to_uppercase(); let is_pm = cleaned.contains("PM"); let is_am = cleaned.contains("AM"); diff --git
a/crates/ralph-core/src/loop_engine/scheduler.rs b/crates/ralph-core/src/loop_engine/scheduler.rs new file mode 100644 index 0000000..ae6f6c8 --- /dev/null +++ b/crates/ralph-core/src/loop_engine/scheduler.rs @@ -0,0 +1,226 @@ +use serde::{Deserialize, Serialize}; +use std::cmp::Ordering; + +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)] +#[serde(rename_all = "camelCase")] +pub struct WorktreeCompletion { + pub worktree_id: String, + pub story_id: String, + pub passed: bool, + #[serde(default)] + pub blocked: bool, + #[serde(default)] + pub head_commit: Option<String>, + #[serde(default)] + pub guardrail_append: Option<String>, + #[serde(default)] + pub branch: Option<String>, + #[serde(default)] + pub sequence: u64, +} + +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)] +#[serde(tag = "type", rename_all = "camelCase")] +pub enum MergeAction { + UpdateStoryStatus { + story_id: String, + passed: bool, + blocked: bool, + }, + AppendGuardrail { + story_id: String, + content: String, + }, + UpdateSessionHead { + worktree_id: String, + head_commit: String, + }, + MergeIntoMain { + worktree_id: String, + story_id: String, + branch: String, + }, + TeardownWorktree { + worktree_id: String, + branch: String, + }, + MergeConflict { + worktree_id: String, + story_id: String, + branch: String, + }, +} + +#[derive(Debug, Default)] +pub struct CompletionScheduler { + pending: Vec<WorktreeCompletion>, +} + +impl CompletionScheduler { + pub fn new() -> Self { + Self { + pending: Vec::new(), + } + } + + pub fn submit(&mut self, completion: WorktreeCompletion) { + self.pending.push(completion); + } + + pub fn submit_batch(&mut self, completions: Vec<WorktreeCompletion>) { + self.pending.extend(completions); + } + + pub fn drain_ordered(&mut self) -> Vec<MergeAction> { + let mut batch = std::mem::take(&mut self.pending); + batch.sort_by(deterministic_order); + batch.into_iter().flat_map(expand_actions).collect() + } + + pub fn pending_count(&self) -> usize { + self.pending.len() + } +} + +pub fn schedule(completions: Vec<WorktreeCompletion>) -> Vec<MergeAction>
{ + let mut scheduler = CompletionScheduler::new(); + scheduler.submit_batch(completions); + scheduler.drain_ordered() +} + +fn deterministic_order(left: &WorktreeCompletion, right: &WorktreeCompletion) -> Ordering { + left.story_id + .cmp(&right.story_id) + .then_with(|| left.sequence.cmp(&right.sequence)) + .then_with(|| left.worktree_id.cmp(&right.worktree_id)) +} + +fn expand_actions(completion: WorktreeCompletion) -> Vec { + let mut actions = Vec::with_capacity(5); + actions.push(MergeAction::UpdateStoryStatus { + story_id: completion.story_id.clone(), + passed: completion.passed, + blocked: completion.blocked, + }); + if let Some(content) = completion.guardrail_append { + actions.push(MergeAction::AppendGuardrail { + story_id: completion.story_id.clone(), + content, + }); + } + if let Some(head_commit) = completion.head_commit.clone() { + actions.push(MergeAction::UpdateSessionHead { + worktree_id: completion.worktree_id.clone(), + head_commit, + }); + } + if completion.passed && !completion.blocked { + if let Some(branch) = completion.branch { + actions.push(MergeAction::MergeIntoMain { + worktree_id: completion.worktree_id.clone(), + story_id: completion.story_id.clone(), + branch: branch.clone(), + }); + actions.push(MergeAction::TeardownWorktree { + worktree_id: completion.worktree_id, + branch, + }); + } + } + actions +} + +#[cfg(test)] +mod tests { + use super::*; + + fn completion( + story_id: &str, + worktree_id: &str, + passed: bool, + sequence: u64, + ) -> WorktreeCompletion { + WorktreeCompletion { + worktree_id: worktree_id.to_string(), + story_id: story_id.to_string(), + passed, + blocked: false, + head_commit: None, + guardrail_append: None, + branch: None, + sequence, + } + } + + #[test] + fn empty_scheduler_produces_no_actions() { + let actions = schedule(vec![]); + assert!(actions.is_empty()); + } + + #[test] + fn single_completion_expands_to_status_action() { + let actions = schedule(vec![completion("S-001", "wt-a", true, 0)]); + 
assert_eq!(actions.len(), 1); + assert_eq!( + actions[0], + MergeAction::UpdateStoryStatus { + story_id: "S-001".into(), + passed: true, + blocked: false, + } + ); + } + + #[test] + fn completion_with_guardrail_expands_to_two_actions() { + let mut comp = completion("S-002", "wt-a", false, 0); + comp.guardrail_append = Some("circuit breaker hit".into()); + let actions = schedule(vec![comp]); + assert_eq!(actions.len(), 2); + assert!(matches!(&actions[0], MergeAction::UpdateStoryStatus { .. })); + assert!(matches!(&actions[1], MergeAction::AppendGuardrail { .. })); + } + + #[test] + fn completion_with_head_commit_expands_to_session_update() { + let mut comp = completion("S-003", "wt-b", true, 0); + comp.head_commit = Some("abc123".into()); + let actions = schedule(vec![comp]); + assert_eq!(actions.len(), 2); + assert_eq!( + actions[1], + MergeAction::UpdateSessionHead { + worktree_id: "wt-b".into(), + head_commit: "abc123".into(), + } + ); + } + + #[test] + fn reverse_arrival_produces_same_order() { + let forward = schedule(vec![ + completion("S-001", "wt-a", true, 0), + completion("S-002", "wt-b", false, 1), + completion("S-003", "wt-c", true, 2), + ]); + let reverse = schedule(vec![ + completion("S-003", "wt-c", true, 2), + completion("S-002", "wt-b", false, 1), + completion("S-001", "wt-a", true, 0), + ]); + assert_eq!(forward, reverse); + } + + #[test] + fn scheduler_state_drains_on_each_call() { + let mut sched = CompletionScheduler::new(); + sched.submit(completion("S-001", "wt-a", true, 0)); + assert_eq!(sched.pending_count(), 1); + let first = sched.drain_ordered(); + assert_eq!(first.len(), 1); + assert_eq!(sched.pending_count(), 0); + let second = sched.drain_ordered(); + assert!(second.is_empty()); + } +} diff --git a/crates/ralph-core/src/loop_engine/worktree.rs b/crates/ralph-core/src/loop_engine/worktree.rs new file mode 100644 index 0000000..8bc7229 --- /dev/null +++ b/crates/ralph-core/src/loop_engine/worktree.rs @@ -0,0 +1,93 @@ +use 
serde::{Deserialize, Serialize}; + +pub const PRIMARY_WORKTREE_ID: &str = "primary"; + +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)] +#[serde(default, rename_all = "camelCase")] +pub struct LoopExecutionState { + pub iteration: u32, + pub current_story_id: Option<String>, + pub work_dir: String, + pub primary_worktree_id: String, + pub active_worktree_id: Option<String>, + pub worktrees: Vec<WorktreeExecutionState>, +} + +impl Default for LoopExecutionState { + fn default() -> Self { + Self { + iteration: 0, + current_story_id: None, + work_dir: String::new(), + primary_worktree_id: PRIMARY_WORKTREE_ID.to_string(), + active_worktree_id: None, + worktrees: Vec::new(), + } + } +} + +impl LoopExecutionState { + pub fn single(work_dir: impl Into<String>) -> Self { + let work_dir = work_dir.into(); + Self { + work_dir: work_dir.clone(), + primary_worktree_id: PRIMARY_WORKTREE_ID.to_string(), + active_worktree_id: Some(PRIMARY_WORKTREE_ID.to_string()), + worktrees: vec![WorktreeExecutionState::primary(work_dir)], + ..Self::default() + } + } + + pub fn resolved_worktrees(&self) -> Vec<WorktreeExecutionState> { + if self.worktrees.is_empty() { + return vec![self.primary_worktree()]; + } + self.worktrees.clone() + } + + pub fn primary_worktree(&self) -> WorktreeExecutionState { + self.worktrees + .iter() + .find(|worktree| worktree.id == self.primary_worktree_id) + .cloned() + .or_else(|| self.worktrees.first().cloned()) + .unwrap_or_else(|| WorktreeExecutionState::primary(self.work_dir.clone())) + } + + pub fn active_worktree(&self) -> WorktreeExecutionState { + let active_id = self + .active_worktree_id + .as_deref() + .unwrap_or(&self.primary_worktree_id); + self.worktree(active_id) + .unwrap_or_else(|| self.primary_worktree()) + } + + pub fn worktree(&self, worktree_id: &str) -> Option<WorktreeExecutionState> { + self.resolved_worktrees() + .into_iter() + .find(|worktree| worktree.id == worktree_id) + } +} + +#[derive(Debug, Clone, Serialize, Deserialize, Default, PartialEq, Eq)] +#[serde(default, rename_all = "camelCase")] +pub struct
WorktreeExecutionState { + pub id: String, + pub work_dir: String, + pub branch: Option<String>, + pub story_id: Option<String>, + pub iteration: u32, + pub last_prompt_hash: Option<String>, + pub head_commit: Option<String>, +} + +impl WorktreeExecutionState { + pub fn primary(work_dir: impl Into<String>) -> Self { + Self { + id: PRIMARY_WORKTREE_ID.to_string(), + work_dir: work_dir.into(), + ..Self::default() + } + } +} diff --git a/crates/ralph-core/src/platform.rs b/crates/ralph-core/src/platform.rs new file mode 100644 index 0000000..0c82be3 --- /dev/null +++ b/crates/ralph-core/src/platform.rs @@ -0,0 +1,16 @@ +use tokio::process::Command; + +pub fn shell_command(command: &str) -> Command { + #[cfg(windows)] + { + let mut process = Command::new("cmd.exe"); + process.args(["/C", command]); + process + } + #[cfg(not(windows))] + { + let mut process = Command::new("sh"); + process.args(["-c", command]); + process + } +} diff --git a/crates/ralph-core/src/plugin.rs b/crates/ralph-core/src/plugin.rs index 8b4e302..294fad1 100644 --- a/crates/ralph-core/src/plugin.rs +++ b/crates/ralph-core/src/plugin.rs @@ -42,56 +42,116 @@ pub enum ConfigFieldType { pub trait RuntimePlugin: Send + Sync { fn info(&self) -> PluginInfo; - fn api_version(&self) -> u32 { 1 } - fn config_schema(&self) -> Vec { vec![] } + fn api_version(&self) -> u32 { + 1 + } + fn config_schema(&self) -> Vec { + vec![] + } fn health_check(&self) -> impl std::future::Future> + Send; - fn pre_iteration(&self, iteration: u32) -> impl std::future::Future> + Send; - fn post_iteration(&self, iteration: u32, success: bool) -> impl std::future::Future> + Send; + fn pre_iteration(&self, iteration: u32) + -> impl std::future::Future> + Send; + fn post_iteration( + &self, + iteration: u32, + success: bool, + ) -> impl std::future::Future> + Send; } pub trait AgentPlugin: Send + Sync { fn info(&self) -> PluginInfo; - fn api_version(&self) -> u32 { 1 } - fn config_schema(&self) -> Vec { vec![] } + fn api_version(&self) -> u32 { + 1 + } + fn
config_schema(&self) -> Vec { + vec![] + } fn health_check(&self) -> impl std::future::Future> + Send; fn supported_agents(&self) -> Vec; - fn invoke(&self, prompt: &str, work_dir: &str) -> impl std::future::Future> + Send; + fn invoke( + &self, + prompt: &str, + work_dir: &str, + ) -> impl std::future::Future> + Send; } pub trait WorkspacePlugin: Send + Sync { fn info(&self) -> PluginInfo; - fn api_version(&self) -> u32 { 1 } - fn config_schema(&self) -> Vec { vec![] } + fn api_version(&self) -> u32 { + 1 + } + fn config_schema(&self) -> Vec { + vec![] + } fn health_check(&self) -> impl std::future::Future> + Send; - fn setup_workspace(&self, config: &HashMap) -> impl std::future::Future> + Send; - fn cleanup_workspace(&self, workspace_path: &str) -> impl std::future::Future> + Send; + fn setup_workspace( + &self, + config: &HashMap, + ) -> impl std::future::Future> + Send; + fn cleanup_workspace( + &self, + workspace_path: &str, + ) -> impl std::future::Future> + Send; } pub trait TrackerPlugin: Send + Sync { fn info(&self) -> PluginInfo; - fn api_version(&self) -> u32 { 1 } - fn config_schema(&self) -> Vec { vec![] } + fn api_version(&self) -> u32 { + 1 + } + fn config_schema(&self) -> Vec { + vec![] + } fn health_check(&self) -> impl std::future::Future> + Send; - fn report_progress(&self, story_id: &str, status: &str) -> impl std::future::Future> + Send; + fn report_progress( + &self, + story_id: &str, + status: &str, + ) -> impl std::future::Future> + Send; fn sync_stories(&self) -> impl std::future::Future>> + Send; } pub trait ScmPlugin: Send + Sync { fn info(&self) -> PluginInfo; - fn api_version(&self) -> u32 { 1 } - fn config_schema(&self) -> Vec { vec![] } + fn api_version(&self) -> u32 { + 1 + } + fn config_schema(&self) -> Vec { + vec![] + } fn health_check(&self) -> impl std::future::Future> + Send; - fn create_pr(&self, title: &str, body: &str, branch: &str) -> impl std::future::Future> + Send; - fn fetch_review_comments(&self, pr_id: &str) -> impl 
std::future::Future>> + Send; - fn post_comment(&self, pr_id: &str, body: &str) -> impl std::future::Future> + Send; + fn create_pr( + &self, + title: &str, + body: &str, + branch: &str, + ) -> impl std::future::Future> + Send; + fn fetch_review_comments( + &self, + pr_id: &str, + ) -> impl std::future::Future>> + Send; + fn post_comment( + &self, + pr_id: &str, + body: &str, + ) -> impl std::future::Future> + Send; } pub trait NotifierPlugin: Send + Sync { fn info(&self) -> PluginInfo; - fn api_version(&self) -> u32 { 1 } - fn config_schema(&self) -> Vec { vec![] } + fn api_version(&self) -> u32 { + 1 + } + fn config_schema(&self) -> Vec { + vec![] + } fn health_check(&self) -> impl std::future::Future> + Send; - fn notify(&self, title: &str, message: &str, level: &str) -> impl std::future::Future> + Send; + fn notify( + &self, + title: &str, + message: &str, + level: &str, + ) -> impl std::future::Future> + Send; } pub struct BuiltinRuntime; diff --git a/crates/ralph-core/src/ports.rs b/crates/ralph-core/src/ports.rs index 6c1b961..7a3f9e0 100644 --- a/crates/ralph-core/src/ports.rs +++ b/crates/ralph-core/src/ports.rs @@ -14,10 +14,7 @@ pub trait GitOps: Send + Sync { since_hash: &str, ) -> impl std::future::Future> + Send; - fn remove_git_lock( - &self, - work_dir: &Path, - ) -> impl std::future::Future + Send; + fn remove_git_lock(&self, work_dir: &Path) -> impl std::future::Future + Send; fn load_last_rebase(&self, path: &Path) -> Option; } @@ -46,10 +43,7 @@ pub trait GuardrailStore: Send + Sync { } pub trait StateStore: Send + Sync { - fn wait_while_paused( - &self, - pause_file: &Path, - ) -> impl std::future::Future + Send; + fn wait_while_paused(&self, pause_file: &Path) -> impl std::future::Future + Send; fn check_and_clear_done(&self, done_file: &Path) -> bool; diff --git a/crates/ralph-core/src/prd.rs b/crates/ralph-core/src/prd.rs index 8fcc783..ca33cc7 100644 --- a/crates/ralph-core/src/prd.rs +++ b/crates/ralph-core/src/prd.rs @@ -145,7 +145,10 @@ 
impl Prd { } pub fn total_estimated_minutes(&self) -> u32 { - self.stories.iter().map(|story| story.estimated_minutes).sum() + self.stories + .iter() + .map(|story| story.estimated_minutes) + .sum() } pub fn validate_atomicity(&self) -> Result<(), PrdError> { @@ -176,10 +179,7 @@ impl Prd { for dep_id in &story.depends_on { if !all_ids.contains(dep_id.as_str()) { return Err(PrdError::ValidationFailed { - reason: format!( - "Story '{}' depends_on unknown id '{}'", - story.id, dep_id - ), + reason: format!("Story '{}' depends_on unknown id '{}'", story.id, dep_id), }); } } @@ -301,7 +301,10 @@ mod tests_d6_roundtrip { let prd = make_full_prd(); let json = serde_json::to_string(&prd).unwrap(); let value: serde_json::Value = serde_json::from_str(&json).unwrap(); - assert!(value.get("projectName").is_some(), "must have projectName key"); + assert!( + value.get("projectName").is_some(), + "must have projectName key" + ); assert!(value.get("stories").is_some(), "must have stories key"); let story = &value["stories"][0]; assert!(story.get("scope").is_some()); diff --git a/crates/ralph-core/src/prompt.rs b/crates/ralph-core/src/prompt.rs index 47e7025..6b4d06d 100644 --- a/crates/ralph-core/src/prompt.rs +++ b/crates/ralph-core/src/prompt.rs @@ -86,18 +86,12 @@ impl<'a> PromptBuilder<'a> { let mut prompt = String::with_capacity(8192); let mut truncated = false; - let base_prompt = - std::fs::read_to_string(self.prompt_file).unwrap_or_else(|_| { - format!( - "No prompt file found at {}", - self.prompt_file.display() - ) - }); + let base_prompt = std::fs::read_to_string(self.prompt_file) + .unwrap_or_else(|_| format!("No prompt file found at {}", self.prompt_file.display())); prompt.push_str(&base_prompt); prompt.push_str("\n\n## GUARDRAILS (READ FIRST!)\n\n"); - let guardrails_content = guardrails::read_content(self.guardrails_file) - .unwrap_or_default(); + let guardrails_content = guardrails::read_content(self.guardrails_file).unwrap_or_default(); let guardrails_section = 
truncate_section(&guardrails_content, MAX_GUARDRAILS_BYTES); if guardrails_section.len() < guardrails_content.len() { truncated = true; @@ -129,7 +123,7 @@ impl<'a> PromptBuilder<'a> { prompt.push_str("Next story to implement:\n"); let story_json = - serde_json::to_string_pretty(story).unwrap_or_else(|_| format!("{:?}", story)); + serde_json::to_string_pretty(story).unwrap_or_else(|_| format!("{story:?}")); let story_trimmed = truncate_section(&story_json, MAX_STORY_JSON_BYTES); if story_trimmed.len() < story_json.len() { truncated = true; @@ -144,22 +138,13 @@ impl<'a> PromptBuilder<'a> { prompt.push_str("\n\n---\n"); prompt.push_str("## DYNAMIC CONTEXT (this section changes per iteration)\n\n"); - prompt.push_str(&format!( - "RALPH_DIR: {}\n", - self.ralph_dir.display() - )); - prompt.push_str(&format!( - "WORK_DIR: {}\n", - self.work_dir.display() - )); + prompt.push_str(&format!("RALPH_DIR: {}\n", self.ralph_dir.display())); + prompt.push_str(&format!("WORK_DIR: {}\n", self.work_dir.display())); prompt.push_str(&format!( "Current working directory: {}\n", self.work_dir.display() )); - prompt.push_str(&format!( - "Config directory: {}\n", - self.ralph_dir.display() - )); + prompt.push_str(&format!("Config directory: {}\n", self.ralph_dir.display())); prompt.push_str(&format!("ITERATION: {}\n", self.iteration)); if let Some(checkpoint) = bulk_checkpoint { diff --git a/crates/ralph-core/src/providers/cli.rs b/crates/ralph-core/src/providers/cli.rs index d1cc153..40ca0ca 100644 --- a/crates/ralph-core/src/providers/cli.rs +++ b/crates/ralph-core/src/providers/cli.rs @@ -86,11 +86,7 @@ impl CliProvider { binary: "opencode".into(), model, build_args: |_provider, prompt, _| { - vec![ - "run".into(), - "--print-logs".into(), - prompt.into(), - ] + vec!["run".into(), "--print-logs".into(), prompt.into()] }, use_current_dir: true, } @@ -255,8 +251,7 @@ impl Provider for CliProvider { .try_wait() .ok() .flatten() - .map(|status| status.code().unwrap_or(1)) - 
.unwrap_or(1); + .map_or(1, |status| status.code().unwrap_or(1)); logger::log_info(&format!( "[{}] {} finished (exit: {exit_code})", diff --git a/crates/ralph-core/src/providers/fixture.rs b/crates/ralph-core/src/providers/fixture.rs index f7f1047..47ec025 100644 --- a/crates/ralph-core/src/providers/fixture.rs +++ b/crates/ralph-core/src/providers/fixture.rs @@ -76,11 +76,11 @@ impl FixtureProvider { } impl super::Provider for FixtureProvider { - fn name(&self) -> &str { + fn name(&self) -> &'static str { "fixture" } - fn model(&self) -> &str { + fn model(&self) -> &'static str { "deterministic" } diff --git a/crates/ralph-core/src/providers/mod.rs b/crates/ralph-core/src/providers/mod.rs index 973a534..4495d56 100644 --- a/crates/ralph-core/src/providers/mod.rs +++ b/crates/ralph-core/src/providers/mod.rs @@ -32,7 +32,9 @@ impl AgentResult { pub fn detect_rate_limit(output_lines: &[String]) -> (bool, Option<String>) { for line in output_lines.iter().rev().take(30) { let lower = line.to_lowercase(); - let is_rate_limited = RATE_LIMIT_PATTERNS.iter().any(|pattern| lower.contains(pattern)); + let is_rate_limited = RATE_LIMIT_PATTERNS + .iter() + .any(|pattern| lower.contains(pattern)); if is_rate_limited { let retry_msg = extract_retry_time(line); return (true, retry_msg); @@ -45,7 +47,7 @@ fn extract_retry_time(line: &str) -> Option<String> { if let Some(idx) = line.to_lowercase().find("try again at ") { let after = &line[idx + 13..]; - let end = after.find(|ch: char| ch == '"' || ch == '.'
|| ch == '}').unwrap_or(after.len()); + let end = after.find(['"', '.', '}']).unwrap_or(after.len()); let time_str = after[..end].trim(); if !time_str.is_empty() { return Some(time_str.to_string()); diff --git a/crates/ralph-core/src/scheduler/coordinator.rs b/crates/ralph-core/src/scheduler/coordinator.rs new file mode 100644 index 0000000..b34dfe5 --- /dev/null +++ b/crates/ralph-core/src/scheduler/coordinator.rs @@ -0,0 +1,178 @@ +use crate::config::RalphConfig; +use crate::guardrails; +use crate::prd::Prd; +use std::sync::Arc; +use thiserror::Error; +use tokio::sync::Mutex; + +#[derive(Debug, Clone, PartialEq, Eq)] +pub enum SharedArtifactUpdate { + MarkStoryPassed { + story_id: String, + }, + MarkStoryBlocked { + story_id: String, + }, + AppendGuardrail { + story_id: String, + error_message: String, + iteration: u32, + }, + AppendGuardrailContent { + story_id: String, + content: String, + }, + BlockStoryAndAddGuardrail { + story_id: String, + error_message: String, + iteration: u32, + }, + TeardownWorktree { + worktree_id: String, + branch: String, + }, +} + +#[derive(Debug, Clone, Error, PartialEq, Eq)] +#[error("{message}")] +pub struct CoordinatorError { + message: String, +} + +impl CoordinatorError { + pub fn apply(message: impl Into<String>) -> Self { + Self { + message: message.into(), + } + } +} + +#[derive(Clone)] +pub struct ArtifactCoordinator { + applier: Arc<dyn Fn(SharedArtifactUpdate) -> Result<(), CoordinatorError> + Send + Sync>, + gate: Arc<Mutex<()>>, +} + +impl ArtifactCoordinator { + pub fn new<F>(applier: F) -> Self + where + F: Fn(SharedArtifactUpdate) -> Result<(), CoordinatorError> + Send + Sync + 'static, + { + Self { + applier: Arc::new(applier), + gate: Arc::new(Mutex::new(())), + } + } + + pub fn for_loop_engine(config: RalphConfig) -> Self { + Self::new(move |update| apply_update(&config, update)) + } + + pub async fn submit(&self, update: SharedArtifactUpdate) -> Result<(), CoordinatorError> { + let _guard = self.gate.lock().await; + let applier = self.applier.clone(); +
tokio::task::spawn_blocking(move || (applier)(update)) + .await + .map_err(|err| CoordinatorError::apply(err.to_string()))? + } +} + +fn apply_update( + config: &RalphConfig, + update: SharedArtifactUpdate, +) -> Result<(), CoordinatorError> { + match update { + SharedArtifactUpdate::MarkStoryPassed { story_id } => { + update_story_state(config, &story_id, true, false) + } + SharedArtifactUpdate::MarkStoryBlocked { story_id } => { + update_story_state(config, &story_id, false, true) + } + SharedArtifactUpdate::AppendGuardrail { + story_id, + error_message, + iteration, + } => append_guardrail(config, &story_id, &error_message, iteration), + SharedArtifactUpdate::AppendGuardrailContent { story_id, content } => { + append_guardrail_content(config, &story_id, &content) + } + SharedArtifactUpdate::BlockStoryAndAddGuardrail { + story_id, + error_message, + iteration, + } => { + append_guardrail(config, &story_id, &error_message, iteration)?; + update_story_state(config, &story_id, false, true) + } + SharedArtifactUpdate::TeardownWorktree { .. 
} => Ok(()), + } +} + +fn update_story_state( + config: &RalphConfig, + story_id: &str, + passed: bool, + blocked: bool, +) -> Result<(), CoordinatorError> { + let mut prd = load_or_restore_prd(config)?; + if let Some(story) = prd.stories.iter_mut().find(|story| story.id == story_id) { + story.passes = passed; + story.blocked = blocked; + } + prd.save(&config.paths.prd_file) + .map_err(|err| CoordinatorError::apply(err.to_string())) +} + +fn append_guardrail( + config: &RalphConfig, + story_id: &str, + error_message: &str, + iteration: u32, +) -> Result<(), CoordinatorError> { + guardrails::add_guardrail( + &config.paths.guardrails_file, + story_id, + error_message, + iteration, + ) + .map_err(|err| CoordinatorError::apply(err.to_string())) +} + +fn append_guardrail_content( + config: &RalphConfig, + story_id: &str, + content: &str, +) -> Result<(), CoordinatorError> { + let mut file = std::fs::OpenOptions::new() + .create(true) + .append(true) + .open(&config.paths.guardrails_file) + .map_err(|err| CoordinatorError::apply(err.to_string()))?; + let mut payload = content.trim_start_matches('\n').to_string(); + if payload.is_empty() { + return Ok(()); + } + if !payload.starts_with("### Sign:") { + payload = format!("\n### Sign: Error in {story_id}\n{payload}"); + } + if !payload.ends_with('\n') { + payload.push('\n'); + } + use std::io::Write; + file.write_all(payload.as_bytes()) + .map_err(|err| CoordinatorError::apply(err.to_string())) +} + +fn load_or_restore_prd(config: &RalphConfig) -> Result<Prd, CoordinatorError> { + if Prd::is_valid_json(&config.paths.prd_file) { + return Prd::load(&config.paths.prd_file) + .map_err(|err| CoordinatorError::apply(err.to_string())); + } + if config.paths.prd_backup.exists() { + std::fs::copy(&config.paths.prd_backup, &config.paths.prd_file) + .map_err(|err| CoordinatorError::apply(err.to_string()))?; + return Prd::load(&config.paths.prd_file) + .map_err(|err| CoordinatorError::apply(err.to_string())); + } + Err(CoordinatorError::apply("PRD missing or
corrupted")) +} diff --git a/crates/ralph-core/src/scheduler/mod.rs b/crates/ralph-core/src/scheduler/mod.rs new file mode 100644 index 0000000..0bd1e7f --- /dev/null +++ b/crates/ralph-core/src/scheduler/mod.rs @@ -0,0 +1,66 @@ +mod coordinator; +pub mod worktree_runner; + +use crate::loop_engine::scheduler::{ + schedule as schedule_completions, MergeAction, WorktreeCompletion, +}; +use crate::prd::{Prd, UserStory}; +use std::collections::HashSet; + +pub use coordinator::{ArtifactCoordinator, CoordinatorError, SharedArtifactUpdate}; + +pub fn ready_stories(prd: &Prd) -> Vec<&UserStory> { + let passed_ids: HashSet<&str> = prd + .stories + .iter() + .filter(|story| story.passes) + .map(|story| story.id.as_str()) + .collect(); + + prd.stories + .iter() + .filter(|story| is_ready(story, &passed_ids)) + .collect() +} + +pub fn ready_story_batch(prd: &Prd) -> Vec<UserStory> { + ready_stories(prd).into_iter().cloned().collect() +} + +pub fn shared_updates_for(completions: Vec<WorktreeCompletion>) -> Vec<SharedArtifactUpdate> { + schedule_completions(completions) + .into_iter() + .filter_map(|action| match action { + MergeAction::UpdateStoryStatus { + story_id, + passed, + blocked: _, + } if passed => Some(SharedArtifactUpdate::MarkStoryPassed { story_id }), + MergeAction::UpdateStoryStatus { + story_id, + passed: _, + blocked, + } if blocked => Some(SharedArtifactUpdate::MarkStoryBlocked { story_id }), + MergeAction::AppendGuardrail { story_id, content } => { + Some(SharedArtifactUpdate::AppendGuardrailContent { story_id, content }) + } + MergeAction::TeardownWorktree { + worktree_id, + branch, + } => Some(SharedArtifactUpdate::TeardownWorktree { + worktree_id, + branch, + }), + _ => None, + }) + .collect() +} + +fn is_ready(story: &UserStory, passed_ids: &HashSet<&str>) -> bool { + !story.passes + && !story.blocked + && story + .depends_on + .iter() + .all(|dependency| passed_ids.contains(dependency.as_str())) +} diff --git a/crates/ralph-core/src/scheduler/worktree_runner.rs
b/crates/ralph-core/src/scheduler/worktree_runner.rs new file mode 100644 index 0000000..1dbda1c --- /dev/null +++ b/crates/ralph-core/src/scheduler/worktree_runner.rs @@ -0,0 +1,152 @@ +use crate::config::RalphConfig; +use crate::prd::UserStory; +use crate::worktree::{self, WorktreeError, WorktreeInfo}; +use std::future::Future; +use std::path::{Path, PathBuf}; + +const WORKTREE_BASE: &str = ".loopforge/worktrees"; +const ARTIFACT_BASE: &str = ".loopforge/artifacts"; + +#[derive(Debug, Clone)] +pub struct ProvisionedWorktree { + pub story_id: String, + pub worktree_path: PathBuf, + pub branch: String, + pub info: WorktreeInfo, +} + +pub fn worktree_path_for(work_dir: &Path, story_id: &str) -> PathBuf { + work_dir.join(WORKTREE_BASE).join(sanitize(story_id)) +} + +pub fn branch_for(story_id: &str) -> String { + format!("loopforge/{}", sanitize(story_id)) +} + +pub fn artifact_dir_for(worktree_path: &Path, story_id: &str) -> PathBuf { + worktree_path.join(ARTIFACT_BASE).join(sanitize(story_id)) +} + +fn sanitize(story_id: &str) -> String { + story_id + .to_lowercase() + .replace(|c: char| !c.is_alphanumeric() && c != '-', "-") +} + +pub async fn provision( + work_dir: &Path, + story_id: &str, +) -> Result<ProvisionedWorktree, WorktreeError> { + let rel_path = format!("{WORKTREE_BASE}/{}", sanitize(story_id)); + let branch = branch_for(story_id); + let info = worktree::create(work_dir, &rel_path, &branch).await?; + Ok(ProvisionedWorktree { + story_id: story_id.to_string(), + worktree_path: PathBuf::from(&info.path), + branch, + info, + }) +} + +pub async fn run_in_worktree<F, Fut, T>( + work_dir: &Path, + story: &UserStory, + worker: F, +) -> Result<T, WorktreeError> +where + F: FnOnce(ProvisionedWorktree) -> Fut, + Fut: Future<Output = T>, +{ + let provisioned = provision(work_dir, &story.id).await?; + Ok(worker(provisioned).await) +} + +pub fn prepare_worker_config( + config: &RalphConfig, + provisioned: &ProvisionedWorktree, +) -> Result<RalphConfig, WorktreeError> { + let artifact_dir = artifact_dir_for(&provisioned.worktree_path, &provisioned.story_id); +
std::fs::create_dir_all(&artifact_dir)?; + let mut worker = config.clone(); + worker.paths.ralph_dir = artifact_dir.clone(); + worker.paths.work_dir = provisioned.worktree_path.clone(); + worker.paths.prd_file = artifact_dir.join("prd.json"); + worker.paths.prd_backup = artifact_dir.join("prd.backup.json"); + worker.paths.prompt_file = artifact_dir.join("prompt.md"); + worker.paths.progress_file = artifact_dir.join("progress.log"); + worker.paths.guardrails_file = artifact_dir.join("guardrails.md"); + worker.paths.error_log = artifact_dir.join("error.log"); + worker.paths.activity_log = artifact_dir.join("activity.log"); + worker.paths.state_file = artifact_dir.join("state.json"); + worker.paths.pause_file = artifact_dir.join("pause"); + worker.paths.done_file = artifact_dir.join("done"); + worker.paths.failure_memory_file = artifact_dir.join("failure_memory.json"); + worker.paths.last_rebase_file = artifact_dir.join("last_rebase"); + worker.paths.codex_output_log = artifact_dir.join("codex_output.log"); + copy_or_init(&config.paths.prd_file, &worker.paths.prd_file)?; + let backup_source = if config.paths.prd_backup.exists() { + &config.paths.prd_backup + } else { + &config.paths.prd_file + }; + copy_or_init(backup_source, &worker.paths.prd_backup)?; + copy_or_init(&config.paths.prompt_file, &worker.paths.prompt_file)?; + copy_or_init(&config.paths.guardrails_file, &worker.paths.guardrails_file)?; + copy_or_init( + &config.paths.failure_memory_file, + &worker.paths.failure_memory_file, + )?; + Ok(worker) +} + +fn copy_or_init(source: &Path, target: &Path) -> std::io::Result<()> { + if source.exists() { + std::fs::copy(source, target)?; + } else { + std::fs::write(target, [])?; + } + Ok(()) +} + +#[cfg(test)] +mod tests { + use super::*; + use std::path::Path; + + #[test] + fn different_stories_get_different_paths() { + let work_dir = Path::new("/repo"); + let path_a = worktree_path_for(work_dir, "S-001"); + let path_b = worktree_path_for(work_dir, "S-002"); + 
assert_ne!(path_a, path_b); + } + + #[test] + fn same_story_gets_same_path() { + let work_dir = Path::new("/repo"); + let first = worktree_path_for(work_dir, "S-001"); + let second = worktree_path_for(work_dir, "S-001"); + assert_eq!(first, second); + } + + #[test] + fn path_contains_sanitized_story_id() { + let work_dir = Path::new("/repo"); + let path = worktree_path_for(work_dir, "S-001"); + assert!(path.to_string_lossy().contains("s-001")); + assert!(path.to_string_lossy().contains(WORKTREE_BASE)); + } + + #[test] + fn branch_contains_story_id() { + let branch = branch_for("S-015"); + assert_eq!(branch, "loopforge/s-015"); + } + + #[test] + fn sanitize_strips_special_chars() { + assert_eq!(sanitize("S-001"), "s-001"); + assert_eq!(sanitize("S_002"), "s-002"); + assert_eq!(sanitize("My Story!"), "my-story-"); + } +} diff --git a/crates/ralph-core/src/state.rs b/crates/ralph-core/src/state.rs index 392816f..1b1f1a4 100644 --- a/crates/ralph-core/src/state.rs +++ b/crates/ralph-core/src/state.rs @@ -1,6 +1,6 @@ use crate::logger; use std::path::Path; -use tokio::time::{Duration, sleep}; +use tokio::time::{sleep, Duration}; pub fn is_paused(pause_file: &Path) -> bool { pause_file.exists() diff --git a/crates/ralph-core/src/verification.rs b/crates/ralph-core/src/verification.rs index 22c29ac..cd9701b 100644 --- a/crates/ralph-core/src/verification.rs +++ b/crates/ralph-core/src/verification.rs @@ -4,7 +4,6 @@ use serde::{Deserialize, Serialize}; use std::collections::hash_map::DefaultHasher; use std::hash::{Hash, Hasher}; use std::path::Path; -use tokio::process::Command; const DENIED_PATTERNS: &[&str] = &[ "rm -rf /", @@ -34,10 +33,7 @@ pub struct VerificationResult { pub raw_output: String, } -pub async fn run_verification( - story: &UserStory, - work_dir: &Path, -) -> VerificationResult { +pub async fn run_verification(story: &UserStory, work_dir: &Path) -> VerificationResult { let commands = &story.verification.commands; if commands.is_empty() { return 
VerificationResult { @@ -66,9 +62,7 @@ pub async fn run_verification( continue; } - let result = Command::new("sh") - .arg("-c") - .arg(cmd) + let result = crate::platform::shell_command(cmd) .current_dir(work_dir) .output() .await; @@ -129,42 +123,43 @@ fn parse_diagnostics(output: &str) -> Vec<Diagnostic> { } fn parse_typescript(output: &str) -> Vec<Diagnostic> { - let pattern = Regex::new( - r"(?m)^(.+?)\((\d+),(\d+)\):\s*error\s+(TS\d+):\s*(.+)$" - ).expect("valid regex"); + let pattern = + Regex::new(r"(?m)^(.+?)\((\d+),(\d+)\):\s*error\s+(TS\d+):\s*(.+)$").expect("valid regex"); - pattern.captures_iter(output).map(|cap| { - Diagnostic { + pattern + .captures_iter(output) + .map(|cap| Diagnostic { file: Some(cap[1].to_string()), line: cap[2].parse().ok(), col: cap[3].parse().ok(), error_type: cap[4].to_string(), message: cap[5].to_string(), source_context: None, - } - }).collect() + }) + .collect() } fn parse_cargo(output: &str) -> Vec<Diagnostic> { - let pattern = Regex::new( - r"(?m)^error\[([A-Z]\d+)\]:\s*(.+)\n\s*-->\s*(.+?):(\d+):(\d+)" - ).expect("valid regex"); + let pattern = Regex::new(r"(?m)^error\[([A-Z]\d+)\]:\s*(.+)\n\s*-->\s*(.+?):(\d+):(\d+)") + .expect("valid regex"); - pattern.captures_iter(output).map(|cap| { - Diagnostic { + pattern + .captures_iter(output) + .map(|cap| Diagnostic { + file: Some(cap[3].to_string()), + line: cap[4].parse().ok(), + col: cap[5].parse().ok(), + error_type: cap[1].to_string(), + message: cap[2].to_string(), + source_context: None, - } - }).collect() + }) + .collect() } fn parse_eslint(output: &str) -> Vec<Diagnostic> { let file_pattern = Regex::new(r"(?m)^(/[^\s]+|[A-Za-z]:\\[^\s]+)$").expect("valid regex"); - let rule_pattern = Regex::new(r"(?m)^\s+(\d+):(\d+)\s+(error|warning)\s+(.+?)\s{2,}(\S+)$").expect("valid regex"); + let rule_pattern = Regex::new(r"(?m)^\s+(\d+):(\d+)\s+(error|warning)\s+(.+?)\s{2,}(\S+)$") + .expect("valid regex"); let mut results = Vec::new(); let mut current_file: Option<String> = None; @@ -187,18 +182,20 @@ fn
parse_jest(output: &str) -> Vec<Diagnostic> { - let pattern = Regex::new(r"(?m)●\s+(.+?)\s*\n.*?\n\s+at\s+.*?\((.+?):(\d+):(\d+)\)").expect("valid regex"); + let pattern = Regex::new(r"(?m)●\s+(.+?)\s*\n.*?\n\s+at\s+.*?\((.+?):(\d+):(\d+)\)") + .expect("valid regex"); - pattern.captures_iter(output).map(|cap| { - Diagnostic { + pattern + .captures_iter(output) + .map(|cap| Diagnostic { file: Some(cap[2].to_string()), line: cap[3].parse().ok(), col: cap[4].parse().ok(), error_type: "test_failure".to_string(), message: cap[1].to_string(), source_context: None, - } - }).collect() + }) + .collect() } pub fn build_retry_prompt( @@ -213,7 +210,7 @@ prompt.push_str("You are fixing a story that failed verification. "); prompt.push_str("Study the diagnostics carefully and fix only the identified issues.\n\n"); - prompt.push_str(&format!("## Story Context\n\n")); + prompt.push_str("## Story Context\n\n"); prompt.push_str(&format!("**ID:** {}\n", story.id)); prompt.push_str(&format!("**Title:** {}\n", story.title)); if let Some(desc) = &story.description { @@ -292,7 +289,9 @@ pub fn validate_command(cmd: &str) -> Result<(), String> { let lower = cmd.to_lowercase(); for pattern in DENIED_PATTERNS { if lower.contains(pattern) { - return Err(format!("blocked: command matches denied pattern '{pattern}'")); + return Err(format!( + "blocked: command matches denied pattern '{pattern}'" + )); } } Ok(()) diff --git a/crates/ralph-core/src/worktree.rs b/crates/ralph-core/src/worktree.rs new file mode 100644 index 0000000..7c7f6c8 --- /dev/null +++ b/crates/ralph-core/src/worktree.rs @@ -0,0 +1,214 @@ +use serde::{Deserialize, Serialize}; +use std::path::{Path, PathBuf}; +use thiserror::Error; +use tokio::process::Command; + +#[derive(Debug, Error)] +pub enum WorktreeError { + #[error("git error: {0}")] + Git(String), + #[error("io error: {0}")] + Io(#[from] std::io::Error), + #[error("worktree not found: {0}")] + NotFound(String), +} + +#[derive(Debug, Clone, Default,
Serialize, Deserialize, PartialEq, Eq)] +#[serde(default, rename_all = "camelCase")] +pub struct WorktreeInfo { + pub path: String, + pub branch: String, + pub head: String, + pub is_bare: bool, + pub is_locked: bool, + pub is_prunable: bool, +} + +#[derive(Debug, Clone, Default, Serialize, Deserialize, PartialEq, Eq)] +#[serde(default, rename_all = "camelCase")] +pub struct WorktreeDiff { + pub files_changed: u32, + pub insertions: u32, + pub deletions: u32, +} + +pub async fn create( + work_dir: &Path, + worktree_path: &str, + branch: &str, +) -> Result { + let create_branch = ["worktree", "add", worktree_path, "-b", branch]; + if run_git(work_dir, &create_branch).await.is_err() { + let existing_branch = ["worktree", "add", worktree_path, branch]; + run_git(work_dir, &existing_branch).await?; + } + inspect(work_dir, Path::new(worktree_path)).await +} + +pub async fn list(work_dir: &Path) -> Result, WorktreeError> { + let output = run_git(work_dir, &["worktree", "list", "--porcelain"]).await?; + Ok(parse_worktree_list(&output)) +} + +pub async fn remove( + work_dir: &Path, + worktree_path: &Path, + branch: Option<&str>, +) -> Result<(), WorktreeError> { + let target = worktree_path.to_string_lossy().into_owned(); + run_git(work_dir, &["worktree", "remove", &target, "--force"]).await?; + if let Some(branch_name) = branch { + let _ = run_git(work_dir, &["branch", "-d", branch_name]).await; + } + Ok(()) +} + +pub async fn inspect(work_dir: &Path, worktree_path: &Path) -> Result { + let target = normalize_path(work_dir, worktree_path); + list(work_dir) + .await? 
+ .into_iter() + .find(|worktree| worktree.path == target) + .ok_or(WorktreeError::NotFound(target)) +} + +pub async fn diff( + work_dir: &Path, + from_ref: &str, + to_ref: &str, +) -> Result { + let range = format!("{from_ref}..{to_ref}"); + let output = run_git(work_dir, &["diff", "--numstat", &range]).await?; + Ok(parse_diff_numstat(&output)) +} + +pub fn parse_worktree_list(raw: &str) -> Vec { + let mut worktrees = Vec::new(); + let mut current = WorktreeInfo::default(); + for line in raw.lines() { + if let Some(path) = line.strip_prefix("worktree ") { + if !current.path.is_empty() { + worktrees.push(current); + } + current = WorktreeInfo { + path: path.to_string(), + ..WorktreeInfo::default() + }; + continue; + } + if let Some(head) = line.strip_prefix("HEAD ") { + current.head = head.to_string(); + } else if let Some(branch) = line.strip_prefix("branch ") { + current.branch = branch.trim_start_matches("refs/heads/").to_string(); + } else if line == "bare" { + current.is_bare = true; + } else if line == "locked" { + current.is_locked = true; + } else if line == "prunable" { + current.is_prunable = true; + } + } + if !current.path.is_empty() { + worktrees.push(current); + } + worktrees +} + +pub fn parse_diff_numstat(raw: &str) -> WorktreeDiff { + raw.lines().fold(WorktreeDiff::default(), |mut diff, line| { + let mut parts = line.splitn(3, '\t'); + let insertions = parts + .next() + .unwrap_or_default() + .parse::() + .unwrap_or_default(); + let deletions = parts + .next() + .unwrap_or_default() + .parse::() + .unwrap_or_default(); + if parts.next().is_some() { + diff.files_changed += 1; + diff.insertions += insertions; + diff.deletions += deletions; + } + diff + }) +} + +pub async fn merge_into_main(work_dir: &Path, branch: &str) -> Result { + let merge_result = run_git(work_dir, &["merge", "--ff-only", branch]).await; + match merge_result { + Ok(_) => Ok(true), + Err(WorktreeError::Git(msg)) if msg.contains("Not possible to fast-forward") => Ok(false), + 
Err(err) => Err(err), + } +} + +async fn run_git(work_dir: &Path, args: &[&str]) -> Result { + let output = Command::new("git") + .args(args) + .current_dir(work_dir) + .output() + .await?; + if output.status.success() { + return Ok(String::from_utf8_lossy(&output.stdout).trim().to_string()); + } + let stderr = String::from_utf8_lossy(&output.stderr).trim().to_string(); + Err(WorktreeError::Git(if stderr.is_empty() { + String::from_utf8_lossy(&output.stdout).trim().to_string() + } else { + stderr + })) +} + +fn normalize_path(work_dir: &Path, worktree_path: &Path) -> String { + let path = if worktree_path.is_absolute() { + PathBuf::from(worktree_path) + } else { + work_dir.join(worktree_path) + }; + path.to_string_lossy().into_owned() +} + +#[cfg(test)] +mod tests { + use super::{parse_diff_numstat, parse_worktree_list, WorktreeDiff, WorktreeInfo}; + + #[test] + fn parses_worktree_porcelain() { + let raw = "worktree /repo\nHEAD abc\nbranch refs/heads/main\n\nworktree /repo/w1\nHEAD def\nbranch refs/heads/feature\nlocked\nprunable\n"; + assert_eq!( + parse_worktree_list(raw), + vec![ + WorktreeInfo { + path: "/repo".into(), + branch: "main".into(), + head: "abc".into(), + ..WorktreeInfo::default() + }, + WorktreeInfo { + path: "/repo/w1".into(), + branch: "feature".into(), + head: "def".into(), + is_locked: true, + is_prunable: true, + ..WorktreeInfo::default() + }, + ] + ); + } + + #[test] + fn parses_numstat_safely() { + let raw = "10\t2\ta.rs\n-\t-\tb.bin\n"; + assert_eq!( + parse_diff_numstat(raw), + WorktreeDiff { + files_changed: 2, + insertions: 10, + deletions: 2 + } + ); + } +} diff --git a/crates/ralph-core/tests/artifact_coordinator.rs b/crates/ralph-core/tests/artifact_coordinator.rs new file mode 100644 index 0000000..bdfdbdb --- /dev/null +++ b/crates/ralph-core/tests/artifact_coordinator.rs @@ -0,0 +1,122 @@ +use ralph_core::config::RalphConfig; +use ralph_core::prd::Prd; +use ralph_core::scheduler::{ArtifactCoordinator, SharedArtifactUpdate}; +use 
std::sync::atomic::{AtomicUsize, Ordering}; +use std::sync::Arc; +use tempfile::tempdir; +use tokio::sync::Mutex; + +#[derive(Clone, Debug, PartialEq, Eq)] +enum Event { + Start(String), + Finish(String), +} + +#[tokio::test(flavor = "multi_thread", worker_threads = 4)] +async fn concurrent_updates_are_applied_one_at_a_time() { + let events = Arc::new(Mutex::new(Vec::new())); + let active = Arc::new(AtomicUsize::new(0)); + let max_active = Arc::new(AtomicUsize::new(0)); + let coordinator = { + let events = events.clone(); + let active = active.clone(); + let max_active = max_active.clone(); + ArtifactCoordinator::new(move |update| { + let story_id = match update { + SharedArtifactUpdate::MarkStoryBlocked { story_id } => story_id, + _ => { + return Err(ralph_core::scheduler::CoordinatorError::apply( + "unexpected update", + )) + } + }; + let now_active = active.fetch_add(1, Ordering::SeqCst) + 1; + max_active.fetch_max(now_active, Ordering::SeqCst); + events.blocking_lock().push(Event::Start(story_id.clone())); + std::thread::sleep(std::time::Duration::from_millis(20)); + events.blocking_lock().push(Event::Finish(story_id)); + active.fetch_sub(1, Ordering::SeqCst); + Ok(()) + }) + }; + + let mut tasks = Vec::new(); + for index in 0..6 { + let coordinator = coordinator.clone(); + tasks.push(tokio::spawn(async move { + coordinator + .submit(SharedArtifactUpdate::MarkStoryBlocked { + story_id: format!("S-{index:03}"), + }) + .await + })); + } + for task in tasks { + task.await + .expect("task should join") + .expect("update should apply"); + } + + assert_eq!(max_active.load(Ordering::SeqCst), 1); + let events = events.lock().await.clone(); + assert_eq!(events.len(), 12); + for pair in events.chunks_exact(2) { + match (&pair[0], &pair[1]) { + (Event::Start(started), Event::Finish(finished)) => assert_eq!(started, finished), + other => panic!("unexpected event sequence: {other:?}"), + } + } +} + +#[tokio::test(flavor = "multi_thread", worker_threads = 4)] +async fn 
loop_engine_gutter_skip_updates_flow_through_the_coordinator() { + let temp = tempdir().expect("tempdir should exist"); + let ralph_dir = temp.path().join("ralph"); + std::fs::create_dir_all(&ralph_dir).expect("ralph dir should exist"); + let mut config = RalphConfig::from_defaults(&ralph_dir); + config.paths.prd_file = ralph_dir.join("prd.json"); + config.paths.prd_backup = ralph_dir.join("prd.backup.json"); + config.paths.guardrails_file = ralph_dir.join("guardrails.md"); + + let prd = r#"{ + "project": "loopforge", + "stories": [ + { "id": "S-016", "title": "Serialize", "acceptanceCriteria": ["serialize"] }, + { "id": "S-017", "title": "Neighbor", "acceptanceCriteria": ["neighbor"] } + ] + }"#; + std::fs::write(&config.paths.prd_file, prd).expect("prd should write"); + std::fs::write(&config.paths.prd_backup, prd).expect("prd backup should write"); + + let coordinator = ArtifactCoordinator::for_loop_engine(config.clone()); + let first = coordinator.submit(SharedArtifactUpdate::BlockStoryAndAddGuardrail { + story_id: "S-016".into(), + error_message: "Exceeded gutter threshold".into(), + iteration: 1, + }); + let second = coordinator.submit(SharedArtifactUpdate::BlockStoryAndAddGuardrail { + story_id: "S-017".into(), + error_message: "Exceeded gutter threshold".into(), + iteration: 2, + }); + let (first, second) = tokio::join!(first, second); + first.expect("first update should apply"); + second.expect("second update should apply"); + + let updated = Prd::load(&config.paths.prd_file).expect("prd should load"); + assert!(updated.stories.iter().all(|story| story.blocked)); + let guardrails = + std::fs::read_to_string(&config.paths.guardrails_file).expect("guardrails should exist"); + assert!(guardrails.contains("Sign: Error in S-016")); + assert!(guardrails.contains("Sign: Error in S-017")); + + let source = std::fs::read_to_string(format!( + "{}/src/loop_engine/mod.rs", + env!("CARGO_MANIFEST_DIR") + )) + .expect("loop engine source should read"); + 
assert!(source.contains("artifact_coordinator")); + assert!(source.contains(".submit(")); + assert!(source.contains("SharedArtifactUpdate::BlockStoryAndAddGuardrail")); + assert!(!source.contains("guardrails::add_guardrail")); +} diff --git a/crates/ralph-core/tests/loop_fixture_runner.rs b/crates/ralph-core/tests/loop_fixture_runner.rs index 48538bf..caca2f2 100644 --- a/crates/ralph-core/tests/loop_fixture_runner.rs +++ b/crates/ralph-core/tests/loop_fixture_runner.rs @@ -72,7 +72,11 @@ async fn happy_path_emits_deterministic_events() { let result = ralph_core::loop_engine::run(&config, &provider, shutdown, &sink).await; assert!(result.is_ok(), "loop should complete without error"); - assert_eq!(provider.call_count(), 1, "happy path should run exactly one iteration"); + assert_eq!( + provider.call_count(), + 1, + "happy path should run exactly one iteration" + ); let types = sink.event_types(); assert!( @@ -120,8 +124,14 @@ async fn loop_failure_stops_after_gutter_threshold() { ); let prd = ralph_core::prd::Prd::load(&config.paths.prd_file).unwrap(); - assert!(!prd.stories[0].passes, "failed story should not be marked passed"); - assert!(prd.stories[0].blocked, "failed story should be blocked after gutter"); + assert!( + !prd.stories[0].passes, + "failed story should not be marked passed" + ); + assert!( + prd.stories[0].blocked, + "failed story should be blocked after gutter" + ); } #[tokio::test] diff --git a/crates/ralph-core/tests/parallel_scheduler.rs b/crates/ralph-core/tests/parallel_scheduler.rs new file mode 100644 index 0000000..dd62033 --- /dev/null +++ b/crates/ralph-core/tests/parallel_scheduler.rs @@ -0,0 +1,183 @@ +use anyhow::Result; +use ralph_core::config::{PathConfig, RalphConfig, TuningConfig}; +use ralph_core::events::NoopEventSink; +use ralph_core::prd::Prd; +use ralph_core::providers::{AgentResult, Provider}; +use std::path::{Path, PathBuf}; +use std::process::Command; +use std::sync::atomic::AtomicBool; +use std::sync::Arc; +use 
tokio::sync::Barrier;
+use tokio::time::{timeout, Duration};
+
+#[derive(Clone)]
+struct ParallelProbeProvider {
+    barrier: Arc<Barrier>,
+    events: Arc<std::sync::Mutex<Vec<(String, String, String)>>>,
+}
+
+impl Provider for ParallelProbeProvider {
+    fn name(&self) -> &str {
+        "probe"
+    }
+
+    fn model(&self) -> &str {
+        "test"
+    }
+
+    async fn run_agent(
+        &self,
+        _prompt: &str,
+        story_id: &str,
+        work_dir: &Path,
+        _stall_timeout_secs: u64,
+        _shutdown_flag: Arc<AtomicBool>,
+        _output_log: &Path,
+    ) -> Result<AgentResult> {
+        self.events.lock().unwrap().push((
+            "start".into(),
+            story_id.to_string(),
+            work_dir.display().to_string(),
+        ));
+        self.barrier.wait().await;
+        tokio::time::sleep(Duration::from_millis(25)).await;
+        self.events.lock().unwrap().push((
+            "finish".into(),
+            story_id.to_string(),
+            work_dir.display().to_string(),
+        ));
+        Ok(AgentResult {
+            exit_code: 0,
+            stall_killed: false,
+            output_lines: vec![format!("finished {story_id}")],
+            rate_limited: false,
+            retry_after_message: None,
+        })
+    }
+}
+
+fn git(repo_dir: &Path, args: &[&str]) {
+    let output = Command::new("git")
+        .args(args)
+        .current_dir(repo_dir)
+        .output()
+        .expect("git should run");
+    assert!(
+        output.status.success(),
+        "git {:?} failed: {}",
+        args,
+        String::from_utf8_lossy(&output.stderr)
+    );
+}
+
+fn init_repo() -> (tempfile::TempDir, PathBuf) {
+    let temp_dir = tempfile::tempdir().expect("tempdir");
+    let repo_dir = temp_dir.path().join("repo");
+    std::fs::create_dir(&repo_dir).expect("repo dir");
+    git(&repo_dir, &["init"]);
+    git(&repo_dir, &["config", "user.name", "Test"]);
+    git(&repo_dir, &["config", "user.email", "test@example.com"]);
+    std::fs::write(repo_dir.join("README.md"), "init\n").expect("readme");
+    git(&repo_dir, &["add", "README.md"]);
+    git(&repo_dir, &["commit", "-m", "init"]);
+    (temp_dir, repo_dir.canonicalize().expect("canonicalize"))
+}
+
+fn build_config(root: &Path, repo_dir: &Path) -> RalphConfig {
+    let ralph_dir = root.join("ralph");
+    std::fs::create_dir_all(&ralph_dir).expect("ralph dir");
+    let prompt_file =
ralph_dir.join("prompt.md"); + let guardrails_file = ralph_dir.join("guardrails.md"); + std::fs::write(&prompt_file, "# Prompt\n").expect("prompt"); + std::fs::write(&guardrails_file, "# Guardrails\n").expect("guardrails"); + RalphConfig { + paths: PathConfig { + ralph_dir: ralph_dir.clone(), + work_dir: repo_dir.to_path_buf(), + prd_file: ralph_dir.join("prd.json"), + prd_backup: ralph_dir.join("prd.backup.json"), + prompt_file, + progress_file: ralph_dir.join("progress.log"), + guardrails_file, + error_log: ralph_dir.join("error.log"), + activity_log: ralph_dir.join("activity.log"), + state_file: ralph_dir.join("state.json"), + pause_file: ralph_dir.join("pause"), + done_file: ralph_dir.join("done"), + failure_memory_file: ralph_dir.join("failure_memory.json"), + last_rebase_file: ralph_dir.join("last_rebase"), + codex_output_log: ralph_dir.join("codex_output.log"), + }, + tuning: TuningConfig { + max_iterations: 2, + rate_limit_wait_secs: 1, + gutter_threshold: 3, + stall_timeout_secs: 10, + cooldown_secs: 0, + max_verification_retries: 1, + test_command: None, + }, + services: None, + } +} + +fn install_prd(config: &RalphConfig) { + let prd = r#"{ + "project": "loopforge", + "stories": [ + { + "id": "S-017A", + "title": "First", + "acceptanceCriteria": ["first"], + "verification": { "commands": ["true"] } + }, + { + "id": "S-017B", + "title": "Second", + "acceptanceCriteria": ["second"], + "verification": { "commands": ["true"] } + } + ] + }"#; + std::fs::write(&config.paths.prd_file, prd).expect("prd"); + std::fs::write(&config.paths.prd_backup, prd).expect("prd backup"); +} + +#[tokio::test(flavor = "multi_thread", worker_threads = 4)] +async fn ready_stories_execute_in_parallel_worktrees() { + let (temp_dir, repo_dir) = init_repo(); + let config = build_config(temp_dir.path(), &repo_dir); + install_prd(&config); + let provider = ParallelProbeProvider { + barrier: Arc::new(Barrier::new(2)), + events: Arc::new(std::sync::Mutex::new(Vec::new())), + }; + let 
shutdown = Arc::new(AtomicBool::new(false));
+    let sink = NoopEventSink;
+
+    timeout(
+        Duration::from_secs(2),
+        ralph_core::loop_engine::run(&config, &provider, shutdown, &sink),
+    )
+    .await
+    .expect("parallel run should not hang")
+    .expect("loop run should succeed");
+
+    let events = provider.events.lock().unwrap().clone();
+    assert_eq!(events.len(), 4, "expected start/finish for both stories");
+    assert_eq!(events[0].0, "start");
+    assert_eq!(events[1].0, "start");
+    let work_dirs: std::collections::HashSet<String> =
+        events.iter().take(2).map(|event| event.2.clone()).collect();
+    assert_eq!(work_dirs.len(), 2, "stories should use distinct worktrees");
+    assert!(
+        work_dirs
+            .iter()
+            .all(|dir| dir.contains(".loopforge/worktrees")),
+        "worktree paths should be provisioned under .loopforge/worktrees: {work_dirs:?}"
+    );
+
+    let updated = Prd::load(&config.paths.prd_file).expect("updated prd");
+    assert!(updated.stories.iter().all(|story| story.passes));
+    assert_eq!(updated.pending_count(), 0);
+}
diff --git a/crates/ralph-core/tests/scheduler_merge.rs b/crates/ralph-core/tests/scheduler_merge.rs
new file mode 100644
index 0000000..6c0a444
--- /dev/null
+++ b/crates/ralph-core/tests/scheduler_merge.rs
@@ -0,0 +1,111 @@
+use ralph_core::prd::{Complexity, Prd, Priority, ScopeSpec, UserStory, VerificationSpec};
+use ralph_core::scheduler;
+
+fn story(id: &str) -> UserStory {
+    UserStory {
+        id: id.to_string(),
+        title: format!("Story {id}"),
+        description: None,
+        acceptance_criteria: vec!["works".to_string()],
+        scope: ScopeSpec::default(),
+        verification: VerificationSpec::default(),
+        commit_message: None,
+        priority: Priority::Medium,
+        estimated_complexity: Complexity::Medium,
+        estimated_minutes: 0,
+        depends_on: Vec::new(),
+        passes: false,
+        blocked: false,
+        attempts: 0,
+        notes: None,
+    }
+}
+
+fn prd(stories: Vec<UserStory>) -> Prd {
+    Prd {
+        project_name: "loopforge".to_string(),
+        feature: String::new(),
+        working_directory: String::new(),
+        branch_name: None,
+        stories,
+        generated_at: None,
+    }
+}
+
+fn scheduled_ids(prd: &Prd) -> Vec<String> {
+    scheduler::ready_stories(prd)
+        .into_iter()
+        .map(|story| story.id.clone())
+        .collect()
+}
+
+fn merge_schedule(existing: &[String], incoming: &[String]) -> Vec<String> {
+    let mut merged = existing.to_vec();
+
+    for story_id in incoming {
+        if !merged.contains(story_id) {
+            merged.push(story_id.clone());
+        }
+    }
+
+    merged
+}
+
+#[test]
+fn merge_schedule_keeps_existing_queue_when_new_ready_work_arrives() {
+    let mut foundation = story("S-001");
+    foundation.passes = true;
+
+    let mut queued = story("S-002");
+    queued.depends_on = vec!["S-001".to_string()];
+
+    let initial_schedule = scheduled_ids(&prd(vec![foundation.clone(), queued.clone()]));
+
+    let independent = story("S-003");
+    let mut unlocked = story("S-004");
+    unlocked.depends_on = vec!["S-002".to_string()];
+
+    let incoming_schedule = scheduled_ids(&prd(vec![foundation, queued, independent, unlocked]));
+    let merged = merge_schedule(&initial_schedule, &incoming_schedule);
+
+    assert_eq!(initial_schedule, vec!["S-002"]);
+    assert_eq!(incoming_schedule, vec!["S-002", "S-003"]);
+    assert_eq!(merged, vec!["S-002", "S-003"]);
+}
+
+#[test]
+fn merge_schedule_deduplicates_overlapping_ready_sets_in_story_order() {
+    let mut foundation = story("S-001");
+    foundation.passes = true;
+
+    let mut queued_first = story("S-002");
+    queued_first.depends_on = vec!["S-001".to_string()];
+
+    let queued_second = story("S-003");
+    let initial_schedule = scheduled_ids(&prd(vec![
+        foundation.clone(),
+        queued_first.clone(),
+        queued_second.clone(),
+    ]));
+
+    queued_first.passes = true;
+
+    let mut overlapping = story("S-004");
+    overlapping.depends_on = vec!["S-001".to_string()];
+
+    let mut newly_unlocked = story("S-005");
+    newly_unlocked.depends_on = vec!["S-002".to_string()];
+
+    let incoming_schedule = scheduled_ids(&prd(vec![
+        foundation,
+        queued_first,
+        queued_second,
+        overlapping,
+        newly_unlocked,
+    ]));
+    let merged = merge_schedule(&initial_schedule, &incoming_schedule);
+
+    assert_eq!(initial_schedule, vec!["S-002", "S-003"]);
+    assert_eq!(incoming_schedule, vec!["S-003", "S-004", "S-005"]);
+    assert_eq!(merged, vec!["S-002", "S-003", "S-004", "S-005"]);
+}
diff --git a/crates/ralph-core/tests/scheduler_ready_sets.rs b/crates/ralph-core/tests/scheduler_ready_sets.rs
new file mode 100644
index 0000000..4562f53
--- /dev/null
+++ b/crates/ralph-core/tests/scheduler_ready_sets.rs
@@ -0,0 +1,96 @@
+use ralph_core::prd::{Complexity, Prd, Priority, ScopeSpec, UserStory, VerificationSpec};
+use ralph_core::scheduler;
+
+fn story(id: &str) -> UserStory {
+    UserStory {
+        id: id.to_string(),
+        title: format!("Story {id}"),
+        description: None,
+        acceptance_criteria: vec!["works".to_string()],
+        scope: ScopeSpec::default(),
+        verification: VerificationSpec::default(),
+        commit_message: None,
+        priority: Priority::Medium,
+        estimated_complexity: Complexity::Medium,
+        estimated_minutes: 0,
+        depends_on: Vec::new(),
+        passes: false,
+        blocked: false,
+        attempts: 0,
+        notes: None,
+    }
+}
+
+fn prd(stories: Vec<UserStory>) -> Prd {
+    Prd {
+        project_name: "loopforge".to_string(),
+        feature: String::new(),
+        working_directory: String::new(),
+        branch_name: None,
+        stories,
+        generated_at: None,
+    }
+}
+
+#[test]
+fn ready_set_excludes_blocked_and_unsatisfied_dependencies() {
+    let mut completed = story("S-001");
+    completed.passes = true;
+
+    let ready = story("S-002");
+
+    let mut blocked = story("S-003");
+    blocked.blocked = true;
+
+    let mut waiting = story("S-004");
+    waiting.depends_on = vec!["S-003".to_string()];
+
+    let prd = prd(vec![completed, ready, blocked, waiting]);
+    let ready_ids: Vec<&str> = scheduler::ready_stories(&prd)
+        .into_iter()
+        .map(|story| story.id.as_str())
+        .collect();
+
+    assert_eq!(ready_ids, vec!["S-002"]);
+}
+
+#[test]
+fn ready_set_returns_all_satisfied_stories_in_story_order() {
+    let mut foundation = story("S-001");
+    foundation.passes = true;
+
+    let mut first_ready
= story("S-002"); + first_ready.depends_on = vec!["S-001".to_string()]; + + let second_ready = story("S-003"); + + let mut waiting = story("S-004"); + waiting.depends_on = vec!["S-002".to_string()]; + + let prd = prd(vec![foundation, first_ready, second_ready, waiting]); + let ready_ids: Vec<&str> = scheduler::ready_stories(&prd) + .into_iter() + .map(|story| story.id.as_str()) + .collect(); + + assert_eq!(ready_ids, vec!["S-002", "S-003"]); +} + +#[test] +fn ready_set_unlocks_story_only_when_all_dependencies_pass() { + let mut first = story("S-001"); + first.passes = true; + + let second = story("S-002"); + + let mut gated = story("S-003"); + gated.depends_on = vec!["S-001".to_string(), "S-002".to_string()]; + + let prd = prd(vec![first, second, gated]); + let ready_ids: Vec<&str> = scheduler::ready_stories(&prd) + .into_iter() + .map(|story| story.id.as_str()) + .collect(); + + assert_eq!(ready_ids, vec!["S-002"]); +} diff --git a/crates/ralph-core/tests/worktree.rs b/crates/ralph-core/tests/worktree.rs new file mode 100644 index 0000000..d863dbf --- /dev/null +++ b/crates/ralph-core/tests/worktree.rs @@ -0,0 +1,165 @@ +use ralph_core::worktree::{self, WorktreeError}; +use std::path::{Path, PathBuf}; +use std::process::Command; + +fn git(repo_dir: &Path, args: &[&str]) -> String { + let output = Command::new("git") + .args(args) + .current_dir(repo_dir) + .output() + .expect("git command should run"); + if output.status.success() { + return String::from_utf8_lossy(&output.stdout).trim().to_string(); + } + panic!( + "git {:?} failed: {}", + args, + String::from_utf8_lossy(&output.stderr).trim() + ); +} + +fn init_repo() -> (tempfile::TempDir, PathBuf, String, String) { + let temp_dir = tempfile::tempdir().expect("tempdir should exist"); + let repo_dir = temp_dir.path().join("repo"); + std::fs::create_dir(&repo_dir).expect("repo dir should exist"); + git(&repo_dir, &["init"]); + git(&repo_dir, &["config", "user.name", "LoopForge Test"]); + git( + &repo_dir, + 
&["config", "user.email", "loopforge@example.com"], + ); + let tracked_contents = "tracked root contents\n".to_string(); + let untracked_contents = "untracked sentinel\n".to_string(); + std::fs::write(repo_dir.join("README.md"), &tracked_contents) + .expect("tracked file should write"); + git(&repo_dir, &["add", "README.md"]); + git(&repo_dir, &["commit", "-m", "init"]); + std::fs::write(repo_dir.join("notes.txt"), &untracked_contents) + .expect("untracked file should write"); + let canonical_repo_dir = repo_dir + .canonicalize() + .expect("repo dir should canonicalize"); + ( + temp_dir, + canonical_repo_dir, + tracked_contents, + untracked_contents, + ) +} + +fn branch_exists(repo_dir: &Path, branch: &str) -> bool { + !git(repo_dir, &["branch", "--list", branch]).is_empty() +} + +#[tokio::test] +async fn worktree_setup_and_teardown_are_reentrant() { + let (_temp_dir, repo_dir, tracked_contents, untracked_contents) = init_repo(); + let repo_head = git(&repo_dir, &["rev-parse", "HEAD"]); + let worktree_path = ".loopforge/worktrees/regression"; + let worktree_dir = repo_dir.join(worktree_path); + let worktree_branch = "loopforge/regression"; + let expected_path = worktree_dir.to_string_lossy().into_owned(); + + let first = worktree::create(&repo_dir, worktree_path, worktree_branch) + .await + .expect("first setup should succeed"); + assert_eq!(first.path, expected_path); + assert_eq!(first.branch, worktree_branch); + assert!( + worktree_dir.exists(), + "worktree directory must exist after setup" + ); + assert_eq!( + std::fs::read_to_string(worktree_dir.join("README.md")).unwrap(), + tracked_contents + ); + assert_eq!(git(&repo_dir, &["rev-parse", "HEAD"]), repo_head); + assert_eq!( + std::fs::read_to_string(repo_dir.join("README.md")).unwrap(), + tracked_contents + ); + assert_eq!( + std::fs::read_to_string(repo_dir.join("notes.txt")).unwrap(), + untracked_contents + ); + let first_list = worktree::list(&repo_dir) + .await + .expect("worktree list should load"); + 
assert_eq!( + first_list + .iter() + .filter(|entry| entry.path == expected_path) + .count(), + 1 + ); + + worktree::remove(&repo_dir, Path::new(worktree_path), None) + .await + .expect("first teardown should succeed"); + assert!(!worktree_dir.exists(), "worktree directory must be removed"); + assert!( + branch_exists(&repo_dir, worktree_branch), + "branch should remain after partial teardown" + ); + let first_teardown = worktree::list(&repo_dir) + .await + .expect("list should succeed after first teardown"); + assert!(first_teardown + .iter() + .all(|entry| entry.path != expected_path)); + let inspect_error = worktree::inspect(&repo_dir, Path::new(worktree_path)) + .await + .expect_err("removed worktree should not be inspectable"); + assert!(matches!(inspect_error, WorktreeError::NotFound(path) if path == expected_path)); + + let second = worktree::create(&repo_dir, worktree_path, worktree_branch) + .await + .expect("second setup should reuse branch cleanly"); + assert_eq!(second.path, expected_path); + assert_eq!(second.branch, worktree_branch); + let second_list = worktree::list(&repo_dir) + .await + .expect("second list should load"); + assert_eq!( + second_list + .iter() + .filter(|entry| entry.path == expected_path) + .count(), + 1 + ); + assert_eq!(git(&repo_dir, &["rev-parse", "HEAD"]), repo_head); + assert_eq!( + std::fs::read_to_string(repo_dir.join("README.md")).unwrap(), + tracked_contents + ); + assert_eq!( + std::fs::read_to_string(repo_dir.join("notes.txt")).unwrap(), + untracked_contents + ); + + worktree::remove(&repo_dir, Path::new(worktree_path), Some(worktree_branch)) + .await + .expect("final teardown should succeed"); + assert!( + !worktree_dir.exists(), + "worktree directory must stay removed" + ); + assert!( + !branch_exists(&repo_dir, worktree_branch), + "branch should be deleted during final teardown" + ); + let final_list = worktree::list(&repo_dir) + .await + .expect("final list should load"); + assert_eq!(final_list.len(), 1); + 
assert_eq!(final_list[0].path, repo_dir.to_string_lossy()); + assert_eq!(git(&repo_dir, &["rev-parse", "HEAD"]), repo_head); + assert_eq!( + std::fs::read_to_string(repo_dir.join("README.md")).unwrap(), + tracked_contents + ); + assert_eq!( + std::fs::read_to_string(repo_dir.join("notes.txt")).unwrap(), + untracked_contents + ); +} diff --git a/crates/ralph-core/tests/worktree_runner.rs b/crates/ralph-core/tests/worktree_runner.rs new file mode 100644 index 0000000..ca77fa1 --- /dev/null +++ b/crates/ralph-core/tests/worktree_runner.rs @@ -0,0 +1,179 @@ +use ralph_core::prd::UserStory; +use ralph_core::scheduler::worktree_runner::{ + branch_for, provision, run_in_worktree, worktree_path_for, +}; +use std::path::{Path, PathBuf}; +use std::process::Command; +use std::sync::{Arc, Mutex}; + +fn git(repo_dir: &Path, args: &[&str]) -> String { + let output = Command::new("git") + .args(args) + .current_dir(repo_dir) + .output() + .expect("git command should run"); + if output.status.success() { + return String::from_utf8_lossy(&output.stdout).trim().to_string(); + } + panic!( + "git {:?} failed: {}", + args, + String::from_utf8_lossy(&output.stderr).trim() + ); +} + +fn init_repo() -> (tempfile::TempDir, PathBuf) { + let temp_dir = tempfile::tempdir().expect("tempdir"); + let repo_dir = temp_dir.path().join("repo"); + std::fs::create_dir(&repo_dir).expect("repo dir"); + git(&repo_dir, &["init"]); + git(&repo_dir, &["config", "user.name", "Test"]); + git(&repo_dir, &["config", "user.email", "test@example.com"]); + std::fs::write(repo_dir.join("README.md"), "init\n").expect("write"); + git(&repo_dir, &["add", "README.md"]); + git(&repo_dir, &["commit", "-m", "init"]); + let canonical = repo_dir.canonicalize().expect("canonicalize"); + (temp_dir, canonical) +} + +fn stub_story(story_id: &str) -> UserStory { + serde_json::from_value(serde_json::json!({ + "id": story_id, + "title": format!("Story {story_id}"), + "acceptanceCriteria": ["placeholder"] + })) + .expect("stub 
story should parse") +} + +#[test] +fn different_stories_receive_different_worktree_paths() { + let work_dir = Path::new("/repo"); + let stories = ["S-001", "S-002", "S-003", "S-015"]; + let paths: Vec = stories + .iter() + .map(|story_id| worktree_path_for(work_dir, story_id)) + .collect(); + + for (idx, path) in paths.iter().enumerate() { + for (jdx, other) in paths.iter().enumerate() { + if idx != jdx { + assert_ne!( + path, other, + "{} and {} must differ", + stories[idx], stories[jdx] + ); + } + } + } +} + +#[test] +fn different_stories_receive_different_branches() { + let branches: Vec = ["S-001", "S-002", "S-015"] + .iter() + .map(|story_id| branch_for(story_id)) + .collect(); + + assert_ne!(branches[0], branches[1]); + assert_ne!(branches[1], branches[2]); +} + +#[tokio::test] +async fn provision_creates_worktree_directory() { + let (_temp_dir, repo_dir) = init_repo(); + + let provisioned = provision(&repo_dir, "S-015") + .await + .expect("provision should succeed"); + + assert_eq!(provisioned.story_id, "S-015"); + assert!(provisioned.worktree_path.exists()); + assert_eq!(provisioned.branch, "loopforge/s-015"); + assert!(provisioned.worktree_path.join("README.md").exists()); +} + +#[tokio::test] +async fn provision_fails_gracefully_on_invalid_repo() { + let temp_dir = tempfile::tempdir().expect("tempdir"); + let bad_dir = temp_dir.path().join("not-a-repo"); + std::fs::create_dir(&bad_dir).expect("dir"); + + let result = provision(&bad_dir, "S-001").await; + assert!(result.is_err()); +} + +#[tokio::test] +async fn two_stories_get_separate_worktrees() { + let (_temp_dir, repo_dir) = init_repo(); + + let first = provision(&repo_dir, "S-001") + .await + .expect("first provision"); + let second = provision(&repo_dir, "S-002") + .await + .expect("second provision"); + + assert_ne!(first.worktree_path, second.worktree_path); + assert_ne!(first.branch, second.branch); + assert!(first.worktree_path.exists()); + assert!(second.worktree_path.exists()); +} + 
+#[tokio::test] +async fn worker_fixture_runs_sequentially_through_core_loop() { + let (_temp_dir, repo_dir) = init_repo(); + let execution_log: Arc>> = Arc::new(Mutex::new(Vec::new())); + + let story_a = stub_story("S-001"); + let story_b = stub_story("S-002"); + + let log_clone = Arc::clone(&execution_log); + let result_a = run_in_worktree(&repo_dir, &story_a, |provisioned| { + let log = log_clone; + async move { + log.lock().unwrap().push(provisioned.story_id.clone()); + provisioned.worktree_path + } + }) + .await + .expect("run_in_worktree S-001"); + + let log_clone = Arc::clone(&execution_log); + let result_b = run_in_worktree(&repo_dir, &story_b, |provisioned| { + let log = log_clone; + async move { + log.lock().unwrap().push(provisioned.story_id.clone()); + provisioned.worktree_path + } + }) + .await + .expect("run_in_worktree S-002"); + + let log = execution_log.lock().unwrap(); + assert_eq!(log.len(), 2); + assert_eq!(log[0], "S-001"); + assert_eq!(log[1], "S-002"); + assert_ne!(result_a, result_b); +} + +#[tokio::test] +async fn worktree_creation_failure_prevents_worker_execution() { + let temp_dir = tempfile::tempdir().expect("tempdir"); + let bad_dir = temp_dir.path().join("not-a-repo"); + std::fs::create_dir(&bad_dir).expect("dir"); + + let story = stub_story("S-001"); + let worker_called = Arc::new(Mutex::new(false)); + let flag = Arc::clone(&worker_called); + + let result = run_in_worktree(&bad_dir, &story, |_provisioned| { + let called = flag; + async move { + *called.lock().unwrap() = true; + } + }) + .await; + + assert!(result.is_err()); + assert!(!*worker_called.lock().unwrap()); +} diff --git a/crates/ralph-core/tests/worktree_scheduler.rs b/crates/ralph-core/tests/worktree_scheduler.rs new file mode 100644 index 0000000..35706c3 --- /dev/null +++ b/crates/ralph-core/tests/worktree_scheduler.rs @@ -0,0 +1,303 @@ +use ralph_core::loop_engine::scheduler::{ + self, CompletionScheduler, MergeAction, WorktreeCompletion, +}; + +fn completion( + 
story_id: &str, + worktree_id: &str, + passed: bool, + sequence: u64, +) -> WorktreeCompletion { + WorktreeCompletion { + worktree_id: worktree_id.to_string(), + story_id: story_id.to_string(), + passed, + blocked: false, + head_commit: None, + guardrail_append: None, + branch: None, + sequence, + } +} + +fn story_ids(actions: &[MergeAction]) -> Vec<String> { + actions + .iter() + .filter_map(|action| match action { + MergeAction::UpdateStoryStatus { story_id, .. } => Some(story_id.clone()), + _ => None, + }) + .collect() +} + +#[test] +fn conflicting_completions_produce_stable_ordering() { + let completions = vec![ + completion("S-003", "wt-c", true, 2), + completion("S-001", "wt-a", true, 0), + completion("S-002", "wt-b", false, 1), + ]; + let first_run = scheduler::schedule(completions.clone()); + let second_run = scheduler::schedule(completions.clone()); + let third_run = scheduler::schedule(completions); + + assert_eq!(first_run, second_run); + assert_eq!(second_run, third_run); + assert_eq!(story_ids(&first_run), vec!["S-001", "S-002", "S-003"]); +} + +#[test] +fn same_story_from_different_worktrees_ordered_by_sequence_then_id() { + let completions = vec![ + completion("S-001", "wt-b", false, 2), + completion("S-001", "wt-a", true, 1), + ]; + let actions = scheduler::schedule(completions); + let worktree_order: Vec<&str> = actions + .iter() + .filter_map(|action| match action { + MergeAction::UpdateStoryStatus { story_id, .. 
} if story_id == "S-001" => { + Some("status") + } + _ => None, + }) + .collect(); + assert_eq!(worktree_order.len(), 2); + + assert_eq!( + actions[0], + MergeAction::UpdateStoryStatus { + story_id: "S-001".into(), + passed: true, + blocked: false, + } + ); + assert_eq!( + actions[1], + MergeAction::UpdateStoryStatus { + story_id: "S-001".into(), + passed: false, + blocked: false, + } + ); +} + +#[test] +fn mixed_actions_interleave_correctly_per_story() { + let mut comp_a = completion("S-002", "wt-a", true, 0); + comp_a.head_commit = Some("aaa111".into()); + + let mut comp_b = completion("S-001", "wt-b", false, 1); + comp_b.guardrail_append = Some("retry limit".into()); + comp_b.head_commit = Some("bbb222".into()); + + let actions = scheduler::schedule(vec![comp_a, comp_b]); + + assert_eq!(actions.len(), 5); + assert_eq!( + actions[0], + MergeAction::UpdateStoryStatus { + story_id: "S-001".into(), + passed: false, + blocked: false, + } + ); + assert_eq!( + actions[1], + MergeAction::AppendGuardrail { + story_id: "S-001".into(), + content: "retry limit".into(), + } + ); + assert_eq!( + actions[2], + MergeAction::UpdateSessionHead { + worktree_id: "wt-b".into(), + head_commit: "bbb222".into(), + } + ); + assert_eq!( + actions[3], + MergeAction::UpdateStoryStatus { + story_id: "S-002".into(), + passed: true, + blocked: false, + } + ); + assert_eq!( + actions[4], + MergeAction::UpdateSessionHead { + worktree_id: "wt-a".into(), + head_commit: "aaa111".into(), + } + ); +} + +#[test] +fn scheduler_api_does_not_require_tauri_or_shell_types() { + let mut sched = CompletionScheduler::new(); + sched.submit(completion("S-001", "wt-a", true, 0)); + sched.submit(completion("S-002", "wt-b", false, 1)); + assert_eq!(sched.pending_count(), 2); + + let actions = sched.drain_ordered(); + assert_eq!(actions.len(), 2); + assert_eq!(sched.pending_count(), 0); +} + +#[test] +fn batch_submit_matches_individual_submits() { + let completions = vec![ + completion("S-003", "wt-c", true, 2), + 
completion("S-001", "wt-a", true, 0), + completion("S-002", "wt-b", false, 1), + ]; + + let batch_result = scheduler::schedule(completions.clone()); + + let mut sched = CompletionScheduler::new(); + for comp in completions { + sched.submit(comp); + } + let individual_result = sched.drain_ordered(); + + assert_eq!(batch_result, individual_result); +} + +#[test] +fn blocked_completion_propagates_blocked_flag() { + let mut comp = completion("S-005", "wt-a", false, 0); + comp.blocked = true; + let actions = scheduler::schedule(vec![comp]); + assert_eq!( + actions[0], + MergeAction::UpdateStoryStatus { + story_id: "S-005".into(), + passed: false, + blocked: true, + } + ); +} + +#[test] +fn large_concurrent_set_stays_deterministic() { + let mut completions: Vec<WorktreeCompletion> = (0..50) + .map(|idx| { + completion( + &format!("S-{idx:03}"), + &format!("wt-{idx}"), + idx % 3 == 0, + idx, + ) + }) + .collect(); + let forward = scheduler::schedule(completions.clone()); + completions.reverse(); + let reversed = scheduler::schedule(completions); + assert_eq!(forward, reversed); +} + +#[test] +fn serialization_roundtrip_preserves_completion() { + let mut comp = completion("S-010", "wt-x", true, 5); + comp.head_commit = Some("deadbeef".into()); + comp.guardrail_append = Some("limit reached".into()); + + let json = serde_json::to_string(&comp).expect("should serialize"); + let restored: WorktreeCompletion = serde_json::from_str(&json).expect("should deserialize"); + assert_eq!(comp, restored); +} + +#[test] +fn serialization_roundtrip_preserves_merge_action() { + let action = MergeAction::AppendGuardrail { + story_id: "S-001".into(), + content: "circuit breaker".into(), + }; + let json = serde_json::to_string(&action).expect("should serialize"); + let restored: MergeAction = serde_json::from_str(&json).expect("should deserialize"); + assert_eq!(action, restored); +} + +#[test] +fn passed_story_with_branch_emits_merge_and_teardown() { + let mut comp = completion("S-001", "wt-a", true, 0); + 
comp.branch = Some("loopforge/s-001".into()); + let actions = scheduler::schedule(vec![comp]); + assert_eq!(actions.len(), 3); + assert_eq!( + actions[0], + MergeAction::UpdateStoryStatus { + story_id: "S-001".into(), + passed: true, + blocked: false, + } + ); + assert_eq!( + actions[1], + MergeAction::MergeIntoMain { + worktree_id: "wt-a".into(), + story_id: "S-001".into(), + branch: "loopforge/s-001".into(), + } + ); + assert_eq!( + actions[2], + MergeAction::TeardownWorktree { + worktree_id: "wt-a".into(), + branch: "loopforge/s-001".into(), + } + ); +} + +#[test] +fn failed_story_with_branch_omits_merge_and_teardown() { + let mut comp = completion("S-002", "wt-b", false, 0); + comp.branch = Some("loopforge/s-002".into()); + let actions = scheduler::schedule(vec![comp]); + assert_eq!(actions.len(), 1); + assert_eq!( + actions[0], + MergeAction::UpdateStoryStatus { + story_id: "S-002".into(), + passed: false, + blocked: false, + } + ); +} + +#[test] +fn blocked_story_with_branch_omits_merge_and_teardown() { + let mut comp = completion("S-003", "wt-c", false, 0); + comp.blocked = true; + comp.branch = Some("loopforge/s-003".into()); + let actions = scheduler::schedule(vec![comp]); + assert_eq!(actions.len(), 1); + assert!(matches!( + &actions[0], + MergeAction::UpdateStoryStatus { blocked: true, .. } + )); +} + +#[test] +fn passed_story_without_branch_omits_merge_and_teardown() { + let comp = completion("S-004", "wt-d", true, 0); + let actions = scheduler::schedule(vec![comp]); + assert_eq!(actions.len(), 1); + assert!(matches!( + &actions[0], + MergeAction::UpdateStoryStatus { passed: true, .. 
} + )); +} + +#[test] +fn merge_conflict_action_serialization_roundtrip() { + let action = MergeAction::MergeConflict { + worktree_id: "wt-a".into(), + story_id: "S-001".into(), + branch: "loopforge/s-001".into(), + }; + let json = serde_json::to_string(&action).expect("serialize"); + let restored: MergeAction = serde_json::from_str(&json).expect("deserialize"); + assert_eq!(action, restored); +} diff --git a/crates/ralph-core/tests/worktree_state.rs b/crates/ralph-core/tests/worktree_state.rs new file mode 100644 index 0000000..aa704c4 --- /dev/null +++ b/crates/ralph-core/tests/worktree_state.rs @@ -0,0 +1,67 @@ +use ralph_core::{LoopExecutionState, WorktreeExecutionState, PRIMARY_WORKTREE_ID}; +use serde_json::json; + +#[test] +fn single_worktree_state_matches_current_flow() { + let state = LoopExecutionState::single("/repo"); + + assert_eq!(state.primary_worktree_id, PRIMARY_WORKTREE_ID); + assert_eq!( + state.active_worktree_id.as_deref(), + Some(PRIMARY_WORKTREE_ID) + ); + assert_eq!( + state.resolved_worktrees(), + vec![WorktreeExecutionState::primary("/repo")] + ); +} + +#[test] +fn legacy_state_without_worktree_fields_deserializes_to_primary_worktree() { + let state: LoopExecutionState = serde_json::from_value(json!({ + "iteration": 4, + "currentStoryId": "S-013", + "workDir": "/repo" + })) + .unwrap(); + + assert_eq!(state.primary_worktree_id, PRIMARY_WORKTREE_ID); + assert_eq!(state.active_worktree().id, PRIMARY_WORKTREE_ID); + assert_eq!(state.active_worktree().work_dir, "/repo"); + assert_eq!(state.resolved_worktrees().len(), 1); +} + +#[test] +fn multi_worktree_state_preserves_distinct_identifiers() { + let state: LoopExecutionState = serde_json::from_value(json!({ + "iteration": 9, + "workDir": "/repo", + "primaryWorktreeId": "main", + "activeWorktreeId": "story-s013", + "worktrees": [ + {"id": "main", "workDir": "/repo", "branch": "main", "iteration": 9}, + { + "id": "story-s013", + "workDir": "/repo/.loopforge/worktrees/story-s013", + "branch": 
"loopforge/story-s013", + "storyId": "S-013", + "iteration": 2, + "lastPromptHash": "abc123", + "headCommit": "deadbeef" + } + ] + })) + .unwrap(); + + let worktrees = state.resolved_worktrees(); + + assert_eq!(worktrees.len(), 2); + assert_eq!(state.active_worktree().id, "story-s013"); + assert_eq!(worktrees[0].id, "main"); + assert_eq!(worktrees[1].id, "story-s013"); + assert_eq!(worktrees[0].work_dir, "/repo"); + assert_eq!( + worktrees[1].work_dir, + "/repo/.loopforge/worktrees/story-s013" + ); +} diff --git a/docs/monitor-poc-hard-gates.md b/docs/monitor-poc-hard-gates.md new file mode 100644 index 0000000..3ccc2fc --- /dev/null +++ b/docs/monitor-poc-hard-gates.md @@ -0,0 +1,15 @@ +# Monitor POC Phase 1 Hard Gates + +## Gate Results + +| Hard gate | Result | Evidence | +| --- | --- | --- | +| Log ingestion and diff loading stay off the UI thread during fixture playback | Pass | `crates/loopforge-ui/tests/monitor_poc.rs` covers async log fixture playback + async diff fixture playback and confirms focus/scroll interactions while loading remains in progress. | +| 5,000-line scroll performance is repeatable | Pass | `scrolls_through_five_thousand_output_lines_repeatably` verifies stable viewport rendering at line 5,000 for a 6,200-line log fixture. | +| Large patch diff rendering is repeatable | Pass | `renders_large_patch_without_truncating_visible_rows` validates a 7,500-line patch fixture and confirms full viewport population at deep scroll offsets. | +| Keyboard focus reliability is repeatable | Pass | `keyboard_focus_remains_reliable_after_repeated_cycles` runs 300 focus transitions and validates the full Sidebar → Output → Diff loop without drift. | +| Dark/light parity is repeatable | Pass | `dark_and_light_tokens_remain_in_parity_for_diff_lines` checks both token families for metadata, hunk, added, and removed lines. | + +## Phase 2 Decision + +Floem passes every Phase 1 hard gate in this POC. 
Recommendation: continue to Phase 2 on Floem and keep `iced` as contingency only if later integration regressions break these checks. diff --git a/e2e/helpers/test-environment.ts b/e2e/helpers/test-environment.ts index 799d791..3d15415 100644 --- a/e2e/helpers/test-environment.ts +++ b/e2e/helpers/test-environment.ts @@ -3,7 +3,7 @@ import os from "node:os"; import path from "node:path"; const FIXTURE_BINARIES = ["claude", "codex", "cursor", "cursor-agent", "gemini", "opencode"]; -const FIXTURE_AGENT_SCRIPT = `#!/bin/sh +const UNIX_FIXTURE_AGENT_SCRIPT = `#!/bin/sh all="$*" fixture_set="\${LOOPFORGE_TEST_FIXTURE_SET:-happy-path}" if printf '%s' "$all" | grep -q "Condense the following implementation plan"; then @@ -30,6 +30,37 @@ else printf '%s\\n' 'fixture agent completed' fi `; +const WINDOWS_FIXTURE_AGENT_SCRIPT = `@echo off +set all=%* +set fixture_set=%LOOPFORGE_TEST_FIXTURE_SET% +if "%fixture_set%"=="" set fixture_set=happy-path +echo %all% | findstr /C:"Condense the following implementation plan" >nul && (echo Create the project, atomize the plan, execute the story, and archive the result.& exit /b 0) +echo %all% | findstr /C:"Split the following implementation plan" >nul && ( + if /I "%fixture_set%"=="atomize-error" ( + echo {broken + ) else ( + echo [{"title":"Lifecycle","content":"Create the project, atomize the plan, execute the story, and archive the result."}] + ) + exit /b 0 +) +echo %all% | findstr /C:"decomposing a plan section into atomic user stories" >nul && ( + if /I "%fixture_set%"=="no-stories" ( + echo [] + ) else ( + echo [{"title":"Exercise desktop happy path","description":"Cover the desktop happy path flow.","acceptanceCriteria":["The desktop flow completes","Artifacts are persisted"],"scope":{"filesToModify":["e2e/wdio.conf.ts"],"filesToCreate":["e2e/specs/happy-path.e2e.ts"],"filesToAvoid":[]},"verification":{"commands":["bun run e2e -- --spec e2e/specs/happy-path.e2e.ts"],"assertions":[]},"commitMessage":"test(frontend): add desktop 
happy path spec","priority":"critical","estimatedComplexity":"medium","estimatedMinutes":45,"dependsOn":[]}] + ) + exit /b 0 +) +echo %all% | findstr /C:"technical lead finalizing" >nul && ( + if /I "%fixture_set%"=="no-stories" ( + echo {"projectName":"LoopForge Desktop Empty Result","feature":"Desktop atomize empty result","workingDirectory":"","generatedAt":"2026-04-10T00:00:00.000Z","stories":[],"totalEstimatedMinutes":0} + ) else ( + echo {"projectName":"LoopForge Desktop Happy Path","feature":"Desktop happy path","workingDirectory":"","generatedAt":"2026-04-10T00:00:00.000Z","stories":[{"id":"S-001","title":"Exercise desktop happy path","description":"Cover the desktop happy path flow.","acceptanceCriteria":["The desktop flow completes","Artifacts are persisted"],"scope":{"filesToModify":["e2e/wdio.conf.ts"],"filesToCreate":["e2e/specs/happy-path.e2e.ts"],"filesToAvoid":[]},"verification":{"commands":["bun run e2e -- --spec e2e/specs/happy-path.e2e.ts"],"assertions":[]},"commitMessage":"test(frontend): add desktop happy path spec","priority":"critical","estimatedComplexity":"medium","estimatedMinutes":45,"dependsOn":[],"passes":false,"blocked":false,"attempts":0,"notes":null}]} + ) + exit /b 0 +) +echo fixture agent completed +`; export type DesktopTestEnvironment = { rootDir: string; @@ -46,9 +77,14 @@ export type DesktopTestEnvironment = { async function installFixtureAgents(binDir: string) { await Promise.all( FIXTURE_BINARIES.map(async (binaryName) => { - const targetPath = path.join(binDir, binaryName); - await writeFile(targetPath, FIXTURE_AGENT_SCRIPT); - await chmod(targetPath, 0o755); + const commandName = process.platform === "win32" ? `${binaryName}.cmd` : binaryName; + const targetPath = path.join(binDir, commandName); + const script = + process.platform === "win32" ? 
WINDOWS_FIXTURE_AGENT_SCRIPT : UNIX_FIXTURE_AGENT_SCRIPT; + await writeFile(targetPath, script); + if (process.platform !== "win32") { + await chmod(targetPath, 0o755); + } }), ); } diff --git a/lefthook.yml b/lefthook.yml new file mode 100644 index 0000000..ab07bf6 --- /dev/null +++ b/lefthook.yml @@ -0,0 +1,58 @@ +pre-commit: + parallel: true + commands: + biome-check: + glob: "**/*.{ts,tsx,js,jsx,json,css}" + run: bunx biome check --no-errors-on-unmatched --changed --since=HEAD {staged_files} + stage_fixed: true + rust-fmt: + glob: "**/*.rs" + run: cargo fmt --all -- --check + rust-clippy: + glob: "**/*.rs" + run: cargo clippy --workspace -- -D warnings + typecheck: + glob: "**/*.{ts,tsx}" + run: bun run typecheck +# EXAMPLE USAGE: +# +# Refer for explanation to following link: +# https://lefthook.dev/configuration/ +# +# pre-push: +# jobs: +# - name: packages audit +# tags: +# - frontend +# - security +# run: yarn audit +# +# - name: gems audit +# tags: +# - backend +# - security +# run: bundle audit +# +# pre-commit: +# parallel: true +# jobs: +# - run: yarn eslint {staged_files} +# glob: "*.{js,ts,jsx,tsx}" +# +# - name: rubocop +# glob: "*.rb" +# exclude: +# - config/application.rb +# - config/routes.rb +# run: bundle exec rubocop --force-exclusion -- {all_files} +# +# - name: govet +# files: git ls-files -m +# glob: "*.go" +# run: go vet -- {files} +# +# - script: "hello.js" +# runner: node +# +# - script: "hello.go" +# runner: go run diff --git a/package.json b/package.json index 09b2513..ed2ebe9 100644 --- a/package.json +++ b/package.json @@ -7,13 +7,18 @@ "dev": "vite", "dev:tauri": "node dev-tauri.mjs", "build": "tsc -b && vite build", + "lint": "biome check src/", + "lint:fix": "biome check --write src/", + "format": "biome format src/", + "format:fix": "biome format --write src/", "preview": "vite preview", "test": "vitest run", "e2e": "wdio run e2e/wdio.conf.ts", "e2e:smoke": "wdio run e2e/wdio.conf.ts --spec e2e/specs/smoke.e2e.ts", 
"e2e:typecheck": "tsc -p e2e/tsconfig.json --noEmit", "typecheck": "tsc --noEmit", - "tauri": "tauri" + "tauri": "tauri", + "postinstall": "lefthook install --force" }, "dependencies": { "@fontsource-variable/inter": "^5.2.8", @@ -24,7 +29,7 @@ "@radix-ui/react-select": "^2.2.6", "@radix-ui/react-tooltip": "^1.2.8", "@tauri-apps/api": "^2", - "@tauri-apps/plugin-dialog": "^2.6.0", + "@tauri-apps/plugin-dialog": "2.6.0", "@tauri-apps/plugin-notification": "^2.3.3", "@xterm/addon-fit": "^0.11.0", "@xterm/xterm": "^6.0.0", @@ -42,11 +47,12 @@ "zustand": "^5" }, "devDependencies": { + "@biomejs/biome": "2.4.11", "@tailwindcss/vite": "^4", + "@tauri-apps/cli": "^2", "@testing-library/jest-dom": "^6.8.0", "@testing-library/react": "^16.3.0", "@testing-library/user-event": "^14.6.1", - "@tauri-apps/cli": "^2", "@types/mocha": "^10.0.10", "@types/node": "^25.6.0", "@types/react": "^19", @@ -59,6 +65,7 @@ "@wdio/spec-reporter": "^9.27.0", "expect-webdriverio": "^5.6.5", "jsdom": "^26.1.0", + "lefthook": "^2.1.5", "shadcn": "^4.1.2", "tailwindcss": "^4", "typescript": "^5", diff --git a/rustfmt.toml b/rustfmt.toml new file mode 100644 index 0000000..dce363e --- /dev/null +++ b/rustfmt.toml @@ -0,0 +1,3 @@ +edition = "2021" +max_width = 100 +use_field_init_shorthand = true diff --git a/scripts/setup-github-security.sh b/scripts/setup-github-security.sh new file mode 100755 index 0000000..20e29ff --- /dev/null +++ b/scripts/setup-github-security.sh @@ -0,0 +1,109 @@ +#!/usr/bin/env bash +set -Eeuo pipefail + +REPO="${1:-taberoajorge/loopforge}" +BRANCH="${2:-main}" + +need() { + command -v "$1" >/dev/null 2>&1 || { echo "missing dependency: $1" >&2; exit 1; } +} + +need gh +need jq + +echo "target repo: $REPO" +echo "target branch: $BRANCH" +echo + +visibility=$(gh api "repos/$REPO" --jq '.visibility') +echo "visibility: $visibility" +if [[ "$visibility" != "public" ]]; then + cat < super::PLAN_UNLOCK_LINE_THRESHOLD; + let can_be_plan = + self.saw_agent_response && 
self.line_count > super::PLAN_UNLOCK_LINE_THRESHOLD; if can_be_plan && Self::looks_like_markdown_plan_line(trimmed) { self.in_plan_markdown = true; diff --git a/src-tauri/src/activity/classifier/mod.rs b/src-tauri/src/activity/classifier/mod.rs index 015eaed..a249f58 100644 --- a/src-tauri/src/activity/classifier/mod.rs +++ b/src-tauri/src/activity/classifier/mod.rs @@ -104,10 +104,20 @@ impl ActivityClassifier { if self.patterns.error.iter().any(|regex| regex.is_match(line)) { return PlanEventKind::Error; } - if self.patterns.mcp_call.iter().any(|regex| regex.is_match(line)) { + if self + .patterns + .mcp_call + .iter() + .any(|regex| regex.is_match(line)) + { return PlanEventKind::McpCall; } - if self.patterns.search.iter().any(|regex| regex.is_match(line)) { + if self + .patterns + .search + .iter() + .any(|regex| regex.is_match(line)) + { return PlanEventKind::Search; } if self @@ -118,7 +128,12 @@ impl ActivityClassifier { { return PlanEventKind::DocsLookup; } - if self.patterns.thinking.iter().any(|regex| regex.is_match(line)) { + if self + .patterns + .thinking + .iter() + .any(|regex| regex.is_match(line)) + { return PlanEventKind::Thinking; } PlanEventKind::PlanContent @@ -142,7 +157,10 @@ impl ActivityClassifier { let mut start = token_line + 1; if start < self.content_buffer.len() { let next = self.content_buffer[start].trim(); - if next.chars().all(|ch| ch.is_ascii_digit() || ch == ',' || ch == '.') { + if next + .chars() + .all(|ch| ch.is_ascii_digit() || ch == ',' || ch == '.') + { start += 1; } } diff --git a/src-tauri/src/activity/tests_provider.rs b/src-tauri/src/activity/tests_provider.rs index ed8ff6a..a0582c1 100644 --- a/src-tauri/src/activity/tests_provider.rs +++ b/src-tauri/src/activity/tests_provider.rs @@ -44,6 +44,14 @@ fn codex_exec_line_starts_tool_output() { assert!(classifier.is_in_tool_output()); } +#[test] +fn codex_exec_cmd_line_starts_tool_output() { + let mut classifier = ActivityClassifier::new("codex"); + let event = 
classifier.classify(r#"exec cmd.exe /C "rg -n foo" in C:\repo"#); + assert_eq!(event.kind, PlanEventKind::McpCall); + assert!(classifier.is_in_tool_output()); +} + #[test] fn codex_tool_output_absorbed_until_narration() { let mut classifier = ActivityClassifier::new("codex"); @@ -65,6 +73,9 @@ fn codex_plan_content_only_after_tool_output_ends() { classifier.classify("line 1 of file"); classifier.classify("line 2 of file"); classifier.classify("codex I've reviewed the file."); + for idx in 0..30 { + classifier.classify(&format!("codex filler line {idx}")); + } let plan_line = classifier.classify("## Implementation Plan"); assert_eq!(plan_line.kind, PlanEventKind::PlanContent); assert!(classifier @@ -104,6 +115,9 @@ fn codex_no_tool_output_leak_into_plan_buffer() { classifier.classify("main.rs"); classifier.classify("lib.rs"); classifier.classify("codex Found the source files."); + for idx in 0..30 { + classifier.classify(&format!("codex padding {idx}")); + } classifier.classify("## My Plan"); classifier.classify("Step 1: Do the thing"); let plan = classifier.accumulated_plan(); diff --git a/src-tauri/src/agent_profiles.rs b/src-tauri/src/agent_profiles.rs index 20c99d9..b48248b 100644 --- a/src-tauri/src/agent_profiles.rs +++ b/src-tauri/src/agent_profiles.rs @@ -30,10 +30,16 @@ pub struct AgentCapabilities { pub default_effort: Option<String>, } -pub fn resolve_capabilities(agent: &str, binary_path: Option<&str>, path_env: &str) -> AgentCapabilities { +pub fn resolve_capabilities( + agent: &str, + binary_path: Option<&str>, + path_env: &str, +) -> AgentCapabilities { let curated = curated_capabilities(agent); let discovered = discover_models(agent, binary_path, path_env); - if discovered.is_empty() { return curated; } + if discovered.is_empty() { + return curated; + } let models = discovered .into_iter() .map(|id| AgentModelOption { @@ -67,7 +73,12 @@ fn curated_capabilities(agent: &str) -> AgentCapabilities { source: "curated".to_string(), supports_model: true, 
supports_effort: true, - models: model_options(&["gpt-5.4", "gpt-5.4-mini", "gpt-5-codex", "gpt-5.3-codex-high"]), + models: model_options(&[ + "gpt-5.4", + "gpt-5.4-mini", + "gpt-5-codex", + "gpt-5.3-codex-high", + ]), efforts: effort_options(&["low", "medium", "high"]), default_model: Some("gpt-5.4".to_string()), default_effort: None, @@ -87,7 +98,12 @@ fn curated_capabilities(agent: &str) -> AgentCapabilities { source: "curated".to_string(), supports_model: true, supports_effort: false, - models: model_options(&["auto", "gemini-2.5-pro", "gemini-2.5-flash", "gemini-2.5-flash-lite"]), + models: model_options(&[ + "auto", + "gemini-2.5-pro", + "gemini-2.5-flash", + "gemini-2.5-flash-lite", + ]), efforts: Vec::new(), default_model: Some("auto".to_string()), default_effort: None, diff --git a/src-tauri/src/agent_runtime_env.rs b/src-tauri/src/agent_runtime_env.rs index bab8fa0..2fe9096 100644 --- a/src-tauri/src/agent_runtime_env.rs +++ b/src-tauri/src/agent_runtime_env.rs @@ -2,6 +2,13 @@ use std::collections::HashSet; use std::path::PathBuf; use std::process::Command; +pub fn ensure_full_path_env() { + let full_path = probe_path_env(); + if !full_path.is_empty() { + std::env::set_var("PATH", &full_path); + } +} + pub fn run_version_probe(binary_path: &str, path_env: &str, version_flag: &str) -> Option<String> { let output = Command::new(binary_path) .arg(version_flag) @@ -27,17 +34,8 @@ pub fn probe_path_env() -> String { .map(|value| std::env::split_paths(&value).collect::<Vec<PathBuf>>()) .unwrap_or_default(); let mut ordered_paths = current_path; - if let Some(home) = std::env::var_os("HOME").map(PathBuf::from) { - ordered_paths.push(home.join(".local/bin")); - ordered_paths.push(home.join(".bun/bin")); - ordered_paths.push(home.join(".cargo/bin")); - ordered_paths.push(home.join(".npm-global/bin")); - ordered_paths.push(home.join(".opencode/bin")); - } - ordered_paths.push(PathBuf::from("/opt/homebrew/bin")); - ordered_paths.push(PathBuf::from("/usr/local/bin")); - 
ordered_paths.push(PathBuf::from("/usr/bin")); - ordered_paths.push(PathBuf::from("/bin")); + append_user_tool_paths(&mut ordered_paths); + append_system_tool_paths(&mut ordered_paths); let mut seen = HashSet::new(); let unique_paths: Vec<PathBuf> = ordered_paths .into_iter() @@ -51,7 +49,7 @@ pub fn probe_path_env() -> String { } pub fn resolve_binary_path(binary: &str, path_env: &str) -> Option<String> { - if binary.contains('/') { + if binary.contains('/') || binary.contains('\\') { return std::path::Path::new(binary) .exists() .then_some(binary.to_string()); @@ -61,6 +59,74 @@ pub fn resolve_binary_path(binary: &str, path_env: &str) -> Option<String> { if candidate.exists() { return Some(candidate.to_string_lossy().to_string()); } + #[cfg(windows)] + for extension in [".exe", ".cmd", ".bat"] { + let candidate_with_extension = entry.join(format!("{binary}{extension}")); + if candidate_with_extension.exists() { + return Some(candidate_with_extension.to_string_lossy().to_string()); + } + } } None } + +fn append_user_tool_paths(paths: &mut Vec<PathBuf>) { + #[cfg(windows)] + { + if let Some(user_profile) = std::env::var_os("USERPROFILE").map(PathBuf::from) { + paths.push(user_profile.join("scoop").join("shims")); + paths.push( + user_profile + .join("AppData") + .join("Local") + .join("Microsoft") + .join("WinGet") + .join("Links"), + ); + paths.push(user_profile.join(".cargo").join("bin")); + paths.push(user_profile.join(".bun").join("bin")); + } + if let Some(app_data) = std::env::var_os("APPDATA").map(PathBuf::from) { + paths.push(app_data.join("npm")); + } + if let Some(local_app_data) = std::env::var_os("LOCALAPPDATA").map(PathBuf::from) { + paths.push( + local_app_data + .join("Microsoft") + .join("WinGet") + .join("Links"), + ); + } + } + #[cfg(not(windows))] + { + if let Some(home_dir) = std::env::var_os("HOME").map(PathBuf::from) { + paths.push(home_dir.join(".local/bin")); + paths.push(home_dir.join(".bun/bin")); + paths.push(home_dir.join(".cargo/bin")); + 
paths.push(home_dir.join(".npm-global/bin")); + paths.push(home_dir.join(".opencode/bin")); + } + } +} + +fn append_system_tool_paths(paths: &mut Vec<PathBuf>) { + #[cfg(windows)] + { + if let Some(program_files) = std::env::var_os("ProgramFiles").map(PathBuf::from) { + paths.push(program_files.join("Git").join("cmd")); + paths.push(program_files.join("nodejs")); + } + if let Some(program_files_x86) = std::env::var_os("ProgramFiles(x86)").map(PathBuf::from) { + paths.push(program_files_x86.join("Git").join("cmd")); + paths.push(program_files_x86.join("nodejs")); + } + } + #[cfg(not(windows))] + { + paths.push(PathBuf::from("/opt/homebrew/bin")); + paths.push(PathBuf::from("/usr/local/bin")); + paths.push(PathBuf::from("/usr/bin")); + paths.push(PathBuf::from("/bin")); + } +} diff --git a/src-tauri/src/agents.rs b/src-tauri/src/agents.rs index 917171c..c507387 100644 --- a/src-tauri/src/agents.rs +++ b/src-tauri/src/agents.rs @@ -1,4 +1,5 @@ use serde::{Deserialize, Serialize}; +use std::path::Path; use std::sync::Mutex; use tauri::{AppHandle, State}; use thiserror::Error; @@ -12,6 +13,15 @@ pub struct AgentInfo { pub available: bool, } +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct SystemReadiness { + pub agents: Vec<AgentInfo>, + pub git_available: bool, + pub shell_available: bool, + pub platform: String, +} + #[derive(Debug, Default)] pub struct AgentRegistry { pub agents: Vec<AgentInfo>, @@ -78,13 +88,27 @@ async fn probe_agent(app: &AppHandle, binary: &str) -> (bool, Option<String>) { { return (true, Some(version)); } - if let Some(version) = crate::agent_runtime_env::run_version_probe(&binary_path, &path_env, "-v") + if let Some(version) = + crate::agent_runtime_env::run_version_probe(&binary_path, &path_env, "-v") { return (true, Some(version)); } (true, None) } +fn has_binary(path_env: &str, binary: &str) -> bool { + crate::agent_runtime_env::resolve_binary_path(binary, path_env).is_some() +} + +fn has_shell(path_env: &str) -> bool { + let 
(shell_program, _) = crate::shell_resolve::resolve_shell(); + if shell_program.contains('/') || shell_program.contains('\\') { + Path::new(&shell_program).exists() + } else { + has_binary(path_env, &shell_program) + } +} + #[cfg(test)] pub struct FallbackChain { agents: Vec<AgentInfo>, @@ -94,7 +118,8 @@ pub struct FallbackChain { #[cfg(test)] impl FallbackChain { pub fn new(agents: Vec<AgentInfo>) -> Self { - let available: Vec<AgentInfo> = agents.into_iter().filter(|agent| agent.available).collect(); + let available: Vec<AgentInfo> = + agents.into_iter().filter(|agent| agent.available).collect(); Self { agents: available, current_index: 0, @@ -134,8 +159,8 @@ pub async fn detect_agents( for (name, binary) in KNOWN_AGENTS { let (available, version) = probe_agent(&app, binary).await; detected.push(AgentInfo { - name: name.to_string(), - binary: binary.to_string(), + name: (*name).to_string(), + binary: (*binary).to_string(), version, available, }); @@ -144,6 +169,14 @@ pub async fn detect_agents( store_detected_agents(state, detected) } +#[tauri::command] +pub async fn get_known_agents() -> Result<Vec<String>, AgentError> { + Ok(KNOWN_AGENTS + .iter() + .map(|(name, _)| (*name).to_string()) + .collect()) +} + #[tauri::command] pub async fn get_agent_capabilities( agent: String, @@ -151,7 +184,7 @@ pub async fn get_agent_capabilities( let normalized = agent.trim().to_lowercase(); if let Some(runtime) = resolve_test_runtime() { return crate::test_support::agents::fixture_capabilities(&runtime, &normalized) - .ok_or_else(|| AgentError::UnknownAgent(normalized)); + .ok_or(AgentError::UnknownAgent(normalized)); } let Some(binary_name) = known_agent_binary(&normalized) else { return Err(AgentError::UnknownAgent(normalized)); @@ -165,6 +198,66 @@ pub async fn get_agent_capabilities( )) } +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct ResolvedAgentSelection { + pub capabilities: crate::agent_profiles::AgentCapabilities, + pub resolved_model: Option<String>, + pub resolved_effort: 
Option<String>, +} + +fn resolve_model( + caps: &crate::agent_profiles::AgentCapabilities, + current: Option<String>, +) -> Option<String> { + if !caps.supports_model { + return None; + } + let current_trimmed = current.filter(|val| !val.trim().is_empty()); + if let Some(ref model) = current_trimmed { + if caps.models.iter().any(|entry| entry.id == *model) { + return current_trimmed; + } + } + caps.default_model + .clone() + .or_else(|| caps.models.first().map(|entry| entry.id.clone())) +} + +fn resolve_effort( + caps: &crate::agent_profiles::AgentCapabilities, + current: Option<String>, +) -> Option<String> { + if !caps.supports_effort { + return None; + } + let current_trimmed = current.filter(|val| !val.trim().is_empty()); + if let Some(ref effort) = current_trimmed { + if caps.efforts.iter().any(|entry| entry.id == *effort) { + return current_trimmed; + } + } + caps.default_effort + .clone() + .or_else(|| caps.efforts.first().map(|entry| entry.id.clone())) +} + +#[tauri::command] +pub async fn resolve_agent_selection( + agent: String, + current_model: Option<String>, + current_effort: Option<String>, +) -> Result<ResolvedAgentSelection, AgentError> { + let caps = get_agent_capabilities(agent).await?; + let resolved_model = resolve_model(&caps, current_model); + let resolved_effort = resolve_effort(&caps, current_effort); + Ok(ResolvedAgentSelection { + capabilities: caps, + resolved_model, + resolved_effort, + }) +} + #[tauri::command] pub async fn refresh_agents( app: AppHandle, @@ -172,3 +265,18 @@ ) -> Result<Vec<AgentInfo>, AgentError> { detect_agents(app, state).await } + +#[tauri::command] +pub async fn check_system_readiness( + app: AppHandle, + state: State<'_, AgentRegistryState>, +) -> Result<SystemReadiness, AgentError> { + let detected_agents = detect_agents(app, state).await?; + let path_env = crate::agent_runtime_env::probe_path_env(); + Ok(SystemReadiness { + agents: detected_agents, + git_available: has_binary(&path_env, "git"), + shell_available: has_shell(&path_env), + platform: std::env::consts::OS.to_string(), + }) +} diff --git 
a/src-tauri/src/ask_engine/args.rs b/src-tauri/src/ask_engine/args.rs
index 767e49d..d23caf6 100644
--- a/src-tauri/src/ask_engine/args.rs
+++ b/src-tauri/src/ask_engine/args.rs
@@ -7,9 +7,9 @@ pub fn build_ask_args(
     model: Option<&str>,
 ) -> Vec<String> {
     let selected_model = model
-        .map(|value| value.trim())
+        .map(str::trim)
         .filter(|value| !value.is_empty())
-        .map(|value| value.to_string());
+        .map(ToString::to_string);
 
     match agent {
         "claude" => {
@@ -66,8 +66,7 @@ pub fn build_ask_args(
             prompt.to_string(),
         ],
         "cursor" => {
-            let model_id =
-                selected_model.unwrap_or_else(crate::agent_runtime::cursor_model_arg);
+            let model_id = selected_model.unwrap_or_else(crate::agent_runtime::cursor_model_arg);
             vec![
                 "agent".to_string(),
                 "--print".to_string(),
diff --git a/src-tauri/src/ask_engine/context.rs b/src-tauri/src/ask_engine/context.rs
index fab2b64..a3c11dd 100644
--- a/src-tauri/src/ask_engine/context.rs
+++ b/src-tauri/src/ask_engine/context.rs
@@ -15,8 +15,10 @@ pub fn build_ask_context(
 ) -> String {
     let mut sections: Vec<String> = Vec::new();
 
-    sections.push("You are a project assistant for a software project managed by LoopForge.".into());
-    sections.push("Answer questions about the project state, progress, blockers, and stories.".into());
+    sections
+        .push("You are a project assistant for a software project managed by LoopForge.".into());
+    sections
+        .push("Answer questions about the project state, progress, blockers, and stories.".into());
     sections.push("Be concise and direct. 
Reference story IDs when relevant.\n".into()); if let Some(stories_section) = build_stories_section(artifact_dir) { @@ -51,10 +53,22 @@ fn build_stories_section(artifact_dir: &Path) -> Option { let mut lines = vec!["## PRD Stories".to_string()]; for story in stories { let story_id = story.get("id").and_then(|val| val.as_str()).unwrap_or("?"); - let title = story.get("title").and_then(|val| val.as_str()).unwrap_or("untitled"); - let passes = story.get("passes").and_then(|val| val.as_bool()).unwrap_or(false); - let blocked = story.get("blocked").and_then(|val| val.as_bool()).unwrap_or(false); - let attempts = story.get("attempts").and_then(|val| val.as_u64()).unwrap_or(0); + let title = story + .get("title") + .and_then(|val| val.as_str()) + .unwrap_or("untitled"); + let passes = story + .get("passes") + .and_then(serde_json::Value::as_bool) + .unwrap_or(false); + let blocked = story + .get("blocked") + .and_then(serde_json::Value::as_bool) + .unwrap_or(false); + let attempts = story + .get("attempts") + .and_then(serde_json::Value::as_u64) + .unwrap_or(0); let status = if passes { "done" } else if blocked { @@ -62,7 +76,9 @@ fn build_stories_section(artifact_dir: &Path) -> Option { } else { "pending" }; - lines.push(format!("- {story_id}: {title} [{status}, {attempts} attempts]")); + lines.push(format!( + "- {story_id}: {title} [{status}, {attempts} attempts]" + )); } lines.push(String::new()); Some(lines.join("\n")) @@ -111,15 +127,20 @@ fn build_iterations_section(conn: &Connection, project_id: &str) -> Option = stmt - .query_map(rusqlite::params![project_id, MAX_ITERATIONS as i64], |row| { - let story_id: String = row.get(0)?; - let result: String = row.get(1)?; - let agent: String = row.get(2)?; - let duration: i64 = row.get(3)?; - Ok(format!("- {story_id}: {result} (agent: {agent}, {duration}s)")) - }) + .query_map( + rusqlite::params![project_id, MAX_ITERATIONS as i64], + |row| { + let story_id: String = row.get(0)?; + let result: String = row.get(1)?; + let 
agent: String = row.get(2)?; + let duration: i64 = row.get(3)?; + Ok(format!( + "- {story_id}: {result} (agent: {agent}, {duration}s)" + )) + }, + ) .ok()? - .filter_map(|row| row.ok()) + .filter_map(Result::ok) .collect(); if rows.is_empty() { @@ -140,7 +161,11 @@ fn build_history_section(history: &[AskMessage]) -> String { }; let mut lines = vec!["## Conversation History".to_string()]; for msg in &history[start..] { - let role_label = if msg.role == "user" { "User" } else { "Assistant" }; + let role_label = if msg.role == "user" { + "User" + } else { + "Assistant" + }; lines.push(format!("{role_label}: {}", msg.content)); } lines.push(String::new()); diff --git a/src-tauri/src/ask_engine/fixture.rs b/src-tauri/src/ask_engine/fixture.rs index 232e7ee..ea6145e 100644 --- a/src-tauri/src/ask_engine/fixture.rs +++ b/src-tauri/src/ask_engine/fixture.rs @@ -1,9 +1,9 @@ -use crate::ask_engine::types::{AskCompletePayload, AskStreamPayload}; +use crate::ask_engine::types::{AskCompletePayload, AskMessage, AskStreamPayload}; use crate::events::{EVENT_ASK_COMPLETE, EVENT_ASK_STREAM}; use crate::test_support::runtime::{FixtureSet, TestRuntime}; use std::path::Path; use std::sync::atomic::{AtomicU32, Ordering}; -use tauri::{AppHandle, Emitter}; +use tauri::{AppHandle, Emitter, Manager}; static ASK_CALL_COUNT: AtomicU32 = AtomicU32::new(0); @@ -35,18 +35,49 @@ pub async fn spawn_fixture_ask( persist_interaction(runtime.data_dir(), idx, question, &response); + let saved_message = save_fixture_message(&app, &project_id, &response); + let _ = app.emit( EVENT_ASK_COMPLETE, AskCompletePayload { - project_id, - message_id, - full_content: response, + project_id: project_id.clone(), + message_id: message_id.clone(), + full_content: response.clone(), agent: "fixture".to_string(), model: Some("deterministic".to_string()), + message: saved_message.unwrap_or(AskMessage { + id: message_id, + conversation_id: String::new(), + role: "assistant".to_string(), + content: response, + agent: 
Some("fixture".to_string()), + model: Some("deterministic".to_string()), + created_at: chrono::Utc::now().to_rfc3339(), + }), }, ); } +fn save_fixture_message( + app: &AppHandle, + project_id: &str, + content: &str, +) -> Option { + let db = app.try_state::()?; + let conn = db.0.lock().ok()?; + let conversation = + crate::ask_engine::storage::get_or_create_conversation(&conn, project_id).ok()?; + crate::ask_engine::storage::insert_message( + &conn, + &conversation.id, + "assistant", + content, + Some("fixture"), + Some("deterministic"), + ) + .ok() +} + fn load_fixture_response( dialog_dir: &Path, fixture_set: FixtureSet, diff --git a/src-tauri/src/ask_engine/session.rs b/src-tauri/src/ask_engine/session.rs index 2196acd..67f8cb1 100644 --- a/src-tauri/src/ask_engine/session.rs +++ b/src-tauri/src/ask_engine/session.rs @@ -12,13 +12,19 @@ pub struct AskSessionsState(pub Arc>); impl AskSessionsState { pub fn insert(&self, project_id: &str, child: CommandChild) -> Result<(), String> { - let mut sessions = self.0.lock().map_err(|_| "Ask session lock poisoned".to_string())?; + let mut sessions = self + .0 + .lock() + .map_err(|_| "Ask session lock poisoned".to_string())?; sessions.sessions.insert(project_id.to_string(), child); Ok(()) } pub fn remove_and_kill(&self, project_id: &str) -> Result { - let mut sessions = self.0.lock().map_err(|_| "Ask session lock poisoned".to_string())?; + let mut sessions = self + .0 + .lock() + .map_err(|_| "Ask session lock poisoned".to_string())?; if let Some(child) = sessions.sessions.remove(project_id) { let _ = child.kill(); Ok(true) diff --git a/src-tauri/src/ask_engine/stream.rs b/src-tauri/src/ask_engine/stream.rs index a414597..e380856 100644 --- a/src-tauri/src/ask_engine/stream.rs +++ b/src-tauri/src/ask_engine/stream.rs @@ -2,7 +2,7 @@ use crate::ask_engine::args::{agent_env_vars, build_ask_args, is_safe_binary_nam use crate::ask_engine::context::build_ask_context; use crate::ask_engine::session::AskSessionsState; use 
crate::ask_engine::types::{ - AskCompletePayload, AskErrorPayload, AskStreamPayload, StartAskArgs, + AskCompletePayload, AskErrorPayload, AskMessage, AskStreamPayload, StartAskArgs, }; use crate::ask_engine::AskEngineError; use crate::db::DbState; @@ -57,10 +57,11 @@ pub async fn spawn_ask( let env_vars = agent_env_vars(&args.agent); let (mut event_rx, child) = if args.agent == "codex" { - let shell_cmd = build_null_stdin_command(&agent_binary, &agent_args); + let (shell_program, shell_args) = + crate::shell_resolve::build_null_stdin_command(&agent_binary, &agent_args); app.shell() - .command("/bin/zsh") - .args(["-lc", &shell_cmd]) + .command(&shell_program) + .args(shell_args) .envs(env_vars) .current_dir(&project_dir) .spawn() @@ -77,7 +78,7 @@ pub async fn spawn_ask( sessions .insert(&args.project_id, child) - .map_err(|err| AskEngineError::Shell(err))?; + .map_err(AskEngineError::Shell)?; let project_id = args.project_id.clone(); let agent_name = args.agent.clone(); @@ -106,15 +107,24 @@ pub async fn spawn_ask( ); } CommandEvent::Terminated(status) => { - let success = status.code.map(|code| code == 0).unwrap_or(false); + let success = status.code.is_some_and(|code| code == 0); if success && !collected.trim().is_empty() { - save_assistant_message( + let message = save_assistant_message( &app_clone, &project_id, &collected, &agent_name, agent_model.as_deref(), - ); + ) + .unwrap_or_else(|| AskMessage { + id: message_id.clone(), + conversation_id: String::new(), + role: "assistant".to_string(), + content: collected.clone(), + agent: Some(agent_name.clone()), + model: agent_model.clone(), + created_at: chrono::Utc::now().to_rfc3339(), + }); let _ = app_clone.emit( EVENT_ASK_COMPLETE, AskCompletePayload { @@ -123,6 +133,7 @@ pub async fn spawn_ask( full_content: collected.clone(), agent: agent_name.clone(), model: agent_model.clone(), + message, }, ); } else { @@ -131,12 +142,24 @@ pub async fn spawn_ask( } else { collected.clone() }; + let content = 
format!("Error: {error_msg}"); + let message = save_error_message(&app_clone, &project_id, &content) + .unwrap_or_else(|| AskMessage { + id: message_id.clone(), + conversation_id: String::new(), + role: "assistant".to_string(), + content, + agent: None, + model: None, + created_at: chrono::Utc::now().to_rfc3339(), + }); let _ = app_clone.emit( EVENT_ASK_ERROR, AskErrorPayload { project_id: project_id.clone(), message_id: message_id.clone(), error: error_msg, + message, }, ); } @@ -169,34 +192,46 @@ fn save_assistant_message( content: &str, agent: &str, model: Option<&str>, -) { +) -> Option { let db = app.state::(); - let Ok(conn) = db.0.lock() else { return }; + let Ok(conn) = db.0.lock() else { return None }; let Ok(conversation) = crate::ask_engine::storage::get_or_create_conversation(&conn, project_id) else { - return; + return None; }; - let _ = crate::ask_engine::storage::insert_message( + crate::ask_engine::storage::insert_message( &conn, &conversation.id, "assistant", content, Some(agent), model, - ); + ) + .ok() } -fn build_null_stdin_command(binary: &str, args: &[String]) -> String { - let escaped_args: Vec = args - .iter() - .map(|arg| format!("'{}'", arg.replace('\'', "'\\''"))) - .collect(); - format!( - "{binary} {args} < /dev/null", - binary = binary, - args = escaped_args.join(" ") +fn save_error_message( + app: &AppHandle, + project_id: &str, + content: &str, +) -> Option { + let db = app.state::(); + let Ok(conn) = db.0.lock() else { return None }; + let Ok(conversation) = + crate::ask_engine::storage::get_or_create_conversation(&conn, project_id) + else { + return None; + }; + crate::ask_engine::storage::insert_message( + &conn, + &conversation.id, + "assistant", + content, + None, + None, ) + .ok() } async fn resolve_agent_binary( @@ -210,11 +245,11 @@ async fn resolve_agent_binary( ))); } - let lookup = format!("command -v {binary}"); + let (shell_program, shell_args) = crate::shell_resolve::resolve_binary_via_shell(binary); let output = app 
.shell() - .command("/bin/zsh") - .args(["-lc", &lookup]) + .command(&shell_program) + .args(shell_args) .output() .await .map_err(|err| AskEngineError::Shell(err.to_string()))?; diff --git a/src-tauri/src/ask_engine/types.rs b/src-tauri/src/ask_engine/types.rs index 85b1b3d..9613ee4 100644 --- a/src-tauri/src/ask_engine/types.rs +++ b/src-tauri/src/ask_engine/types.rs @@ -32,6 +32,13 @@ pub struct StartAskArgs { pub model: Option, } +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct AskQuestionResult { + pub message_id: String, + pub user_message: AskMessage, +} + #[derive(Debug, Clone, Serialize)] #[serde(rename_all = "camelCase")] pub struct AskStreamPayload { @@ -49,6 +56,7 @@ pub struct AskCompletePayload { pub agent: String, #[serde(default)] pub model: Option, + pub message: AskMessage, } #[derive(Debug, Clone, Serialize)] @@ -57,4 +65,5 @@ pub struct AskErrorPayload { pub project_id: String, pub message_id: String, pub error: String, + pub message: AskMessage, } diff --git a/src-tauri/src/atomizer/activity.rs b/src-tauri/src/atomizer/activity.rs new file mode 100644 index 0000000..9d1f8ee --- /dev/null +++ b/src-tauri/src/atomizer/activity.rs @@ -0,0 +1,55 @@ +use crate::atomizer::types::{AtomizeActivity, AtomizeActivityKind}; +use crate::events::EVENT_ATOMIZATION_ACTIVITY; +use std::collections::{HashMap, VecDeque}; +use std::sync::{Arc, Mutex}; +use tauri::{AppHandle, Emitter, Manager, Runtime}; + +const MAX_LOG_ENTRIES: usize = 300; + +#[derive(Debug, Default)] +pub struct ActivityLog { + entries: HashMap>, +} + +impl ActivityLog { + fn push(&mut self, entry: AtomizeActivity) { + let ring = self + .entries + .entry(entry.project_id.clone()) + .or_insert_with(|| VecDeque::with_capacity(MAX_LOG_ENTRIES)); + if ring.len() >= MAX_LOG_ENTRIES { + ring.pop_front(); + } + ring.push_back(entry); + } + + pub fn get(&self, project_id: &str) -> Vec { + self.entries + .get(project_id) + .map(|ring| 
ring.iter().cloned().collect())
+            .unwrap_or_default()
+    }
+}
+
+#[derive(Debug, Clone, Default)]
+pub struct ActivityLogState(pub Arc<Mutex<ActivityLog>>);
+
+pub(super) fn emit_activity<R: Runtime>(
+    app: &AppHandle<R>,
+    project_id: &str,
+    kind: AtomizeActivityKind,
+    content: &str,
+) {
+    let entry = AtomizeActivity {
+        project_id: project_id.to_string(),
+        kind,
+        content: content.to_string(),
+        timestamp: chrono::Utc::now().to_rfc3339_opts(chrono::SecondsFormat::Millis, true),
+    };
+
+    if let Ok(mut log) = app.state::<ActivityLogState>().0.lock() {
+        log.push(entry.clone());
+    }
+
+    let _ = app.emit(EVENT_ATOMIZATION_ACTIVITY, entry);
+}
diff --git a/src-tauri/src/atomizer/agent_args.rs b/src-tauri/src/atomizer/agent_args.rs
index 8168f89..7df4d76 100644
--- a/src-tauri/src/atomizer/agent_args.rs
+++ b/src-tauri/src/atomizer/agent_args.rs
@@ -32,13 +32,13 @@ pub(super) fn build_agent_args(
     effort: Option<&str>,
 ) -> Vec<String> {
     let selected_model = model
-        .map(|value| value.trim())
+        .map(str::trim)
         .filter(|value| !value.is_empty())
-        .map(|value| value.to_string());
+        .map(ToString::to_string);
     let selected_effort = effort
-        .map(|value| value.trim())
+        .map(str::trim)
         .filter(|value| !value.is_empty())
-        .map(|value| value.to_string());
+        .map(ToString::to_string);
     match agent {
         "claude" => {
             let mut args = vec![
@@ -51,7 +51,7 @@
                 "--mcp-config".to_string(),
                 crate::agent_runtime::CLAUDE_EMPTY_MCP_CONFIG.to_string(),
                 "--tools".to_string(),
-                "".to_string(),
+                String::new(),
             ];
             if let Some(model_id) = selected_model {
                 args.extend(["--model".to_string(), model_id]);
@@ -127,10 +127,21 @@
 }
 
 pub(super) fn shell_quote(value: &str) -> String {
-    if value.is_empty() {
-        "''".to_string()
-    } else {
-        format!("'{}'", value.replace('\'', "'\"'\"'"))
+    #[cfg(windows)]
+    {
+        if value.is_empty() {
+            "\"\"".to_string()
+        } else {
+            format!("\"{}\"", value.replace('"', "\"\""))
+        }
+    }
+    #[cfg(not(windows))]
+    {
+        if value.is_empty() {
+            "''".to_string()
+        } else
{ + format!("'{}'", value.replace('\'', "'\"'\"'")) + } } } @@ -140,5 +151,12 @@ pub(super) fn build_null_stdin_shell_command(agent_binary: &str, args: &[String] for value in args { command_parts.push(shell_quote(value)); } - format!("{} < /dev/null", command_parts.join(" ")) + #[cfg(windows)] + { + format!("{} < NUL", command_parts.join(" ")) + } + #[cfg(not(windows))] + { + format!("{} < /dev/null", command_parts.join(" ")) + } } diff --git a/src-tauri/src/atomizer/agent_invoke.rs b/src-tauri/src/atomizer/agent_invoke.rs index d364996..bd606b8 100644 --- a/src-tauri/src/atomizer/agent_invoke.rs +++ b/src-tauri/src/atomizer/agent_invoke.rs @@ -1,6 +1,4 @@ -use crate::atomizer::agent_args::{ - agent_env_vars, build_agent_args, build_null_stdin_shell_command, is_safe_binary_name, -}; +use crate::atomizer::agent_args::{agent_env_vars, build_agent_args, is_safe_binary_name}; use crate::atomizer::progress::emit_progress; use crate::atomizer::AtomizerError; use std::path::Path; @@ -20,11 +18,11 @@ async fn resolve_agent_binary( ))); } - let lookup = format!("command -v {binary}"); + let (shell_program, shell_args) = crate::shell_resolve::resolve_binary_via_shell(binary); let output = app .shell() - .command("/bin/zsh") - .args(["-lc", &lookup]) + .command(&shell_program) + .args(shell_args) .output() .await .map_err(|err| AtomizerError::AgentFailed(err.to_string()))?; @@ -129,10 +127,11 @@ async fn invoke_inner( project_dir: &Path, ) -> Result { let output = if needs_null_stdin(agent) { - let shell_command = build_null_stdin_shell_command(agent_binary, args); + let (shell_program, shell_args) = + crate::shell_resolve::build_null_stdin_command(agent_binary, args); app.shell() - .command("/bin/zsh") - .args(["-lc", &shell_command]) + .command(&shell_program) + .args(shell_args) .envs(env_vars.to_vec()) .current_dir(project_dir) .output() diff --git a/src-tauri/src/atomizer/artifacts.rs b/src-tauri/src/atomizer/artifacts.rs index c9c7884..4a1ae93 100644 --- 
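The non-Windows branch of `shell_quote` above relies on the classic POSIX idiom for embedding a single quote inside a single-quoted string: close the quotes, emit a double-quoted `'`, and reopen. A standalone sketch of just that branch:

```rust
// POSIX single-quoting: wrap the value in '…' and rewrite each embedded
// quote as '"'"' (close the string, quote the quote, reopen the string).
fn shell_quote_posix(value: &str) -> String {
    if value.is_empty() {
        "''".to_string()
    } else {
        format!("'{}'", value.replace('\'', "'\"'\"'"))
    }
}

fn main() {
    assert_eq!(shell_quote_posix(""), "''");
    assert_eq!(shell_quote_posix("plain"), "'plain'");
    // it's  →  'it'"'"'s'
    assert_eq!(shell_quote_posix("it's"), "'it'\"'\"'s'");
}
```

The Windows branch in the diff instead doubles embedded `"` characters, which is why the two arms cannot share one escape routine.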
a/src-tauri/src/atomizer/artifacts.rs +++ b/src-tauri/src/atomizer/artifacts.rs @@ -34,24 +34,80 @@ pub(super) fn save_artifacts(project_dir: &Path, prd: &Prd) -> Result<(), Atomiz fn build_default_prompt(project_name: &str) -> String { format!( "# {project_name} — Execution Prompt\n\n\ -You are an expert software engineer implementing user stories for this project.\n\n\ -For each story:\n\ -1. Read the story and acceptance criteria carefully\n\ -2. Implement the changes described in scope.filesToModify and scope.filesToCreate\n\ -3. Run verification.commands to confirm your work\n\ -4. Mark the story as passed by updating prd.json: set `\"passes\": true` for the story\n\ -5. Commit with the provided commitMessage\n\n\ -IMPORTANT: Only modify files listed in scope.filesToModify or scope.filesToCreate.\n\ -Never modify files matching patterns in scope.filesToAvoid.\n" +## Identity\n\n\ +You are an expert software engineer implementing user stories for {project_name}. \ +You receive one story at a time from prd.json. Your job is to implement it correctly, \ +verify it works, and commit the result.\n\n\ +## Workflow per story\n\n\ +1. Read the story JSON carefully: title, description, acceptanceCriteria, scope, verification\n\ +2. Explore affected files before writing code. Understand existing patterns and conventions\n\ +3. Implement the changes scoped to filesToModify and filesToCreate\n\ +4. Run every command in verification.commands and confirm each passes\n\ +5. Check each acceptance criterion against the actual behavior\n\ +6. Update prd.json: set `\"passes\": true` for the story\n\ +7. 
Commit with the exact commitMessage from the story\n\n\ +## Code quality\n\n\ +- Match the existing code style, naming conventions, and patterns in the project\n\ +- Do not add features, refactoring, or improvements beyond what the story requires\n\ +- Do not add comments that narrate what the code does; only comment non-obvious intent\n\ +- Do not create abstractions for one-time operations or hypothetical future use\n\ +- Do not add error handling for scenarios that cannot happen in the current context\n\ +- Three similar lines of code are better than a premature abstraction\n\n\ +## Scope enforcement\n\n\ +- ONLY modify files listed in scope.filesToModify\n\ +- ONLY create files listed in scope.filesToCreate\n\ +- NEVER touch files matching patterns in scope.filesToAvoid\n\ +- If you discover a necessary change outside scope, note it in the story's notes field \ +but do not make the change\n\n\ +## Verification\n\n\ +- Run ALL commands in verification.commands before marking a story as passed\n\ +- If a command fails, diagnose the root cause before retrying\n\ +- Do not claim a story passes when verification output shows failures\n\ +- If you cannot make a story pass after a genuine attempt, set passes to false \ +and write a clear explanation in the notes field\n\n\ +## Failure handling\n\n\ +- If verification fails, read the error output carefully\n\ +- Fix the root cause, not the symptom\n\ +- Do not retry the same approach more than twice without changing strategy\n\ +- If blocked by a dependency that should have been resolved by an earlier story, \ +set blocked to true and explain in notes\n" ) } fn build_default_guardrails(project_name: &str) -> String { format!( "# {project_name} — Guardrails\n\n\ -- Do not modify files outside the defined scope\n\ -- Run all verification commands before marking a story as passed\n\ +## Scope boundaries\n\n\ +- Only modify files explicitly listed in the story's scope.filesToModify\n\ +- Only create files explicitly listed 
in scope.filesToCreate\n\ +- Never touch files matching patterns in scope.filesToAvoid\n\ +- If a change outside scope is necessary, document it in the story's notes field \ +and leave it for a human to decide\n\n\ +## Verification before completion\n\n\ +- Run every command in verification.commands and read the full output\n\ +- Check each acceptance criterion against actual observed behavior\n\ +- A story is passed ONLY when all verification commands exit 0 and all criteria are met\n\ +- Never mark a story as passed based on expectation; only mark it based on evidence\n\n\ +## Honest reporting\n\n\ +- If tests fail, report the failure with the relevant output\n\ +- If you did not run a verification step, say so instead of implying it succeeded\n\ +- Never claim \"all tests pass\" when output shows failures\n\ +- Never suppress, simplify, or reinterpret failing checks to manufacture a green result\n\ +- When a check did pass, state it plainly without unnecessary disclaimers\n\n\ +## Commit discipline\n\n\ - Commit only the changes for the current story\n\ -- If verification fails, fix the issue before proceeding\n" +- Use the exact commitMessage from the story JSON\n\ +- Do not bundle unrelated changes into a story's commit\n\ +- Do not commit generated files, build artifacts, or local configuration\n\n\ +## Error recovery\n\n\ +- If verification fails, diagnose the root cause by reading error output\n\ +- Fix the underlying issue, not the symptom\n\ +- If the same approach fails twice, change strategy\n\ +- If blocked by a missing dependency from a prior story, set blocked to true \ +and explain in notes instead of attempting a workaround\n\n\ +## Dependency ordering\n\n\ +- Implement stories in the order provided by prd.json\n\ +- Do not skip ahead to a later story\n\ +- If a story's dependsOn references an incomplete story, set blocked to true\n" ) } diff --git a/src-tauri/src/atomizer/mod.rs b/src-tauri/src/atomizer/mod.rs index f49f211..f83d7aa 100644 --- 
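Earlier in this diff, `ActivityLog` bounds per-project history by treating a `VecDeque` as a ring: once the deque reaches `MAX_LOG_ENTRIES`, the oldest entry is popped before the new one is pushed. A minimal sketch of that push-with-cap behavior, with the capacity shrunk for illustration:

```rust
use std::collections::VecDeque;

// Illustrative cap; the real ActivityLog uses MAX_LOG_ENTRIES = 300.
const CAP: usize = 3;

// Drop the oldest entry once the ring is full, then append the new one.
fn push_capped(ring: &mut VecDeque<u32>, entry: u32) {
    if ring.len() >= CAP {
        ring.pop_front();
    }
    ring.push_back(entry);
}

fn main() {
    let mut ring = VecDeque::with_capacity(CAP);
    for entry in 1..=5 {
        push_capped(&mut ring, entry);
    }
    // Only the newest CAP entries survive.
    assert_eq!(ring.iter().copied().collect::<Vec<_>>(), vec![3, 4, 5]);
}
```

Because the check runs before every push, the deque never exceeds the cap and never reallocates past its initial capacity.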
a/src-tauri/src/atomizer/mod.rs
+++ b/src-tauri/src/atomizer/mod.rs
@@ -1,3 +1,4 @@
+mod activity;
 mod agent_args;
 mod agent_invoke;
 mod artifacts;
@@ -11,8 +12,10 @@ mod stages;
 mod templates;
 mod types;
 
+pub use activity::ActivityLogState;
+pub use progress::{get_pipeline_snapshot, PipelineRegistryState};
 pub use run::run_atomizer;
-pub use types::{AtomizeArgs, AtomizeProgress, AtomizerError};
+pub use types::{AtomizeActivity, AtomizeArgs, AtomizeProgress, AtomizerError, PipelineSnapshot};
 
 #[cfg(test)]
 mod tests_json;
diff --git a/src-tauri/src/atomizer/progress.rs b/src-tauri/src/atomizer/progress.rs
index ef8d7d3..74bbf49 100644
--- a/src-tauri/src/atomizer/progress.rs
+++ b/src-tauri/src/atomizer/progress.rs
@@ -1,5 +1,86 @@
+use crate::atomizer::types::{PipelineSnapshot, StageStatus};
 use crate::atomizer::AtomizeProgress;
-use tauri::{AppHandle, Emitter, Runtime};
+use std::collections::HashMap;
+use std::sync::{Arc, Mutex};
+use std::time::Instant;
+use tauri::{AppHandle, Emitter, Manager, Runtime};
+
+#[derive(Debug)]
+struct PipelineEntry {
+    snapshot: PipelineSnapshot,
+    started_instant: Instant,
+}
+
+#[derive(Debug, Default)]
+pub struct PipelineRegistry {
+    entries: HashMap<String, PipelineEntry>,
+}
+
+#[derive(Debug, Clone, Default)]
+pub struct PipelineRegistryState(pub Arc<Mutex<PipelineRegistry>>);
+
+pub(super) fn mark_pipeline_start<R: Runtime>(app: &AppHandle<R>, project_id: &str) {
+    let now_rfc = chrono::Utc::now().to_rfc3339_opts(chrono::SecondsFormat::Millis, true);
+    let mut snapshot = PipelineSnapshot::initial();
+    snapshot.started_at = Some(now_rfc);
+    if let Ok(mut registry) = app.state::<PipelineRegistryState>().0.lock() {
+        registry.entries.insert(
+            project_id.to_string(),
+            PipelineEntry {
+                snapshot,
+                started_instant: Instant::now(),
+            },
+        );
+    }
+}
+
+pub(super) fn mark_pipeline_error<R: Runtime>(app: &AppHandle<R>, project_id: &str, error: &str) {
+    if let Ok(mut registry) = app.state::<PipelineRegistryState>().0.lock() {
+        if let Some(entry) = registry.entries.get_mut(project_id) {
+            entry.snapshot.error = Some(error.to_string());
+            for stage in
&mut entry.snapshot.stages {
+                if stage.status == StageStatus::Running {
+                    stage.status = StageStatus::Error;
+                }
+            }
+            entry.snapshot.elapsed_ms = entry.started_instant.elapsed().as_millis() as u64;
+        }
+    }
+}
+
+pub(super) fn mark_pipeline_done<R: Runtime>(app: &AppHandle<R>, project_id: &str) {
+    if let Ok(mut registry) = app.state::<PipelineRegistryState>().0.lock() {
+        if let Some(entry) = registry.entries.get_mut(project_id) {
+            entry.snapshot.done = true;
+            for stage in &mut entry.snapshot.stages {
+                if stage.status != StageStatus::Error {
+                    stage.status = StageStatus::Done;
+                }
+            }
+            entry.snapshot.elapsed_ms = entry.started_instant.elapsed().as_millis() as u64;
+        }
+    }
+}
+
+pub fn get_pipeline_snapshot<R: Runtime>(
+    app: &AppHandle<R>,
+    project_id: &str,
+) -> Option<PipelineSnapshot> {
+    app.state::<PipelineRegistryState>()
+        .0
+        .lock()
+        .ok()
+        .and_then(|registry| {
+            registry.entries.get(project_id).map(|entry| {
+                let mut snap = entry.snapshot.clone();
+                if !snap.done && snap.error.is_none() {
+                    snap.elapsed_ms = entry.started_instant.elapsed().as_millis() as u64;
+                }
+                snap.recompute_derived();
+                snap
+            })
+        })
+}
 
 pub(super) fn emit_progress<R: Runtime>(
     app: &AppHandle<R>,
@@ -8,6 +89,31 @@
     stage_name: &str,
     message: &str,
 ) {
+    if let Ok(mut registry) = app.state::<PipelineRegistryState>().0.lock() {
+        if let Some(entry) = registry.entries.get_mut(project_id) {
+            for snap_stage in &mut entry.snapshot.stages {
+                if snap_stage.number == stage {
+                    snap_stage.status = StageStatus::Running;
+                } else if snap_stage.number < stage {
+                    snap_stage.status = StageStatus::Done;
+                }
+            }
+            entry.snapshot.elapsed_ms = entry.started_instant.elapsed().as_millis() as u64;
+        }
+    }
+
+    let elapsed_ms = app
+        .state::<PipelineRegistryState>()
+        .0
+        .lock()
+        .ok()
+        .and_then(|reg| {
+            reg.entries
+                .get(project_id)
+                .map(|ent| ent.snapshot.elapsed_ms)
+        })
+        .unwrap_or(0);
+
     let _ = app.emit(
         "atomization-progress",
         AtomizeProgress {
@@ -15,6 +121,7 @@
             stage_name: stage_name.to_string(),
             message: message.to_string(),
             project_id: project_id.to_string(),
+
elapsed_ms, }, ); } diff --git a/src-tauri/src/atomizer/run.rs b/src-tauri/src/atomizer/run.rs index 2957a38..0b83e4f 100644 --- a/src-tauri/src/atomizer/run.rs +++ b/src-tauri/src/atomizer/run.rs @@ -1,9 +1,13 @@ +use crate::atomizer::activity::emit_activity; use crate::atomizer::artifacts::save_artifacts; use crate::atomizer::io::{artifact_dir, load_plan_content}; -use crate::atomizer::progress::emit_progress; +use crate::atomizer::progress::{ + emit_progress, mark_pipeline_done, mark_pipeline_error, mark_pipeline_start, +}; use crate::atomizer::sanitize::sanitize_codex_plan_content; use crate::atomizer::stages::{stage_atomize, stage_chunk, stage_merge, stage_summarize}; use crate::atomizer::templates::load_templates; +use crate::atomizer::types::AtomizeActivityKind; use crate::atomizer::{AtomizeArgs, AtomizerError}; use ralph_core::prd::Prd; use tauri::{AppHandle, Runtime}; @@ -11,6 +15,32 @@ use tauri::{AppHandle, Runtime}; pub async fn run_atomizer( app: AppHandle, args: AtomizeArgs, +) -> Result { + let pid = args.project_id.clone(); + mark_pipeline_start(&app, &pid); + + let result = execute_pipeline(&app, &args).await; + + match &result { + Ok(prd) => { + emit_progress( + &app, + &pid, + 4, + "merge", + &format!("Done — {} stories", prd.stories.len()), + ); + mark_pipeline_done(&app, &pid); + } + Err(err) => mark_pipeline_error(&app, &pid, &err.to_string()), + } + + result +} + +async fn execute_pipeline( + app: &AppHandle, + args: &AtomizeArgs, ) -> Result { if !args.project_dir.exists() { return Err(AtomizerError::Path(format!( @@ -26,7 +56,7 @@ pub async fn run_atomizer( } let env = load_templates()?; - let artifact_path = artifact_dir(&app, &args.project_id)?; + let artifact_path = artifact_dir(app, &args.project_id)?; std::fs::create_dir_all(&artifact_path)?; let plan_content = @@ -39,10 +69,22 @@ pub async fn run_atomizer( } let pid = &args.project_id; + emit_activity( + app, + pid, + AtomizeActivityKind::PlanLoaded, + &format!("Plan loaded ({} 
chars)", plan_content.len()), + ); - emit_progress(&app, pid, 1, "summarize", "Summarizing plan..."); + emit_progress(app, pid, 1, "summarize", "Summarizing plan..."); + emit_activity( + app, + pid, + AtomizeActivityKind::TemplateRender, + "Rendering summarize template", + ); let condensed = stage_summarize( - &app, + app, pid, &env, &plan_content, @@ -53,9 +95,22 @@ pub async fn run_atomizer( ) .await?; - emit_progress(&app, pid, 2, "chunk", "Splitting into sections..."); + emit_activity( + app, + pid, + AtomizeActivityKind::AgentComplete, + &format!("Summarize complete ({} chars condensed)", condensed.len()), + ); + + emit_progress(app, pid, 2, "chunk", "Splitting into sections..."); + emit_activity( + app, + pid, + AtomizeActivityKind::TemplateRender, + "Rendering chunk template", + ); let sections = stage_chunk( - &app, + app, pid, &env, &condensed, @@ -66,15 +121,25 @@ pub async fn run_atomizer( ) .await?; + let section_count = sections.len(); + for (idx, section) in sections.iter().enumerate() { + emit_activity( + app, + pid, + AtomizeActivityKind::ChunkDetected, + &format!("[{}/{}] {}", idx + 1, section_count, section.title), + ); + } + emit_progress( - &app, + app, pid, 3, "atomize", - &format!("Atomizing {} sections...", sections.len()), + &format!("Atomizing {section_count} sections..."), ); let all_stories = stage_atomize( - &app, + app, pid, &env, §ions, @@ -86,9 +151,25 @@ pub async fn run_atomizer( ) .await?; - emit_progress(&app, pid, 4, "merge", "Merging and ordering stories..."); + emit_activity( + app, + pid, + AtomizeActivityKind::StoryExtracted, + &format!( + "{} raw stories across {section_count} sections", + all_stories.len() + ), + ); + + emit_progress(app, pid, 4, "merge", "Merging and ordering stories..."); + emit_activity( + app, + pid, + AtomizeActivityKind::TemplateRender, + "Rendering merge template", + ); let prd = stage_merge( - &app, + app, pid, &env, all_stories, @@ -100,16 +181,28 @@ pub async fn run_atomizer( ) .await?; + 
emit_activity(
+        app,
+        pid,
+        AtomizeActivityKind::Validation,
+        &format!("Validating atomicity of {} stories", prd.stories.len()),
+    );
     prd.validate_atomicity()
         .map_err(|err| AtomizerError::Validation(err.to_string()))?;
-    save_artifacts(&artifact_path, &prd)?;
+    emit_activity(
+        app,
+        pid,
+        AtomizeActivityKind::Validation,
+        "Validation passed",
+    );
-    emit_progress(
-        &app,
+    save_artifacts(&artifact_path, &prd)?;
+    emit_activity(
+        app,
         pid,
-        4,
-        "merge",
-        &format!("Done — {} stories", prd.stories.len()),
+        AtomizeActivityKind::ArtifactSaved,
+        "Artifacts saved (prd.json, prompt.md, guardrails.md)",
     );
+
     Ok(prd)
 }
diff --git a/src-tauri/src/atomizer/sanitize.rs b/src-tauri/src/atomizer/sanitize.rs
index b3d564a..4de29de 100644
--- a/src-tauri/src/atomizer/sanitize.rs
+++ b/src-tauri/src/atomizer/sanitize.rs
@@ -39,8 +39,7 @@ pub(super) fn sanitize_codex_plan_content(raw: &str) -> String {
         }
         previous_blank = false;

-        if trimmed == "exec" || trimmed.starts_with("exec ") || trimmed.starts_with("/bin/zsh -lc")
-        {
+        if crate::shell_resolve::is_shell_exec_line(trimmed) {
             in_exec_output = true;
             continue;
         }
diff --git a/src-tauri/src/atomizer/stages.rs b/src-tauri/src/atomizer/stages.rs
index 6ec0926..283a7ff 100644
--- a/src-tauri/src/atomizer/stages.rs
+++ b/src-tauri/src/atomizer/stages.rs
@@ -1,8 +1,11 @@
+use crate::atomizer::activity::emit_activity;
 use crate::atomizer::agent_invoke::{invoke_agent, invoke_agent_with_heartbeat};
 use crate::atomizer::chunking::chunk_large_plan;
 use crate::atomizer::json_parse::parse_json_from_candidates;
 use crate::atomizer::progress::emit_progress;
-use crate::atomizer::types::{AtomizedStoryDraft, AtomizerError, ChunkSection};
+use crate::atomizer::types::{
+    AtomizeActivityKind, AtomizedStoryDraft, AtomizerError, ChunkSection,
+};
 use minijinja::{context, Environment};
 use ralph_core::prd::Prd;
 use std::path::Path;
@@ -19,6 +22,15 @@ pub(super) async fn stage_summarize(
     project_dir: &Path,
 ) -> Result<String, AtomizerError> {
     let plan_chunks = chunk_large_plan(plan_content);
+    if plan_chunks.len() > 1 {
+        let count = plan_chunks.len();
+        emit_activity(
+            app,
+            project_id,
+            AtomizeActivityKind::PlanLoaded,
+            &format!("Large plan split into {count} chunks"),
+        );
+    }

     let mut condensed_parts = Vec::with_capacity(plan_chunks.len());
     for chunk in plan_chunks {
@@ -28,10 +40,22 @@ pub(super) async fn stage_summarize(
         let prompt = tmpl
             .render(context! { plan_content => chunk })
             .map_err(|err| AtomizerError::Template(err.to_string()))?;
+        emit_activity(
+            app,
+            project_id,
+            AtomizeActivityKind::AgentStart,
+            &format!("Invoking {agent} for summarization"),
+        );
         let hb = Some((project_id.to_string(), 1, "summarize".to_string()));
         let result =
             invoke_agent_with_heartbeat(app, agent, model, effort, &prompt, project_dir, hb)
                 .await?;
+        emit_activity(
+            app,
+            project_id,
+            AtomizeActivityKind::AgentComplete,
+            &format!("Summary chunk: {} chars", result.len()),
+        );
         condensed_parts.push(result);
     }

@@ -55,9 +79,21 @@ pub(super) async fn stage_chunk(
         .render(context! { condensed_plan => condensed_plan })
         .map_err(|err| AtomizerError::Template(err.to_string()))?;
+    emit_activity(
+        app,
+        project_id,
+        AtomizeActivityKind::AgentStart,
+        &format!("Invoking {agent} for chunking"),
+    );
     let hb = Some((project_id.to_string(), 2, "chunk".to_string()));
     let raw =
         invoke_agent_with_heartbeat(app, agent, model, effort, &prompt, project_dir, hb).await?;
+    emit_activity(
+        app,
+        project_id,
+        AtomizeActivityKind::AgentComplete,
+        "Chunk response received, parsing JSON",
+    );
     parse_json_from_candidates::<Vec<ChunkSection>>(&raw, '[').map_err(|(err, candidate)| {
         AtomizerError::JsonParse {
             stage: "chunk",
@@ -66,16 +102,14 @@ pub(super) async fn stage_chunk(
     })
 }

-fn build_json_retry_prompt(section_title: &str, section_content: &str, failed_raw: &str) -> String {
+fn build_json_retry_prompt(title: &str, content: &str, failed: &str) -> String {
+    let snippet = &failed[..failed.len().min(300)];
     format!(
-        "Your previous response was not valid JSON. You must return ONLY a JSON array of story objects.\n\n\
-         Section: {section_title}\n\n\
-         Previous (invalid) response (first 300 chars):\n{snippet}\n\n\
-         Rewrite your response as a valid JSON array based on this section content:\n\n\
-         {section_content}\n\n\
-         CRITICAL: Output ONLY the JSON array. First character must be [, last character must be ].\n\
-         No prose, no requests, no markdown fences.",
-        snippet = &failed_raw[..failed_raw.len().min(300)]
+        "Your previous response was not valid JSON. Return ONLY a JSON array of story objects.\n\n\
+         Section: {title}\n\nPrevious (invalid) response (first 300 chars):\n{snippet}\n\n\
+         Rewrite as a valid JSON array based on this section content:\n\n{content}\n\n\
+         CRITICAL: Output ONLY the JSON array. First character must be [, last must be ].\n\
+         No prose, no requests, no markdown fences."
     )
 }

@@ -91,8 +125,15 @@ pub(super) async fn stage_atomize(
     project_dir: &Path,
 ) -> Result<Vec<AtomizedStoryDraft>, AtomizerError> {
     let mut all_stories: Vec<AtomizedStoryDraft> = Vec::new();
-
-    for section in sections {
+    let total = sections.len();
+    for (idx, section) in sections.iter().enumerate() {
+        let section_label = format!("[{}/{}] '{}'", idx + 1, total, section.title);
+        emit_activity(
+            app,
+            project_id,
+            AtomizeActivityKind::SectionProcess,
+            &format!("Processing {section_label}"),
+        );
         let tmpl = env
             .get_template("stories")
             .map_err(|err| AtomizerError::Template(err.to_string()))?;
@@ -104,6 +145,12 @@ pub(super) async fn stage_atomize(
             })
             .map_err(|err| AtomizerError::Template(err.to_string()))?;

+        emit_activity(
+            app,
+            project_id,
+            AtomizeActivityKind::AgentStart,
+            &format!("Invoking {agent} for {section_label}"),
+        );
         let hb = Some((project_id.to_string(), 3, "atomize".to_string()));
         let raw = invoke_agent_with_heartbeat(app, agent, model, effort, &prompt, project_dir, hb)
             .await?;
@@ -111,6 +158,12 @@ pub(super) async fn stage_atomize(
         let stories: Vec<AtomizedStoryDraft> = match parse_json_from_candidates(&raw, '[') {
             Ok(stories) => stories,
             Err((_first_err, first_candidate)) => {
+                emit_activity(
+                    app,
+                    project_id,
+                    AtomizeActivityKind::Retry,
+                    &format!("JSON parse failed for {section_label}"),
+                );
                 emit_progress(
                     app,
                     project_id,
@@ -140,6 +193,13 @@ pub(super) async fn stage_atomize(
             }
         };

+        let extracted = stories.len();
+        emit_activity(
+            app,
+            project_id,
+            AtomizeActivityKind::StoryExtracted,
+            &format!("{extracted} stories from {section_label}"),
+        );
         all_stories.extend(stories);
     }

@@ -175,9 +235,21 @@ pub(super) async fn stage_merge(
         })
         .map_err(|err| AtomizerError::Template(err.to_string()))?;

+    emit_activity(
+        app,
+        project_id,
+        AtomizeActivityKind::AgentStart,
+        &format!("Invoking {agent} for merge"),
+    );
     let hb = Some((project_id.to_string(), 4, "merge".to_string()));
     let raw =
         invoke_agent_with_heartbeat(app, agent, model, effort, &prompt, project_dir, hb).await?;
+    emit_activity(
+        app,
+        project_id,
+        AtomizeActivityKind::AgentComplete,
+        "Merge received, parsing PRD",
+    );
     parse_json_from_candidates::<Prd>(&raw, '{').map_err(|(err, candidate)| {
         AtomizerError::JsonParse {
             stage: "merge",
diff --git a/src-tauri/src/atomizer/tests_validation.rs b/src-tauri/src/atomizer/tests_validation.rs
index 33b8e1b..10dfdb3 100644
--- a/src-tauri/src/atomizer/tests_validation.rs
+++ b/src-tauri/src/atomizer/tests_validation.rs
@@ -35,13 +35,25 @@ fn chunk_large_plan_preserves_all_content() {

 #[test]
 fn build_agent_args_adds_codex_skip_repo_flag() {
-    let args = build_agent_args("codex", "Generate output", Path::new("/tmp/demo"), None, None);
+    let args = build_agent_args(
+        "codex",
+        "Generate output",
+        Path::new("/tmp/demo"),
+        None,
+        None,
+    );
     assert!(args.iter().any(|value| value == "--skip-git-repo-check"));
 }

 #[test]
 fn build_agent_args_adds_codex_bypass_flag() {
-    let args = build_agent_args("codex", "Generate output", Path::new("/tmp/demo"), None, None);
+    let args = build_agent_args(
+        "codex",
+        "Generate output",
+        Path::new("/tmp/demo"),
+        None,
+        None,
+    );
     assert!(args
         .iter()
         .any(|value| value == "--dangerously-bypass-approvals-and-sandbox"));
@@ -49,7 +61,13 @@ fn build_agent_args_adds_codex_bypass_flag() {

 #[test]
 fn build_agent_args_includes_codex_working_directory() {
-    let args = build_agent_args("codex", "Generate output", Path::new("/tmp/demo"), None, None);
+    let args = build_agent_args(
+        "codex",
+        "Generate output",
+        Path::new("/tmp/demo"),
+        None,
+        None,
+    );
     let has_directory_flag = args.windows(2).any(|pair| {
         pair.first().map(|value| value.as_str()) == Some("-C")
             && pair.get(1).map(|value| value.as_str()) == Some("/tmp/demo")
@@ -67,12 +85,21 @@ fn shell_quote_escapes_single_quotes() {
 fn build_null_stdin_shell_command_appends_redirection() {
     let args = vec!["exec".to_string(), "prompt body".to_string()];
     let command = build_null_stdin_shell_command("codex", &args);
+    #[cfg(windows)]
+    assert!(command.ends_with("< NUL"));
+    #[cfg(not(windows))]
     assert!(command.ends_with("< /dev/null"));
 }

 #[test]
 fn build_agent_args_omits_removed_opencode_auto_share_flag() {
-    let args = build_agent_args("opencode", "Generate output", Path::new("/tmp/demo"), None, None);
+    let args = build_agent_args(
+        "opencode",
+        "Generate output",
+        Path::new("/tmp/demo"),
+        None,
+        None,
+    );
     assert!(!args.iter().any(|value| value == "--no-auto-share"));
 }

@@ -85,8 +112,13 @@ fn build_agent_args_sets_codex_model_and_effort_when_provided() {
         Some("gpt-5.4"),
         Some("high"),
     );
-    let has_model = args.windows(2).any(|pair| pair.first().map(|v| v.as_str()) == Some("--model") && pair.get(1).map(|v| v.as_str()) == Some("gpt-5.4"));
-    let has_effort = args.windows(2).any(|pair| pair.first().map(|v| v.as_str()) == Some("--reasoning-effort"));
+    let has_model = args.windows(2).any(|pair| {
+        pair.first().map(|v| v.as_str()) == Some("--model")
+            && pair.get(1).map(|v| v.as_str()) == Some("gpt-5.4")
+    });
+    let has_effort = args
+        .windows(2)
+        .any(|pair| pair.first().map(|v| v.as_str()) == Some("--reasoning-effort"));
     assert!(has_model);
     assert!(!has_effort, "codex should not receive --reasoning-effort");
 }
diff --git a/src-tauri/src/atomizer/types.rs b/src-tauri/src/atomizer/types.rs
index a404735..6e5bc8e 100644
--- a/src-tauri/src/atomizer/types.rs
+++ b/src-tauri/src/atomizer/types.rs
@@ -48,6 +48,119 @@ pub struct AtomizeProgress {
     pub stage_name: String,
     pub message: String,
     pub project_id: String,
+    #[serde(default)]
+    pub elapsed_ms: u64,
+}
+
+#[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(rename_all = "camelCase")]
+pub enum AtomizeActivityKind {
+    PlanLoaded,
+    TemplateRender,
+    AgentStart,
+    AgentComplete,
+    ChunkDetected,
+    SectionProcess,
+    StoryExtracted,
+    Retry,
+    Validation,
+    ArtifactSaved,
+}
+
+#[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(rename_all = "camelCase")]
+pub struct AtomizeActivity {
+    pub project_id: String,
+    pub kind: AtomizeActivityKind,
+    pub content: String,
+    pub timestamp: String,
+}
+
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
+#[serde(rename_all = "camelCase")]
+pub enum StageStatus {
+    Pending,
+    Running,
+    Done,
+    Error,
+}
+
+#[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(rename_all = "camelCase")]
+pub struct StageSnapshot {
+    pub number: u8,
+    pub label: String,
+    pub status: StageStatus,
+}
+
+#[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(rename_all = "camelCase")]
+pub struct PipelineSnapshot {
+    pub stages: Vec<StageSnapshot>,
+    pub error: Option<String>,
+    pub started_at: Option<String>,
+    pub elapsed_ms: u64,
+    pub done: bool,
+    #[serde(default)]
+    pub percent: u8,
+    #[serde(default)]
+    pub is_done: bool,
+    #[serde(default)]
+    pub is_running: bool,
+}
+
+impl PipelineSnapshot {
+    pub fn initial() -> Self {
+        let labels = ["Summarize", "Chunk", "Atomize", "Merge"];
+        Self {
+            stages: labels
+                .iter()
+                .enumerate()
+                .map(|(idx, label)| StageSnapshot {
+                    number: (idx + 1) as u8,
+                    label: (*label).to_string(),
+                    status: StageStatus::Pending,
+                })
+                .collect(),
+            error: None,
+            started_at: None,
+            elapsed_ms: 0,
+            done: false,
+            percent: 0,
+            is_done: false,
+            is_running: false,
+        }
+    }
+
+    pub fn recompute_derived(&mut self) {
+        let total = self.stages.len() as f64;
+        if total == 0.0 {
+            self.percent = 0;
+            self.is_done = false;
+            self.is_running = false;
+            return;
+        }
+        let progress: f64 = self
+            .stages
+            .iter()
+            .map(|stage| match stage.status {
+                StageStatus::Done => 1.0,
+                StageStatus::Running => 0.5,
+                _ => 0.0,
+            })
+            .sum();
+        self.percent = ((progress / total) * 100.0).round() as u8;
+        self.is_done = self
+            .stages
+            .iter()
+            .all(|stage| stage.status == StageStatus::Done);
+        self.is_running = !self.is_done
+            && self.error.is_none()
+            && self
+                .stages
+                .iter()
+                .any(|stage| stage.status == StageStatus::Running);
+    }
 }

 #[derive(Debug, Deserialize)]
diff --git a/src-tauri/src/commands/ask.rs b/src-tauri/src/commands/ask.rs
index 783039f..068559f 100644
--- a/src-tauri/src/commands/ask.rs
+++ b/src-tauri/src/commands/ask.rs
@@ -1,5 +1,5 @@
 use crate::ask_engine::session::AskSessionsState;
-use crate::ask_engine::types::{AskMessage, StartAskArgs};
+use crate::ask_engine::types::{AskMessage, AskQuestionResult, StartAskArgs};
 use crate::ask_engine::AskEngineError;
 use crate::commands::validation::{optional_trimmed, required_trimmed};
 use crate::db::DbState;
@@ -13,8 +13,9 @@ pub async fn ask_question(
     sessions: State<'_, AskSessionsState>,
     db: State<'_, DbState>,
     args: StartAskArgs,
-) -> Result<String, AskEngineError> {
-    let project_id = required_trimmed(args.project_id, "project_id").map_err(AskEngineError::Path)?;
+) -> Result<AskQuestionResult, AskEngineError> {
+    let project_id =
+        required_trimmed(args.project_id, "project_id").map_err(AskEngineError::Path)?;
     let question = required_trimmed(args.question, "question").map_err(AskEngineError::Path)?;
     let agent = required_trimmed(args.agent, "agent").map_err(AskEngineError::Path)?;
     let model = optional_trimmed(args.model);
@@ -37,14 +38,21 @@ pub async fn ask_question(

     let message_id = Uuid::new_v4().to_string();

-    {
+    let user_message = {
         let conn = db.0.lock().map_err(|_| AskEngineError::LockPoisoned)?;
         let conversation =
             crate::ask_engine::storage::get_or_create_conversation(&conn, &project_id)
                 .map_err(|err| AskEngineError::Db(err.to_string()))?;
-        crate::ask_engine::storage::insert_message(&conn, &conversation.id, "user", &question, None, None)
-            .map_err(|err| AskEngineError::Db(err.to_string()))?;
-    }
+        crate::ask_engine::storage::insert_message(
+            &conn,
+            &conversation.id,
+            "user",
+            &question,
+            None,
+            None,
+        )
+        .map_err(|err| AskEngineError::Db(err.to_string()))?
+    };

     let normalized = StartAskArgs {
         project_id,
@@ -62,7 +70,10 @@ pub async fn ask_question(
     )
     .await?;

-    Ok(message_id)
+    Ok(AskQuestionResult {
+        message_id,
+        user_message,
+    })
 }

 #[tauri::command]
@@ -84,7 +95,7 @@ pub async fn stop_ask(
     let project_id = required_trimmed(project_id, "project_id").map_err(AskEngineError::Path)?;
     sessions
         .remove_and_kill(&project_id)
-        .map_err(|err| AskEngineError::Shell(err))?;
+        .map_err(AskEngineError::Shell)?;
     Ok(())
 }

@@ -164,8 +175,15 @@ pub async fn retry_ask(
         let conversation =
             crate::ask_engine::storage::get_or_create_conversation(&conn, &project_id)
                 .map_err(|err| AskEngineError::Db(err.to_string()))?;
-        crate::ask_engine::storage::insert_message(&conn, &conversation.id, "user", &question, None, None)
-            .map_err(|err| AskEngineError::Db(err.to_string()))?;
+        crate::ask_engine::storage::insert_message(
+            &conn,
+            &conversation.id,
+            "user",
+            &question,
+            None,
+            None,
+        )
+        .map_err(|err| AskEngineError::Db(err.to_string()))?;
     }

     let args = StartAskArgs {
diff --git a/src-tauri/src/commands/atomization.rs b/src-tauri/src/commands/atomization.rs
index 194515f..9984d14 100644
--- a/src-tauri/src/commands/atomization.rs
+++ b/src-tauri/src/commands/atomization.rs
@@ -1,11 +1,15 @@
+use crate::atomizer::{
+    get_pipeline_snapshot, ActivityLogState, AtomizeActivity, PipelineRegistryState,
+    PipelineSnapshot,
+};
+use tauri::{AppHandle, State};
+
 #[cfg(not(test))]
 use crate::atomizer::{AtomizeArgs, AtomizerError};
 #[cfg(not(test))]
 use crate::commands::validation::{optional_trimmed, required_trimmed};
 #[cfg(not(test))]
 use ralph_core::prd::Prd;
-#[cfg(not(test))]
-use tauri::AppHandle;

 #[cfg(not(test))]
 #[tauri::command]
@@ -21,3 +25,24 @@ pub async fn run_atomizer(app: AppHandle, args: AtomizeArgs) -> Result<Prd, AtomizerError> {
+    project_id: String,
+) -> Vec<AtomizeActivity> {
+    state
+        .0
+        .lock()
+        .map(|log| log.get(&project_id))
+        .unwrap_or_default()
+}
+
+#[tauri::command]
+pub fn get_atomizer_pipeline_state(
+    app: AppHandle,
+    _registry: State<'_, PipelineRegistryState>,
+    project_id: String,
+) -> Option<PipelineSnapshot> {
+    get_pipeline_snapshot(&app, &project_id)
+}
diff --git a/src-tauri/src/commands/display_vocabulary.rs b/src-tauri/src/commands/display_vocabulary.rs
new file mode 100644
index 0000000..2d15646
--- /dev/null
+++ b/src-tauri/src/commands/display_vocabulary.rs
@@ -0,0 +1,343 @@
+use serde::Serialize;
+use std::collections::HashMap;
+
+#[derive(Debug, Clone, Serialize)]
+#[serde(rename_all = "camelCase")]
+pub struct ProjectStatusMeta {
+    pub badge_status: String,
+    pub card_label: String,
+    pub sidebar_label: String,
+}
+
+#[derive(Debug, Clone, Serialize)]
+#[serde(rename_all = "camelCase")]
+pub struct PlanKindMeta {
+    pub label: String,
+    pub variant: String,
+}
+
+#[derive(Debug, Clone, Serialize)]
+#[serde(rename_all = "camelCase")]
+pub struct DisplayVocabulary {
+    pub project_status_meta: HashMap<String, ProjectStatusMeta>,
+    pub status_labels: HashMap<String, String>,
+    pub status_variants: HashMap<String, String>,
+    pub notification_type_labels: HashMap<String, String>,
+    pub notification_ring_variants: HashMap<String, String>,
+    pub story_status_variants: HashMap<String, String>,
+    pub activity_result_variants: HashMap<String, String>,
+    pub monitor_status_badges: HashMap<String, String>,
+    pub plan_kind_meta: HashMap<String, PlanKindMeta>,
+    pub atomize_activity_kind_meta: HashMap<String, PlanKindMeta>,
+    pub story_priority_variants: HashMap<String, String>,
+    pub wizard_step_labels: HashMap<String, String>,
+    pub stage_status_badges: HashMap<String, String>,
+    pub stage_status_labels: HashMap<String, String>,
+    pub step_indicator_variants: HashMap<String, String>,
+    pub step_indicator_emphasis: HashMap<String, String>,
+    pub agent_names: Vec<String>,
+    pub inactive_statuses: Vec<String>,
+    pub stall_threshold_secs: u32,
+    pub max_visible_activity_events: usize,
+    pub max_output_lines: usize,
+}
+
+fn string_map(entries: &[(&str, &str)]) -> HashMap<String, String> {
+    entries
+        .iter()
+        .map(|(key, value)| ((*key).to_string(), (*value).to_string()))
+        .collect()
+}
+
+#[tauri::command]
+pub async fn get_display_vocabulary() -> DisplayVocabulary {
+    let project_status_meta = HashMap::from([
+        (
+            "active".to_string(),
+            ProjectStatusMeta {
+                badge_status: "running".to_string(),
+                card_label: "Running".to_string(),
+                sidebar_label: "Running".to_string(),
+            },
+        ),
+        (
+            "paused".to_string(),
+            ProjectStatusMeta {
+                badge_status: "paused".to_string(),
+                card_label: "Paused".to_string(),
+                sidebar_label: "Paused".to_string(),
+            },
+        ),
+        (
+            "blocked".to_string(),
+            ProjectStatusMeta {
+                badge_status: "blocked".to_string(),
+                card_label: "Blocked".to_string(),
+                sidebar_label: "Blocked".to_string(),
+            },
+        ),
+        (
+            "completed".to_string(),
+            ProjectStatusMeta {
+                badge_status: "completed".to_string(),
+                card_label: "Completed".to_string(),
+                sidebar_label: "Completed".to_string(),
+            },
+        ),
+        (
+            "failed".to_string(),
+            ProjectStatusMeta {
+                badge_status: "failed".to_string(),
+                card_label: "Failed".to_string(),
+                sidebar_label: "Failed".to_string(),
+            },
+        ),
+        (
+            "draft".to_string(),
+            ProjectStatusMeta {
+                badge_status: "draft".to_string(),
+                card_label: "Draft".to_string(),
+                sidebar_label: "Draft".to_string(),
+            },
+        ),
+        (
+            "archived".to_string(),
+            ProjectStatusMeta {
+                badge_status: "archived".to_string(),
+                card_label: "Archived".to_string(),
+                sidebar_label: "Archived".to_string(),
+            },
+        ),
+    ]);
+    let plan_kind_meta = HashMap::from([
+        (
+            "search".to_string(),
+            PlanKindMeta {
+                label: "SRCH".to_string(),
+                variant: "info".to_string(),
+            },
+        ),
+        (
+            "docsLookup".to_string(),
+            PlanKindMeta {
+                label: "DOCS".to_string(),
+                variant: "warning".to_string(),
+            },
+        ),
+        (
+            "mcpCall".to_string(),
+            PlanKindMeta {
+                label: "TOOL".to_string(),
+                variant: "neutral".to_string(),
+            },
+        ),
+        (
+            "thinking".to_string(),
+            PlanKindMeta {
+                label: "WAIT".to_string(),
+                variant: "neutral".to_string(),
+            },
+        ),
+        (
+            "error".to_string(),
+            PlanKindMeta {
+                label: "ERR".to_string(),
+                variant: "danger".to_string(),
+            },
+        ),
+        (
+            "planContent".to_string(),
+            PlanKindMeta {
+                label: "PLAN".to_string(),
+                variant: "success".to_string(),
+            },
+        ),
+    ]);
+    let atomize_activity_kind_meta = HashMap::from([
+        (
+            "planLoaded".to_string(),
+            PlanKindMeta {
+                label: "LOAD".to_string(),
+                variant: "info".to_string(),
+            },
+        ),
+        (
+            "templateRender".to_string(),
+            PlanKindMeta {
+                label: "TMPL".to_string(),
+                variant: "neutral".to_string(),
+            },
+        ),
+        (
+            "agentStart".to_string(),
+            PlanKindMeta {
+                label: "CALL".to_string(),
+                variant: "warning".to_string(),
+            },
+        ),
+        (
+            "agentComplete".to_string(),
+            PlanKindMeta {
+                label: "RECV".to_string(),
+                variant: "success".to_string(),
+            },
+        ),
+        (
+            "chunkDetected".to_string(),
+            PlanKindMeta {
+                label: "SECT".to_string(),
+                variant: "info".to_string(),
+            },
+        ),
+        (
+            "sectionProcess".to_string(),
+            PlanKindMeta {
+                label: "PROC".to_string(),
+                variant: "warning".to_string(),
+            },
+        ),
+        (
+            "storyExtracted".to_string(),
+            PlanKindMeta {
+                label: "ATOM".to_string(),
+                variant: "success".to_string(),
+            },
+        ),
+        (
+            "retry".to_string(),
+            PlanKindMeta {
+                label: "RTRY".to_string(),
+                variant: "danger".to_string(),
+            },
+        ),
+        (
+            "validation".to_string(),
+            PlanKindMeta {
+                label: "VALD".to_string(),
+                variant: "info".to_string(),
+            },
+        ),
+        (
+            "artifactSaved".to_string(),
+            PlanKindMeta {
+                label: "SAVE".to_string(),
+                variant: "success".to_string(),
+            },
+        ),
+    ]);
+    DisplayVocabulary {
+        project_status_meta,
+        status_labels: string_map(&[
+            ("draft", "Draft"),
+            ("pending", "Pending"),
+            ("current", "Current"),
+            ("running", "Running"),
+            ("paused", "Paused"),
+            ("blocked", "Blocked"),
+            ("completed", "Completed"),
+            ("success", "Success"),
+            ("error", "Error"),
+            ("failed", "Failed"),
+            ("archived", "Archived"),
+        ]),
+        status_variants: string_map(&[
+            ("draft", "neutral"),
+            ("pending", "neutral"),
+            ("current", "info"),
+            ("running", "info"),
+            ("paused", "warning"),
+            ("blocked", "danger"),
+            ("completed", "success"),
+            ("success", "success"),
+            ("error", "danger"),
+            ("failed", "danger"),
+            ("archived", "neutral"),
+        ]),
+        notification_type_labels: string_map(&[
+            ("story_blocked", "Blocked"),
+            ("loop_completed", "Completed"),
+            ("rate_limited", "Rate limit"),
+            ("review_comment", "Review"),
+            ("loop_error", "Error"),
+            ("story_completed", "Story done"),
+        ]),
+        notification_ring_variants: string_map(&[
+            ("red", "danger"),
+            ("amber", "warning"),
+            ("cyan", "info"),
+            ("green", "success"),
+        ]),
+        story_status_variants: string_map(&[
+            ("completed", "success"),
+            ("current", "info"),
+            ("blocked", "danger"),
+            ("pending", "neutral"),
+        ]),
+        activity_result_variants: string_map(&[
+            ("success", "success"),
+            ("pending", "info"),
+            ("failed", "danger"),
+            ("blocked", "danger"),
+            ("skipped", "warning"),
+        ]),
+        monitor_status_badges: string_map(&[
+            ("ready", "pending"),
+            ("running", "running"),
+            ("paused", "paused"),
+            ("blocked", "blocked"),
+            ("failed", "failed"),
+            ("completed", "completed"),
+            ("archived", "archived"),
+            ("draft", "draft"),
+        ]),
+        plan_kind_meta,
+        atomize_activity_kind_meta,
+        story_priority_variants: string_map(&[
+            ("critical", "danger"),
+            ("high", "warning"),
+            ("medium", "info"),
+            ("low", "neutral"),
+        ]),
+        wizard_step_labels: string_map(&[
+            ("describe", "Describe"),
+            ("plan", "Planning"),
+            ("atomize", "Atomize"),
+            ("configure", "Configure"),
+            ("launch", "Launch"),
+        ]),
+        stage_status_badges: string_map(&[
+            ("pending", "neutral"),
+            ("running", "info"),
+            ("done", "success"),
+            ("error", "danger"),
+        ]),
+        stage_status_labels: string_map(&[
+            ("pending", "Pending"),
+            ("running", "Running"),
+            ("done", "Done"),
+            ("error", "Error"),
+        ]),
+        step_indicator_variants: string_map(&[
+            ("upcoming", "neutral"),
+            ("current", "info"),
+            ("complete", "success"),
+            ("stale", "warning"),
+            ("error", "danger"),
+        ]),
+        step_indicator_emphasis: string_map(&[
+            ("upcoming", "subtle"),
+            ("current", "solid"),
+            ("complete", "subtle"),
+            ("stale", "subtle"),
+            ("error", "subtle"),
+        ]),
+        agent_names: vec!["cursor", "codex", "claude", "gemini", "opencode"]
+            .into_iter()
+            .map(str::to_string)
+            .collect(),
+        inactive_statuses: vec!["archived".to_string(), "failed".to_string()],
+        stall_threshold_secs: 120,
+        max_visible_activity_events: 200,
+        max_output_lines: 5000,
+    }
+}
diff --git a/src-tauri/src/commands/execution.rs b/src-tauri/src/commands/execution.rs
index 3047979..083c038 100644
--- a/src-tauri/src/commands/execution.rs
+++ b/src-tauri/src/commands/execution.rs
@@ -1,13 +1,32 @@
 use crate::commands::validation::required_trimmed;
 #[cfg(not(test))]
 use crate::commands::validation::{non_empty_trimmed_list, optional_trimmed};
-use crate::loop_manager::{LoopError, LoopManagerState, SessionStats};
 #[cfg(not(test))]
 use crate::loop_manager::StartLoopArgs;
+use crate::loop_manager::{LoopError, LoopManagerState, SessionStats};
 use crate::storage::db::DbState;
 use serde::Serialize;
 use tauri::{AppHandle, State};

+fn format_duration_label(duration_secs: i64) -> String {
+    if duration_secs <= 0 {
+        return "0s".to_string();
+    }
+    if duration_secs < 60 {
+        return format!("{duration_secs}s");
+    }
+    let minutes = duration_secs / 60;
+    let seconds = duration_secs % 60;
+    format!("{minutes}m {seconds}s")
+}
+
+fn format_time_label(started_at: &str) -> String {
+    chrono::DateTime::parse_from_rfc3339(started_at).map_or_else(
+        |_| started_at.to_string(),
+        |value| value.format("%H:%M:%S").to_string(),
+    )
+}
+
 #[cfg(not(test))]
 #[tauri::command]
 pub async fn start_loop(app: AppHandle, args: StartLoopArgs) -> Result<String, LoopError> {
@@ -36,6 +55,9 @@ pub async fn start_loop(app: AppHandle, args: StartLoopArgs) -> Result<String, LoopError> {
+    project_id: String,
+) -> Result<Vec<SessionStats>, LoopError> {
+    get_iteration_history(db, project_id).await
+}
diff --git a/src-tauri/src/commands/mod.rs b/src-tauri/src/commands/mod.rs
index b65363a..3a1d222 100644
--- a/src-tauri/src/commands/mod.rs
+++ b/src-tauri/src/commands/mod.rs
@@ -1,6 +1,8 @@
 pub mod ask;
 pub mod atomization;
+pub mod display_vocabulary;
 pub mod execution;
+pub mod notifications;
 pub mod planning;
 pub mod projects;
 pub mod projects_artifacts;
@@ -8,3 +10,4 @@ pub mod projects_lifecycle;
 pub mod projects_listing;
 pub mod projects_wizard;
 mod validation;
+pub mod wizard_logic;
diff --git a/src-tauri/src/commands/notifications.rs b/src-tauri/src/commands/notifications.rs
new file mode 100644
index 0000000..5f41684
--- /dev/null
+++ b/src-tauri/src/commands/notifications.rs
@@ -0,0 +1,145 @@
+use crate::projects::notification_filter::{
+    build_project_summaries, ring_color_for_project, AppNotification, ProjectNotificationSummary,
+};
+use crate::projects::notifications::{create_notification_and_emit, NotificationCreateInput};
+use crate::projects::ProjectError;
+use crate::storage::db::DbState;
+use serde::{Deserialize, Serialize};
+use tauri::{AppHandle, State};
+
+#[derive(Debug, Serialize)]
+#[serde(rename_all = "camelCase")]
+pub struct NotificationListResponse {
+    pub notifications: Vec<AppNotification>,
+    pub unread_count: usize,
+    pub ring_color: Option<String>,
+    pub project_summaries: Vec<ProjectNotificationSummary>,
+}
+
+#[derive(Debug, Deserialize)]
+#[serde(rename_all = "camelCase")]
+pub struct AddNotificationArgs {
+    pub project_id: String,
+    pub notification_type: String,
+    pub title: String,
+    pub message: String,
+}
+
+#[tauri::command]
+pub async fn add_notification(
+    app: AppHandle,
+    args: AddNotificationArgs,
+) -> Result<AppNotification, ProjectError> {
+    create_notification_and_emit(
+        &app,
+        NotificationCreateInput {
+            project_id: args.project_id,
+            notification_type: args.notification_type,
+            title: args.title,
+            message: args.message,
+        },
+    )
+}
+
+#[tauri::command]
+pub async fn get_notifications(
+    db: State<'_, DbState>,
+    project_id: Option<String>,
+) -> Result<NotificationListResponse, ProjectError> {
+    let conn =
+        db.0.lock()
+            .map_err(|_| ProjectError::Db("Lock poisoned".into()))?;
+    let notifications = query_notifications(&conn, project_id.as_deref())?;
+    let unread_count = notifications.iter().filter(|notif| !notif.read).count();
+    let ring = ring_color_for_project(&notifications).map(String::from);
+    let project_summaries = build_project_summaries(&notifications);
+    Ok(NotificationListResponse {
+        notifications,
+        unread_count,
+        ring_color: ring,
+        project_summaries,
+    })
+}
+
+#[tauri::command]
+pub async fn mark_notification_read(
+    db: State<'_, DbState>,
+    notification_id: String,
+) -> Result<(), ProjectError> {
+    let conn =
+        db.0.lock()
+            .map_err(|_| ProjectError::Db("Lock poisoned".into()))?;
+    conn.execute(
+        "UPDATE notifications SET read = 1 WHERE id = ?1",
+        rusqlite::params![notification_id],
+    )?;
+    Ok(())
+}
+
+#[tauri::command]
+pub async fn mark_all_notifications_read(
+    db: State<'_, DbState>,
+    project_id: Option<String>,
+) -> Result<(), ProjectError> {
+    let conn =
+        db.0.lock()
+            .map_err(|_| ProjectError::Db("Lock poisoned".into()))?;
+    if let Some(ref pid) = project_id {
+        conn.execute(
+            "UPDATE notifications SET read = 1 WHERE project_id = ?1",
+            rusqlite::params![pid],
+        )?;
+    } else {
+        conn.execute("UPDATE notifications SET read = 1", [])?;
+    }
+    Ok(())
+}
+
+#[tauri::command]
+pub async fn clear_notifications(
+    db: State<'_, DbState>,
+    project_id: Option<String>,
+) -> Result<(), ProjectError> {
+    let conn =
+        db.0.lock()
+            .map_err(|_| ProjectError::Db("Lock poisoned".into()))?;
+    if let Some(ref pid) = project_id {
+        conn.execute(
+            "DELETE FROM notifications WHERE project_id = ?1",
+            rusqlite::params![pid],
+        )?;
+    } else {
+        conn.execute("DELETE FROM notifications", [])?;
+    }
+    Ok(())
+}
+
+fn query_notifications(
+    conn: &rusqlite::Connection,
+    project_id: Option<&str>,
+) -> Result<Vec<AppNotification>, ProjectError> {
+    let sql = if project_id.is_some() {
+        "SELECT id, project_id, notification_type, title, message, ring_color, read, timestamp FROM notifications WHERE project_id = ?1 ORDER BY timestamp DESC LIMIT 200"
+    } else {
+        "SELECT id, project_id, notification_type, title, message, ring_color, read, timestamp FROM notifications ORDER BY timestamp DESC LIMIT 200"
+    };
+    let mut stmt = conn.prepare(sql)?;
+    let rows = match project_id {
+        Some(pid) => stmt.query_map(rusqlite::params![pid], row_to_notification)?,
+        None => stmt.query_map([], row_to_notification)?,
+    };
+    Ok(rows.filter_map(Result::ok).collect())
+}
+
+fn row_to_notification(row: &rusqlite::Row) -> rusqlite::Result<AppNotification> {
+    Ok(AppNotification {
+        id: row.get(0)?,
+        project_id: row.get(1)?,
+        notification_type: row.get(2)?,
+        title: row.get(3)?,
+        message: row.get(4)?,
+        ring_color: row.get(5)?,
+        read: row.get::<_, i32>(6)? != 0,
+        timestamp: row.get(7)?,
+    })
+}
diff --git a/src-tauri/src/commands/planning.rs b/src-tauri/src/commands/planning.rs
index ffd5f88..5625b3e 100644
--- a/src-tauri/src/commands/planning.rs
+++ b/src-tauri/src/commands/planning.rs
@@ -1,10 +1,11 @@
-use crate::commands::validation::required_trimmed;
 #[cfg(not(test))]
 use crate::commands::validation::optional_trimmed;
+use crate::commands::validation::required_trimmed;
 use crate::plan_engine::{PlanEngineError, PlanSessionsState};
 #[cfg(not(test))]
 use crate::plan_engine::{PlanSessionInfo, StartPlanArgs};
-#[cfg(not(test))]
+use serde::Deserialize;
+use serde::Serialize;
 use tauri::AppHandle;
 use tauri::State;
@@ -59,3 +60,208 @@ pub async fn query_plan_status(
     required_trimmed(project_id, "project_id").map_err(PlanEngineError::Path)?;
     crate::plan_engine::query_plan_status(state, normalized_project_id).await
 }
+
+#[derive(Debug, Serialize)]
+#[serde(rename_all = "camelCase", tag = "state")]
+pub enum PlanStepState {
+    Running,
+    HasExistingPlan { content: String },
+    NeedsFreshPlan,
+}
+
+#[derive(Debug, Serialize)]
+#[serde(rename_all = "camelCase")]
+pub struct ResolvePlanActionResult {
+    pub action: String,
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub plan_content: Option<String>,
+}
+
+#[derive(Debug, Deserialize)]
+#[serde(rename_all = "camelCase")]
+pub enum PlanUserActionKind {
+    Feedback,
+    Replan,
+}
+
+#[derive(Debug, Serialize)]
+#[serde(rename_all = "camelCase")]
+pub struct PlanUserActionResult {
+    pub mode: String,
+}
+
+#[cfg(not(test))]
+#[tauri::command]
+pub async fn resolve_plan_state(
+    app: AppHandle,
+    state: State<'_, PlanSessionsState>,
+    project_id: String,
+) -> Result<PlanStepState, PlanEngineError> {
+    let project_id =
+        required_trimmed(project_id, "project_id").map_err(PlanEngineError::Path)?;
+
+    let status = crate::plan_engine::query_plan_status(state, project_id.clone()).await?;
+    if let Some(info) = status {
+        if info.status == crate::plan_engine::PlanSessionStatus::Running {
+            return Ok(PlanStepState::Running);
+        }
+    }
+
+    let dir = crate::projects::artifacts::artifact_dir(&app, &project_id)
+        .map_err(|err| PlanEngineError::Path(err.to_string()))?;
+    let plan_path = dir.join("plan.md");
+    if plan_path.exists() {
+        if let Ok(content) = std::fs::read_to_string(&plan_path) {
+            if !content.trim().is_empty() {
+                return Ok(PlanStepState::HasExistingPlan { content });
+            }
+        }
+    }
+
+    Ok(PlanStepState::NeedsFreshPlan)
+}
+
+#[cfg(not(test))]
+#[tauri::command]
+pub async fn resolve_plan_action(
+    app: AppHandle,
+    state: State<'_, PlanSessionsState>,
+    project_id: String,
+) -> Result<ResolvePlanActionResult, PlanEngineError> {
+    let resolved = resolve_plan_state(app, state, project_id).await?;
+    match resolved {
+        PlanStepState::Running => Ok(ResolvePlanActionResult {
+            action: "resume".to_string(),
+            plan_content: None,
+        }),
+        PlanStepState::HasExistingPlan { content } => Ok(ResolvePlanActionResult {
+            action: "prompt_existing".to_string(),
+            plan_content: Some(content),
+        }),
+        PlanStepState::NeedsFreshPlan => Ok(ResolvePlanActionResult {
+            action: "start".to_string(),
+            plan_content: None,
+        }),
+    }
+}
+
+#[cfg(not(test))]
+#[tauri::command]
+pub async fn plan_user_action(
+    app: AppHandle,
+    state: State<'_, PlanSessionsState>,
+    db: State<'_, crate::storage::db::DbState>,
+    project_id: String,
+    input: String,
+    action: PlanUserActionKind,
+) -> Result<PlanUserActionResult, PlanEngineError> {
+    let project_id = required_trimmed(project_id, "project_id").map_err(PlanEngineError::Path)?;
+    let normalized_input = required_trimmed(input, "input").map_err(PlanEngineError::Path)?;
+    let plan_status =
+        crate::plan_engine::query_plan_status(state.clone(), project_id.clone()).await?;
+    match action {
+        PlanUserActionKind::Feedback => {
+            let is_running = plan_status
+                .as_ref()
+                .is_some_and(|info| info.status == crate::plan_engine::PlanSessionStatus::Running);
+            if is_running {
+                crate::plan_engine::write_to_plan(state, project_id, normalized_input).await?;
+                return Ok(PlanUserActionResult {
+                    mode: "sent".to_string(),
+                });
+            }
+            replan(app, state, db, project_id, normalized_input).await?;
+            Ok(PlanUserActionResult {
+                mode: "replanned".to_string(),
+            })
+        }
+        PlanUserActionKind::Replan => {
+            let _ = crate::plan_engine::stop_plan(state.clone(), project_id.clone()).await;
+            replan(app, state, db, project_id, normalized_input).await?;
+            Ok(PlanUserActionResult {
+                mode: "replanned".to_string(),
+            })
+        }
+    }
+}
+
+#[cfg(not(test))]
+#[tauri::command]
+pub async fn replan(
+    app: AppHandle,
+    state: State<'_, PlanSessionsState>,
+    db: State<'_, crate::storage::db::DbState>,
+    project_id: String,
+    feedback: String,
+) -> Result<(), PlanEngineError> {
+    let project_id = required_trimmed(project_id, "project_id").map_err(PlanEngineError::Path)?;
+    let feedback = required_trimmed(feedback, "feedback").map_err(PlanEngineError::Path)?;
+
+    let (description, working_directory) = {
+        let conn = db.0.lock().map_err(|_| PlanEngineError::LockPoisoned)?;
+        conn.query_row(
+            "SELECT description, working_directory FROM projects WHERE id = ?1",
+            rusqlite::params![project_id],
+            |row| Ok((row.get::<_, String>(0)?, row.get::<_, String>(1)?)),
+        )
+        .map_err(|_| PlanEngineError::Path(format!("Project not found: {project_id}")))?
+    };
+
+    let artifacts = crate::projects::artifacts::artifact_dir(&app, &project_id)
+        .map_err(|err| PlanEngineError::Path(err.to_string()))?;
+    let plan_path = artifacts.join("plan.md");
+    let existing_plan = if plan_path.exists() {
+        std::fs::read_to_string(&plan_path).unwrap_or_default()
+    } else {
+        String::new()
+    };
+
+    let draft_path = artifacts.join("draft.json");
+    let (agent, model, effort) = if draft_path.exists() {
+        let draft_content = std::fs::read_to_string(&draft_path)
+            .map_err(|err| PlanEngineError::Path(err.to_string()))?;
+        let draft: serde_json::Value = serde_json::from_str(&draft_content)
+            .map_err(|err| PlanEngineError::Path(err.to_string()))?;
+        let describe = &draft["describe"];
+        (
+            describe["planAgent"]
+                .as_str()
+                .unwrap_or("claude")
+                .to_string(),
+            describe["planModel"].as_str().map(String::from),
+            describe["planEffort"].as_str().map(String::from),
+        )
+    } else {
+        ("claude".to_string(), None, None)
+    };
+
+    let has_plan_structure = existing_plan
+        .lines()
+        .any(|line| line.trim_start().starts_with('#'));
+
+    let composed_prompt = if existing_plan.is_empty() || !has_plan_structure {
+        format!("{description}\n\nUser clarification: {feedback}")
+    } else {
+        format!(
+            "{description}\n\n\
+             Previous plan:\n{existing_plan}\n\n\
+             Feedback: {feedback}"
+        )
+    };
+
+    {
+        let mut sessions = state.0.lock().map_err(|_| PlanEngineError::LockPoisoned)?;
+        if let Some(entry) = sessions.sessions.remove(&project_id) {
+            let _ = entry.handle.kill();
+        }
+    }
+
+    let replan_args = StartPlanArgs {
+        project_id: project_id.clone(),
+        project_dir: std::path::PathBuf::from(&working_directory),
+        agent,
+        model: optional_trimmed(model),
+        effort: optional_trimmed(effort),
+        initial_prompt: composed_prompt,
+    };
+    crate::plan_engine::start_plan(app, state, replan_args).await
+}
diff --git a/src-tauri/src/commands/projects.rs b/src-tauri/src/commands/projects.rs
index e750d25..3673d1b 100644
--- a/src-tauri/src/commands/projects.rs
+++
b/src-tauri/src/commands/projects.rs @@ -8,7 +8,7 @@ use crate::projects::{ProjectConfig, ProjectError}; use crate::storage; use crate::storage::db::DbState; use rusqlite::OptionalExtension; -use tauri::{AppHandle, Manager, State}; +use tauri::{AppHandle, Manager, Runtime, State}; fn to_snapshot_config(config: ProjectConfig) -> SnapshotConfig { SnapshotConfig { @@ -28,14 +28,38 @@ } } +fn build_duration_label(started_at: Option<&str>, ended_at: Option<&str>) -> String { + let Some(start_value) = started_at else { + return String::new(); + }; + let Ok(start_time) = chrono::DateTime::parse_from_rfc3339(start_value) else { + return String::new(); + }; + let end_time = ended_at + .and_then(|value| chrono::DateTime::parse_from_rfc3339(value).ok()) + .unwrap_or_else(|| chrono::Utc::now().fixed_offset()); + let elapsed_secs = (end_time - start_time).num_seconds().max(0); + if elapsed_secs < 60 { + return format!("{elapsed_secs}s"); + } + let elapsed_minutes = elapsed_secs / 60; + if elapsed_minutes < 60 { + return format!("{elapsed_minutes}m"); + } + let elapsed_hours = elapsed_minutes / 60; + let remaining_minutes = elapsed_minutes % 60; + format!("{elapsed_hours}h {remaining_minutes}m") +} + #[tauri::command] -pub async fn get_project_snapshot( - app: AppHandle, +pub async fn get_project_snapshot<R: Runtime>( + app: AppHandle<R>, db: State<'_, DbState>, loop_state: State<'_, LoopManagerState>, project_id: String, ) -> Result<ProjectSnapshot, ProjectError> { - let normalized_project_id = required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + let normalized_project_id = + required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; let has_loop_handle = loop_state .0 .lock() @@ -52,8 +76,8 @@ pub async fn get_project_snapshot( app.clone(), normalized_project_id.clone(), ) - .await - .unwrap_or(None); + .await + .unwrap_or(None); let db_state = app.state::<DbState>(); let conn = db_state @@ -105,10 +129,7 @@ pub async fn 
get_project_snapshot( } else if matches!(status, ProjectStatus::Running) { status = ProjectStatus::Paused; } - if matches!(status, ProjectStatus::Draft) - && detail.total_stories > 0 - && config.is_some() - { + if matches!(status, ProjectStatus::Draft) && detail.total_stories > 0 && config.is_some() { status = ProjectStatus::Ready; } @@ -124,6 +145,19 @@ pub async fn get_project_snapshot( prompt: paths[4].to_string_lossy().to_string(), guardrails: paths[5].to_string_lossy().to_string(), }; + let progress_percent = if detail.total_stories == 0 { + 0 + } else { + ((detail.passed_count as f64 / detail.total_stories as f64) * 100.0).round() as u32 + }; + let uptime_label = build_duration_label( + active_session + .as_ref() + .and_then(|session| session.started_at.as_deref()), + active_session + .as_ref() + .and_then(|session| session.ended_at.as_deref()), + ); Ok(ProjectSnapshot { project: detail.project, @@ -136,5 +170,7 @@ pub async fn get_project_snapshot( }, config: config.map(to_snapshot_config), artifact_paths, + progress_percent, + uptime_label, }) } diff --git a/src-tauri/src/commands/projects_lifecycle.rs b/src-tauri/src/commands/projects_lifecycle.rs index 054bc90..74c659e 100644 --- a/src-tauri/src/commands/projects_lifecycle.rs +++ b/src-tauri/src/commands/projects_lifecycle.rs @@ -1,9 +1,8 @@ -use crate::commands::validation::required_trimmed; #[cfg(not(test))] use crate::commands::validation::optional_trimmed; +use crate::commands::validation::required_trimmed; use crate::projects::{ - IterationStory, NotificationPrefs, ProjectConfig, ProjectError, - ProjectsByStatus, + IterationStory, NotificationPrefs, ProjectConfig, ProjectError, ProjectsByStatus, }; #[cfg(not(test))] use crate::projects::{Project, ProjectDetail}; diff --git a/src-tauri/src/commands/projects_listing.rs b/src-tauri/src/commands/projects_listing.rs index ee9bc08..f6da426 100644 --- a/src-tauri/src/commands/projects_listing.rs +++ b/src-tauri/src/commands/projects_listing.rs @@ -5,7 +5,7 
@@ use crate::projects::ProjectError; use crate::storage::db::DbState; use ralph_core::prd::Prd; use serde::{Deserialize, Serialize}; -use tauri::{AppHandle, Manager, State}; +use tauri::{AppHandle, State}; #[derive(Debug, Clone, Serialize, Deserialize)] #[serde(rename_all = "camelCase")] @@ -29,6 +29,8 @@ pub struct EnrichedProject { pub session_started_at: Option<String>, #[serde(default, skip_serializing_if = "Option::is_none")] pub session_ended_at: Option<String>, + #[serde(default, skip_serializing_if = "Option::is_none")] + pub duration_label: Option<String>, } fn frontend_status(status: &ProjectStatus) -> &'static str { @@ -43,6 +45,25 @@ } } +fn build_duration_label(started_at: Option<&str>, ended_at: Option<&str>) -> Option<String> { + let started = started_at?; + let start_time = chrono::DateTime::parse_from_rfc3339(started).ok()?; + let end_time = ended_at + .and_then(|value| chrono::DateTime::parse_from_rfc3339(value).ok()) + .unwrap_or_else(|| chrono::Utc::now().fixed_offset()); + let elapsed_secs = (end_time - start_time).num_seconds().max(0); + if elapsed_secs < 60 { + return Some(format!("{elapsed_secs}s")); + } + let elapsed_minutes = elapsed_secs / 60; + if elapsed_minutes < 60 { + return Some(format!("{elapsed_minutes}m")); + } + let elapsed_hours = elapsed_minutes / 60; + let remaining_minutes = elapsed_minutes % 60; + Some(format!("{elapsed_hours}h {remaining_minutes}m")) +} + #[tauri::command] pub async fn list_projects_enriched( app: AppHandle, @@ -56,10 +77,9 @@ pub async fn list_projects_enriched( .unwrap_or_default(); let mut projects = { - let conn = db - .0 - .lock() - .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; let query = "\ SELECT p.id, p.name, p.description, p.status, p.working_directory, \ @@ -82,12 +102,13 @@ wizard_step: row.get(7).unwrap_or(None), 
session_started_at: row.get(8).unwrap_or(None), session_ended_at: row.get(9).unwrap_or(None), + duration_label: None, stories_completed: None, total_stories: None, current_agent: None, }) })? - .filter_map(|result| result.ok()) + .filter_map(Result::ok) .collect(); rows }; @@ -136,7 +157,50 @@ } project.status = frontend_status(&status).to_string(); + project.duration_label = build_duration_label( + project.session_started_at.as_deref(), + project.session_ended_at.as_deref(), + ); } Ok(projects) } + +#[derive(Debug, Serialize)] +#[serde(rename_all = "camelCase")] +pub struct GroupedProjects { + pub active: Vec<EnrichedProject>, + pub drafts: Vec<EnrichedProject>, + pub finished: Vec<EnrichedProject>, + pub archived: Vec<EnrichedProject>, +} + +#[tauri::command] +pub async fn list_projects_grouped( + app: AppHandle, + db: State<'_, DbState>, + loop_state: State<'_, LoopManagerState>, +) -> Result<GroupedProjects, ProjectError> { + let all = list_projects_enriched(app, db, loop_state).await?; + let mut active = Vec::new(); + let mut drafts = Vec::new(); + let mut finished = Vec::new(); + let mut archived = Vec::new(); + + for project in all { + match project.status.as_str() { + "draft" => drafts.push(project), + "active" | "paused" | "blocked" => active.push(project), + "completed" | "failed" => finished.push(project), + "archived" => archived.push(project), + _ => active.push(project), + } + } + + Ok(GroupedProjects { + active, + drafts, + finished, + archived, + }) +} diff --git a/src-tauri/src/commands/projects_wizard.rs b/src-tauri/src/commands/projects_wizard.rs index 4db69c4..1744a2a 100644 --- a/src-tauri/src/commands/projects_wizard.rs +++ b/src-tauri/src/commands/projects_wizard.rs @@ -1,5 +1,5 @@ use crate::commands::validation::required_trimmed; -use crate::projects::{ProjectError, WizardResumeState}; +use crate::projects::{ProjectError, WizardHydrationResult, WizardResumeState}; use crate::storage::db::DbState; use tauri::{AppHandle, State}; @@ -79,3 +79,14 @@ pub async fn resume_wizard( required_trimmed(project_id, 
"project_id").map_err(ProjectError::Path)?; crate::projects::wizard::resume_wizard(app, db, normalized_project_id).await } + +#[tauri::command] +pub async fn hydrate_wizard( + app: AppHandle, + db: State<'_, DbState>, + project_id: String, +) -> Result<WizardHydrationResult, ProjectError> { + let normalized_project_id = + required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + crate::projects::wizard::hydrate_wizard(app, db, normalized_project_id).await +} diff --git a/src-tauri/src/commands/wizard_logic.rs b/src-tauri/src/commands/wizard_logic.rs new file mode 100644 index 0000000..4097991 --- /dev/null +++ b/src-tauri/src/commands/wizard_logic.rs @@ -0,0 +1,774 @@ +use crate::agents::AgentRegistryState; +use crate::commands::validation::{non_empty_trimmed_list, optional_trimmed, required_trimmed}; +use crate::loop_manager::StartLoopArgs; +use crate::plan_engine::PlanSessionsState; +use crate::projects::stories_crud::StoriesResponse; +use crate::projects::{ + AdvanceWizardResult, ConfigDefaultsResponse, DescribeInput, LaunchReadiness, ProjectConfig, + ProjectError, ValidationErrors, +}; +use crate::storage::db::DbState; +use ralph_core::prd::Prd; +use serde::{Deserialize, Serialize}; +use tauri::{AppHandle, State}; + +#[tauri::command] +pub async fn advance_wizard_step(target_step: u32) -> Result<AdvanceWizardResult, ProjectError> { + crate::projects::wizard_state::advance_wizard_step(None, target_step) + .map_err(ProjectError::Path) +} + +#[tauri::command] +pub async fn get_default_config() -> Result<ConfigDefaultsResponse, ProjectError> { + Ok(crate::projects::validation::get_config_defaults()) +} + +#[tauri::command] +pub async fn validate_project_config( + config_json: String, +) -> Result<ValidationErrors, ProjectError> { + let config: ProjectConfig = serde_json::from_str(&config_json)?; + Ok(crate::projects::validation::validate_config(&config)) +} + +#[tauri::command] +pub async fn validate_describe_input( + name: String, + description: String, + working_directory: String, + plan_agent: String, + state: State<'_, AgentRegistryState>, +) -> Result<ValidationErrors, ProjectError> { + let input = DescribeInput { 
+ name, + description, + working_directory, + plan_agent, + }; + let available_agents = { + let registry = state + .0 + .lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + registry + .agents + .iter() + .filter(|agent| agent.available) + .map(|agent| agent.name.clone()) + .collect::<Vec<String>>() + }; + Ok(crate::projects::validation::validate_describe( + &input, + &available_agents, + )) +} + +#[tauri::command] +pub async fn validate_launch_readiness( + app: AppHandle, + db: State<'_, DbState>, + project_id: String, +) -> Result<LaunchReadiness, ProjectError> { + let project_id = required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + + let (project_name, working_directory) = { + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + conn.query_row( + "SELECT name, working_directory FROM projects WHERE id = ?1", + rusqlite::params![project_id], + |row| Ok((row.get::<_, String>(0)?, row.get::<_, String>(1)?)), + ) + .map_err(|_| ProjectError::NotFound(project_id.clone()))? 
+ }; + + let dir = crate::projects::artifacts::artifact_dir(&app, &project_id)?; + let prd_path = dir.join("prd.json"); + let (stories_count, estimated_minutes) = if prd_path.exists() { + Prd::load(&prd_path) + .map(|prd| { + let minutes: Vec<u32> = prd + .stories + .iter() + .map(|story| story.estimated_minutes) + .collect(); + (prd.stories.len(), minutes) + }) + .unwrap_or_default() + } else { + (0, Vec::new()) + }; + + let config_path = dir.join("config.json"); + let execute_agent = if config_path.exists() { + let content = std::fs::read_to_string(&config_path)?; + serde_json::from_str::<ProjectConfig>(&content) + .map(|cfg| cfg.execute_agent) + .unwrap_or_default() + } else { + String::new() + }; + + Ok(crate::projects::validation::validate_launch_readiness( + &project_name, + &working_directory, + stories_count, + &execute_agent, + &estimated_minutes, + )) +} + +#[tauri::command] +pub async fn add_story( + app: AppHandle, + project_id: String, +) -> Result<StoriesResponse, ProjectError> { + let project_id = required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + crate::projects::stories_crud::add_story(&app, &project_id) +} + +#[tauri::command] +pub async fn update_story( + app: AppHandle, + project_id: String, + story_id: String, + patch_json: String, +) -> Result<StoriesResponse, ProjectError> { + let project_id = required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + let story_id = required_trimmed(story_id, "story_id").map_err(ProjectError::Path)?; + crate::projects::stories_crud::update_story(&app, &project_id, &story_id, &patch_json) +} + +#[tauri::command] +pub async fn remove_story( + app: AppHandle, + project_id: String, + story_id: String, +) -> Result<StoriesResponse, ProjectError> { + let project_id = required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + let story_id = required_trimmed(story_id, "story_id").map_err(ProjectError::Path)?; + crate::projects::stories_crud::remove_story(&app, &project_id, &story_id) +} + +#[tauri::command] +pub async fn reorder_stories( + app: AppHandle, + project_id: 
String, + from_index: usize, + to_index: usize, +) -> Result<StoriesResponse, ProjectError> { + let project_id = required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + crate::projects::stories_crud::reorder_stories(&app, &project_id, from_index, to_index) +} + +#[tauri::command] +pub async fn get_stories( + app: AppHandle, + project_id: String, +) -> Result<StoriesResponse, ProjectError> { + let project_id = required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + crate::projects::stories_crud::get_stories(&app, &project_id) +} + +#[tauri::command] +pub async fn save_wizard_draft( + app: AppHandle, + db: State<'_, DbState>, + project_id: String, + step_name: String, +) -> Result<(), ProjectError> { + let project_id = required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + let step_name = required_trimmed(step_name, "step_name").map_err(ProjectError::Path)?; + + let (name, description, working_directory) = { + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + conn.query_row( + "SELECT name, description, working_directory FROM projects WHERE id = ?1", + rusqlite::params![project_id], + |row| { + Ok(( + row.get::<_, String>(0)?, + row.get::<_, String>(1)?, + row.get::<_, String>(2)?, + )) + }, + ) + .map_err(|_| ProjectError::NotFound(project_id.clone()))? 
+ }; + + let dir = crate::projects::artifacts::artifact_dir(&app, &project_id)?; + let draft_path = dir.join("draft.json"); + let (plan_agent, plan_model, plan_effort, previous_highest_step) = if draft_path.exists() { + std::fs::read_to_string(&draft_path) + .ok() + .and_then(|raw| serde_json::from_str::<serde_json::Value>(&raw).ok()) + .map_or(("claude".to_string(), None, None, 1), |draft| { + let describe = draft + .get("describe") + .cloned() + .unwrap_or_else(|| serde_json::json!({})); + let step = draft + .get("highestStep") + .and_then(serde_json::Value::as_u64) + .map_or(1, |value| value as u32); + ( + describe + .get("planAgent") + .and_then(serde_json::Value::as_str) + .unwrap_or("claude") + .to_string(), + describe + .get("planModel") + .and_then(serde_json::Value::as_str) + .map(str::to_string), + describe + .get("planEffort") + .and_then(serde_json::Value::as_str) + .map(str::to_string), + step, + ) + }) + } else { + ("claude".to_string(), None, None, 1) + }; + let plan_path = dir.join("plan.md"); + let plan_completed = plan_path.exists() + && std::fs::read_to_string(&plan_path) + .map(|content| !content.trim().is_empty()) + .unwrap_or(false); + + let prd_path = dir.join("prd.json"); + let stories_count = if prd_path.exists() { + Prd::load(&prd_path) + .map(|prd| prd.stories.len()) + .unwrap_or(0) + } else { + 0 + }; + + let config_path = dir.join("config.json"); + let config: Option<ProjectConfig> = if config_path.exists() { + std::fs::read_to_string(&config_path) + .ok() + .and_then(|content| serde_json::from_str(&content).ok()) + } else { + None + }; + let current_step_number = + crate::projects::wizard_state::step_name_to_number(&step_name).unwrap_or(1); + let highest_step = previous_highest_step.max(current_step_number); + + let draft = serde_json::json!({ + "version": 1, + "projectId": project_id, + "currentStep": step_name, + "highestStep": highest_step, + "describe": { + "name": name, + "description": description, + "workingDirectory": working_directory, + "planAgent": 
plan_agent, + "planModel": plan_model, + "planEffort": plan_effort + }, + "plan": { "completed": plan_completed }, + "atomize": { "storiesCount": stories_count }, + "configure": config.unwrap_or_default() + }); + + let draft_json = serde_json::to_string_pretty(&draft)?; + std::fs::create_dir_all(&dir)?; + std::fs::write(dir.join("draft.json"), &draft_json)?; + + let now = chrono::Utc::now().to_rfc3339(); + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let _ = conn.execute( + "UPDATE projects SET wizard_step = ?1, wizard_state_json = NULL, updated_at = ?2 WHERE id = ?3", + rusqlite::params![step_name, now, project_id], + ); + + Ok(()) +} + +#[derive(Debug, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct CompleteDescribeInput { + #[serde(default)] + pub project_id: Option<String>, + pub name: String, + pub description: String, + pub working_directory: String, + #[serde(default)] + pub connection_id: Option<String>, + pub plan_agent: String, + #[serde(default)] + pub plan_model: Option<String>, + #[serde(default)] + pub plan_effort: Option<String>, +} + +#[derive(Debug, Serialize)] +#[serde(rename_all = "camelCase")] +pub struct WizardRouteResult { + pub project_id: String, + pub next_route: String, + #[serde(default, skip_serializing_if = "Option::is_none")] + pub working_directory: Option<String>, +} + +fn resolve_connection_workspace( + app: &AppHandle, + db: &DbState, + connection_id: &str, +) -> Result<String, ProjectError> { + let (repos, connection_name) = { + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let mut repo_stmt = conn + .prepare( + "SELECT repo_path, display_name FROM connection_repos WHERE connection_id = ?1", + ) + .map_err(|err| ProjectError::Db(err.to_string()))?; + let repos = repo_stmt + .query_map(rusqlite::params![connection_id], |row| { + Ok(crate::connections::ConnectionRepo { + repo_path: row.get::<_, String>(0)?, + display_name: row.get::<_, Option<String>>(1)?, + }) + }) + .map_err(|err| 
ProjectError::Db(err.to_string()))? + .filter_map(Result::ok) + .collect::<Vec<ConnectionRepo>>(); + let name = conn + .query_row( + "SELECT name FROM connections WHERE id = ?1", + rusqlite::params![connection_id], + |row| row.get::<_, String>(0), + ) + .map_err(|_| ProjectError::NotFound(connection_id.to_string()))?; + (repos, name) + }; + let workspace_dir = crate::connections::get_workspace_dir(app, connection_id); + crate::connections::build_workspace(&workspace_dir, &repos) + .map_err(|err| ProjectError::Db(err.to_string()))?; + crate::connections::generate_workspace_manifest(&workspace_dir, &connection_name, &repos) + .map_err(|err| ProjectError::Db(err.to_string()))?; + Ok(workspace_dir.to_string_lossy().to_string()) +} + +#[tauri::command] +pub async fn complete_describe_step( + app: AppHandle, + db: State<'_, DbState>, + input: CompleteDescribeInput, +) -> Result<WizardRouteResult, ProjectError> { + let name = required_trimmed(input.name, "name").map_err(ProjectError::Path)?; + let description = + required_trimmed(input.description, "description").map_err(ProjectError::Path)?; + let plan_agent = + required_trimmed(input.plan_agent, "plan_agent").map_err(ProjectError::Path)?; + let plan_model = optional_trimmed(input.plan_model); + let plan_effort = optional_trimmed(input.plan_effort); + let working_directory = if let Some(connection_id) = optional_trimmed(input.connection_id) { + resolve_connection_workspace(&app, db.inner(), &connection_id)? + } else { + required_trimmed(input.working_directory, "working_directory") + .map_err(ProjectError::Path)? 
+ }; + let now = chrono::Utc::now().to_rfc3339(); + let project_id = if let Some(existing_project_id) = optional_trimmed(input.project_id) { + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let updated = conn.execute( + "UPDATE projects SET name = ?1, description = ?2, working_directory = ?3, wizard_step = 'plan', updated_at = ?4 WHERE id = ?5", + rusqlite::params![name, description, working_directory, now, existing_project_id], + )?; + if updated == 0 { + return Err(ProjectError::NotFound(existing_project_id)); + } + existing_project_id + } else { + let created_project_id = uuid::Uuid::new_v4().to_string(); + let artifacts = crate::projects::artifacts::artifact_dir(&app, &created_project_id)?; + crate::projects::artifacts::init_artifacts(&artifacts, &name)?; + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + conn.execute( + "INSERT INTO projects (id, name, description, status, working_directory, created_at, updated_at, wizard_step) VALUES (?1, ?2, ?3, 'draft', ?4, ?5, ?6, 'plan')", + rusqlite::params![created_project_id, name, description, working_directory, now, now], + )?; + created_project_id + }; + let dir = crate::projects::artifacts::artifact_dir(&app, &project_id)?; + let prd_path = dir.join("prd.json"); + let stories_count = if prd_path.exists() { + Prd::load(&prd_path) + .map(|prd| prd.stories.len()) + .unwrap_or(0) + } else { + 0 + }; + let config_path = dir.join("config.json"); + let config: Option<ProjectConfig> = if config_path.exists() { + std::fs::read_to_string(&config_path) + .ok() + .and_then(|content| serde_json::from_str(&content).ok()) + } else { + None + }; + let previous_highest = std::fs::read_to_string(dir.join("draft.json")) + .ok() + .and_then(|raw| serde_json::from_str::<serde_json::Value>(&raw).ok()) + .and_then(|draft| draft.get("highestStep").and_then(serde_json::Value::as_u64)) + .map_or(1, |value| value as u32); + let draft = serde_json::json!({ + "version": 1, + "projectId": 
project_id, + "currentStep": "plan", + "highestStep": previous_highest.max(2), + "describe": { + "name": name, + "description": description, + "workingDirectory": working_directory, + "planAgent": plan_agent, + "planModel": plan_model, + "planEffort": plan_effort + }, + "plan": { "completed": false }, + "atomize": { "storiesCount": stories_count }, + "configure": config.unwrap_or_default() + }); + std::fs::create_dir_all(&dir)?; + std::fs::write( + dir.join("draft.json"), + serde_json::to_string_pretty(&draft)?, + )?; + Ok(WizardRouteResult { + project_id: project_id.clone(), + next_route: format!("/new/plan/{project_id}"), + working_directory: Some(working_directory), + }) +} + +#[tauri::command] +pub async fn complete_atomize_step( + app: AppHandle, + db: State<'_, DbState>, + project_id: String, +) -> Result<WizardRouteResult, ProjectError> { + let project_id = required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + save_wizard_draft(app, db, project_id.clone(), "configure".to_string()).await?; + let _ = advance_wizard_step(4).await; + Ok(WizardRouteResult { + project_id: project_id.clone(), + next_route: format!("/new/configure/{project_id}"), + working_directory: None, + }) +} + +#[derive(Debug, Serialize)] +#[serde(rename_all = "camelCase")] +pub struct CompleteConfigureResult { + pub config: ProjectConfig, + pub errors: std::collections::HashMap<String, String>, + pub next_route: String, +} + +#[tauri::command] +pub async fn complete_configure_step( + app: AppHandle, + db: State<'_, DbState>, + project_id: String, + raw: RawConfigInput, +) -> Result<CompleteConfigureResult, ProjectError> { + let project_id = required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + let result = submit_project_config(app.clone(), project_id.clone(), raw).await?; + if !result.errors.is_empty() { + return Ok(CompleteConfigureResult { + config: result.config, + errors: result.errors, + next_route: format!("/new/configure/{project_id}"), + }); + } + save_wizard_draft(app, db, project_id.clone(), "launch".to_string()).await?; + let 
_ = advance_wizard_step(5).await; + Ok(CompleteConfigureResult { + config: result.config, + errors: std::collections::HashMap::new(), + next_route: format!("/new/launch/{project_id}"), + }) +} + +#[derive(Debug, Serialize)] +#[serde(rename_all = "camelCase")] +pub struct LaunchProjectResult { + pub session_id: String, + pub monitor_route: String, +} + +#[tauri::command] +pub async fn launch_project( + app: AppHandle, + db: State<'_, DbState>, + project_id: String, +) -> Result<LaunchProjectResult, ProjectError> { + let project_id = required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + crate::projects::wizard::finalize_draft(app.clone(), db, project_id.clone()).await?; + let session_id = crate::loop_manager::start_loop( + app, + StartLoopArgs { + project_id: project_id.clone(), + project_name: None, + working_directory: None, + agent: None, + model: None, + effort: None, + fallback_agents: Vec::new(), + max_iterations: None, + gutter_threshold: None, + cooldown_seconds: None, + test_command: None, + max_verification_retries: None, + scm_provider: None, + review_polling_interval: None, + review_timeout: None, + }, + ) + .await + .map_err(|err| ProjectError::Db(err.to_string()))?; + Ok(LaunchProjectResult { + session_id, + monitor_route: format!("/monitor/{project_id}"), + }) +} + +#[tauri::command] +pub async fn exit_wizard( + app: AppHandle, + db: State<'_, DbState>, + plan_state: State<'_, PlanSessionsState>, + project_id: String, + current_step: u32, +) -> Result<(), ProjectError> { + let project_id = required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + if current_step == 2 { + let _ = crate::plan_engine::stop_plan(plan_state, project_id.clone()).await; + } + let step_name = crate::projects::wizard_state::step_number_to_name(current_step) + .unwrap_or("describe") + .to_string(); + save_wizard_draft(app, db, project_id, step_name).await +} + +#[derive(Debug, Clone, Serialize)] +#[serde(rename_all = "camelCase")] +pub struct WizardStepMeta { + pub number: 
u32, + pub label: String, + pub slug: String, +} + +#[derive(Debug, Clone, Serialize)] +#[serde(rename_all = "camelCase")] +pub struct MonitorTabMeta { + pub id: String, + pub label: String, + #[serde(default)] + pub disable_for_inactive: bool, +} + +#[derive(Debug, Clone, Serialize)] +#[serde(rename_all = "camelCase")] +pub struct WizardDefaultsResponse { + pub default_agent: String, + pub placeholder_config: ProjectConfig, + pub wizard_steps: Vec<WizardStepMeta>, + pub monitor_tabs: Vec<MonitorTabMeta>, +} + +#[tauri::command] +pub async fn get_wizard_defaults() -> Result<WizardDefaultsResponse, ProjectError> { + Ok(WizardDefaultsResponse { + default_agent: "claude".to_string(), + placeholder_config: ProjectConfig::default(), + wizard_steps: vec![ + WizardStepMeta { + number: 1, + label: "Describe".to_string(), + slug: "describe".to_string(), + }, + WizardStepMeta { + number: 2, + label: "Plan".to_string(), + slug: "plan".to_string(), + }, + WizardStepMeta { + number: 3, + label: "Atomize".to_string(), + slug: "atomize".to_string(), + }, + WizardStepMeta { + number: 4, + label: "Configure".to_string(), + slug: "configure".to_string(), + }, + WizardStepMeta { + number: 5, + label: "Launch".to_string(), + slug: "launch".to_string(), + }, + ], + monitor_tabs: vec![ + MonitorTabMeta { + id: "progress".to_string(), + label: "Progress".to_string(), + disable_for_inactive: false, + }, + MonitorTabMeta { + id: "activity".to_string(), + label: "Activity".to_string(), + disable_for_inactive: false, + }, + MonitorTabMeta { + id: "output".to_string(), + label: "Output".to_string(), + disable_for_inactive: false, + }, + MonitorTabMeta { + id: "ask".to_string(), + label: "Ask".to_string(), + disable_for_inactive: true, + }, + MonitorTabMeta { + id: "config".to_string(), + label: "Config".to_string(), + disable_for_inactive: false, + }, + ], + }) +} + +#[derive(Debug, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct RawConfigInput { + #[serde(default)] + pub execute_agent: String, + #[serde(default)] + pub execute_model: Option<String>, + 
#[serde(default)] + pub execute_effort: Option<String>, + #[serde(default)] + pub fallback_chain: Vec<String>, + #[serde(default)] + pub gutter_threshold: u32, + #[serde(default)] + pub max_iterations: u32, + #[serde(default)] + pub cooldown_seconds: u32, + #[serde(default)] + pub test_command: String, + #[serde(default)] + pub max_verification_retries: u32, + #[serde(default)] + pub scm_provider: String, + #[serde(default)] + pub review_polling_interval: u64, + #[serde(default)] + pub review_timeout: u64, +} + +#[derive(Serialize)] +#[serde(rename_all = "camelCase")] +pub struct SubmitConfigResult { + pub config: ProjectConfig, + pub errors: std::collections::HashMap<String, String>, +} + +#[tauri::command] +pub async fn submit_project_config( + app: AppHandle, + project_id: String, + raw: RawConfigInput, +) -> Result<SubmitConfigResult, ProjectError> { + let project_id = required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + let agent = required_trimmed(raw.execute_agent, "execute_agent").unwrap_or_default(); + + let sanitized_chain = non_empty_trimmed_list(raw.fallback_chain) + .into_iter() + .filter(|entry| entry != &agent) + .collect::<Vec<String>>(); + + let config = ProjectConfig { + schema_version: 1, + execute_agent: agent, + execute_model: raw.execute_model.filter(|val| !val.trim().is_empty()), + execute_effort: raw.execute_effort.filter(|val| !val.trim().is_empty()), + fallback_chain: sanitized_chain, + gutter_threshold: raw.gutter_threshold, + max_iterations: raw.max_iterations, + cooldown_seconds: raw.cooldown_seconds, + test_command: raw.test_command.trim().to_string(), + max_verification_retries: raw.max_verification_retries, + scm_provider: if raw.scm_provider.trim().is_empty() { + "auto".to_string() + } else { + raw.scm_provider.trim().to_string() + }, + review_polling_interval: raw.review_polling_interval, + review_timeout: raw.review_timeout, + }; + + let validation = crate::projects::validation::validate_config(&config); + if !validation.is_empty() { + return Ok(SubmitConfigResult { + config, + errors: 
validation.errors, + }); + } + + let config_json = serde_json::to_string_pretty(&config)?; + let dir = crate::projects::artifacts::artifact_dir(&app, &project_id)?; + std::fs::create_dir_all(&dir)?; + std::fs::write(dir.join("config.json"), &config_json)?; + + Ok(SubmitConfigResult { + config, + errors: std::collections::HashMap::new(), + }) +} + +#[derive(Serialize)] +#[serde(rename_all = "camelCase")] +pub struct StaleResult { + pub stale_from_step: u32, +} + +#[tauri::command] +pub async fn mark_wizard_stale( + db: State<'_, DbState>, + project_id: String, + from_step: u32, +) -> Result<StaleResult, ProjectError> { + let project_id = required_trimmed(project_id, "project_id").map_err(ProjectError::Path)?; + + let now = chrono::Utc::now().to_rfc3339(); + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let _ = conn.execute( + "UPDATE projects SET updated_at = ?1 WHERE id = ?2", + rusqlite::params![now, project_id], + ); + + Ok(StaleResult { + stale_from_step: from_step, + }) +} diff --git a/src-tauri/src/connections.rs b/src-tauri/src/connections.rs index 4000eae..bbd1f9d 100644 --- a/src-tauri/src/connections.rs +++ b/src-tauri/src/connections.rs @@ -73,7 +73,9 @@ pub async fn create_connection( validate_repo_paths(&repos)?; let connection_id = Uuid::new_v4().to_string(); - let conn = db.0.lock().map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; + let conn = + db.0.lock() + .map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; conn.execute( "INSERT INTO connections (id, name) VALUES (?1, ?2)", @@ -102,14 +104,13 @@ } #[tauri::command] -pub async fn list_connections( - db: State<'_, DbState>, -) -> Result<Vec<Connection>, ConnectionError> { - let conn = db.0.lock().map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; +pub async fn list_connections(db: State<'_, DbState>) -> Result<Vec<Connection>, ConnectionError> { + let conn = + db.0.lock() + .map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; - let mut stmt =
conn.prepare( - "SELECT id, name, created_at FROM connections ORDER BY created_at DESC", - )?; + let mut stmt = + conn.prepare("SELECT id, name, created_at FROM connections ORDER BY created_at DESC")?; let connections: Vec<(String, String, String)> = stmt .query_map([], |row| { Ok(( @@ -118,7 +119,7 @@ row.get::<_, String>(2)?, )) })? - .filter_map(|row| row.ok()) + .filter_map(Result::ok) .collect(); let mut result = Vec::with_capacity(connections.len()); @@ -139,9 +140,8 @@ fn load_repos( conn: &rusqlite::Connection, connection_id: &str, ) -> Result<Vec<ConnectionRepo>, ConnectionError> { - let mut stmt = conn.prepare( - "SELECT repo_path, display_name FROM connection_repos WHERE connection_id = ?1", - )?; + let mut stmt = conn + .prepare("SELECT repo_path, display_name FROM connection_repos WHERE connection_id = ?1")?; let repos = stmt .query_map([connection_id], |row| { Ok(ConnectionRepo { @@ -149,7 +149,7 @@ display_name: row.get(1)?, }) })? - .filter_map(|row| row.ok()) + .filter_map(Result::ok) .collect(); Ok(repos) } @@ -161,14 +161,17 @@ pub async fn update_connection( name: Option<String>, repos: Option<Vec<ConnectionRepo>>, ) -> Result<Connection, ConnectionError> { - let conn = db.0.lock().map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; + let conn = + db.0.lock() + .map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; - let exists: bool = conn.query_row( - "SELECT COUNT(*) FROM connections WHERE id = ?1", - [&connection_id], - |row| row.get::<_, i64>(0), - ) - .map(|count| count > 0)?; + let exists: bool = conn + .query_row( + "SELECT COUNT(*) FROM connections WHERE id = ?1", + [&connection_id], + |row| row.get::<_, i64>(0), + ) + .map(|count| count > 0)?; if !exists { return Err(ConnectionError::NotFound(connection_id)); @@ -223,11 +226,10 @@ pub async fn delete_connection( .map_err(|err| ConnectionError::Io(format!("Failed to remove workspace: {err}")))?; } - let conn = db.0.lock().map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; - conn.execute( - "DELETE
FROM connections WHERE id = ?1", - [&connection_id], - )?; + let conn = + db.0.lock() + .map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; + conn.execute("DELETE FROM connections WHERE id = ?1", [&connection_id])?; Ok(()) } @@ -251,7 +253,9 @@ pub async fn connection_merge_summary( base_ref: String, ) -> Result, ConnectionError> { let repos = { - let conn = db.0.lock().map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; + let conn = + db.0.lock() + .map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; load_repos(&conn, &connection_id)? }; @@ -259,15 +263,12 @@ let mut summaries = Vec::with_capacity(repos.len()); for repo in &repos { - let display = repo - .display_name - .as_deref() - .unwrap_or_else(|| { - std::path::Path::new(&repo.repo_path) - .file_name() - .and_then(|fname| fname.to_str()) - .unwrap_or("repo") - }); + let display = repo.display_name.as_deref().unwrap_or_else(|| { + std::path::Path::new(&repo.repo_path) + .file_name() + .and_then(|fname| fname.to_str()) + .unwrap_or("repo") + }); let worktree_path = workspace_dir.join(display); let diff_output = tokio::process::Command::new("git") @@ -290,12 +291,10 @@ .await; let commit_count = match commit_output { - Ok(out) if out.status.success() => { - String::from_utf8_lossy(&out.stdout) - .trim() - .parse::<u32>() - .unwrap_or(0) - } + Ok(out) if out.status.success() => String::from_utf8_lossy(&out.stdout) + .trim() + .parse::<u32>() + .unwrap_or(0), _ => 0, }; @@ -354,32 +353,31 @@ pub fn build_workspace( for repo in repos { let source = std::path::Path::new(&repo.repo_path); - let link_name = repo - .display_name - .as_deref() - .unwrap_or_else(|| { - source - .file_name() - .and_then(|name| name.to_str()) - .unwrap_or("repo") - }); + let link_name = repo.display_name.as_deref().unwrap_or_else(|| { + source + .file_name() + .and_then(|name| name.to_str()) + .unwrap_or("repo") + }); let link_path =
workspace_dir.join(link_name); #[cfg(unix)] - std::os::unix::fs::symlink(source, &link_path) - .map_err(|err| ConnectionError::Io(format!( + std::os::unix::fs::symlink(source, &link_path).map_err(|err| { + ConnectionError::Io(format!( "Failed to symlink {} -> {}: {err}", link_path.display(), source.display() - )))?; + )) + })?; #[cfg(windows)] - junction::create(source, &link_path) - .map_err(|err| ConnectionError::Io(format!( + junction::create(source, &link_path).map_err(|err| { + ConnectionError::Io(format!( "Failed to create junction {} -> {}: {err}", link_path.display(), source.display() - )))?; + )) + })?; } Ok(()) @@ -397,15 +395,12 @@ pub fn generate_workspace_manifest( content.push_str("## Repositories\n\n"); for repo in repos { - let display = repo - .display_name - .as_deref() - .unwrap_or_else(|| { - std::path::Path::new(&repo.repo_path) - .file_name() - .and_then(|name| name.to_str()) - .unwrap_or("repo") - }); + let display = repo.display_name.as_deref().unwrap_or_else(|| { + std::path::Path::new(&repo.repo_path) + .file_name() + .and_then(|name| name.to_str()) + .unwrap_or("repo") + }); content.push_str(&format!("### {display}\n\n")); content.push_str(&format!("Path: `{}`\n\n", repo.repo_path)); } @@ -427,12 +422,16 @@ pub async fn build_connection_workspace( db: State<'_, DbState>, connection_id: String, ) -> Result { - let conn = db.0.lock().map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; + let conn = + db.0.lock() + .map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; let repos = load_repos(&conn, &connection_id)?; drop(conn); let name: String = { - let conn2 = db.0.lock().map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; + let conn2 = + db.0.lock() + .map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; conn2.query_row( "SELECT name FROM connections WHERE id = ?1", [&connection_id], @@ -455,7 +454,9 @@ pub async fn create_connection_worktrees( branch_name: String, ) -> Result { let (repos, name) = { - let conn = 
db.0.lock().map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; + let conn = + db.0.lock() + .map_err(|_| ConnectionError::Db("Lock poisoned".into()))?; let repos = load_repos(&conn, &connection_id)?; let name: String = conn.query_row( "SELECT name FROM connections WHERE id = ?1", @@ -474,28 +475,29 @@ pub async fn create_connection_worktrees( .map_err(|err| ConnectionError::Io(format!("Failed to create workspace: {err}")))?; for repo in &repos { - let display = repo - .display_name - .as_deref() - .unwrap_or_else(|| { - std::path::Path::new(&repo.repo_path) - .file_name() - .and_then(|fname| fname.to_str()) - .unwrap_or("repo") - }); + let display = repo.display_name.as_deref().unwrap_or_else(|| { + std::path::Path::new(&repo.repo_path) + .file_name() + .and_then(|fname| fname.to_str()) + .unwrap_or("repo") + }); let worktree_path = workspace_dir.join(display); let worktree_branch = format!("loopforge/{branch_name}/{display}"); let output = tokio::process::Command::new("git") .args([ - "worktree", "add", + "worktree", + "add", &worktree_path.to_string_lossy(), - "-b", &worktree_branch, + "-b", + &worktree_branch, ]) .current_dir(&repo.repo_path) .output() .await - .map_err(|err| ConnectionError::Io(format!("git worktree failed for {display}: {err}")))?; + .map_err(|err| { + ConnectionError::Io(format!("git worktree failed for {display}: {err}")) + })?; if !output.status.success() { let stderr = String::from_utf8_lossy(&output.stderr); diff --git a/src-tauri/src/contract_tests/artifacts.rs b/src-tauri/src/contract_tests/artifacts.rs index 5afd6ab..bc40928 100644 --- a/src-tauri/src/contract_tests/artifacts.rs +++ b/src-tauri/src/contract_tests/artifacts.rs @@ -1,3 +1,5 @@ +#[path = "merge_coordinator.rs"] +mod merge_coordinator; #[path = "wizard_persistence.rs"] mod wizard_persistence; @@ -33,8 +35,9 @@ async fn happy_path_persists_artifacts_and_runtime_histories() { artifact_dir.to_string_lossy(), harness.artifact_dir(&project.id).to_string_lossy() ); - for 
name in ["draft.json", "plan.md", "prd.json", "config.json"] { - assert!(artifact_dir.join(name).exists(), "{name} must exist"); + for path in crate::storage::artifacts::file_paths(&artifact_dir) { + let name = path.file_name().unwrap().to_string_lossy(); + assert!(path.exists(), "{name} must exist"); } let _session_id = harness.start_loop(&project.id).await; diff --git a/src-tauri/src/contract_tests/harness.rs b/src-tauri/src/contract_tests/harness.rs index 3b68558..78fcaa1 100644 --- a/src-tauri/src/contract_tests/harness.rs +++ b/src-tauri/src/contract_tests/harness.rs @@ -3,12 +3,10 @@ use crate::commands; use crate::db::DbState; use crate::loop_manager::{LoopManagerState, StartLoopArgs}; use std::path::{Path, PathBuf}; -use std::sync::{Mutex, MutexGuard}; +use std::sync::MutexGuard; use tauri::test::{mock_builder, mock_context, noop_assets, MockRuntime}; use tauri::{App, Manager}; -static ENV_LOCK: Mutex<()> = Mutex::new(()); - pub struct TestHarness { _lock: MutexGuard<'static, ()>, env_guard: EnvGuard, @@ -24,8 +22,11 @@ struct EnvGuard { impl TestHarness { pub fn new() -> Self { - let lock = ENV_LOCK.lock().unwrap_or_else(|err| err.into_inner()); - let root_dir = std::env::temp_dir().join(format!("loopforge-contract-{}", uuid::Uuid::new_v4())); + let lock = crate::test_env_lock::ENV_LOCK + .lock() + .unwrap_or_else(|err| err.into_inner()); + let root_dir = + std::env::temp_dir().join(format!("loopforge-contract-{}", uuid::Uuid::new_v4())); let home_dir = root_dir.join("home"); let bin_dir = root_dir.join("bin"); let work_dir = root_dir.join("work"); @@ -76,7 +77,7 @@ conn.query_row( "SELECT status FROM projects WHERE id = ?1", rusqlite::params![project_id], - |row| row.get::<_, String>(0), + |row: &rusqlite::Row| row.get::<_, String>(0), ) .expect("project status") }; @@ -94,12 +95,10 @@ expected: usize, ) -> Vec { for _ in 0..100 { - let messages = commands::ask::ask_history( - self.app.state::<DbState>(),
project_id.to_string(), - ) - .await - .expect("ask history"); + let messages = + commands::ask::ask_history(self.app.state::<DbState>(), project_id.to_string()) + .await + .expect("ask history"); if messages.len() >= expected { return messages; } @@ -124,6 +123,9 @@ cooldown_seconds: None, test_command: None, max_verification_retries: None, + scm_provider: None, + review_polling_interval: None, + review_timeout: None, }, ) .await @@ -140,10 +142,14 @@ impl Drop for TestHarness { impl EnvGuard { fn set(home_dir: &Path, bin_dir: &Path) -> Self { - let path = match std::env::var("PATH") { - Ok(existing) => format!("{}:{}", bin_dir.display(), existing), - Err(_) => bin_dir.display().to_string(), - }; + let mut path_items = vec![bin_dir.to_path_buf()]; + if let Some(existing_path) = std::env::var_os("PATH") { + path_items.extend(std::env::split_paths(&existing_path)); + } + let path = std::env::join_paths(path_items) + .ok() + .and_then(|joined| joined.into_string().ok()) + .unwrap_or_else(|| bin_dir.display().to_string()); let keys = vec![ ("HOME", std::env::var("HOME").ok()), ("XDG_DATA_HOME", std::env::var("XDG_DATA_HOME").ok()), @@ -170,16 +176,21 @@ } fn install_fixture_agent(bin_dir: &Path, agent_name: &str) { + #[cfg(windows)] + let script = bin_dir.join(format!("{agent_name}.cmd")); + #[cfg(not(windows))] let script = bin_dir.join(agent_name); - std::fs::write( - &script, - "#!/bin/sh\nprintf 'fixture agent completed\\n'\n", - ) - .expect("fixture agent"); + #[cfg(windows)] + let content = "@echo off\r\necho fixture agent completed\r\n"; + #[cfg(not(windows))] + let content = "#!/bin/sh\nprintf 'fixture agent completed\\n'\n"; + std::fs::write(&script, content).expect("fixture agent"); #[cfg(unix)] { use std::os::unix::fs::PermissionsExt; - let mut perms = std::fs::metadata(&script).expect("fixture metadata").permissions(); + let mut perms = std::fs::metadata(&script) + .expect("fixture metadata") + .permissions();
perms.set_mode(0o755); std::fs::set_permissions(&script, perms).expect("fixture perms"); } diff --git a/src-tauri/src/contract_tests/invoke_fixtures.rs b/src-tauri/src/contract_tests/invoke_fixtures.rs index 97f5ad0..045541d 100644 --- a/src-tauri/src/contract_tests/invoke_fixtures.rs +++ b/src-tauri/src/contract_tests/invoke_fixtures.rs @@ -1,7 +1,5 @@ use std::path::{Path, PathBuf}; -use std::sync::{Mutex, MutexGuard}; - -static ENV_LOCK: Mutex<()> = Mutex::new(()); +use std::sync::MutexGuard; const ENV_KEYS: [&str; 6] = [ "HOME", @@ -24,7 +22,9 @@ pub struct FixtureGuard { impl FixtureGuard { pub fn new(fixture_set: Option<&str>) -> Self { - let lock = ENV_LOCK.lock().unwrap_or_else(|err| err.into_inner()); + let lock = crate::test_env_lock::ENV_LOCK + .lock() + .unwrap_or_else(|err| err.into_inner()); let root_dir = std::env::temp_dir().join(format!("loopforge-invoke-{}", uuid::Uuid::new_v4())); let home_dir = root_dir.join("home"); @@ -45,10 +45,7 @@ impl FixtureGuard { .into_iter() .map(|key| (key, std::env::var(key).ok())) .collect(); - let path = match std::env::var("PATH") { - Ok(existing) => format!("{}:{existing}", bin_dir.display()), - Err(_) => bin_dir.display().to_string(), - }; + let path = prepend_path(&bin_dir); std::env::set_var("HOME", &home_dir); std::env::set_var("XDG_CONFIG_HOME", home_dir.join(".config")); std::env::set_var("XDG_DATA_HOME", home_dir.join(".local").join("share")); @@ -89,12 +86,71 @@ impl Drop for FixtureGuard { } fn install_loop_agent(bin_dir: &Path, agent_name: &str) { - let script = "#!/bin/sh\nprintf 'fixture loop completed\\n'\n"; + let script = loop_agent_script(); install_script(bin_dir, agent_name, script); } fn install_atomizer_agent(bin_dir: &Path, agent_name: &str) { - let script = r#"#!/bin/sh + let script = atomizer_agent_script(); + install_script(bin_dir, agent_name, script); +} + +fn install_broken_atomizer_agent(bin_dir: &Path, agent_name: &str) { + let script = broken_atomizer_agent_script(); + 
install_script(bin_dir, agent_name, script); +} + +fn install_script(bin_dir: &Path, name: &str, content: &str) { + let script = bin_dir.join(script_name(name)); + std::fs::write(&script, content).expect("script"); + #[cfg(unix)] + { + use std::os::unix::fs::PermissionsExt; + let mut perms = std::fs::metadata(&script).expect("metadata").permissions(); + perms.set_mode(0o755); + std::fs::set_permissions(&script, perms).expect("permissions"); + } +} + +fn prepend_path(bin_dir: &Path) -> String { + let mut path_items = vec![bin_dir.to_path_buf()]; + if let Some(existing) = std::env::var_os("PATH") { + path_items.extend(std::env::split_paths(&existing)); + } + std::env::join_paths(path_items) + .ok() + .and_then(|value| value.into_string().ok()) + .unwrap_or_else(|| bin_dir.display().to_string()) +} + +#[cfg(windows)] +fn script_name(name: &str) -> String { + format!("{name}.cmd") +} + +#[cfg(not(windows))] +fn script_name(name: &str) -> String { + name.to_string() +} + +#[cfg(windows)] +fn loop_agent_script() -> &'static str { + "@echo off\r\necho fixture loop completed\r\n" +} + +#[cfg(not(windows))] +fn loop_agent_script() -> &'static str { + "#!/bin/sh\nprintf 'fixture loop completed\\n'\n" +} + +#[cfg(windows)] +fn atomizer_agent_script() -> &'static str { + "@echo off\r\nset prompt=%2\r\necho %prompt% | findstr /C:\"Condense the following implementation plan\" >nul && (echo Create the project, atomize the plan, execute the story, and archive the result.& exit /b 0)\r\necho %prompt% | findstr /C:\"Split the following implementation plan\" >nul && (echo [{\"title\":\"Lifecycle\",\"content\":\"Create the project, atomize the plan, execute the story, and archive the result.\"}]& exit /b 0)\r\necho %prompt% | findstr /C:\"decomposing a plan section into atomic user stories\" >nul && (echo [{\"title\":\"Exercise invoke contracts\",\"description\":\"Cover invoke lifecycle commands.\",\"acceptanceCriteria\":[\"Lifecycle commands succeed\",\"Runtime history is 
recorded\"],\"scope\":{\"filesToModify\":[\"src-tauri/src/lib.rs\"],\"filesToCreate\":[\"src-tauri/src/contract_tests/invoke_handler.rs\"],\"filesToAvoid\":[]},\"verification\":{\"commands\":[\"cargo test -p loopforge contract_tests\"],\"assertions\":[]},\"commitMessage\":\"test(tauri): cover invoke handler contracts\",\"priority\":\"critical\",\"estimatedComplexity\":\"small\",\"estimatedMinutes\":30,\"dependsOn\":[]}]& exit /b 0)\r\necho %prompt% | findstr /C:\"technical lead finalizing\" >nul && (echo {\"projectName\":\"Invoke Contract\",\"feature\":\"Lifecycle coverage\",\"workingDirectory\":\"\",\"generatedAt\":\"2026-04-09T10:10:00.000Z\",\"stories\":[{\"id\":\"S-001\",\"title\":\"Exercise invoke contracts\",\"description\":\"Cover invoke lifecycle commands.\",\"acceptanceCriteria\":[\"Lifecycle commands succeed\",\"Runtime history is recorded\"],\"scope\":{\"filesToModify\":[\"src-tauri/src/lib.rs\"],\"filesToCreate\":[\"src-tauri/src/contract_tests/invoke_handler.rs\"],\"filesToAvoid\":[]},\"verification\":{\"commands\":[\"cargo test -p loopforge contract_tests\"],\"assertions\":[]},\"commitMessage\":\"test(tauri): cover invoke handler contracts\",\"priority\":\"critical\",\"estimatedComplexity\":\"small\",\"estimatedMinutes\":30,\"dependsOn\":[],\"passes\":false,\"blocked\":false,\"attempts\":0,\"notes\":null}]}& exit /b 0)\r\necho []\r\n" +} + +#[cfg(not(windows))] +fn atomizer_agent_script() -> &'static str { + r#"#!/bin/sh prompt="$2" if printf '%s' "$prompt" | grep -q "Condense the following implementation plan"; then printf '%s\n' 'Create the project, atomize the plan, execute the story, and archive the result.' 
@@ -107,25 +163,15 @@ elif printf '%s' "$prompt" | grep -q "technical lead finalizing"; then else printf '%s\n' '[]' fi -"#; - install_script(bin_dir, agent_name, script); +"# } -fn install_broken_atomizer_agent(bin_dir: &Path, agent_name: &str) { - let script = "#!/bin/sh\nprintf '%s\\n' 'not valid json'\n"; - install_script(bin_dir, agent_name, script); +#[cfg(windows)] +fn broken_atomizer_agent_script() -> &'static str { + "@echo off\r\necho not valid json\r\n" } -fn install_script(bin_dir: &Path, name: &str, content: &str) { - let script = bin_dir.join(name); - std::fs::write(&script, content).expect("script"); - #[cfg(unix)] - { - use std::os::unix::fs::PermissionsExt; - let mut perms = std::fs::metadata(&script) - .expect("metadata") - .permissions(); - perms.set_mode(0o755); - std::fs::set_permissions(&script, perms).expect("permissions"); - } +#[cfg(not(windows))] +fn broken_atomizer_agent_script() -> &'static str { + "#!/bin/sh\nprintf '%s\\n' 'not valid json'\n" } diff --git a/src-tauri/src/contract_tests/invoke_handler.rs b/src-tauri/src/contract_tests/invoke_handler.rs index c5d239d..68449f2 100644 --- a/src-tauri/src/contract_tests/invoke_handler.rs +++ b/src-tauri/src/contract_tests/invoke_handler.rs @@ -30,7 +30,8 @@ async fn invoke_handler_covers_lifecycle_and_runtime_commands() { harness.invoke_ok("query_plan_status", json!({ "projectId": project.id })); assert!(status.is_some()); harness.wait_for_plan_idle(&project.id).await; - let plan: Option<String> = harness.invoke_ok("load_existing_plan", json!({ "projectId": project.id })); + let plan: Option<String> = + harness.invoke_ok("load_existing_plan", json!({ "projectId": project.id })); assert!(plan.is_some()); let prd: Prd = harness.invoke_ok( diff --git a/src-tauri/src/contract_tests/invoke_harness.rs b/src-tauri/src/contract_tests/invoke_harness.rs index 1a6bffc..dd37f66 100644 --- a/src-tauri/src/contract_tests/invoke_harness.rs +++ b/src-tauri/src/contract_tests/invoke_harness.rs @@ -30,7 +30,9 @@ impl
InvokeHarness { .manage(crate::agents::AgentRegistryState::default()) .manage(PlanSessionsState::default()) .manage(LoopManagerState::default()) - .manage(crate::ask_engine::session::AskSessionsState::default()); + .manage(crate::ask_engine::session::AskSessionsState::default()) + .manage(crate::atomizer::PipelineRegistryState::default()) + .manage(crate::atomizer::ActivityLogState::default()); let app = invoke::attach_contract(builder) .build(mock_context(noop_assets())) .expect("test app"); @@ -69,33 +71,35 @@ impl InvokeHarness { } pub async fn wait_for_plan_idle(&self, project_id: &str) { - for _ in 0..100 { - let status: Option = - self.invoke_ok("query_plan_status", serde_json::json!({ "projectId": project_id })); + for _ in 0..200 { + let status: Option = self.invoke_ok( + "query_plan_status", + serde_json::json!({ "projectId": project_id }), + ); if status.is_none() && self.artifact_dir(project_id).join("plan.md").exists() { return; } - tokio::time::sleep(std::time::Duration::from_millis(20)).await; + tokio::time::sleep(std::time::Duration::from_millis(50)).await; } panic!("plan session did not settle for {project_id}"); } pub async fn wait_for_completion(&self, project_id: &str) { - for _ in 0..100 { + for _ in 0..300 { let status = { let db = self.app.state::<DbState>(); let conn = db.0.lock().expect("db lock"); conn.query_row( "SELECT status FROM projects WHERE id = ?1", rusqlite::params![project_id], - |row| row.get::<_, String>(0), + |row: &rusqlite::Row| row.get::<_, String>(0), ) .expect("project status") }; if status == "completed" { return; } - tokio::time::sleep(std::time::Duration::from_millis(50)).await; + tokio::time::sleep(std::time::Duration::from_millis(100)).await; } panic!("loop did not complete for {project_id}"); } diff --git a/src-tauri/src/contract_tests/merge_coordinator.rs b/src-tauri/src/contract_tests/merge_coordinator.rs new file mode 100644 index 0000000..3aeab89 --- /dev/null +++ b/src-tauri/src/contract_tests/merge_coordinator.rs @@
-0,0 +1,145 @@ +use crate::projects::documents::{apply_merge_action, merge_action_target, ordered_merge_actions}; +use ralph_core::prd::{Prd, UserStory}; +use ralph_core::WorktreeCompletion; + +#[test] +fn parallel_completions_produce_one_ordered_write_sequence() { + let root = std::env::temp_dir().join(format!("loopforge-merge-{}", uuid::Uuid::new_v4())); + std::fs::create_dir_all(&root).unwrap(); + seed_prd(&root); + + let actions = ordered_merge_actions(vec![ + completion( + "S-002", + "wt-b", + false, + true, + 1, + Some("guardrail from wt-b"), + Some("bbb222"), + ), + completion("S-001", "wt-a", true, false, 0, None, Some("aaa111")), + ]); + + let mut targets = Vec::new(); + for action in &actions { + targets.push(merge_action_target(&root, action)); + apply_merge_action(&root, action).unwrap(); + } + + assert_eq!( + targets, + vec![ + root.join("prd.json").display().to_string(), + "session:wt-a".to_string(), + root.join("prd.json").display().to_string(), + root.join("guardrails.md").display().to_string(), + "session:wt-b".to_string(), + ] + ); + + let prd = Prd::load(&root.join("prd.json")).unwrap(); + assert!( + prd.stories + .iter() + .find(|story| story.id == "S-001") + .unwrap() + .passes + ); + let blocked = prd + .stories + .iter() + .find(|story| story.id == "S-002") + .unwrap(); + assert!(!blocked.passes); + assert!(blocked.blocked); + assert_eq!( + std::fs::read_to_string(root.join("guardrails.md")).unwrap(), + "guardrail from wt-b\n" + ); + + let _ = std::fs::remove_dir_all(root); +} + +#[test] +fn merge_targets_keep_contract_artifact_filenames() { + let root = std::env::temp_dir().join(format!("loopforge-targets-{}", uuid::Uuid::new_v4())); + std::fs::create_dir_all(&root).unwrap(); + + let actions = ordered_merge_actions(vec![completion( + "S-003", + "wt-c", + false, + true, + 0, + Some("guardrail from wt-c"), + Some("ccc333"), + )]); + + let targets: Vec<String> = actions + .iter() + .map(|action| merge_action_target(&root, action)) + .collect(); + 
assert!(targets.iter().any(|target| target.ends_with("prd.json"))); + assert!(targets + .iter() + .any(|target| target.ends_with("guardrails.md"))); + assert!(targets.iter().any(|target| target == "session:wt-c")); + + let _ = std::fs::remove_dir_all(root); +} + +fn seed_prd(root: &std::path::Path) { + let prd = Prd { + project_name: "Merge Contract".to_string(), + feature: String::new(), + working_directory: String::new(), + branch_name: None, + stories: vec![story("S-001"), story("S-002")], + generated_at: None, + }; + prd.save(&root.join("prd.json")).unwrap(); + std::fs::write(root.join("guardrails.md"), "").unwrap(); +} + +fn story(id: &str) -> UserStory { + UserStory { + id: id.to_string(), + title: id.to_string(), + description: None, + acceptance_criteria: vec!["contract".to_string()], + scope: Default::default(), + verification: Default::default(), + commit_message: None, + priority: Default::default(), + estimated_complexity: Default::default(), + estimated_minutes: 0, + depends_on: vec![], + passes: false, + blocked: false, + attempts: 0, + notes: None, + } +} + +fn completion( + story_id: &str, + worktree_id: &str, + passed: bool, + blocked: bool, + sequence: u64, + guardrail_append: Option<&str>, + head_commit: Option<&str>, +) -> WorktreeCompletion { + WorktreeCompletion { + worktree_id: worktree_id.to_string(), + story_id: story_id.to_string(), + passed, + blocked, + head_commit: head_commit.map(str::to_string), + guardrail_append: guardrail_append.map(str::to_string), + branch: None, + sequence, + } +} diff --git a/src-tauri/src/contract_tests/support.rs b/src-tauri/src/contract_tests/support.rs index dbab6b6..4e03ae5 100644 --- a/src-tauri/src/contract_tests/support.rs +++ b/src-tauri/src/contract_tests/support.rs @@ -110,7 +110,7 @@ pub fn session_ended_at(harness: &TestHarness, project_id: &str) -> Option<String> { let db = harness.app.state::<DbState>(); let conn = db.0.lock().unwrap(); - let conversation = crate::ask_engine::storage::get_or_create_conversation(&conn, project_id) - .unwrap();
+ let conversation = + crate::ask_engine::storage::get_or_create_conversation(&conn, project_id).unwrap(); crate::ask_engine::storage::insert_message( &conn, &conversation.id, @@ -136,7 +136,7 @@ pub async fn start_ask(harness: &TestHarness, project_id: &str, question: &str) .query_row( "SELECT working_directory FROM projects WHERE id = ?1", rusqlite::params![project_id], - |row| row.get(0), + |row: &rusqlite::Row| row.get(0), ) .unwrap(); std::path::PathBuf::from(dir) diff --git a/src-tauri/src/contract_tests/wizard_persistence.rs b/src-tauri/src/contract_tests/wizard_persistence.rs index 21b163a..054ede8 100644 --- a/src-tauri/src/contract_tests/wizard_persistence.rs +++ b/src-tauri/src/contract_tests/wizard_persistence.rs @@ -57,7 +57,7 @@ async fn db_snapshot_matches_canonical_draft_json() { conn.query_row( "SELECT wizard_state_json FROM projects WHERE id = ?1", rusqlite::params![project.id.clone()], - |row| row.get(0), + |row: &rusqlite::Row| row.get(0), ) .expect("wizard state json") }; @@ -97,6 +97,8 @@ async fn resume_wizard_succeeds_with_canonical_draft_payload() { "version": 1, "projectId": project.id, "currentStep": "configure", + "highestStep": 4, + "staleFromStep": serde_json::Value::Null, "describe": { "name": "Roundtrip Project", "description": "Persist wizard draft values", @@ -106,7 +108,7 @@ async fn resume_wizard_succeeds_with_canonical_draft_payload() { "planEffort": "high" }, "plan": { "completed": true }, - "atomize": { "storiesCount": 3 }, + "atomize": { "stories": [], "storiesCount": 3 }, "configure": { "executeAgent": "codex", "executeModel": serde_json::Value::Null, @@ -117,6 +119,7 @@ async fn resume_wizard_succeeds_with_canonical_draft_payload() { "cooldownSeconds": 15, "testCommand": "cargo test wizard_draft_roundtrip", "maxVerificationRetries": 2, + "schemaVersion": 1, "scmProvider": "auto", "reviewPollingInterval": 90, "reviewTimeout": 900 @@ -138,10 +141,11 @@ async fn resume_wizard_succeeds_with_canonical_draft_payload() { ) .await 
.expect("resume wizard"); - let loaded_draft = crate::projects::wizard::load_draft(harness.app.handle().clone(), project.id) - .await - .expect("load draft") - .expect("draft content"); + let loaded_draft = + crate::projects::wizard::load_draft(harness.app.handle().clone(), project.id) + .await + .expect("load draft") + .expect("draft content"); assert_eq!(resume_state.wizard_step, "configure"); assert_eq!( diff --git a/src-tauri/src/diagnostic_parser.rs b/src-tauri/src/diagnostic_parser.rs index f9ce2c0..6d64706 100644 --- a/src-tauri/src/diagnostic_parser.rs +++ b/src-tauri/src/diagnostic_parser.rs @@ -24,9 +24,7 @@ pub struct DiagnosticReport { pub fn parse_build_output(raw_output: String) -> DiagnosticReport { let diagnostics = parse_diagnostics(&raw_output); let total = diagnostics.len(); - let has_errors = diagnostics - .iter() - .any(|diag| diag.error_type != "warning"); + let has_errors = diagnostics.iter().any(|diag| diag.error_type != "warning"); DiagnosticReport { diagnostics, total, @@ -50,10 +48,8 @@ } fn parse_typescript(output: &str) -> Vec<Diagnostic> { - let pattern = Regex::new( - r"(?m)^(.+?)\((\d+),(\d+)\):\s*error\s+(TS\d+):\s*(.+)$" - ) - .expect("valid regex"); + let pattern = + Regex::new(r"(?m)^(.+?)\((\d+),(\d+)\):\s*error\s+(TS\d+):\s*(.+)$").expect("valid regex"); pattern .captures_iter(output) @@ -69,10 +65,8 @@ } fn parse_eslint(output: &str) -> Vec<Diagnostic> { - let pattern = Regex::new( - r"(?m)^\s*(\S+?):(\d+):(\d+):\s+(\S+)\s+(.+?)(?:\s{2,}|\t)(\S+)$" - ) - .expect("valid regex"); + let pattern = Regex::new(r"(?m)^\s*(\S+?):(\d+):(\d+):\s+(\S+)\s+(.+?)(?:\s{2,}|\t)(\S+)$") + .expect("valid regex"); pattern .captures_iter(output) @@ -89,10 +83,9 @@ } fn parse_cargo(output: &str) -> Vec<Diagnostic> { - let error_pattern = Regex::new( - r"(?m)^error(?:\[E(\d+)\])?:\s*(.+)\n\s*-->\s*(.+?):(\d+):(\d+)" - ) - .expect("valid regex"); + let
error_pattern = + Regex::new(r"(?m)^error(?:\[E(\d+)\])?:\s*(.+)\n\s*-->\s*(.+?):(\d+):(\d+)") + .expect("valid regex"); error_pattern .captures_iter(output) @@ -111,29 +104,21 @@ } fn parse_jest(output: &str) -> Vec<Diagnostic> { - let fail_pattern = Regex::new( - r"(?m)FAIL\s+(.+?)(?:\n|\r\n)" - ) - .expect("valid regex"); + let fail_pattern = Regex::new(r"(?m)FAIL\s+(.+?)(?:\n|\r\n)").expect("valid regex"); - let assertion_pattern = Regex::new( - r"(?m)Expected:?\s*(.+)\n\s*Received:?\s*(.+)" - ) - .expect("valid regex"); + let assertion_pattern = + Regex::new(r"(?m)Expected:?\s*(.+)\n\s*Received:?\s*(.+)").expect("valid regex"); - let location_pattern = Regex::new( - r"(?m)at\s+.*?\((.+?):(\d+):(\d+)\)" - ) - .expect("valid regex"); + let location_pattern = Regex::new(r"(?m)at\s+.*?\((.+?):(\d+):(\d+)\)").expect("valid regex"); let mut diagnostics = Vec::new(); for fail_match in fail_pattern.captures_iter(output) { let test_file = fail_match[1].trim().to_string(); - let expected_received = assertion_pattern.captures(output).map(|cap| { - format!("Expected: {}, Received: {}", &cap[1], &cap[2]) - }); + let expected_received = assertion_pattern + .captures(output) + .map(|cap| format!("Expected: {}, Received: {}", &cap[1], &cap[2])); let (line, col) = location_pattern .captures(output) @@ -176,7 +161,8 @@ mod tests { #[test] fn typescript_parser_extracts_error() { - let output = r#"src/auth.ts(42,5): error TS2322: Type 'string' is not assignable to type 'number'"#; + let output = + r#"src/auth.ts(42,5): error TS2322: Type 'string' is not assignable to type 'number'"#; let diags = parse_typescript(output); assert_eq!(diags.len(), 1); assert_eq!(diags[0].file.as_deref(), Some("src/auth.ts")); diff --git a/src-tauri/src/ephemeral_query.rs b/src-tauri/src/ephemeral_query.rs index fc3da84..e8a2671 100644 --- a/src-tauri/src/ephemeral_query.rs +++ b/src-tauri/src/ephemeral_query.rs @@ -58,10 +58,7 @@ fn classify_question(question: &str) -> 
Option<InstantQuery> { .map(|(_, query_type)| *query_type) } -fn answer_instant( - query_type: InstantQuery, - stats: &SessionStats, -) -> String { +fn answer_instant(query_type: InstantQuery, stats: &SessionStats) -> String { match query_type { InstantQuery::CurrentStory => { if stats.is_running { @@ -89,24 +86,16 @@ fn answer_instant( ) } InstantQuery::LoopConfig => { - let agent = stats - .current_agent - .as_deref() - .unwrap_or("none"); + let agent = stats.current_agent.as_deref().unwrap_or("none"); format!( "Agent: {} | Running: {} | Stories: {}/{} completed", - agent, - stats.is_running, - stats.passed_stories, - stats.total_stories + agent, stats.is_running, stats.passed_stories, stats.total_stories ) } - InstantQuery::CurrentAgent => { - match &stats.current_agent { - Some(agent) => format!("Current agent: {agent}"), - None => "No agent is currently active.".to_string(), - } - } + InstantQuery::CurrentAgent => match &stats.current_agent { + Some(agent) => format!("Current agent: {agent}"), + None => "No agent is currently active.".to_string(), + }, } } @@ -119,13 +108,8 @@ pub async fn ephemeral_query( use tauri::Manager; let db_state = app.state::<crate::db::DbState>(); let loop_state = app.state::<crate::loop_manager::LoopManagerState>(); - let stats = crate::loop_manager::session_stats( - app.clone(), - db_state, - loop_state, - project_id, - ) - .await?; + let stats = + crate::loop_manager::session_stats(app.clone(), db_state, loop_state, project_id).await?; if let Some(query_type) = classify_question(&question) { let answer = answer_instant(query_type, &stats); diff --git a/src-tauri/src/events.rs b/src-tauri/src/events.rs index c55028d..a120069 100644 --- a/src-tauri/src/events.rs +++ b/src-tauri/src/events.rs @@ -11,6 +11,9 @@ pub const EVENT_VERIFICATION_FAILED: &str = "loop:verification-failed"; pub const EVENT_VERIFICATION_PASSED: &str = "loop:verification-passed"; pub const EVENT_PROMPT_BUILT: &str = "loop:prompt-built"; pub const EVENT_STORY_SKIPPED: &str = "loop:story-skipped"; +pub const EVENT_STORIES_UPDATED:
&str = "loop:stories-updated"; +pub const EVENT_PROJECT_STATE_CHANGED: &str = "project:state-changed"; +pub const EVENT_NOTIFICATION_ADDED: &str = "notification:added"; pub const EVENT_ASK_STREAM: &str = "ask:stream"; pub const EVENT_ASK_COMPLETE: &str = "ask:complete"; @@ -21,6 +24,8 @@ pub const EVENT_PLAN_COMPLETE: &str = "plan:complete"; pub const EVENT_PLAN_ERROR: &str = "plan:error"; pub const EVENT_PLAN_HEARTBEAT: &str = "plan:heartbeat"; +pub const EVENT_ATOMIZATION_ACTIVITY: &str = "atomization:activity"; + #[allow(dead_code)] pub const LOOP_EVENT_CATALOG: [&str; 13] = [ EVENT_AGENT_OUTPUT_STREAM, diff --git a/src-tauri/src/invoke.rs b/src-tauri/src/invoke.rs index b377fad..dc0faf1 100644 --- a/src-tauri/src/invoke.rs +++ b/src-tauri/src/invoke.rs @@ -1,129 +1,569 @@ -use crate::{agents, commands, connections, ephemeral_query}; +use crate::{agents, commands, connections, ephemeral_query, services}; +#[cfg(not(test))] +#[path = "../../crates/loopforge-app-core/src/atomizer.rs"] +mod app_core_atomizer; +#[cfg(not(test))] +#[path = "../../crates/loopforge-app-core/src/projects.rs"] +mod app_core_projects; #[cfg(not(test))] pub fn attach_app(builder: tauri::Builder) -> tauri::Builder { - builder.invoke_handler(tauri::generate_handler![ - agents::detect_agents, - agents::refresh_agents, - agents::get_agent_capabilities, - commands::planning::start_plan, - commands::planning::write_to_plan, - commands::planning::stop_plan, - commands::planning::query_plan_status, - commands::projects_artifacts::load_existing_plan, - commands::projects_artifacts::load_existing_prd, - commands::projects_artifacts::load_output_log, - commands::projects_artifacts::save_plan, - commands::projects_artifacts::save_prd, - commands::projects_artifacts::save_config, - commands::projects_artifacts::load_config, - commands::atomization::run_atomizer, - commands::projects_lifecycle::create_project, - commands::projects_wizard::finalize_draft, - commands::projects_wizard::discard_draft, - 
commands::projects_wizard::save_wizard_state, - commands::projects_wizard::resume_wizard, - commands::projects_wizard::save_draft, - commands::projects_wizard::load_draft, - commands::projects_lifecycle::list_projects, - commands::projects_lifecycle::pause_project, - commands::projects_lifecycle::resume_project, - commands::projects_lifecycle::archive_project, - commands::projects_lifecycle::get_project_detail, - commands::projects_lifecycle::get_project_stories, - commands::projects_lifecycle::get_guardrails, - commands::projects_lifecycle::get_project_config, - commands::projects::get_project_snapshot, - commands::projects_listing::list_projects_enriched, - commands::projects_lifecycle::get_notification_prefs, - commands::projects_lifecycle::save_notification_prefs, - commands::execution::start_loop, - commands::execution::stop_loop, - commands::execution::session_stats, - commands::execution::get_iteration_history, - commands::ask::ask_question, - commands::ask::ask_history, - commands::ask::stop_ask, - commands::ask::copy_ask_message, - commands::ask::truncate_ask_from, - commands::ask::retry_ask, - ephemeral_query::ephemeral_query, - connections::list_connections, - connections::build_connection_workspace, - ]) + services::attach_runtime_aliases(builder, { + let fallback: fn(tauri::ipc::Invoke) -> bool = tauri::generate_handler![ + agents::detect_agents, + agents::refresh_agents, + agents::check_system_readiness, + agents::get_known_agents, + agents::get_agent_capabilities, + agents::resolve_agent_selection, + commands::display_vocabulary::get_display_vocabulary, + commands::planning::query_plan_status, + commands::planning::resolve_plan_state, + commands::planning::resolve_plan_action, + commands::planning::plan_user_action, + commands::planning::replan, + commands::projects_artifacts::load_existing_plan, + commands::projects_artifacts::load_existing_prd, + commands::projects_artifacts::load_output_log, + commands::projects_artifacts::save_plan, + 
commands::projects_artifacts::save_prd, + commands::projects_artifacts::save_config, + commands::projects_artifacts::load_config, + commands::atomization::run_atomizer, + commands::atomization::get_atomizer_activity_log, + commands::atomization::get_atomizer_pipeline_state, + commands::projects_wizard::save_wizard_state, + commands::projects_wizard::resume_wizard, + commands::projects_wizard::hydrate_wizard, + commands::projects_wizard::save_draft, + commands::projects_wizard::load_draft, + commands::projects_lifecycle::pause_project, + commands::projects_lifecycle::resume_project, + commands::projects_lifecycle::archive_project, + commands::projects_lifecycle::get_project_stories, + commands::projects_lifecycle::get_guardrails, + commands::projects_lifecycle::get_project_config, + commands::projects::get_project_snapshot, + commands::projects_listing::list_projects_enriched, + commands::projects_listing::list_projects_grouped, + commands::projects_lifecycle::get_notification_prefs, + commands::projects_lifecycle::save_notification_prefs, + commands::execution::start_loop, + commands::execution::stop_loop, + commands::execution::get_activity_feed, + commands::ask::ask_question, + commands::ask::ask_history, + commands::ask::stop_ask, + commands::ask::copy_ask_message, + commands::ask::truncate_ask_from, + commands::ask::retry_ask, + ephemeral_query::ephemeral_query, + connections::list_connections, + connections::build_connection_workspace, + commands::wizard_logic::advance_wizard_step, + commands::wizard_logic::get_default_config, + commands::wizard_logic::get_wizard_defaults, + commands::wizard_logic::validate_project_config, + commands::wizard_logic::validate_describe_input, + commands::wizard_logic::validate_launch_readiness, + commands::wizard_logic::add_story, + commands::wizard_logic::update_story, + commands::wizard_logic::remove_story, + commands::wizard_logic::reorder_stories, + commands::wizard_logic::get_stories, + 
commands::wizard_logic::save_wizard_draft, + commands::wizard_logic::complete_describe_step, + commands::wizard_logic::complete_atomize_step, + commands::wizard_logic::complete_configure_step, + commands::wizard_logic::launch_project, + commands::wizard_logic::exit_wizard, + commands::wizard_logic::submit_project_config, + commands::wizard_logic::mark_wizard_stale, + commands::notifications::add_notification, + commands::notifications::get_notifications, + commands::notifications::mark_notification_read, + commands::notifications::mark_all_notifications_read, + commands::notifications::clear_notifications + ]; + move |invoke: tauri::ipc::Invoke| match invoke.message.command() { + "start_plan" => { + commands::planning::__cmd__start_plan!(plan_commands::start_plan, invoke) + } + "write_to_plan" => { + commands::planning::__cmd__write_to_plan!(plan_commands::write_to_plan, invoke) + } + "stop_plan" => commands::planning::__cmd__stop_plan!(plan_commands::stop_plan, invoke), + "run_atomizer" => { + commands::atomization::__cmd__run_atomizer!(atomizer_commands::run_atomizer, invoke) + } + "create_project" => commands::projects_lifecycle::__cmd__create_project!( + project_commands::create_project, + invoke + ), + "finalize_draft" => commands::projects_wizard::__cmd__finalize_draft!( + project_commands::finalize_draft, + invoke + ), + "discard_draft" => commands::projects_wizard::__cmd__discard_draft!( + project_commands::discard_draft, + invoke + ), + "list_projects" => commands::projects_lifecycle::__cmd__list_projects!( + project_commands::list_projects, + invoke + ), + "get_project_detail" => commands::projects_lifecycle::__cmd__get_project_detail!( + project_commands::get_project_detail, + invoke + ), + _ => fallback(invoke), + } + }) } #[cfg(test)] pub fn attach_app(builder: tauri::Builder) -> tauri::Builder { - builder.invoke_handler(tauri::generate_handler![ - agents::detect_agents, - agents::refresh_agents, - agents::get_agent_capabilities, - 
crate::invoke_contract::start_plan, - commands::planning::write_to_plan, - commands::planning::stop_plan, - crate::invoke_contract::query_plan_status, - crate::invoke_contract::load_existing_plan, - commands::projects_artifacts::load_existing_prd, - commands::projects_artifacts::load_output_log, - crate::invoke_contract::save_plan, - commands::projects_artifacts::save_prd, - crate::invoke_contract::save_config, - commands::projects_artifacts::load_config, - crate::invoke_contract::run_atomizer, - crate::invoke_contract::create_project, - crate::invoke_contract::finalize_draft, - commands::projects_wizard::discard_draft, - commands::projects_wizard::save_wizard_state, - commands::projects_wizard::resume_wizard, - crate::invoke_contract::save_draft, - crate::invoke_contract::load_draft, - commands::projects_lifecycle::list_projects, - commands::projects_lifecycle::pause_project, - commands::projects_lifecycle::resume_project, - commands::projects_lifecycle::archive_project, - crate::invoke_contract::get_project_detail, - commands::projects_lifecycle::get_project_stories, - commands::projects_lifecycle::get_guardrails, - commands::projects_lifecycle::get_project_config, - commands::projects::get_project_snapshot, - commands::projects_listing::list_projects_enriched, - commands::projects_lifecycle::get_notification_prefs, - commands::projects_lifecycle::save_notification_prefs, - crate::invoke_contract::start_loop, - commands::execution::stop_loop, - commands::execution::session_stats, - commands::execution::get_iteration_history, - commands::ask::ask_question, - commands::ask::ask_history, - commands::ask::stop_ask, - commands::ask::copy_ask_message, - commands::ask::truncate_ask_from, - commands::ask::retry_ask, - ephemeral_query::ephemeral_query, - connections::list_connections, - connections::build_connection_workspace, - ]) + services::attach_runtime_aliases(builder, { + let fallback: fn(tauri::ipc::Invoke) -> bool = tauri::generate_handler![ + 
agents::detect_agents, + agents::refresh_agents, + agents::check_system_readiness, + agents::get_known_agents, + agents::get_agent_capabilities, + commands::display_vocabulary::get_display_vocabulary, + crate::invoke_contract::query_plan_status, + crate::invoke_contract::load_existing_plan, + commands::projects_artifacts::load_existing_prd, + commands::projects_artifacts::load_output_log, + crate::invoke_contract::save_plan, + commands::projects_artifacts::save_prd, + crate::invoke_contract::save_config, + commands::projects_artifacts::load_config, + crate::invoke_contract::run_atomizer, + commands::atomization::get_atomizer_activity_log, + commands::atomization::get_atomizer_pipeline_state, + crate::invoke_contract::create_project, + crate::invoke_contract::finalize_draft, + commands::projects_wizard::discard_draft, + commands::projects_wizard::save_wizard_state, + commands::projects_wizard::resume_wizard, + commands::projects_wizard::hydrate_wizard, + crate::invoke_contract::save_draft, + crate::invoke_contract::load_draft, + commands::projects_lifecycle::list_projects, + commands::projects_lifecycle::pause_project, + commands::projects_lifecycle::resume_project, + commands::projects_lifecycle::archive_project, + crate::invoke_contract::get_project_detail, + commands::projects_lifecycle::get_project_stories, + commands::projects_lifecycle::get_guardrails, + commands::projects_lifecycle::get_project_config, + commands::projects::get_project_snapshot, + commands::projects_listing::list_projects_enriched, + commands::projects_listing::list_projects_grouped, + commands::projects_lifecycle::get_notification_prefs, + commands::projects_lifecycle::save_notification_prefs, + crate::invoke_contract::start_loop, + commands::execution::stop_loop, + commands::execution::get_activity_feed, + commands::ask::ask_question, + commands::ask::ask_history, + commands::ask::stop_ask, + commands::ask::copy_ask_message, + commands::ask::truncate_ask_from, + commands::ask::retry_ask, + 
ephemeral_query::ephemeral_query, + connections::list_connections, + connections::build_connection_workspace, + commands::wizard_logic::get_wizard_defaults, + commands::wizard_logic::complete_describe_step, + commands::wizard_logic::complete_atomize_step, + commands::wizard_logic::complete_configure_step, + commands::wizard_logic::launch_project, + commands::wizard_logic::exit_wizard, + commands::notifications::add_notification, + commands::notifications::get_notifications, + commands::notifications::mark_notification_read, + commands::notifications::mark_all_notifications_read, + commands::notifications::clear_notifications + ]; + move |invoke: tauri::ipc::Invoke| match invoke.message.command() { + "start_plan" => { + crate::invoke_contract::__cmd__start_plan!(plan_commands::start_plan, invoke) + } + "write_to_plan" => { + commands::planning::__cmd__write_to_plan!(plan_commands::write_to_plan, invoke) + } + "stop_plan" => commands::planning::__cmd__stop_plan!(plan_commands::stop_plan, invoke), + _ => fallback(invoke), + } + }) } #[cfg(test)] pub fn attach_contract(builder: tauri::Builder) -> tauri::Builder { - builder.invoke_handler(tauri::generate_handler![ - crate::invoke_contract::create_project, - crate::invoke_contract::save_draft, - crate::invoke_contract::load_draft, - crate::invoke_contract::finalize_draft, - crate::invoke_contract::start_plan, - commands::planning::write_to_plan, - commands::planning::stop_plan, - crate::invoke_contract::query_plan_status, - crate::invoke_contract::load_existing_plan, - crate::invoke_contract::save_plan, - crate::invoke_contract::save_config, - crate::invoke_contract::run_atomizer, - crate::invoke_contract::start_loop, - commands::execution::get_iteration_history, - crate::invoke_contract::get_project_detail, - commands::projects_lifecycle::archive_project, - ]) + services::attach_runtime_aliases(builder, { + let fallback: fn(tauri::ipc::Invoke) -> bool = tauri::generate_handler![ + crate::invoke_contract::create_project, 
+ crate::invoke_contract::save_draft, + crate::invoke_contract::load_draft, + crate::invoke_contract::finalize_draft, + crate::invoke_contract::query_plan_status, + crate::invoke_contract::load_existing_plan, + crate::invoke_contract::save_plan, + crate::invoke_contract::save_config, + crate::invoke_contract::run_atomizer, + crate::invoke_contract::start_loop, + crate::invoke_contract::get_project_detail, + commands::projects_lifecycle::archive_project, + commands::execution::get_iteration_history ]; + move |invoke: tauri::ipc::Invoke| match invoke.message.command() { + "start_plan" => { + crate::invoke_contract::__cmd__start_plan!(plan_commands::start_plan, invoke) + } + "write_to_plan" => { + commands::planning::__cmd__write_to_plan!(plan_commands::write_to_plan, invoke) + } + "stop_plan" => commands::planning::__cmd__stop_plan!(plan_commands::stop_plan, invoke), + _ => fallback(invoke), + } + }) +} +#[cfg(not(test))] +mod atomizer_commands { + use super::app_core_atomizer; + use crate::atomizer::{AtomizeArgs, AtomizeProgress, AtomizerError}; + use ralph_core::prd::Prd; + use tauri::{AppHandle, Emitter}; + pub async fn run_atomizer(app: AppHandle, args: AtomizeArgs) -> Result<Prd, AtomizerError> { + let normalized_args = AtomizeArgs { + project_id: required(args.project_id, "project_id").map_err(AtomizerError::Path)?, + project_name: required(args.project_name, "project_name") + .map_err(AtomizerError::Path)?, + project_dir: args.project_dir, + agent: required(args.agent, "agent").map_err(AtomizerError::Path)?, + model: optional(args.model), + effort: optional(args.effort), + }; + let run_result = app_core_atomizer::run_atomizer( + app_core_atomizer::AtomizerRequest { + project_id: normalized_args.project_id.clone(), + }, + |_| crate::atomizer::run_atomizer(app.clone(), normalized_args), + |prd| prd.stories.len(), + ) + .await?; + for event in &run_result.events { + let payload = event.as_progress_payload(); + let _ = app.emit( + "atomization-progress", + AtomizeProgress {
stage: payload.stage, + stage_name: payload.stage_name, + message: payload.message, + project_id: payload.project_id, + elapsed_ms: 0, + }, + ); + } + Ok(run_result.output) + } + fn required(value: String, name: &str) -> Result<String, String> { + let trimmed = value.trim().to_string(); + if trimmed.is_empty() { + return Err(format!("{name} is required")); + } + Ok(trimmed) + } + fn optional(value: Option<String>) -> Option<String> { + value + .map(|value| value.trim().to_string()) + .filter(|value| !value.is_empty()) + } +} + +mod plan_commands { + use crate::app_core_plan; + use crate::plan_engine::{PlanEngineError, PlanSessionsState, StartPlanArgs}; + use std::time::Instant; + use tauri::{AppHandle, Runtime, State}; + + pub async fn start_plan<R: Runtime>( + app: AppHandle<R>, + state: State<'_, PlanSessionsState>, + args: StartPlanArgs, + ) -> Result<(), PlanEngineError> { + let args = StartPlanArgs { + project_id: required(args.project_id, "project_id").map_err(PlanEngineError::Path)?, + project_dir: args.project_dir, + agent: required(args.agent, "agent").map_err(PlanEngineError::Path)?, + model: optional(args.model), + effort: optional(args.effort), + initial_prompt: required(args.initial_prompt, "initial_prompt") + .map_err(PlanEngineError::Path)?, + }; + app_core_plan::start_plan(args.project_id.clone(), |_| { + crate::plan_engine::start_plan(app, state, args) + }) + .await + } + + pub async fn write_to_plan( + state: State<'_, PlanSessionsState>, + project_id: String, + input: String, + ) -> Result<(), PlanEngineError> { + let project_id = required(project_id, "project_id").map_err(PlanEngineError::Path)?; + let activity_at = std::sync::Arc::new(std::sync::Mutex::new(None::<Instant>)); + let touch_project_id = project_id.clone(); + app_core_plan::write_to_plan( + project_id, + input, + |project_id, payload| { + let mut sessions = state.0.lock().map_err(|_| PlanEngineError::LockPoisoned)?; + let entry = sessions + .sessions + .get_mut(&project_id) + .ok_or_else(||
PlanEngineError::NoSession(project_id.clone()))?; + entry + .handle + .write(&payload) + .map_err(PlanEngineError::Shell)?; + if let Ok(mut activity) = activity_at.lock() { + *activity = Some(Instant::now()); + } + Ok(()) + }, + || { + if let Some(timestamp) = activity_at.lock().ok().and_then(|guard| *guard) { + if let Ok(mut sessions) = state.0.lock() { + if let Some(entry) = sessions.sessions.get_mut(&touch_project_id) { + if let Ok(mut last_activity) = entry.last_activity_at.lock() { + *last_activity = timestamp; + } + } + } + } + }, + ) + } + + pub async fn stop_plan( + state: State<'_, PlanSessionsState>, + project_id: String, + ) -> Result<(), PlanEngineError> { + let project_id = required(project_id, "project_id").map_err(PlanEngineError::Path)?; + app_core_plan::stop_plan(project_id, |project_id, reason| { + app_core_plan::cleanup_session( + &project_id, + reason, + |project_id| { + let mut sessions = state.0.lock().map_err(|_| PlanEngineError::LockPoisoned)?; + Ok(sessions.sessions.remove(project_id)) + }, + |entry| entry.handle.kill().map_err(PlanEngineError::Shell), + ) + .map(|_| ()) + }) + } + + fn required(value: String, field_name: &str) -> Result<String, String> { + let normalized = value.trim(); + if normalized.is_empty() { + return Err(format!("Missing required field: {field_name}")); + } + Ok(normalized.to_string()) + } + + fn optional(value: Option<String>) -> Option<String> { + value + .map(|content| content.trim().to_string()) + .filter(|content| !content.is_empty()) + } +} + +#[cfg(not(test))] +mod project_commands { + use super::app_core_projects; + use crate::projects::artifacts::{artifact_dir, init_artifacts}; + use crate::projects::repository::{group_projects_by_status, row_to_project, PROJECT_COLUMNS}; + use crate::projects::{Project, ProjectDetail, ProjectError, ProjectsByStatus}; + use crate::storage::db::DbState; + use ralph_core::prd::{Prd, UserStory}; + use tauri::{AppHandle, State}; + use uuid::Uuid; + + impl app_core_projects::StoryState for UserStory { + fn
passes(&self) -> bool { + self.passes + } + fn blocked(&self) -> bool { + self.blocked + } + } + + pub async fn create_project( + app: AppHandle, + db: State<'_, DbState>, + name: String, + description: String, + working_directory: String, + wizard_step: Option<String>, + ) -> Result<Project, ProjectError> { + let request = app_core_projects::CreateProjectRequest { + name: required(name, "name").map_err(ProjectError::Path)?, + description: required(description, "description").map_err(ProjectError::Path)?, + working_directory: required(working_directory, "working_directory") + .map_err(ProjectError::Path)?, + wizard_step: optional(wizard_step), + }; + app_core_projects::create_project( + request, + || Uuid::new_v4().to_string(), + || chrono::Utc::now().to_rfc3339(), + |project_id, project_name| { + let dir = artifact_dir(&app, project_id)?; + init_artifacts(&dir, project_name) + }, + |record| { + let project = Project { + id: record.id.clone(), + name: record.name.clone(), + description: record.description.clone(), + status: record.status.clone(), + working_directory: record.working_directory.clone(), + created_at: record.created_at.clone(), + updated_at: record.updated_at.clone(), + wizard_step: record.wizard_step.clone(), + }; + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + conn.execute("INSERT INTO projects (id, name, description, status, working_directory, created_at, updated_at, wizard_step) VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8)", rusqlite::params![record.id, record.name, record.description, record.status, record.working_directory, record.created_at, record.updated_at, record.wizard_step])?; + Ok(project) + }, + ) + } + + pub async fn finalize_draft( + app: AppHandle, + db: State<'_, DbState>, + project_id: String, + ) -> Result<(), ProjectError> { + let project_id = required(project_id, "project_id").map_err(ProjectError::Path)?; + app_core_projects::finalize_draft( + project_id, + || chrono::Utc::now().to_rfc3339(), + |project_id, updated_at|
{ + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + conn.execute("UPDATE projects SET wizard_step = NULL, wizard_state_json = NULL, updated_at = ?1 WHERE id = ?2", rusqlite::params![updated_at, project_id])?; + Ok(()) + }, + |project_id| { + let draft_path = artifact_dir(&app, project_id)?.join("draft.json"); + if draft_path.exists() { + let _ = std::fs::remove_file(draft_path); + } + Ok(()) + }, + ) + } + + pub async fn discard_draft( + app: AppHandle, + db: State<'_, DbState>, + project_id: String, + ) -> Result<(), ProjectError> { + let project_id = required(project_id, "project_id").map_err(ProjectError::Path)?; + app_core_projects::discard_draft( + project_id, + |project_id| { + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + Ok(conn.execute( + "DELETE FROM projects WHERE id = ?1 AND status = 'draft'", + rusqlite::params![project_id], + )? != 0) + }, + |project_id| { + let dir = artifact_dir(&app, project_id)?; + if dir.exists() { + let _ = std::fs::remove_dir_all(&dir); + } + Ok(()) + }, + ProjectError::NotFound, + ) + } + + pub async fn list_projects(db: State<'_, DbState>) -> Result<ProjectsByStatus, ProjectError> { + app_core_projects::list_projects(|| { + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let query = format!("SELECT {PROJECT_COLUMNS} FROM projects ORDER BY updated_at DESC"); + let mut stmt = conn.prepare(&query)?; + let projects: Vec<Project> = stmt + .query_map([], row_to_project)?
+ .filter_map(Result::ok) + .collect(); + Ok(group_projects_by_status(projects)) + }) + } + + pub async fn get_project_detail( + app: AppHandle, + db: State<'_, DbState>, + project_id: String, + ) -> Result<ProjectDetail, ProjectError> { + let project_id = required(project_id, "project_id").map_err(ProjectError::Path)?; + let detail = app_core_projects::get_project_detail( + project_id, + |project_id| { + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let query = format!("SELECT {PROJECT_COLUMNS} FROM projects WHERE id = ?1"); + let mut stmt = conn.prepare(&query)?; + stmt.query_row(rusqlite::params![project_id], row_to_project) + .map_err(|_| ProjectError::NotFound(project_id.to_string())) + }, + |project_id| { + let prd_path = artifact_dir(&app, project_id)?.join("prd.json"); + Ok(if prd_path.exists() { + Prd::load(&prd_path) + .map(|prd| prd.stories) + .unwrap_or_default() + } else { + Vec::new() + }) + }, + )?; + Ok(ProjectDetail { + project: detail.project, + total_stories: detail.total_stories, + passed_count: detail.passed_count, + blocked_count: detail.blocked_count, + pending_count: detail.pending_count, + stories: detail.stories, + }) + } + + fn required(value: String, name: &str) -> Result<String, String> { + let trimmed = value.trim().to_string(); + if trimmed.is_empty() { + return Err(format!("{name} is required")); + } + Ok(trimmed) + } + + fn optional(value: Option<String>) -> Option<String> { + value + .map(|value| value.trim().to_string()) + .filter(|value| !value.is_empty()) + } } diff --git a/src-tauri/src/invoke_contract.rs b/src-tauri/src/invoke_contract.rs index b1476a3..726813a 100644 --- a/src-tauri/src/invoke_contract.rs +++ b/src-tauri/src/invoke_contract.rs @@ -2,7 +2,8 @@ use crate::atomizer::{AtomizeArgs, AtomizerError}; use crate::db::DbState; use crate::loop_manager::{LoopError, StartLoopArgs}; use crate::plan_engine::{PlanEngineError, PlanSessionsState, StartPlanArgs}; -use crate::projects::{Project, ProjectDetail, ProjectError}; +use
crate::projects::ProjectError; +use crate::services::{self, InvokeProjectDetail, InvokeProjectRecord}; use ralph_core::prd::Prd; use tauri::{AppHandle, Runtime, State}; @@ -14,9 +15,10 @@ pub async fn create_project( description: String, working_directory: String, wizard_step: Option<String>, -) -> Result<Project, ProjectError> { +) -> Result<InvokeProjectRecord, ProjectError> { let normalized_name = required(name, "name").map_err(ProjectError::Path)?; - let normalized_description = required(description, "description").map_err(ProjectError::Path)?; + let normalized_description = + required(description, "description").map_err(ProjectError::Path)?; let normalized_directory = required(working_directory, "working_directory").map_err(ProjectError::Path)?; crate::projects::catalog::create_project( @@ -28,6 +30,7 @@ optional(wizard_step), ) .await + .map(services::into_project_record) } #[tauri::command] @@ -36,8 +39,11 @@ pub async fn save_draft( project_id: String, draft_json: String, ) -> Result<(), ProjectError> { - let project_id = required(project_id, "project_id").map_err(ProjectError::Path)?; - crate::projects::wizard::save_draft(app, project_id, draft_json).await + let command = services::save_draft_command( + required(project_id, "project_id").map_err(ProjectError::Path)?, + draft_json, + ); + crate::projects::wizard::save_draft(app, command.project_id, command.draft_json).await } #[tauri::command] @@ -83,9 +89,9 @@ pub async fn query_plan_status( project_id: String, ) -> Result<Option<serde_json::Value>, PlanEngineError> { let project_id = required(project_id, "project_id").map_err(PlanEngineError::Path)?; - crate::plan_engine::query_plan_status(state, project_id).await.map(|value| { - value.map(|info| serde_json::to_value(info).expect("plan session info")) - }) + crate::plan_engine::query_plan_status(state, project_id) + .await + .map(|value| value.map(|info| serde_json::to_value(info).expect("plan session info"))) } #[tauri::command] @@ -152,6 +158,9 @@ pub async fn start_loop( cooldown_seconds: args.cooldown_seconds,
test_command: optional(args.test_command), max_verification_retries: args.max_verification_retries, + scm_provider: args.scm_provider, + review_polling_interval: args.review_polling_interval, + review_timeout: args.review_timeout, }; crate::loop_manager::start_loop(app, args).await } @@ -161,9 +170,11 @@ pub async fn get_project_detail( app: AppHandle, db: State<'_, DbState>, project_id: String, -) -> Result<ProjectDetail, ProjectError> { +) -> Result<InvokeProjectDetail, ProjectError> { let project_id = required(project_id, "project_id").map_err(ProjectError::Path)?; - crate::projects::catalog::get_project_detail(app, db, project_id).await + crate::projects::catalog::get_project_detail(app, db, project_id) + .await + .map(services::into_project_detail) } fn required(value: String, name: &str) -> Result<String, String> { diff --git a/src-tauri/src/lib.rs b/src-tauri/src/lib.rs index 32b1c59..736d88f 100644 --- a/src-tauri/src/lib.rs +++ b/src-tauri/src/lib.rs @@ -3,6 +3,8 @@ mod agent_profiles; mod agent_runtime; mod agent_runtime_env; mod agents; +#[path = "../../crates/loopforge-app-core/src/plan.rs"] +mod app_core_plan; mod ask_engine; mod atomizer; mod commands; @@ -15,7 +17,11 @@ mod loop_manager; mod models; mod plan_engine; mod projects; +mod services; +mod shell_resolve; mod storage; +#[cfg(test)] +mod test_env_lock; mod test_support; mod tray; @@ -49,6 +55,36 @@ mod tests; use db::DbState; use tauri::{Manager, RunEvent, WindowEvent}; +fn cleanup_plan_session( + state: &plan_engine::PlanSessionsState, + project_id: &str, + reason: app_core_plan::PlanCleanupReason, +) -> Result { + app_core_plan::cleanup_session( + project_id, + reason, + |project_id| { + let mut sessions = state.0.lock().map_err(|_| "lock poisoned".to_string())?; + Ok(sessions.sessions.remove(project_id)) + }, + |entry| entry.handle.kill(), + ) +} + +fn cleanup_all_plan_sessions( + state: &plan_engine::PlanSessionsState, + reason: app_core_plan::PlanCleanupReason, +) { + let project_ids = state + .0 + .lock() + .map(|sessions|
sessions.sessions.keys().cloned().collect::<Vec<String>>()) + .unwrap_or_default(); + let _ = app_core_plan::cleanup_sessions(project_ids, reason, |project_id, reason| { + cleanup_plan_session(state, project_id, reason) + }); +} + #[cfg_attr(mobile, tauri::mobile_entry_point)] pub fn run() { agent_runtime_env::ensure_full_path_env(); @@ -63,17 +99,15 @@ pub fn run() { .manage(plan_engine::PlanSessionsState::default()) .manage(loop_manager::LoopManagerState::default()) .manage(ask_engine::AskSessionsState::default()) + .manage(atomizer::ActivityLogState::default()) + .manage(atomizer::PipelineRegistryState::default()) .on_window_event(|window, event| { if let WindowEvent::CloseRequested { api, .. } = event { let plan_state = window.state::<plan_engine::PlanSessionsState>(); - if let Ok(mut sessions) = plan_state.0.lock() { - let ids: Vec<String> = sessions.sessions.keys().cloned().collect(); - for plan_id in ids { - if let Some(entry) = sessions.sessions.remove(&plan_id) { - let _ = entry.handle.kill(); - } - } - } + cleanup_all_plan_sessions( + &plan_state, + app_core_plan::PlanCleanupReason::WindowClose, + ); let ask_state = window.state::<ask_engine::AskSessionsState>(); ask_state.kill_all(); @@ -135,12 +169,17 @@ pub fn run() { if let RunEvent::ExitRequested { ..
} = event { let db = app_handle.state::<DbState>(); let loop_state = app_handle.state::<loop_manager::LoopManagerState>(); + let plan_state = app_handle.state::<plan_engine::PlanSessionsState>(); if let Ok(handles) = loop_state.0.lock() { for handle in handles.values() { let args_json = serde_json::to_string(&handle.args).unwrap_or_default(); let _ = db.save_loop_state(&handle.args.project_id, &args_json); } } + cleanup_all_plan_sessions( + &plan_state, + app_core_plan::PlanCleanupReason::RestartRecovery, + ); loop_state.shutdown_all(); } }); diff --git a/src-tauri/src/loop_manager/args.rs b/src-tauri/src/loop_manager/args.rs index 2a434ea..06ab147 100644 --- a/src-tauri/src/loop_manager/args.rs +++ b/src-tauri/src/loop_manager/args.rs @@ -26,4 +26,10 @@ pub struct StartLoopArgs { pub test_command: Option<String>, #[serde(default)] pub max_verification_retries: Option<u32>, + #[serde(default)] + pub scm_provider: Option<String>, + #[serde(default)] + pub review_polling_interval: Option<u64>, + #[serde(default)] + pub review_timeout: Option<u64>, } diff --git a/src-tauri/src/loop_manager/event_sink.rs b/src-tauri/src/loop_manager/event_sink.rs index 6dc471e..25e2078 100644 --- a/src-tauri/src/loop_manager/event_sink.rs +++ b/src-tauri/src/loop_manager/event_sink.rs @@ -1,14 +1,16 @@ +use crate::projects::notification_filter::should_emit_verification_failed; +use crate::projects::notifications::{create_notification_and_emit, NotificationCreateInput}; use ralph_core::events::{LoopEvent, LoopEventSink}; -use tauri::{AppHandle, Emitter}; +use tauri::{AppHandle, Emitter, Runtime}; -pub struct TauriEventSink { - app: AppHandle, +pub struct TauriEventSink<R: Runtime> { + app: AppHandle<R>, project_id: String, session_id: String, } -impl TauriEventSink { - pub fn new(app: AppHandle, project_id: String, session_id: String) -> Self { +impl<R: Runtime> TauriEventSink<R> { + pub fn new(app: AppHandle<R>, project_id: String, session_id: String) -> Self { Self { app, project_id, @@ -17,10 +19,14 @@ impl TauriEventSink { } } -impl LoopEventSink for TauriEventSink { +impl<R: Runtime> LoopEventSink for TauriEventSink<R> { fn
emit(&self, event: LoopEvent) { let (event_name, payload) = match &event { - LoopEvent::Heartbeat { elapsed_secs, total_secs, context } => ( + LoopEvent::Heartbeat { + elapsed_secs, + total_secs, + context, + } => ( crate::events::EVENT_HEARTBEAT, serde_json::json!({ "projectId": self.project_id, @@ -30,7 +36,11 @@ impl LoopEventSink for TauriEventSink { "context": context, }), ), - LoopEvent::VerificationStarted { story_id, attempt, max_attempts } => ( + LoopEvent::VerificationStarted { + story_id, + attempt, + max_attempts, + } => ( crate::events::EVENT_VERIFICATION_STARTED, serde_json::json!({ "projectId": self.project_id, @@ -40,7 +50,12 @@ impl LoopEventSink for TauriEventSink { "maxAttempts": max_attempts, }), ), - LoopEvent::VerificationFailed { story_id, attempt, error_count, circuit_breaker } => ( + LoopEvent::VerificationFailed { + story_id, + attempt, + error_count, + circuit_breaker, + } => ( crate::events::EVENT_VERIFICATION_FAILED, serde_json::json!({ "projectId": self.project_id, @@ -60,7 +75,12 @@ impl LoopEventSink for TauriEventSink { "attempt": attempt, }), ), - LoopEvent::PromptBuilt { story_id, size_bytes, hash, truncated } => ( + LoopEvent::PromptBuilt { + story_id, + size_bytes, + hash, + truncated, + } => ( crate::events::EVENT_PROMPT_BUILT, serde_json::json!({ "projectId": self.project_id, @@ -92,5 +112,47 @@ impl LoopEventSink for TauriEventSink { }; let _ = self.app.emit(event_name, payload); + match event { + LoopEvent::VerificationFailed { + story_id, + attempt, + error_count, + circuit_breaker, + } => { + if should_emit_verification_failed(attempt, circuit_breaker) { + let title = if circuit_breaker { + "Circuit breaker triggered".to_string() + } else { + "Verification failed".to_string() + }; + let message = if circuit_breaker { + format!("{story_id}: same error repeated") + } else { + format!("{story_id}: attempt {attempt} failed ({error_count} errors)") + }; + let _ = create_notification_and_emit( + &self.app, + 
NotificationCreateInput { + project_id: self.project_id.clone(), + notification_type: "loop_error".to_string(), + title, + message, + }, + ); + } + } + LoopEvent::StorySkipped { story_id, reason } => { + let _ = create_notification_and_emit( + &self.app, + NotificationCreateInput { + project_id: self.project_id.clone(), + notification_type: "story_blocked".to_string(), + title: "Story skipped".to_string(), + message: format!("{story_id}: {reason}"), + }, + ); + } + _ => {} + } } } diff --git a/src-tauri/src/loop_manager/helpers.rs b/src-tauri/src/loop_manager/helpers.rs index 20bd999..cc101c7 100644 --- a/src-tauri/src/loop_manager/helpers.rs +++ b/src-tauri/src/loop_manager/helpers.rs @@ -1,8 +1,10 @@ use super::{LoopError, StartLoopArgs}; use crate::db::DbState; +use crate::projects::documents; +use crate::projects::ProjectError; use ralph_core::config::RalphConfig; use std::path::{Path, PathBuf}; -use tauri::AppHandle; +use tauri::{AppHandle, Runtime}; use uuid::Uuid; pub(super) fn build_ralph_config( @@ -31,7 +33,7 @@ pub(super) fn build_ralph_config( config.tuning.gutter_threshold = gutter; } if let Some(cooldown) = args.cooldown_seconds { - config.tuning.cooldown_secs = cooldown as u64; + config.tuning.cooldown_secs = u64::from(cooldown); } if let Some(ref test_cmd) = args.test_command { config.tuning.test_command = Some(test_cmd.clone()); @@ -52,7 +54,10 @@ pub(super) fn ensure_execution_prompt( .unwrap_or(true); if needs_default { - std::fs::write(&prompt_path, default_execution_prompt(project_name, artifact_dir))?; + std::fs::write( + &prompt_path, + default_execution_prompt(project_name, artifact_dir), + )?; } Ok(()) @@ -73,24 +78,28 @@ When a story passes verification, update only this artifact PRD and preserve exi pub(super) fn create_session(db: &DbState, project_id: &str) -> Result<String, LoopError> { let session_id = Uuid::new_v4().to_string(); let now = chrono::Utc::now().to_rfc3339(); - let conn = db.0.lock().map_err(|_| LoopError::LockPoisoned)?; - conn.execute( -
"INSERT INTO sessions (id, project_id, started_at) VALUES (?1, ?2, ?3)", - rusqlite::params![session_id, project_id, now], - )?; + documents::insert_session(db, project_id, &session_id, &now).map_err(project_error)?; Ok(session_id) } pub(super) fn close_session(db: &DbState, session_id: &str) { let now = chrono::Utc::now().to_rfc3339(); - if let Ok(conn) = db.0.lock() { - let _ = conn.execute( - "UPDATE sessions SET ended_at = ?1 WHERE id = ?2", - rusqlite::params![now, session_id], - ); - } + let _ = documents::close_session(db, session_id, &now).map_err(project_error); } -pub(super) fn artifact_dir(app: &AppHandle, project_id: &str) -> Result { +pub(super) fn artifact_dir( + app: &AppHandle, + project_id: &str, +) -> Result { crate::storage::artifacts::project_artifact_dir(app, project_id).map_err(LoopError::Path) } + +fn project_error(error: ProjectError) -> LoopError { + match error { + ProjectError::Db(message) => LoopError::Db(message), + ProjectError::Io(source) => LoopError::Io(source), + ProjectError::Json(source) => LoopError::Internal(source.to_string()), + ProjectError::NotFound(project_id) => LoopError::Path(project_id), + ProjectError::Path(message) => LoopError::Path(message), + } +} diff --git a/src-tauri/src/loop_manager/mod.rs b/src-tauri/src/loop_manager/mod.rs index e374892..a0d918a 100644 --- a/src-tauri/src/loop_manager/mod.rs +++ b/src-tauri/src/loop_manager/mod.rs @@ -4,8 +4,8 @@ mod event_sink; mod helpers; mod provider; mod start; -mod start_resolve; mod start_finalize; +mod start_resolve; mod state; mod stats; mod stop; diff --git a/src-tauri/src/loop_manager/provider/codex.rs b/src-tauri/src/loop_manager/provider/codex.rs index 24143fb..12a1eb6 100644 --- a/src-tauri/src/loop_manager/provider/codex.rs +++ b/src-tauri/src/loop_manager/provider/codex.rs @@ -4,17 +4,17 @@ use ralph_core::providers::AgentResult; use std::io::Write; use std::path::Path; use std::process::Stdio; -use std::sync::Arc; use std::sync::atomic::{AtomicBool, 
Ordering}; -use tauri::Emitter; +use std::sync::Arc; +use tauri::{Emitter, Runtime}; use tokio::io::{AsyncBufReadExt, BufReader}; use tokio::process::Command as TokioCommand; const STDIN_WAIT_LINE: &str = "Reading additional input from stdin..."; const STDIN_WAIT_GRACE_SECS: u64 = 2; -pub(super) async fn run_codex_process( - provider: &ShellProvider, +pub(super) async fn run_codex_process<R: Runtime>( + provider: &ShellProvider<R>, args: &[String], env_vars: &[(String, String)], work_dir: &Path, @@ -99,7 +99,7 @@ pub(super) async fn run_codex_process( Ok(None) => stderr_done = true, Err(err) => return Err(anyhow::anyhow!("Stderr read failed: {err}")), }, - _ = &mut sleep => {} + () = &mut sleep => {} } if stdout_done && stderr_done && child.try_wait().ok().flatten().is_some() { @@ -125,8 +125,8 @@ pub(super) fn has_substantive_output(output_lines: &[String]) -> bool { output_lines.iter().any(|line| !is_stdin_wait_line(line)) } -fn capture_line( - provider: &ShellProvider, +fn capture_line<R: Runtime>( + provider: &ShellProvider<R>, line: &str, stream: &str, last_output: &mut std::time::Instant, diff --git a/src-tauri/src/loop_manager/provider/command_args.rs b/src-tauri/src/loop_manager/provider/command_args.rs index 39c66b8..ec30d5c 100644 --- a/src-tauri/src/loop_manager/provider/command_args.rs +++ b/src-tauri/src/loop_manager/provider/command_args.rs @@ -24,13 +24,13 @@ pub(super) fn agent_cli_args( effort: Option<&str>, ) -> Vec<String> { let selected_model = model - .map(|value| value.trim()) + .map(str::trim) .filter(|value| !value.is_empty()) - .map(|value| value.to_string()); + .map(ToString::to_string); let selected_effort = effort - .map(|value| value.trim()) + .map(str::trim) .filter(|value| !value.is_empty()) - .map(|value| value.to_string()); + .map(ToString::to_string); match agent { "claude" => { let mut args = vec![ diff --git a/src-tauri/src/loop_manager/provider/lifecycle.rs b/src-tauri/src/loop_manager/provider/lifecycle.rs index 50b2d29..5d7dd1a 100644 ---
a/src-tauri/src/loop_manager/provider/lifecycle.rs +++ b/src-tauri/src/loop_manager/provider/lifecycle.rs @@ -1,16 +1,41 @@ use super::ShellProvider; +use crate::db::DbState; +use crate::loop_manager::LoopManagerState; +use crate::projects::notifications::{create_notification_and_emit, NotificationCreateInput}; use ralph_core::providers::{AgentResult, Provider}; use std::path::Path; -use std::sync::Arc; use std::sync::atomic::AtomicBool; -use tauri::Emitter; +use std::sync::Arc; +use tauri::{Emitter, Manager, Runtime}; + +async fn emit_project_state_changed<R: Runtime>(provider: &ShellProvider<R>) { + let snapshot = crate::commands::projects::get_project_snapshot( + provider.app.clone(), + provider.app.state::<DbState>(), + provider.app.state::<LoopManagerState>(), + provider.project_id.clone(), + ) + .await + .ok(); + let payload = if let Some(snapshot) = snapshot { + serde_json::json!({ + "projectId": provider.project_id.clone(), + "snapshot": snapshot, + }) + } else { + serde_json::json!({ "projectId": provider.project_id.clone() }) + }; + let _ = provider + .app + .emit(crate::events::EVENT_PROJECT_STATE_CHANGED, payload); +} -impl Provider for ShellProvider { - fn name(&self) -> &str { +impl<R: Runtime> Provider for ShellProvider<R> { + fn name(&self) -> &'static str { "shell" } - fn model(&self) -> &str { + fn model(&self) -> &'static str { "default" } @@ -27,7 +52,7 @@ impl Provider for ShellProvider { let mut counter = self .iteration_counter .lock() - .unwrap_or_else(|err| err.into_inner()); + .unwrap_or_else(std::sync::PoisonError::into_inner); *counter += 1; *counter }; @@ -68,6 +93,15 @@ impl Provider for ShellProvider { "retryAfter": result.retry_after_message, }), ); + let _ = create_notification_and_emit( + &self.app, + NotificationCreateInput { + project_id: self.project_id.clone(), + notification_type: "rate_limited".to_string(), + title: "Rate limited".to_string(), + message: format!("Agent {agent} hit rate limit."), + }, + ); match self.try_advance_fallback() { None => { @@ -115,6 +149,22 @@ impl
Provider for ShellProvider { "result": fb_outcome, }), ); + let _ = self.app.emit( + crate::events::EVENT_STORIES_UPDATED, + serde_json::json!({ "projectId": self.project_id }), + ); + emit_project_state_changed(self).await; + if fb_outcome == "success" { + let _ = create_notification_and_emit( + &self.app, + NotificationCreateInput { + project_id: self.project_id.clone(), + notification_type: "story_completed".to_string(), + title: "Story completed".to_string(), + message: format!("Story {story_id} passed verification."), + }, + ); + } return Ok(fallback_result); } @@ -146,6 +196,22 @@ impl Provider for ShellProvider { "result": outcome, }), ); + let _ = self.app.emit( + crate::events::EVENT_STORIES_UPDATED, + serde_json::json!({ "projectId": self.project_id }), + ); + emit_project_state_changed(self).await; + if outcome == "success" { + let _ = create_notification_and_emit( + &self.app, + NotificationCreateInput { + project_id: self.project_id.clone(), + notification_type: "story_completed".to_string(), + title: "Story completed".to_string(), + message: format!("Story {story_id} passed verification."), + }, + ); + } Ok(result) } diff --git a/src-tauri/src/loop_manager/provider/mod.rs b/src-tauri/src/loop_manager/provider/mod.rs index 47df214..f6357d9 100644 --- a/src-tauri/src/loop_manager/provider/mod.rs +++ b/src-tauri/src/loop_manager/provider/mod.rs @@ -1,5 +1,5 @@ -mod command_args; mod codex; +mod command_args; mod lifecycle; mod runner; @@ -9,7 +9,7 @@ mod tests; use crate::db::DbState; use serde::Serialize; use std::sync::{Arc, Mutex}; -use tauri::{AppHandle, Manager}; +use tauri::{AppHandle, Manager, Runtime}; use uuid::Uuid; #[derive(Clone, Serialize)] @@ -21,8 +21,8 @@ pub(super) struct AgentOutputLine { pub(super) stream: String, } -pub(super) struct ShellProvider { - pub(super) app: AppHandle, +pub(super) struct ShellProvider<R: Runtime> { + pub(super) app: AppHandle<R>, pub(super) agent_name: Arc<Mutex<String>>, pub(super) primary_agent: String, pub(super) selected_model:
Option<String>, @@ -35,19 +35,18 @@ pub(super) struct ShellProvider { pub(super) iteration_counter: Arc<Mutex<u64>>, } -impl ShellProvider { +impl<R: Runtime> ShellProvider<R> { pub(super) fn current_agent(&self) -> String { self.agent_name .lock() - .map(|guard| guard.clone()) - .unwrap_or_else(|err| err.into_inner().clone()) + .map_or_else(|err| err.into_inner().clone(), |guard| guard.clone()) } pub(super) fn try_advance_fallback(&self) -> Option<String> { let mut index = self .fallback_index .lock() - .unwrap_or_else(|err| err.into_inner()); + .unwrap_or_else(std::sync::PoisonError::into_inner); *index += 1; let agent = self.fallback_agents.get(*index)?.clone(); if let Ok(mut name) = self.agent_name.lock() { diff --git a/src-tauri/src/loop_manager/provider/runner.rs b/src-tauri/src/loop_manager/provider/runner.rs index d5348f7..1533891 100644 --- a/src-tauri/src/loop_manager/provider/runner.rs +++ b/src-tauri/src/loop_manager/provider/runner.rs @@ -4,13 +4,13 @@ use super::{AgentOutputLine, ShellProvider}; use ralph_core::providers::AgentResult; use std::io::Write; use std::path::Path; -use std::sync::Arc; use std::sync::atomic::{AtomicBool, Ordering}; -use tauri::Emitter; -use tauri_plugin_shell::ShellExt; +use std::sync::Arc; +use tauri::{Emitter, Runtime}; use tauri_plugin_shell::process::{CommandChild, CommandEvent as ShellCommandEvent}; +use tauri_plugin_shell::ShellExt; -impl ShellProvider { +impl<R: Runtime> ShellProvider<R> { pub(super) async fn run_with_agent( &self, agent: &str, @@ -45,15 +45,15 @@ impl ShellProvider { .await; } - let (mut rx, proc): (tokio::sync::mpsc::Receiver<ShellCommandEvent>, CommandChild) = - self.app - .shell() - .command(agent) - .args(&args) - .envs(env_vars) - .current_dir(work_dir) - .spawn() - .map_err(|err| anyhow::anyhow!("Spawn failed: {err}"))?; + let (mut rx, proc): (tokio::sync::mpsc::Receiver<ShellCommandEvent>, CommandChild) = self + .app + .shell() + .command(agent) + .args(&args) + .envs(env_vars) + .current_dir(work_dir) + .spawn() + .map_err(|err| anyhow::anyhow!("Spawn failed: {err}"))?; let mut
output_lines: Vec = Vec::new(); let stall_threshold = std::time::Duration::from_secs(stall_timeout_secs); diff --git a/src-tauri/src/loop_manager/provider/tests.rs b/src-tauri/src/loop_manager/provider/tests.rs index fadf14b..3b1a042 100644 --- a/src-tauri/src/loop_manager/provider/tests.rs +++ b/src-tauri/src/loop_manager/provider/tests.rs @@ -4,7 +4,7 @@ use std::path::Path; #[test] fn resolve_agent_binary_path_finds_available_binary() { - let path = resolve_agent_binary_path("zsh"); + let path = resolve_agent_binary_path("bash"); assert!(path.is_some()); } @@ -28,8 +28,13 @@ fn codex_args_include_model_and_effort_flags() { Some("gpt-5.4"), Some("high"), ); - let has_model = args.windows(2).any(|pair| pair.first().map(|v| v.as_str()) == Some("--model") && pair.get(1).map(|v| v.as_str()) == Some("gpt-5.4")); - let has_effort = args.windows(2).any(|pair| pair.first().map(|v| v.as_str()) == Some("--reasoning-effort")); + let has_model = args.windows(2).any(|pair| { + pair.first().map(|v| v.as_str()) == Some("--model") + && pair.get(1).map(|v| v.as_str()) == Some("gpt-5.4") + }); + let has_effort = args + .windows(2) + .any(|pair| pair.first().map(|v| v.as_str()) == Some("--reasoning-effort")); assert!(has_model); assert!(!has_effort, "codex should not receive --reasoning-effort"); } diff --git a/src-tauri/src/loop_manager/start.rs b/src-tauri/src/loop_manager/start.rs index 5ac5328..b8abf42 100644 --- a/src-tauri/src/loop_manager/start.rs +++ b/src-tauri/src/loop_manager/start.rs @@ -1,17 +1,20 @@ use super::event_sink::TauriEventSink; use super::helpers::{artifact_dir, build_ralph_config, create_session, ensure_execution_prompt}; use super::provider::ShellProvider; -use super::start_resolve::{ResolvedStartLoop, resolve_start_loop}; use super::start_finalize::finalize_loop_run; +use super::start_resolve::{resolve_start_loop, ResolvedStartLoop}; use super::{LoopError, LoopHandle, LoopManagerState, StartLoopArgs}; use crate::db::DbState; use ralph_core::loop_engine; 
use std::path::PathBuf; -use std::sync::Arc; use std::sync::atomic::{AtomicBool, Ordering}; -use tauri::{AppHandle, Emitter, Manager}; +use std::sync::Arc; +use tauri::{AppHandle, Emitter, Manager, Runtime}; -pub async fn start_loop(app: AppHandle, args: StartLoopArgs) -> Result { +pub async fn start_loop<R: Runtime>( + app: AppHandle<R>, + args: StartLoopArgs, +) -> Result { let db = app.state::<DbState>(); let loop_state = app.state::<LoopManagerState>(); let resolved = resolve_start_loop(&app, &db, &args).await?; @@ -47,8 +50,8 @@ pub async fn start_loop(app: AppHandle, args: StartLoopArgs) -> Result { -async fn do_start_loop( - app: &AppHandle, +async fn do_start_loop<R: Runtime>( + app: &AppHandle<R>, db: &DbState, _loop_state: &LoopManagerState, resolved: ResolvedStartLoop, @@ -66,6 +69,9 @@ async fn do_start_loop( cooldown_seconds: resolved.cooldown_seconds, test_command: resolved.test_command.clone(), max_verification_retries: resolved.max_verification_retries, + scm_provider: Some(resolved.scm_provider.clone()), + review_polling_interval: Some(resolved.review_polling_interval), + review_timeout: Some(resolved.review_timeout), }; let artifacts = artifact_dir(app, &resolved.project_id)?; let gutter_threshold = resolved.gutter_threshold.unwrap_or(3); @@ -120,14 +126,12 @@ async fn do_start_loop( let session_id_clone = session_id.clone(); let working_dir_clone = resolved.working_directory.clone(); - let event_sink = TauriEventSink::new( - app.clone(), - resolved.project_id.clone(), - session_id.clone(), - ); + let event_sink = + TauriEventSink::new(app.clone(), resolved.project_id.clone(), session_id.clone()); let join_handle = tokio::spawn(async move { - let run_result = loop_engine::run(&config, &provider, shutdown_clone.clone(), &event_sink).await; + let run_result = + loop_engine::run(&config, &provider, shutdown_clone.clone(), &event_sink).await; let outcome = if shutdown_clone.load(Ordering::SeqCst) { if pause_file.exists() { "paused" @@ -157,6 +161,23 @@ async fn do_start_loop( rusqlite::params![now, resolved.project_id], ); } + let snapshot = 
crate::commands::projects::get_project_snapshot( + app.clone(), + app.state::<DbState>(), + app.state::<LoopManagerState>(), + resolved.project_id.clone(), + ) + .await + .ok(); + let payload = if let Some(snapshot) = snapshot { + serde_json::json!({ + "projectId": resolved.project_id, + "snapshot": snapshot, + }) + } else { + serde_json::json!({ "projectId": resolved.project_id }) + }; + let _ = app.emit(crate::events::EVENT_PROJECT_STATE_CHANGED, payload); Ok(( session_id, diff --git a/src-tauri/src/loop_manager/start_finalize.rs b/src-tauri/src/loop_manager/start_finalize.rs index b6b0125..0e0337c 100644 --- a/src-tauri/src/loop_manager/start_finalize.rs +++ b/src-tauri/src/loop_manager/start_finalize.rs @@ -1,15 +1,15 @@ -use super::helpers::{artifact_dir, close_session}; +use super::helpers::close_session; use super::state::LoopManagerState; use crate::db::DbState; -use std::path::PathBuf; -use tauri::{AppHandle, Emitter, Manager}; +use crate::projects::notifications::{create_notification_and_emit, NotificationCreateInput}; +use tauri::{AppHandle, Emitter, Manager, Runtime}; -pub(super) async fn finalize_loop_run( - app: &AppHandle, +pub(super) async fn finalize_loop_run<R: Runtime>( + app: &AppHandle<R>, project_id: &str, - project_name: &str, + _project_name: &str, session_id: &str, - working_directory: &str, + _working_directory: &str, outcome: &str, ) { let db_state = app.state::<DbState>(); @@ -65,6 +65,15 @@ pub(super) async fn finalize_loop_run( "failed" => crate::notifications::notify_loop_error(app, project_name), _ => {} } + let _ = create_notification_and_emit( + app, + NotificationCreateInput { + project_id: project_id.to_string(), + notification_type: "loop_completed".to_string(), + title: "Loop finished".to_string(), + message: "Session completed.".to_string(), + }, + ); let _ = app.emit( crate::events::EVENT_SESSION_ENDED, serde_json::json!({ "projectId": project_id, "sessionId": session_id, "outcome": outcome, }), ); @@ -74,4 +83,25 @@ pub(super) async fn finalize_loop_run( + let _ = app.emit( + crate::events::EVENT_STORIES_UPDATED, + serde_json::json!({
"projectId": project_id }), + ); + let snapshot = crate::commands::projects::get_project_snapshot( + app.clone(), + app.state::(), + app.state::(), + project_id.to_string(), + ) + .await + .ok(); + let payload = if let Some(snapshot) = snapshot { + serde_json::json!({ + "projectId": project_id, + "snapshot": snapshot, + }) + } else { + serde_json::json!({ "projectId": project_id }) + }; + let _ = app.emit(crate::events::EVENT_PROJECT_STATE_CHANGED, payload); } diff --git a/src-tauri/src/loop_manager/start_resolve.rs b/src-tauri/src/loop_manager/start_resolve.rs index 164a102..ee4046c 100644 --- a/src-tauri/src/loop_manager/start_resolve.rs +++ b/src-tauri/src/loop_manager/start_resolve.rs @@ -1,7 +1,7 @@ use super::{LoopError, StartLoopArgs}; use crate::db::DbState; use crate::projects::ProjectConfig; -use tauri::AppHandle; +use tauri::{AppHandle, Runtime}; #[derive(Clone)] pub(super) struct ResolvedStartLoop { @@ -17,12 +17,12 @@ pub(super) struct ResolvedStartLoop { pub(super) cooldown_seconds: Option, pub(super) test_command: Option, pub(super) max_verification_retries: Option, + pub(super) scm_provider: String, + pub(super) review_polling_interval: u64, + pub(super) review_timeout: u64, } -fn read_project_metadata( - db: &DbState, - project_id: &str, -) -> Result<(String, String), LoopError> { +fn read_project_metadata(db: &DbState, project_id: &str) -> Result<(String, String), LoopError> { let conn = db.0.lock().map_err(|_| LoopError::LockPoisoned)?; conn.query_row( "SELECT name, working_directory FROM projects WHERE id = ?1", @@ -57,32 +57,39 @@ fn legacy_config_from_args(args: &StartLoopArgs) -> ProjectConfig { if let Some(max_retries) = args.max_verification_retries { runtime_config.max_verification_retries = max_retries; } + if let Some(ref scm) = args.scm_provider { + runtime_config.scm_provider = scm.clone(); + } + if let Some(interval) = args.review_polling_interval { + runtime_config.review_polling_interval = interval; + } + if let Some(timeout) = 
args.review_timeout { + runtime_config.review_timeout = timeout; + } runtime_config } -pub(super) async fn resolve_start_loop( - app: &AppHandle, +pub(super) async fn resolve_start_loop<R: Runtime>( + app: &AppHandle<R>, db: &DbState, args: &StartLoopArgs, ) -> Result<ResolvedStartLoop, LoopError> { let (db_project_name, db_working_directory) = read_project_metadata(db, &args.project_id)?; - let runtime_config = match crate::projects::runtime_config::get_project_config( - app.clone(), - args.project_id.clone(), - ) - .await - .map_err(|err| LoopError::Db(err.to_string()))? { - Some(config) => config, - None => { - let migrated_config = legacy_config_from_args(args); - crate::projects::runtime_config::save_project_config( - app, - &args.project_id, - &migrated_config, - ) - .map_err(|err| LoopError::Db(err.to_string()))?; - migrated_config - } + let runtime_config = if let Some(config) = + crate::projects::runtime_config::get_project_config(app.clone(), args.project_id.clone()) + .await + .map_err(|err| LoopError::Db(err.to_string()))? 
+ { + config + } else { + let migrated_config = legacy_config_from_args(args); + crate::projects::runtime_config::save_project_config( + app, + &args.project_id, + &migrated_config, + ) + .map_err(|err| LoopError::Db(err.to_string()))?; + migrated_config }; let project_name = db_project_name; let working_directory = db_working_directory; @@ -110,5 +117,8 @@ pub(super) async fn resolve_start_loop( cooldown_seconds: Some(runtime_config.cooldown_seconds), test_command, max_verification_retries: Some(runtime_config.max_verification_retries), + scm_provider: runtime_config.scm_provider.clone(), + review_polling_interval: runtime_config.review_polling_interval, + review_timeout: runtime_config.review_timeout, }) } diff --git a/src-tauri/src/loop_manager/stats.rs b/src-tauri/src/loop_manager/stats.rs index 35c0073..ffabf0e 100644 --- a/src-tauri/src/loop_manager/stats.rs +++ b/src-tauri/src/loop_manager/stats.rs @@ -67,12 +67,12 @@ pub async fn session_stats( ) .unwrap_or(0); let rate_limited: i64 = conn - .query_row( - "SELECT COUNT(*) FROM iterations WHERE session_id = ?1 AND result = 'rate_limited'", - rusqlite::params![sid], - |row| row.get(0), - ) - .unwrap_or(0); + .query_row( + "SELECT COUNT(*) FROM iterations WHERE session_id = ?1 AND result = 'rate_limited'", + rusqlite::params![sid], + |row| row.get(0), + ) + .unwrap_or(0); (total, success, total - success - rate_limited, rate_limited) } else { (0, 0, 0, 0) @@ -115,17 +115,16 @@ pub async fn session_stats( let stories_per_hour = session_started_at .as_deref() .and_then(|started| chrono::DateTime::parse_from_rfc3339(started).ok()) - .map(|started| { - let elapsed_hours = - (chrono::Utc::now() - started.with_timezone(&chrono::Utc)).num_minutes() as f64 - / 60.0; + .map_or(0.0, |started| { + let elapsed_hours = (chrono::Utc::now() - started.with_timezone(&chrono::Utc)) + .num_minutes() as f64 + / 60.0; if elapsed_hours > 0.0 { passed_stories as f64 / elapsed_hours } else { 0.0 } - }) - .unwrap_or(0.0); + }); 
Ok(SessionStats { project_id, diff --git a/src-tauri/src/main.rs b/src-tauri/src/main.rs index ad5fe83..69c3a72 100644 --- a/src-tauri/src/main.rs +++ b/src-tauri/src/main.rs @@ -2,5 +2,5 @@ #![cfg_attr(not(debug_assertions), windows_subsystem = "windows")] fn main() { - app_lib::run(); + app_lib::run(); } diff --git a/src-tauri/src/models.rs b/src-tauri/src/models.rs index a1de595..9a5272f 100644 --- a/src-tauri/src/models.rs +++ b/src-tauri/src/models.rs @@ -18,6 +18,7 @@ impl ProjectStatus { pub fn from_db_status(status: &str) -> Self { match status { "active" => Self::Running, + "running" => Self::Running, "paused" => Self::Paused, "blocked" => Self::Blocked, "failed" => Self::Failed, @@ -27,6 +28,53 @@ impl ProjectStatus { _ => Self::Draft, } } + + pub fn resolve_canonical( + db_status: &str, + has_prd: bool, + has_config: bool, + has_active_session: bool, + ) -> Self { + let base_status = Self::from_db_status(db_status); + + if has_active_session { + return Self::Running; + } + + if matches!(base_status, Self::Running | Self::Paused) { + return Self::Paused; + } + + if matches!( + base_status, + Self::Blocked | Self::Failed | Self::Completed | Self::Archived + ) { + return base_status; + } + + if !has_prd { + return Self::Draft; + } + + if has_config { + Self::Ready + } else { + Self::Draft + } + } + + pub fn as_project_status(&self) -> &'static str { + match self { + Self::Draft => "draft", + Self::Ready => "ready", + Self::Running => "active", + Self::Paused => "paused", + Self::Blocked => "blocked", + Self::Failed => "failed", + Self::Completed => "completed", + Self::Archived => "archived", + } + } } #[derive(Debug, Clone, Serialize, Deserialize, Default)] @@ -114,6 +162,10 @@ pub struct ProjectSnapshot { pub config: Option, #[serde(default)] pub artifact_paths: ArtifactPaths, + #[serde(default)] + pub progress_percent: u32, + #[serde(default)] + pub uptime_label: String, } #[allow(dead_code)] @@ -163,3 +215,32 @@ pub enum LoopEvent { outcome: String, }, 
} + +#[cfg(test)] +mod tests { + use super::ProjectStatus; + + #[test] + fn canonical_project_status_resolution() { + assert_eq!( + ProjectStatus::resolve_canonical("ready", false, true, false), + ProjectStatus::Draft + ); + assert_eq!( + ProjectStatus::resolve_canonical("paused", true, true, false), + ProjectStatus::Paused + ); + assert_eq!( + ProjectStatus::resolve_canonical("draft", true, true, false), + ProjectStatus::Ready + ); + assert_eq!( + ProjectStatus::resolve_canonical("active", true, true, false), + ProjectStatus::Paused + ); + assert_eq!( + ProjectStatus::resolve_canonical("paused", true, false, true), + ProjectStatus::Running + ); + } +} diff --git a/src-tauri/src/plan_engine/args.rs b/src-tauri/src/plan_engine/args.rs index 7b60e8f..f1062a4 100644 --- a/src-tauri/src/plan_engine/args.rs +++ b/src-tauri/src/plan_engine/args.rs @@ -8,13 +8,13 @@ pub(super) fn build_plan_args( effort: Option<&str>, ) -> Vec<String> { let selected_model = model - .map(|value| value.trim()) + .map(str::trim) .filter(|value| !value.is_empty()) - .map(|value| value.to_string()); + .map(ToString::to_string); let selected_effort = effort - .map(|value| value.trim()) + .map(str::trim) .filter(|value| !value.is_empty()) - .map(|value| value.to_string()); + .map(ToString::to_string); match agent { "claude" => { let mut args = vec![ @@ -84,9 +84,6 @@ pub(super) fn build_plan_args( vec![ "agent".to_string(), "--print".to_string(), - "--mode".to_string(), - "plan".to_string(), - "--force".to_string(), "--output-format".to_string(), "text".to_string(), "--model".to_string(), @@ -128,20 +125,3 @@ pub(super) fn is_safe_binary_name(name: &str) -> bool { pub(super) fn needs_null_stdin(agent: &str) -> bool { matches!(agent, "codex" | "gemini" | "opencode") } - -fn shell_quote(value: &str) -> String { - if value.is_empty() { - "''".to_string() - } else { - format!("'{}'", value.replace('\'', "'\"'\"'")) - } -} - -pub(super) fn build_null_stdin_command(binary: &str, args: &[String]) -> String { -
let mut parts = Vec::with_capacity(args.len() + 1); - parts.push(shell_quote(binary)); - for arg in args { - parts.push(shell_quote(arg)); - } - format!("{} < /dev/null", parts.join(" ")) -} diff --git a/src-tauri/src/plan_engine/filters.rs b/src-tauri/src/plan_engine/filters.rs new file mode 100644 index 0000000..fcd6b76 --- /dev/null +++ b/src-tauri/src/plan_engine/filters.rs @@ -0,0 +1,163 @@ +use crate::activity::PlanEventKind; +use crate::plan_engine::payloads::{PlanActivityBatchPayload, PlanActivityPayload}; +use std::collections::HashSet; +use std::sync::OnceLock; + +static EXACT_NOISE: OnceLock<HashSet<&'static str>> = OnceLock::new(); + +fn exact_noise_set() -> &'static HashSet<&'static str> { + EXACT_NOISE.get_or_init(|| { + let mut set = HashSet::new(); + for entry in &[ + "exec", "codex", "--------", "---", "WAIT", "reason", "effort", "found", + ] { + set.insert(*entry); + } + set + }) +} + +const PREFIX_NOISE: &[&str] = &[ + "OpenAI Codex", + "workdir:", + "model:", + "provider:", + "approval:", + "sandbox:", + "reasoning effort:", + "reasoning summaries:", + "session id:", + "user You are a senior software architect.", + "error: unexpected argument", + "tip: to pass", + "Usage: codex", + "For more information", + "Usage:", + "tip:", + "Reading additional input from stdin", + "Warning: no stdin data received", + "If piping from a slow command", +]; + +fn is_noise_line(content: &str) -> bool { + let trimmed = content.trim(); + if trimmed.is_empty() { + return true; + } + if exact_noise_set().contains(trimmed) { + return true; + } + PREFIX_NOISE + .iter() + .any(|prefix| trimmed.starts_with(prefix)) +} + +fn normalize_line(content: &str) -> String { + let trimmed = content.trim(); + if let Some(rest) = trimmed.strip_prefix("codex ") { + return rest.trim().to_string(); + } + if let Some(rest) = trimmed.strip_prefix("user ") { + return rest.trim().to_string(); + } + trimmed.to_string() +} + +pub fn filter_plan_batch(batch: PlanActivityBatchPayload) -> PlanActivityBatchPayload { 
+ let mut last_signature = String::new(); + let filtered_events: Vec<PlanActivityPayload> = batch + .events + .into_iter() + .filter_map(|mut event| { + let normalized = normalize_line(&event.content); + if is_noise_line(&normalized) { + return None; + } + let signature = format!("{:?}:{normalized}", event.kind); + if signature == last_signature { + return None; + } + last_signature = signature; + event.content = normalized; + Some(event) + }) + .collect(); + + let plan_content_delta: String = filtered_events + .iter() + .filter(|evt| evt.kind == PlanEventKind::PlanContent) + .map(|evt| evt.content.as_str()) + .collect::<Vec<&str>>() + .join("\n"); + let plan_content = if batch.plan_content.trim().is_empty() { + plan_content_delta.clone() + } else { + batch.plan_content + }; + + PlanActivityBatchPayload { + project_id: batch.project_id, + events: filtered_events, + plan_content, + plan_content_delta, + } +} + +#[cfg(test)] +mod tests { + use super::*; + + fn make_event(content: &str) -> PlanActivityPayload { + PlanActivityPayload { + project_id: "test".to_string(), + kind: PlanEventKind::PlanContent, + content: content.to_string(), + timestamp: "12:00:00".to_string(), + } + } + + #[test] + fn filters_noise_lines() { + let batch = PlanActivityBatchPayload { + project_id: "test".to_string(), + events: vec![ + make_event("exec"), + make_event("Real plan content"), + make_event("OpenAI Codex v1.0"), + ], + plan_content: String::new(), + plan_content_delta: String::new(), + }; + let filtered = filter_plan_batch(batch); + assert_eq!(filtered.events.len(), 1); + assert_eq!(filtered.events[0].content, "Real plan content"); + } + + #[test] + fn normalizes_codex_prefix() { + let batch = PlanActivityBatchPayload { + project_id: "test".to_string(), + events: vec![make_event("codex Some plan output")], + plan_content: String::new(), + plan_content_delta: String::new(), + }; + let filtered = filter_plan_batch(batch); + assert_eq!(filtered.events[0].content, "Some plan output"); + } + + #[test] + fn 
deduplicates_consecutive() { + let batch = PlanActivityBatchPayload { + project_id: "test".to_string(), + events: vec![ + make_event("Same line"), + make_event("Same line"), + make_event("Different line"), + ], + plan_content: String::new(), + plan_content_delta: String::new(), + }; + let filtered = filter_plan_batch(batch); + assert_eq!(filtered.events.len(), 2); + } +} diff --git a/src-tauri/src/plan_engine/fixture.rs b/src-tauri/src/plan_engine/fixture.rs index 2a8ae7b..a820e22 100644 --- a/src-tauri/src/plan_engine/fixture.rs +++ b/src-tauri/src/plan_engine/fixture.rs @@ -66,11 +66,17 @@ pub(super) async fn start_fixture_plan( plan_bytes: plan_markdown.len(), }); } + let final_content = if plan_markdown.is_empty() { + None + } else { + Some(plan_markdown) + }; let _ = app_handle.emit( EVENT_PLAN_COMPLETE, PlanTerminalPayload { project_id: project_id.clone(), detail: String::new(), + final_content, }, ); } @@ -85,6 +91,7 @@ pub(super) async fn start_fixture_plan( PlanTerminalPayload { project_id: project_id.clone(), detail, + final_content: None, }, ); } diff --git a/src-tauri/src/plan_engine/helpers.rs b/src-tauri/src/plan_engine/helpers.rs index c6e365a..8f61920 100644 --- a/src-tauri/src/plan_engine/helpers.rs +++ b/src-tauri/src/plan_engine/helpers.rs @@ -15,11 +15,11 @@ pub(super) async fn resolve_agent_binary( ))); } - let lookup = format!("command -v {binary}"); + let (shell_program, shell_args) = crate::shell_resolve::resolve_binary_via_shell(binary); let output = app .shell() - .command("/bin/zsh") - .args(["-lc", &lookup]) + .command(&shell_program) + .args(shell_args) .output() .await .map_err(|err| PlanEngineError::Shell(err.to_string()))?; @@ -54,10 +54,33 @@ pub(super) fn artifact_dir( pub(super) fn build_plan_prompt(user_description: &str) -> String { format!( - "You are a senior software architect. Research the codebase and create a detailed \ -implementation plan for the following feature request.\n\n\ -Think through the problem carefully. 
Identify affected files, dependencies, and edge cases.\n\ -Output a structured plan in markdown.\n\n\ + "You are a senior software architect creating an implementation plan. \ +Follow this workflow strictly.\n\n\ +## Phase 1: Understand the request\n\ +Read the feature request below. Identify the core objective, implicit requirements, \ +and constraints. Do not ask clarifying questions. Infer reasonable defaults and \ +document every assumption you make.\n\n\ +## Phase 2: Explore the codebase\n\ +Before proposing any changes, explore the relevant parts of the codebase. \ +Identify existing patterns, conventions, and dependencies. Never propose changes \ +to code you have not read. List the files and modules you examined.\n\n\ +## Phase 3: Design the plan\n\ +Write a structured implementation plan in markdown. Use # headers to separate \ +each area of work. For every section include:\n\ +- Affected files (full relative paths)\n\ +- New files to create (if any)\n\ +- Dependencies on other sections\n\ +- Edge cases and error scenarios\n\ +- Verification criteria (how to confirm the section works)\n\n\ +## Output rules\n\ +- Use clear markdown headings (# for top-level, ## for subsections)\n\ +- Reference file paths explicitly, never use vague references like \"the config file\"\n\ +- Order sections from foundational (data layer, types) to dependent (UI, integration)\n\ +- End with a summary listing all files to modify, all files to create, and \ +the recommended implementation order\n\ +- Do not include code snippets unless they clarify a non-obvious approach\n\ +- Keep the plan actionable: every section should map to one or more implementable units\n\n\ +---\n\n\ Feature request:\n{user_description}" ) } diff --git a/src-tauri/src/plan_engine/mod.rs b/src-tauri/src/plan_engine/mod.rs index 0544eb3..e162786 100644 --- a/src-tauri/src/plan_engine/mod.rs +++ b/src-tauri/src/plan_engine/mod.rs @@ -1,5 +1,6 @@ mod args; mod errors; +pub mod filters; mod fixture; mod 
helpers; mod monitor; @@ -19,7 +20,7 @@ mod tests_fixture_happy; mod tests_fixture_support; pub use errors::PlanEngineError; -pub use sessions::{PlanSessionInfo, PlanSessionsState, StartPlanArgs}; +pub use sessions::{PlanSessionInfo, PlanSessionStatus, PlanSessionsState, StartPlanArgs}; pub use start::start_plan; pub use status::query_plan_status; pub use write::{stop_plan, write_to_plan}; diff --git a/src-tauri/src/plan_engine/monitor.rs b/src-tauri/src/plan_engine/monitor.rs index a0a23db..035f744 100644 --- a/src-tauri/src/plan_engine/monitor.rs +++ b/src-tauri/src/plan_engine/monitor.rs @@ -33,6 +33,7 @@ pub(super) fn spawn_plan_flush_task( pub(super) fn spawn_batch_flush_task( buffer: Arc<Mutex<Vec<PlanActivityPayload>>>, + classifier: Arc<Mutex<ActivityClassifier>>, app: AppHandle, project_id: String, ) -> JoinHandle<()> { @@ -41,7 +42,7 @@ interval.tick().await; loop { interval.tick().await; - flush_event_buffer(&buffer, &app, &project_id); + flush_event_buffer(&buffer, &classifier, &app, &project_id); } }) } @@ -73,6 +74,7 @@ pub(super) fn spawn_heartbeat_task( PlanTerminalPayload { project_id: project_id.clone(), detail: String::new(), + final_content: None, }, ); @@ -108,6 +110,7 @@ pub(super) fn spawn_heartbeat_task( PlanTerminalPayload { project_id: project_id.clone(), detail: "stalled".to_string(), + final_content: None, }, ); if let Ok(mut guard) = sessions.lock() { @@ -129,7 +132,7 @@ pub(super) fn handle_termination( exit_code: i32, tracer: &Option<SessionTracer>, ) { - flush_event_buffer(event_buffer, app, project_id); + flush_event_buffer(event_buffer, classifier, app, project_id); if let Ok(guard) = classifier.lock() { let plan_content = guard.accumulated_plan(); @@ -154,6 +157,7 @@ PlanTerminalPayload { project_id: project_id.to_string(), detail: format!("exit_code={exit_code}"), + final_content: None, }, ); } else if !has_plan { @@ -167,21 +171,29 @@ PlanTerminalPayload { project_id: project_id.to_string(), 
detail: "empty_output".to_string(), + final_content: None, }, ); } else { - let plan_bytes = classifier + let accumulated = classifier .lock() - .map(|g| g.accumulated_plan().len()) - .unwrap_or(0); + .map(|guard| guard.accumulated_plan()) + .unwrap_or_default(); + let plan_bytes = accumulated.len(); if let Some(tracer) = tracer { tracer.log(TraceEvent::PlanComplete { plan_bytes }); } + let final_content = if accumulated.is_empty() { + None + } else { + Some(accumulated) + }; let _ = app.emit( crate::events::EVENT_PLAN_COMPLETE, PlanTerminalPayload { project_id: project_id.to_string(), detail: String::new(), + final_content, }, ); } diff --git a/src-tauri/src/plan_engine/output.rs b/src-tauri/src/plan_engine/output.rs index 90ee6db..2ddb464 100644 --- a/src-tauri/src/plan_engine/output.rs +++ b/src-tauri/src/plan_engine/output.rs @@ -35,6 +35,7 @@ pub(super) fn buffer_text_segments( pub(super) fn flush_event_buffer( buffer: &Arc<Mutex<Vec<PlanActivityPayload>>>, + classifier: &Arc<Mutex<ActivityClassifier>>, app: &AppHandle, project_id: &str, ) { @@ -54,12 +55,19 @@ .map(|evt| evt.content.as_str()) .collect::<Vec<&str>>() .join("\n"); - let _ = app.emit( - crate::events::EVENT_PLAN_ACTIVITY_BATCH, - PlanActivityBatchPayload { - project_id: project_id.to_string(), - events, - plan_content_delta, - }, - ); + let plan_content = classifier + .lock() + .map(|guard| guard.accumulated_plan()) + .unwrap_or_default(); + let raw_batch = PlanActivityBatchPayload { + project_id: project_id.to_string(), + events, + plan_content, + plan_content_delta, + }; + let filtered_batch = crate::plan_engine::filters::filter_plan_batch(raw_batch); + if filtered_batch.events.is_empty() && filtered_batch.plan_content.is_empty() { + return; + } + let _ = app.emit(crate::events::EVENT_PLAN_ACTIVITY_BATCH, filtered_batch); } diff --git a/src-tauri/src/plan_engine/payloads.rs b/src-tauri/src/plan_engine/payloads.rs index 08b9645..8b0ce23 100644 --- a/src-tauri/src/plan_engine/payloads.rs +++ 
b/src-tauri/src/plan_engine/payloads.rs @@ -20,6 +20,8 @@ pub struct PlanActivityPayload { pub struct PlanActivityBatchPayload { pub project_id: String, pub events: Vec<PlanActivityPayload>, + pub plan_content: String, + #[serde(default)] pub plan_content_delta: String, } @@ -28,4 +30,6 @@ pub struct PlanActivityBatchPayload { pub struct PlanTerminalPayload { pub project_id: String, pub detail: String, + #[serde(default, skip_serializing_if = "Option::is_none")] + pub final_content: Option<String>, } diff --git a/src-tauri/src/plan_engine/start.rs b/src-tauri/src/plan_engine/start.rs index ef0fdd9..40c1307 100644 --- a/src-tauri/src/plan_engine/start.rs +++ b/src-tauri/src/plan_engine/start.rs @@ -1,18 +1,158 @@ use crate::activity::ActivityClassifier; -use crate::plan_engine::args::{ - agent_env_vars, build_null_stdin_command, build_plan_args, needs_null_stdin, -}; +use crate::app_core_plan::{self, PlanCleanupReason}; +use crate::plan_engine::args::{agent_env_vars, build_plan_args, needs_null_stdin}; use crate::plan_engine::fixture; use crate::plan_engine::helpers::{artifact_dir, build_plan_prompt, resolve_agent_binary}; -use crate::plan_engine::payloads::PlanActivityPayload; +use crate::plan_engine::payloads::{PlanActivityPayload, PlanTerminalPayload}; use crate::plan_engine::sessions::{PlanSessionEntry, PlanSessionHandle, PlanSessionStatus}; use crate::plan_engine::trace::{SessionTracer, TraceEvent}; use crate::plan_engine::{monitor, output, PlanEngineError, PlanSessionsState, StartPlanArgs}; +use std::path::PathBuf; use std::sync::{Arc, Mutex}; use std::time::Instant; -use tauri::{AppHandle, Runtime, State}; +use tauri::{AppHandle, Emitter, Runtime, State}; use tauri_plugin_shell::process::CommandEvent; use tauri_plugin_shell::ShellExt; +use tokio::task::AbortHandle; + +fn flush_partial_plan( + buffer: &Arc<Mutex<Vec<PlanActivityPayload>>>, + classifier: &Arc<Mutex<ActivityClassifier>>, + plan_path: &PathBuf, + app: &AppHandle, + project_id: &str, +) -> (usize, String) { + output::flush_event_buffer(buffer, classifier, app, project_id); + let 
plan_content = classifier + .lock() + .map(|guard| guard.accumulated_plan()) + .unwrap_or_default(); + if plan_content.is_empty() { + return (0, String::new()); + } + let plan_bytes = plan_content.len(); + let _ = std::fs::write(plan_path, &plan_content); + (plan_bytes, plan_content) +} + +fn emit_terminal( + app: &AppHandle, + project_id: &str, + exit_code: i32, + plan_bytes: usize, + plan_content: Option<String>, + tracer: &Option<SessionTracer>, +) { + if exit_code != 0 { + if let Some(tracer) = tracer { + tracer.log(TraceEvent::ErrorEmitted { + detail: format!("exit_code={exit_code}"), + }); + } + let _ = app.emit( + crate::events::EVENT_PLAN_ERROR, + PlanTerminalPayload { + project_id: project_id.to_string(), + detail: format!("exit_code={exit_code}"), + final_content: None, + }, + ); + return; + } + if plan_bytes == 0 { + if let Some(tracer) = tracer { + tracer.log(TraceEvent::ErrorEmitted { + detail: "empty_output".to_string(), + }); + } + let _ = app.emit( + crate::events::EVENT_PLAN_ERROR, + PlanTerminalPayload { + project_id: project_id.to_string(), + detail: "empty_output".to_string(), + final_content: None, + }, + ); + return; + } + if let Some(tracer) = tracer { + tracer.log(TraceEvent::PlanComplete { plan_bytes }); + } + let _ = app.emit( + crate::events::EVENT_PLAN_COMPLETE, + PlanTerminalPayload { + project_id: project_id.to_string(), + detail: String::new(), + final_content: plan_content, + }, + ); +} + +fn cleanup_hook( + app: AppHandle, + project_id: String, + buffer: Arc<Mutex<Vec<PlanActivityPayload>>>, + classifier: Arc<Mutex<ActivityClassifier>>, + plan_path: PathBuf, + tracer: Option<SessionTracer>, + plan_abort: AbortHandle, + batch_abort: AbortHandle, + heartbeat_abort: AbortHandle, +) -> Arc<dyn Fn(PlanCleanupReason) + Send + Sync> { + Arc::new(move |reason| { + batch_abort.abort(); + plan_abort.abort(); + heartbeat_abort.abort(); + let (plan_bytes, plan_content) = + flush_partial_plan(&buffer, &classifier, &plan_path, &app, &project_id); + if let PlanCleanupReason::ProcessExit { exit_code } = reason { + if let Some(ref tracer) = tracer { + 
tracer.log(TraceEvent::ProcessTerminated { + exit_code, + has_plan_content: plan_bytes != 0, + }); + } + let content_opt = if plan_content.is_empty() { + None + } else { + Some(plan_content) + }; + emit_terminal( + &app, + &project_id, + exit_code, + plan_bytes, + content_opt, + &tracer, + ); + } + }) +} + +fn record_output( + stream: &'static str, + bytes: &[u8], + tracer: &Option<SessionTracer>, + classifier: &Arc<Mutex<ActivityClassifier>>, + event_buffer: &Arc<Mutex<Vec<PlanActivityPayload>>>, + project_id: &str, + last_activity: &Arc<Mutex<Instant>>, +) { + let text = String::from_utf8_lossy(bytes); + if let Some(ref tracer) = tracer { + let lines: Vec<&str> = text.lines().collect(); + tracer.log(TraceEvent::Output { + stream, + byte_len: bytes.len(), + line_count: lines.len(), + first_line_preview: lines + .first() + .map(|line| line.chars().take(120).collect()) + .unwrap_or_default(), + }); + } + output::buffer_text_segments(&text, classifier, event_buffer, project_id, last_activity); +} pub async fn start_plan( app: AppHandle, @@ -31,7 +171,6 @@ args.project_dir.display() ))); } - { let sessions = state.0.lock().map_err(|_| PlanEngineError::LockPoisoned)?; if sessions.sessions.contains_key(&args.project_id) { @@ -41,8 +180,8 @@ if let Some(runtime) = fixture::resolve_test_runtime()? 
{ return fixture::start_fixture_plan(app, state.inner().clone(), args, runtime).await; } - let agent_binary = resolve_agent_binary(&app, &args.agent).await?; + let agent_binary = resolve_agent_binary(&app, &args.agent).await?; let plan_prompt = build_plan_prompt(&args.initial_prompt); let agent_args = build_plan_args( &args.agent, @@ -54,9 +193,7 @@ pub async fn start_plan( let env_vars = agent_env_vars(&args.agent); let artifacts = artifact_dir(&app, &args.project_id)?; std::fs::create_dir_all(&artifacts)?; - let use_null_stdin = needs_null_stdin(&args.agent); - let tracer = SessionTracer::new(&artifacts, &args.project_id); if let Some(ref tracer) = tracer { tracer.log(TraceEvent::SessionStart { @@ -67,12 +204,12 @@ pub async fn start_plan( stall_threshold_secs: crate::plan_engine::payloads::DEFAULT_STALL_THRESHOLD_SECS, }); } - let (mut event_rx, child) = if use_null_stdin { - let wrapped = build_null_stdin_command(&agent_binary, &agent_args); + let (shell_program, shell_args) = + crate::shell_resolve::build_null_stdin_command(&agent_binary, &agent_args); app.shell() - .command("/bin/zsh") - .args(["-lc", &wrapped]) + .command(&shell_program) + .args(shell_args) .envs(env_vars) .current_dir(&args.project_dir) .spawn() @@ -89,10 +226,49 @@ pub async fn start_plan( let now = Instant::now(); let last_activity = Arc::new(Mutex::new(now)); + let classifier = Arc::new(Mutex::new(ActivityClassifier::new(&args.agent))); + let event_buffer = Arc::new(Mutex::new(Vec::new())); + let plan_path = artifacts.join("plan.md"); + let plan_abort = + monitor::spawn_plan_flush_task(Arc::clone(&classifier), plan_path.clone()).abort_handle(); + let batch_abort = monitor::spawn_batch_flush_task( + Arc::clone(&event_buffer), + Arc::clone(&classifier), + app.clone(), + args.project_id.clone(), + ) + .abort_handle(); + let heartbeat_abort = monitor::spawn_heartbeat_task( + app.clone(), + Arc::clone(&last_activity), + Arc::clone(&state.0), + args.project_id.clone(), + args.agent.clone(), + 
now, + tracer.clone(), + ) + .abort_handle(); + app_core_plan::register_cleanup( + &args.project_id, + cleanup_hook( + app.clone(), + args.project_id.clone(), + Arc::clone(&event_buffer), + Arc::clone(&classifier), + plan_path, + tracer.clone(), + plan_abort, + batch_abort, + heartbeat_abort, + ), + ); - { - let mut sessions = state.0.lock().map_err(|_| PlanEngineError::LockPoisoned)?; - sessions.sessions.insert( + state + .0 + .lock() + .map_err(|_| PlanEngineError::LockPoisoned)? + .sessions + .insert( args.project_id.clone(), PlanSessionEntry { handle: PlanSessionHandle::Shell(child), @@ -102,117 +278,48 @@ last_activity_at: Arc::clone(&last_activity), }, ); - } let project_id = args.project_id.clone(); - let artifact_path = artifacts.clone(); - let agent_name = args.agent.clone(); - let sessions_arc = Arc::clone(&state.0); - let app_clone = app.clone(); - let last_activity_clone = Arc::clone(&last_activity); - + let sessions = Arc::clone(&state.0); tokio::spawn(async move { - let classifier = Arc::new(Mutex::new(ActivityClassifier::new(&agent_name))); - let event_buffer: Arc<Mutex<Vec<PlanActivityPayload>>> = Arc::new(Mutex::new(Vec::new())); - let plan_path = artifact_path.join("plan.md"); - - let plan_flush = monitor::spawn_plan_flush_task(Arc::clone(&classifier), plan_path.clone()); - let batch_flush = monitor::spawn_batch_flush_task( - Arc::clone(&event_buffer), - app_clone.clone(), - project_id.clone(), - ); - let heartbeat = monitor::spawn_heartbeat_task( - app_clone.clone(), - Arc::clone(&last_activity_clone), - Arc::clone(&sessions_arc), - project_id.clone(), - agent_name.clone(), - now, - tracer.clone(), - ); - while let Some(event) = event_rx.recv().await { match event { - CommandEvent::Stdout(ref bytes) => { - let text = String::from_utf8_lossy(bytes); - if let Some(ref tracer) = tracer { - let lines: Vec<&str> = text.lines().collect(); - tracer.log(TraceEvent::Output { - stream: "stdout", - byte_len: bytes.len(), - line_count: lines.len(), - 
first_line_preview: lines - .first() - .map(|l| l.chars().take(120).collect()) - .unwrap_or_default(), - }); - } - output::buffer_text_segments( - &text, - &classifier, - &event_buffer, - &project_id, - &last_activity_clone, - ); - } - CommandEvent::Stderr(ref bytes) => { - let text = String::from_utf8_lossy(bytes); - if let Some(ref tracer) = tracer { - let lines: Vec<&str> = text.lines().collect(); - tracer.log(TraceEvent::Output { - stream: "stderr", - byte_len: bytes.len(), - line_count: lines.len(), - first_line_preview: lines - .first() - .map(|l| l.chars().take(120).collect()) - .unwrap_or_default(), - }); - } - output::buffer_text_segments( - &text, - &classifier, - &event_buffer, - &project_id, - &last_activity_clone, - ); - } + CommandEvent::Stdout(ref bytes) => record_output( + "stdout", + bytes, + &tracer, + &classifier, + &event_buffer, + &project_id, + &last_activity, + ), + CommandEvent::Stderr(ref bytes) => record_output( + "stderr", + bytes, + &tracer, + &classifier, + &event_buffer, + &project_id, + &last_activity, + ), CommandEvent::Terminated(payload) => { - let exit_code = payload.code.unwrap_or(1); - if let Some(ref tracer) = tracer { - let has_plan = classifier - .lock() - .map(|g| !g.accumulated_plan().is_empty()) - .unwrap_or(false); - tracer.log(TraceEvent::ProcessTerminated { - exit_code, - has_plan_content: has_plan, - }); - } - monitor::handle_termination( - &event_buffer, - &classifier, - &plan_path, - &app_clone, + let _: Result<_, PlanEngineError> = app_core_plan::cleanup_session( &project_id, - exit_code, - &tracer, + PlanCleanupReason::ProcessExit { + exit_code: payload.code.unwrap_or(1), + }, + |project_id| { + let mut guard = + sessions.lock().map_err(|_| PlanEngineError::LockPoisoned)?; + Ok(guard.sessions.remove(project_id)) + }, + |_| Ok(()), ); break; } _ => {} } } - - batch_flush.abort(); - plan_flush.abort(); - heartbeat.abort(); - - if let Ok(mut sessions) = sessions_arc.lock() { - sessions.sessions.remove(&project_id); - } }); - Ok(()) } 
diff --git a/src-tauri/src/plan_engine/tests_fixture_errors.rs b/src-tauri/src/plan_engine/tests_fixture_errors.rs index 4ac5d8a..ba7f8e9 100644 --- a/src-tauri/src/plan_engine/tests_fixture_errors.rs +++ b/src-tauri/src/plan_engine/tests_fixture_errors.rs @@ -14,7 +14,7 @@ async fn plan_error_fixture_emits_deterministic_error_and_clears_session() { app.listen_any(crate::events::EVENT_PLAN_ERROR, { let errors = Arc::clone(&errors); - move |event| { + move |event: tauri::Event| { errors .lock() .unwrap() @@ -23,7 +23,7 @@ async fn plan_error_fixture_emits_deterministic_error_and_clears_session() { }); app.listen_any(crate::events::EVENT_PLAN_COMPLETE, { let completes = Arc::clone(&completes); - move |event| { + move |event: tauri::Event| { completes .lock() .unwrap() diff --git a/src-tauri/src/plan_engine/tests_fixture_happy.rs b/src-tauri/src/plan_engine/tests_fixture_happy.rs index 939eb5c..691cc11 100644 --- a/src-tauri/src/plan_engine/tests_fixture_happy.rs +++ b/src-tauri/src/plan_engine/tests_fixture_happy.rs @@ -16,7 +16,7 @@ async fn happy_path_fixture_emits_deterministic_plan_batches() { app.listen_any(crate::events::EVENT_PLAN_ACTIVITY_BATCH, { let batches = Arc::clone(&batches); - move |event| { + move |event: tauri::Event| { batches .lock() .unwrap() @@ -25,7 +25,7 @@ async fn happy_path_fixture_emits_deterministic_plan_batches() { }); app.listen_any(crate::events::EVENT_PLAN_COMPLETE, { let completes = Arc::clone(&completes); - move |event| { + move |event: tauri::Event| { completes .lock() .unwrap() @@ -34,7 +34,7 @@ async fn happy_path_fixture_emits_deterministic_plan_batches() { }); app.listen_any(crate::events::EVENT_PLAN_ERROR, { let errors = Arc::clone(&errors); - move |event| { + move |event: tauri::Event| { errors .lock() .unwrap() @@ -128,7 +128,7 @@ async fn stop_plan_cancels_fixture_session_before_terminal_event() { app.listen_any(crate::events::EVENT_PLAN_COMPLETE, { let completes = Arc::clone(&completes); - move |event| { + move |event: 
tauri::Event| { completes .lock() .unwrap() @@ -137,7 +137,7 @@ async fn stop_plan_cancels_fixture_session_before_terminal_event() { }); app.listen_any(crate::events::EVENT_PLAN_ERROR, { let errors = Arc::clone(&errors); - move |event| { + move |event: tauri::Event| { errors .lock() .unwrap() diff --git a/src-tauri/src/plan_engine/tests_fixture_support.rs b/src-tauri/src/plan_engine/tests_fixture_support.rs index c33df51..c2f60c7 100644 --- a/src-tauri/src/plan_engine/tests_fixture_support.rs +++ b/src-tauri/src/plan_engine/tests_fixture_support.rs @@ -1,11 +1,9 @@ use super::PlanSessionsState; use std::path::PathBuf; -use std::sync::{Mutex, MutexGuard}; +use std::sync::MutexGuard; use tauri::test::{mock_builder, mock_context, noop_assets, MockRuntime}; use tauri::App; -static ENV_LOCK: Mutex<()> = Mutex::new(()); - pub(super) struct EnvGuard { _lock: MutexGuard<'static, ()>, root_dir: PathBuf, @@ -14,7 +12,9 @@ pub(super) struct EnvGuard { impl EnvGuard { pub(super) fn new(fixture_set: &str) -> Self { - let lock = ENV_LOCK.lock().unwrap_or_else(|err| err.into_inner()); + let lock = crate::test_env_lock::ENV_LOCK + .lock() + .unwrap_or_else(|err| err.into_inner()); let root_dir = std::env::temp_dir().join(format!("loopforge-plan-fixture-{}", uuid::Uuid::new_v4())); let home_dir = root_dir.join("home"); diff --git a/src-tauri/src/plan_engine/write.rs b/src-tauri/src/plan_engine/write.rs index e147a0a..79cbdba 100644 --- a/src-tauri/src/plan_engine/write.rs +++ b/src-tauri/src/plan_engine/write.rs @@ -18,7 +18,7 @@ pub async fn write_to_plan( entry .handle .write(&payload) - .map_err(|err| PlanEngineError::Shell(err.to_string()))?; + .map_err(|err| PlanEngineError::Shell(err.clone()))?; if let Ok(mut last_activity) = entry.last_activity_at.lock() { *last_activity = Instant::now(); } @@ -35,7 +35,7 @@ pub async fn stop_plan( entry .handle .kill() - .map_err(|err| PlanEngineError::Shell(err.to_string()))?; + .map_err(|err| PlanEngineError::Shell(err.clone()))?; } 
Ok(()) } diff --git a/src-tauri/src/plugin_registry.rs b/src-tauri/src/plugin_registry.rs index 6b8ff31..b6999c4 100644 --- a/src-tauri/src/plugin_registry.rs +++ b/src-tauri/src/plugin_registry.rs @@ -76,9 +76,7 @@ fn builtin_plugins() -> Vec { } #[tauri::command] -pub async fn list_plugins( - db: State<'_, DbState>, -) -> Result, String> { +pub async fn list_plugins(db: State<'_, DbState>) -> Result, String> { let mut entries = builtin_plugins(); let conn = db.0.lock().map_err(|_| "Lock poisoned".to_string())?; diff --git a/src-tauri/src/projects/artifacts.rs b/src-tauri/src/projects/artifacts.rs index f74bb57..42729e9 100644 --- a/src-tauri/src/projects/artifacts.rs +++ b/src-tauri/src/projects/artifacts.rs @@ -1,9 +1,12 @@ use crate::db::DbState; use crate::projects::ProjectError; use std::path::{Path, PathBuf}; -use tauri::{AppHandle, State}; +use tauri::{AppHandle, Runtime, State}; -pub fn artifact_dir(app: &AppHandle, project_id: &str) -> Result<PathBuf, ProjectError> { +pub fn artifact_dir<R: Runtime>( + app: &AppHandle<R>, + project_id: &str, +) -> Result<PathBuf, ProjectError> { crate::storage::artifacts::project_artifact_dir(app, project_id).map_err(ProjectError::Path) } diff --git a/src-tauri/src/projects/catalog.rs b/src-tauri/src/projects/catalog.rs index a81c25d..fabb97b 100644 --- a/src-tauri/src/projects/catalog.rs +++ b/src-tauri/src/projects/catalog.rs @@ -1,13 +1,14 @@ use crate::db::DbState; +use crate::models::ProjectStatus; use crate::projects::artifacts::{artifact_dir, init_artifacts}; use crate::projects::repository::{group_projects_by_status, row_to_project, PROJECT_COLUMNS}; use crate::projects::{Project, ProjectDetail, ProjectError, ProjectsByStatus}; use ralph_core::prd::Prd; -use tauri::{AppHandle, State}; +use tauri::{AppHandle, Runtime, State}; use uuid::Uuid; -pub async fn create_project( - app: AppHandle, +pub async fn create_project<R: Runtime>( + app: AppHandle<R>, db: State<'_, DbState>, name: String, description: String, @@ -20,10 +21,9 @@ let dir = artifact_dir(&app, 
&project_id)?; init_artifacts(&dir, &name)?; - let conn = db - .0 - .lock() - .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; conn.execute( "INSERT INTO projects (id, name, description, status, working_directory, created_at, updated_at, wizard_step) VALUES (?1, ?2, ?3, 'draft', ?4, ?5, ?6, ?7)", @@ -43,17 +43,16 @@ } pub async fn list_projects(db: State<'_, DbState>) -> Result<ProjectsByStatus, ProjectError> { - let conn = db - .0 - .lock() - .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; let query = format!("SELECT {PROJECT_COLUMNS} FROM projects ORDER BY updated_at DESC"); let mut stmt = conn.prepare(&query)?; let projects: Vec<Project> = stmt .query_map([], row_to_project)? - .filter_map(|result| result.ok()) + .filter_map(Result::ok) .collect(); Ok(group_projects_by_status(projects)) @@ -64,10 +63,9 @@ pub async fn archive_project( db: State<'_, DbState>, project_id: String, ) -> Result<(), ProjectError> { let now = chrono::Utc::now().to_rfc3339(); - let conn = db - .0 - .lock() - .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; let updated = conn.execute( "UPDATE projects SET status = 'archived', updated_at = ?1 WHERE id = ?2", rusqlite::params![now, project_id], @@ -78,16 +76,15 @@ Ok(()) } -pub async fn get_project_detail( - app: AppHandle, +pub async fn get_project_detail<R: Runtime>( + app: AppHandle<R>, db: State<'_, DbState>, project_id: String, ) -> Result<ProjectDetail, ProjectError> { - let project = { - let conn = db - .0 - .lock() - .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let mut project = { + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; let query = format!("SELECT {PROJECT_COLUMNS} FROM projects WHERE id = ?1"); let mut 
stmt = conn.prepare(&query)?; stmt.query_row(rusqlite::params![project_id], row_to_project) @@ -98,10 +95,18 @@ pub async fn get_project_detail( let prd_path = dir.join("prd.json"); let stories = if prd_path.exists() { - Prd::load(&prd_path).map(|prd| prd.stories).unwrap_or_default() + Prd::load(&prd_path) + .map(|prd| prd.stories) + .unwrap_or_default() } else { Vec::new() }; + let has_prd = prd_path.exists(); + let has_config = dir.join("config.json").exists(); + + let status = ProjectStatus::resolve_canonical(&project.status, has_prd, has_config, false) + .as_project_status(); + project.status = status.to_string(); let total_stories = stories.len(); let passed_count = stories.iter().filter(|story| story.passes).count(); diff --git a/src-tauri/src/projects/control.rs b/src-tauri/src/projects/control.rs index 760ec90..14d5d75 100644 --- a/src-tauri/src/projects/control.rs +++ b/src-tauri/src/projects/control.rs @@ -1,7 +1,7 @@ use crate::projects::artifacts::artifact_dir; use crate::projects::ProjectError; use std::sync::atomic::Ordering; -use tauri::{AppHandle, Manager}; +use tauri::{AppHandle, Emitter, Manager}; const TRANSITION_WAIT_STEPS: usize = 24; const TRANSITION_WAIT_MS: u64 = 250; @@ -18,6 +18,26 @@ fn loop_handle_state(app: &AppHandle, project_id: &str) -> (bool, bool) { (false, false) } +async fn emit_project_state_changed(app: &AppHandle, project_id: &str) { + let snapshot = crate::commands::projects::get_project_snapshot( + app.clone(), + app.state::(), + app.state::(), + project_id.to_string(), + ) + .await + .ok(); + let payload = if let Some(snapshot) = snapshot { + serde_json::json!({ + "projectId": project_id, + "snapshot": snapshot, + }) + } else { + serde_json::json!({ "projectId": project_id }) + }; + let _ = app.emit(crate::events::EVENT_PROJECT_STATE_CHANGED, payload); +} + async fn wait_for_shutdown_transition(app: &AppHandle, project_id: &str) { for _ in 0..TRANSITION_WAIT_STEPS { let (has_handle, shutdown_requested) = 
loop_handle_state(app, project_id); @@ -45,27 +65,28 @@ pub async fn pause_project(app: AppHandle, project_id: String) -> Result<(), Pro let db = app.state::<crate::db::DbState>(); let now = chrono::Utc::now().to_rfc3339(); - let conn = db - .0 - .lock() - .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; - let updated = conn.execute( - "UPDATE projects SET status = 'paused', updated_at = ?1 WHERE id = ?2", - rusqlite::params![now, project_id], - )?; + let updated = { + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + conn.execute( + "UPDATE projects SET status = 'paused', updated_at = ?1 WHERE id = ?2", + rusqlite::params![now, &project_id], + )? + }; if updated == 0 { return Err(ProjectError::NotFound(project_id)); } + emit_project_state_changed(&app, &project_id).await; Ok(()) } pub async fn resume_project(app: AppHandle, project_id: String) -> Result<(), ProjectError> { { let db = app.state::<crate::db::DbState>(); - let conn = db - .0 - .lock() - .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; let exists = conn .query_row( "SELECT id FROM projects WHERE id = ?1", @@ -91,14 +112,16 @@ pub async fn resume_project(app: AppHandle, project_id: String) -> Result<(), Pr if has_loop_handle && still_has_loop_handle { let now = chrono::Utc::now().to_rfc3339(); let db = app.state::<crate::db::DbState>(); - let conn = db - .0 - .lock() - .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; - let _ = conn.execute( - "UPDATE projects SET status = 'active', updated_at = ?1 WHERE id = ?2", - rusqlite::params![now, &project_id], - ); + { + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let _ = conn.execute( + "UPDATE projects SET status = 'active', updated_at = ?1 WHERE id = ?2", + rusqlite::params![now, &project_id], + ); + } + emit_project_state_changed(&app, &project_id).await; return Ok(()); } let restart_args = 
crate::loop_manager::StartLoopArgs { @@ -114,6 +137,9 @@ pub async fn resume_project(app: AppHandle, project_id: String) -> Result<(), Pr cooldown_seconds: None, test_command: None, max_verification_retries: None, + scm_provider: None, + review_polling_interval: None, + review_timeout: None, }; crate::loop_manager::start_loop(app, restart_args) .await diff --git a/src-tauri/src/projects/core_types.rs b/src-tauri/src/projects/core_types.rs index e64b963..a86b540 100644 --- a/src-tauri/src/projects/core_types.rs +++ b/src-tauri/src/projects/core_types.rs @@ -72,11 +72,53 @@ pub struct ProjectDetail { pub struct WizardResumeState { pub project: Project, pub wizard_step: String, + #[serde(default)] + pub wizard_session: Option, + #[serde(default, skip_serializing_if = "Option::is_none")] pub wizard_state_json: Option, + #[serde(default)] pub has_plan: bool, + #[serde(default)] pub has_prd: bool, } +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct WizardProjectData { + #[serde(default)] + pub name: String, + #[serde(default)] + pub description: String, + #[serde(default)] + pub working_directory: String, + #[serde(default = "default_plan_agent")] + pub plan_agent: String, + #[serde(default)] + pub plan_model: Option, + #[serde(default)] + pub plan_effort: Option, +} + +fn default_plan_agent() -> String { + "claude".to_string() +} + +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct WizardHydrationResult { + pub project: Project, + pub wizard_step: String, + #[serde(default)] + pub highest_step: u32, + pub project_data: WizardProjectData, + #[serde(default)] + pub plan_complete: bool, + #[serde(default)] + pub stories: Vec, + #[serde(default)] + pub config: Option, +} + #[derive(Debug, Clone, Serialize, Deserialize)] #[serde(rename_all = "camelCase")] pub struct IterationStory { @@ -84,5 +126,7 @@ pub struct IterationStory { pub title: String, pub status: String, pub 
duration_secs: Option<i64>, + #[serde(default, skip_serializing_if = "Option::is_none")] + pub duration_label: Option<String>, pub attempts: i64, } diff --git a/src-tauri/src/projects/documents.rs b/src-tauri/src/projects/documents.rs index 1e0eca2..d83b2db 100644 --- a/src-tauri/src/projects/documents.rs +++ b/src-tauri/src/projects/documents.rs @@ -1,36 +1,47 @@ use crate::db::DbState; -use crate::projects::artifacts::{ - artifact_dir, non_empty_file_content, project_working_directory, -}; +use crate::projects::artifacts::{artifact_dir, non_empty_file_content, project_working_directory}; use crate::projects::ProjectError; use ralph_core::prd::Prd; +#[cfg(test)] +use ralph_core::{CompletionScheduler, MergeAction, WorktreeCompletion}; use std::path::Path; -use tauri::{AppHandle, State}; +use std::sync::{Mutex, OnceLock}; +use tauri::{AppHandle, Runtime, State}; -pub async fn load_existing_plan( - app: AppHandle, - db: State<'_, DbState>, - project_id: String, +static MERGE_GATE: OnceLock<Mutex<()>> = OnceLock::new(); + +pub(crate) fn load_text_artifact_with_legacy_fallback( + artifacts: &Path, + working_directory: &Path, + file_name: &str, ) -> Result<Option<String>, ProjectError> { - let artifacts = artifact_dir(&app, &project_id)?; - let plan_path = artifacts.join("plan.md"); - if let Some(content) = non_empty_file_content(&plan_path)? { + let artifact_path = artifacts.join(file_name); + if let Some(content) = non_empty_file_content(&artifact_path)? { return Ok(Some(content)); } - let working_directory = project_working_directory(&db, &project_id)?; - let legacy_path = Path::new(&working_directory).join("plan.md"); - if let Some(content) = non_empty_file_content(&legacy_path)?
{ - std::fs::create_dir_all(&artifacts)?; - let _ = std::fs::write(&plan_path, &content); + let legacy_path = working_directory.join(file_name); + let legacy_content = non_empty_file_content(&legacy_path)?; + if let Some(content) = legacy_content { + let _ = std::fs::create_dir_all(artifacts); + let _ = std::fs::write(&artifact_path, &content); return Ok(Some(content)); } Ok(None) } -pub async fn save_plan( - app: AppHandle, +pub async fn load_existing_plan<R: Runtime>( + app: AppHandle<R>, + db: State<'_, DbState>, + project_id: String, +) -> Result<Option<String>, ProjectError> { + let artifacts = artifact_dir(&app, &project_id)?; + let working_directory = project_working_directory(&db, &project_id)?; + load_text_artifact_with_legacy_fallback(&artifacts, Path::new(&working_directory), "plan.md") +} +pub async fn save_plan<R: Runtime>( + app: AppHandle<R>, project_id: String, content: String, ) -> Result<(), ProjectError> { @@ -39,21 +50,22 @@ pub async fn save_plan( std::fs::write(artifacts.join("plan.md"), &content)?; Ok(()) } - -pub async fn save_prd( - app: AppHandle, +pub async fn save_prd<R: Runtime>( + app: AppHandle<R>, project_id: String, prd_json: String, ) -> Result<(), ProjectError> { let artifacts = artifact_dir(&app, &project_id)?; std::fs::create_dir_all(&artifacts)?; serde_json::from_str::<Prd>(&prd_json)?; - std::fs::write(artifacts.join("prd.json"), &prd_json)?; + with_merge_gate(|| { + std::fs::write(artifacts.join("prd.json"), &prd_json)?; + Ok(()) + })?; Ok(()) } - -pub async fn save_config( - app: AppHandle, +pub async fn save_config<R: Runtime>( + app: AppHandle<R>, project_id: String, config_json: String, ) -> Result<(), ProjectError> { @@ -61,18 +73,16 @@ pub async fn save_config( crate::projects::runtime_config::save_project_config(&app, &project_id, &parsed_config)?; Ok(()) } - -pub async fn load_config( - app: AppHandle, +pub async fn load_config<R: Runtime>( + app: AppHandle<R>, project_id: String, ) -> Result<Option<String>, ProjectError> { let artifacts = artifact_dir(&app, &project_id)?; let config_path = artifacts.join("config.json");
non_empty_file_content(&config_path) } - -pub async fn load_existing_prd( - app: AppHandle, +pub async fn load_existing_prd<R: Runtime>( + app: AppHandle<R>, db: State<'_, DbState>, project_id: String, ) -> Result<Option<String>, ProjectError> { @@ -96,23 +106,20 @@ pub async fn load_existing_prd( Ok(None) } - -pub async fn get_guardrails( - app: AppHandle, +pub async fn get_guardrails<R: Runtime>( + app: AppHandle<R>, project_id: String, ) -> Result<String, ProjectError> { let dir = artifact_dir(&app, &project_id)?; let guardrails_path = dir.join("guardrails.md"); - if guardrails_path.exists() { std::fs::read_to_string(&guardrails_path).map_err(ProjectError::Io) } else { Ok(String::new()) } } - -pub async fn load_output_log( - app: AppHandle, +pub async fn load_output_log<R: Runtime>( + app: AppHandle<R>, project_id: String, ) -> Result<String, ProjectError> { let dir = artifact_dir(&app, &project_id)?; @@ -120,16 +127,123 @@ pub async fn load_output_log( if !output_path.exists() { return Ok(String::new()); } - let content = std::fs::read_to_string(output_path)?; Ok(tail_lines(&content, 400)) } - +pub(crate) fn insert_session( + db: &DbState, + project_id: &str, + session_id: &str, + started_at: &str, +) -> Result<(), ProjectError> { + with_merge_gate(|| { + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + conn.execute( + "INSERT INTO sessions (id, project_id, started_at) VALUES (?1, ?2, ?3)", + rusqlite::params![session_id, project_id, started_at], + )?; + Ok(()) + }) +} +pub(crate) fn close_session( + db: &DbState, + session_id: &str, + ended_at: &str, +) -> Result<(), ProjectError> { + with_merge_gate(|| { + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + conn.execute( + "UPDATE sessions SET ended_at = ?1 WHERE id = ?2", + rusqlite::params![ended_at, session_id], + )?; + Ok(()) + }) +} +#[cfg(test)] +pub(crate) fn ordered_merge_actions(completions: Vec<WorktreeCompletion>) -> Vec<MergeAction> { + let mut scheduler = CompletionScheduler::new(); + scheduler.submit_batch(completions); + scheduler.drain_ordered() +}
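The `with_merge_gate` wrappers above serialize session and artifact writes behind one process-wide lock. A minimal standalone sketch of that gate pattern, using hypothetical names (`WRITE_GATE`, `with_write_gate`) rather than the project's own:

```rust
use std::sync::{Mutex, OnceLock};

// Lazily initialized process-wide gate, mirroring the OnceLock<Mutex<()>> idiom.
static WRITE_GATE: OnceLock<Mutex<()>> = OnceLock::new();

// Holds the gate for the duration of the closure, so concurrent
// writers to shared files run one at a time.
fn with_write_gate<T>(write: impl FnOnce() -> T) -> T {
    let _guard = WRITE_GATE
        .get_or_init(|| Mutex::new(()))
        .lock()
        .expect("gate mutex poisoned");
    write()
}

fn main() {
    let value = with_write_gate(|| 40 + 2);
    assert_eq!(value, 42);
    // A second call acquires the same gate after the first releases it.
    let label = with_write_gate(|| format!("{value} ok"));
    assert_eq!(label, "42 ok");
}
```

The single shared `Mutex<()>` guards no data directly; it only sequences the side effects performed inside the closure, which is why each call site must route every write through the gate rather than writing directly.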
+#[cfg(test)] +pub(crate) fn merge_action_target(project_dir: &Path, action: &MergeAction) -> String { + match action { + MergeAction::UpdateStoryStatus { .. } => project_dir.join("prd.json").display().to_string(), + MergeAction::AppendGuardrail { .. } => { + project_dir.join("guardrails.md").display().to_string() + } + MergeAction::UpdateSessionHead { worktree_id, .. } => format!("session:{worktree_id}"), + MergeAction::MergeIntoMain { worktree_id, .. } + | MergeAction::TeardownWorktree { worktree_id, .. } + | MergeAction::MergeConflict { worktree_id, .. } => format!("worktree:{worktree_id}"), + } +} +#[cfg(test)] +pub(crate) fn apply_merge_action( + project_dir: &Path, + action: &MergeAction, +) -> Result<(), ProjectError> { + match action { + MergeAction::UpdateStoryStatus { + story_id, + passed, + blocked, + } => with_merge_gate(|| update_story_status(project_dir, story_id, *passed, *blocked)), + MergeAction::AppendGuardrail { content, .. } => append_guardrails(project_dir, content), + MergeAction::UpdateSessionHead { .. } + | MergeAction::MergeIntoMain { .. } + | MergeAction::TeardownWorktree { .. } + | MergeAction::MergeConflict { .. 
} => Ok(()), + } +} +fn with_merge_gate<T>(write: impl FnOnce() -> Result<T, ProjectError>) -> Result<T, ProjectError> { + let _guard = MERGE_GATE + .get_or_init(|| Mutex::new(())) + .lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + write() +} +#[cfg(test)] +fn update_story_status( + project_dir: &Path, + story_id: &str, + passed: bool, + blocked: bool, +) -> Result<(), ProjectError> { + let prd_path = project_dir.join("prd.json"); + let mut prd = serde_json::from_str::<Prd>(&std::fs::read_to_string(&prd_path)?)?; + if let Some(story) = prd.stories.iter_mut().find(|story| story.id == story_id) { + story.passes = passed; + story.blocked = blocked; + } + std::fs::write(prd_path, serde_json::to_string_pretty(&prd)?)?; + Ok(()) +} +#[cfg(test)] +fn append_guardrails(project_dir: &Path, content: &str) -> Result<(), ProjectError> { + with_merge_gate(|| { + let mut file = std::fs::OpenOptions::new() + .create(true) + .append(true) + .open(project_dir.join("guardrails.md"))?; + let payload = if content.ends_with('\n') { + content.to_string() + } else { + format!("{content}\n") + }; + use std::io::Write; + file.write_all(payload.as_bytes())?; + Ok(()) + }) +} fn tail_lines(content: &str, max_lines: usize) -> String { let lines: Vec<&str> = content.lines().collect(); if lines.len() <= max_lines { return content.to_string(); } - lines[lines.len() - max_lines..].join("\n") } diff --git a/src-tauri/src/projects/mod.rs b/src-tauri/src/projects/mod.rs index c8fc6bf..c913a8e 100644 --- a/src-tauri/src/projects/mod.rs +++ b/src-tauri/src/projects/mod.rs @@ -4,17 +4,24 @@ pub mod config_types; pub mod control; pub mod core_types; pub mod documents; +pub mod notification_filter; pub mod notifications; pub mod reconcile; pub mod repository; pub mod runtime_config; pub mod stories; +pub mod stories_crud; +pub mod validation; pub mod wizard; +pub mod wizard_state; #[cfg(test)] mod reconcile_tests; pub use config_types::{NotificationPrefs, ProjectConfig}; pub use core_types::{ - IterationStory, Project,
ProjectDetail, ProjectError, ProjectsByStatus, WizardResumeState, + IterationStory, Project, ProjectDetail, ProjectError, ProjectsByStatus, WizardHydrationResult, + WizardProjectData, WizardResumeState, }; +pub use validation::{ConfigDefaultsResponse, DescribeInput, LaunchReadiness, ValidationErrors}; +pub use wizard_state::AdvanceWizardResult; diff --git a/src-tauri/src/projects/notification_filter.rs b/src-tauri/src/projects/notification_filter.rs new file mode 100644 index 0000000..1ed7249 --- /dev/null +++ b/src-tauri/src/projects/notification_filter.rs @@ -0,0 +1,111 @@ +use serde::{Deserialize, Serialize}; + +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct AppNotification { + pub id: String, + pub project_id: String, + pub notification_type: String, + pub title: String, + pub message: String, + pub ring_color: String, + pub read: bool, + pub timestamp: u64, +} + +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct ProjectNotificationSummary { + pub project_id: String, + pub unread_count: usize, + pub ring_color: Option<String>, +} + +pub fn type_to_ring_color(notification_type: &str) -> &'static str { + match notification_type { + "story_blocked" | "loop_error" => "red", + "loop_completed" | "story_completed" => "cyan", + "rate_limited" | "review_comment" => "amber", + _ => "cyan", + } +} + +pub fn ring_color_for_project(notifications: &[AppNotification]) -> Option<&'static str> { + let unread: Vec<&AppNotification> = notifications.iter().filter(|notif| !notif.read).collect(); + if unread.is_empty() { + return None; + } + let priority = ["red", "amber", "cyan", "green"]; + for color in &priority { + if unread.iter().any(|notif| notif.ring_color == *color) { + return Some(color); + } + } + Some("cyan") +} + +pub fn build_project_summaries( + notifications: &[AppNotification], +) -> Vec<ProjectNotificationSummary> { + let mut grouped: std::collections::BTreeMap<String, Vec<AppNotification>> = + std::collections::BTreeMap::new(); + for
notification in notifications { + grouped + .entry(notification.project_id.clone()) + .or_default() + .push(notification.clone()); + } + grouped + .into_iter() + .map(|(project_id, project_notifications)| { + let unread_count = project_notifications + .iter() + .filter(|entry| !entry.read) + .count(); + let ring_color = ring_color_for_project(&project_notifications).map(String::from); + ProjectNotificationSummary { + project_id, + unread_count, + ring_color, + } + }) + .collect() +} + +fn now_millis() -> u64 { + std::time::SystemTime::now() + .duration_since(std::time::UNIX_EPOCH) + .unwrap_or_default() + .as_millis() as u64 +} + +pub fn build_notification( + project_id: &str, + notification_type: &str, + title: String, + message: String, +) -> AppNotification { + let ring_color = type_to_ring_color(notification_type).to_string(); + AppNotification { + id: format!( + "notif_{}_{}", + now_millis(), + &uuid::Uuid::new_v4().to_string()[..8] + ), + project_id: project_id.to_string(), + notification_type: notification_type.to_string(), + title, + message, + ring_color, + read: false, + timestamp: now_millis(), + } +} + +pub fn should_emit_verification_failed(attempt: u32, circuit_breaker: bool) -> bool { + circuit_breaker || attempt >= 3 +} + +pub fn should_emit_iteration_completed(result: &str) -> bool { + result == "success" || result == "passed" +} diff --git a/src-tauri/src/projects/notifications.rs b/src-tauri/src/projects/notifications.rs index 56dadbf..ed8c547 100644 --- a/src-tauri/src/projects/notifications.rs +++ b/src-tauri/src/projects/notifications.rs @@ -1,15 +1,67 @@ +use crate::projects::notification_filter::{build_notification, AppNotification}; use crate::projects::{NotificationPrefs, ProjectError}; -use tauri::{AppHandle, Manager}; +use tauri::{AppHandle, Emitter, Manager, Runtime}; -pub async fn get_notification_prefs( - app: AppHandle, +#[derive(Debug, Clone)] +pub struct NotificationCreateInput { + pub project_id: String, + pub notification_type: 
String, + pub title: String, + pub message: String, +} + +pub fn create_notification_and_emit<R: Runtime>( + app: &AppHandle<R>, + input: NotificationCreateInput, +) -> Result<AppNotification, ProjectError> { + let notification = { + let db = app.state::<crate::db::DbState>(); + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".into()))?; + let created = build_notification( + &input.project_id, + &input.notification_type, + input.title, + input.message, + ); + conn.execute( + "INSERT INTO notifications (id, project_id, notification_type, title, message, ring_color, read, timestamp) VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8)", + rusqlite::params![ + created.id, + created.project_id, + created.notification_type, + created.title, + created.message, + created.ring_color, + i32::from(created.read), + created.timestamp, + ], + )?; + conn.execute( + "DELETE FROM notifications WHERE project_id = ?1 AND id NOT IN (SELECT id FROM notifications WHERE project_id = ?1 ORDER BY timestamp DESC LIMIT 200)", + rusqlite::params![created.project_id], + )?; + created + }; + let _ = app.emit( + crate::events::EVENT_NOTIFICATION_ADDED, + serde_json::json!({ + "projectId": notification.project_id, + "notification": notification.clone(), + }), + ); + Ok(notification) +} + +pub async fn get_notification_prefs<R: Runtime>( + app: AppHandle<R>, project_id: String, ) -> Result<NotificationPrefs, ProjectError> { let db = app.state::<crate::db::DbState>(); - let conn = db - .0 - .lock() - .map_err(|_| ProjectError::Db("Lock poisoned".into()))?; + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".into()))?; let prefs_json: Option<String> = conn .query_row( "SELECT notification_prefs FROM projects WHERE id = ?1", @@ -24,16 +76,15 @@ pub async fn get_notification_prefs( } } -pub async fn save_notification_prefs( - app: AppHandle, +pub async fn save_notification_prefs<R: Runtime>( + app: AppHandle<R>, project_id: String, prefs: NotificationPrefs, ) -> Result<(), ProjectError> { let db = app.state::<crate::db::DbState>(); - let conn = db - .0 - .lock() - .map_err(|_| ProjectError::Db("Lock poisoned".into()))?; + let conn = +
db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".into()))?; let json = serde_json::to_string(&prefs)?; conn.execute( "UPDATE projects SET notification_prefs = ?1 WHERE id = ?2", diff --git a/src-tauri/src/projects/reconcile.rs b/src-tauri/src/projects/reconcile.rs index 842f75d..abe8f0e 100644 --- a/src-tauri/src/projects/reconcile.rs +++ b/src-tauri/src/projects/reconcile.rs @@ -5,10 +5,10 @@ use ralph_core::detection::failure_memory::FailureMemory; use ralph_core::prd::Prd; use std::collections::HashSet; use std::path::Path; -use tauri::{AppHandle, Manager}; +use tauri::{AppHandle, Manager, Runtime}; -pub fn reconcile_project_prd( - app: &AppHandle, +pub fn reconcile_project_prd<R: Runtime>( + app: &AppHandle<R>, project_id: &str, gutter_threshold: u32, ) -> Result<bool, ProjectError> { @@ -34,15 +34,14 @@ pub fn reconcile_project_prd( Ok(changed) } -fn load_successful_story_ids( - app: &AppHandle, +fn load_successful_story_ids<R: Runtime>( + app: &AppHandle<R>, project_id: &str, ) -> Result<HashSet<String>, ProjectError> { let db = app.state::<crate::db::DbState>(); - let conn = db - .0 - .lock() - .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; let mut stmt = conn.prepare( "SELECT DISTINCT iterations.story_id FROM iterations @@ -51,7 +50,7 @@ fn load_successful_story_ids( )?; let rows = stmt.query_map(rusqlite::params![project_id], |row| row.get::<_, String>(0))?; - Ok(rows.filter_map(|row| row.ok()).collect()) + Ok(rows.filter_map(Result::ok).collect()) } fn load_blocked_story_ids(project_dir: &Path, gutter_threshold: u32) -> HashSet<String> { diff --git a/src-tauri/src/projects/repository.rs b/src-tauri/src/projects/repository.rs index 5377139..2b0a338 100644 --- a/src-tauri/src/projects/repository.rs +++ b/src-tauri/src/projects/repository.rs @@ -1,14 +1,18 @@ +use crate::models::ProjectStatus; use crate::projects::{Project, ProjectsByStatus}; pub const PROJECT_COLUMNS: &str = "id, name, description, status, working_directory,
created_at, updated_at, wizard_step"; pub fn row_to_project(row: &rusqlite::Row<'_>) -> rusqlite::Result { + let raw_status: String = row.get(3)?; + let status = ProjectStatus::resolve_canonical(&raw_status, false, false, false); + Ok(Project { id: row.get(0)?, name: row.get(1)?, description: row.get(2)?, - status: row.get(3)?, + status: status.as_project_status().to_string(), working_directory: row.get(4)?, created_at: row.get(5)?, updated_at: row.get(6)?, diff --git a/src-tauri/src/projects/runtime_config.rs b/src-tauri/src/projects/runtime_config.rs index 7ab7725..f971c7b 100644 --- a/src-tauri/src/projects/runtime_config.rs +++ b/src-tauri/src/projects/runtime_config.rs @@ -1,8 +1,12 @@ use crate::projects::artifacts::artifact_dir; use crate::projects::{ProjectConfig, ProjectError}; -use tauri::AppHandle; +use std::path::Path; +use tauri::{AppHandle, Manager, Runtime}; -fn sanitize_config(mut config: ProjectConfig) -> ProjectConfig { +#[path = "../services/wizard_session_adapter.rs"] +pub(crate) mod wizard_session_adapter; + +pub(crate) fn sanitize_config(mut config: ProjectConfig) -> ProjectConfig { let default_config = ProjectConfig::default(); config.schema_version = default_config.schema_version; config.execute_agent = config.execute_agent.trim().to_string(); @@ -48,7 +52,7 @@ fn sanitize_config(mut config: ProjectConfig) -> ProjectConfig { config } -fn legacy_config_from_loop_args(content: &str) -> Result { +pub(crate) fn legacy_config_from_loop_args(content: &str) -> Result { let args: serde_json::Value = serde_json::from_str(content)?; let default_config = ProjectConfig::default(); let execute_agent = args @@ -59,11 +63,11 @@ fn legacy_config_from_loop_args(content: &str) -> Result Result Result Result( + app: &AppHandle, project_id: &str, config: &ProjectConfig, ) -> Result<(), ProjectError> { @@ -128,27 +130,53 @@ pub fn save_project_config( Ok(()) } -pub async fn get_project_config( - app: AppHandle, - project_id: String, +pub(crate) fn 
load_project_config_from_paths( + artifacts: &Path, + working_directory: &Path, ) -> Result<Option<ProjectConfig>, ProjectError> { - let dir = artifact_dir(&app, &project_id)?; - let config_path = dir.join("config.json"); - + let config_path = artifacts.join("config.json"); if config_path.exists() { let content = std::fs::read_to_string(&config_path)?; let parsed_config: ProjectConfig = serde_json::from_str(&content)?; - let sanitized_config = sanitize_config(parsed_config); - save_project_config(&app, &project_id, &sanitized_config)?; - return Ok(Some(sanitized_config)); + return Ok(Some(sanitize_config(parsed_config))); } - let loop_args_path = dir.join("loop_args.json"); + let loop_args_path = artifacts.join("loop_args.json"); if loop_args_path.exists() { let content = std::fs::read_to_string(&loop_args_path)?; - let migrated_config = legacy_config_from_loop_args(&content)?; - save_project_config(&app, &project_id, &migrated_config)?; - return Ok(Some(migrated_config)); + return Ok(Some(legacy_config_from_loop_args(&content)?)); + } + + let legacy_config_path = working_directory.join("config.json"); + if legacy_config_path.exists() { + let content = std::fs::read_to_string(&legacy_config_path)?; + let parsed_config: ProjectConfig = serde_json::from_str(&content)?; + return Ok(Some(sanitize_config(parsed_config))); + } + + Ok(None) +} + +pub async fn get_project_config<R: Runtime>( + app: AppHandle<R>, + project_id: String, +) -> Result<Option<ProjectConfig>, ProjectError> { + let dir = artifact_dir(&app, &project_id)?; + let db = app.state::<crate::db::DbState>(); + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let working_directory = conn + .query_row( + "SELECT working_directory FROM projects WHERE id = ?1", + rusqlite::params![project_id], + |row: &rusqlite::Row| row.get::<_, String>(0), + ) + .map_err(|_| ProjectError::NotFound(project_id.clone()))?; + drop(conn); + if let Some(config) = load_project_config_from_paths(&dir, Path::new(&working_directory))?
{ + save_project_config(&app, &project_id, &config)?; + return Ok(Some(config)); } Ok(None) diff --git a/src-tauri/src/projects/stories.rs b/src-tauri/src/projects/stories.rs index 73f9124..96a1491 100644 --- a/src-tauri/src/projects/stories.rs +++ b/src-tauri/src/projects/stories.rs @@ -5,6 +5,18 @@ use ralph_core::prd::Prd; use rusqlite::OptionalExtension; use tauri::{AppHandle, State}; +fn format_duration_label(duration_secs: i64) -> String { + if duration_secs <= 0 { + return "0s".to_string(); + } + if duration_secs < 60 { + return format!("{duration_secs}s"); + } + let minutes = duration_secs / 60; + let seconds = duration_secs % 60; + format!("{minutes}m {seconds}s") +} + pub async fn get_project_stories( app: AppHandle, db: State<'_, DbState>, @@ -14,15 +26,16 @@ let prd_path = dir.join("prd.json"); let stories = if prd_path.exists() { - Prd::load(&prd_path).map(|prd| prd.stories).unwrap_or_default() + Prd::load(&prd_path) + .map(|prd| prd.stories) + .unwrap_or_default() } else { return Ok(Vec::new()); }; - let conn = db - .0 - .lock() - .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; + let conn = + db.0.lock() + .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; let latest_session: Option<(String, Option<String>)> = conn .query_row( @@ -32,11 +45,12 @@ ) .optional() .map_err(|err| ProjectError::Db(err.to_string()))?; - let latest_session_id = latest_session.as_ref().map(|(session_id, _)| session_id.clone()); + let latest_session_id = latest_session + .as_ref() + .map(|(session_id, _)| session_id.clone()); let latest_session_is_open = latest_session .as_ref() - .map(|(_, ended_at)| ended_at.is_none()) - .unwrap_or(false); + .is_some_and(|(_, ended_at)| ended_at.is_none()); let active_story_id: Option<String> = if latest_session_is_open { if let Some(ref sid) = latest_session_id { @@ -98,6 +112,7 @@ title: story.title.clone(), status, duration_secs,
+ duration_label: duration_secs.map(format_duration_label), attempts, } }) diff --git a/src-tauri/src/projects/stories_crud.rs b/src-tauri/src/projects/stories_crud.rs new file mode 100644 index 0000000..9ef8693 --- /dev/null +++ b/src-tauri/src/projects/stories_crud.rs @@ -0,0 +1,219 @@ +use crate::projects::artifacts::artifact_dir; +use crate::projects::ProjectError; +use ralph_core::prd::Prd; +pub use ralph_core::prd::UserStory; +use serde::{Deserialize, Serialize}; +use std::path::Path; +use tauri::{AppHandle, Runtime}; + +pub(crate) fn load_prd_from_paths( + artifacts: &Path, + working_directory: &Path, +) -> Result<Option<Prd>, ProjectError> { + let artifact_path = artifacts.join("prd.json"); + if artifact_path.exists() { + return Prd::load(&artifact_path) + .map(Some) + .map_err(|err| ProjectError::Path(err.to_string())); + } + + let legacy_path = working_directory.join("prd.json"); + if legacy_path.exists() { + return Prd::load(&legacy_path) + .map(Some) + .map_err(|err| ProjectError::Path(err.to_string())); + } + + Ok(None) +} + +fn load_prd<R: Runtime>(app: &AppHandle<R>, project_id: &str) -> Result<Prd, ProjectError> { + let dir = artifact_dir(app, project_id)?; + let prd_path = dir.join("prd.json"); + if prd_path.exists() { + Prd::load(&prd_path) + .map_err(|_| ProjectError::Path(format!("Failed to load prd.json for {project_id}"))) + } else { + Ok(Prd { + project_name: String::new(), + feature: String::new(), + working_directory: String::new(), + branch_name: None, + stories: Vec::new(), + generated_at: Some(chrono::Utc::now().to_rfc3339()), + }) + } +} + +fn save_prd<R: Runtime>( + app: &AppHandle<R>, + project_id: &str, + prd: &Prd, +) -> Result<(), ProjectError> { + let dir = artifact_dir(app, project_id)?; + std::fs::create_dir_all(&dir)?; + let json = serde_json::to_string_pretty(prd)?; + std::fs::write(dir.join("prd.json"), json)?; + Ok(()) +} + +fn total_estimated_minutes(prd: &Prd) -> u32 { + prd.stories + .iter() + .map(|story| story.estimated_minutes) + .sum() +} + +#[derive(Debug, Clone, Serialize,
Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct StoriesResponse { + pub stories: Vec<UserStory>, + pub total_estimated_minutes: u32, +} + +pub fn add_story<R: Runtime>( + app: &AppHandle<R>, + project_id: &str, +) -> Result<StoriesResponse, ProjectError> { + let mut prd = load_prd(app, project_id)?; + let next_index = prd.stories.len() + 1; + let padded_id = format!("S-{next_index:03}"); + + let story = UserStory { + id: padded_id, + title: "New story".to_string(), + description: None, + acceptance_criteria: Vec::new(), + scope: ralph_core::prd::ScopeSpec::default(), + verification: ralph_core::prd::VerificationSpec::default(), + commit_message: None, + priority: ralph_core::prd::Priority::Medium, + estimated_complexity: ralph_core::prd::Complexity::Medium, + estimated_minutes: 30, + depends_on: Vec::new(), + passes: false, + blocked: false, + attempts: 0, + notes: None, + }; + + prd.stories.push(story); + let minutes = total_estimated_minutes(&prd); + save_prd(app, project_id, &prd)?; + + Ok(StoriesResponse { + stories: prd.stories, + total_estimated_minutes: minutes, + }) +} + +pub fn update_story<R: Runtime>( + app: &AppHandle<R>, + project_id: &str, + story_id: &str, + patch_json: &str, +) -> Result<StoriesResponse, ProjectError> { + let mut prd = load_prd(app, project_id)?; + let patch: serde_json::Value = serde_json::from_str(patch_json)?; + + let story = prd + .stories + .iter_mut() + .find(|story| story.id == story_id) + .ok_or_else(|| ProjectError::NotFound(format!("Story {story_id}")))?; + + if let Some(title) = patch.get("title").and_then(|val| val.as_str()) { + story.title = title.to_string(); + } + if let Some(description) = patch.get("description").and_then(|val| val.as_str()) { + story.description = Some(description.to_string()); + } + if let Some(priority) = patch.get("priority").and_then(|val| val.as_str()) { + if let Ok(parsed) = serde_json::from_value::<ralph_core::prd::Priority>( + serde_json::Value::String(priority.to_string()), + ) { + story.priority = parsed; + } + } + if let Some(complexity) = patch + .get("estimatedComplexity") + .and_then(|val| val.as_str()) + { + if let Ok(parsed) = serde_json::from_value::<ralph_core::prd::Complexity>( + serde_json::Value::String(complexity.to_string()), + ) { + story.estimated_complexity = parsed; + } + } + if let Some(minutes) = patch + .get("estimatedMinutes") + .and_then(serde_json::Value::as_u64) + { + story.estimated_minutes = minutes as u32; + } + + let minutes = total_estimated_minutes(&prd); + save_prd(app, project_id, &prd)?; + + Ok(StoriesResponse { + stories: prd.stories, + total_estimated_minutes: minutes, + }) +} + +pub fn remove_story<R: Runtime>( + app: &AppHandle<R>, + project_id: &str, + story_id: &str, +) -> Result<StoriesResponse, ProjectError> { + let mut prd = load_prd(app, project_id)?; + let original_count = prd.stories.len(); + prd.stories.retain(|story| story.id != story_id); + + if prd.stories.len() == original_count { + return Err(ProjectError::NotFound(format!("Story {story_id}"))); + } + + let minutes = total_estimated_minutes(&prd); + save_prd(app, project_id, &prd)?; + + Ok(StoriesResponse { + stories: prd.stories, + total_estimated_minutes: minutes, + }) +} + +pub fn reorder_stories<R: Runtime>( + app: &AppHandle<R>, + project_id: &str, + from_index: usize, + to_index: usize, +) -> Result<StoriesResponse, ProjectError> { + let mut prd = load_prd(app, project_id)?; + + if from_index >= prd.stories.len() || to_index >= prd.stories.len() { + return Err(ProjectError::Path("Index out of bounds".to_string())); + } + + let moved = prd.stories.remove(from_index); + prd.stories.insert(to_index, moved); + let minutes = total_estimated_minutes(&prd); + save_prd(app, project_id, &prd)?; + + Ok(StoriesResponse { + stories: prd.stories, + total_estimated_minutes: minutes, + }) +} + +pub fn get_stories<R: Runtime>( + app: &AppHandle<R>, + project_id: &str, +) -> Result<StoriesResponse, ProjectError> { + let prd = load_prd(app, project_id)?; + let minutes = total_estimated_minutes(&prd); + Ok(StoriesResponse { + total_estimated_minutes: minutes, + stories: prd.stories, + }) +} diff --git a/src-tauri/src/projects/validation.rs b/src-tauri/src/projects/validation.rs new file mode 100644 index 0000000..c09e5f8 ---
/dev/null +++ b/src-tauri/src/projects/validation.rs @@ -0,0 +1,253 @@ +use crate::projects::config_types::ProjectConfig; +use serde::{Deserialize, Serialize}; + +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct ConfigLimits { + pub gutter_threshold: (u32, u32), + pub max_iterations: (u32, u32), + pub cooldown_seconds: (u32, u32), + pub max_verification_retries: (u32, u32), + pub review_polling_interval: (u64, u64), + pub review_timeout: (u64, u64), +} + +impl Default for ConfigLimits { + fn default() -> Self { + Self { + gutter_threshold: (1, 20), + max_iterations: (1, 500), + cooldown_seconds: (0, 300), + max_verification_retries: (1, 10), + review_polling_interval: (10, 600), + review_timeout: (60, 3600), + } + } +} + +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct ConfigDefaultsResponse { + pub config: ProjectConfig, + pub limits: ConfigLimits, +} + +pub fn get_config_defaults() -> ConfigDefaultsResponse { + ConfigDefaultsResponse { + config: ProjectConfig::default(), + limits: ConfigLimits::default(), + } +} + +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct ValidationErrors { + pub errors: std::collections::HashMap, +} + +impl ValidationErrors { + pub fn is_empty(&self) -> bool { + self.errors.is_empty() + } +} + +pub fn validate_config(config: &ProjectConfig) -> ValidationErrors { + let limits = ConfigLimits::default(); + let mut errors = std::collections::HashMap::new(); + + if config.execute_agent.trim().is_empty() { + errors.insert( + "executeAgent".to_string(), + "Execute agent is required".to_string(), + ); + } + + validate_range( + &mut errors, + "gutterThreshold", + "Gutter threshold", + u64::from(config.gutter_threshold), + u64::from(limits.gutter_threshold.0), + u64::from(limits.gutter_threshold.1), + ); + validate_range( + &mut errors, + "maxIterations", + "Max iterations", + 
u64::from(config.max_iterations), + u64::from(limits.max_iterations.0), + u64::from(limits.max_iterations.1), + ); + validate_range( + &mut errors, + "cooldownSeconds", + "Cooldown", + u64::from(config.cooldown_seconds), + u64::from(limits.cooldown_seconds.0), + u64::from(limits.cooldown_seconds.1), + ); + validate_range( + &mut errors, + "maxVerificationRetries", + "Verification retries", + u64::from(config.max_verification_retries), + u64::from(limits.max_verification_retries.0), + u64::from(limits.max_verification_retries.1), + ); + validate_range( + &mut errors, + "reviewPollingInterval", + "Poll interval", + config.review_polling_interval, + limits.review_polling_interval.0, + limits.review_polling_interval.1, + ); + validate_range( + &mut errors, + "reviewTimeout", + "Timeout", + config.review_timeout, + limits.review_timeout.0, + limits.review_timeout.1, + ); + + ValidationErrors { errors } +} + +fn validate_range( + errors: &mut std::collections::HashMap<String, String>, + field: &str, + label: &str, + value: u64, + min: u64, + max: u64, +) { + if value < min || value > max { + errors.insert( + field.to_string(), + format!("{label} must be between {min} and {max}"), + ); + } +} + +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct DescribeInput { + pub name: String, + pub description: String, + pub working_directory: String, + pub plan_agent: String, +} + +pub fn validate_describe(input: &DescribeInput, available_agents: &[String]) -> ValidationErrors { + let mut errors = std::collections::HashMap::new(); + + if input.name.trim().is_empty() { + errors.insert("name".to_string(), "Project name is required".to_string()); + } + if input.description.trim().is_empty() { + errors.insert( + "description".to_string(), + "Feature description is required".to_string(), + ); + } + if input.working_directory.trim().is_empty() { + errors.insert( + "workingDirectory".to_string(), + "Working directory is required".to_string(), + ); + } + if 
available_agents.is_empty() { + errors.insert( + "submit".to_string(), + "No supported agent was detected. Install Claude, Codex, Gemini, or OpenCode." + .to_string(), + ); + } else if !available_agents.contains(&input.plan_agent) { + errors.insert( + "submit".to_string(), + "Selected plan agent is not available in this environment.".to_string(), + ); + } + + ValidationErrors { errors } +} + +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct LaunchReadiness { + pub ready: bool, + pub issues: Vec<String>, + #[serde(default)] + pub total_estimated_minutes: u32, + #[serde(default)] + pub total_estimated_hours: f64, +} + +pub fn validate_launch_readiness( + project_name: &str, + working_directory: &str, + stories_count: usize, + execute_agent: &str, + estimated_minutes_per_story: &[u32], +) -> LaunchReadiness { + let mut issues = Vec::new(); + + if project_name.trim().is_empty() { + issues.push("Project name is missing.".to_string()); + } + if working_directory.trim().is_empty() { + issues.push("Working directory is missing.".to_string()); + } + if stories_count == 0 { + issues.push("Add at least one story before launching.".to_string()); + } + if execute_agent.trim().is_empty() { + issues.push("Execution agent is missing.".to_string()); + } + + let total_minutes: u32 = estimated_minutes_per_story.iter().sum(); + let total_hours = f64::from(total_minutes) / 60.0; + + LaunchReadiness { + ready: issues.is_empty(), + issues, + total_estimated_minutes: total_minutes, + total_estimated_hours: (total_hours * 10.0).round() / 10.0, + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn default_config_passes_validation() { + let config = ProjectConfig::default(); + let result = validate_config(&config); + assert!(result.is_empty()); + } + + #[test] + fn empty_agent_fails_validation() { + let mut config = ProjectConfig::default(); + config.execute_agent = String::new(); + let result = validate_config(&config); + 
assert!(result.errors.contains_key("executeAgent")); + } + + #[test] + fn launch_readiness_catches_missing_fields() { + let result = validate_launch_readiness("", "", 0, "", &[]); + assert!(!result.ready); + assert_eq!(result.issues.len(), 4); + } + + #[test] + fn launch_readiness_computes_totals() { + let result = validate_launch_readiness("proj", "/tmp", 2, "claude", &[30, 90]); + assert!(result.ready); + assert_eq!(result.total_estimated_minutes, 120); + assert!((result.total_estimated_hours - 2.0).abs() < 0.01); + } +} diff --git a/src-tauri/src/projects/wizard.rs b/src-tauri/src/projects/wizard.rs index f875580..0a4ff5e 100644 --- a/src-tauri/src/projects/wizard.rs +++ b/src-tauri/src/projects/wizard.rs @@ -1,9 +1,9 @@ use crate::db::DbState; use crate::projects::artifacts::{artifact_dir, non_empty_file_content}; use crate::projects::repository::{row_to_project, PROJECT_COLUMNS}; -use crate::projects::{ProjectError, WizardResumeState}; +use crate::projects::wizard_state::CanonicalWizardSession; +use crate::projects::{ProjectError, WizardHydrationResult, WizardProjectData, WizardResumeState}; use ralph_core::prd::Prd; -use serde_json::{Map, Value}; use std::path::Path; use tauri::{AppHandle, Manager, Runtime, State}; @@ -42,16 +42,13 @@ pub async fn discard_draft( rusqlite::params![project_id], )?; drop(conn); - if deleted == 0 { return Err(ProjectError::NotFound(project_id)); } - let dir = artifact_dir(&app, &project_id)?; if dir.exists() { let _ = std::fs::remove_dir_all(&dir); } - Ok(()) } @@ -61,8 +58,8 @@ pub async fn save_wizard_state( wizard_step: String, wizard_state_json: String, ) -> Result<(), ProjectError> { - let (_, canonical_json) = - canonical_wizard_payload(&wizard_state_json, Some(wizard_step.as_str()))?; + let canonical_json = + canonical_wizard_json(&wizard_state_json, &project_id, Some(&wizard_step))?; let now = chrono::Utc::now().to_rfc3339(); let conn = db.0.lock() @@ -84,8 +81,8 @@ pub async fn save_draft( ) -> Result<(), ProjectError> { 
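The range checks in `validation.rs` above all funnel through one helper that collects field-keyed messages into a map. A self-contained sketch of that pattern (the helper body is copied from the diff; the `main` driver and the sample out-of-range value are illustrative only):

```rust
use std::collections::HashMap;

// Copied pattern from validation.rs: an out-of-range value inserts a
// camelCase field key mapped to a human-readable message.
fn validate_range(
    errors: &mut HashMap<String, String>,
    field: &str,
    label: &str,
    value: u64,
    min: u64,
    max: u64,
) {
    if value < min || value > max {
        errors.insert(
            field.to_string(),
            format!("{label} must be between {min} and {max}"),
        );
    }
}

fn main() {
    let mut errors = HashMap::new();
    // 600 exceeds the illustrative maxIterations bound of (1, 500).
    validate_range(&mut errors, "maxIterations", "Max iterations", 600, 1, 500);
    // 30 sits inside the cooldown bound of (0, 300), so nothing is inserted.
    validate_range(&mut errors, "cooldownSeconds", "Cooldown", 30, 0, 300);
    assert_eq!(errors.len(), 1);
    assert_eq!(
        errors["maxIterations"],
        "Max iterations must be between 1 and 500"
    );
    println!("{errors:?}");
}
```

Keying errors by the camelCase field name keeps the map directly serializable for the frontend form that renders per-field messages.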
let artifacts = artifact_dir(&app, &project_id)?; std::fs::create_dir_all(&artifacts)?; - let (draft_step, canonical_json) = canonical_wizard_payload(&draft_json, None)?; - std::fs::write(artifacts.join("draft.json"), &canonical_json)?; + let session = canonical_wizard_session(&draft_json, &project_id, None)?; + std::fs::write(artifacts.join("draft.json"), session.to_json()?)?; let now = chrono::Utc::now().to_rfc3339(); let db = app.state::(); let conn = @@ -93,35 +90,29 @@ pub async fn save_draft( .map_err(|_| ProjectError::Db("Lock poisoned".to_string()))?; let _ = conn.execute( "UPDATE projects SET wizard_step = ?1, wizard_state_json = NULL, updated_at = ?2 WHERE id = ?3", - rusqlite::params![draft_step, now, project_id], + rusqlite::params![session.current_step, now, project_id], ); Ok(()) } -fn canonical_wizard_payload( +fn canonical_wizard_session( payload_json: &str, + project_id: &str, fallback_step: Option<&str>, -) -> Result<(String, String), ProjectError> { - let mut payload = serde_json::from_str::<Map<String, Value>>(payload_json)?; - let step = normalized_wizard_step(fallback_step, &payload); - payload.insert("currentStep".to_string(), Value::String(step.clone())); - Ok((step, serde_json::to_string(&Value::Object(payload))?)) +) -> Result<CanonicalWizardSession, ProjectError> { + Ok(CanonicalWizardSession::from_payload( + payload_json, + Some(project_id), + fallback_step, + )?) } -fn normalized_wizard_step(fallback_step: Option<&str>, payload: &Map<String, Value>) -> String { - fallback_step - .map(str::trim) - .filter(|step| !step.is_empty()) - .map(str::to_string) - .or_else(|| { - payload - .get("currentStep") - .and_then(Value::as_str) - .map(str::trim) - .filter(|step| !step.is_empty()) - .map(str::to_string) - }) - .unwrap_or_else(|| "describe".to_string()) +fn canonical_wizard_json( + payload_json: &str, + project_id: &str, + fallback_step: Option<&str>, +) -> Result<String, ProjectError> { + Ok(canonical_wizard_session(payload_json, project_id, fallback_step)?.to_json()?) 
} pub async fn load_draft( @@ -150,21 +141,35 @@ pub async fn resume_wizard( .map_err(|_| ProjectError::NotFound(project_id.clone()))? }; - let step = match project.wizard_step.clone() { - Some(value) => value, - None if project.status == "draft" => "describe".to_string(), - None => { - return Err(ProjectError::NotFound(format!( - "No wizard state for {project_id}" - ))) - } - }; - + let fallback_step = project + .wizard_step + .clone() + .or_else(|| (project.status == "draft").then(|| "describe".to_string())); let dir = artifact_dir(&app, &project_id)?; + let draft_json = non_empty_file_content(&dir.join("draft.json"))?; + let session_source = draft_json.clone().or_else(|| wizard_state_json.clone()); + let wizard_session = session_source + .as_deref() + .map(|raw| canonical_wizard_session(raw, &project_id, fallback_step.as_deref())) + .transpose()?; + let wizard_step = wizard_session + .as_ref() + .map(|session| session.current_step.clone()) + .or(fallback_step) + .ok_or_else(|| ProjectError::NotFound(format!("No wizard state for {project_id}")))?; + let artifact_plan = non_empty_file_content(&dir.join("plan.md"))?; let legacy_plan = non_empty_file_content(&Path::new(&project.working_directory).join("plan.md"))?; - let has_plan = artifact_plan.is_some() || legacy_plan.is_some(); + let has_plan = artifact_plan.is_some() + || legacy_plan.is_some() + || wizard_session + .as_ref() + .and_then(|session| session.plan.document.as_ref()) + .is_some() + || wizard_session + .as_ref() + .is_some_and(|session| session.plan.completed); let artifact_prd = Prd::load(&dir.join("prd.json")) .ok() @@ -172,13 +177,86 @@ pub async fn resume_wizard( let legacy_prd = Prd::load(&Path::new(&project.working_directory).join("prd.json")) .ok() .filter(|prd| !prd.stories.is_empty()); - let has_prd = artifact_prd.is_some() || legacy_prd.is_some(); + let has_prd = artifact_prd.is_some() + || legacy_prd.is_some() + || wizard_session + .as_ref() + .is_some_and(|session| 
!session.atomize.stories.is_empty()); Ok(WizardResumeState { project, - wizard_step: step, - wizard_state_json, + wizard_step, + wizard_session: wizard_session.clone(), + wizard_state_json: wizard_session + .map(|session| session.to_json()) + .transpose()?, has_plan, has_prd, }) } + +pub async fn hydrate_wizard( + app: AppHandle, + db: State<'_, DbState>, + project_id: String, +) -> Result<WizardHydrationResult, ProjectError> { + let resume = resume_wizard(app.clone(), db, project_id.clone()).await?; + let dir = artifact_dir(&app, &project_id)?; + let session = resume.wizard_session.clone().unwrap_or_default(); + let current_step_number = + crate::projects::wizard_state::step_name_to_number(&resume.wizard_step).unwrap_or(1); + let highest_step = session.highest_step.max(current_step_number); + + let project_data = hydrated_project_data(&resume.project, &session); + let plan_complete = resume.has_plan || session.plan.completed; + let stories = load_hydrated_stories(&dir, &session, resume.has_prd); + let config = session.configure.clone(); + + Ok(WizardHydrationResult { + project: resume.project, + wizard_step: resume.wizard_step, + highest_step, + project_data, + plan_complete, + stories, + config, + }) +} + +fn hydrated_project_data( + project: &crate::projects::Project, + session: &CanonicalWizardSession, +) -> WizardProjectData { + WizardProjectData { + name: non_empty_or(&session.describe.name, &project.name), + description: non_empty_or(&session.describe.description, &project.description), + working_directory: non_empty_or( + &session.describe.working_directory, + &project.working_directory, + ), + plan_agent: non_empty_or(&session.describe.plan_agent, "claude"), + plan_model: session.describe.plan_model.clone(), + plan_effort: session.describe.plan_effort.clone(), + } +} + +fn non_empty_or(value: &str, fallback: &str) -> String { + if value.trim().is_empty() { + fallback.to_string() + } else { + value.to_string() + } +} + +fn load_hydrated_stories( + dir: &Path, + session: 
&CanonicalWizardSession, + has_prd: bool, +) -> Vec<ralph_core::prd::UserStory> { + if has_prd { + return Prd::load(&dir.join("prd.json")) + .map(|prd| prd.stories) + .unwrap_or_default(); + } + session.atomize.stories.clone() +} diff --git a/src-tauri/src/projects/wizard_state.rs b/src-tauri/src/projects/wizard_state.rs new file mode 100644 index 0000000..adedb52 --- /dev/null +++ b/src-tauri/src/projects/wizard_state.rs @@ -0,0 +1,215 @@ +use crate::projects::config_types::ProjectConfig; +use ralph_core::prd::UserStory; +use serde::{Deserialize, Serialize}; + +const WIZARD_STEPS: &[&str] = &["describe", "plan", "atomize", "configure", "launch"]; + +#[derive(Debug, Clone, Serialize, Deserialize, Default)] +#[serde(rename_all = "camelCase")] +pub struct WizardDescribeState { + #[serde(default)] + pub name: String, + #[serde(default)] + pub description: String, + #[serde(default)] + pub working_directory: String, + #[serde(default = "default_plan_agent")] + pub plan_agent: String, + #[serde(default)] + pub plan_model: Option<String>, + #[serde(default)] + pub plan_effort: Option<String>, +} + +#[derive(Debug, Clone, Serialize, Deserialize, Default)] +#[serde(rename_all = "camelCase")] +pub struct WizardPlanState { + #[serde(default)] + pub completed: bool, + #[serde(default, skip_serializing_if = "Option::is_none")] + pub document: Option<String>, +} + +#[derive(Debug, Clone, Serialize, Deserialize, Default)] +#[serde(rename_all = "camelCase")] +pub struct WizardAtomizeState { + #[serde(default)] + pub stories_count: u32, + #[serde(default)] + pub stories: Vec<UserStory>, +} + +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct CanonicalWizardSession { + #[serde(default = "default_session_version")] + pub version: u32, + #[serde(default)] + pub project_id: String, + #[serde(default = "default_step_name")] + pub current_step: String, + #[serde(default = "default_first_step")] + pub highest_step: u32, + #[serde(default)] + pub stale_from_step: Option<u32>, + #[serde(default)] + pub 
describe: WizardDescribeState, + #[serde(default)] + pub plan: WizardPlanState, + #[serde(default)] + pub atomize: WizardAtomizeState, + #[serde(default, skip_serializing_if = "Option::is_none")] + pub configure: Option<ProjectConfig>, + #[serde(default, skip_serializing_if = "Option::is_none")] + pub prompt: Option<String>, + #[serde(default, skip_serializing_if = "Option::is_none")] + pub guardrails: Option<String>, +} + +impl Default for CanonicalWizardSession { + fn default() -> Self { + Self { + version: default_session_version(), + project_id: String::new(), + current_step: default_step_name(), + highest_step: default_first_step(), + stale_from_step: None, + describe: WizardDescribeState::default(), + plan: WizardPlanState::default(), + atomize: WizardAtomizeState::default(), + configure: None, + prompt: None, + guardrails: None, + } + } +} + +impl CanonicalWizardSession { + pub fn from_payload( + payload_json: &str, + project_id: Option<&str>, + fallback_step: Option<&str>, + ) -> Result<Self, serde_json::Error> { + let mut session = serde_json::from_str::<Self>(payload_json)?; + session.normalize(project_id, fallback_step); + Ok(session) + } + + pub fn normalize(&mut self, project_id: Option<&str>, fallback_step: Option<&str>) { + if self.project_id.trim().is_empty() { + self.project_id = project_id.unwrap_or_default().to_string(); + } + self.current_step = normalized_wizard_step(fallback_step, Some(self.current_step.as_str())); + let current_step = step_name_to_number(&self.current_step).unwrap_or(1); + if self.highest_step < current_step { + self.highest_step = current_step; + } + self.atomize.stories_count = self + .atomize + .stories_count + .max(self.atomize.stories.len() as u32); + } + + pub fn to_json(&self) -> Result<String, serde_json::Error> { + serde_json::to_string(self) + } +} + +fn default_plan_agent() -> String { + "claude".to_string() +} + +fn default_session_version() -> u32 { + 1 +} + +fn default_step_name() -> String { + "describe".to_string() +} + +fn default_first_step() -> u32 { + 1 +} + +pub fn 
normalized_wizard_step(fallback_step: Option<&str>, current_step: Option<&str>) -> String { + fallback_step + .map(str::trim) + .filter(|step| !step.is_empty()) + .map(str::to_string) + .or_else(|| { + current_step + .map(str::trim) + .filter(|step| !step.is_empty()) + .map(str::to_string) + }) + .unwrap_or_else(default_step_name) +} + +pub fn step_name_to_number(name: &str) -> Option<u32> { + WIZARD_STEPS + .iter() + .position(|step| *step == name) + .map(|index| (index as u32) + 1) +} + +pub fn step_number_to_name(number: u32) -> Option<&'static str> { + if number == 0 || number as usize > WIZARD_STEPS.len() { + return None; + } + Some(WIZARD_STEPS[(number - 1) as usize]) +} + +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct WizardStepState { + pub current_step: u32, + pub highest_step: u32, + #[serde(default)] + pub stale_from_step: Option<u32>, +} + +impl Default for WizardStepState { + fn default() -> Self { + Self { + current_step: 1, + highest_step: 1, + stale_from_step: None, + } + } +} + +impl WizardStepState { + pub fn advance(&mut self, target_step: u32) { + self.current_step = target_step; + if target_step > self.highest_step { + self.highest_step = target_step; + } + self.stale_from_step = None; + } +} + +#[derive(Debug, Clone, Serialize, Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct AdvanceWizardResult { + pub current_step: u32, + pub highest_step: u32, + pub step_name: String, +} + +pub fn advance_wizard_step( + current_state: Option<&WizardStepState>, + target_step: u32, +) -> Result<AdvanceWizardResult, String> { + if target_step == 0 || target_step as usize > WIZARD_STEPS.len() { + return Err(format!("Invalid step number: {target_step}")); + } + let mut state = current_state.cloned().unwrap_or_default(); + state.advance(target_step); + let step_name = + step_number_to_name(target_step).ok_or_else(|| format!("Unknown step: {target_step}"))?; + Ok(AdvanceWizardResult { + current_step: state.current_step, + highest_step: 
state.highest_step, + step_name: step_name.to_string(), + }) +} diff --git a/src-tauri/src/scm_watcher.rs b/src-tauri/src/scm_watcher.rs index ba8ea33..6a3ac5f 100644 --- a/src-tauri/src/scm_watcher.rs +++ b/src-tauri/src/scm_watcher.rs @@ -147,10 +147,7 @@ pub async fn detect_github_pr( ) -> Option { let output = Command::new("gh") .args([ - "pr", "list", - "--head", branch, - "--json", "number", - "--limit", "1", + "pr", "list", "--head", branch, "--json", "number", "--limit", "1", ]) .current_dir(work_dir) .output() @@ -237,11 +234,7 @@ pub async fn fetch_gitlab_comments( Ok(review_comments) } -pub async fn detect_gitlab_mr( - project_id: &str, - branch: &str, - work_dir: &Path, -) -> Option { +pub async fn detect_gitlab_mr(project_id: &str, branch: &str, work_dir: &Path) -> Option { let endpoint = format!( "projects/{}/merge_requests?source_branch={branch}&state=opened&per_page=1", urlencoding_simple(project_id) @@ -307,7 +300,10 @@ impl ReviewRouter { comment.reviewer )); - prompt.push_str(&format!("## Review Comment\n\n**PR #{}**\n", comment.pr_number)); + prompt.push_str(&format!( + "## Review Comment\n\n**PR #{}**\n", + comment.pr_number + )); if let Some(file_path) = &comment.file_path { prompt.push_str(&format!("**File:** `{file_path}`")); @@ -401,10 +397,7 @@ impl CommitWatcher { } } - pub async fn has_new_commit_since( - work_dir: &Path, - baseline_hash: &str, - ) -> bool { + pub async fn has_new_commit_since(work_dir: &Path, baseline_hash: &str) -> bool { let current = Self::get_head_hash(work_dir).await; match current { Some(hash) => hash != baseline_hash, @@ -428,9 +421,7 @@ impl CommitWatcher { } if Self::has_new_commit_since(work_dir, baseline_hash).await { - let new_hash = Self::get_head_hash(work_dir) - .await - .unwrap_or_default(); + let new_hash = Self::get_head_hash(work_dir).await.unwrap_or_default(); return CommitWatchResult::NewCommit(new_hash); } @@ -490,22 +481,13 @@ pub async fn handle_comment_lifecycle( .as_deref() .and_then(|fp| 
ReviewRouter::load_file_content(work_dir, fp)); - let prompt = ReviewRouter::build_prompt( - comment, - None, - file_content.as_deref(), - ); + let prompt = ReviewRouter::build_prompt(comment, None, file_content.as_deref()); comment.status = CommentStatus::Routed; let _ = ReviewRouter::route_to_agent(&prompt, agent_binary, work_dir).await; - let watch_result = CommitWatcher::wait_for_commit( - work_dir, - &baseline_hash, - timeout_secs, - 15, - ) - .await; + let watch_result = + CommitWatcher::wait_for_commit(work_dir, &baseline_hash, timeout_secs, 15).await; match watch_result { CommitWatchResult::NewCommit(_) => { @@ -513,10 +495,7 @@ pub async fn handle_comment_lifecycle( } CommitWatchResult::TimedOut => { comment.status = CommentStatus::Escalated; - let file_display = comment - .file_path - .as_deref() - .unwrap_or("unknown file"); + let file_display = comment.file_path.as_deref().unwrap_or("unknown file"); crate::notifications::notify_review_escalation( app, project_name, diff --git a/src-tauri/src/services/mod.rs b/src-tauri/src/services/mod.rs new file mode 100644 index 0000000..9872457 --- /dev/null +++ b/src-tauri/src/services/mod.rs @@ -0,0 +1,47 @@ +use crate::projects::{Project, ProjectDetail}; +use app_services::{ProjectDetail as SharedProjectDetail, ProjectQueryRecord, SaveDraftCommand}; +use ralph_core::prd::UserStory; + +pub mod monitor_adapter; +pub mod session_adapter; + +pub type InvokeProjectRecord = ProjectQueryRecord; +pub type InvokeProjectDetail = SharedProjectDetail; + +pub fn attach_runtime_aliases<R: tauri::Runtime>( + builder: tauri::Builder<R>, + handler: impl Fn(tauri::ipc::Invoke<R>) -> bool + Send + Sync + 'static, +) -> tauri::Builder<R> { + builder.invoke_handler(handler) +} + +pub fn into_project_record(project: Project) -> InvokeProjectRecord { + InvokeProjectRecord { + id: project.id, + name: project.name, + description: project.description, + status: project.status, + working_directory: project.working_directory, + created_at: project.created_at, 
updated_at: project.updated_at, + wizard_step: project.wizard_step, + } +} + +pub fn into_project_detail(detail: ProjectDetail) -> InvokeProjectDetail { + InvokeProjectDetail { + project: into_project_record(detail.project), + total_stories: detail.total_stories, + passed_count: detail.passed_count, + blocked_count: detail.blocked_count, + pending_count: detail.pending_count, + stories: detail.stories, + } +} + +pub fn save_draft_command(project_id: String, draft_json: String) -> SaveDraftCommand { + SaveDraftCommand { + project_id, + draft_json, + } +} diff --git a/src-tauri/src/services/monitor_adapter.rs b/src-tauri/src/services/monitor_adapter.rs new file mode 100644 index 0000000..a48175f --- /dev/null +++ b/src-tauri/src/services/monitor_adapter.rs @@ -0,0 +1,108 @@ +use app_services::monitor::{ + MonitorEvent, MonitorRepository, MonitorService, RuntimeMonitorService, +}; +use app_services::{MonitorSnapshot, MonitorStream, OutputEntry, ServiceError, SessionInfo}; +use rusqlite::OptionalExtension; +use tauri::Manager; + +pub struct TauriMonitorAdapter<'a, R: tauri::Runtime> { + window: &'a tauri::Window<R>, +} + +impl<'a, R: tauri::Runtime> TauriMonitorAdapter<'a, R> { + pub fn new(window: &'a tauri::Window<R>) -> Self { + Self { window } + } + + pub fn snapshot(&self, project_id: &str) -> Result<MonitorSnapshot, ServiceError> { + RuntimeMonitorService::new(Self::new(self.window)).snapshot(project_id) + } +} + +impl<R: tauri::Runtime> MonitorRepository for TauriMonitorAdapter<'_, R> { + fn latest_session(&self, project_id: &str) -> Result<Option<SessionInfo>, ServiceError> { + let db = self.window.state::<crate::db::DbState>(); + let conn = + db.0.lock() + .map_err(|_| ServiceError::Internal("database lock poisoned".into()))?; + + conn.query_row( + "SELECT id, started_at, ended_at FROM sessions WHERE project_id = ?1 ORDER BY started_at DESC LIMIT 1", + rusqlite::params![project_id], + |row| { + Ok(SessionInfo { + id: row.get(0)?, + started_at: row.get(1).ok(), + ended_at: row.get(2).ok(), + }) + }, + ) + .optional() + .map_err(|error| 
ServiceError::Internal(error.to_string())) + } + + fn recent_output( + &self, + project_id: &str, + limit: usize, + ) -> Result<Vec<OutputEntry>, ServiceError> { + let artifact_dir = + crate::storage::artifacts::project_artifact_dir(self.window.app_handle(), project_id) + .map_err(ServiceError::Internal)?; + let output_path = artifact_dir.join("agent_output.log"); + let content = std::fs::read_to_string(output_path).unwrap_or_default(); + + Ok(content + .lines() + .rev() + .filter(|line| !line.trim().is_empty()) + .take(limit) + .map(|line| OutputEntry { + project_id: project_id.to_string(), + session_id: None, + stream: MonitorStream::Stdout, + content: line.to_string(), + emitted_at: None, + }) + .collect::<Vec<_>>() + .into_iter() + .rev() + .collect()) + } + + fn recent_events( + &self, + project_id: &str, + limit: usize, + ) -> Result<Vec<MonitorEvent>, ServiceError> { + let db = self.window.state::<crate::db::DbState>(); + let conn = + db.0.lock() + .map_err(|_| ServiceError::Internal("database lock poisoned".into()))?; + let mut statement = conn + .prepare( + "SELECT s.id, i.story_id, i.agent_used, i.duration_secs, i.result + FROM iterations i + JOIN sessions s ON i.session_id = s.id + WHERE s.project_id = ?1 + ORDER BY i.started_at DESC + LIMIT ?2", + ) + .map_err(|error| ServiceError::Internal(error.to_string()))?; + let rows = statement + .query_map(rusqlite::params![project_id, limit as i64], |row| { + Ok(MonitorEvent::IterationCompleted { + project_id: project_id.to_string(), + session_id: row.get(0)?, + story_id: row.get(1)?, + agent: row.get(2)?, + duration_secs: row.get::<_, i64>(3)?.max(0) as u64, + result: row.get(4)?, + }) + }) + .map_err(|error| ServiceError::Internal(error.to_string()))?; + + rows.collect::<Result<Vec<_>, _>>() + .map_err(|error| ServiceError::Internal(error.to_string())) + } +} diff --git a/src-tauri/src/services/project_query_adapter.rs b/src-tauri/src/services/project_query_adapter.rs new file mode 100644 index 0000000..5c16b9d --- /dev/null +++ b/src-tauri/src/services/project_query_adapter.rs @@ 
-0,0 +1,199 @@ +use crate::models::ProjectStatus; +use app_services::{MonitorStream, OutputEntry, ServiceError}; +use ralph_core::prd::Prd; +use rusqlite::Connection; +use serde::Serialize; +use std::{ + collections::{BTreeMap, HashSet}, + path::{Path, PathBuf}, +}; + +#[derive(Debug, Clone, Serialize)] +#[serde(rename_all = "camelCase")] +pub struct HomeListing { + pub id: String, + pub name: String, + pub status: String, + pub total_stories: Option<usize>, + pub stories_completed: Option<usize>, + pub current_agent: Option<String>, +} + +pub type HomeListingGroups = BTreeMap<String, Vec<HomeListing>>; + +#[derive(Debug, Clone, Serialize, Default)] +#[serde(rename_all = "camelCase")] +pub struct MonitorSnapshot { + pub project_id: String, + pub status: String, + pub session_id: Option<String>, + pub session_started_at: Option<String>, + pub session_ended_at: Option<String>, + pub has_active_handle: bool, + pub is_stale: bool, + pub stories_total: Option<usize>, + pub stories_done: Option<usize>, + pub current_story: Option<String>, + pub has_partial_progress: bool, + pub recent_output: Vec<OutputEntry>, + pub events: Vec<String>, +} + +pub fn home_listings( + conn: &Connection, + active: &HashSet<String>, + artifacts: impl Fn(&str) -> Result<PathBuf, String>, +) -> Result<HomeListingGroups, ServiceError> { + let mut groups = BTreeMap::from([entry("draft"), entry("ready"), entry("paused")]); + let mut statement = conn + .prepare("SELECT id, name, status FROM projects ORDER BY updated_at DESC") + .map_err(db)?; + let rows = statement + .query_map([], |row| Ok((row.get(0)?, row.get(1)?, row.get(2)?))) + .map_err(db)?; + for row in rows { + let (id, name, db_status): (String, String, String) = row.map_err(db)?; + let dir = artifacts(&id).map_err(ServiceError::Internal)?; + let prd_path = dir.join("prd.json"); + let prd = Prd::load(&prd_path).ok(); + let status = ProjectStatus::resolve_canonical( + &db_status, + prd_path.exists(), + dir.join("config.json").exists(), + active.contains(&id), + ); + if let Some(key) = bucket(status) { + groups.get_mut(key).unwrap().push(HomeListing { + id, + name, + status: key.to_string(), + total_stories: 
prd.as_ref().map(Prd::total_stories), + stories_completed: prd.as_ref().map(Prd::passed_count), + current_agent: config_agent(&dir), + }); + } + } + Ok(groups) +} + +pub fn monitor_snapshot( + conn: &Connection, + dir: &Path, + project_id: &str, + has_active_handle: bool, +) -> Result<MonitorSnapshot, ServiceError> { + let db_status: String = conn + .query_row( + "SELECT status FROM projects WHERE id = ?1", + [project_id], + |row| row.get(0), + ) + .map_err(db)?; + let (session_id, started_at, ended_at) = conn + .query_row( + "SELECT id, started_at, ended_at FROM sessions WHERE project_id = ?1 ORDER BY started_at DESC LIMIT 1", + [project_id], + |row| Ok((Some(row.get(0)?), row.get(1)?, row.get(2)?)), + ) + .unwrap_or((None, None, None)); + let mut current_story = None; + let mut last_progress = None; + let mut statement = conn + .prepare("SELECT story_id, started_at FROM iterations WHERE session_id = ?1 ORDER BY started_at ASC") + .map_err(db)?; + let events = session_id.as_ref().map_or(Ok(Vec::new()), |id| { + let mut names = started_at + .iter() + .map(|_| "session_started".to_string()) + .collect::<Vec<_>>(); + let rows = statement + .query_map([id], |row| { + Ok((row.get::<_, String>(0)?, row.get::<_, String>(1)?)) + }) + .map_err(db)?; + for row in rows { + let (story_id, started_at) = row.map_err(db)?; + current_story = Some(story_id); + last_progress = Some(started_at); + names.push("iteration_completed".to_string()); + } + if ended_at.is_some() { + names.push("session_ended".to_string()); + } + Ok(names) + })?; + let prd_path = dir.join("prd.json"); + let prd = Prd::load(&prd_path).ok(); + let stories_total = prd.as_ref().map(Prd::total_stories); + let stories_done = prd.as_ref().map(Prd::passed_count); + let known_progress = [ + stories_total.is_some(), + stories_done.is_some(), + current_story.is_some(), + ] + .into_iter() + .filter(|value| *value) + .count(); + Ok(MonitorSnapshot { + project_id: project_id.to_string(), + status: ProjectStatus::resolve_canonical( + &db_status, + 
prd_path.exists(), + dir.join("config.json").exists(), + has_active_handle, + ) + .as_project_status() + .to_string(), + session_id, + session_started_at: started_at, + session_ended_at: ended_at.clone(), + has_active_handle, + is_stale: ended_at + .as_ref() + .is_some_and(|ended| last_progress.as_ref().map_or(true, |value| value < ended)), + stories_total, + stories_done, + current_story, + has_partial_progress: known_progress > 0 && known_progress < 3, + recent_output: recent_output(dir, project_id), + events, + }) +} + +fn entry(key: &str) -> (String, Vec<HomeListing>) { (key.to_string(), Vec::new()) } +fn bucket(status: ProjectStatus) -> Option<&'static str> { + match status { + ProjectStatus::Draft => Some("draft"), + ProjectStatus::Ready => Some("ready"), + ProjectStatus::Paused => Some("paused"), + _ => None, + } +} + +fn config_agent(dir: &Path) -> Option<String> { + std::fs::read_to_string(dir.join("config.json")) + .ok() + .and_then(|value| serde_json::from_str::<serde_json::Value>(&value).ok()) + .and_then(|value| { + value + .get("executeAgent") + .and_then(|value| value.as_str()) + .map(str::to_string) + }) +} + +fn recent_output(dir: &Path, project_id: &str) -> Vec<OutputEntry> { + std::fs::read_to_string(dir.join("agent_output.log")) + .unwrap_or_default() + .lines() + .filter(|line| !line.trim().is_empty()) + .map(|content| OutputEntry { + project_id: project_id.to_string(), + stream: MonitorStream::Stdout, + content: content.to_string(), + ..OutputEntry::default() + }) + .collect() +} + +fn db(error: rusqlite::Error) -> ServiceError { ServiceError::Internal(error.to_string()) } diff --git a/src-tauri/src/services/session_adapter.rs b/src-tauri/src/services/session_adapter.rs new file mode 100644 index 0000000..46509ca --- /dev/null +++ b/src-tauri/src/services/session_adapter.rs @@ -0,0 +1,280 @@ +use app_services::session::{ + IterationCounts, IterationSummary, LatestSessionRecord, RuntimeSessionService, + SessionRepository, SessionService, StoryCounts, +}; +use app_services::{ServiceError, 
SessionStats}; +use ralph_core::prd::Prd; +use rusqlite::{Connection, OptionalExtension}; +use tauri::Manager; + +use crate::commands::execution::IterationRow; +use crate::loop_manager::LoopError; + +#[path = "project_query_adapter.rs"] +pub mod project_query_adapter; +use project_query_adapter::{HomeListingGroups, MonitorSnapshot}; + +pub struct ProjectQueryAdapter<'a, R: tauri::Runtime> { + app: &'a tauri::AppHandle<R>, +} + +impl<'a, R: tauri::Runtime> ProjectQueryAdapter<'a, R> { + pub fn new(app: &'a tauri::AppHandle<R>) -> Self { + Self { app } + } + + pub fn home_listings(&self) -> Result<HomeListingGroups, ServiceError> { + let loop_state = self.app.state::(); + let active = loop_state + .0 + .lock() + .map(|handles| handles.keys().cloned().collect()) + .map_err(|_| ServiceError::Internal("loop state lock poisoned".into()))?; + let db = self.app.state::<crate::db::DbState>(); + let conn = + db.0.lock() + .map_err(|_| ServiceError::Internal("database lock poisoned".into()))?; + project_query_adapter::home_listings(&conn, &active, |id| { + crate::storage::artifacts::project_artifact_dir(self.app, id) + }) + } + + pub fn monitor_snapshot(&self, project_id: &str) -> Result<MonitorSnapshot, ServiceError> { + let loop_state = self.app.state::(); + let active = loop_state + .0 + .lock() + .map(|handles| handles.contains_key(project_id)) + .map_err(|_| ServiceError::Internal("loop state lock poisoned".into()))?; + let db = self.app.state::<crate::db::DbState>(); + let conn = + db.0.lock() + .map_err(|_| ServiceError::Internal("database lock poisoned".into()))?; + let dir = crate::storage::artifacts::project_artifact_dir(self.app, project_id) + .map_err(ServiceError::Internal)?; + project_query_adapter::monitor_snapshot(&conn, &dir, project_id, active) + } +} + +fn format_duration_label(duration_secs: i64) -> String { + if duration_secs <= 0 { + return "0s".to_string(); + } + if duration_secs < 60 { + return format!("{duration_secs}s"); + } + let minutes = duration_secs / 60; + let seconds = duration_secs % 60; + format!("{minutes}m {seconds}s") +} + +fn 
format_time_label(started_at: &str) -> String {
+    chrono::DateTime::parse_from_rfc3339(started_at).map_or_else(
+        |_| started_at.to_string(),
+        |value| value.format("%H:%M:%S").to_string(),
+    )
+}
+
+pub struct TauriSessionAdapter<'a, R: tauri::Runtime> {
+    window: &'a tauri::Window<R>,
+}
+
+impl<'a, R: tauri::Runtime> TauriSessionAdapter<'a, R> {
+    fn new(window: &'a tauri::Window<R>) -> Self {
+        Self { window }
+    }
+
+    fn with_connection<T>(
+        &self,
+        run: impl FnOnce(&Connection) -> Result<T, ServiceError>,
+    ) -> Result<T, ServiceError> {
+        let db = self.window.state::();
+        let conn =
+            db.0.lock()
+                .map_err(|_| ServiceError::Internal("database lock poisoned".into()))?;
+        run(&conn)
+    }
+}
+
+impl<R: tauri::Runtime> SessionRepository for TauriSessionAdapter<'_, R> {
+    fn is_running(&self, project_id: &str) -> Result<bool, ServiceError> {
+        let loop_state = self.window.state::();
+        let handles = loop_state
+            .0
+            .lock()
+            .map_err(|_| ServiceError::Internal("loop state lock poisoned".into()))?;
+        Ok(handles.contains_key(project_id))
+    }
+
+    fn latest_session_record(
+        &self,
+        project_id: &str,
+    ) -> Result<Option<LatestSessionRecord>, ServiceError> {
+        self.with_connection(|conn| {
+            conn.query_row(
+                "SELECT id, started_at FROM sessions WHERE project_id = ?1 ORDER BY started_at DESC LIMIT 1",
+                rusqlite::params![project_id],
+                |row| Ok(LatestSessionRecord { id: row.get(0)?, started_at: row.get(1)?
}),
+            )
+            .optional()
+            .map_err(|error| ServiceError::Internal(error.to_string()))
+        })
+    }
+
+    fn iteration_counts(&self, session_id: &str) -> Result<IterationCounts, ServiceError> {
+        self.with_connection(|conn| {
+            conn.query_row(
+                "SELECT COUNT(*), SUM(CASE WHEN result = 'success' THEN 1 ELSE 0 END), SUM(CASE WHEN result = 'rate_limited' THEN 1 ELSE 0 END) FROM iterations WHERE session_id = ?1",
+                rusqlite::params![session_id],
+                |row| {
+                    Ok(IterationCounts {
+                        total: row.get(0)?,
+                        success: row.get::<_, Option<i64>>(1)?.unwrap_or(0),
+                        rate_limited: row.get::<_, Option<i64>>(2)?.unwrap_or(0),
+                    })
+                },
+            )
+            .map_err(|error| ServiceError::Internal(error.to_string()))
+        })
+    }
+
+    fn latest_agent(&self, session_id: &str) -> Result<Option<String>, ServiceError> {
+        self.with_connection(|conn| {
+            conn.query_row(
+                "SELECT agent_used FROM iterations WHERE session_id = ?1 ORDER BY started_at DESC LIMIT 1",
+                rusqlite::params![session_id],
+                |row| row.get(0),
+            )
+            .optional()
+            .map_err(|error| ServiceError::Internal(error.to_string()))
+        })
+    }
+
+    fn story_counts(&self, project_id: &str) -> Result<StoryCounts, ServiceError> {
+        let artifact_dir =
+            crate::storage::artifacts::project_artifact_dir(self.window.app_handle(), project_id)
+                .map_err(ServiceError::Internal)?;
+        let prd_path = artifact_dir.join("prd.json");
+        if !prd_path.exists() {
+            return Ok(StoryCounts::default());
+        }
+        Prd::load(&prd_path)
+            .map(|prd| StoryCounts {
+                total: prd.total_stories(),
+                passed: prd.passed_count(),
+                blocked: prd.blocked_count(),
+                pending: prd.pending_count(),
+            })
+            .map_err(|error| ServiceError::Internal(error.to_string()))
+    }
+
+    fn stories_per_hour(
+        &self,
+        session: Option<&LatestSessionRecord>,
+        passed_stories: usize,
+    ) -> Result<f64, ServiceError> {
+        let Some(started_at) = session.and_then(|record| record.started_at.as_deref()) else {
+            return Ok(0.0);
+        };
+        let Ok(started_at) = chrono::DateTime::parse_from_rfc3339(started_at) else {
+            return Ok(0.0);
+        };
+        let elapsed_hours = (chrono::Utc::now() - started_at.with_timezone(&chrono::Utc))
.num_minutes() as f64
+            / 60.0;
+        if elapsed_hours > 0.0 {
+            Ok(passed_stories as f64 / elapsed_hours)
+        } else {
+            Ok(0.0)
+        }
+    }
+
+    fn iteration_history(
+        &self,
+        project_id: &str,
+        limit: usize,
+    ) -> Result<Vec<IterationSummary>, ServiceError> {
+        self.with_connection(|conn| {
+            let mut statement = conn
+                .prepare(
+                    "SELECT i.story_id, i.started_at, i.duration_secs, i.result, i.agent_used
+                     FROM iterations i
+                     JOIN sessions s ON i.session_id = s.id
+                     WHERE s.project_id = ?1
+                     ORDER BY i.started_at DESC
+                     LIMIT ?2",
+                )
+                .map_err(|error| ServiceError::Internal(error.to_string()))?;
+            let rows = statement
+                .query_map(rusqlite::params![project_id, limit as i64], |row| {
+                    Ok(IterationSummary {
+                        story_id: row.get(0)?,
+                        started_at: row.get(1)?,
+                        duration_secs: row.get(2)?,
+                        result: row.get(3)?,
+                        agent_used: row.get(4)?,
+                    })
+                })
+                .map_err(|error| ServiceError::Internal(error.to_string()))?;
+            rows.collect::<Result<Vec<_>, _>>()
+                .map_err(|error| ServiceError::Internal(error.to_string()))
+        })
+    }
+}
+
+#[tauri::command]
+pub async fn session_stats_command(
+    window: tauri::Window,
+    project_id: String,
+) -> Result<SessionStats, LoopError> {
+    RuntimeSessionService::new(TauriSessionAdapter::new(&window))
+        .session_stats(&required(project_id, "project_id").map_err(LoopError::Path)?)
+        .map_err(loop_error)
+}
+
+#[tauri::command]
+pub async fn get_iteration_history_command(
+    window: tauri::Window,
+    project_id: String,
+) -> Result<Vec<IterationRow>, LoopError> {
+    RuntimeSessionService::new(TauriSessionAdapter::new(&window))
+        .iteration_history(
+            &required(project_id, "project_id").map_err(LoopError::Path)?,
+            200,
+        )
+        .map(|rows| {
+            rows.into_iter()
+                .map(|row| {
+                    let started_at = row.started_at;
+                    let duration_secs = row.duration_secs;
+                    IterationRow {
+                        story_id: row.story_id,
+                        started_at: started_at.clone(),
+                        duration_secs,
+                        duration_label: format_duration_label(duration_secs),
+                        time_label: format_time_label(&started_at),
+                        result: row.result,
+                        agent_used: row.agent_used,
+                    }
+                })
+                .collect()
+        })
+        .map_err(loop_error)
+}
+
+fn required(value: String, name: &str) -> Result<String, String> {
+    let trimmed = value.trim().to_string();
+    if trimmed.is_empty() {
+        return Err(format!("{name} is required"));
+    }
+    Ok(trimmed)
+}
+
+fn loop_error(error: ServiceError) -> LoopError {
+    match error {
+        ServiceError::Invalid(message) => LoopError::Path(message),
+        ServiceError::NotFound(message)
+        | ServiceError::Conflict(message)
+        | ServiceError::Internal(message) => LoopError::Internal(message),
+    }
+}
diff --git a/src-tauri/src/services/wizard_session_adapter.rs b/src-tauri/src/services/wizard_session_adapter.rs
new file mode 100644
index 0000000..88bafc4
--- /dev/null
+++ b/src-tauri/src/services/wizard_session_adapter.rs
@@ -0,0 +1,415 @@
+use crate::projects::artifacts::{artifact_dir, non_empty_file_content};
+use crate::projects::config_types::ProjectConfig;
+use crate::projects::documents::load_text_artifact_with_legacy_fallback;
+use crate::projects::runtime_config::load_project_config_from_paths;
+use crate::projects::stories_crud::load_prd_from_paths;
+use crate::projects::{Project, ProjectError, WizardProjectData};
+use ralph_core::prd::UserStory;
+use serde::{Deserialize, Serialize};
+use serde_json::Value;
+use std::path::Path;
+use tauri::{AppHandle,
Runtime};
+
+#[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(rename_all = "camelCase")]
+pub struct WizardStepValidation {
+    pub valid: bool,
+    #[serde(default)]
+    pub issues: Vec<String>,
+}
+
+#[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(rename_all = "camelCase")]
+pub struct WizardSessionValidation {
+    pub describe: WizardStepValidation,
+    pub plan: WizardStepValidation,
+    pub atomize: WizardStepValidation,
+    pub configure: WizardStepValidation,
+    pub launch: WizardStepValidation,
+}
+
+#[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(rename_all = "camelCase")]
+pub struct CanonicalWizardSession {
+    pub project: Project,
+    pub wizard_step: String,
+    pub highest_step: u32,
+    pub project_data: WizardProjectData,
+    pub plan: Option<String>,
+    pub plan_complete: bool,
+    pub stories: Vec<UserStory>,
+    pub config: Option<ProjectConfig>,
+    pub prompt: Option<String>,
+    pub guardrails: Option<String>,
+    pub validation: WizardSessionValidation,
+}
+
+pub fn hydrate_canonical_session<R: Runtime>(
+    app: &AppHandle<R>,
+    project: &Project,
+    wizard_state_json: Option<&str>,
+) -> Result<CanonicalWizardSession, ProjectError> {
+    let artifacts = artifact_dir(app, &project.id)?;
+    let working_directory = Path::new(&project.working_directory);
+    let wizard_step = resolved_wizard_step(project, wizard_state_json);
+    let mut session = CanonicalWizardSession {
+        project: project.clone(),
+        highest_step: crate::projects::wizard_state::step_name_to_number(&wizard_step).unwrap_or(1),
+        wizard_step,
+        project_data: WizardProjectData {
+            name: project.name.clone(),
+            description: project.description.clone(),
+            working_directory: project.working_directory.clone(),
+            plan_agent: "claude".to_string(),
+            plan_model: None,
+            plan_effort: None,
+        },
+        plan: None,
+        plan_complete: false,
+        stories: Vec::new(),
+        config: None,
+        prompt: None,
+        guardrails: None,
+        validation: valid_validation(),
+    };
+
+    let stale_from_step = hydrate_draft(&artifacts, wizard_state_json, &mut session);
+    load_plan(&artifacts, working_directory, stale_from_step, &mut session);
+
load_stories(&artifacts, working_directory, stale_from_step, &mut session);
+    load_config(&artifacts, working_directory, stale_from_step, &mut session);
+    load_prompt(&artifacts, working_directory, stale_from_step, &mut session);
+    load_guardrails(&artifacts, working_directory, stale_from_step, &mut session);
+
+    Ok(session)
+}
+
+pub fn save_canonical_session<R: Runtime>(
+    app: &AppHandle<R>,
+    project: &Project,
+    draft_json: &str,
+    wizard_state_json: Option<&str>,
+) -> Result<crate::projects::wizard_state::CanonicalWizardSession, ProjectError> {
+    let artifacts = artifact_dir(app, &project.id)?;
+    std::fs::create_dir_all(&artifacts)?;
+    let previous = load_saved_session(&artifacts, project, wizard_state_json)?;
+    let mut session = crate::projects::wizard_state::CanonicalWizardSession::from_payload(
+        draft_json,
+        Some(&project.id),
+        project.wizard_step.as_deref(),
+    )?;
+    session.stale_from_step =
+        merged_stale_step(session.stale_from_step, invalidated_step(previous.as_ref(), &session));
+    clear_invalidated_descendants(&artifacts, &mut session)?;
+    std::fs::write(artifacts.join("draft.json"), session.to_json()?)?;
+    Ok(session)
+}
+
+fn hydrate_draft(
+    artifacts: &Path,
+    wizard_state_json: Option<&str>,
+    session: &mut CanonicalWizardSession,
+) -> Option<u32> {
+    let raw = match non_empty_file_content(&artifacts.join("draft.json")) {
+        Ok(content) => content.or_else(|| wizard_state_json.map(str::to_string)),
+        Err(err) => {
+            invalidate(&mut session.validation.describe, err.to_string());
+            wizard_state_json.map(str::to_string)
+        }
+    };
+    let Some(raw) = raw else {
+        return None;
+    };
+    let Ok(draft) = serde_json::from_str::<Value>(&raw) else {
+        invalidate(
+            &mut session.validation.describe,
+            "draft.json is not valid JSON",
+        );
+        return None;
+    };
+    let stale_from_step = draft
+        .get("staleFromStep")
+        .and_then(Value::as_u64)
+        .map(|value| value as u32);
+    let describe = draft.get("describe").unwrap_or(&draft);
+    set_text(&mut session.project_data.name, describe.get("name"));
+    set_text(
+        &mut session.project_data.description,
describe.get("description"),
+    );
+    set_text(
+        &mut session.project_data.working_directory,
+        describe.get("workingDirectory"),
+    );
+    set_text(
+        &mut session.project_data.plan_agent,
+        describe.get("planAgent"),
+    );
+    session.project_data.plan_model = string_field(describe.get("planModel"));
+    session.project_data.plan_effort = string_field(describe.get("planEffort"));
+    session.highest_step = draft
+        .get("highestStep")
+        .and_then(Value::as_u64)
+        .map_or(session.highest_step, |value| value as u32);
+    if draft
+        .get("plan")
+        .and_then(|value| value.get("completed"))
+        .and_then(Value::as_bool)
+        .unwrap_or(false)
+    {
+        session.plan_complete = true;
+    }
+    if let Some(configure) = draft.get("configure").cloned() {
+        match serde_json::from_value::<ProjectConfig>(configure) {
+            Ok(config) => session.config = Some(config),
+            Err(err) => invalidate(&mut session.validation.configure, err.to_string()),
+        }
+    }
+    stale_from_step
+}
+
+fn load_plan(
+    artifacts: &Path,
+    working_directory: &Path,
+    stale_from_step: Option<u32>,
+    session: &mut CanonicalWizardSession,
+) {
+    if step_is_stale(stale_from_step, "plan") {
+        invalidate(&mut session.validation.plan, "plan.md is missing");
+        return;
+    }
+    let mut failed = false;
+    match load_text_artifact_with_legacy_fallback(artifacts, working_directory, "plan.md") {
+        Ok(plan) => session.plan = plan,
+        Err(err) => {
+            failed = true;
+            invalidate(&mut session.validation.plan, err.to_string());
+        }
+    }
+    session.plan_complete |= session.plan.is_some();
+    if session.plan.is_none() && !failed {
+        invalidate(&mut session.validation.plan, "plan.md is missing");
+    }
+}
+
+fn load_stories(
+    artifacts: &Path,
+    working_directory: &Path,
+    stale_from_step: Option<u32>,
+    session: &mut CanonicalWizardSession,
+) {
+    if step_is_stale(stale_from_step, "plan") {
+        invalidate(&mut session.validation.atomize, "prd.json is missing");
+        return;
+    }
+    match load_prd_from_paths(artifacts, working_directory) {
+        Ok(Some(prd)) => session.stories = prd.stories,
Ok(None) => invalidate(&mut session.validation.atomize, "prd.json is missing"),
+        Err(err) => invalidate(&mut session.validation.atomize, err.to_string()),
+    }
+}
+
+fn load_config(
+    artifacts: &Path,
+    working_directory: &Path,
+    stale_from_step: Option<u32>,
+    session: &mut CanonicalWizardSession,
+) {
+    if step_is_stale(stale_from_step, "configure") {
+        if session.config.is_none() {
+            invalidate(&mut session.validation.configure, "config.json is missing");
+        }
+        return;
+    }
+    match load_project_config_from_paths(artifacts, working_directory) {
+        Ok(Some(config)) => session.config = Some(config),
+        Ok(None) if session.config.is_none() => {
+            invalidate(&mut session.validation.configure, "config.json is missing");
+        }
+        Err(err) => invalidate(&mut session.validation.configure, err.to_string()),
+        Ok(None) => {}
+    }
+}
+
+fn load_prompt(
+    artifacts: &Path,
+    working_directory: &Path,
+    stale_from_step: Option<u32>,
+    session: &mut CanonicalWizardSession,
+) {
+    if step_is_stale(stale_from_step, "launch") {
+        invalidate(&mut session.validation.launch, "prompt.md is missing");
+        return;
+    }
+    let mut failed = false;
+    match load_text_artifact_with_legacy_fallback(artifacts, working_directory, "prompt.md") {
+        Ok(prompt) => session.prompt = prompt,
+        Err(err) => {
+            failed = true;
+            invalidate(&mut session.validation.launch, err.to_string());
+        }
+    }
+    if session.prompt.is_none() && !failed {
+        invalidate(&mut session.validation.launch, "prompt.md is missing");
+    }
+}
+
+fn load_guardrails(
+    artifacts: &Path,
+    working_directory: &Path,
+    stale_from_step: Option<u32>,
+    session: &mut CanonicalWizardSession,
+) {
+    if step_is_stale(stale_from_step, "launch") {
+        invalidate(&mut session.validation.launch, "guardrails.md is missing");
+        return;
+    }
+    let mut failed = false;
+    match load_text_artifact_with_legacy_fallback(artifacts, working_directory, "guardrails.md") {
+        Ok(guardrails) => session.guardrails = guardrails,
+        Err(err) => {
+            failed = true;
+            invalidate(&mut
session.validation.launch, err.to_string());
+        }
+    }
+    if session.guardrails.is_none() && !failed {
+        invalidate(&mut session.validation.launch, "guardrails.md is missing");
+    }
+}
+
+fn resolved_wizard_step(project: &Project, wizard_state_json: Option<&str>) -> String {
+    project
+        .wizard_step
+        .clone()
+        .or_else(|| wizard_state_json.and_then(extract_step))
+        .or_else(|| (project.status == "draft").then(|| "describe".to_string()))
+        .unwrap_or_else(|| "describe".to_string())
+}
+
+fn extract_step(raw: &str) -> Option<String> {
+    serde_json::from_str::<Value>(raw).ok().and_then(|value| {
+        value
+            .get("currentStep")
+            .and_then(Value::as_str)
+            .map(str::to_string)
+    })
+}
+
+fn load_saved_session(
+    artifacts: &Path,
+    project: &Project,
+    wizard_state_json: Option<&str>,
+) -> Result<Option<crate::projects::wizard_state::CanonicalWizardSession>, ProjectError> {
+    non_empty_file_content(&artifacts.join("draft.json"))?
+        .or_else(|| wizard_state_json.map(str::to_string))
+        .map(|raw| {
+            crate::projects::wizard_state::CanonicalWizardSession::from_payload(
+                &raw,
+                Some(&project.id),
+                project.wizard_step.as_deref(),
+            )
+            .map_err(ProjectError::from)
+        })
+        .transpose()
+}
+
+fn invalidated_step(
+    previous: Option<&crate::projects::wizard_state::CanonicalWizardSession>,
+    current: &crate::projects::wizard_state::CanonicalWizardSession,
+) -> Option<u32> {
+    let previous = previous?;
+    if session_field_changed(&previous.describe, &current.describe) {
+        return crate::projects::wizard_state::step_name_to_number("plan");
+    }
+    if session_field_changed(&previous.configure, &current.configure) {
+        return crate::projects::wizard_state::step_name_to_number("configure");
+    }
+    None
+}
+
+fn merged_stale_step(current: Option<u32>, derived: Option<u32>) -> Option<u32> {
+    match (current, derived) {
+        (Some(current), Some(derived)) => Some(current.min(derived)),
+        (Some(current), None) => Some(current),
+        (None, Some(derived)) => Some(derived),
+        (None, None) => None,
+    }
+}
+
+fn clear_invalidated_descendants(
+    artifacts: &Path,
+    session: &mut
crate::projects::wizard_state::CanonicalWizardSession,
+) -> Result<(), ProjectError> {
+    if step_is_stale(session.stale_from_step, "plan") {
+        clear_artifacts(
+            artifacts,
+            &["plan.md", "prd.json", "config.json", "prompt.md", "guardrails.md"],
+        )?;
+        session.plan = crate::projects::wizard_state::WizardPlanState::default();
+        session.atomize = crate::projects::wizard_state::WizardAtomizeState::default();
+    }
+    if step_is_stale(session.stale_from_step, "configure") {
+        clear_artifacts(artifacts, &["config.json"])?;
+    }
+    if step_is_stale(session.stale_from_step, "launch") {
+        clear_artifacts(artifacts, &["prompt.md", "guardrails.md"])?;
+        session.prompt = None;
+        session.guardrails = None;
+    }
+    Ok(())
+}
+
+fn clear_artifacts(artifacts: &Path, file_names: &[&str]) -> Result<(), ProjectError> {
+    for file_name in file_names {
+        let file_path = artifacts.join(file_name);
+        if file_path.exists() {
+            std::fs::remove_file(file_path)?;
+        }
+    }
+    Ok(())
+}
+
+fn step_is_stale(stale_from_step: Option<u32>, step_name: &str) -> bool {
+    let Some(step_number) = crate::projects::wizard_state::step_name_to_number(step_name) else {
+        return false;
+    };
+    stale_from_step.is_some_and(|stale| stale <= step_number)
+}
+
+fn session_field_changed<T: Serialize>(previous: &T, current: &T) -> bool {
+    serde_json::to_value(previous).ok() != serde_json::to_value(current).ok()
+}
+
+fn string_field(value: Option<&Value>) -> Option<String> {
+    value.and_then(Value::as_str).map(str::to_string)
+}
+
+fn set_text(target: &mut String, value: Option<&Value>) {
+    if let Some(value) = value
+        .and_then(Value::as_str)
+        .filter(|value| !value.is_empty())
+    {
+        *target = value.to_string();
+    }
+}
+
+fn valid_validation() -> WizardSessionValidation {
+    WizardSessionValidation {
+        describe: valid_step(),
+        plan: valid_step(),
+        atomize: valid_step(),
+        configure: valid_step(),
+        launch: valid_step(),
+    }
+}
+
+fn valid_step() -> WizardStepValidation {
+    WizardStepValidation {
+        valid: true,
+        issues: Vec::new(),
+    }
+}
+
+fn
invalidate(step: &mut WizardStepValidation, issue: impl Into<String>) {
+    step.valid = false;
+    step.issues.push(issue.into());
+}
diff --git a/src-tauri/src/shell_resolve.rs b/src-tauri/src/shell_resolve.rs
new file mode 100644
index 0000000..c3df01e
--- /dev/null
+++ b/src-tauri/src/shell_resolve.rs
@@ -0,0 +1,100 @@
+fn shell_quote_unix(value: &str) -> String {
+    if value.is_empty() {
+        "''".to_string()
+    } else {
+        format!("'{}'", value.replace('\'', "'\"'\"'"))
+    }
+}
+
+#[cfg(windows)]
+fn shell_quote_windows(value: &str) -> String {
+    if value.is_empty() {
+        "\"\"".to_string()
+    } else {
+        format!("\"{}\"", value.replace('"', "\"\""))
+    }
+}
+
+fn resolved_path_env() -> String {
+    let path = crate::agent_runtime_env::probe_path_env();
+    if path.is_empty() {
+        std::env::var("PATH").unwrap_or_default()
+    } else {
+        path
+    }
+}
+
+pub fn resolve_shell() -> (String, Vec<String>) {
+    #[cfg(windows)]
+    {
+        ("cmd.exe".to_string(), vec!["/C".to_string()])
+    }
+    #[cfg(not(windows))]
+    {
+        let path_env = resolved_path_env();
+        if let Ok(shell) = std::env::var("SHELL") {
+            if !shell.trim().is_empty() && std::path::Path::new(&shell).exists() {
+                return (shell, vec!["-lc".to_string()]);
+            }
+        }
+        for candidate in ["zsh", "sh", "bash"] {
+            if crate::agent_runtime_env::resolve_binary_path(candidate, &path_env).is_some() {
+                return (candidate.to_string(), vec!["-lc".to_string()]);
+            }
+        }
+        ("sh".to_string(), vec!["-lc".to_string()])
+    }
+}
+
+pub fn resolve_binary_via_shell(binary: &str) -> (String, Vec<String>) {
+    #[cfg(windows)]
+    {
+        (
+            "cmd.exe".to_string(),
+            vec!["/C".to_string(), format!("where {binary}")],
+        )
+    }
+    #[cfg(not(windows))]
+    {
+        let (shell, mut shell_args) = resolve_shell();
+        shell_args.push(format!("command -v {binary}"));
+        (shell, shell_args)
+    }
+}
+
+pub fn build_null_stdin_command(binary: &str, args: &[String]) -> (String, Vec<String>) {
+    #[cfg(windows)]
+    {
+        let mut parts = Vec::with_capacity(args.len() + 1);
+        parts.push(shell_quote_windows(binary));
+        for value in args
{
+            parts.push(shell_quote_windows(value));
+        }
+        (
+            "cmd.exe".to_string(),
+            vec!["/C".to_string(), format!("{} < NUL", parts.join(" "))],
+        )
+    }
+    #[cfg(not(windows))]
+    {
+        let mut parts = Vec::with_capacity(args.len() + 1);
+        parts.push(shell_quote_unix(binary));
+        for value in args {
+            parts.push(shell_quote_unix(value));
+        }
+        let (shell, mut shell_args) = resolve_shell();
+        shell_args.push(format!("{} < /dev/null", parts.join(" ")));
+        (shell, shell_args)
+    }
+}
+
+pub fn is_shell_exec_line(line: &str) -> bool {
+    let trimmed = line.trim();
+    if trimmed == "exec" || trimmed.starts_with("exec ") {
+        return true;
+    }
+    if trimmed.contains(" -lc") && (trimmed.contains("zsh") || trimmed.contains("/sh")) {
+        return true;
+    }
+    trimmed.contains(" /C") && (trimmed.contains("cmd.exe") || trimmed.starts_with("cmd "))
+}
diff --git a/src-tauri/src/storage/artifacts.rs b/src-tauri/src/storage/artifacts.rs
index 6e52bfe..5f95091 100644
--- a/src-tauri/src/storage/artifacts.rs
+++ b/src-tauri/src/storage/artifacts.rs
@@ -1,7 +1,10 @@
 use std::path::{Path, PathBuf};
-use tauri::{AppHandle, Manager};
+use tauri::{AppHandle, Manager, Runtime};
 
-pub fn project_artifact_dir(app: &AppHandle, project_id: &str) -> Result<PathBuf, String> {
+pub fn project_artifact_dir<R: Runtime>(
+    app: &AppHandle<R>,
+    project_id: &str,
+) -> Result<PathBuf, String> {
     app.path()
         .app_data_dir()
         .map(|data_dir| data_dir.join("projects").join(project_id))
diff --git a/src-tauri/src/storage/db.rs b/src-tauri/src/storage/db.rs
index c329024..3e9ad9c 100644
--- a/src-tauri/src/storage/db.rs
+++ b/src-tauri/src/storage/db.rs
@@ -1,6 +1,6 @@
 use rusqlite::Connection;
 use std::sync::Mutex;
-use tauri::{AppHandle, Manager};
+use tauri::{AppHandle, Manager, Runtime};
 use thiserror::Error;
 
 #[derive(Debug, Error)]
@@ -16,7 +16,7 @@ pub enum DbError {
 pub struct DbState(pub Mutex<Connection>);
 
 impl DbState {
-    pub fn open(app: &AppHandle) -> Result<Self, DbError> {
+    pub fn open<R: Runtime>(app: &AppHandle<R>) -> Result<Self, DbError> {
         let data_dir = app
             .path()
             .app_data_dir()
@@ -42,6 +42,18 @@ impl
DbState {
             );",
         )?;
+        conn.busy_timeout(std::time::Duration::from_secs(5))?;
+        conn.execute_batch("BEGIN EXCLUSIVE;")?;
+        let result = Self::apply_pending_migrations(conn);
+        if result.is_err() {
+            let _ = conn.execute_batch("ROLLBACK;");
+        } else {
+            conn.execute_batch("COMMIT;")?;
+        }
+        result
+    }
+
+    fn apply_pending_migrations(conn: &Connection) -> Result<(), DbError> {
         let current_version: i64 = conn
             .query_row(
                 "SELECT COALESCE(MAX(version), 0) FROM _migrations",
@@ -92,7 +104,7 @@ impl DbState {
                 loop_status TEXT NOT NULL DEFAULT 'interrupted',
                 saved_at TEXT NOT NULL DEFAULT (datetime('now'))
             );
-            INSERT INTO _migrations (version) VALUES (1);",
+            INSERT OR IGNORE INTO _migrations (version) VALUES (1);",
             )?;
         }
@@ -100,7 +112,7 @@ impl DbState {
             conn.execute_batch(
                 "ALTER TABLE projects ADD COLUMN wizard_step TEXT DEFAULT NULL;
             ALTER TABLE projects ADD COLUMN wizard_state_json TEXT DEFAULT NULL;
-            INSERT INTO _migrations (version) VALUES (2);",
+            INSERT OR IGNORE INTO _migrations (version) VALUES (2);",
             )?;
         }
@@ -117,14 +129,14 @@ impl DbState {
                 display_name TEXT,
                 PRIMARY KEY (connection_id, repo_path)
             );
-            INSERT INTO _migrations (version) VALUES (3);",
+            INSERT OR IGNORE INTO _migrations (version) VALUES (3);",
             )?;
         }
 
         if current_version < 4 {
             conn.execute_batch(
                 "ALTER TABLE projects ADD COLUMN notification_prefs TEXT DEFAULT NULL;
-            INSERT INTO _migrations (version) VALUES (4);",
+            INSERT OR IGNORE INTO _migrations (version) VALUES (4);",
             )?;
         }
@@ -137,7 +149,7 @@ impl DbState {
                 config_json TEXT NOT NULL DEFAULT '{}',
                 updated_at TEXT NOT NULL DEFAULT (datetime('now'))
             );
-            INSERT INTO _migrations (version) VALUES (5);",
+            INSERT OR IGNORE INTO _migrations (version) VALUES (5);",
             )?;
         }
@@ -159,7 +171,25 @@ impl DbState {
             );
             CREATE INDEX IF NOT EXISTS idx_ask_messages_conv
                 ON ask_messages(conversation_id, created_at);
-            INSERT INTO _migrations (version) VALUES (6);",
+            INSERT OR IGNORE INTO _migrations (version) VALUES (6);",
+            )?;
+        }
+
+        if current_version < 7
{
+            conn.execute_batch(
+                "CREATE TABLE IF NOT EXISTS notifications (
+                id TEXT PRIMARY KEY,
+                project_id TEXT NOT NULL REFERENCES projects(id),
+                notification_type TEXT NOT NULL,
+                title TEXT NOT NULL,
+                message TEXT NOT NULL,
+                ring_color TEXT NOT NULL DEFAULT 'cyan',
+                read INTEGER NOT NULL DEFAULT 0,
+                timestamp INTEGER NOT NULL
+            );
+            CREATE INDEX IF NOT EXISTS idx_notifications_project
+                ON notifications(project_id, timestamp DESC);
+            INSERT OR IGNORE INTO _migrations (version) VALUES (7);",
             )?;
         }
@@ -186,7 +216,7 @@ impl DbState {
         stmt.query_map([], |row| {
             Ok((row.get::<_, String>(0)?, row.get::<_, String>(1)?))
         })
-        .map(|rows| rows.filter_map(|row| row.ok()).collect())
+        .map(|rows| rows.filter_map(Result::ok).collect())
         .unwrap_or_default()
 }
diff --git a/src-tauri/src/storage/migration.rs b/src-tauri/src/storage/migration.rs
index e1ab50a..7cfa7c3 100644
--- a/src-tauri/src/storage/migration.rs
+++ b/src-tauri/src/storage/migration.rs
@@ -95,10 +95,10 @@ fn copy_db_bundle(source_db: &Path, target_db: &Path) -> Result<(), String> {
 }
 
 fn sidecar_path(db_path: &Path, suffix: &str) -> PathBuf {
-    let file_name = db_path
-        .file_name()
-        .map(|name| name.to_string_lossy().to_string())
-        .unwrap_or_else(|| "loopforge.db".to_string());
+    let file_name = db_path.file_name().map_or_else(
+        || "loopforge.db".to_string(),
+        |name| name.to_string_lossy().to_string(),
+    );
     db_path.with_file_name(format!("{file_name}{suffix}"))
 }
diff --git a/src-tauri/src/summary_generator.rs b/src-tauri/src/summary_generator.rs
index 948285d..c7e417b 100644
--- a/src-tauri/src/summary_generator.rs
+++ b/src-tauri/src/summary_generator.rs
@@ -102,9 +102,7 @@ fn collect_agent_usage(conn: &rusqlite::Connection, session_id: &str) -> Vec
         .collect()
 }
 
-pub(crate) fn classify_complexity_for_test(files_changed: usize, total_lines: i64, story_count: usize) -> &'static str {
+pub(crate) fn classify_complexity_for_test(
+    files_changed: usize,
+    total_lines: i64,
+    story_count: usize,
+) -> &'static str {
     classify_complexity(files_changed, total_lines, story_count)
 }
@@ -184,16 +186,30 @@ pub fn generate_summary(
     let diff_stats = collect_diff_stats(work_dir);
     let exec_secs = total_execution_time(conn, session_id);
 
-    let passed = stories.iter().filter(|story| story.status == "success").count();
-    let blocked = stories.iter().filter(|story| story.status == "failed" || story.status == "blocked").count();
-    let skipped = stories.iter().filter(|story| story.status == "pending").count();
+    let passed = stories
+        .iter()
+        .filter(|story| story.status == "success")
+        .count();
+    let blocked = stories
+        .iter()
+        .filter(|story| story.status == "failed" || story.status == "blocked")
+        .count();
+    let skipped = stories
+        .iter()
+        .filter(|story| story.status == "pending")
+        .count();
 
     let total_insertions: i64 = diff_stats.iter().map(|stat| stat.insertions).sum();
     let total_deletions: i64 = diff_stats.iter().map(|stat| stat.deletions).sum();
     let files_changed = diff_stats.len();
     let total_stories = stories.len();
-    let complexity = classify_complexity(files_changed, total_insertions + total_deletions, total_stories).to_string();
+    let complexity = classify_complexity(
+        files_changed,
+        total_insertions + total_deletions,
+        total_stories,
+    )
+    .to_string();
 
     let completion_status = if blocked == 0 && skipped == 0 {
         "SUCCESS".to_string()
@@ -289,10 +305,7 @@ fn fallback_narrative(summary: &LoopSummary) -> String {
 }
 
 fn artifact_dir(app: &AppHandle, project_id: &str) -> Result<PathBuf, String> {
-    let data_dir = app
-        .path()
-        .app_data_dir()
-        .map_err(|err| err.to_string())?;
+    let data_dir = app.path().app_data_dir().map_err(|err| err.to_string())?;
     Ok(data_dir.join("projects").join(project_id))
 }
@@ -312,7 +325,14 @@ pub async fn generate_loop_summary(
     let prd = Prd::load(&prd_path).map_err(|err| err.to_string())?;
     let work_dir = PathBuf::from(&working_directory);
 
-    let mut summary = generate_summary(&conn, &project_id, &project_name, &session_id, &prd, &work_dir);
+
let mut summary = generate_summary(
+        &conn,
+        &project_id,
+        &project_name,
+        &session_id,
+        &prd,
+        &work_dir,
+    );
     summary.narrative = render_narrative(&summary);
 
     let summary_path = artifact_path.join("summary.json");
diff --git a/src-tauri/src/test_env_lock.rs b/src-tauri/src/test_env_lock.rs
new file mode 100644
index 0000000..84b44f1
--- /dev/null
+++ b/src-tauri/src/test_env_lock.rs
@@ -0,0 +1,3 @@
+use std::sync::{LazyLock, Mutex};
+
+pub(crate) static ENV_LOCK: LazyLock<Mutex<()>> = LazyLock::new(|| Mutex::new(()));
diff --git a/src-tauri/src/test_support/planning.rs b/src-tauri/src/test_support/planning.rs
index c2f8027..b54ef66 100644
--- a/src-tauri/src/test_support/planning.rs
+++ b/src-tauri/src/test_support/planning.rs
@@ -142,6 +142,7 @@ fn batch(
     PlanActivityBatchPayload {
         project_id: project_id.to_string(),
         events,
+        plan_content: plan_content_delta.to_string(),
         plan_content_delta: plan_content_delta.to_string(),
     }
 }
diff --git a/src-tauri/src/test_support/tests.rs b/src-tauri/src/test_support/tests.rs
index bef6418..f28e97d 100644
--- a/src-tauri/src/test_support/tests.rs
+++ b/src-tauri/src/test_support/tests.rs
@@ -1,9 +1,7 @@
 use super::agents::fixture_agent_names;
 use super::runtime::{resolve_test_mode, FixtureSet, TestConfigError, TestMode};
 use std::path::PathBuf;
-use std::sync::{Mutex, MutexGuard};
-
-static ENV_LOCK: Mutex<()> = Mutex::new(());
+use std::sync::MutexGuard;
 
 const TEST_ENV_KEYS: [&str; 4] = [
     "LOOPFORGE_TEST_MODE",
@@ -19,7 +17,9 @@ struct EnvGuard {
 
 impl EnvGuard {
     fn new() -> Self {
-        let lock = ENV_LOCK.lock().unwrap_or_else(|err| err.into_inner());
+        let lock = crate::test_env_lock::ENV_LOCK
+            .lock()
+            .unwrap_or_else(|err| err.into_inner());
         let values = TEST_ENV_KEYS
             .into_iter()
             .map(|key| (key, std::env::var(key).ok()))
diff --git a/src-tauri/src/tests.rs b/src-tauri/src/tests.rs
index f6d97a1..7d5c0b2 100644
--- a/src-tauri/src/tests.rs
+++ b/src-tauri/src/tests.rs
@@ -141,7 +141,12 @@ fn
list_projects_groups_correctly_by_status() {
         .filter_map(|result| result.ok())
         .collect();
 
-    let count = |target: &str| statuses.iter().filter(|status| status.as_str() == target).count();
+    let count = |target: &str| {
+        statuses
+            .iter()
+            .filter(|status| status.as_str() == target)
+            .count()
+    };
 
     assert_eq!(count("active"), 1);
     assert_eq!(count("paused"), 1);
@@ -361,29 +366,34 @@ fn ci_feedback_loop_verification_retries_tracked() {
         "INSERT INTO projects (id, name, status, working_directory, created_at, updated_at) VALUES ('proj-ci', 'CI Test', 'active', '/tmp', ?1, ?2)",
         rusqlite::params![now, now],
-    ).unwrap();
+    )
+    .unwrap();
 
     conn.execute(
         "INSERT INTO sessions (id, project_id, started_at) VALUES ('sess-ci', 'proj-ci', ?1)",
         rusqlite::params![now],
-    ).unwrap();
+    )
+    .unwrap();
 
     conn.execute(
         "INSERT INTO iterations (id, session_id, story_id, duration_secs, result, agent_used) VALUES ('iter-1', 'sess-ci', 'S-001', 10, 'failed', 'claude')",
         [],
-    ).unwrap();
+    )
+    .unwrap();
 
     conn.execute(
         "INSERT INTO iterations (id, session_id, story_id, duration_secs, result, agent_used) VALUES ('iter-2', 'sess-ci', 'S-001', 15, 'failed', 'claude')",
         [],
-    ).unwrap();
+    )
+    .unwrap();
 
     conn.execute(
         "INSERT INTO iterations (id, session_id, story_id, duration_secs, result, agent_used) VALUES ('iter-3', 'sess-ci', 'S-001', 20, 'success', 'claude')",
         [],
-    ).unwrap();
+    )
+    .unwrap();
 
     let attempts: i64 = conn
         .query_row(
@@ -411,7 +421,8 @@ fn connection_crud_lifecycle() {
     conn.execute(
         "INSERT INTO connections (id, name) VALUES ('conn-1', 'Monorepo Connection')",
         [],
-    ).unwrap();
+    )
+    .unwrap();
 
     conn.execute(
         "INSERT INTO connection_repos (connection_id, repo_path, display_name) VALUES ('conn-1', '/tmp/repo-a', 'Repo A')",
         [],
@@ -430,7 +441,8 @@ fn connection_crud_lifecycle() {
         .unwrap();
     assert_eq!(repo_count, 2);
 
-    conn.execute("DELETE FROM connections WHERE id = 'conn-1'", []).unwrap();
+    conn.execute("DELETE FROM connections WHERE id = 'conn-1'", [])
+        .unwrap();
 
     let
leftover: i64 = conn .query_row( @@ -451,13 +463,16 @@ fn notification_prefs_stored_and_retrieved() { "INSERT INTO projects (id, name, status, working_directory, created_at, updated_at) VALUES ('proj-notif', 'Notif Test', 'active', '/tmp', ?1, ?2)", rusqlite::params![now, now], - ).unwrap(); + ) + .unwrap(); - let prefs_json = r#"{"story_blocked":{"ring":true,"os":true},"loop_completed":{"ring":true,"os":false}}"#; + let prefs_json = + r#"{"story_blocked":{"ring":true,"os":true},"loop_completed":{"ring":true,"os":false}}"#; conn.execute( "UPDATE projects SET notification_prefs = ?1 WHERE id = 'proj-notif'", rusqlite::params![prefs_json], - ).unwrap(); + ) + .unwrap(); let stored: Option<String> = conn .query_row( @@ -503,13 +518,15 @@ fn session_stats_calculation() { "INSERT INTO projects (id, name, status, working_directory, created_at, updated_at) VALUES ('proj-stats', 'Stats Test', 'active', '/tmp', ?1, ?2)", rusqlite::params![now, now], - ).unwrap(); + ) + .unwrap(); conn.execute( "INSERT INTO sessions (id, project_id, started_at, total_iterations) VALUES ('sess-stats', 'proj-stats', ?1, 0)", rusqlite::params![now], - ).unwrap(); + ) + .unwrap(); for idx in 0..5 { let iter_id = format!("iter-s-{idx}"); @@ -518,7 +535,8 @@ fn session_stats_calculation() { "INSERT INTO iterations (id, session_id, story_id, duration_secs, result, agent_used) VALUES (?1, 'sess-stats', ?2, ?3, ?4, 'claude')", rusqlite::params![iter_id, format!("S-{:03}", idx + 1), (idx + 1) * 60, result], - ).unwrap(); + ) + .unwrap(); } let total: i64 = conn diff --git a/src-tauri/src/tray.rs b/src-tauri/src/tray.rs index f2a25d7..b186df8 100644 --- a/src-tauri/src/tray.rs +++ b/src-tauri/src/tray.rs @@ -49,7 +49,7 @@ pub fn setup_tray(app: &mut tauri::App) -> tauri::Result<()> { Ok(()) } -pub fn update_tooltip(app: &AppHandle, active_loops: usize) { +pub fn update_tooltip(app: &AppHandle, active_loops: usize) { if let Some(tray) = app.tray_by_id(TRAY_ID) { let tooltip = if active_loops == 0 {
"LoopForge".to_string() diff --git a/src-tauri/src/worktree_manager.rs b/src-tauri/src/worktree_manager.rs index 5567d5b..344204a 100644 --- a/src-tauri/src/worktree_manager.rs +++ b/src-tauri/src/worktree_manager.rs @@ -136,12 +136,20 @@ pub async fn create_worktree( let worktree_path = format!(".loopforge/worktrees/{loop_name}"); let branch = format!("loopforge/{loop_name}"); - let (ok, stdout, _stderr) = - git_run(&app, &working_directory, &["worktree", "add", &worktree_path, "-b", &branch]).await?; + let (ok, stdout, _stderr) = git_run( + &app, + &working_directory, + &["worktree", "add", &worktree_path, "-b", &branch], + ) + .await?; if !ok { - let (ok2, stdout2, stderr2) = - git_run(&app, &working_directory, &["worktree", "add", &worktree_path, &branch]).await?; + let (ok2, stdout2, stderr2) = git_run( + &app, + &working_directory, + &["worktree", "add", &worktree_path, &branch], + ) + .await?; if !ok2 { return Err(WorktreeError::Git(stderr2)); } @@ -170,8 +178,12 @@ pub async fn list_worktrees( app: AppHandle, working_directory: String, ) -> Result, WorktreeError> { - let (ok, stdout, stderr) = - git_run(&app, &working_directory, &["worktree", "list", "--porcelain"]).await?; + let (ok, stdout, stderr) = git_run( + &app, + &working_directory, + &["worktree", "list", "--porcelain"], + ) + .await?; if !ok { return Err(WorktreeError::Git(stderr)); @@ -187,8 +199,12 @@ pub async fn remove_worktree( worktree_path: String, delete_branch: bool, ) -> Result<(), WorktreeError> { - let (ok, _stdout, stderr) = - git_run(&app, &working_directory, &["worktree", "remove", &worktree_path, "--force"]).await?; + let (ok, _stdout, stderr) = git_run( + &app, + &working_directory, + &["worktree", "remove", &worktree_path, "--force"], + ) + .await?; if !ok { return Err(WorktreeError::Git(stderr)); @@ -220,7 +236,10 @@ pub async fn check_scope_overlap( working_directory: String, ) -> Result, WorktreeError> { let active_project_ids: Vec = { - let handles = 
loop_state.0.lock().map_err(|_| WorktreeError::LockPoisoned)?; + let handles = loop_state + .0 + .lock() + .map_err(|_| WorktreeError::LockPoisoned)?; handles.keys().cloned().collect() }; @@ -299,11 +318,11 @@ pub async fn start_loop_with_worktree( .unwrap_or_default(); let has_active_loop_in_dir = { - let handles = loop_state.0.lock().map_err(|_| crate::loop_manager::LoopError::LockPoisoned)?; - handles - .keys() - .any(|pid| pid != &args.project_id) - && !overlaps.is_empty() + let handles = loop_state + .0 + .lock() + .map_err(|_| crate::loop_manager::LoopError::LockPoisoned)?; + handles.keys().any(|pid| pid != &args.project_id) && !overlaps.is_empty() }; let effective_working_dir = if has_active_loop_in_dir { diff --git a/src-tauri/templates/atomize_chunk.j2 b/src-tauri/templates/atomize_chunk.j2 index e4365bb..f737e71 100644 --- a/src-tauri/templates/atomize_chunk.j2 +++ b/src-tauri/templates/atomize_chunk.j2 @@ -1,18 +1,29 @@ -You are a software architect. Split the following implementation plan into logical implementation sections. +You are a software architect splitting an implementation plan into cohesive work sections. -Each section should represent a cohesive area of work (e.g., "Database schema", "API endpoints", "Frontend components"). +Each section must represent a self-contained area of work that can be decomposed into user stories independently. -Return a JSON array of section objects. Each section must have: -- "title": short section name -- "content": the relevant portion of the plan for that section +## Splitting rules + +1. Group by architectural layer or bounded context, not by paragraph order. + Good sections: "Database schema and migrations", "API endpoint handlers", "Auth middleware", "React components for dashboard" + Bad sections: "Part 1 of the plan", "Setup", "Miscellaneous changes" + +2. Keep tightly coupled changes together. 
A database migration and the API code that depends on it belong in the same section if they cannot function independently. + +3. Each section should contain enough work for 2 to 8 user stories. If a section would produce only 1 story, merge it with a related section. If it would produce more than 10, split further. -Return ONLY valid JSON, no markdown code fences, no explanation. +4. Preserve dependency information. If section B depends on types or schemas defined in section A, note that in section B's content. + +5. Never split a single file's changes across multiple sections unless the file serves clearly distinct purposes (e.g., a shared types file referenced by multiple layers). + +## Output format + +Return a JSON array of section objects. Each section must have: +- "title": a short, specific section name describing the architectural area +- "content": the relevant portion of the plan for that section, preserving file paths and technical details verbatim +- "dependsOn": an array of other section titles this section requires (empty array if none) -Example output: -[ - {"title": "Database schema", "content": "..."}, - {"title": "API endpoints", "content": "..."} -] +Return ONLY valid JSON. No markdown code fences, no explanation, no preamble. --- diff --git a/src-tauri/templates/atomize_merge.j2 b/src-tauri/templates/atomize_merge.j2 index 726d5f1..6fd127c 100644 --- a/src-tauri/templates/atomize_merge.j2 +++ b/src-tauri/templates/atomize_merge.j2 @@ -1,23 +1,51 @@ -You are a technical lead finalizing a set of user stories into a coherent implementation plan. +You are a technical lead finalizing a set of user stories into a coherent, ordered implementation plan. Project: {{ project_name }} -Tasks: -1. Assign sequential IDs starting at S-001 -2. Resolve dependencies: set dependsOn to the IDs of stories that must complete first -3. Order stories from foundational to dependent -4. Remove duplicate stories -5. 
Ensure no story has empty acceptanceCriteria +## Your tasks + +### 1. Assign sequential IDs +Assign IDs starting at S-001, incrementing by 1. Preserve the relative ordering from the input where it makes sense. + +### 2. Deduplicate stories +Two stories are duplicates if they share the same primary deliverable. Detect duplicates by checking: +- Titles that describe the same action on the same target +- Scope overlap: if filesToModify and filesToCreate are identical or nearly identical +When duplicates are found, keep the story with more specific acceptance criteria and richer scope. Discard the other. + +### 3. Resolve dependencies +Set each story's dependsOn to the IDs of stories that must complete first. A dependency exists when: +- Story B reads from files that story A creates +- Story B imports types, functions, or modules that story A defines +- Story B modifies a file that story A also modifies (the earlier story goes first) + +Validate that the dependency graph is acyclic. If you detect a cycle, break it by merging the cyclic stories into one. + +### 4. Order stories +Sort stories in topological order (dependencies first). Among stories at the same depth, order by priority (critical > high > medium > low), then by estimated complexity (small first). + +### 5. Validate completeness +- Every story must have at least one acceptance criterion +- Every story must have at least one file in filesToModify or filesToCreate +- Every story must have at least one verification command +- No story should have an empty title or description + +Remove or fix any story that fails these checks. + +### 6. Compute totals +Sum all estimatedMinutes to produce totalEstimatedMinutes. + +## Output format Return a JSON object with this exact structure: { "projectName": "{{ project_name }}", "generatedAt": "{{ generated_at }}", - "totalEstimatedMinutes": , + "totalEstimatedMinutes": <sum of all estimatedMinutes>, "stories": [ ...ordered stories with assigned IDs...
] } -Return ONLY valid JSON, no markdown code fences, no explanation. +Return ONLY valid JSON. No markdown code fences, no explanation, no preamble. --- diff --git a/src-tauri/templates/atomize_stories.j2 b/src-tauri/templates/atomize_stories.j2 index 2a98ba4..ff4ac1a 100644 --- a/src-tauri/templates/atomize_stories.j2 +++ b/src-tauri/templates/atomize_stories.j2 @@ -3,29 +3,49 @@ You are a senior engineer decomposing a plan section into atomic user stories. Project: {{ project_name }} Section: {{ section_title }} -CONSTRAINTS: -- Work ONLY with the section content provided below. Do NOT request file access, permissions, or additional context. -- Do NOT ask clarifying questions. Infer reasonable defaults from the plan text. -- Your entire response must be a single valid JSON array. No prose, no markdown fences, no explanation before or after. +## Thinking process + +Before writing any JSON, reason through these steps internally: +1. What are the distinct units of work in this section? +2. What are the dependencies between those units? +3. Can each unit be implemented and verified independently? +4. Are there files that multiple units need to touch? If so, order the units to avoid conflicts. + +## Story quality rules Each story must be: -- Independently implementable (no hidden dependencies within the story) -- Verifiable via specific acceptance criteria -- Small enough to complete in one focused coding session +- Independently implementable without knowledge of other stories in this batch +- Verifiable through concrete commands or assertions that produce a pass/fail result +- Small enough to complete in one focused coding session (under 60 minutes for a competent engineer) +- Scoped to a single logical change (one concern, one commit) + +Each acceptance criterion must be testable with a single command or a single assertion check. Vague criteria like "works correctly" or "handles edge cases" are not acceptable. 
Write criteria like "POST /api/users returns 201 with valid payload" or "cargo test --lib auth passes". + +Each verification command must be runnable from the project root without manual setup. Prefer existing test commands, linters, and type checkers. If no test exists, specify what file to check or what output to expect. + +## Anti-patterns to avoid + +- Do NOT create two stories that modify the same file in conflicting ways +- Do NOT create stories with circular dependencies +- Do NOT include stories for "write tests" separately from the feature they test; bundle the test with its feature story +- Do NOT create setup/cleanup stories unless they produce independently valuable output +- Do NOT pad the list with trivial stories (rename a variable, add a comment) +- Do NOT use generic scope like filesToModify: [] or filesToAvoid: []; be explicit + +## Story object schema -Story object schema: { - "title": "imperative verb phrase", - "description": "what and why in 1-3 sentences", - "acceptanceCriteria": ["verifiable assertion 1", "verifiable assertion 2"], + "title": "imperative verb phrase describing the deliverable", + "description": "what changes and why, 1-3 sentences", + "acceptanceCriteria": ["testable assertion with expected outcome"], "scope": { "filesToModify": ["relative/path/to/file"], "filesToCreate": ["relative/path/to/new_file"], - "filesToAvoid": [] + "filesToAvoid": ["relative/path/to/protected_file"] }, "verification": { - "commands": ["cargo test", "npm run typecheck"], - "assertions": [] + "commands": ["cargo test --lib module_name", "bun run typecheck"], + "assertions": ["function X exists in file Y", "endpoint returns 200"] }, "commitMessage": "type(scope): description", "priority": "critical|high|medium|low", @@ -34,7 +54,38 @@ Story object schema: "dependsOn": [] } -IMPORTANT: Output ONLY the JSON array. First character must be [, last character must be ]. 
+## Example story + +[ + { + "title": "Add user creation endpoint to API router", + "description": "Implement POST /api/users handler that validates input, inserts into the users table, and returns the created user. This is the first endpoint in the user management module.", + "acceptanceCriteria": [ + "POST /api/users with valid JSON body returns 201 and a user object with an id field", + "POST /api/users with missing required fields returns 400 with a validation error message", + "cargo test --lib routes::users passes with no failures" + ], + "scope": { + "filesToModify": ["src/routes/mod.rs"], + "filesToCreate": ["src/routes/users.rs"], + "filesToAvoid": ["src/db/migrations/"] + }, + "verification": { + "commands": ["cargo test --lib routes::users", "cargo clippy -- -D warnings"], + "assertions": ["POST handler is registered in the router", "CreateUser struct validates required fields"] + }, + "commitMessage": "feat(api): add user creation endpoint", + "priority": "high", + "estimatedComplexity": "medium", + "estimatedMinutes": 35, + "dependsOn": [] + } +] + +CONSTRAINTS: +- Work ONLY with the section content provided below. Do NOT request file access or additional context. +- Your entire response must be a single valid JSON array. No prose, no markdown fences, no explanation. +- First character must be [, last character must be ]. --- diff --git a/src-tauri/templates/atomize_summarize.j2 b/src-tauri/templates/atomize_summarize.j2 index d0a4d8d..2b1fbc3 100644 --- a/src-tauri/templates/atomize_summarize.j2 +++ b/src-tauri/templates/atomize_summarize.j2 @@ -1,17 +1,36 @@ -You are a technical writer. Condense the following implementation plan into a precise summary of at most 2000 tokens. +You are a technical editor condensing an implementation plan into a precise summary. 
-Preserve: -- All feature areas and modules mentioned -- Technical constraints and design decisions -- File paths and component names -- Any acceptance criteria or success metrics +Your goal is to produce a compressed version that preserves all information an engineer needs to decompose this plan into atomic user stories. -Remove: -- Repetition and filler text -- Conversational asides -- Obvious statements +## Preservation priority (highest to lowest) -Return ONLY the condensed plan text, no preamble. +CRITICAL (never drop): +- Architecture decisions and their rationale +- File paths and component names (use exact paths from the plan) +- API contracts, data schemas, and type definitions +- Dependency relationships between modules or features +- Constraints that limit implementation choices + +IMPORTANT (compress but keep): +- Feature descriptions and acceptance criteria +- Error handling requirements +- Migration or compatibility notes + +DISCARD: +- Repetition and restated points +- Conversational filler and hedging language +- Obvious statements that any engineer would infer +- Motivational or marketing language + +## Output format + +- Write in terse technical prose, not bullet lists +- Preserve the plan's section structure using markdown headers +- Keep dependency relationships explicit: "X depends on Y" not "after the previous step" +- Preserve all file paths verbatim +- Target approximately 6000 characters (not tokens) + +Return ONLY the condensed plan text. No preamble, no explanation. --- diff --git a/src-tauri/test-data/ask_history/ask_0.md b/src-tauri/test-data/ask_history/ask_0.md new file mode 100644 index 0000000..6484059 --- /dev/null +++ b/src-tauri/test-data/ask_history/ask_0.md @@ -0,0 +1,8 @@ +--- +index: 0 +--- +## Question +What changed? 
+ +## Response +fixture happy-path ask response 0 diff --git a/src-tauri/tests/canonical_wizard_session_state.rs b/src-tauri/tests/canonical_wizard_session_state.rs new file mode 100644 index 0000000..901cbd2 --- /dev/null +++ b/src-tauri/tests/canonical_wizard_session_state.rs @@ -0,0 +1,179 @@ +include!("../src/lib.rs"); + +use serde_json::json; +use std::path::{Path, PathBuf}; +use std::sync::MutexGuard; +use tauri::test::{mock_builder, mock_context, noop_assets, MockRuntime}; +use tauri::App; + +struct TestHarness { + _lock: MutexGuard<'static, ()>, + _env_guard: EnvGuard, + root_dir: PathBuf, + app: App<MockRuntime>, + work_dir: PathBuf, +} + +struct EnvGuard { + keys: Vec<(&'static str, Option<String>)>, +} + +impl TestHarness { + fn new() -> Self { + let lock = test_env_lock::ENV_LOCK + .lock() + .unwrap_or_else(|err| err.into_inner()); + let root_dir = + std::env::temp_dir().join(format!("loopforge-canonical-{}", uuid::Uuid::new_v4())); + let home_dir = root_dir.join("home"); + let work_dir = root_dir.join("work"); + std::fs::create_dir_all(&home_dir).expect("home dir"); + std::fs::create_dir_all(&work_dir).expect("work dir"); + let env_guard = EnvGuard::set(&home_dir); + let app = mock_builder() + .build(mock_context(noop_assets())) + .expect("test app"); + app.manage(DbState::open(app.handle()).expect("db state")); + Self { + _lock: lock, + _env_guard: env_guard, + root_dir, + app, + work_dir, + } + } +} + +impl Drop for TestHarness { + fn drop(&mut self) { + let _ = std::fs::remove_dir_all(&self.root_dir); + } +} + +impl EnvGuard { + fn set(home_dir: &Path) -> Self { + let keys = vec![ + ("HOME", std::env::var("HOME").ok()), + ("XDG_DATA_HOME", std::env::var("XDG_DATA_HOME").ok()), + ("XDG_CONFIG_HOME", std::env::var("XDG_CONFIG_HOME").ok()), + ]; + std::env::set_var("HOME", home_dir); + std::env::set_var("XDG_DATA_HOME", home_dir.join(".local").join("share")); + std::env::set_var("XDG_CONFIG_HOME", home_dir.join(".config")); + Self { keys } + } +} + +impl Drop for EnvGuard
{ + fn drop(&mut self) { + for (key, previous) in self.keys.drain(..) { + match previous { + Some(value) => std::env::set_var(key, value), + None => std::env::remove_var(key), + } + } + } +} + +#[tokio::test(flavor = "current_thread")] +async fn canonical_wizard_session_state_defaults_legacy_draft_fields() { + let harness = TestHarness::new(); + let project = projects::catalog::create_project( + harness.app.handle().clone(), + harness.app.state::<DbState>(), + "Legacy Project".to_string(), + "Legacy wizard payload".to_string(), + harness.work_dir.to_string_lossy().to_string(), + Some("describe".to_string()), + ) + .await + .expect("project"); + let legacy_draft = json!({ + "projectId": project.id, + "describe": { + "name": "Legacy Project", + "description": "Legacy wizard payload", + "workingDirectory": harness.work_dir.to_string_lossy() + } + }); + + projects::wizard::save_draft( + harness.app.handle().clone(), + project.id.clone(), + legacy_draft.to_string(), + ) + .await + .expect("save draft"); + + let resume = projects::wizard::resume_wizard( + harness.app.handle().clone(), + harness.app.state::<DbState>(), + project.id, + ) + .await + .expect("resume"); + let session = resume.wizard_session.expect("canonical session"); + + assert_eq!(session.version, 1); + assert_eq!(session.current_step, "describe"); + assert_eq!(session.highest_step, 1); + assert_eq!(session.describe.plan_agent, "claude"); + assert!(session.describe.plan_model.is_none()); + assert!(session.describe.plan_effort.is_none()); + assert!(!session.plan.completed); + assert!(session.plan.document.is_none()); + assert_eq!(session.atomize.stories_count, 0); + assert!(session.atomize.stories.is_empty()); + assert!(session.configure.is_none()); + assert!(session.prompt.is_none()); + assert!(session.guardrails.is_none()); +} + +#[tokio::test(flavor = "current_thread")] +async fn canonical_wizard_session_state_hydrates_without_downstream_artifacts() { + let harness = TestHarness::new(); + let working_directory =
harness.work_dir.to_string_lossy().to_string(); + let project = projects::catalog::create_project( + harness.app.handle().clone(), + harness.app.state::<DbState>(), + "Hydration Project".to_string(), + "Hydrates optional state".to_string(), + working_directory.clone(), + Some("plan".to_string()), + ) + .await + .expect("project"); + let draft = json!({ + "projectId": project.id, + "currentStep": "plan", + "describe": { + "name": "Hydration Project", + "description": "Hydrates optional state", + "workingDirectory": working_directory, + "planAgent": "codex" + } + }); + + projects::wizard::save_draft( + harness.app.handle().clone(), + project.id.clone(), + draft.to_string(), + ) + .await + .expect("save draft"); + + let hydration = projects::wizard::hydrate_wizard( + harness.app.handle().clone(), + harness.app.state::<DbState>(), + project.id, + ) + .await + .expect("hydrate"); + + assert_eq!(hydration.wizard_step, "plan"); + assert_eq!(hydration.highest_step, 2); + assert_eq!(hydration.project_data.plan_agent, "codex"); + assert!(!hydration.plan_complete); + assert!(hydration.stories.is_empty()); + assert!(hydration.config.is_none()); +} diff --git a/src-tauri/tests/project_query_adapter.rs b/src-tauri/tests/project_query_adapter.rs new file mode 100644 index 0000000..b2ccebc --- /dev/null +++ b/src-tauri/tests/project_query_adapter.rs @@ -0,0 +1,169 @@ +mod models { + #[derive(Debug, Clone)] + pub enum ProjectStatus { Draft, Ready, Running, Paused, Blocked, Failed, Completed, Archived } + impl ProjectStatus { + pub fn from_db_status(status: &str) -> Self { + match status { + "active" | "running" => Self::Running, + "paused" => Self::Paused, + "blocked" => Self::Blocked, + "failed" => Self::Failed, + "completed" => Self::Completed, + "archived" => Self::Archived, + "ready" => Self::Ready, + _ => Self::Draft, + } + } + pub fn resolve_canonical( + db_status: &str, + has_prd: bool, + has_config: bool, + has_active_session: bool, + ) -> Self { + let base_status =
Self::from_db_status(db_status); + if has_active_session { return Self::Running; } + if matches!(base_status, Self::Running | Self::Paused) { return Self::Paused; } + if matches!( + base_status, + Self::Blocked | Self::Failed | Self::Completed | Self::Archived + ) { + return base_status; + } + if !has_prd { return Self::Draft; } + if has_config { Self::Ready } else { Self::Draft } + } + pub fn as_project_status(&self) -> &'static str { + match self { + Self::Draft => "draft", + Self::Ready => "ready", + Self::Running => "active", + Self::Paused => "paused", + Self::Blocked => "blocked", + Self::Failed => "failed", + Self::Completed => "completed", + Self::Archived => "archived", + } + } + } +} + +#[path = "../src/services/project_query_adapter.rs"] +mod project_query_adapter; + +use project_query_adapter::{home_listings, monitor_snapshot}; +use rusqlite::Connection; +use serde_json::json; +use std::{ + collections::HashSet, + fs, + path::{Path, PathBuf}, + time::{SystemTime, UNIX_EPOCH}, +}; + +#[test] +fn home_listing_groups_state_transitions() { + let conn = schema(); + let root = temp_root(); + insert_project(&conn, "draft-1", "Draft", "ready"); + insert_project(&conn, "ready-1", "Ready", "draft"); + insert_project(&conn, "paused-1", "Paused", "active"); + write_project(&root, "draft-1", None, None, None); + write_project(&root, "ready-1", Some(prd_json(2, 1)), Some(r#"{"executeAgent":"codex"}"#), None); + write_project(&root, "paused-1", Some(prd_json(1, 0)), Some(r#"{"executeAgent":"codex"}"#), None); + let groups = home_listings(&conn, &HashSet::new(), |id| Ok(root.join(id))).unwrap(); + assert_eq!(groups["draft"][0].id, "draft-1"); + assert_eq!(groups["ready"][0].id, "ready-1"); + assert_eq!(groups["paused"][0].id, "paused-1"); + assert!(groups["draft"][0].total_stories.is_none()); + assert_eq!(groups["ready"][0].stories_completed, Some(1)); + let _ = fs::remove_dir_all(root); +} + +#[test] +fn monitor_snapshot_handles_stale_and_partial_states() { + let conn 
= schema(); + let root = temp_root(); + insert_project(&conn, "project-1", "Monitor", "active"); + insert_session(&conn, "session-1", "project-1", "2026-01-01T00:00:00Z", Some("2026-01-01T00:06:00Z")); + insert_iteration(&conn, "iter-1", "session-1", "S-001", "2026-01-01T00:05:00Z"); + write_project(&root, "project-1", None, None, Some("line 1\nline 2\n")); + let snapshot = monitor_snapshot(&conn, &root.join("project-1"), "project-1", false).unwrap(); + let value = serde_json::to_value(&snapshot).unwrap(); + assert_eq!(snapshot.status, "paused"); + assert!(snapshot.is_stale); + assert!(snapshot.has_partial_progress); + assert_eq!(snapshot.current_story.as_deref(), Some("S-001")); + assert_eq!(snapshot.events, vec!["session_started", "iteration_completed", "session_ended"]); + assert_eq!(value["recentOutput"][1]["content"], "line 2"); + let _ = fs::remove_dir_all(root); +} + +fn schema() -> Connection { + let conn = Connection::open_in_memory().unwrap(); + conn.execute_batch("CREATE TABLE projects (id TEXT PRIMARY KEY, name TEXT NOT NULL, status TEXT NOT NULL, updated_at TEXT NOT NULL); CREATE TABLE sessions (id TEXT PRIMARY KEY, project_id TEXT NOT NULL, started_at TEXT, ended_at TEXT); CREATE TABLE iterations (id TEXT PRIMARY KEY, session_id TEXT NOT NULL, story_id TEXT NOT NULL, started_at TEXT NOT NULL);").unwrap(); + conn +} + +fn insert_project(conn: &Connection, id: &str, name: &str, status: &str) { + conn.execute("INSERT INTO projects (id, name, status, updated_at) VALUES (?1, ?2, ?3, '2026-01-01T00:00:00Z')", [id, name, status]).unwrap(); +} +fn insert_session( + conn: &Connection, + id: &str, + project_id: &str, + started_at: &str, + ended_at: Option<&str>, +) { + conn.execute( + "INSERT INTO sessions (id, project_id, started_at, ended_at) VALUES (?1, ?2, ?3, ?4)", + rusqlite::params![id, project_id, started_at, ended_at], + ) + .unwrap(); +} +fn insert_iteration( + conn: &Connection, + id: &str, + session_id: &str, + story_id: &str, + started_at: &str, +) 
{ + conn.execute( + "INSERT INTO iterations (id, session_id, story_id, started_at) VALUES (?1, ?2, ?3, ?4)", + [id, session_id, story_id, started_at], + ) + .unwrap(); +} + +fn temp_root() -> PathBuf { + let stamp = SystemTime::now() + .duration_since(UNIX_EPOCH) + .unwrap() + .as_nanos(); + let root = std::env::temp_dir().join(format!("loopforge-query-{stamp}")); + fs::create_dir_all(&root).unwrap(); + root +} + +fn write_project( + root: &Path, + id: &str, + prd: Option<String>, + config: Option<&str>, + output: Option<&str>, +) { + let dir = root.join(id); + fs::create_dir_all(&dir).unwrap(); + if let Some(value) = prd { + fs::write(dir.join("prd.json"), value).unwrap(); + } + if let Some(value) = config { + fs::write(dir.join("config.json"), value).unwrap(); + } + if let Some(value) = output { + fs::write(dir.join("agent_output.log"), value).unwrap(); + } +} + +fn prd_json(total: usize, passed: usize) -> String { + json!({"project":"LoopForge","userStories":(0..total).map(|index| json!({"id":format!("S-{index:03}"),"title":format!("Story {index}"),"acceptanceCriteria":["ok"],"passes":index < passed,"blocked":false})).collect::<Vec<_>>()}).to_string() +} diff --git a/src-tauri/tests/wizard_draft_roundtrip.rs b/src-tauri/tests/wizard_draft_roundtrip.rs index a3875fb..c0f19c9 100644 --- a/src-tauri/tests/wizard_draft_roundtrip.rs +++ b/src-tauri/tests/wizard_draft_roundtrip.rs @@ -107,10 +107,14 @@ async fn wizard_draft_roundtrip_preserves_nested_and_optional_fields() { "plan": { "completed": true }, + "highestStep": 4, + "staleFromStep": serde_json::Value::Null, "atomize": { + "stories": [], "storiesCount": 3 }, "configure": { + "schemaVersion": 1, "executeAgent": "codex", "executeModel": serde_json::Value::Null, "executeEffort": "medium", @@ -147,11 +151,14 @@ .expect("draft content"); assert_eq!(resume_state.project.name, "Roundtrip Project"); - assert_eq!(resume_state.project.description, "Persist
wizard draft values"); + assert_eq!( + resume_state.project.description, + "Persist wizard draft values" + ); assert_eq!(resume_state.project.working_directory, working_directory); assert_eq!(resume_state.wizard_step, "configure"); - assert_eq!(resume_state.wizard_state_json, None); - assert!(!resume_state.has_plan); + assert!(resume_state.wizard_state_json.is_some()); + assert!(resume_state.has_plan); assert!(!resume_state.has_prd); assert_eq!( serde_json::from_str::(&loaded_draft).expect("draft json"), diff --git a/src-tauri/tests/wizard_session_hydration.rs b/src-tauri/tests/wizard_session_hydration.rs new file mode 100644 index 0000000..7fe87c7 --- /dev/null +++ b/src-tauri/tests/wizard_session_hydration.rs @@ -0,0 +1,207 @@ +include!("../src/lib.rs"); + +use serde_json::json; +use std::path::{Path, PathBuf}; +use std::sync::MutexGuard; +use tauri::test::{mock_builder, mock_context, noop_assets, MockRuntime}; +use tauri::App; + +struct TestHarness { + _lock: MutexGuard<'static, ()>, + _env_guard: EnvGuard, + root_dir: PathBuf, + app: App, + work_dir: PathBuf, +} + +struct EnvGuard { + keys: Vec<(&'static str, Option)>, +} + +impl TestHarness { + fn new() -> Self { + let lock = test_env_lock::ENV_LOCK + .lock() + .unwrap_or_else(|err| err.into_inner()); + let root_dir = + std::env::temp_dir().join(format!("loopforge-hydrate-{}", uuid::Uuid::new_v4())); + let home_dir = root_dir.join("home"); + let work_dir = root_dir.join("work"); + std::fs::create_dir_all(&home_dir).expect("home dir"); + std::fs::create_dir_all(&work_dir).expect("work dir"); + + let app = mock_builder() + .build(mock_context(noop_assets())) + .expect("test app"); + let env_guard = EnvGuard::set(&home_dir); + app.manage(db::DbState::open(app.handle()).expect("db state")); + + Self { + _lock: lock, + _env_guard: env_guard, + root_dir, + app, + work_dir, + } + } + + fn artifact_dir(&self, project_id: &str) -> PathBuf { + storage::artifacts::project_artifact_dir(self.app.handle(), 
project_id).expect("artifacts") + } +} + +impl Drop for TestHarness { + fn drop(&mut self) { + let _ = std::fs::remove_dir_all(&self.root_dir); + } +} + +impl EnvGuard { + fn set(home_dir: &Path) -> Self { + let keys = vec![ + ("HOME", std::env::var("HOME").ok()), + ("XDG_DATA_HOME", std::env::var("XDG_DATA_HOME").ok()), + ("XDG_CONFIG_HOME", std::env::var("XDG_CONFIG_HOME").ok()), + ]; + std::env::set_var("HOME", home_dir); + std::env::set_var("XDG_DATA_HOME", home_dir.join(".local").join("share")); + std::env::set_var("XDG_CONFIG_HOME", home_dir.join(".config")); + Self { keys } + } +} + +impl Drop for EnvGuard { + fn drop(&mut self) { + for (key, previous) in self.keys.drain(..) { + match previous { + Some(value) => std::env::set_var(key, value), + None => std::env::remove_var(key), + } + } + } +} + +#[tokio::test(flavor = "current_thread")] +async fn wizard_session_hydration_marks_only_impacted_steps_invalid() { + let harness = TestHarness::new(); + let working_directory = harness.work_dir.to_string_lossy().to_string(); + let project = projects::catalog::create_project( + harness.app.handle().clone(), + harness.app.state::<db::DbState>(), + "Hydrate Project".to_string(), + "Session hydrate".to_string(), + working_directory.clone(), + Some("launch".to_string()), + ) + .await + .expect("project"); + let artifacts = harness.artifact_dir(&project.id); + + std::fs::write( + artifacts.join("draft.json"), + json!({ + "currentStep": "launch", + "highestStep": 5, + "name": "Legacy Hydrate", + "description": "Hydrates old draft payloads", + "workingDirectory": working_directory, + "planAgent": "codex" + }) + .to_string(), + ) + .expect("draft"); + let _ = std::fs::remove_file(artifacts.join("plan.md")); + std::fs::write(artifacts.join("prd.json"), "{ broken").expect("prd"); + std::fs::write(artifacts.join("config.json"), "{ broken").expect("config"); + let _ = std::fs::remove_file(artifacts.join("prompt.md")); + std::fs::write(artifacts.join("guardrails.md"), [0_u8, 159, 146,
150]).expect("guardrails"); + + let session = projects::runtime_config::wizard_session_adapter::hydrate_canonical_session( + harness.app.handle(), + &project, + None, + ) + .expect("hydrate"); + + assert_eq!(session.project_data.name, "Legacy Hydrate"); + assert_eq!(session.project_data.plan_agent, "codex"); + assert_eq!(session.highest_step, 5); + assert!(session.validation.describe.valid); + assert!(!session.validation.plan.valid); + assert!(!session.validation.atomize.valid); + assert!(!session.validation.configure.valid); + assert!(!session.validation.launch.valid); + assert_eq!(session.validation.plan.issues, vec!["plan.md is missing"]); + assert_eq!(session.stories.len(), 0); + assert!(session.config.is_none()); + assert!(session.prompt.is_none()); + assert!(session.guardrails.is_none()); +} + +#[tokio::test(flavor = "current_thread")] +async fn wizard_session_hydration_reads_legacy_working_directory_artifacts() { + let harness = TestHarness::new(); + let working_directory = harness.work_dir.to_string_lossy().to_string(); + let project = projects::catalog::create_project( + harness.app.handle().clone(), + harness.app.state::(), + "Legacy Workspace".to_string(), + "Session hydrate".to_string(), + working_directory.clone(), + Some("launch".to_string()), + ) + .await + .expect("project"); + let artifacts = harness.artifact_dir(&project.id); + + std::fs::write( + artifacts.join("draft.json"), + json!({ "currentStep": "launch", "highestStep": 5 }).to_string(), + ) + .expect("draft"); + let _ = std::fs::remove_file(artifacts.join("plan.md")); + let _ = std::fs::remove_file(artifacts.join("prd.json")); + let _ = std::fs::remove_file(artifacts.join("config.json")); + let _ = std::fs::remove_file(artifacts.join("prompt.md")); + let _ = std::fs::remove_file(artifacts.join("guardrails.md")); + + std::fs::write(harness.work_dir.join("plan.md"), "# Legacy plan").expect("legacy plan"); + std::fs::write( + harness.work_dir.join("prd.json"), + json!({ + "projectName": 
"Legacy Workspace", + "workingDirectory": working_directory, + "stories": [{ "id": "S-001", "title": "Hydrate", "acceptanceCriteria": ["Works"] }] + }) + .to_string(), + ) + .expect("legacy prd"); + std::fs::write( + harness.work_dir.join("config.json"), + json!({ "executeAgent": "codex", "maxIterations": 25 }).to_string(), + ) + .expect("legacy config"); + std::fs::write(harness.work_dir.join("prompt.md"), "Legacy prompt").expect("legacy prompt"); + std::fs::write(harness.work_dir.join("guardrails.md"), "Legacy guardrails") + .expect("legacy guardrails"); + + let session = projects::runtime_config::wizard_session_adapter::hydrate_canonical_session( + harness.app.handle(), + &project, + None, + ) + .expect("hydrate"); + + assert!(session.validation.plan.valid); + assert!(session.validation.atomize.valid); + assert!(session.validation.configure.valid); + assert!(session.validation.launch.valid); + assert_eq!(session.plan.as_deref(), Some("# Legacy plan")); + assert_eq!(session.stories.len(), 1); + assert_eq!( + session.config.as_ref().map(|config| &config.execute_agent), + Some(&"codex".to_string()) + ); + assert_eq!(session.prompt.as_deref(), Some("Legacy prompt")); + assert_eq!(session.guardrails.as_deref(), Some("Legacy guardrails")); +} diff --git a/src-tauri/tests/wizard_session_invalidation.rs b/src-tauri/tests/wizard_session_invalidation.rs new file mode 100644 index 0000000..e7bb29e --- /dev/null +++ b/src-tauri/tests/wizard_session_invalidation.rs @@ -0,0 +1,168 @@ +include!("../src/lib.rs"); + +use serde_json::json; +use std::path::Path; + +mod shared_harness { + include!("../src/contract_tests/harness.rs"); +} + +use shared_harness::TestHarness; + +fn seed_descendants(artifact_dir: &Path, work_dir: &Path) { + let prd = json!({ + "projectName": "Persisted", + "generatedAt": "2026-04-18T12:00:00Z", + "stories": [{ "id": "S-001", "title": "Persisted story", "acceptanceCriteria": ["Works"] }] + }); + std::fs::write(artifact_dir.join("plan.md"), "# Persisted 
plan").expect("artifact plan"); + std::fs::write(artifact_dir.join("prd.json"), prd.to_string()).expect("artifact prd"); + std::fs::write( + artifact_dir.join("config.json"), + json!({ "executeAgent": "claude", "maxIterations": 25 }).to_string(), + ) + .expect("artifact config"); + std::fs::write(artifact_dir.join("prompt.md"), "Persisted prompt").expect("artifact prompt"); + std::fs::write(artifact_dir.join("guardrails.md"), "Persisted guardrails") + .expect("artifact guardrails"); + std::fs::write(work_dir.join("plan.md"), "# Legacy plan").expect("legacy plan"); + std::fs::write(work_dir.join("prd.json"), prd.to_string()).expect("legacy prd"); + std::fs::write( + work_dir.join("config.json"), + json!({ "executeAgent": "gemini", "maxIterations": 50 }).to_string(), + ) + .expect("legacy config"); + std::fs::write(work_dir.join("prompt.md"), "Legacy prompt").expect("legacy prompt"); + std::fs::write(work_dir.join("guardrails.md"), "Legacy guardrails").expect("legacy guardrails"); +} + +async fn create_project(harness: &TestHarness, name: &str) -> projects::Project { + projects::catalog::create_project( + harness.app.handle().clone(), + harness.app.state::(), + name.to_string(), + format!("{name} description"), + harness.work_dir.to_string_lossy().to_string(), + Some("launch".to_string()), + ) + .await + .expect("project") +} + +#[tokio::test(flavor = "current_thread")] +async fn wizard_session_invalidation_clears_all_descendants_after_description_change() { + let harness = TestHarness::new(); + let project = create_project(&harness, "Description Reset").await; + let artifact_dir = harness.artifact_dir(&project.id); + seed_descendants(&artifact_dir, &harness.work_dir); + std::fs::write( + artifact_dir.join("draft.json"), + json!({ + "projectId": project.id, "currentStep": "launch", "highestStep": 5, + "describe": { "name": project.name, "description": "Old description", "workingDirectory": harness.work_dir, "planAgent": "claude" }, + "plan": { "completed": true, 
"document": "# Persisted plan" }, + "atomize": { "storiesCount": 1, "stories": [{ "id": "S-001", "title": "Persisted story", "acceptanceCriteria": ["Works"] }] }, + "configure": { "executeAgent": "claude", "maxIterations": 25 }, + "prompt": "Persisted prompt", "guardrails": "Persisted guardrails" + }) + .to_string(), + ) + .expect("previous draft"); + + let saved = projects::runtime_config::wizard_session_adapter::save_canonical_session( + harness.app.handle(), + &project, + &json!({ + "projectId": project.id, "currentStep": "launch", "highestStep": 5, + "describe": { "name": project.name, "description": "New description", "workingDirectory": harness.work_dir, "planAgent": "claude" }, + "plan": { "completed": true, "document": "# Stale plan" }, + "atomize": { "storiesCount": 1, "stories": [{ "id": "S-001", "title": "Stale story", "acceptanceCriteria": ["Works"] }] }, + "configure": { "executeAgent": "codex", "maxIterations": 40 }, + "prompt": "Stale prompt", "guardrails": "Stale guardrails" + }) + .to_string(), + None, + ) + .expect("save session"); + + assert_eq!(saved.stale_from_step, Some(2)); + assert!(!saved.plan.completed); + assert!(saved.atomize.stories.is_empty()); + assert!(saved.prompt.is_none()); + assert!(saved.guardrails.is_none()); + for file_name in ["plan.md", "prd.json", "config.json", "prompt.md", "guardrails.md"] { + assert!(!artifact_dir.join(file_name).exists(), "{file_name} should be removed"); + } + + let session = projects::runtime_config::wizard_session_adapter::hydrate_canonical_session( + harness.app.handle(), + &project, + None, + ) + .expect("hydrate"); + + assert!(artifact_dir.join("draft.json").exists()); + assert_eq!(session.project_data.description, "New description"); + assert!(session.plan.is_none()); + assert!(!session.plan_complete); + assert!(session.stories.is_empty()); + assert!(session.prompt.is_none()); + assert!(session.guardrails.is_none()); +} + +#[tokio::test(flavor = "current_thread")] +async fn 
wizard_session_invalidation_preserves_plan_and_prd_after_config_change() { + let harness = TestHarness::new(); + let project = create_project(&harness, "Config Reset").await; + let artifact_dir = harness.artifact_dir(&project.id); + seed_descendants(&artifact_dir, &harness.work_dir); + std::fs::write( + artifact_dir.join("draft.json"), + json!({ + "projectId": project.id, "currentStep": "launch", "highestStep": 5, + "describe": { "name": project.name, "description": project.description, "workingDirectory": harness.work_dir, "planAgent": "claude" }, + "plan": { "completed": true }, "atomize": { "storiesCount": 1 }, + "configure": { "executeAgent": "claude", "maxIterations": 25 } + }) + .to_string(), + ) + .expect("previous draft"); + + let saved = projects::runtime_config::wizard_session_adapter::save_canonical_session( + harness.app.handle(), + &project, + &json!({ + "projectId": project.id, "currentStep": "launch", "highestStep": 5, + "describe": { "name": project.name, "description": project.description, "workingDirectory": harness.work_dir, "planAgent": "claude" }, + "plan": { "completed": true }, "atomize": { "storiesCount": 1 }, + "configure": { "executeAgent": "codex", "maxIterations": 40 }, + "prompt": "Stale prompt", "guardrails": "Stale guardrails" + }) + .to_string(), + None, + ) + .expect("save session"); + + assert_eq!(saved.stale_from_step, Some(4)); + assert!(artifact_dir.join("plan.md").exists()); + assert!(artifact_dir.join("prd.json").exists()); + for file_name in ["config.json", "prompt.md", "guardrails.md"] { + assert!(!artifact_dir.join(file_name).exists(), "{file_name} should be removed"); + } + + let session = projects::runtime_config::wizard_session_adapter::hydrate_canonical_session( + harness.app.handle(), + &project, + None, + ) + .expect("hydrate"); + + assert_eq!(session.plan.as_deref(), Some("# Persisted plan")); + assert_eq!(session.stories.len(), 1); + assert_eq!( + session.config.as_ref().map(|config| config.execute_agent.as_str()), 
+        Some("codex")
+    );
+    assert!(session.prompt.is_none());
+    assert!(session.guardrails.is_none());
+}
diff --git a/src/components/EmptyState.tsx b/src/components/EmptyState.tsx
index 5d30692..287818a 100644
--- a/src/components/EmptyState.tsx
+++ b/src/components/EmptyState.tsx
@@ -1,4 +1,4 @@
-import * as React from "react";
+import type * as React from "react";
 import { Button, type ButtonProps } from "@/components/ui/button";
 import { Empty, type EmptyProps } from "@/components/ui/empty";
@@ -15,12 +15,7 @@ export type EmptyStateProps = Omit & {
   action?: React.ReactNode;
 };
-export function EmptyState({
-  action,
-  primaryAction,
-  secondaryAction,
-  ...props
-}: EmptyStateProps) {
+export function EmptyState({ action, primaryAction, secondaryAction, ...props }: EmptyStateProps) {
   let actionContent = action;
   if (!actionContent && (primaryAction || secondaryAction)) {
diff --git a/src/components/EphemeralOverlay.tsx b/src/components/EphemeralOverlay.tsx
index b0b211a..2d0c09a 100644
--- a/src/components/EphemeralOverlay.tsx
+++ b/src/components/EphemeralOverlay.tsx
@@ -1,5 +1,5 @@
 import { useEffect, useRef, useState } from "react";
-import { ephemeralQuery, type EphemeralAnswer } from "../lib/tauri";
+import { type EphemeralAnswer, ephemeralQuery } from "../lib/tauri";
 import { Badge } from "./ui/badge";
 import { Button } from "./ui/button";
 import {
@@ -18,11 +18,7 @@ interface EphemeralOverlayProps {
   onClose: () => void;
 }
-export function EphemeralOverlay({
-  projectId,
-  isOpen,
-  onClose,
-}: EphemeralOverlayProps) {
+export function EphemeralOverlay({ projectId, isOpen, onClose }: EphemeralOverlayProps) {
   const [question, setQuestion] = useState("");
   const [answer, setAnswer] = useState(null);
   const [loading, setLoading] = useState(false);
@@ -73,7 +69,7 @@ export function EphemeralOverlay({ return ( { event.preventDefault(); inputRef.current?.focus();
@@ -90,7 +86,7 @@ export function EphemeralOverlay({
+ - {loading ?

thinking...

: null} + {loading ?

thinking...

: null}
{answer && ( -
+
-
+            
               {answer.answer}
             
)} - -

Esc to close · Ctrl+Shift+Space to toggle

+ +

+ Esc to close · Ctrl+Shift+Space to toggle +

) : ( -
- {stepBody} -
+
{stepBody}
)}
{showConnectors && stepIndex < steps.length - 1 ? ( ) : null} diff --git a/src/components/ThemeToggle.tsx b/src/components/ThemeToggle.tsx index ae8113f..b9c83c9 100644 --- a/src/components/ThemeToggle.tsx +++ b/src/components/ThemeToggle.tsx @@ -1,4 +1,4 @@ -import * as React from "react"; +import type * as React from "react"; import { Badge } from "@/components/ui/badge"; import { Switch, type SwitchProps } from "@/components/ui/switch"; import { cn } from "@/lib/utils"; @@ -32,11 +32,9 @@ export function ThemeToggle({ return (
-

{label}

+

{label}

{description ? ( -

- {description} -

+

{description}

) : null}
diff --git a/src/components/TitleBar.tsx b/src/components/TitleBar.tsx index 0fd8b7e..f45a709 100644 --- a/src/components/TitleBar.tsx +++ b/src/components/TitleBar.tsx @@ -1,5 +1,6 @@ -import { useState, useEffect, useMemo, type MouseEvent } from "react"; import { getCurrentWindow } from "@tauri-apps/api/window"; +import { useEffect, useMemo, useState } from "react"; +import { isApplePlatform } from "../lib/platform"; import { WindowControls } from "./title-bar/WindowControls"; import { Button } from "./ui/button"; import { @@ -13,12 +14,7 @@ import { export function TitleBar() { const appWindow = getCurrentWindow(); const [isMaximized, setIsMaximized] = useState(false); - const isMac = useMemo( - () => - typeof navigator !== "undefined" && - /Mac|iPhone|iPad|iPod/.test(navigator.userAgent), - [], - ); + const isMac = useMemo(() => isApplePlatform(), []); useEffect(() => { void appWindow.isMaximized().then(setIsMaximized); @@ -37,22 +33,10 @@ export function TitleBar() { void appWindow.close(); } - function handleDragStart(event: MouseEvent) { - if (event.button !== 0) { - return; - } - const target = event.target as HTMLElement; - if (target.closest("[data-no-drag]")) { - return; - } - void appWindow.startDragging(); - } - return (
{isMac ? ( @@ -66,7 +50,7 @@ export function TitleBar() { ) : null} LoopForge diff --git a/src/components/layout/AppSidebar.tsx b/src/components/layout/AppSidebar.tsx index 7cbf5cc..8cfd087 100644 --- a/src/components/layout/AppSidebar.tsx +++ b/src/components/layout/AppSidebar.tsx @@ -1,15 +1,28 @@ -import { useEffect, useMemo, useState } from "react"; -import { NavLink, useLocation, useNavigate } from "react-router"; import { Bell, ChevronDown, House, Plus } from "lucide-react"; -import { ThemeToggle } from "../ThemeToggle"; -import { Button } from "../ui/button"; -import { Collapsible, CollapsibleContent, CollapsibleTrigger } from "../ui/collapsible"; -import { Sidebar, SidebarContent, SidebarFooter, SidebarGroup, SidebarGroupContent, SidebarGroupLabel, SidebarHeader, SidebarMenu, SidebarMenuButton, SidebarMenuItem, SidebarMenuLabel, SidebarTrigger, useSidebar } from "../ui/sidebar"; +import { useEffect, useState } from "react"; +import { NavLink, useLocation, useNavigate } from "react-router"; import { useProjectStore } from "../../stores/projectStore"; -import type { Project } from "../../types/project"; import { useThemeStore } from "../../stores/themeStore"; import { useWizardStore } from "../../stores/wizardStore"; -import { ACTIVE_PROJECT_STATUSES, FINISHED_PROJECT_STATUSES } from "../../lib/project-status"; +import type { Project } from "../../types/project"; +import { ThemeToggle } from "../ThemeToggle"; +import { Button } from "../ui/button"; +import { Collapsible, CollapsibleContent, CollapsibleTrigger } from "../ui/collapsible"; +import { + Sidebar, + SidebarContent, + SidebarFooter, + SidebarGroup, + SidebarGroupContent, + SidebarGroupLabel, + SidebarHeader, + SidebarMenu, + SidebarMenuButton, + SidebarMenuItem, + SidebarMenuLabel, + SidebarTrigger, + useSidebar, +} from "../ui/sidebar"; import { ProjectMenuItem } from "./ProjectMenuItem"; type AppSidebarProps = { onToggleNotifications: () => void; totalUnread: number }; @@ -40,16 +53,18 @@ export 
function AppSidebar({ onToggleNotifications, totalUnread }: AppSidebarPro const navigate = useNavigate(); const location = useLocation(); const projects = useProjectStore((state) => state.projects); + const grouped = useProjectStore((state) => state.grouped); const theme = useThemeStore((state) => state.theme); const setTheme = useThemeStore((state) => state.setTheme); const { collapsed } = useSidebar(); - const [sectionOpen, setSectionOpen] = useState>(readSavedSections); - const { activeProjects, draftProjects, finishedProjects, archivedProjects } = useMemo(() => ({ - activeProjects: projects.filter((project) => ACTIVE_PROJECT_STATUSES.includes(project.status)), - draftProjects: projects.filter((project) => project.status === "draft"), - finishedProjects: projects.filter((project) => FINISHED_PROJECT_STATUSES.includes(project.status)), - archivedProjects: projects.filter((project) => project.status === "archived"), - }), [projects]); + const [sectionOpen, setSectionOpen] = + useState>(readSavedSections); + const { + active: activeProjects, + drafts: draftProjects, + finished: finishedProjects, + archived: archivedProjects, + } = grouped; const collapsedProjects = activeProjects; useEffect(() => { @@ -62,11 +77,13 @@ export function AppSidebar({ onToggleNotifications, totalUnread }: AppSidebarPro if (collapsed) { return null; } - return

{emptyLabel}

; + return

{emptyLabel}

; } return ( - {list.map((project) => )} + {list.map((project) => ( + + ))} ); } @@ -81,13 +98,16 @@ export function AppSidebar({ onToggleNotifications, totalUnread }: AppSidebarPro className="space-y-1" > - -
+
{renderProjectList(list, `No ${label.toLowerCase()}`)}
@@ -96,14 +116,31 @@ export function AppSidebar({ onToggleNotifications, totalUnread }: AppSidebarPro } return ( - + -
- +
+ {collapsed ? null : ( - - LF - + + LF + AI Loop Orchestrator @@ -111,29 +148,42 @@ export function AppSidebar({ onToggleNotifications, totalUnread }: AppSidebarPro
- + Workspace - navigate("/")}> + navigate("/")} + > Home @@ -142,21 +192,21 @@ export function AppSidebar({ onToggleNotifications, totalUnread }: AppSidebarPro - + Projects {projects.length === 0 ? ( -

No projects yet

+

No projects yet

) : collapsed ? ( renderProjectList(collapsedProjects, "No projects") ) : ( <>
-

+

Active

-
+
{renderProjectList(activeProjects, "No active loops")}
diff --git a/src/components/layout/ProjectMenuItem.tsx b/src/components/layout/ProjectMenuItem.tsx index 014b15e..1334692 100644 --- a/src/components/layout/ProjectMenuItem.tsx +++ b/src/components/layout/ProjectMenuItem.tsx @@ -1,19 +1,11 @@ import { useNavigate } from "react-router"; -import { StatusBadge } from "../StatusBadge"; -import { Tooltip, TooltipContent, TooltipProvider, TooltipTrigger } from "../ui/tooltip"; -import { SidebarMenuButton, SidebarMenuItem, SidebarMenuLabel, useSidebar } from "../ui/sidebar"; import { cn } from "../../lib/utils"; -import { useNotificationStore, type RingColor } from "../../stores/notificationStore"; -import { type Project } from "../../stores/projectStore"; -import { PROJECT_STATUS_META } from "../../lib/project-status"; - -const WIZARD_STEP_LABEL: Record = { - describe: "Describe", - plan: "Planning", - atomize: "Atomizing", - configure: "Configure", - launch: "Launch", -}; +import { useDisplayVocabularyStore } from "../../stores/displayVocabularyStore"; +import { type RingColor, useNotificationStore } from "../../stores/notificationStore"; +import type { Project } from "../../stores/projectStore"; +import { StatusBadge, type StatusBadgeStatus } from "../StatusBadge"; +import { SidebarMenuButton, SidebarMenuItem, SidebarMenuLabel, useSidebar } from "../ui/sidebar"; +import { Tooltip, TooltipContent, TooltipProvider, TooltipTrigger } from "../ui/tooltip"; const RING_CLASSES: Record = { red: "border-l-2 border-l-blocked", @@ -22,8 +14,8 @@ const RING_CLASSES: Record = { green: "border-l-2 border-l-success", }; -const STATUS_DOT: Record = { - active: "bg-running", +const STATUS_DOT: Record = { + running: "bg-running", paused: "bg-paused", blocked: "bg-blocked", completed: "bg-success", @@ -46,10 +38,16 @@ export function ProjectMenuItem({ currentPath, project }: ProjectMenuItemProps) const navigate = useNavigate(); const { collapsed } = useSidebar(); const href = projectHref(project); - const statusMeta = 
PROJECT_STATUS_META[project.status]; - const draftSubLabel = project.status === "draft" && project.wizardStep - ? WIZARD_STEP_LABEL[project.wizardStep] ?? project.wizardStep - : null; + const vocabulary = useDisplayVocabularyStore((state) => state.vocabulary); + const statusMeta = vocabulary?.projectStatusMeta[project.status] ?? { + badgeStatus: project.status === "active" ? "running" : project.status, + cardLabel: `${project.status.slice(0, 1).toUpperCase()}${project.status.slice(1)}`, + sidebarLabel: `${project.status.slice(0, 1).toUpperCase()}${project.status.slice(1)}`, + }; + const draftSubLabel = + project.status === "draft" && project.wizardStep + ? (vocabulary?.wizardStepLabels[project.wizardStep] ?? project.wizardStep) + : null; const sidebarLabel = draftSubLabel ? `Draft · ${draftSubLabel}` : statusMeta.sidebarLabel; const ringColor = useNotificationStore((state) => state.ringColorForProject(project.id)); const unreadCount = useNotificationStore((state) => state.unreadCountForProject(project.id)); @@ -64,17 +62,29 @@ export function ProjectMenuItem({ currentPath, project }: ProjectMenuItemProps) navigate(href)} > - + {project.name.slice(0, 2) || "??"} - - {unreadCount > 0 ? : null} + + {unreadCount > 0 ? ( + + ) : null} - {tooltipText} + + {tooltipText} + @@ -93,21 +103,27 @@ export function ProjectMenuItem({ currentPath, project }: ProjectMenuItemProps) > {project.name} - {project.id.slice(0, 8)} + + {project.id.slice(0, 8)} + - {unreadCount > 0 ? : null} + {unreadCount > 0 ? ( + + ) : null} - {tooltipText} + + {tooltipText} + {project.totalStories != null && project.totalStories > 0 ? ( -

+

{project.storiesCompleted ?? 0}/{project.totalStories} stories

) : null} diff --git a/src/components/markdownComponents.tsx b/src/components/markdownComponents.tsx index d87e8cb..87d58b7 100644 --- a/src/components/markdownComponents.tsx +++ b/src/components/markdownComponents.tsx @@ -1,86 +1,120 @@ import type { Components } from "react-markdown"; -export const MARKDOWN_COMPONENTS: Components = { - h1: ({ children }) => ( -

- {children} -

- ), - h2: ({ children }) => ( -

{children}

- ), - h3: ({ children }) => ( -

{children}

- ), - p: ({ children }) => ( -

{children}

- ), - ul: ({ children }) => ( -
    {children}
- ), - ol: ({ children }) => ( -
    {children}
- ), - li: ({ children }) => ( -
  • - - {children} -
  • - ), - code: ({ children, className }) => { - const isBlock = className?.includes("language-"); - if (isBlock) { +type MarkdownDensity = "compact" | "comfortable"; + +function spacing(density: MarkdownDensity, compactValue: string, comfortableValue: string) { + return density === "compact" ? compactValue : comfortableValue; +} + +export function createMarkdownComponents(density: MarkdownDensity): Components { + return { + h1: ({ children }) => ( +

    + {children} +

    + ), + h2: ({ children }) => ( +

    + {children} +

    + ), + h3: ({ children }) => ( +

    + {children} +

    + ), + p: ({ children }) => ( +

    + {children} +

    + ), + ul: ({ children }) => ( +
      {children}
    + ), + ol: ({ children }) => ( +
      + {children} +
    + ), + li: ({ children }) => ( +
  • + + {children} +
  • + ), + code: ({ children, className }) => { + const isBlock = className?.includes("language-"); + if (isBlock) { + return ( + + {children} + + ); + } return ( - + {children} ); - } - return ( - + }, + pre: ({ children }) =>
    {children}
    , + blockquote: ({ children }) => ( +
    + {children} +
    + ), + strong: ({ children }) => {children}, + em: ({ children }) => {children}, + a: ({ children, href }) => ( + + {children} + + ), + hr: () =>
    , + table: ({ children }) => ( +
    + {children}
    +
    + ), + thead: ({ children }) => {children}, + th: ({ children }) => ( + {children} -
    - ); - }, - pre: ({ children }) =>
    {children}
    , - blockquote: ({ children }) => ( -
    - {children} -
    - ), - strong: ({ children }) => ( - {children} - ), - em: ({ children }) => ( - {children} - ), - a: ({ children, href }) => ( - - {children} - - ), - hr: () =>
    , - table: ({ children }) => ( -
    - + + ), + td: ({ children }) => ( +
    {children} -
    -
    - ), - thead: ({ children }) => ( - {children} - ), - th: ({ children }) => ( - - {children} - - ), - td: ({ children }) => ( - {children} - ), -}; + + ), + }; +} + +export const MARKDOWN_COMPONENTS = createMarkdownComponents("compact"); + +export const MARKDOWN_COMPONENTS_COMFORTABLE = createMarkdownComponents("comfortable"); diff --git a/src/components/notification-panel/NotificationPanelLayout.tsx b/src/components/notification-panel/NotificationPanelLayout.tsx index 89d6113..ab536b1 100644 --- a/src/components/notification-panel/NotificationPanelLayout.tsx +++ b/src/components/notification-panel/NotificationPanelLayout.tsx @@ -1,29 +1,10 @@ +import { cn } from "@/lib/utils"; +import { useDisplayVocabularyStore } from "@/stores/displayVocabularyStore"; +import type { AppNotification, NotificationType, RingColor } from "../../stores/notificationStore"; import { Badge, type BadgeProps } from "../ui/badge"; import { Button } from "../ui/button"; import { ScrollArea, ScrollContent, ScrollViewport } from "../ui/scroll-area"; import { SheetDescription, SheetHeader, SheetTitle } from "../ui/sheet"; -import { cn } from "@/lib/utils"; -import { - type AppNotification, - type NotificationType, - type RingColor, -} from "../../stores/notificationStore"; - -const STATUS_VARIANT: Record> = { - red: "danger", - amber: "warning", - cyan: "info", - green: "success", -}; - -const TYPE_LABEL: Record = { - story_blocked: "Blocked", - loop_completed: "Completed", - rate_limited: "Rate limit", - review_comment: "Review", - loop_error: "Error", - story_completed: "Story done", -}; type NotificationPanelLayoutProps = { notifications: AppNotification[]; @@ -45,26 +26,40 @@ function formatTimeAgo(timestamp: number): string { function NotificationRow({ notification, + ringVariantMap, + typeLabelMap, onSelectNotification, }: { notification: AppNotification; + ringVariantMap: Record; + typeLabelMap: Record; onSelectNotification: (notification: AppNotification) => void; }) { return ( @@ 
-62,10 +66,12 @@ export function WindowControls({ > {isMaximized ? ( + Restore ) : ( + Maximize + Close diff --git a/src/components/ui/alert-dialog.tsx b/src/components/ui/alert-dialog.tsx index 251c933..11d1286 100644 --- a/src/components/ui/alert-dialog.tsx +++ b/src/components/ui/alert-dialog.tsx @@ -1,7 +1,7 @@ "use client"; -import * as React from "react"; import * as AlertDialogPrimitive from "@radix-ui/react-alert-dialog"; +import * as React from "react"; import { cn } from "@/lib/utils"; import { buttonVariants } from "./button"; @@ -17,7 +17,7 @@ const AlertDialogOverlay = React.forwardRef< ) => ( +const AlertDialogHeader = ({ className, ...props }: React.HTMLAttributes) => (
    ); -const AlertDialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( +const AlertDialogFooter = ({ className, ...props }: React.HTMLAttributes) => (
    (({ className, ...props }, ref) => ( )); @@ -78,7 +72,7 @@ const AlertDialogDescription = React.forwardRef< >(({ className, ...props }, ref) => ( )); diff --git a/src/components/ui/badge.tsx b/src/components/ui/badge.tsx index dc9f70b..244e91f 100644 --- a/src/components/ui/badge.tsx +++ b/src/components/ui/badge.tsx @@ -1,10 +1,10 @@ -import * as React from "react"; import { cva, type VariantProps } from "class-variance-authority"; +import type * as React from "react"; import { cn } from "@/lib/utils"; const badgeVariants = cva( - "inline-flex items-center rounded-full border px-2 py-0.5 text-[11px] font-sans font-semibold leading-none transition-colors", + "inline-flex items-center rounded-full border px-2 py-0.5 font-sans font-semibold text-[11px] leading-none transition-colors", { variants: { variant: { @@ -53,16 +53,10 @@ const badgeVariants = cva( }, ); -export type BadgeProps = React.HTMLAttributes & - VariantProps; +export type BadgeProps = React.HTMLAttributes & VariantProps; function Badge({ className, variant, emphasis, ...props }: BadgeProps) { - return ( - - ); + return ; } export { Badge, badgeVariants }; diff --git a/src/components/ui/button.tsx b/src/components/ui/button.tsx index 40e3704..f050525 100644 --- a/src/components/ui/button.tsx +++ b/src/components/ui/button.tsx @@ -1,17 +1,17 @@ -import * as React from "react"; import { cva, type VariantProps } from "class-variance-authority"; +import * as React from "react"; import { cn } from "@/lib/utils"; const buttonVariants = cva( - "inline-flex items-center justify-center gap-2 rounded-md border border-transparent text-sm font-sans font-medium transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 focus-visible:ring-offset-background disabled:pointer-events-none disabled:opacity-50", + "inline-flex items-center justify-center gap-2 rounded-md border border-transparent font-medium font-sans text-sm transition-colors 
focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 focus-visible:ring-offset-background disabled:pointer-events-none disabled:opacity-50", { variants: { variant: { primary: "bg-primary text-primary-foreground hover:bg-primary/80", - secondary: "bg-elevated text-text border-border hover:bg-surface", + secondary: "border-border bg-elevated text-text hover:bg-surface", ghost: "bg-transparent text-text-muted hover:bg-elevated/70 hover:text-text", - outline: "bg-transparent text-text border-border hover:bg-elevated/70", + outline: "border-border bg-transparent text-text hover:bg-elevated/70", destructive: "bg-destructive text-destructive-foreground hover:bg-destructive/85", }, size: { diff --git a/src/components/ui/card.tsx b/src/components/ui/card.tsx index a545c6a..013a769 100644 --- a/src/components/ui/card.tsx +++ b/src/components/ui/card.tsx @@ -1,5 +1,5 @@ -import * as React from "react"; import { cva, type VariantProps } from "class-variance-authority"; +import * as React from "react"; import { cn } from "@/lib/utils"; @@ -16,18 +16,11 @@ const cardVariants = cva("rounded-lg border text-text shadow-sm", { }, }); -export type CardProps = React.HTMLAttributes & - VariantProps; +export type CardProps = React.HTMLAttributes & VariantProps; const Card = React.forwardRef( ({ className, variant, ...props }, ref) => { - return ( -
    - ); + return
    ; }, ); Card.displayName = "Card"; @@ -39,7 +32,7 @@ const CardHeader = React.forwardRef( return (
    ); @@ -54,7 +47,7 @@ const CardTitle = React.forwardRef( return (

    ); @@ -64,18 +57,17 @@ CardTitle.displayName = "CardTitle"; export type CardDescriptionProps = React.HTMLAttributes; -const CardDescription = React.forwardRef< - HTMLParagraphElement, - CardDescriptionProps ->(({ className, ...props }, ref) => { - return ( -

    - ); -}); +const CardDescription = React.forwardRef( + ({ className, ...props }, ref) => { + return ( +

    + ); + }, +); CardDescription.displayName = "CardDescription"; export type CardContentProps = React.HTMLAttributes; @@ -94,7 +86,7 @@ const CardFooter = React.forwardRef( return (

    ); diff --git a/src/components/ui/command.tsx b/src/components/ui/command.tsx index ff940ea..f689952 100644 --- a/src/components/ui/command.tsx +++ b/src/components/ui/command.tsx @@ -1,8 +1,8 @@ "use client"; -import * as React from "react"; import { Command as CommandPrimitive } from "cmdk"; import { Search } from "lucide-react"; +import * as React from "react"; import { cn } from "@/lib/utils"; import { Dialog, DialogContent, DialogDescription, DialogTitle } from "./dialog"; @@ -13,7 +13,10 @@ const Command = React.forwardRef< >(({ className, ...props }, ref) => ( )); @@ -26,7 +29,7 @@ const CommandDialog = ({ children, ...props }: CommandDialogProps) => ( Command palette Search and run available actions. - + {children} @@ -37,9 +40,13 @@ const CommandInput = React.forwardRef< React.ElementRef, React.ComponentPropsWithoutRef >(({ className, ...props }, ref) => ( -
    +
    - +
    )); CommandInput.displayName = CommandPrimitive.Input.displayName; @@ -48,21 +55,35 @@ const CommandList = React.forwardRef< React.ElementRef, React.ComponentPropsWithoutRef >(({ className, ...props }, ref) => ( - + )); CommandList.displayName = CommandPrimitive.List.displayName; const CommandEmpty = React.forwardRef< React.ElementRef, React.ComponentPropsWithoutRef ->((props, ref) => ); +>((props, ref) => ( + +)); CommandEmpty.displayName = CommandPrimitive.Empty.displayName; const CommandGroup = React.forwardRef< React.ElementRef, React.ComponentPropsWithoutRef >(({ className, ...props }, ref) => ( - + )); CommandGroup.displayName = CommandPrimitive.Group.displayName; @@ -70,7 +91,11 @@ const CommandSeparator = React.forwardRef< React.ElementRef, React.ComponentPropsWithoutRef >(({ className, ...props }, ref) => ( - + )); CommandSeparator.displayName = CommandPrimitive.Separator.displayName; @@ -78,12 +103,16 @@ const CommandItem = React.forwardRef< React.ElementRef, React.ComponentPropsWithoutRef >(({ className, ...props }, ref) => ( - + )); CommandItem.displayName = CommandPrimitive.Item.displayName; const CommandShortcut = ({ className, ...props }: React.HTMLAttributes) => ( - + ); CommandShortcut.displayName = "CommandShortcut"; diff --git a/src/components/ui/dialog.tsx b/src/components/ui/dialog.tsx index f7c86ab..c5fb255 100644 --- a/src/components/ui/dialog.tsx +++ b/src/components/ui/dialog.tsx @@ -1,7 +1,7 @@ "use client"; -import * as React from "react"; import * as DialogPrimitive from "@radix-ui/react-dialog"; +import * as React from "react"; import { cn } from "@/lib/utils"; @@ -17,7 +17,7 @@ const DialogOverlay = React.forwardRef< ) => ( +const DialogHeader = ({ className, ...props }: React.HTMLAttributes) => (
    ); -const DialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( +const DialogFooter = ({ className, ...props }: React.HTMLAttributes) => (
    (({ className, ...props }, ref) => ( )); @@ -78,7 +72,7 @@ const DialogDescription = React.forwardRef< >(({ className, ...props }, ref) => ( )); diff --git a/src/components/ui/dropdown-menu.tsx b/src/components/ui/dropdown-menu.tsx index 82c0ec1..88e54aa 100644 --- a/src/components/ui/dropdown-menu.tsx +++ b/src/components/ui/dropdown-menu.tsx @@ -1,8 +1,8 @@ "use client"; -import * as React from "react"; import * as DropdownMenuPrimitive from "@radix-ui/react-dropdown-menu"; import { Check, ChevronRight, Circle } from "lucide-react"; +import * as React from "react"; import { cn } from "@/lib/utils"; @@ -41,7 +41,7 @@ const DropdownMenuSubContent = React.forwardRef< (({ className, inset, ...props }, ref) => ( )); @@ -148,12 +148,16 @@ const DropdownMenuSeparator = React.forwardRef< React.ElementRef, React.ComponentPropsWithoutRef >(({ className, ...props }, ref) => ( - + )); DropdownMenuSeparator.displayName = DropdownMenuPrimitive.Separator.displayName; const DropdownMenuShortcut = ({ className, ...props }: React.HTMLAttributes) => ( - + ); DropdownMenuShortcut.displayName = "DropdownMenuShortcut"; diff --git a/src/components/ui/empty.tsx b/src/components/ui/empty.tsx index 09596a7..356f1d2 100644 --- a/src/components/ui/empty.tsx +++ b/src/components/ui/empty.tsx @@ -1,5 +1,5 @@ -import * as React from "react"; import { cva, type VariantProps } from "class-variance-authority"; +import type * as React from "react"; import { cn } from "@/lib/utils"; @@ -57,9 +57,9 @@ function Empty({ > {icon ?
    {icon}
    : null}
    -

    {title}

    +

    {title}

    {description ? ( -

    +

    {description}

    ) : null} diff --git a/src/components/ui/field.tsx b/src/components/ui/field.tsx index de020bf..6ece958 100644 --- a/src/components/ui/field.tsx +++ b/src/components/ui/field.tsx @@ -1,14 +1,14 @@ -import * as React from "react"; import { cva } from "class-variance-authority"; +import * as React from "react"; import { cn } from "@/lib/utils"; import { Label } from "./label"; const fieldControlVariants = cva( - "w-full rounded-md border border-border bg-surface px-3 py-2 text-sm font-sans text-text shadow-sm transition-[border-color,box-shadow,color,background-color] outline-none placeholder:text-text-dim focus-visible:border-primary focus-visible:ring-2 focus-visible:ring-ring/40 focus-visible:ring-offset-2 focus-visible:ring-offset-background disabled:cursor-not-allowed disabled:border-border/60 disabled:bg-elevated disabled:text-text-dim aria-[invalid=true]:border-destructive aria-[invalid=true]:focus-visible:border-destructive aria-[invalid=true]:focus-visible:ring-destructive/30", + "w-full rounded-md border border-border bg-surface px-3 py-2 font-sans text-sm text-text shadow-sm outline-none transition-[border-color,box-shadow,color,background-color] placeholder:text-text-dim focus-visible:border-primary focus-visible:ring-2 focus-visible:ring-ring/40 focus-visible:ring-offset-2 focus-visible:ring-offset-background disabled:cursor-not-allowed disabled:border-border/60 disabled:bg-elevated disabled:text-text-dim aria-[invalid=true]:border-destructive aria-[invalid=true]:focus-visible:border-destructive aria-[invalid=true]:focus-visible:ring-destructive/30", ); -const fieldMessageVariants = cva("text-xs font-sans leading-relaxed", { +const fieldMessageVariants = cva("font-sans text-xs leading-relaxed", { variants: { tone: { default: "text-text-muted", @@ -49,9 +49,9 @@ export function getFieldControlProps({ ariaInvalid === undefined ? 
invalid : ariaInvalid === true || - ariaInvalid === "true" || - ariaInvalid === "grammar" || - ariaInvalid === "spelling"; + ariaInvalid === "true" || + ariaInvalid === "grammar" || + ariaInvalid === "spelling"; return { disabled, @@ -77,13 +77,7 @@ export type FieldMessageProps = React.HTMLAttributes & { const FieldMessage = React.forwardRef( ({ className, tone = "default", ...props }, ref) => { - return ( -
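Reviewer note: the `aria-invalid` coercion in `getFieldControlProps` above (where `true`, `"true"`, `"grammar"`, and `"spelling"` all count as invalid, with a fallback to the field-level flag) can be isolated as a pure helper. This is a sketch for illustration, not an actual export of field.tsx; the helper name is hypothetical.

```typescript
// Hypothetical standalone version of the aria-invalid check from field.tsx.
// React's AriaAttributes type allows booleans plus these string tokens.
type AriaInvalid = boolean | "false" | "true" | "grammar" | "spelling" | undefined;

function isAriaInvalid(ariaInvalid: AriaInvalid, fallback: boolean): boolean {
  // No explicit value from the caller: defer to the field-level invalid flag.
  if (ariaInvalid === undefined) return fallback;
  // Mirrors the diff: true and the non-"false" ARIA tokens all mean invalid.
  return (
    ariaInvalid === true ||
    ariaInvalid === "true" ||
    ariaInvalid === "grammar" ||
    ariaInvalid === "spelling"
  );
}
```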

    - ); + return

    ; }, ); @@ -135,19 +129,18 @@ const Field = React.forwardRef( generatedId; const message = errorText ?? helperText; const messageId = message ? `${controlId}-message` : undefined; - const enhancedChild = - childElement - ? React.cloneElement(childElement, { - id: controlId, - ...getFieldControlProps({ - describedBy: messageId, - disabled: childElement.props.disabled ?? disabled, - invalid, - ariaDescribedBy: childElement.props["aria-describedby"], - ariaInvalid: childElement.props["aria-invalid"], - }), - }) - : child; + const enhancedChild = childElement + ? React.cloneElement(childElement, { + id: controlId, + ...getFieldControlProps({ + describedBy: messageId, + disabled: childElement.props.disabled ?? disabled, + invalid, + ariaDescribedBy: childElement.props["aria-describedby"], + ariaInvalid: childElement.props["aria-invalid"], + }), + }) + : child; return ( diff --git a/src/components/ui/label.tsx b/src/components/ui/label.tsx index 15804db..247e499 100644 --- a/src/components/ui/label.tsx +++ b/src/components/ui/label.tsx @@ -1,10 +1,10 @@ -import * as React from "react"; import { cva, type VariantProps } from "class-variance-authority"; +import * as React from "react"; import { cn } from "@/lib/utils"; const labelVariants = cva( - "flex items-center gap-2 text-xs font-sans font-medium uppercase tracking-[0.16em] text-text-muted", + "flex items-center gap-2 font-medium font-sans text-text-muted text-xs uppercase tracking-[0.16em]", { variants: { disabled: { @@ -30,21 +30,39 @@ export type LabelProps = React.LabelHTMLAttributes & }; const Label = React.forwardRef( - ({ children, className, disabled, invalid, optionalText, required, ...props }, ref) => { - return ( - + + ); + + if (htmlFor) { + return ( + + ); + } + + return ( +

    } + className={cn(labelVariants({ disabled, invalid }), className)} + > + {content} +
    ); }, ); diff --git a/src/components/ui/progress.tsx b/src/components/ui/progress.tsx index a2451d8..1349996 100644 --- a/src/components/ui/progress.tsx +++ b/src/components/ui/progress.tsx @@ -1,20 +1,23 @@ -import * as React from "react"; import { cva, type VariantProps } from "class-variance-authority"; +import type * as React from "react"; import { cn } from "@/lib/utils"; -const progressIndicatorVariants = cva("h-full rounded-full transition-[width,background-color] duration-300", { - variants: { - tone: { - default: "bg-primary", - info: "bg-running", - danger: "bg-destructive", +const progressIndicatorVariants = cva( + "h-full rounded-full transition-[width,background-color] duration-300", + { + variants: { + tone: { + default: "bg-primary", + info: "bg-running", + danger: "bg-destructive", + }, + }, + defaultVariants: { + tone: "default", }, }, - defaultVariants: { - tone: "default", - }, -}); +); export type ProgressProps = React.HTMLAttributes & VariantProps & { @@ -57,10 +60,10 @@ function Progress({ return (
    {label || hint || showValue ? ( -
    +
    {label ? ( -

    +

    {label}

    ) : null} diff --git a/src/components/ui/resizable.tsx b/src/components/ui/resizable.tsx index 95a0824..e0ec93e 100644 --- a/src/components/ui/resizable.tsx +++ b/src/components/ui/resizable.tsx @@ -49,9 +49,11 @@ function resizePanels(sizes: number[], panels: PanelConfig[], handleIndex: numbe }); } -export type ResizablePanelGroupProps = React.HTMLAttributes & { direction?: Direction }; +export type ResizablePanelGroupProps = React.HTMLAttributes & { + direction?: Direction; +}; export type ResizablePanelProps = React.HTMLAttributes & PanelConfig; -export type ResizableHandleProps = React.HTMLAttributes & { +export type ResizableHandleProps = React.ButtonHTMLAttributes & { step?: number; withGrip?: boolean; }; @@ -76,9 +78,13 @@ const ResizablePanel = React.forwardRef( ResizablePanel.displayName = "ResizablePanel"; -const ResizableHandle = React.forwardRef( - ({ className, handleIndex = 0, onKeyDown, onPointerDown, step = 5, withGrip = true, ...props }, ref) => { - const { containerRef, direction, panels, setSizes, sizes } = useResizableContext("ResizableHandle"); +const ResizableHandle = React.forwardRef( + ( + { className, handleIndex = 0, onKeyDown, onPointerDown, step = 5, withGrip = true, ...props }, + ref, + ) => { + const { containerRef, direction, panels, setSizes, sizes } = + useResizableContext("ResizableHandle"); const startRef = React.useRef<{ point: number; sizes: number[] } | null>(null); React.useEffect(() => { @@ -109,10 +115,15 @@ const ResizableHandle = React.forwardRef( }, [containerRef, direction, handleIndex, panels, setSizes]); return ( -
    { onKeyDown?.(event); if (event.defaultPrevented) return; @@ -127,18 +138,28 @@ const ResizableHandle = React.forwardRef( onPointerDown={(event) => { onPointerDown?.(event); if (event.defaultPrevented) return; - startRef.current = { point: direction === "horizontal" ? event.clientX : event.clientY, sizes }; + startRef.current = { + point: direction === "horizontal" ? event.clientX : event.clientY, + sizes, + }; document.body.style.cursor = direction === "horizontal" ? "col-resize" : "row-resize"; document.body.style.userSelect = "none"; event.currentTarget.setPointerCapture(event.pointerId); }} - role="separator" - tabIndex={0} data-direction={direction} {...props} > - {withGrip ? : null} -
    + {withGrip ? ( + + ) : null} + ); }, ); @@ -149,19 +170,40 @@ const ResizablePanelGroup = React.forwardRef { const containerRef = React.useRef(null); const panels = React.useMemo( - () => React.Children.toArray(children).flatMap((child) => !React.isValidElement(child) || child.type !== ResizablePanel ? [] : [{ defaultSize: child.props.defaultSize, maxSize: child.props.maxSize, minSize: child.props.minSize }]), + () => + React.Children.toArray(children).flatMap((child) => + !React.isValidElement(child) || child.type !== ResizablePanel + ? [] + : [ + { + defaultSize: child.props.defaultSize, + maxSize: child.props.maxSize, + minSize: child.props.minSize, + }, + ], + ), [children], ); const [sizes, setSizes] = React.useState(() => buildSizes(panels)); - React.useEffect(() => setSizes((current) => current.length === panels.length ? current : buildSizes(panels)), [panels]); + React.useEffect( + () => + setSizes((current) => (current.length === panels.length ? current : buildSizes(panels))), + [panels], + ); let panelIndex = 0; let handleIndex = 0; const items = React.Children.map(children, (child) => { if (!React.isValidElement(child)) return child; - if (child.type === ResizablePanel) return React.cloneElement(child as React.ReactElement, { panelIndex: panelIndex++ }); - if (child.type === ResizableHandle) return React.cloneElement(child as React.ReactElement, { handleIndex: handleIndex++ }); + if (child.type === ResizablePanel) + return React.cloneElement(child as React.ReactElement, { + panelIndex: panelIndex++, + }); + if (child.type === ResizableHandle) + return React.cloneElement(child as React.ReactElement, { + handleIndex: handleIndex++, + }); return child; }); @@ -172,7 +214,11 @@ const ResizablePanelGroup = React.forwardRef diff --git a/src/components/ui/scroll-area.tsx b/src/components/ui/scroll-area.tsx index e2807bb..b5141ac 100644 --- a/src/components/ui/scroll-area.tsx +++ b/src/components/ui/scroll-area.tsx @@ -1,5 +1,5 @@ -import * as React from 
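Reviewer note: the body of `resizePanels` is not visible in this hunk, so the exact clamping rules are unknown. A plausible sketch of the two-panel transfer the handle logic implies — delta moved between the panels adjacent to `handleIndex`, bounded by each neighbor's `minSize`/`maxSize`, total preserved — might look like the following. The function name and default bounds here are assumptions.

```typescript
// Hedged sketch only: resizable.tsx's real resizePanels is not shown in this
// diff, so the defaults (min 10 / max 90) and name are illustrative.
type PanelBounds = { minSize?: number; maxSize?: number };

function resizeBetween(
  sizes: number[],
  panels: PanelBounds[],
  handleIndex: number,
  delta: number,
): number[] {
  const before = handleIndex;
  const after = handleIndex + 1;
  const beforeMin = panels[before]?.minSize ?? 10;
  const beforeMax = panels[before]?.maxSize ?? 90;
  const afterMin = panels[after]?.minSize ?? 10;
  const afterMax = panels[after]?.maxSize ?? 90;
  // The handle can only travel as far as BOTH neighbors' bounds allow.
  const maxGrow = Math.min(beforeMax - sizes[before], sizes[after] - afterMin);
  const maxShrink = Math.min(sizes[before] - beforeMin, afterMax - sizes[after]);
  const applied = Math.max(-maxShrink, Math.min(maxGrow, delta));
  // Transfer the clamped delta so the overall total stays constant.
  return sizes.map((size, index) => {
    if (index === before) return size + applied;
    if (index === after) return size - applied;
    return size;
  });
}
```

The same helper would back both the pointer-move path (delta from `event.clientX`/`clientY` against `startRef`) and the keyboard path (delta of `±step`).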
"react"; import { cva, type VariantProps } from "class-variance-authority"; +import * as React from "react"; import { cn } from "@/lib/utils"; @@ -7,18 +7,24 @@ export type ScrollAreaProps = React.HTMLAttributes; const ScrollArea = React.forwardRef( ({ className, ...props }, ref) => { - return
    ; + return ( +
    + ); }, ); ScrollArea.displayName = "ScrollArea"; -const scrollViewportVariants = cva("h-full w-full min-h-0 min-w-0 overscroll-contain", { +const scrollViewportVariants = cva("h-full min-h-0 w-full min-w-0 overscroll-contain", { variants: { orientation: { both: "overflow-auto", horizontal: "overflow-x-auto overflow-y-hidden", - vertical: "overflow-x-hidden overflow-y-auto", + vertical: "overflow-y-auto overflow-x-hidden", }, padding: { none: "", diff --git a/src/components/ui/select.tsx b/src/components/ui/select.tsx index fee1f32..f2e6df2 100644 --- a/src/components/ui/select.tsx +++ b/src/components/ui/select.tsx @@ -1,6 +1,6 @@ -import * as React from "react"; import * as SelectPrimitive from "@radix-ui/react-select"; import { Check, ChevronDown, ChevronUp } from "lucide-react"; +import * as React from "react"; import { cn } from "@/lib/utils"; const Select = SelectPrimitive.Root; @@ -14,7 +14,7 @@ const SelectTrigger = React.forwardRef< span]:line-clamp-1", className, @@ -33,7 +33,11 @@ const SelectScrollUpButton = React.forwardRef< React.ComponentRef, React.ComponentPropsWithoutRef >(({ className, ...props }, ref) => ( - + )); @@ -43,7 +47,11 @@ const SelectScrollDownButton = React.forwardRef< React.ComponentRef, React.ComponentPropsWithoutRef >(({ className, ...props }, ref) => ( - + )); @@ -58,11 +66,12 @@ const SelectContent = React.forwardRef< ref={ref} className={cn( "relative z-50 max-h-60 min-w-[8rem] overflow-hidden rounded-md border border-border bg-surface text-text shadow-md", - "data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0", + "data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:animate-out data-[state=open]:animate-in", "data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95", "data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2", "data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2", 
- position === "popper" && "data-[side=bottom]:translate-y-1 data-[side=left]:-translate-x-1 data-[side=right]:translate-x-1 data-[side=top]:-translate-y-1", + position === "popper" && + "data-[side=left]:-translate-x-1 data-[side=right]:translate-x-1 data-[side=bottom]:translate-y-1 data-[side=top]:-translate-y-1", className, )} position={position} @@ -70,7 +79,11 @@ const SelectContent = React.forwardRef< > {children} @@ -84,7 +97,11 @@ const SelectLabel = React.forwardRef< React.ComponentRef, React.ComponentPropsWithoutRef >(({ className, ...props }, ref) => ( - + )); SelectLabel.displayName = SelectPrimitive.Label.displayName; @@ -95,7 +112,7 @@ const SelectItem = React.forwardRef< , React.ComponentPropsWithoutRef >(({ className, ...props }, ref) => ( - + )); SelectSeparator.displayName = SelectPrimitive.Separator.displayName; diff --git a/src/components/ui/separator.tsx b/src/components/ui/separator.tsx index b37487e..f9bd6c7 100644 --- a/src/components/ui/separator.tsx +++ b/src/components/ui/separator.tsx @@ -1,5 +1,5 @@ -import * as React from "react"; import { cva, type VariantProps } from "class-variance-authority"; +import * as React from "react"; import { cn } from "@/lib/utils"; @@ -23,16 +23,7 @@ export type SeparatorProps = React.HTMLAttributes & }; const Separator = React.forwardRef( - ( - { - className, - orientation = "horizontal", - decorative = true, - tone, - ...props - }, - ref, - ) => { + ({ className, orientation = "horizontal", decorative = true, tone, ...props }, ref) => { return (
    {children} - + Close @@ -70,17 +70,11 @@ const SheetContent = React.forwardRef< )); SheetContent.displayName = DialogPrimitive.Content.displayName; -const SheetHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( +const SheetHeader = ({ className, ...props }: React.HTMLAttributes) => (
    ); -const SheetFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( +const SheetFooter = ({ className, ...props }: React.HTMLAttributes) => (
    (({ className, ...props }, ref) => ( )); @@ -105,7 +99,7 @@ const SheetDescription = React.forwardRef< >(({ className, ...props }, ref) => ( )); diff --git a/src/components/ui/sidebar.tsx b/src/components/ui/sidebar.tsx index 0b71a2d..4f14edd 100644 --- a/src/components/ui/sidebar.tsx +++ b/src/components/ui/sidebar.tsx @@ -1,9 +1,13 @@ -import * as React from "react"; import { cva, type VariantProps } from "class-variance-authority"; +import * as React from "react"; import { cn } from "@/lib/utils"; -type SidebarContextValue = { collapsed: boolean; setCollapsed: (value: boolean) => void; toggle: () => void }; +type SidebarContextValue = { + collapsed: boolean; + setCollapsed: (value: boolean) => void; + toggle: () => void; +}; const SidebarContext = React.createContext(null); export function useSidebar() { @@ -18,62 +22,151 @@ export type SidebarProviderProps = React.PropsWithChildren<{ onCollapsedChange?: (collapsed: boolean) => void; }>; -function SidebarProvider({ children, collapsed, defaultCollapsed = false, onCollapsedChange }: SidebarProviderProps) { +function SidebarProvider({ + children, + collapsed, + defaultCollapsed = false, + onCollapsedChange, +}: SidebarProviderProps) { const [uncontrolledCollapsed, setUncontrolledCollapsed] = React.useState(defaultCollapsed); const currentCollapsed = collapsed ?? 
uncontrolledCollapsed; - const setCollapsed = React.useCallback((nextCollapsed: boolean) => { - if (collapsed === undefined) setUncontrolledCollapsed(nextCollapsed); - onCollapsedChange?.(nextCollapsed); - }, [collapsed, onCollapsedChange]); + const setCollapsed = React.useCallback( + (nextCollapsed: boolean) => { + if (collapsed === undefined) setUncontrolledCollapsed(nextCollapsed); + onCollapsedChange?.(nextCollapsed); + }, + [collapsed, onCollapsedChange], + ); - return setCollapsed(!currentCollapsed) }}>{children}; + return ( + setCollapsed(!currentCollapsed), + }} + > + {children} + + ); } -const sidebarVariants = cva("flex h-full shrink-0 flex-col border-sidebar-border bg-sidebar text-sidebar-foreground transition-[width] duration-200", { - variants: { - collapsed: { true: "w-16", false: "w-72" }, - side: { left: "border-r", right: "order-last border-l" }, +const sidebarVariants = cva( + "flex h-full shrink-0 flex-col border-sidebar-border bg-sidebar text-sidebar-foreground transition-[width] duration-200", + { + variants: { + collapsed: { true: "w-16", false: "w-72" }, + side: { left: "border-r", right: "order-last border-l" }, + }, + defaultVariants: { collapsed: false, side: "left" }, }, - defaultVariants: { collapsed: false, side: "left" }, -}); +); -const sidebarMenuButtonVariants = cva("flex w-full items-center gap-3 rounded-md border px-3 py-2 text-left text-sm font-sans transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-sidebar-ring focus-visible:ring-offset-2 focus-visible:ring-offset-sidebar", { - variants: { - active: { - true: "border-sidebar-primary/20 bg-sidebar-primary/10 text-sidebar-primary", - false: "border-transparent text-sidebar-foreground/80 hover:bg-sidebar-accent hover:text-sidebar-foreground", +const sidebarMenuButtonVariants = cva( + "flex w-full items-center gap-3 rounded-md border px-3 py-2 text-left font-sans text-sm transition-colors focus-visible:outline-none focus-visible:ring-2 
focus-visible:ring-sidebar-ring focus-visible:ring-offset-2 focus-visible:ring-offset-sidebar", + { + variants: { + active: { + true: "border-sidebar-primary/20 bg-sidebar-primary/10 text-sidebar-primary", + false: + "border-transparent text-sidebar-foreground/80 hover:bg-sidebar-accent hover:text-sidebar-foreground", + }, + collapsed: { true: "justify-center px-2", false: "" }, }, - collapsed: { true: "justify-center px-2", false: "" }, + defaultVariants: { active: false, collapsed: false }, }, - defaultVariants: { active: false, collapsed: false }, -}); +); export type SidebarLayoutProps = React.HTMLAttributes; -export type SidebarProps = React.HTMLAttributes & VariantProps; -export type SidebarMenuButtonProps = React.ButtonHTMLAttributes & VariantProps; +export type SidebarProps = React.HTMLAttributes & + VariantProps; +export type SidebarMenuButtonProps = React.ButtonHTMLAttributes & + VariantProps; -const SidebarLayout = React.forwardRef(({ className, ...props }, ref) => { - return
    ; -}); +const SidebarLayout = React.forwardRef( + ({ className, ...props }, ref) => { + return ( +
    + ); + }, +); SidebarLayout.displayName = "SidebarLayout"; -const Sidebar = React.forwardRef(({ className, collapsed, side, ...props }, ref) => { - const context = useSidebar(); - const currentCollapsed = collapsed ?? context.collapsed; - return
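Reviewer note: the `SidebarProvider` hunk above implements the standard controlled/uncontrolled pattern — `collapsed ?? uncontrolledCollapsed` resolves the current value, `setCollapsed` only mutates internal state when no controlled prop is supplied, and `onCollapsedChange` fires either way. A framework-free sketch of that contract (the store shape here is illustrative, not part of the codebase):

```typescript
// Framework-free sketch of SidebarProvider's controlled/uncontrolled logic;
// createCollapsedStore is a hypothetical name for illustration only.
type CollapsedStore = {
  current: () => boolean;
  set: (next: boolean) => void;
  toggle: () => void;
};

function createCollapsedStore(
  controlled: boolean | undefined,
  defaultCollapsed: boolean,
  onChange?: (next: boolean) => void,
): CollapsedStore {
  let uncontrolled = defaultCollapsed;
  // A provided controlled value always wins over internal state.
  const current = () => controlled ?? uncontrolled;
  const set = (next: boolean) => {
    // Only uncontrolled usage mutates internal state; the callback
    // fires in both modes so a controlled parent can react.
    if (controlled === undefined) uncontrolled = next;
    onChange?.(next);
  };
  return { current, set, toggle: () => set(!current()) };
}
```

In controlled mode the store's own `set` is inert (state lives in the parent), which is exactly why the provider must still invoke `onCollapsedChange` unconditionally.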