This is the execution plan to add opt-in anonymous usage tracking to Flow with near-zero runtime overhead.
- Default off (or unknown until prompt), explicit user opt-in.
- No sensitive data (no prompts, no command values, no paths, no repo names).
- Command runtime must not block on network.
- Ingest through the basetrace API and store in a separate ClickHouse instance.
Event kind: `flow.command`

Allowed fields:
- `install_id` (random UUID, generated locally)
- `command_path` (e.g. `commit`, `skills.sync`, `setup.deploy`)
- `success` (true/false)
- `exit_code` (integer or null)
- `duration_ms` (integer)
- `flags_used` (flag names only; e.g. `["sync", "context"]`)
- `flow_version`
- `os`, `arch`
- `interactive` (true/false)
- `ci` (true/false)
- `project_fingerprint` (optional HMAC; never raw path/remote)
- `at` (timestamp)
Forbidden fields:
- prompts, command strings, args values, paths, repo URL/name, output.
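The allowed-fields schema above can be sketched as one Rust struct. This is a sketch only: the struct name `UsageEvent`, the `Option` choices, and the helper `sample_event` are assumptions, not the plan's final API.

```rust
// Sketch of the allowed-fields schema as a Rust struct. Names mirror
// the field list above; everything else here is an assumption.
#[derive(Debug, Clone)]
pub struct UsageEvent {
    pub install_id: String,            // random UUID, generated locally
    pub command_path: String,          // e.g. "skills.sync"
    pub success: bool,
    pub exit_code: Option<i32>,        // None when no exit code applies
    pub duration_ms: u64,
    pub flags_used: Vec<String>,       // flag names only, never values
    pub flow_version: String,
    pub os: String,
    pub arch: String,
    pub interactive: bool,
    pub ci: bool,
    pub project_fingerprint: Option<String>, // HMAC, never raw path/remote
    pub at: u64,                       // unix timestamp
}

// Hypothetical constructor used only to exercise the shape.
pub fn sample_event() -> UsageEvent {
    UsageEvent {
        install_id: "00000000-0000-4000-8000-000000000000".to_string(),
        command_path: "skills.sync".to_string(),
        success: true,
        exit_code: Some(0),
        duration_ms: 12,
        flags_used: vec!["sync".to_string(), "context".to_string()],
        flow_version: "0.0.0".to_string(),
        os: std::env::consts::OS.to_string(),
        arch: std::env::consts::ARCH.to_string(),
        interactive: true,
        ci: false,
        project_fingerprint: None,
        at: 0,
    }
}

fn main() {
    let ev = sample_event();
    assert!(ev.success);
    assert_eq!(ev.flags_used, vec!["sync", "context"]);
}
```

Note that every field is either a primitive, an enum-like flag, or an already-hashed value; nothing free-form from the user ever enters the struct.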
- Add `src/usage.rs`:
  - `UsageConfigState` (enabled/disabled/unknown, install_id, secret, last_prompt_at).
  - Local queue file: `~/.config/flow/usage-queue.jsonl`.
  - Append-only write API: `record_command_event(...)`.
  - Sanitize and normalize command path + flag names.
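The sanitize/normalize step could look like the sketch below: drop leading dashes and any `=value` part, then keep only an allowlisted character set, so no user-supplied value can leak into an event. The function names are assumptions, not the final API.

```rust
// Hedged sketch of flag-name / command-path sanitization. Only the
// name survives: leading dashes and anything after '=' are dropped,
// then characters outside [a-z0-9-_] are filtered out.
pub fn sanitize_name(raw: &str) -> String {
    raw.trim_start_matches('-')
        .split('=')
        .next()
        .unwrap_or("")
        .chars()
        .filter(|c| c.is_ascii_alphanumeric() || *c == '-' || *c == '_')
        .collect::<String>()
        .to_ascii_lowercase()
}

// Normalize a command path like ["skills", "sync"] -> "skills.sync".
pub fn normalize_command_path(parts: &[&str]) -> String {
    parts
        .iter()
        .map(|p| sanitize_name(p))
        .collect::<Vec<_>>()
        .join(".")
}

fn main() {
    assert_eq!(normalize_command_path(&["Skills", "Sync"]), "skills.sync");
    // The value after '=' never enters the event, only the flag name.
    assert_eq!(sanitize_name("--context=/home/me/repo"), "context");
}
```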
- Add config support in `src/config.rs`:
  - `[analytics]` config:
    - `enabled` (true/false, optional)
    - `endpoint` (default `http://127.0.0.1:7331/v1/trace`)
    - `sample_rate` (default `1.0`)
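As a sketch, the resulting `[analytics]` section in the user's config might look like this (defaults shown; the exact file location is whatever `src/config.rs` already loads):

```toml
[analytics]
enabled = true                                # optional; absent = unknown state
endpoint = "http://127.0.0.1:7331/v1/trace"   # base trace ingest endpoint
sample_rate = 1.0                             # 0.0 to 1.0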
- Hook command lifecycle in `src/main.rs`:
  - Capture start timestamp before dispatch.
  - On return/error, emit one event through `usage::record_command_event`.
  - Never fail the command if analytics fails.
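The lifecycle hook can be sketched as a wrapper around dispatch. `record_command_event` here is a stand-in that always fails, precisely to demonstrate the "never fail the command" rule; `run_with_usage_hook` is an assumed name, not the plan's API.

```rust
use std::time::Instant;

// Stand-in for the real usage API: always errors, simulating a
// broken queue file, so the test below proves the command outcome
// is untouched by analytics failures.
fn record_command_event(_path: &str, _success: bool, _ms: u128) -> Result<(), String> {
    Err("usage queue unwritable".to_string())
}

pub fn run_with_usage_hook(
    path: &str,
    cmd: impl FnOnce() -> Result<(), String>,
) -> Result<(), String> {
    let start = Instant::now();                 // capture start before dispatch
    let result = cmd();
    let duration_ms = start.elapsed().as_millis();
    // Best-effort emit: the analytics error is deliberately ignored.
    let _ = record_command_event(path, result.is_ok(), duration_ms);
    result
}

fn main() {
    // Command still succeeds although record_command_event always errors.
    assert!(run_with_usage_hook("tasks", || Ok(())).is_ok());
    // And a failing command still reports its own error, not an analytics one.
    assert!(run_with_usage_hook("tasks", || Err("boom".to_string())).is_err());
}
```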
- Add command group in `src/cli.rs` and handler in new `src/analytics.rs`:
  - `f analytics status`
  - `f analytics enable`
  - `f analytics disable`
  - `f analytics export`
  - `f analytics purge`
- Wire module exports in `src/lib.rs` and command dispatch in `src/main.rs`.
Validation:

```
cd ~/code/flow
cargo check
cargo run --bin f -- analytics status
```

- In `src/main.rs`, after first successful interactive command:
  - If state is `unknown` and non-CI, prompt once:
    - "Enable anonymous usage tracking to improve Flow? [y/N/later]"
  - Persist response to `~/.config/flow/analytics.toml`.
- Add env overrides:
  - `FLOW_ANALYTICS_FORCE=1` (self-test)
  - `FLOW_ANALYTICS_DISABLE=1` (hard off)
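The override precedence implied above (hard off beats self-test force, which beats the persisted state) can be sketched as a small pure function. The enum and function names are assumptions for illustration.

```rust
// Sketch of env-override precedence:
//   FLOW_ANALYTICS_DISABLE=1  >  FLOW_ANALYTICS_FORCE=1  >  persisted state.
#[derive(Debug, PartialEq)]
pub enum AnalyticsState {
    Enabled,
    Disabled,
    Unknown,
}

pub fn effective_state(
    disable_env: bool,   // FLOW_ANALYTICS_DISABLE=1 set
    force_env: bool,     // FLOW_ANALYTICS_FORCE=1 set
    persisted: AnalyticsState,
) -> AnalyticsState {
    if disable_env {
        return AnalyticsState::Disabled; // hard off always wins
    }
    if force_env {
        return AnalyticsState::Enabled; // self-test override
    }
    persisted // otherwise, whatever analytics.toml says
}

fn main() {
    assert_eq!(
        effective_state(true, true, AnalyticsState::Enabled),
        AnalyticsState::Disabled
    );
    assert_eq!(
        effective_state(false, true, AnalyticsState::Unknown),
        AnalyticsState::Enabled
    );
    assert_eq!(
        effective_state(false, false, AnalyticsState::Unknown),
        AnalyticsState::Unknown
    );
}
```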
Validation:

```
FLOW_ANALYTICS_FORCE=1 cargo run --bin f -- tasks
cargo run --bin f -- analytics status
```

- In `src/usage.rs`, add `flush_queue_async()`:
  - Background thread.
  - Small batches (50-200).
  - Short HTTP timeout (<= 500ms).
  - Retries with backoff.
- Upload target defaults to the base trace endpoint: `http://127.0.0.1:7331/v1/trace`.
- Add spool safety:
  - Max queue bytes (e.g. 10MB), oldest-drop policy.
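The oldest-drop policy can be sketched as below: when the JSONL queue exceeds the byte cap, drop whole lines from the front until it fits. This operates on in-memory lines for illustration; the real implementation would rewrite the spool file. `trim_queue` is an assumed name.

```rust
use std::collections::VecDeque;

// Sketch of oldest-drop spool trimming. Each queue entry is one JSONL
// line; its on-disk size is len + 1 for the trailing newline.
pub fn trim_queue(lines: Vec<String>, max_bytes: usize) -> Vec<String> {
    let mut total: usize = lines.iter().map(|l| l.len() + 1).sum();
    let mut lines = VecDeque::from(lines);
    while total > max_bytes {
        match lines.pop_front() {
            Some(oldest) => total -= oldest.len() + 1, // drop oldest event first
            None => break,
        }
    }
    lines.into()
}

fn main() {
    // Three 10-byte lines (9 chars + newline) = 30 bytes, cap at 20.
    let q = vec!["a".repeat(9), "b".repeat(9), "c".repeat(9)];
    let kept = trim_queue(q, 20);
    assert_eq!(kept.len(), 2);
    assert!(kept[0].starts_with('a') == false); // the oldest line was dropped
}
```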
Validation:

```
f analytics status
# run a few commands
f tasks
f skills list
# then flush
f analytics export
```

Implement using base doc `docs/flow-usage-tracking.md` (added in parallel).
- Extend `seqch` in `~/code/org/linsa/base/crates/seqch-cli/src/main.rs`:
  - New top-level area: `flow`
  - Commands:
    - `seqch flow commands --hours 24`
    - `seqch flow flags --hours 24`
    - `seqch flow failures --hours 24`
- Add starter SQL dashboards:
  - Command usage over time.
  - Adoption funnel (`unknown -> enabled`).
  - Failures by command.
- Enable only for yourself: `FLOW_ANALYTICS_FORCE=1`.
- Verify no sensitive fields in payload samples (`f analytics export`).
- Verify ingestion into the separate ClickHouse instance.
- After 3-7 days, turn the prompt on for all users (still opt-in).
- P50 added runtime overhead per command < 1ms, measured locally.
- Command success path unaffected by network or ingest failures.
- No sensitive strings in stored events (spot-check samples).
- Able to answer:
- top-used commands
- least-used commands
- failure hotspots by command path.