This documents the personal `L` wrapper that opens or resumes Codex sessions, with repo-specific query matching for `~/repos/openai/codex`.
Relevant files:
- fish entrypoint: `~/config/fish/fn.fish`
- resolver: `~/config/fish/scripts/codex-openai-session.ts`
- `L` with no args runs `f ai codex new`.
- `L <query>` targets stored Codex sessions for `~/repos/openai/codex`.
- On a successful match it runs `f ai codex resume <thread-id>` in that repo.
- On no match it exits non-zero and prints a short recent-session list instead of opening the wrong conversation.
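The wrapper's top-level behavior can be sketched as a small dispatch function. This is an illustrative sketch, not the actual `fn.fish` implementation; the `Action` type and function names are hypothetical.

```typescript
// Hypothetical dispatch for the L wrapper: no args starts a new session,
// any args become a resolver query. Names are illustrative only.
type Action =
  | { kind: "new" }                     // no args: fresh Codex session
  | { kind: "resolve"; query: string }; // args: match a stored session

function dispatch(args: string[]): Action {
  if (args.length === 0) return { kind: "new" };
  return { kind: "resolve", query: args.join(" ") };
}

// On a successful match, the resolver would emit this resume command;
// on no match it exits non-zero after printing recent sessions.
function resumeCommand(threadId: string): string[] {
  return ["f", "ai", "codex", "resume", threadId];
}
```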
`l` and `L` now also treat explicit recovery phrases as a separate lightweight path.
Examples:
- `see this convo in ...`
- `what was I doing in ...`
- `recover recent context`
- `continue the ... work`
For those prompts, the launcher first runs `f ai codex recover --summary-only --path <derived target> <prompt>`, then prepends the short recovery summary to the new prompt and opens the session in the derived target repo/workspace. Normal prompts do not pay this cost.
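Recovery-phrase detection can be expressed as a simple pattern check over the prompt. The pattern list below is reconstructed from the examples above; the real launcher's phrase list and matching rules may differ.

```typescript
// Illustrative recovery-phrase detector. Patterns mirror the documented
// examples; the actual launcher may use a different or longer list.
const RECOVERY_PATTERNS: RegExp[] = [
  /\bsee this convo in\b/i,
  /\bwhat was I doing in\b/i,
  /\brecover recent context\b/i,
  /\bcontinue the\b.*\bwork\b/i,
];

function isRecoveryPrompt(prompt: string): boolean {
  return RECOVERY_PATTERNS.some((re) => re.test(prompt));
}
```

Normal prompts fail every pattern, so they skip the summary step entirely.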
This path uses `codex app-server` instead of parsing `f ai codex list` output.
That matters because `thread/list` gives:
- exact `cwd` filtering for `~/repos/openai/codex`
- stable `updatedAt` ordering
- structured fields such as `id`, `name`, `preview`, `gitInfo`, and `cwd`
- pagination and optional server-side `searchTerm`
For this wrapper, exact repo scoping is the main win. It avoids mixing sessions from unrelated repos and avoids depending on Flow's imported session index.
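The repo scoping described above can be sketched with an assumed thread-summary shape. The interface below is inferred from the fields listed in this document; the actual app-server schema may differ in names and types (`updatedAt` as epoch milliseconds is an assumption).

```typescript
// Assumed shape of a thread/list entry, based on the fields named above.
interface ThreadSummary {
  id: string;
  name?: string;
  preview?: string;
  cwd: string;
  updatedAt: number; // assumed: epoch milliseconds
  gitInfo?: { branch?: string; sha?: string };
}

// Exact repo scoping: keep only threads whose cwd is the target repo,
// newest first, so unrelated repos never enter the candidate set.
function scopeToRepo(threads: ThreadSummary[], repo: string): ThreadSummary[] {
  return threads
    .filter((t) => t.cwd === repo)
    .sort((a, b) => b.updatedAt - a.updatedAt);
}
```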
- Spawn `codex app-server` with cwd set to `~/repos/openai/codex`.
- Send `initialize`, then `initialized`.
- Call `thread/list` with:
  - `cwd: ~/repos/openai/codex`
  - `archived: false`
  - `sortKey: updated_at`
- Use a small first fetch when possible:
  - latest query: 1
  - `after most recent active`: 2
  - text search query: 25
  - fallback scan: up to 100
- Resolve the query against returned threads.
- For textual queries, rerank the top shortlist with `thread/read includeTurns: true` using full turn text.
  - If summary fields miss completely, probe the most recent few threads by full turn text before giving up.
- Resume the chosen thread through `f ai codex resume <id>` in the Codex repo.
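The request-building side of this flow can be sketched as plain JSON-RPC message construction. The method names (`initialize`, `thread/list`) come from the steps above, but the exact parameter schema and the `limit` field are assumptions; the sizing heuristic is a condensed version of the table above.

```typescript
// Sketch of the lookup's JSON-RPC request objects; parameter names
// beyond those documented above (e.g. limit) are assumptions.
let nextId = 1;
function rpc(method: string, params?: unknown) {
  return { jsonrpc: "2.0", id: nextId++, method, params };
}

// Condensed first-fetch sizing from the list above.
function firstFetchSize(query: string): number {
  const q = query.trim();
  if (/^(last|latest|most recent active)$/i.test(q)) return 1;
  if (/^after most recent active$/i.test(q)) return 2;
  if (q.length > 0) return 25; // text search
  return 100; // fallback scan
}

function threadListRequest(repo: string, query: string) {
  return rpc("thread/list", {
    cwd: repo,
    archived: false,
    sortKey: "updated_at",
    limit: firstFetchSize(query),
  });
}
```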
The resolver is deterministic. It does not call a model.
Matching order:
- exact or unique thread id prefix
- relative query with `after` or `before`
- ordinal query such as `2`, `second`, `3rd`
- text ranking across `thread.name`, `thread.preview`, `thread.gitInfo.branch`, and `thread.gitInfo.sha`
- pure recency query such as `most recent active`
Examples:
- `L most recent active`
- `L session after most recent active`
- `L second`
- `L 019cca91`
- `L where does codex store`
- `L history.jsonl`
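A condensed sketch of the deterministic cascade, assuming a simplified thread shape: it skips the relative (`after`/`before`) and recency branches, and the scoring is illustrative rather than the resolver's real ranking.

```typescript
// Condensed deterministic matching: id prefix, then ordinal, then text
// ranking. Real resolver has more branches; weights are illustrative.
interface Thread { id: string; name?: string; preview?: string; branch?: string }

const ORDINALS: Record<string, number> = { first: 1, second: 2, third: 3 };

function resolve(query: string, threads: Thread[]): Thread | undefined {
  const q = query.trim().toLowerCase();
  // 1. exact or unique thread id prefix
  const byId = threads.filter((t) => t.id.startsWith(q));
  if (byId.length === 1) return byId[0];
  // 2. ordinal, only when the whole query is the ordinal
  const n =
    ORDINALS[q] ??
    (/^\d+(st|nd|rd|th)?$/.test(q) ? parseInt(q, 10) : undefined);
  if (n !== undefined) return threads[n - 1];
  // 3. text ranking across name / preview / branch fields
  const scored = threads
    .map((t) => ({
      t,
      score: [t.name, t.preview, t.branch]
        .filter((f): f is string => !!f)
        .filter((f) => f.toLowerCase().includes(q)).length,
    }))
    .filter((s) => s.score > 0)
    .sort((a, b) => b.score - a.score);
  return scored[0]?.t; // undefined means strict failure, never a silent fallback
}
```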
Important accuracy guardrails:
- `last` only means "latest session" when the rest of the query is otherwise empty after control words are removed.
- Bare numbers only become ordinals when the query reduces to just that number.
- Directional queries stay directional. If there is no next or previous match, the resolver fails instead of silently falling back to the latest session.
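The first two guardrails hinge on a query-reduction step. A minimal sketch, assuming a hypothetical control-word list (the real resolver's list is not documented here):

```typescript
// Illustrative query reduction behind the `last` and bare-number
// guardrails. The control-word set is an assumption.
const CONTROL_WORDS = new Set(["the", "session", "convo", "one"]);

function reduceQuery(query: string): string[] {
  return query
    .toLowerCase()
    .split(/\s+/)
    .filter((w) => w.length > 0 && !CONTROL_WORDS.has(w));
}

// `last` only means "latest session" when nothing else remains.
function isPureLast(query: string): boolean {
  const rest = reduceQuery(query);
  return rest.length === 1 && rest[0] === "last";
}

// A bare number only becomes an ordinal when the query reduces to it.
function asOrdinal(query: string): number | undefined {
  const rest = reduceQuery(query);
  return rest.length === 1 && /^\d+$/.test(rest[0])
    ? parseInt(rest[0], 10)
    : undefined;
}
```

So `the last session` resolves to the latest session, while a query like `last build failure` keeps `last` as ordinary text and falls through to text ranking.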
- It starts a fresh `codex app-server` process on every lookup. That is the main latency cost.
- `thread/list` `searchTerm` only filters extracted titles and is case-sensitive, so the helper still needs local fallback ranking.
- The second pass only reads a few candidate threads, so this is still a bounded heuristic rather than a full semantic search across all history.
- Relative anchor queries are strongest for latest or ordinal anchors and weaker for arbitrary natural-language anchors.
- Keep a long-lived local resolver daemon so `L` reuses one app-server connection instead of spawning a fresh process.
- Add a small local cache keyed by repo path with `id`, `updatedAt`, `name`, `preview`, and `gitInfo`, then refresh it opportunistically.
- If Codex exposes a stable reusable host transport beyond stdio for local clients, switch to that instead of process-per-query.
- Start naming important sessions with `thread/name/set`; exact names will beat fuzzy preview matching.
- If the wrapper ever controls session creation, persist higher-signal naming or metadata early instead of inferring from preview text later.
- Extend the second pass to handle arbitrary textual anchors inside `after ...` and `before ...` queries more deeply when the anchor match is still weak.
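The proposed per-repo cache could look like the sketch below. Field names mirror the `thread/list` fields above; the TTL, the stale-but-served read, and the class shape are all design assumptions, not an existing implementation.

```typescript
// Sketch of the proposed shortlist cache keyed by repo path.
// TTL and refresh policy are assumptions for illustration.
interface CachedThread {
  id: string;
  updatedAt: number;
  name?: string;
  preview?: string;
  gitInfo?: { branch?: string; sha?: string };
}

class ShortlistCache {
  private byRepo = new Map<
    string,
    { fetchedAt: number; threads: CachedThread[] }
  >();
  constructor(private ttlMs = 30_000) {}

  // Stale entries are still returned for speed; the caller refreshes
  // opportunistically when isStale() reports true.
  get(repo: string): CachedThread[] | undefined {
    return this.byRepo.get(repo)?.threads;
  }

  isStale(repo: string, now = Date.now()): boolean {
    const entry = this.byRepo.get(repo);
    return !entry || now - entry.fetchedAt > this.ttlMs;
  }

  put(repo: string, threads: CachedThread[], now = Date.now()): void {
    this.byRepo.set(repo, { fetchedAt: now, threads });
  }
}
```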
A model-based resolver is possible but should be the fallback, not the first pass.
Why:
- slower than deterministic matching
- easier to silently choose the wrong session
- unnecessary when `id`, `updatedAt`, `name`, `preview`, and repo path already narrow the space well
If a model is added later, the safe shape is:
- deterministic shortlist first
- prompt only over the top few candidates
- require the model to return one thread id or `NONE`
- keep strict failure if confidence is weak
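The answer-validation step of that safe shape can be sketched directly; the function name and return convention are hypothetical, but the contract matches the rules above: only a shortlisted id or `NONE` is accepted.

```typescript
// Strict check on a hypothetical model fallback's answer: accept only
// an exact shortlist id or NONE; anything else is a strict failure.
function validateModelAnswer(
  answer: string,
  shortlistIds: string[],
): string | undefined {
  const a = answer.trim();
  if (a === "NONE") return undefined;     // explicit no-match
  if (shortlistIds.includes(a)) return a; // exact shortlist hit only
  return undefined;                       // hallucinated or partial id: fail
}
```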
The highest-value next change is reducing lookup latency:
- keep one local long-lived resolver or daemon
- reuse one app-server connection per repo
- cache recent shortlist results and refresh opportunistically
That should improve the user-visible speed more than further prompt tuning.