Merge upstream v0.6.98 into local overlay (#34)
## Summary

Ad-hoc memory notes are written under `memories/extensions/ad_hoc/`, but the consolidation agent only knows how to interpret an extension when the extension folder has an `instructions.md`. Seed those instructions from the memories write pipeline so an enabled memories startup creates the expected ad-hoc extension layout automatically.

This also moves extension-specific write behavior behind a dedicated `memories/write/src/extensions/` module. `ad_hoc` owns the seeded instructions template, while the existing resource-retention cleanup lives in its own `prune` module so future memory extensions can add their own write-side setup without growing a flat helper file.

## Changes

- Seed `memories/extensions/ad_hoc/instructions.md` during eligible memory startup without overwriting an existing file.
- Store the ad-hoc instructions template under `memories/write/templates/extensions/ad_hoc/`, keeping ownership in `codex-memories-write`.
- Split memory extension support into `extensions::ad_hoc` and `extensions::prune`.
- Keep the existing old-resource pruning behavior unchanged.

## Verification

- `cargo test -p codex-memories-write`
- `bazel build //codex-rs/memories/write:write`

---------

Co-authored-by: chatgpt-codex-connector[bot] <199175422+chatgpt-codex-connector[bot]@users.noreply.github.com>
## Why

For reproducibility. A hand-written `config.toml` is not enough to recreate what a Codex session actually ran with because layered config, CLI overrides, defaults, feature aliases, resolved feature config, prompt setup, and model-catalog/session values can all affect the final runtime behavior. This PR adds an effective config lockfile path: one run can export the resolved session config, and a later run can replay that lockfile and fail early if the regenerated effective config drifts.

## What Changed

- Add a dedicated `ConfigLockfileToml` wrapper with top-level lockfile metadata plus the replayable config:

  ```toml
  version = 1
  codex_version = "..."

  [config]
  # effective ConfigToml fields
  ```

- Keep lockfile metadata out of regular `ConfigToml`; replay loads `ConfigLockfileToml` and then uses its nested `config` as the authoritative config layer.
- Add `debug.config_lockfile.export_dir` to write `<thread_id>.config.lock.toml` when a root session starts.
- Add `debug.config_lockfile.load_path` to replay a saved lockfile and validate the regenerated session lockfile against it.
- Add `debug.config_lockfile.allow_codex_version_mismatch` to optionally tolerate Codex binary version drift while still comparing the rest of the lockfile.
- Add `debug.config_lockfile.save_fields_resolved_from_model_catalog` so lock creation can either save model-catalog/session-resolved fields or intentionally leave those fields dynamic.
- Build lockfiles from the effective config plus resolved runtime values such as model selection, reasoning settings, prompts, service tier, web search mode, feature states/config, memories config, skill instructions, and agent limits.
- Materialize feature aliases and custom feature config into the lockfile so replay compares canonical resolved behavior instead of user-authored alias shape.
- Strip profile/debug/file-include/environment-specific inputs from generated lockfiles so they contain replayable values rather than the inputs that produced those values.
- Surface JSON-RPC server error code/data in app-server client and TUI bootstrap errors so config-lock replay failures include the actual TOML diff.
- Regenerate the config schema for the new debug config keys.

## Review Notes

The main flow is split across these files:

- `config/src/config_toml.rs`: lockfile/debug TOML shapes.
- `core/src/config/mod.rs`: loading `debug.config_lockfile.*`, replaying a lockfile as a config layer, and preserving the expected lockfile for validation.
- `core/src/session/config_lock.rs`: exporting the current session lockfile and materializing resolved session/config values.
- `core/src/config_lock.rs`: lockfile parsing, metadata/version checks, replay comparison, and diff formatting.

## Usage

Export a lockfile from a normal session:

```sh
codex -c 'debug.config_lockfile.export_dir="/tmp/codex-locks"'
```

Export a lockfile without saving model-catalog/session-resolved fields:

```sh
codex -c 'debug.config_lockfile.export_dir="/tmp/codex-locks"' \
  -c 'debug.config_lockfile.save_fields_resolved_from_model_catalog=false'
```

Replay a saved lockfile in a later session:

```sh
codex -c 'debug.config_lockfile.load_path="/tmp/codex-locks/<thread_id>.config.lock.toml"'
```

If replay resolves to a different effective config, startup fails with a TOML diff. To tolerate Codex binary version drift during replay:

```sh
codex -c 'debug.config_lockfile.load_path="/tmp/codex-locks/<thread_id>.config.lock.toml"' \
  -c 'debug.config_lockfile.allow_codex_version_mismatch=true'
```

## Limitations

This does not support custom rules/network policies.

## Verification

- `cargo test -p codex-core config_lock`
- `cargo test -p codex-config`
- `cargo test -p codex-thread-manager-sample`
## Why

Apply-patch file changes are now part of the core turn item stream, so v2 clients can consume the same first-class item lifecycle path used by other turn items instead of relying on app-server-specific remapping from legacy patch events.

## What changed

- Added a core `TurnItem::FileChange` carrying apply-patch changes and completion metadata.
- Updated the apply-patch tool emitter to send `ItemStarted` / `ItemCompleted` with the new `FileChange` item while preserving legacy `PatchApplyBegin` / `PatchApplyEnd` fan-out.
- Updated app-server v2 conversion to render the new core item directly and stopped `event_mapping` from remapping old patch begin/end events into item notifications.
- Kept thread history reconstruction based on the existing old apply-patch events for rollout compatibility.

## Verification

- `cargo test -p codex-protocol -p codex-app-server-protocol`
- `cargo test -p codex-core --test all apply_patch_tool_executes_and_emits_patch_events`
- `cargo test -p codex-app-server bespoke_event_handling`
## Why

Issue openai#20489 calls out that animated TUI affordances can be noisy for screen-reader users. Codex already has `tui.animations = false` as a reduced-motion setting, but some live activity rows render spinner-style prefixes in that mode. These were relatively recent regressions. We have also regressed this pattern more than once by adding new spinner/shimmer callsites that do not think through the reduced-motion path, so this PR adds a small guardrail while fixing the current surfaces.

## What changed

- Omit the live status-row spinner when animations are disabled, so the row starts with stable text like `Working (...)`.
- Render running hook headers without the spinner prefix when animations are disabled, while preserving shimmer/spinner behavior when animations are enabled.
- Centralize TUI activity indicators in `tui/src/motion.rs`, with explicit reduced-motion choices for hidden prefixes, static bullets, and plain shimmer-text fallbacks.
- Route existing spinner/shimmer callsites through the central motion helper, including exec rows, MCP/web-search/loading rows, hook rows, plugin loading, and onboarding loading text.
- Add a source-scan regression test that rejects direct `spinner(...)` or `shimmer_spans(...)` usage outside the central module and primitive definition.
- Add focused coverage that reduced-motion active exec rows are stable, status rows start without a spinner, running hooks omit the spinner, and MCP inventory loading stays stable.
- Update the one affected status-indicator snapshot; the existing detail tree prefix remains unchanged.

## Verification

- `cargo test -p codex-tui`
## Why

`/goal` is supposed to keep Codex working until the goal is actually done. The previous continuation logic had two ways to stop early: the continuation prompt told the model to wait for new input when it felt blocked, and the runtime suppressed another continuation turn after a continuation finished without any tool calls. That made goals stop short even when the agent could still keep making progress (I received a few reports of this from users). It also relied on a brittle heuristic that treated "no registry tool calls" as equivalent to "should stop."

## What changed

- removed the continuation prompt sentence that told the model to stop and wait for new input when it could not continue productively
- removed the goal runtime suppression heuristic that stopped auto-continuation after a no-tool continuation turn
- deleted the continuation-activity bookkeeping and left `tool_calls` as telemetry only
- added focused regressions for the two intended behaviors: completed no-tool continuation turns still continue, while `request_user_input` keeps the existing turn open instead of spawning a new continuation
Fix cargo deny by acknowledging the `RUSTSEC` advisories below while a fix lands:
```
RUSTSEC-2026-0118
NSEC3 closest-encloser proof validation enters unbounded loop on cross-zone responses
RUSTSEC-2026-0119
CPU exhaustion during message encoding due to O(n²) name compression
Dependency path:
hickory-proto 0.25.2
└── hickory-resolver 0.25.2
└── rama-dns 0.3.0-alpha.4
└── rama-tcp 0.3.0-alpha.4
└── codex-network-proxy
```
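An acknowledgement of roughly this shape in `deny.toml` would silence the advisories until patched releases land (a sketch of cargo-deny's `[advisories] ignore` list; the comments are ours, not from the repo):

```toml
# Temporarily acknowledge the hickory-proto advisories pulled in via
# rama-dns; remove these entries once a patched hickory-proto ships.
[advisories]
ignore = [
    "RUSTSEC-2026-0118", # NSEC3 closest-encloser proof unbounded loop
    "RUSTSEC-2026-0119", # O(n²) name compression CPU exhaustion
]
```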
Also upgrade some dependency versions to resolve these warnings:
```
warning[license-not-encountered]: license was not encountered
┌─ ./codex-rs/deny.toml:131:6
│
131 │ "OpenSSL",
│ ━━━━━━━ unmatched license allowance
warning[duplicate]: found 2 duplicate entries for crate 'base64'
┌─ /github/workspace/codex-rs/Cargo.lock:79:1
│
79 │ ╭ base64 0.21.7 registry+https://github.com/rust-lang/crates.io-index
80 │ │ base64 0.22.1 registry+https://github.com/rust-lang/crates.io-index
│ ╰───────────────────────────────────────────────────────────────────┘ lock entries
```
## Why

`codex-app-server` currently owns both request-processing code and transport implementation details. Splitting the transport layer into its own crate makes that boundary explicit, reduces the amount of transport-specific dependency surface carried by `codex-app-server`, and gives future transport work a narrower place to evolve.

## What changed

- Added `codex-app-server-transport` and moved the existing transport tree into it, including stdio, unix socket, websocket, remote-control transport, and websocket auth.
- Moved shared transport-facing message types into the new crate so both the transport implementation and `codex-app-server` use the same definitions.
- Kept processor-facing connection state and outbound routing in `codex-app-server`, with the routing tests moved next to that local wrapper.
- Updated workspace metadata, Bazel crate metadata, and `codex-app-server` dependencies for the new crate boundary.

## Validation

- `cargo metadata --locked --no-deps`
- `git diff --check`
- Attempted `cargo test -p codex-app-server-transport`, `cargo test -p codex-app-server`, `just fix -p codex-app-server-transport`, and `just fix -p codex-app-server`; all were blocked before compilation by the existing `packageproxy` resolution failure for locked `rustls-webpki = 0.103.13`.
- Attempted Bazel build / lockfile validation; those were blocked by external fetch failures against BuildBuddy / GitHub while resolving `v8`.
## Why

Users have asked for a `/ide` command in the TUI so Codex can use the active IDE session for live context such as the current file, open tabs, and selected ranges. We already support a similar feature in the Codex desktop app, so bringing it to the TUI makes sense.

One subtle compatibility constraint is that the injected prompt wrapper and transcript stripping should match the desktop app and IDE extension. By using the same `## My request for Codex:` delimiter and hiding the injected context from transcript rendering the same way, threads created in the TUI render correctly in desktop and IDE surfaces, and threads created there replay correctly in the TUI, even when IDE context was included.

Addresses openai#13834.

## What changed

### Summary

This PR consists of four pieces:

1. An IPC client that uses a socket (Mac/Linux) or named pipe (Windows) to talk to the IDE extension
2. Logic that establishes the IPC connection and requests IDE context (open files, selection) on demand
3. Logic that injects this context into the user prompt (using the same technique as the desktop app) and hides the added context when rendering the prompt in the TUI transcript
4. A new slash command for enabling/disabling this mode and text within the footer to indicate when it's enabled

### Details

- Added `/ide [on|off|status]` to the TUI, with bare `/ide` toggling IDE context on or off.
- Added a Rust IDE context client that connects to the local Codex IDE IPC route as a client and requests context from the IDE extension flow.
- Injected IDE context using the same prompt delimiter and transcript-stripping convention as the desktop app and IDE extension so shared threads render consistently across surfaces.
- Added an `IDE context` status-line indicator while the feature is active and cleared it when enabling or fetching context fails.
- Added handling for multiple selection ranges, oversized selections, interleaved IPC messages, and transient reconnect timing after quick toggles.

## Verification

Did extensive manual testing in addition to running automated unit and regression tests. To test:

- Launch VS Code (or Cursor) with the IDE extension.
- Open one or more files in the IDE and select a range of text within one of them.
- Start the TUI.
- Ask the agent which files you have open in your IDE, and it should say that it does not know.
- Enable `/ide` mode; note that `IDE context` appears in the lower right.
- Ask the agent what files you have open in your IDE and what text is selected.
## Why

This adds a checked-in Codex environment configuration so the repo exposes a ready-to-run Codex action from the app environment metadata.

## What changed

- Added `.codex/environments/environment.toml` with a generated `Run` action.
- The action runs the `codex` binary from `codex-rs/Cargo.toml` with `mcp_oauth_credentials_store=file`.

## Verification

- Not run; configuration-only change.
# Why

`notify` is the remaining compatibility surface from the legacy hook implementation. The newer lifecycle hook engine now owns the active hook system, so we should start steering users away from adding new `notify` configs before removing the old path entirely. This also adds a lightweight watchpoint for the deprecation so we can see how much legacy usage remains before the clean drop.

# What

- emit a startup deprecation notice when a non-empty `notify` command is configured
- emit `codex.notify.configured` when a session starts with legacy `notify` configured
- emit `codex.notify.run` when the legacy notify path fires after a completed turn
- mark `notify` as deprecated in the config schema and repo docs
- remove the orphaned `codex-rs/hooks/src/user_notification.rs` file that is no longer compiled
- add regression coverage for the new deprecation notice

# Next steps

A follow-up PR can remove the legacy notify path entirely once we are ready for the clean drop. Before then, we can watch `codex.notify.configured` and `codex.notify.run` to understand the deprecation impact and remaining active usage. The cleanup PR should then delete the `notify` config field, the `legacy_notify` implementation, the old compatibility dispatch types and callsites that only exist for the legacy path, and the remaining compatibility docs/tests.

# Testing

- `cargo test -p codex-hooks`
- `cargo test -p codex-config`
- `cargo test -p codex-core emits_deprecation_notice_for_notify`
## Summary

- Route loaded `thread/read` + `includeTurns` through `CodexThread::load_history` / ThreadStore history instead of direct rollout JSONL reads.
- Add an in-memory ThreadStore regression test covering loaded `thread/read includeTurns` without a local rollout path.
## Summary

- make selected turn environments the source of truth for session runtime cwd and MCP runtime environment selection
- keep local/no-selection fallback behavior intact
- add coverage for duplicate selected environments, cwd resolution, and MCP runtime environment selection

## Validation

- `git diff --check`
- rustfmt was run on touched Rust files during the implementation workflow

CI should provide the full Bazel/test signal.

---------

Co-authored-by: Codex <noreply@openai.com>
Fixes openai#20501

## Summary

- add Alt+Enter to the built-in editor newline aliases
- update keymap tests that used Alt+Enter as a custom submit binding now that it conflicts with newline
- refresh the keymap action-menu snapshot fixture

## Test Plan

- `just fmt`
- `cargo test -p codex-tui keymap::tests`
- `cargo test -p codex-tui bottom_pane::textarea::tests`
- `cargo test -p codex-tui keymap_setup::tests`
- `cargo test -p codex-tui`
- `cargo insta pending-snapshots`
- `git diff --check`
- `just argument-comment-lint`
## Why

`ConfigBuilder::build` performs a large amount of async config loading. Leaving that entire future on the caller stack makes config startup more fragile on small runtime worker stacks.

## What changed

- keep `ConfigBuilder::build` as a thin wrapper that boxes the config-loading future before awaiting it
- move the existing implementation into a private `build_inner` method so the large async state machine lives on the heap instead of the runtime thread stack

## Testing

- Not run locally
This PR adds marketplace upgrade to the `/plugins` menu so users can update configured marketplaces. It adds a `Ctrl+U` shortcut on eligible marketplace tabs, a loading state, and the app-server request flow needed to perform `marketplace/upgrade`. After a successful upgrade, the TUI refreshes plugin data, plugin mentions, and user config so updated marketplace contents show up across the menu and other plugin surfaces. It also preserves the current marketplace tab on no-op and failure paths and surfaces backend error details directly in the TUI.

- Add a `Ctrl+U` upgrade option for user-configured marketplace tabs in `/plugins`
- Show the upgrade footer hint only on upgradeable marketplace tabs
- Show a loading state during `marketplace/upgrade`
- Surface already-up-to-date and per-marketplace failure results from the backend
- Refresh plugin data, plugin mentions, and user config after successful upgrades
- Add tests and snapshot updates for the shortcut flow, loading state, and failure messaging

Steps to test:

1. Add a `/plugin` marketplace to Codex TUI.
2. Open `/plugins`, move to that marketplace tab, and confirm the footer shows `Ctrl+U` to upgrade.
3. Press `Ctrl+U` and confirm the popup switches into an upgrade loading state.
4. When the request finishes, confirm you see the expected result: updated marketplace contents on success, an already-up-to-date message on no-op, or backend error details on failure. On no-op or failure, confirm the popup stays on the same marketplace tab.
## Why

Image-view results should be represented as a core-produced turn item instead of being reconstructed by app-server. At the same time, existing rollout/history paths still understand the legacy `ViewImageToolCall` event, so this keeps that event as compatibility output generated from the new item lifecycle.

## What changed

- Added `TurnItem::ImageView` to `codex-protocol`.
- Emitted image-view item start/completion directly from the core `view_image` handler.
- Kept `ViewImageToolCall` as a legacy event and generated it from completed `TurnItem::ImageView` items.
- Kept `thread_history.rs` on the legacy `ViewImageToolCall` replay path, with `ImageView` item lifecycle events ignored there.
- Updated app-server protocol conversion, rollout persistence, and affected exhaustive event matches for the new item plus legacy fan-out shape.

## Verification

- `cargo test -p codex-protocol -p codex-app-server-protocol -p codex-rollout -p codex-rollout-trace -p codex-mcp-server -p codex-app-server --lib`
- `cargo test -p codex-core --test all view_image_tool_attaches_local_image`
- `just fix -p codex-protocol -p codex-core -p codex-app-server-protocol -p codex-app-server -p codex-rollout -p codex-rollout-trace -p codex-mcp-server`
- `git diff --check`
# Why

Codex currently negotiates MCP `2025-06-18`, where the client elicitation capability is represented as an empty object. We were still serializing `capabilities.elicitation.form`, which belongs to the later capability shape and can cause strict `2025-06-18` servers to reject `initialize` with an unrecognized-field error. This keeps the handshake aligned with the protocol version Codex actually negotiates and fixes the compatibility regression tracked in openai#17492.

# What

- Serialize the client elicitation capability as `elicitation: {}` for `2025-06-18`.
- Keep elicitation advertised for both Codex Apps and custom MCP servers.
- Tighten regression coverage so the unit test asserts both the Rust value and the serialized wire shape.
- Add an app-server integration test that round-trips a form elicitation from a custom MCP server; the existing connector round-trip continues to cover the connector path.

# Verification

- `cargo test -p codex-mcp`
- `cargo test -p codex-app-server mcp_server_elicitation_round_trip`
- `cargo test -p codex-app-server mcp_server_tool_call_round_trips_elicitation`

# Next steps

- Decide whether `tool_call_mcp_elicitation=false` should also suppress capability advertisement during `initialize`.
- Revisit `form` / `url` capability advertisement when Codex is ready to negotiate MCP `2025-11-25`, which defines that newer shape.
# Why

When a user interrupts a turn while a hook is still running, the normal turn status is cleared but the separate live hook row can remain visible as `Running` because the TUI may never receive a matching `HookCompleted` event before cancellation. Once the turn itself is finalized, that turn-scoped live state should not remain on screen.

# What

- clear any still-live `active_hook_cell` during turn finalization
- add a regression snapshot covering an interrupted turn with a visible `PreToolUse` hook row

# Testing

- `cargo test -p codex-tui interrupted_turn_clears_visible_running_hook`
- attempted `cargo test -p codex-tui` (currently aborts on unrelated existing stack overflow in `app::tests::discard_side_thread_removes_agent_navigation_entry`)
## Why

The model needs a way to see which environments are available during a multi-environment turn without changing the legacy single-environment prompt surface or pulling replay/persistence changes into the same review.

## Stack

1. openai#20646 - `EnvironmentContext` rendering for selected environments (this PR)
2. openai#20669 - selected-environment ownership and tool config prep
3. openai#20647 - process-tool `environment_id` routing

## What Changed

- extend `environment_context` so multi-environment turns render an `<environments>` block with the selected environment ids and cwd values
- keep zero- and single-environment turns on the existing cwd-only render path
- keep replay and persistence paths on the legacy surface for now so this PR stays scoped to live prompt rendering
- add focused coverage in `codex-rs/core/src/context/environment_context_tests.rs`

## Testing

- CI

---------

Co-authored-by: Codex <noreply@openai.com>
Hide Atomics, SharedArrayBuffer, and WebAssembly from the code-mode runtime since the harness does not expose worker support or need those APIs.
## Status

This is the Bazel PR-CI cross-compilation follow-up to openai#20485. It is intentionally split from the Cargo/cargo-xwin release-build PoC so openai#20485 can stay as the historical release-build exploration. The unrelated async-utils test cleanup has been moved to openai#20686, so this PR is focused on the Windows Bazel CI path.

The intended tradeoff is now explicit in `.github/workflows/bazel.yml`: pull requests get the fast Windows cross-compiled Bazel test leg, while post-merge pushes to `main` run both that fast cross leg and a fully native Windows Bazel test leg. The native main-only job keeps full V8/code-mode coverage and gets a 40-minute timeout because it is less latency-sensitive than PR CI. All other Bazel jobs remain at 30 minutes.

## Why

Windows Bazel PR CI currently does the expensive part of the build on Windows. A native Windows Bazel test job on `main` completed in about 28m12s, leaving very little headroom under the 30-minute job timeout and making Windows the slowest PR signal. openai#20485 showed that Windows cross-compilation can be materially faster for Cargo release builds, but PR CI needs Bazel because Bazel owns our test sharding, flaky-test retries, and integration-test layout. This PR applies the same high-level shape we already use for macOS Bazel CI: compile with remote Linux execution, then run platform-specific tests on the platform runner.

The compromise is deliberately signal-aware: code-mode/V8 changes are rare enough that PR CI can accept losing the direct V8/code-mode smoke-test signal temporarily, while `main` still runs the native Windows job post-merge to catch that class of regression. A follow-up PR should investigate making the cross-built Windows gnullvm V8 archive pass the direct V8/code-mode tests so this tradeoff can eventually go away.
## What Changed

- Adds a `ci-windows-cross` Bazel config that targets `x86_64-pc-windows-gnullvm`, uses Linux RBE for build actions, and keeps `TestRunner` actions local on the Windows runner.
- Adds explicit Windows platform definitions for `windows_x86_64_gnullvm`, `windows_x86_64_msvc`, and a bridge toolchain that lets gnullvm test targets execute under the Windows MSVC host platform.
- Updates the Windows Bazel PR test leg to opt into the cross-compile path via `--windows-cross-compile` and `--remote-download-toplevel`.
- Adds a `test-windows-native-main` job that runs only for `push` events on `refs/heads/main`, uses the native Windows Bazel path, includes V8/code-mode smoke tests, and has `timeout-minutes: 40`.
- Keeps fork/community PRs without `BUILDBUDDY_API_KEY` on the previous local Windows MSVC-host fallback, including `--host_platform=//:local_windows_msvc` and `--jobs=8`.
- Preserves the existing integration-test shape on non-gnullvm platforms, while generating Windows-cross wrapper targets only for `windows_gnullvm`.
- Resolves `CARGO_BIN_EXE_*` values from runfiles at test runtime, avoiding hard-coded Cargo paths and duplicate test runfiles.
- Extends the V8 Bazel patches enough for the `x86_64-pc-windows-gnullvm` target and Linux remote execution path.
- Makes the Windows sandbox test cwd derive from `INSTA_WORKSPACE_ROOT` at runtime when Bazel provides it, because cross-compiled binaries may contain Linux compile-time paths.
- Keeps the direct V8/code-mode unit smoke tests out of the Windows cross PR path for now while native Windows CI continues to cover them post-merge.
## Command Shape

The fast Windows PR test leg invokes the normal Bazel CI wrapper like this:

```shell
./.github/scripts/run-bazel-ci.sh \
  --print-failed-action-summary \
  --print-failed-test-logs \
  --windows-cross-compile \
  --remote-download-toplevel \
  -- \
  test \
  --test_tag_filters=-argument-comment-lint \
  --test_verbose_timeout_warnings \
  --build_metadata=COMMIT_SHA=${GITHUB_SHA} \
  -- \
  //... \
  -//third_party/v8:all \
  -//codex-rs/code-mode:code-mode-unit-tests \
  -//codex-rs/v8-poc:v8-poc-unit-tests
```

With the BuildBuddy secret available on Windows, the wrapper selects `--config=ci-windows-cross` and appends the important Windows-cross overrides after rc expansion:

```shell
--host_platform=//:rbe
--shell_executable=/bin/bash
--action_env=PATH=/usr/bin:/bin
--host_action_env=PATH=/usr/bin:/bin
--test_env=PATH=${CODEX_BAZEL_WINDOWS_PATH}
```

The native post-merge Windows job intentionally omits `--windows-cross-compile` and does not exclude the V8/code-mode unit targets:

```shell
./.github/scripts/run-bazel-ci.sh \
  --print-failed-action-summary \
  --print-failed-test-logs \
  -- \
  test \
  --test_tag_filters=-argument-comment-lint \
  --test_verbose_timeout_warnings \
  --build_metadata=COMMIT_SHA=${GITHUB_SHA} \
  --build_metadata=TAG_windows_native_main=true \
  -- \
  //... \
  -//third_party/v8:all
```

## Research Notes

The existing macOS Bazel CI config already uses the model we want here: build actions run remotely with `--strategy=remote`, but `TestRunner` actions execute on the macOS runner. This PR mirrors that pattern for Windows with `--strategy=TestRunner=local`.

The important Bazel detail is that `rules_rs` is already targeting `x86_64-pc-windows-gnullvm` for Windows Bazel PR tests. This PR changes where the build actions execute; it does not switch the Bazel PR test target to Cargo, `cargo-nextest`, or the MSVC release target.
Cargo release builds differ from this Bazel path for V8: the normal Windows Cargo release target is MSVC, and `rusty_v8` publishes prebuilt Windows MSVC `.lib.gz` archives. The Bazel PR path targets `windows-gnullvm`; `rusty_v8` does not publish a prebuilt Windows GNU/gnullvm archive, so this PR builds that archive in-tree. That Linux-RBE-built gnullvm archive currently crashes in direct V8/code-mode smoke tests, which is why the workflow keeps native Windows coverage on `main`.

The less obvious Bazel detail is test wrapper selection. Bazel chooses the Windows test wrapper (`tw.exe`) from the test action execution platform, not merely from the Rust target triple. The outer `workspace_root_test` therefore declares the default test toolchain and uses the bridge toolchain above so the test action executes on Windows while its inner Rust binary is built for gnullvm.

The V8 investigation exposed a Windows-client gotcha: even when an action execution platform is Linux RBE, Bazel can still derive the genrule shell path from the Windows client. That produced remote commands trying to run `C:\Program Files\Git\usr\bin\bash.exe` on Linux workers. The wrapper now passes `--shell_executable=/bin/bash` with `--host_platform=//:rbe` for the Windows cross path.

The same Windows-client/Linux-RBE boundary also affected `third_party/v8:binding_cc`: a multiline genrule command can carry CRLF line endings into Linux remote bash, which failed as `$'\r'`. That genrule now keeps the `sed` command on one physical shell line while using an explicit Starlark join so the shell arguments stay readable.
## Verification

Local checks included:

```shell
bash -n .github/scripts/run-bazel-ci.sh
bash -n workspace_root_test_launcher.sh.tpl
ruby -e "require %q{yaml}; YAML.load_file(%q{.github/workflows/bazel.yml}); puts %q{ok}"
RUNNER_OS=Linux ./scripts/list-bazel-clippy-targets.sh
RUNNER_OS=Windows ./scripts/list-bazel-clippy-targets.sh
RUNNER_OS=Linux ./tools/argument-comment-lint/list-bazel-targets.sh
RUNNER_OS=Windows ./tools/argument-comment-lint/list-bazel-targets.sh
```

The Linux clippy and argument-comment target lists contain zero `*-windows-cross-bin` labels, while the Windows lists still include 47 Windows-cross internal test binaries.

CI evidence:

- Baseline native Windows Bazel test on `main`: success in about 28m12s, https://github.com/openai/codex/actions/runs/25206257208/job/73907325959
- Green Windows-cross Bazel run on the split PR before adding the main-only native leg: Windows test 9m16s, Windows release verify 5m10s, Windows clippy 4m43s, https://github.com/openai/codex/actions/runs/25231890068
- The latest SHA adds the explicit PR-vs-main tradeoff in `bazel.yml`; CI is rerunning on that focused diff.

## Follow-Up

A subsequent PR should investigate making a cross-built Windows binary work with V8/code-mode enabled. Likely options are either making the Linux-RBE-built `windows-gnullvm` V8 archive correct at runtime, or evaluating whether a Bazel MSVC target/toolchain can reuse the same prebuilt MSVC `rusty_v8` archive shape that Cargo release builds already use.
## Why

openai#20585 moved the Windows Bazel test job to the cross-compile path, but the Windows Bazel clippy and verify-release-build jobs were still using the native Windows/MSVC-host fallback. Those two jobs became the slowest Windows PR legs, even though both are build-only signal and do not need to execute the resulting binaries.

## What Changed

- Switches the Windows Bazel clippy job from `--windows-msvc-host-platform` to `--windows-cross-compile`, so clippy build actions use Linux RBE while still targeting `x86_64-pc-windows-gnullvm`.
- Switches the Windows Bazel verify-release-build job to `--windows-cross-compile` as well. This job only compiles `cfg(not(debug_assertions))` Rust code under `fastbuild`, so it does not need a native Windows build host.
- Keeps the old `--skip_incompatible_explicit_targets` behavior only for fork/community PRs without `BUILDBUDDY_API_KEY`, where `run-bazel-ci.sh` falls back to the local Windows MSVC-host shape.
- Adds `--windows-cross-compile` support to `.github/scripts/run-bazel-query-ci.sh`, so target-discovery queries select the same `ci-windows-cross` config as the subsequent build.
- Threads that option through `scripts/list-bazel-clippy-targets.sh` so the Windows clippy job discovers targets under the same platform shape as the subsequent clippy build.

## Verification

Local checks:

```shell
bash -n .github/scripts/run-bazel-query-ci.sh
bash -n scripts/list-bazel-clippy-targets.sh
ruby -e 'require "yaml"; YAML.load_file(".github/workflows/bazel.yml"); puts "ok"'
RUNNER_OS=Linux ./scripts/list-bazel-clippy-targets.sh | grep -c -- '-windows-cross-bin$'
RUNNER_OS=Windows ./scripts/list-bazel-clippy-targets.sh --windows-cross-compile | grep -c -- '-windows-cross-bin$'
```

The Linux target-list check reported `0` Windows-cross internal test binaries, while the Windows cross target-list check reported `47`, preserving the test-code clippy coverage shape from the existing Windows job.
Refs: https://linear.app/openai/issue/SE-6311/login-fails-for-experian-users-behind-tls-inspecting-proxy

## Summary

- When a custom CA bundle is configured, force the shared `codex-client` reqwest builder onto rustls before registering custom roots.
- Add the `rustls-tls-native-roots` reqwest feature so the rustls client preserves native roots plus the enterprise CA bundle.
- Add subprocess TLS coverage for both a direct local TLS 1.3 server and a hermetic local CONNECT TLS-intercepting proxy that forwards a token-exchange-shaped POST to a local origin.

## Plain-language explanation

Experian users are behind a TLS-inspecting proxy, so the login token exchange needs to trust the enterprise CA bundle from `CODEX_CA_CERTIFICATE` or `SSL_CERT_FILE`. Before this change, that custom-CA branch still used reqwest default TLS selection, which could fail in the proxy environment. Now, only when a custom CA is configured, Codex selects rustls first and then adds the custom CA roots, matching the validated behavior from the Experian test build while leaving normal system-root clients unchanged.

The new regression test recreates the enterprise-proxy shape locally: the probe client sends an HTTPS `POST /oauth/token` through an explicit HTTP CONNECT proxy, the proxy presents a leaf certificate signed by a runtime-generated test CA, decrypts the request, forwards it to a local origin, and relays the `ok` response back.

## Scope note

- The actual production fix is the first commit: `8368119282 Fix custom CA reqwest clients to use rustls`.
- The second commit is integration-test coverage only. It generates all test CA and localhost certificate material at runtime.
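The trust shape this PR sets up — native system roots plus an additional enterprise CA — can be sketched with Python's stdlib `ssl` module. This is illustrative only: the real fix configures the reqwest/rustls builder in Rust, and the commented-out CA path below is a hypothetical placeholder.

```python
import ssl

# Build a client TLS context that starts from the platform's native roots,
# analogous to reqwest's rustls-tls-native-roots feature.
ctx = ssl.create_default_context()

# The default context requires certificate verification, just as the
# rustls client does; custom roots are *added*, not substituted.
native_ca_count = ctx.cert_store_stats()["x509_ca"]

# A real enterprise bundle would come from CODEX_CA_CERTIFICATE or
# SSL_CERT_FILE; this call shape layers it on top of the native roots:
# ctx.load_verify_locations(cafile="/path/to/enterprise-ca.pem")
```

The key property mirrored here is additive trust: the proxy's leaf certificate chains to the enterprise CA, while normal origins keep validating against the untouched native roots.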
## Validation

- `cd codex-rs && cargo test -p codex-client --test ca_env posts_to_token_origin_through_tls_intercepting_proxy_with_custom_ca_bundle -- --nocapture`
- `cd codex-rs && cargo test -p codex-client`
- `cd codex-rs && cargo test -p codex-login`
- `cd codex-rs && just fmt`
- `cd codex-rs && just bazel-lock-update`
- `cd codex-rs && just bazel-lock-check`
- `cd codex-rs && just fix -p codex-client`
## Summary

Bound TUI startup terminal response probes so unsupported terminals cannot stall startup for multiple seconds. This replaces the Unix startup uses of crossterm's blocking response probes with short `/dev/tty` probes that use nonblocking reads and `poll` with a 100ms timeout. It covers the initial cursor-position query, keyboard enhancement support detection, and OSC 10/11 default-color detection. The default-color probe uses one shared deadline for foreground and background instead of allowing two independent full waits.

The diagnostic mode/trace env vars from the investigation branch are intentionally not included. The shipped behavior is simply bounded probing by default, while non-Unix keeps the existing crossterm fallback path.

## Details

- Add a private `terminal_probe` module for bounded Unix terminal probes and response parsers.
- Let `custom_terminal::Terminal` accept a caller-provided initial cursor position so startup can compute it before constructing the terminal.
- Use bounded cursor, keyboard enhancement, and default-color probes on Unix startup.
- Preserve default-color cache behavior so a failed attempted query does not retry forever.

## Validation

- `cd codex-rs && just fmt`
- `cd codex-rs && cargo test -p codex-tui terminal_probe`
- `cd codex-rs && just fix -p codex-tui`
- `cd codex-rs && just argument-comment-lint`
- `git diff --check`
- `git diff --cached --check`

`cd codex-rs && cargo test -p codex-tui` still aborts on the pre-existing local stack overflow in `app::tests::discard_side_thread_keeps_local_state_when_server_close_fails`; I reproduced that same focused failure on `main` before this PR work, so it is not introduced by this change.

Manual validation in the VM showed the original crossterm path taking about 2s per unanswered probe, while bounded probing returned in about 100ms per probe.
Tool suggest still misfires when the model needs `tool_search`; this updates the prompts to further disambiguate it:

- [x] rename it from `tool_suggest` to `request_plugin_install`
- [x] rephrase "suggestion" to "install" in the tool descriptions
- [x] disambiguate "the tool" vs "the plugin/connector"

Tested with the Codex App and verified it still works.
## Why

We saw Responses websocket sessions recover only after a long quiet period when the server had already logged the websocket as disconnected. The normal connect path is already bounded by `websocket_connect_timeout_ms`, but the first request send on an established websocket reused only the receive-side idle timeout after the write completed. If the socket write/pump stalls, the client can sit in `ws_stream.send(...)` without reaching the existing receive timeout.
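The failure mode is a send-side stall that the receive-side timeout never observes; the fix is to bound the write itself. A minimal sketch of that pattern using Python's `asyncio` (the real client is Rust; the names here are hypothetical):

```python
import asyncio

async def send_with_timeout(send_coro, timeout_s: float) -> str:
    """Bound the socket write itself, not just the receive-side idle wait."""
    try:
        await asyncio.wait_for(send_coro, timeout=timeout_s)
        return "sent"
    except asyncio.TimeoutError:
        # Without this bound, a wedged write pump would block forever,
        # because the receive timeout only starts after the send completes.
        return "send-timeout"

async def stalled_send():
    # Simulates a wedged write/pump: the awaitable never completes.
    await asyncio.Event().wait()

result = asyncio.run(send_with_timeout(stalled_send(), 0.05))
```

With the bound in place, a dead connection is detected at the first send instead of after a long quiet period.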
## Why

The automated issue labeler needs more precise area labels for newly opened GitHub issues so triage can distinguish new Codex app and agent feature surfaces without falling back to broad labels.

## What Changed

- Added labeler prompt entries for `computer-use`, `browser`, `memory`, `imagen`, `remote`, `performance`, `automations`, and `pets` in `.github/workflows/issue-labeler.yml`.
- Updated the agent-area guidance so `memory` is used for agentic memory storage/retrieval and `performance` is used for slow behavior, high memory utilization, and leaks.
- Expanded the fallback `agent` guidance so Codex prefers the new specific labels when applicable.

## Verification

- Parsed `.github/workflows/issue-labeler.yml` with `yq e '.'`.
- Ran `git diff --check` for the workflow change.
## Why

Configured environments need to connect to exec-server instances that are not necessarily already listening on a websocket URL. A command-backed stdio transport lets Codex start an exec-server process, speak JSON-RPC over its stdio streams, and clean up that child process with the client lifetime.

**Stack position:** this is PR 2 of 5. It builds on the server-side stdio listener from PR 1 and provides the client transport used by later environment/config PRs.

## What Changed

- Add `ExecServerTransport` variants for websocket URLs and stdio shell commands.
- Add stdio command connection support for `ExecServerClient`.
- Move websocket/stdio transport setup into `client_transport.rs` so `client.rs` stays focused on shared JSON-RPC client, session, HTTP, and notification behavior.
- Tie stdio child process cleanup to the JSON-RPC connection lifetime with a RAII lifetime guard.
- Keep existing websocket environment behavior by adapting URL-backed remotes to `ExecServerTransport::WebSocketUrl`.

## Stack

- 1. openai#20663 - Add stdio exec-server listener
- **2. This PR:** openai#20664 - Add stdio exec-server client transport
- 3. openai#20665 - Make environment providers own default selection
- 4. openai#20666 - Add CODEX_HOME environments TOML provider
- 5. openai#20667 - Load configured environments from CODEX_HOME

Split from original draft: openai#20508

## Validation

Not run locally; this was split out of the original draft stack and then refactored to separate transport setup from the base client.

---------

Co-authored-by: Codex <noreply@openai.com>
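The transport shape described above — spawn a child, speak JSON-RPC over its stdio, and tie cleanup to the connection scope — can be sketched with Python's `subprocess` (an illustrative stand-in for the Rust RAII guard; the inline "exec-server" child here is a hypothetical echo server, not the real binary):

```python
import json
import subprocess
import sys

# Hypothetical exec-server child: reads one JSON-RPC request line from
# stdin and answers it on stdout, then exits.
CHILD = (
    "import json,sys;"
    "req=json.loads(sys.stdin.readline());"
    "print(json.dumps({'jsonrpc':'2.0','id':req['id'],'result':'pong'}))"
)

with subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
) as child:
    # Speak JSON-RPC over the child's stdio streams, one message per line.
    child.stdin.write(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "ping"}) + "\n")
    child.stdin.flush()
    reply = json.loads(child.stdout.readline())

# Leaving the `with` block closes the pipes and waits on the child,
# tying its lifetime to the connection scope much like a RAII guard.
exit_code = child.returncode
```

The important property is that the child cannot outlive the client: dropping the connection (here, exiting the `with` block) reaps the process.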
Remove the remote thread-store backend and checked-in protobuf artifacts. We've moved these into another crate that links against this one. Also remove the config settings for thread-store backend selection, since we'll instead pass an instantiated thread store into the core-api crate's main entrypoint.
## Why

The next PR in this stack introduces configured environments, where the provider knows both which environments exist and which one should be selected by default. The existing manager derived the default internally by checking for the legacy `remote` and `local` ids, and it treated "remote" as equivalent to "has a websocket URL." That does not work cleanly for stdio-command remotes because they are remote environments without an `exec_server_url`.

**Stack position:** this is PR 3 of 5. It is the environment-model bridge between PR 2's transport enum and PR 4's TOML provider.

## What Changed

- Add `DefaultEnvironmentSelection` to the `EnvironmentProvider` contract:
  - `Derived` preserves the old `remote`-then-`local` fallback behavior.
  - `Environment(id)` lets a provider explicitly select a configured default.
  - `Disabled` lets a provider intentionally expose no default environment.
- Move the legacy `CODEX_EXEC_SERVER_URL=none` default-disabling behavior into `DefaultEnvironmentProvider`.
- Make `EnvironmentManager` validate explicit provider defaults and return an error if the selected id is missing.
- Track `remote_transport` separately from `exec_server_url` so stdio-command environments are still recognized as remote.
- Add `Environment::remote_stdio_shell_command(...)` for the TOML provider added in the next PR.

## Stack

- 1. openai#20663 - Add stdio exec-server listener
- 2. openai#20664 - Add stdio exec-server client transport
- **3. This PR:** openai#20665 - Make environment providers own default selection
- 4. openai#20666 - Add CODEX_HOME environments TOML provider
- 5. openai#20667 - Load configured environments from CODEX_HOME

Split from original draft: openai#20508

## Validation

Not run locally; this was split out of the original draft stack.

---------

Co-authored-by: Codex <noreply@openai.com>
Route view_image through selected environments so image reads use the selected turn environment and cwd, with schema exposure limited to multi-environment toolsets.

Co-authored-by: Codex <noreply@openai.com>
## Why

After stdio transports and provider-owned defaults exist, Codex needs a config-backed provider that can describe more than the single legacy `CODEX_EXEC_SERVER_URL` remote. This PR adds that provider without activating it in product entrypoints yet, keeping parser/validation review separate from runtime wiring.

**Stack position:** this is PR 4 of 5. It builds on PR 3's provider/default model and adds the `environments.toml` provider used by PR 5.

## What Changed

- Add `environment_toml.rs` as the TOML-specific home for parsing, validation, and provider construction.
- Keep the TOML schema/provider structs private; the public constructor added here is `EnvironmentManager::from_codex_home(...)`.
- Add `TomlEnvironmentProvider`, including validation for:
  - reserved ids such as `local` and `none`
  - duplicate ids
  - unknown explicit defaults
  - empty programs or URLs
  - exactly one of `url` or `program` per configured environment
- Support websocket environments with `url = "ws://..."` / `wss://...`.
- Support stdio-command environments with `program = "..."`.
- Add helpers to load `environments.toml` from `CODEX_HOME`, but do not wire entrypoints to call them yet.
- Add the `toml` dependency for parsing.

## Stack

- 1. openai#20663 - Add stdio exec-server listener
- 2. openai#20664 - Add stdio exec-server client transport
- 3. openai#20665 - Make environment providers own default selection
- **4. This PR:** openai#20666 - Add CODEX_HOME environments TOML provider
- 5. openai#20667 - Load configured environments from CODEX_HOME

Split from original draft: openai#20508

## Validation

Not run locally; this was split out of the original draft stack.

## Documentation

This introduces the config shape for `environments.toml`; user-facing documentation should be added before this stack is treated as a documented public workflow.

---------

Co-authored-by: Codex <noreply@openai.com>
## Why

Remote compaction v2 consumes a normal Responses stream, but that compaction-specific stream consumer dropped the `response.completed` id. As a result, the `responses_websocket_response_processed` lifecycle notification was emitted for normal turn sampling but not after a v2 remote compaction response was fully processed.

## What changed

- Return the completed response id alongside the v2 `context_compaction` output item.
- After v2 compacted history is installed, send `response.processed` through the same websocket session when the feature is enabled.
- Add websocket regression coverage for a remote compaction v2 request followed by `response.processed`.

## Verification

- `cargo test -p codex-core --test all responses_websocket_sends_response_processed_after_remote_compaction_v2 -- --nocapture`
- `cargo test -p codex-core collect_context_compaction_output_accepts_additional_output_items -- --nocapture`
## Why

We want terminal tool review analytics, but the reducer should not stamp review timing from its own wall clock. This PR plumbs review timing through the real protocol and app-server seams so downstream analytics can consume the emitter's timestamps directly. Guardian reviews keep their enriched `started_at` / `completed_at` analytics fields by deriving those legacy second-based values from the same protocol-native millisecond lifecycle timestamps, rather than sampling a separate analytics clock.

## What changed

- add `started_at_ms` to user approval request payloads
- add `started_at_ms` / `completed_at_ms` to guardian review notifications
- preserve Guardian review `started_at` / `completed_at` enrichment from the protocol-native timing source
- stamp typed `ServerResponse` analytics facts with app-server-observed `completed_at_ms`
- thread the new timing fields through core, protocol, app-server, TUI, and analytics fixtures

## Verification

- `cargo test -p codex-app-server outgoing_message --manifest-path codex-rs/Cargo.toml`
- `cargo test -p codex-app-server-protocol guardian --manifest-path codex-rs/Cargo.toml`
- `cargo test -p codex-tui guardian --manifest-path codex-rs/Cargo.toml`
- `cargo test -p codex-analytics analytics_client_tests --manifest-path codex-rs/Cargo.toml`

---

[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/21434).

* openai#18748
* __->__ openai#21434
* openai#18747
* openai#17090
* openai#17089
* openai#20514
## Summary

- Remove `perCwdExtraUserRoots` / `SkillsListExtraRootsForCwd` from the `skills/list` app-server API.
- Drop Rust app-server and `codex-core-skills` extra-root plumbing so skill scans are keyed by the normal cwd/user/plugin roots only.
- Regenerate app-server schemas and update docs/tests that only existed for the removed extra-roots behavior.

## Validation

- `just write-app-server-schema`
- `just fmt`
- `cargo test -p codex-app-server-protocol`
- `cargo test -p codex-core-skills`
- `just fix -p codex-app-server-protocol`
- `just fix -p codex-core-skills`
- `just fix -p codex-app-server`
- `just fix -p codex-tui`

## Notes

- `cargo test -p codex-app-server --test all skills_list` ran the edited skills-list cases, but the full filtered run ended on the existing `skills_changed_notification_is_emitted_after_skill_change` timeout after a websocket `401`.
- `cargo test -p codex-tui --lib` compiled the changed TUI callers, then failed two unrelated status permission tests because the local `/etc/codex/requirements.toml` forbids `DangerFullAccess`.
- A source-truth check found the OpenAI monorepo still has generated/app-server-kit mirror references to the removed field; those should be cleaned up when generated app-server types are synced, or in a companion OpenAI cleanup.
## Summary

Codex's Amazon Bedrock provider signs Mantle requests with SigV4 using credentials resolved by the AWS SDK. That worked for standard AWS profiles and environment credentials, but AWS CLI console-login profiles created by `aws login` require the SDK's `credentials-login` feature to resolve `login_session` credentials. This change enables that credential provider so Bedrock can use AWS console-login credentials through the existing provider-owned AWS auth path.

While testing the console-login path, we also hit a Mantle-specific SigV4 regression from the new split between `session_id` and `thread_id`. Mantle does not preserve legacy OpenAI compatibility headers that use `snake_case` before SigV4 verification, so signing those headers can make the server reconstruct a different canonical request. The Bedrock auth path now removes that header class before signing, keeping preserved hyphenated Codex/AWS headers such as `x-codex-turn-metadata` signed normally.

## Changes

- Enable `aws-config`'s `credentials-login` feature in `codex-rs/aws-auth`.
- Add a compile-time regression test for `aws_config::login::LoginCredentialsProvider`.
- Strip `snake_case` compatibility headers from Bedrock Mantle SigV4 requests before signing.
- Expand the Bedrock auth regression test to cover `session_id`, `thread_id`, and future headers of the same shape.
- Refresh Cargo and Bazel lockfiles for the added `aws-sdk-signin` dependency.

## Tests

- tested with `aws login` locally and verified that it works as intended.
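The header-stripping rule generalizes by shape rather than by name: anything in the snake_case class is excluded from the signed set, while hyphenated headers stay signed. A minimal Python sketch of that filter (illustrative; the real code operates on the Rust request before SigV4 canonicalization):

```python
def strip_snake_case_headers(headers: dict[str, str]) -> dict[str, str]:
    """Drop legacy snake_case compatibility headers before signing.
    Mantle strips these pre-verification, so signing them would make the
    server reconstruct a different canonical request and fail SigV4."""
    return {name: value for name, value in headers.items() if "_" not in name}

signed_headers = strip_snake_case_headers({
    "session_id": "abc",            # snake_case class: dropped pre-verification
    "thread_id": "def",             # same class, dropped
    "future_compat_field": "x",     # future headers of the same shape
    "x-codex-turn-metadata": "m",   # preserved hyphenated header: stays signed
    "host": "bedrock.example.invalid",
})
```

Keying on the underscore means the fix covers `session_id`, `thread_id`, and any future compatibility headers of the same shape without a per-header allowlist.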
Requires discoverability on plugin/share/updateTargets so the server can manage workspace link access consistently, including auto-adding the workspace principal for UNLISTED. Also rejects LISTED on share creation and blocks client-supplied workspace principals while preserving response parsing for LISTED.
Fixes openai#21270. The CLI bug report template defined `description` twice for the terminal emulator field. Because duplicate YAML keys are ambiguous and parsers generally keep the later value, the form could drop the multiplexer guidance. This combines that guidance with the terminal examples under a single block scalar in `.github/ISSUE_TEMPLATE/3-cli.yml`.
Issue forms should only reference labels that exist in the repository so new reports receive the intended automatic labels. This updates the CLI issue form to stop applying the missing `needs triage` label, and changes the documentation issue form from `docs` to the existing `documentation` label. Fixes openai#21158
Fixes openai#20870.

## Summary

The feature request template currently links users to the README `#contributing` anchor, but that anchor does not exist. This can confuse users who are trying to understand contribution expectations before filing a request.

This updates `.github/ISSUE_TEMPLATE/5-feature-request.yml` to point `Contributing` at `docs/contributing.md`, matching the repository's existing contribution guidance.
## Why

`codex exec` still included the stale `research preview` label in its human-readable startup banner, which makes the CLI look older and less current than it is. Fixes openai#21444.

## What Changed

Removed the hard-coded ` (research preview)` suffix from the `OpenAI Codex v<version>` startup banner in `codex-rs/exec/src/event_processor_with_human_output.rs`.

## Validation

Local validation was not required for this one-line startup banner text cleanup.
…uth (openai#21676)

## Summary

API-key-auth remote compaction requests should not inherit `service_tier` from normal `/responses` turns. This path needs to match API auth expectations, while ChatGPT-auth remote compaction should keep reusing the shared request fields that still apply there.

This change keeps the decision inline in `codex-rs/core/src/compact_remote.rs` only. Under API key auth, the classic remote `/responses/compact` path now omits `service_tier`; under ChatGPT auth, it keeps reusing the configured tier. `codex-rs/core/src/compact_remote_v2.rs` is unchanged. The remote compaction parity coverage and snapshots were updated to assert the API-key omission and preserve the ChatGPT-auth behavior.

## Testing

- Updated remote compaction parity coverage in `codex-rs/core/tests/suite/compact_remote.rs` and the corresponding snapshots.
## What changed

- rewrote `shutdown_flushes_pending_metadata_irrelevant_updated_at` to seed an existing pending `updated_at` touch directly in `RolloutWriterState`
- kept the shutdown test focused on draining a pending touch, leaving the separate coalescing test to cover timing-based deferral

## Why

The old test had to complete several async operations inside the 50 ms test-only coalescing window. When that sequence took longer, the second flush updated `threads.updated_at` immediately and the pre-shutdown equality assertion failed, even though shutdown behavior was correct.

## Validation

- `cargo test -p codex-rollout shutdown_flushes_pending_metadata_irrelevant_updated_at`
- `cargo test -p codex-rollout`

Co-authored-by: Codex <noreply@openai.com>
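The de-flaking technique here is worth naming: instead of racing real operations against a coalescing window, the test seeds the pending state directly and asserts only that shutdown drains it. A minimal Python sketch of that pattern (the `Coalescer` below is a hypothetical stand-in for `RolloutWriterState`):

```python
import time

class Coalescer:
    """Touches inside the window are deferred as one pending value;
    shutdown must drain whatever is still pending."""

    def __init__(self, window_s: float):
        self.window_s = window_s
        self.last_flush = time.monotonic()
        self.pending = None
        self.flushed = []

    def touch(self, value):
        if time.monotonic() - self.last_flush < self.window_s:
            self.pending = value  # defer: coalesce inside the window
        else:
            self.flushed.append(value)
            self.last_flush = time.monotonic()

    def shutdown(self):
        # Drain any deferred touch so it is never lost on exit.
        if self.pending is not None:
            self.flushed.append(self.pending)
            self.pending = None

c = Coalescer(window_s=0.05)
# Seed the pending touch directly, as the rewritten test does, rather
# than racing real work against the 50 ms window.
c.pending = "updated_at-touch"
c.shutdown()
```

Seeding the state removes the timing dependency entirely, leaving the timing-based deferral behavior to a separate dedicated test.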
## Summary

## Local validation

## Branch audit
Every local `feat/`, `every-code/`, and `code-*` branch is present by ancestry or patch-equivalence. The only local `fix` branches still not included are legacy rollback-history branches: `fix/rollback-shift-up-history`, `fix/rollback-shift-up-history-session-metadata`, `fix/rollback-history-dedupe-prefix`, and `fix/rollback-history-skip-whitespace`.