
fix(bb): unblock merge-train CI — restore two more transitive includes #23343

Draft

AztecBot wants to merge 122 commits into merge-train/spartan from claudebox/audit-merge-train-crypto

Conversation

@AztecBot
Collaborator

Summary

Two more transitive-include breaks were still failing CI on merge-train/spartan after the previous claudebox fix at 4af2626375. CI run 25968961690 failed bb-cpp-fuzzing and bb-cpp-native-objects (which builds test_objects), aborting the build before the e2e suite gets to run.

Root cause is the same as the previous fix: ludamad's series of header-trim/include-cleanup commits pushed directly to merge-train/spartan on 2026-05-16 (21e45212c9, 52678411c9, d20e61adbe, db34cbcb61, etc.) removed transitive #includes that the default preset doesn't need but the fuzzing/asan/native-with-tests presets do. These commits have no PR numbers because they were pushed straight to the train, so non-default build flavors were never exercised before the change landed.

Fixes

  • bool.fuzzer.hpp — add #include "barretenberg/stdlib/primitives/bool/bool.hpp" for bb::stdlib::bool_t<Builder>. Matches the existing pattern in byte_array.fuzzer.hpp and field.fuzzer.hpp which already include their primary stdlib type header. Fixes bb-cpp-fuzzing.
  • field_conversion.test.cpp — add #include "barretenberg/ecc/fields/field_conversion.hpp" for FrCodec. The stdlib field_conversion.hpp it currently includes doesn't re-export FrCodec (which is the native codec used to compute reference gate counts in the test). Fixes bb-cpp-native-objects test build.

Verified locally

  • cmake --build --preset fuzzing → success (was failing before fix)
  • cmake --build --preset fuzzing-avm → success
  • ninja stdlib_primitives_test_objects → success (was failing before fix)
  • Full build_native_objects (1074 ninja steps) → success, 0 errors
  • cmake --build --preset smt-verification (objects + circuit_checker) → success

Full analysis

Detailed post-mortem of what's accumulated on the train, why crypto changes are blocking it, and how the offending commits got in:
https://gist.github.com/AztecBot/9aa3189ef51909b2108fa0fe6dd4d35c

ClaudeBox log: https://claudebox.work/s/36322e8bf136aac3?run=1

fcarreiro and others added 30 commits May 13, 2026 17:31
…al pipeline (#23245)

## Summary

- Removes `FastTxCollection` as a separate class and absorbs all its
logic directly into `TxCollection`
- Replaces the old parallel file-store delay with a single sequential
pipeline: node RPC → reqresp → file store, where each phase blocks on
the previous (cancellation-aware)
- File store collection is now driven by `IRequestTracker` — the same
synchronization primitive used by node and reqresp paths. The tracker is
the single source of truth for "is this tx still missing?" and "is this
request still alive?"
- `FileStoreTxCollection` simplified: dropped
`start()`/`stop()`/persistent worker pool/`wakeSignal`.
`startCollecting(requestTracker, context)` returns `Promise<void>`,
spins up its own per-call worker pool, and workers self-terminate when
the tracker is cancelled (all-fetched / deadline / external)

## Collection flow inside `collectFast`

1. Start node RPC collection in the background
2. Wait `txCollectionFastNodesTimeoutBeforeReqRespMs` — interruptible by
cancellation **or by node exhaustion** (so when no nodes are configured,
reqresp starts immediately)
3. Start reqresp in the background (parallel with nodes)
4. Wait `txCollectionFileStoreFastDelayMs` — interruptible by
cancellation or reqresp completion
5. Start file store collection in the background (its workers
self-terminate)
6. `Promise.allSettled` on node + reqresp + file store

`txCollectionFileStoreFastDelayMs` description updated to reflect it is
now anchored to reqresp start, not collection start.
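The phased flow above can be sketched as follows. This is an illustrative sketch only — `collectFast`, `raceDelay`, and the phase callbacks are assumed names, not the real `TxCollection` API:

```typescript
type Phase = (signal: AbortSignal) => Promise<string[]>;

// Sleep for `ms`, but resolve early if any of the `interrupts` settle first.
function raceDelay(ms: number, ...interrupts: Promise<unknown>[]): Promise<void> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const sleep = new Promise<void>(res => (timer = setTimeout(res, ms)));
  return Promise.race([sleep, ...interrupts]).then(() => clearTimeout(timer));
}

async function collectFast(
  nodes: Phase, reqresp: Phase, fileStore: Phase,
  nodesDelayMs: number, fileStoreDelayMs: number,
): Promise<string[][]> {
  const ctrl = new AbortController();
  // 1. Node RPC collection starts immediately in the background.
  const nodesP = nodes(ctrl.signal);
  // 2. Wait, interruptible by cancellation or node exhaustion (nodesP settling).
  await raceDelay(nodesDelayMs, nodesP.catch(() => {}));
  // 3. Reqresp runs in parallel with nodes.
  const reqrespP = reqresp(ctrl.signal);
  // 4. Wait, interruptible by reqresp completion (delay anchored to reqresp start).
  await raceDelay(fileStoreDelayMs, reqrespP.catch(() => {}));
  // 5. File store workers self-terminate when the tracker is cancelled.
  const fileStoreP = fileStore(ctrl.signal);
  // 6. Settle all three phases together.
  const results = await Promise.allSettled([nodesP, reqrespP, fileStoreP]);
  ctrl.abort();
  return results.map(r => (r.status === 'fulfilled' ? r.value : []));
}
```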

## File store / tracker integration

- `FileStoreTxCollection.startCollecting` no longer takes `(txHashes,
context, deadline)`; it takes `(requestTracker, context)` and reads the
missing txs + deadline from the tracker
- Workers check `requestTracker.isMissing(hash)` each scan — if the tx
was found via another path (node/reqresp/gossipsub), the entry is
dropped without an extra fetch
- Workers race their backoff sleeps against
`requestTracker.cancellationToken` — cancelling a request (deadline,
`stopCollectingForBlocksUpTo/After`, or `stop()`) propagates to file
store workers immediately
- Removed `foundTxs`/`clearPending` plumbing on `FileStoreTxCollection`
— the tracker handles both implicitly
- `startCollecting` yields once after building its entry set, so a
synchronous follow-up call (e.g. `markFetched` in tests, or the
gossipsub-found path in production) lands before workers begin scanning
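A worker loop following the tracker integration described above might look like this — the `Tracker` shape and worker signature are assumptions for illustration, not the real `FileStoreTxCollection` interface:

```typescript
interface Tracker {
  isMissing(hash: string): boolean;
  cancelled: Promise<void>; // resolves when the request is cancelled
  isCancelled(): boolean;
  markFetched(hash: string): void;
}

async function worker(
  tracker: Tracker,
  hashes: string[],
  fetchTx: (h: string) => Promise<boolean>,
  backoffMs: number,
): Promise<void> {
  while (!tracker.isCancelled()) {
    // Re-check the tracker each scan: txs found via another path are dropped
    // without an extra fetch.
    const pending = hashes.filter(h => tracker.isMissing(h));
    if (pending.length === 0) return; // all fetched — self-terminate
    for (const h of pending) {
      if (await fetchTx(h)) tracker.markFetched(h);
    }
    // Race the backoff sleep against cancellation so teardown is immediate.
    await Promise.race([new Promise(res => setTimeout(res, backoffMs)), tracker.cancelled]);
  }
}
```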

## Tests

- `tx_collection.test.ts`: collapsed the `TestFastTxCollection`
subclass; all accesses go directly through `TxCollection`. Added "starts
reqresp immediately when no nodes are configured" covering the
node-exhaustion shortcut
- `file_store_tx_collection.test.ts`: rewritten for the new shape — no
`start()`/`stop()`, lifecycle driven by the tracker (cancel to terminate
workers). New "workers exit when tracker is cancelled" covers the
per-call worker-pool teardown

Closes
https://linear.app/aztec-labs/issue/A-933/tx-collection-dont-retrieve-transactions-that-have-already-been
via new synchronization with the request tracker.
…ims (#23165)

## Context

`SequencerPublisher` simulates each enqueued L1 action individually at
enqueue time, then sends them bundled through Multicall3. The `propose`
checkpoint action is validated at enqueue and send time (the latter via
a `preCheck` mechanism), but in isolation and relying on overrides.
There is no simulation of the multicall payload before sending it, so a
reverting tx is most likely not caught.

This refactor:

- Replaces the per-request `preCheck` mechanism with a **single
bundle-level `eth_simulateV1`** of the assembled `aggregate3` payload,
run right before send. If any entry reverts in sim it is dropped from
the bundle, the reduced bundle is re-simulated to get an honest
`gasUsed`, and the survivors are sent. Extracted to a
`SequencerBundleSimulator`.
- Drops the entire propose simulate at enqueue (`simulateProposeTx`,
`validateCheckpointForSubmission`). The bundle simulate covers it.
- Adds a new pre-broadcast `validateBlockHeader` call (calling
`validateHeaderWithAttestations` with empty attestations +
`ignoreSignatures: true`) that catches header-level bugs before we
gossip the proposal to peers. Emits a new `header-validation-failed`
event on failure.
- Drops every per-action simulate at enqueue (governance signal **and**
slashing votes/executes). Bundle simulate at send time is the single
decision point for every per-action revert. `simulateAndEnqueueRequest`
is deleted — we were already enqueuing votes even when the simulation
failed, so the enqueue-time check bought nothing.
- Rewrites `sendRequestsAt` so it takes an L2 `SlotNumber`, derives the
timestamp for the start of that slot, and sleeps until one L1 slot
before that boundary, so we can land on the first L1 slot of the target
L2 slot.
- Centralises `SimulationOverridesPlan` construction into a single
`buildCheckpointSimulationOverridesPlan` helper. The plan **always**
pins both `pending` and `proven` chain tips (to the pipelined parent /
invalidation target, or to the current snapshot when neither applies),
so `STFLib.canPruneAtTime` cannot reintroduce a phantom prune during
simulation.
- Makes `SimulationOverridesBuilder.merge` undefined-safe: explicit
`undefined` fields in an incoming plan no longer erase previously-set
values. `withPendingTempCheckpointLogFields` now accepts a partial
subset of fields.
- Moves the payload-empty cache onto `GovernanceProposerContract` next
to its concern. Only `isPayloadEmpty=false` is cached (a CREATE2
redeploy could go empty → populated).
- Drops the old Multicall3 revert-recovery and per-request-resim
machinery, since with `allowFailure: true` the top-level multicall is
expected to land successfully. `Multicall3.forward` now throws
`MulticallForwarderRevertedError` if the receipt reports a reverted
status; the publisher does **not** rotate to a new publisher on that
error (on-chain failure, not a send failure). Adds `Multicall3.hasCode`
helper and a `simulateAggregate3` entrypoint used by the bundle
simulator.
- `L1TxUtils.sendTransaction` fails fast if `txTimeoutAt` has already
elapsed when called. `SequencerPublisher.forwardWithPublisherRotation`
re-checks the deadline at the head of each rotation iteration so it
doesn't keep cycling through publishers after the L2 slot's submission
window has closed.
- Sequencer escape-hatch (`voteInSlotWithoutSyncing`) and
full-escape-hatch (`voteOnSlotWithEscapeHatch`) vote-only paths now
submit via `sendRequestsAt(slot)` rather than `sendRequests()`, so the
bundle-simulate `block.timestamp` override matches the slot the EIP-712
vote signatures were generated for.

The intended outcome is a publisher with one explicit re-validation
point (the bundle simulate), measurable bundle gas (from the bundle
simulate's `gasUsed`), and dead/duplicated state-override plumbing
removed.
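One item above — the undefined-safe `SimulationOverridesBuilder.merge` — can be sketched as follows. The plan shape and helper name here are illustrative, not the real API:

```typescript
type Plan = { pendingTip?: bigint; provenTip?: bigint; timestamp?: bigint };

// Merge an incoming plan over a base plan; explicitly-undefined fields in the
// incoming plan must NOT erase previously-set values.
function mergePlans(base: Plan, incoming: Plan): Plan {
  const out: Plan = { ...base };
  for (const key of Object.keys(incoming) as (keyof Plan)[]) {
    const value = incoming[key];
    if (value !== undefined) out[key] = value; // skip explicit undefined
  }
  return out;
}
```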

## Resulting simulations after this refactor

The full list of simulation / gas-estimation steps that remain in a
pipelined proposer slot, in execution order.

### Pre-build, in `Sequencer.doWork`

1. **`publisher.canProposeAt`** — rollup view call simulated with the
centralised override plan. Cheap pre-check gate before any block-build
work.
2. **`publisher.simulateInvalidateCheckpoint`** (conditional) — runs
**only** if `syncedTo.pendingChainValidationStatus.valid === false` AND
`!syncedTo.hasProposedCheckpoint`. Simulates the invalidate call against
the rollup. Result becomes the `invalidateCheckpoint` package passed
into `CheckpointProposalJob`. The previous code called this even when
there's a proposed parent and discarded the result; this refactor adds
the `!hasProposedCheckpoint` gate so we skip the wasted RPC.

### Per-slot, in `CheckpointProposalJob.proposeCheckpoint`

3. **CheckpointVoter votes** — `CheckpointVoter.enqueueVotes()` runs at
the top of `execute()`, returning two promises that are awaited in
parallel with block-build. It enqueues two kinds of votes via the
publisher, **neither of which simulates at enqueue time** after this
refactor:
- **`enqueueGovernanceCastSignal`** — does an `isPayloadEmpty`
pre-flight check (now on `GovernanceProposerContract`), then enqueues.
No `eth_simulateV1`.
- **`enqueueSlashingActions`** (one call per slashing action, type
`vote-offenses` or `execute-slash`) — builds the request and enqueues.
No `eth_simulateV1`.

Real reverts on any of these are caught by the bundle simulate at send
time, which drops the failing entry and proceeds with the survivors.
4. **`publisher.validateBlockHeader` (NEW: pre-broadcast)** — replaces
the old `simulateProposeTx`-at-enqueue. Calls
`validateHeaderWithAttestations` with empty attestations and
`ignoreSignatures: true` so the rollup runs the header checks (archive
match, slot match, timestamp, mana-min-fee, …) without needing real
attestations. Runs **before** we gossip the proposal to peers. If it
fails, abort the slot — log an error, emit `header-validation-failed`,
don't broadcast, don't enqueue.
5. **`prepareProposeTx → validateBlobs estimateGas`** — kept as the
blob-commitment **consistency check** (detects locally-built commitments
not matching the blob sidecars). Returns `blobEvaluationGas`, which we
stash on the propose `RequestWithExpiry` for use by the bundle gasLimit
later. The simulate-step that previously paired with this
(`simulateProposeTx`) is removed.

### Background pipeline, in
`waitForAttestationsAndEnqueueSubmissionAsync`

6. **`publisher.simulateInvalidateCheckpoint` (conditional)** — runs
**only** in the fallback path where attestation collection failed AND
the pending chain turned out to be invalid. Triggered from
`CheckpointProposalJob.enqueueInvalidation`. This is the second, late
trigger for invalidation simulation — distinct from step 2's pre-build
trigger.

### Send time, in `sendRequestsAt(targetSlot)`

7. **Bundle simulate (NEW)** — single `eth_simulateV1` of the assembled
`aggregate3` payload, with `block.timestamp` overridden to the start of
`targetSlot`, and state overrides = `[disableBlobCheck]` iff `propose`
is in the bundle and `[]` otherwise. Per-entry result decoded from the
returned `Result[]`. This is the **only** post-pipeline-sleep
re-validation; it replaces the per-request `preCheck` mechanism
entirely.
8. **Bundle re-simulate (NEW, conditional)** — runs **only** when step 7
dropped at least one entry. Re-runs the bundle simulate on the reduced
payload to get an honest `gasUsed`, and applies the same per-entry
decode so additional drops are caught. If the re-simulate falls back
(node doesn't support `eth_simulateV1`), the publisher sends the
**first-pass survivors only** with `MAX_L1_TX_LIMIT`; the entries that
the first pass already proved would revert stay dropped and are reported
as failed actions.
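Steps 7–8 can be sketched as a simulate / drop / conditional re-simulate loop. Types and the `simulate` callback below are assumptions for illustration, not the real `SequencerBundleSimulator` API:

```typescript
interface Entry { id: string; allowFailure: boolean }
interface SimResult { id: string; success: boolean; gasUsed: bigint }

async function simulateBundle(
  entries: Entry[],
  simulate: (bundle: Entry[]) => Promise<SimResult[]>,
): Promise<{ survivors: Entry[]; dropped: string[]; gasUsed: bigint }> {
  // Step 7: single simulate of the assembled bundle; drop reverting entries.
  const first = await simulate(entries);
  const dropped = first.filter(r => !r.success).map(r => r.id);
  let survivors = entries.filter(e => !dropped.includes(e.id));
  let gasUsed = first.reduce((acc, r) => acc + r.gasUsed, 0n);
  if (dropped.length > 0 && survivors.length > 0) {
    // Step 8: re-simulate the reduced bundle for an honest gasUsed,
    // applying the same per-entry decode so additional drops are caught.
    const second = await simulate(survivors);
    const moreDropped = second.filter(r => !r.success).map(r => r.id);
    dropped.push(...moreDropped);
    survivors = survivors.filter(e => !moreDropped.includes(e.id));
    gasUsed = second.reduce((acc, r) => acc + r.gasUsed, 0n);
  }
  return { survivors, dropped, gasUsed };
}
```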

### Post-send

No diagnostic-only simulate paths remain. `Multicall3.forward` throws
`MulticallForwarderRevertedError` on a reverted receipt and re-throws on
a send error; per-request revert resimulation has been removed.

## Known caveats

- **`sendRequestsAt` early lead**: sleeps until `startOfTargetSlot -
ethereumSlotDuration` to maximise inclusion in the first L1 block of the
L2 slot. There is a known correctness risk: a tx mined in the L1 block
immediately preceding the L2-slot boundary would revert via
`ProposeLib.validateHeader`'s `slot ==
block.timestamp.slotFromTimestamp()` check. In practice the prior L1
block is usually already committed before this send wakes; if this
proves unreliable in production, tune the lead down, especially in tests.
- **`validateBlockHeader` pre-broadcast coverage**: covers the
`validateHeader` checks (archive, slot/timestamp, mana-min-fee, …) and
the empty-attestation path of `validateHeaderWithAttestations`, but does
NOT cover proposer-signature verification, inbox consumption
(`Rollup__InvalidInHash`), or `header.inHash` match. Those still execute
inside the full `propose` and are caught by the bundle simulate at send
time. The cost of a rare miss is one wasted broadcast.
- **Top-level `aggregate3` revert diagnostics removed**: the previous
`Multicall3.forward` code decoded receipt-reverted reasons via
`tryGetErrorFromRevertedTx` and did a per-request resim on send-throw.
Both paths are gone. With `allowFailure: true` and `Multicall3.hasCode`
covering the no-bytecode case, a reverted forwarder receipt is genuinely
unexpected (OOG, forwarder bug). The throw of
`MulticallForwarderRevertedError` is the only diagnostic surface —
operators will need the transaction hash from the log to investigate.
…ses (#23249)

## Motivation

The `RevertCode` and `TxExecutionResult` types each carried three
deprecated aliases (`APP_LOGIC_REVERTED`, `TEARDOWN_REVERTED`,
`BOTH_REVERTED`) that all collapse to the same `REVERTED` value. Keeping
them around adds noise, requires `no-duplicate-enum-values` eslint
suppressions, and lets new code keep reaching for the old names.

## Approach

Removed the deprecated members from both enums and rewrote every call
site to use `REVERTED` directly. Tests, fixtures, and a stale doc
reference were updated to match.

## Changes

- **stdlib**: Drop deprecated
`APP_LOGIC_REVERTED`/`TEARDOWN_REVERTED`/`BOTH_REVERTED` from
`RevertCode` and `TxExecutionResult`.
- **simulator, pxe, aztec.js, end-to-end (tests)**: Replace remaining
references with `REVERTED`.
- **simulator/docs**: Update a stale `APP_LOGIC_REVERTED` reference in
the public-tx-simulation doc.
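The shape of the cleanup can be shown with a minimal enum sketch — numeric values here are illustrative, not the real `RevertCode` encoding:

```typescript
enum RevertCode {
  OK = 0,
  // Formerly also aliased as APP_LOGIC_REVERTED / TEARDOWN_REVERTED /
  // BOTH_REVERTED, which all collapsed to this same value.
  REVERTED = 1,
}

// Call sites lose nothing by matching on REVERTED directly.
function isReverted(code: RevertCode): boolean {
  return code === RevertCode.REVERTED;
}
```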
…23259)

Addresses a config-timing race in
`epochs_invalidate_block.parallel.test.ts > "proposer invalidates
multiple checkpoints"` that caused intermittent CI failures with
`expect(validCount).toBeLessThan(quorum)` (e.g. 5/6 attestations when
quorum=5).

## The race

The test reads `currentSlot` via `monitor.run()` right after waiting for
the first checkpoint to land — that read can land anywhere within the
current L2 slot, including near its end. It then computes `badSlot1 =
currentSlot + 2` and races to push malicious config
(`skipCollectingAttestations: true`, …) to that slot's proposer via
`await node.setConfig({...})`.

`CheckpointProposalJob` is constructed with `this.config` passed by
reference (`sequencer-client/src/sequencer/sequencer.ts:559`), and
`Sequencer.updateConfig` reassigns `this.config = merge(...)` rather
than mutating, so a job built before `setConfig` lands keeps the old
config object. Under proposer pipelining
(`PROPOSER_PIPELINING_SLOT_OFFSET = 1`,
`epoch-cache/src/epoch_cache.ts:26`), the job for `badSlot1` is built
during the last L1 slot of L2 slot `badSlot1 - 1`. With 32s L2 slots and
8s L1 slots, that's ~24s into the previous L2 slot — so if `currentSlot`
was read late, badSlot1's proposer can snapshot the old config before
our `setConfig` round-trip completes.
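The stale-reference mechanism can be reduced to a tiny sketch — class and field names are illustrative:

```typescript
type Config = { skipCollectingAttestations: boolean };

class SequencerLike {
  config: Config = { skipCollectingAttestations: false };

  // The job captures the config object by reference at construction time.
  buildJob() {
    return { config: this.config };
  }

  // updateConfig reassigns rather than mutates, so jobs built earlier keep
  // pointing at the old object.
  updateConfig(patch: Partial<Config>) {
    this.config = { ...this.config, ...patch };
  }
}
```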

## Fix

- Wait for an L2 slot boundary (`monitor.waitUntilNextL2Slot()`) before
reading `currentSlot`, so we start from the beginning of a slot rather
than wherever we happened to land.
- Bump the gap from `+2/+3` to `+3/+4` for a second slot of margin.

Cost is up to one additional L2 slot of test runtime in the worst case;
the existing 8-slot wait window for both checkpoints still fits.
`sendBatchRequest` became unused after removing the slow tx flow and the
old tx reqresp method.

This PR removes `sendBatchRequest` and cleans up code that became unused as a result.

It does NOT remove subprotocol validator registration/etc from reqresp.
This might be done in a follow-up depending on how
https://linear.app/aztec-labs/issue/A-1014/block-txs-reqresp-validator-validaterequestedblocktxs-is-never-invoked
is resolved.
ProposalTxCollector doesn't exist anymore.

Clean up unused files.

Rename bench that is now only testing BatchTxRequester.
Remove the single-checkpoint-proposal map in favour of the "by hash"
variant.
Checks that inHash, archive, and sig ctx match. Should catch errors
during construction.
…23257)

Addresses [Phil's review
comment](#23165 (comment))
on #23165: uses the injected `DateProvider` instead of `new Date()` for
the pre-gas-estimation timeout check in `L1TxUtils.sendTransaction`, so
tests can drive the clock.
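A minimal sketch of the pattern, assuming a `DateProvider` with a `now()` method (the real interface may differ):

```typescript
interface DateProvider { now(): number }

// Fail fast if the timeout has already elapsed, reading the injected clock
// instead of `new Date()` so tests can drive time.
function failFastIfExpired(dateProvider: DateProvider, txTimeoutAt?: Date): void {
  if (txTimeoutAt && dateProvider.now() >= txTimeoutAt.getTime()) {
    throw new Error('Transaction timeout expired before send');
  }
}
```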
## Motivation

The local network sandbox (`aztec start --local-network`) historically
ran without proposer pipelining, so the compose-routed e2e suite
(`src/composed/*`, `src/guides/*`, cli-wallet flows, docs examples,
playground) never exercised the pipelined sequencer path. Turning
pipelining on revealed that each L2 slot took a full real-time slot (~72
s) before the L1 multicall fired, blowing up sandbox boot from ~30 s to
~5 min, because the existing `AnvilTestWatcher` triggers don't fire in
the pipelined-publish window.

## Approach

First commit flips `SEQ_ENABLE_PROPOSER_PIPELINING=true` on the three
sandbox-test compose envs so every compose-routed test runs through the
pipelined path. Second commit teaches `AnvilTestWatcher` about the
proposer's target slot by hooking the sequencer's `block-proposed` event
in `createLocalNetwork`; when the proposer has built a block destined
for a slot beyond L1, the watcher warps L1 (and, via `cheatcodes.warp`,
the injected date provider) forward, waking the pipelined publisher's
`sendRequestsAt` sleep and the upstream
`waitForValidParentCheckpointOnL1` wait. `block-proposed` is used rather
than the cleaner `state-changed → PUBLISHING_CHECKPOINT` because the
latter only fires *after* `waitForValidParentCheckpointOnL1` unblocks —
which is what we are trying to break — so it would be circular.

## Changes

- **yarn-project/end-to-end, docs/examples/ts, playground (compose)**:
add `SEQ_ENABLE_PROPOSER_PIPELINING=true` to the `local-network` /
`aztec` service env so every compose-routed sandbox test runs pipelined.
- **yarn-project/aztec (`AnvilTestWatcher`)**: new
`setProposedTargetSlot` setter and a `warpTimeIfNeeded` branch (gated on
`isLocalNetwork`) that warps L1 to the target slot's timestamp when it's
ahead of L1.
- **yarn-project/aztec (`createLocalNetwork`)**: subscribe to the
sequencer's `block-proposed` event and forward `slot` to the watcher.

Verified locally: sandbox boot drops from ~5 min back to ~27 s under
pipelining, and `e2e_local_network_example.test.ts` (both tests) passes
in ~33 s.
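The warp condition the watcher applies can be sketched as follows — the function and its parameters are illustrative, not the real `AnvilTestWatcher` API:

```typescript
// If the proposed target slot's timestamp is ahead of L1, warp L1 forward to
// it, waking the pipelined publisher's sendRequestsAt sleep.
function warpIfNeeded(
  l1Timestamp: bigint,
  targetSlot: bigint,
  slotDuration: bigint,
  genesisTimestamp: bigint,
  warp: (ts: bigint) => void,
): void {
  const targetTimestamp = genesisTimestamp + targetSlot * slotDuration;
  if (targetTimestamp > l1Timestamp) warp(targetTimestamp);
}
```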
… pipelining (#23302)

## Summary

Fixes the `e2e_p2p_broadcasted_invalid_block_proposal_slash` failure
that has been blocking the `merge-train/spartan` train (run
https://github.com/AztecProtocol/aztec-packages/actions/runs/25896899879,
test log http://ci.aztec-labs.com/2bf4e2cd2d9e7944).

The test creates the malicious proposer first (auto-starting its
sequencer) and only later creates the honest nodes and waits for P2P
mesh. Under `enableProposerPipelining: true` (turned on for this test by
#23070), the malicious proposer is selected for the very next slot,
builds + broadcasts the invalid proposal one slot ahead, and lands the
broadcast before the honest validators have joined the mesh. They then
reject it at the gossipsub `checkpoint_proposal_validator` with
`Penalizing peer for invalid slot number` (since their target slot has
already moved past), so the `state_mismatch` slashing path never runs.
The malicious sequencer then gets stuck on the failed publish (`Awaiting
pending L1 payload submission`) and never proposes again before the test
times out on `awaitOffenseDetected`.

This is the same race that #23070 fixed in
`duplicate_proposal_slash.test.ts`; the same pattern is applied here:

- Create both the invalid proposer and the honest nodes with
`dontStartSequencer: true`.
- After P2P mesh connectivity + committee formation, use
`advanceToEpochBeforeProposer` to land one epoch before an epoch where
the invalid proposer is scheduled.
- Start all sequencers, then `advanceToEpoch(targetEpoch, { offset:
-AZTEC_SLOT_DURATION })` so the malicious slot fires while every node is
online and at the same wall-clock slot.
- After `awaitOffenseDetected` on one node, poll `getSlashOffenses`
across **all** nodes for `BROADCASTED_INVALID_BLOCK_PROPOSAL` — under
pipelining a given receiver may have already advanced past the build
slot when the proposal arrives, so we need to catch whichever node was
still in the build slot.

The on-chain slash assertion (`rollup.listenToSlash`) is preserved
unchanged.

Full failure analysis:
https://gist.github.com/AztecBot/39b69c1117f419145938ccd2c198f8e9

## Test plan

- CI: `e2e_p2p_broadcasted_invalid_block_proposal_slash` passes on
`merge-train/spartan`.
- Local `./bootstrap.sh ci` / `fast` / `build` are not runnable in this
container (no Docker socket and `$HOME` not writable for the container
UID — `yarn install` fails on `corepack` mkdir, parallel-bootstrap can't
create `~/.parallel`). Fix is a direct port of a pattern already
shipping green on `next` via the sibling
`duplicate_proposal_slash.test.ts`.


ClaudeBox log: https://claudebox.work/s/06a4929a1971beaf?run=1
Prevents the archiver from reporting invalid L2 tips by querying all
chain tips within a db transaction. Moves the responsibility of
assembling the tip data to the block store itself to minimize the number
of queries to the db. Clamps proven and finalized tips such that an
incorrect L1 sync still results in finalized <= proven <= checkpoints.
And adds explicit assertions that tips are ordered.
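The clamping invariant can be sketched directly — field names are illustrative, not the real `L2Tips` struct:

```typescript
interface L2Tips { checkpoints: number; proven: number; finalized: number }

// Clamp so an incorrect L1 sync still yields finalized <= proven <= checkpoints,
// then assert the ordering explicitly.
function clampTips(raw: L2Tips): L2Tips {
  const proven = Math.min(raw.proven, raw.checkpoints);
  const finalized = Math.min(raw.finalized, proven);
  const tips = { checkpoints: raw.checkpoints, proven, finalized };
  if (!(tips.finalized <= tips.proven && tips.proven <= tips.checkpoints)) {
    throw new Error('L2 tips out of order');
  }
  return tips;
}
```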

Also adds a guard in the tips store that prevents from deleting block
hashes that are still alive by a given chain tip, instead of assuming
that the finalized chain tip is always the oldest one.

This should catch errors where the block stream breaks due to a
finalized chain tip running ahead of a proven chain tip.

Note that this PR does NOT enforce ordering at the L2Tips struct itself,
since consumers (i.e. the ones that report the "local" chain tips) may
break this contract (see A-1061). This PR is a simpler alternative to
#22964. Fixes A-1018.
…pelining (#23296)

## Problem

`Sequencer.tryVoteWhenEscapeHatchOpen` constructed `CheckpointVoter`
with the wall-clock `slot` and called `publisher.sendRequestsAt(slot)`.
Under proposer pipelining we are the elected proposer for `slot + 1`
(`targetSlot`), and the multicall is expected to mine in `targetSlot`.
`EmpireBase.sol::_internalSignal`:

- Verifies the EIP-712 digest against the **mining-slot** signature
- Checks `msg.sender == getCurrentProposer()` for the **mining slot**

Both fail under pipelining because we're the proposer for `targetSlot`,
not `slot`. The multicall reverts silently inside Multicall3 and every
governance/slashing entry is dropped.

## Fix

Thread `targetSlot` through `tryVoteWhenEscapeHatchOpen` and use it for
both:

- `CheckpointVoter` (binds the EIP-712 signature to `targetSlot`)
- `publisher.sendRequestsAt(targetSlot)` (delays submission so the tx
mines in `targetSlot`)

This mirrors `tryVoteWhenSyncFails` and `CheckpointProposalJob.execute`,
which already use `targetSlot` correctly. When pipelining is disabled
`targetSlot == slot` (from
`epochCache.getTargetEpochAndSlotInNextL1Slot()`), so `sendRequestsAt`
resolves with no extra sleep and the legacy behaviour is preserved.
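The slot targeting reduces to a one-line derivation. This is a sketch — the constant mirrors `PROPOSER_PIPELINING_SLOT_OFFSET` from the description, but the helper itself is hypothetical:

```typescript
// Under pipelining we are the elected proposer for slot + offset; with
// pipelining disabled, targetSlot == slot and legacy behaviour is preserved.
function getTargetSlot(currentSlot: bigint, pipeliningEnabled: boolean): bigint {
  const PROPOSER_PIPELINING_SLOT_OFFSET = 1n;
  return pipeliningEnabled ? currentSlot + PROPOSER_PIPELINING_SLOT_OFFSET : currentSlot;
}
```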

## Showcase

Re-enables `e2e_sequencer/escape_hatch_vote_only.test.ts` with
`enableProposerPipelining: true` and `inboxLag: 2`. The test asserts
`finalStats.votes >= slotsPassed` over the escape-hatch window — this
assertion fails without the fix because no votes ever land.

Test-side adjustments for the pipelined timing model:

- Move event listener attachment to **after** the warp into the
escape-hatch epoch. Checkpoint proposals in flight at warp time fail
their L1 propose tx and are setup-warp artifacts, not vote-only window
failures.
- Snapshot `slotAtMeasurement` for the vote-count lower bound, then wait
for the L1 slot to advance two more so the trailing vote (signed in
build slot N for target slot N+1) has time to mine before counting.
## Motivation

Under proposer pipelining, the checkpoint job opens a world-state fork
with `closeDelayMs: 12_000`. If a pending-chain unwind or historical
prune destroys that fork on the C++ side before the delay fires,
`DELETE_FORK` rejects with `"Fork not found"`, producing a stray warn
log and leaking the per-fork queue entry in the JS instance — one dead
entry per affected pipelined slot.

## Approach

Make `close()` idempotent via an in-flight `closePromise`, and treat
`"Fork not found"` as benign on close (same precedent as the existing
`"Native instance is closed"` suppression — fork IDs are monotonic and
never reused). Also wrap the per-fork queue cleanup in `try/finally` in
both the native and IPC instances so the JS-side queue map cannot
outlive the native fork on error.
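The idempotent-close pattern can be sketched as follows — the facade shape is illustrative, not the real `MerkleTreesForkFacade`:

```typescript
class ForkFacade {
  private closePromise?: Promise<void>;

  constructor(private deleteFork: () => Promise<void>) {}

  close(): Promise<void> {
    // Idempotent: every caller shares one in-flight promise, and
    // "Fork not found" is benign on close (fork IDs are never reused).
    this.closePromise ??= this.deleteFork().catch(err => {
      if (!String(err).includes('Fork not found')) throw err;
    });
    return this.closePromise;
  }
}
```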

## Changes

- **world-state**: `MerkleTreesForkFacade.close()` is now idempotent and
swallows `"Fork not found"`; per-fork queue cleanup in
`NativeWorldStateInstance` and `IpcWorldStateInstance` moved to
`finally`.
- **world-state (tests)**: Regression test that disposes a
`closeDelayMs` fork, triggers an unwind that destroys it on the C++
side, and asserts no warn is logged and the queue entry is cleaned up.

Fixes A-1055
## Summary

> **Depends on PR #23296** -- this PR is rebased on top of
`palla/fix-b5-escape-hatch-slot-targeting`, which forward-ports the §6
B5 escape-hatch slot-targeting fix onto the modern
`buildCheckpointSimulationOverridesPlan` + flat `l1Contracts` API. With
B5 in, `e2e_sequencer/escape_hatch_vote_only` and
`e2e_sequencer/gov_proposal.parallel` "should vote even when unable to
build blocks" are now re-enabled under pipelining on this PR.

Extracts the tests known to pass under proposer pipelining from PR
#23150, without flipping the global default. Tests opt into pipelining
explicitly via a new `PIPELINING_SETUP_OPTS` helper. The global
`enableProposerPipelining` default stays `false` on
`merge-train/spartan`; this PR migrates tests file-by-file so each one
is opted in by name.

This PR is intentionally scoped: it only includes tests whose
pipelining-ready status is reasonably well understood. Tests that depend
on shared base-class fixtures (`FeesTest`, `BlacklistTokenContractTest`,
`CrossChainMessagingTest`, `DeployTest`, `FullProverTest`, etc.) keep
their branch changes but are not yet wired to pipelining via their base
class -- those base classes are used by tests outside this batch and a
blanket opt-in would over-migrate. They will be migrated in follow-up
PRs.

Two commits:

1. **`test(e2e): opt unchanged tests into proposer pipelining`** -- adds
`PIPELINING_SETUP_OPTS` to `fixtures.ts`, the small deploy-phase
`accountsDeployMinTxs` conditional to `setup.ts`, and the explicit
opt-in to every §1 test that calls `setup()` directly.
2. **`test(e2e): migrate tests that needed fixes into proposer
pipelining`** -- the §2 tests with their branch fixes plus the
infrastructure they depend on (sequencer.ts B5 fix, dummy_service.ts
loopback, sequencer-publisher.ts error logging, sequencer-client READMEs
rewrite, bootstrap.sh / test_simple.sh timeout bumps).

The global default flip and the migration of base-class-using tests are
intentionally deferred. They will land separately once each batch can be
verified independently.

---

## §1 -- Pipelining enabled and passing (no code changes)

Tests that pick up `enableProposerPipelining=true` from the explicit
opt-in and pass without any per-test fix. This is the majority of the
suite -- too many to enumerate. Examples include the unmodified
`e2e_authwit`, `e2e_nft`, `e2e_amm`, `e2e_partial_notes`,
`e2e_token_contract/*` (non-overflow), `e2e_offchain_*`,
`e2e_orderbook`, `e2e_event_*`, `e2e_keys`, `e2e_avm_simulator` (after
the suite-level timeout bump only), `e2e_pending_note_hashes_contract`,
etc. None of these required test-level pipelining adaptations.

Pre-existing `it.skip`s in this bucket are unrelated to pipelining (they
predate the branch) and were not touched:
- `e2e_token_contract/{transfer,transfer_in_private,transfer_in_public}`
"transfer into account to overflow"
- `e2e_blacklist_token_contract/{transfer_private,transfer_public}`
"transfer into account to overflow"
- `e2e_synching` "replay history and then do a fresh sync" / "a wild
prune appears"
- `e2e_p2p/reex` "validators re-execute transactions before attesting"

## §2 -- Pipelining enabled and needed fixes

Tests that needed test- or fixture-level changes to pass under
pipelining. All currently passing under PR #23150.

**Fixture-level (`src/fixtures/fixtures.ts` + `src/fixtures/setup.ts`)**
- New `PIPELINING_SETUP_OPTS` preset exporting `inboxLag=2`,
`minTxsPerBlock=0`, `aztecSlotDuration=12s`, `ethereumSlotDuration=4s`,
`walletMinFeePadding=PIPELINED_FEE_PADDING` (30x), and
`enableProposerPipelining=true`.
- `setup.ts` gains a small conditional so the deploy-phase
`minTxsPerBlock` override uses `0` instead of `1` under pipelining
(otherwise the chain stalls on alternating slots).

**Cheat-codes (`src/testing/cheat_codes.ts`)** -- already on
`merge-train/spartan` via cherry-pick of #23213.

**P2P (`src/services/dummy_service.ts`)**
- `notifyOwnCheckpointProposal` now invokes the all-nodes callback
synchronously, mirroring libp2p loopback. Without this the in-process
e2e sequencer never sees its own proposal and the pipelined parent
verification blocks indefinitely.

**Sequencer-client**
- `sequencer.ts::tryVoteWhenEscapeHatchOpen` -- §6 B5 fix: takes
`targetSlot`, signs the voter for `targetSlot`, and delays submission
via `sendRequestsAt(getTimestampForSlot(targetSlot))` when pipelining is
enabled. Mirrors the existing `tryVoteWhenSyncFails` and
`CheckpointProposalJob.execute` patterns. Plus a refactor of
`canProposeAt` simulation overrides via `SimulationOverridesBuilder`.
- `sequencer-publisher.ts` -- error log on publisher exhaustion now
includes the underlying viem error and tried-addresses context.
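
The escape-hatch vote flow above follows a "sign for the target slot, hold submission until its timestamp" shape. A hedged sketch, with `getTimestampForSlot` and `sendRequestsAt` named after the PR text but with assumed signatures:

```typescript
// Sketch of the §6 B5 fix, not the real sequencer.ts code: the vote is keyed
// to targetSlot and, under pipelining, submission is delayed to that slot's
// timestamp instead of being sent immediately.
async function tryVoteWhenEscapeHatchOpen(
  targetSlot: bigint,
  deps: {
    signVoteForSlot: (slot: bigint) => Promise<string>;
    getTimestampForSlot: (slot: bigint) => bigint;
    sendRequestsAt: (timestamp: bigint, payload: string) => Promise<void>;
    pipeliningEnabled: boolean;
  },
): Promise<void> {
  const vote = await deps.signVoteForSlot(targetSlot);
  if (deps.pipeliningEnabled) {
    // Built ahead of time during the previous slot; hold submission back.
    await deps.sendRequestsAt(deps.getTimestampForSlot(targetSlot), vote);
  } else {
    await deps.sendRequestsAt(0n, vote); // immediate submission
  }
}
```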

**Per-suite test fixes**
- `e2e_lending_contract` -- predictable-time stub, longer hook windows.
- `e2e_fees/private_payments` "pays fees for tx that dont run public app
logic".
- `e2e_blacklist_token_contract/{burn, minting, shielding,
transfer_private, transfer_public, unshielding}` -- 6/7 suites
re-enabled (`access_control` still skipped, see §5).
- `e2e_contract_updates` -- all 4 tests re-enabled (covered by §1 opt-in
in this PR).
- `e2e_expiration_timestamp` "invalidates" tests -- L1-only
`eth.warp(target, { resetBlockInterval: true })`, no publisher cascade.
- `e2e_ordering` -- switched from "latest block" to receipt-block reads;
helper renamed to `expectLogsFromBlockToBe(logMessages, fromBlock)`.
- `e2e_fees/failures` -- snapshot `provenCheckpointBefore/After`, use
`waitForProven` with extended timeout, account for newly-proven
checkpoint deltas in reward math, read committed fee headers via
`getCommittedProverFee` / `getCommittedBurn`.
- `e2e_fees/gas_estimation` -- pad `maxFeesPerGas` via
`getPaddedMaxFeesPerGas(aztecNode)` in `beforeEach` to absorb fee-asset
price evolution between snapshot and submission. 3/3 passing.
- `e2e_crowdfunding_and_claim` "cannot donate after a deadline" --
L1-only `cheatCodes.eth.warp(deadline+1, { resetBlockInterval: true })`.
- `e2e_deploy_contract/contract_class_registration` private-ctor
variants -- thread `receipt.blockNumber` through `deployFn`, read logs
from that specific block instead of "latest". 21/21 passing.
- `e2e_state_vars` DelayedPublicMutable -- root cause was slot-duration
mismatch (`delay(4)` assumed `aztecSlotDuration=72s` from
`DefaultL1ContractsConfig`; fixture forces `12s` under pipelining).
Replaced `delay(4)` with a loop that pumps no-op txs until `timestamp >=
timestamp_of_change`, and asserted exact equality against
`tx.data.constants.anchorBlockHeader.globalVariables.timestamp +
newDelay - 1n`. Tight `toEqual`, no widened bound.
- `e2e_pending_note_hashes_contract` -- squash helpers use the latest
*non-empty* block.
- `e2e_expiration_timestamp` -- include-by computation bumped by 2x
`aztecSlotDuration`.
- `e2e_p2p/*` and `e2e_epochs/*` -- explicit `enableProposerPipelining:
true` + `inboxLag: 2` on every test that builds its own config (so
behavior is intentional rather than implicit).
- `e2e_block_building` "processes txs until hitting timetable" --
replaced legacy `canStartNextBlock` mock + single-deadline timetable
with the pipelined sub-slot budget (`blockDurationMs=2000`,
`enforceTimeTable=true`, `fakeProcessingDelayPerTxMs=500`). 10
simultaneous txs must span at least 2 distinct blocks; would fail if the
proposer reverted to single-block-per-slot or stopped enforcing sub-slot
deadlines.
- `e2e_block_building` "assembles a block with multiple txs" (x2) --
pre-publish the contract class once and pass `skipClassPublication:
true` on each per-tx deploy so the deploys don't all share the same
`ContractClassRegistry.publish` nullifier and get RBF-rejected against
each other. Also reset `blockDurationMs` in `afterEach` so the
multi-block-per-slot state from the previous test doesn't leak.
- `e2e_block_building` "publishes two empty blocks" --
`buildCheckpointIfEmpty: true` so the proposer doesn't skip empty
sub-slots; retry budget bumped from 10s -> 60s because empty checkpoints
land every `aztecSlotDuration` (12s) rather than every legacy block.
- `e2e_epochs/epochs_mbps.parallel` "builds multiple blocks per slot
with L2 to L1 messages" -- pipelined timing loses one sub-slot to
attestation propagation; expectation dropped from
`EXPECTED_BLOCKS_PER_CHECKPOINT=3` to `>= 2`, mirroring the sibling MBPS
tests.
- `e2e_l1_with_wall_time` -- test was explicitly passing
`ethereumSlotDuration` from env (=12s), defeating the fixture's
pipelining override (=4s). With `aztec=eth=12s`, pipelined timing can't
fit propose+attest+publish in one Aztec slot. Removed the explicit
`ethereumSlotDuration`; also wrapped `teardown` in `afterEach` so setup
failures surface their real error.
- `e2e_p2p/add_rollup` re-enabled (entire describe; 1 test, passes in
~9:14 locally). AttestationTimeoutError still fires in some slots, but
the bundled-multicall governance-signal preCheck is independent of the
propose preCheck -- signals accumulate and reach quorum even when
checkpoint proposes fail to attest.
- `e2e_pruned_blocks` "can discover and use notes created in both pruned
and available blocks" -- restored the explicit `markAsProven` call (as
it had pre-#21156) + a 2-block buffer for Anvil's `finalized = latest -
2` heuristic; test re-enabled and passes.
- `e2e_sequencer/escape_hatch_vote_only` re-enabled. Source fix at
`sequencer.ts::tryVoteWhenEscapeHatchOpen` (see §B5 in PR #23150).
Test-side: attach event listeners *after* the warp, explicitly drain
trailing in-flight votes before counting.
- `e2e_sequencer/gov_proposal.parallel` re-enabled (both tests). Two
pipelining-aware adjustments: warp offset bumped to
`nextRoundBeginsAtTimestamp - AZTEC_SLOT_DURATION -
ETHEREUM_SLOT_DURATION`, and per-tx wait timeouts tuned for two slots of
catch-up (proposer + L1 mine).
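
The `e2e_state_vars` fix above replaces slot-count-based `delay(4)` with a timestamp-driven pump. A minimal sketch, with `getTimestamp`/`sendNoopTx` as stand-ins for the real test helpers:

```typescript
// Sketch only: pump no-op txs until the chain timestamp reaches the scheduled
// change, so the test is independent of aztecSlotDuration (72s vs 12s).
async function pumpUntilTimestamp(
  timestampOfChange: bigint,
  getTimestamp: () => Promise<bigint>,
  sendNoopTx: () => Promise<void>,
  maxIterations = 100,
): Promise<bigint> {
  for (let i = 0; i < maxIterations; i++) {
    const ts = await getTimestamp();
    if (ts >= timestampOfChange) return ts; // change is now in effect
    await sendNoopTx(); // each no-op tx advances the chain by one block
  }
  throw new Error('timestamp_of_change never reached');
}
```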

**Bash-level timeout adjustments (`end-to-end/bootstrap.sh`)** --
pipelined sequential dependent txs run at ~2x legacy latency:
- simple e2e default: 10m -> 20m
- `e2e_block_building`: 25m
- `e2e_avm_simulator`: 30m
- compose/web3signer: 20m
- HA: 30m
- `scripts/test_simple.sh` Jest `--testTimeout` 5m -> 10m
- ~21 test files: per-file `const TIMEOUT` raised from 100/120/150/180s
-> 300s.

---

## Out of scope

- **Global default flip**: PR #23150 flipped
`enableProposerPipelining=true` everywhere. This PR keeps the default
`false` and migrates per-test. The global flip will land in a follow-up.
- **§3 opt-outs** (`e2e_l1_publisher` "with attestations" describe,
`epoch_cache.test.ts` non-pipelined branch coverage, demo
`docker-compose.yml`): no change required while the default is `false`.
- **§5 still-skipped tests**: the tests in §5 of PR #23150's
categorization (e.g. `e2e_blacklist_token_contract/access_control`,
`e2e_publisher_funding_multi`, `e2e_fees/fee_settings`, etc.) remain at
`merge-train/spartan` state.
- **Base-class fixtures** (`FeesTest`, `BlacklistTokenContractTest`,
`CrossChainMessagingTest`, `DeployTest`, `FullProverTest`,
`EpochesTest`, P2P fixtures): test files using these get their
branch-side changes preserved but are not wired to pipelining via the
base class -- those base classes are shared with tests not in this batch
and a blanket opt-in would over-migrate. Follow-up PRs will opt them in
selectively.

Reference: PR #23150 (`palla/kill-non-pipelined-flow`) for full context
on the categorization, source-level bugs surfaced (§6 B1-B6), and
per-suite investigation notes.
ludamad and others added 24 commits May 16, 2026 03:28
…23303)

## Motivation

Under proposer pipelining, a sequencer builds slot N's checkpoint header
(and bakes `manaMinFee` into `gasFees.feePerL2Gas`) during slot N-1. If
governance executes `setProvingCostPerMana` or `updateManaTarget`
between that build and the L1 submission, L1 recomputes `manaMinFee`
from the post-mutation `FeeStore.config` and the submitted header
reverts with `Rollup__InvalidManaMinFee`. The `e2e_fees/fee_settings`
suite used `bumpProvingCostPerMana` — exactly that governance path — as
its fee-spike mechanism, which made it hostile to pipelining and didn't
reflect any organic mainnet fee channel. The publisher's
bundle-simulator drop log also stopped decoding revert payloads in PR
#23165, leaving operators staring at raw `0x...` data.

## Approach

Drive the fee spike via an L1 base-fee bump (`setNextBlockBaseFeePerGas`
+ `updateL1GasFeeOracle`) — the dominant `feePerL2Gas` channel and a
closer analogue to organic mainnet behaviour. Enable pipelining for the
suite via the `FeesTest` constructor (`enableProposerPipelining`,
`inboxLag`, `manaTarget`, `walletMinFeePadding`, etc.). Add an explicit
recovery test that bumps governance and asserts the chain advances past
the invalidated checkpoint and that a fresh tx still mines. Restore
decoded revert names in `logDroppedInSim` by merging the relevant ABIs
and routing through a shared `tryDecodeRevertReason` helper.

## Changes

- **end-to-end (tests)**: Rewrite `fee_settings.test.ts` to run under
pipelining, replace the governance fee spike with an organic L1-base-fee
bump, and add a recovery test for the governance-mutation race.
- **ethereum**: Add `tryDecodeRevertReason(data, abi)` in `utils.ts` and
route `Multicall3` through it (deduplicating the existing in-place
decoder).
- **sequencer-client**: In `logDroppedInSim`, decode unknown revert
payloads against a merged `[RollupAbi, SlashingProposerAbi,
EmpireBaseAbi, ErrorsAbi]` ABI and emit both the readable form and the
raw payload.
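
The decode-then-log pattern can be sketched as below. This is illustrative only: the real `tryDecodeRevertReason` decodes against viem ABIs, whereas this standalone version uses a hand-rolled selector table, and the 4-byte selector shown is a placeholder, not a real keccak hash.

```typescript
// Illustrative selector table; '0xaabbccdd' is a placeholder, not the real
// selector for Rollup__InvalidManaMinFee.
const KNOWN_ERROR_SELECTORS = new Map<string, string>([
  ['0xaabbccdd', 'Rollup__InvalidManaMinFee'],
]);

function tryDecodeRevertReason(data: string): string | undefined {
  const selector = data.slice(0, 10); // '0x' + 4 selector bytes
  return KNOWN_ERROR_SELECTORS.get(selector);
}

// logDroppedInSim-style usage: emit the readable form when the selector is
// known, and always keep the raw payload for operators.
function describeRevert(data: string): string {
  const name = tryDecodeRevertReason(data);
  return name ? `${name} (raw: ${data})` : `unknown revert (raw: ${data})`;
}
```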

Fixes A-1057
…23333)

## Motivation

The merge-train/spartan train PR (#23253) was dequeued from the merge
queue this morning because grind run `x9` failed during `compile_all`:

```
==> Downloading sqlite3mc-2.2.4-sqlite-3.50.4-wasm.zip
curl: (6) Could not resolve host: release-assets.githubusercontent.com
```

CI logs:
- compile_all: http://ci.aztec-labs.com/dea5c9f3fde10614
- x9-full driver: http://ci.aztec-labs.com/1778928278029512
- merge-queue run:
https://github.com/AztecProtocol/aztec-packages/actions/runs/25959931160

The branch CI on the same commit (run 25958983932) passed — only one of
the 10 grind shards hit the DNS flake, but the merge-queue fail-fast
tore the whole run down. The other 9 grinds and the ARM run were still
pending when the queue dropped #23253.

## Approach

Add curl retry flags to `yarn-project/sqlite3mc-wasm/scripts/vendor.sh`
so a one-off `Could not resolve host` (or any other transient curl
failure) doesn't fail the build. `--retry 5 --retry-delay 2
--retry-all-errors --retry-connrefused` gives ~10s of total backoff,
which is plenty for a momentary DNS hiccup but bounded for genuine
outages.

This is the only curl in the yarn-project build path that hits GitHub
release assets, so this is a targeted fix rather than a sweep.
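
The actual fix just adds flags to the existing curl invocation in `vendor.sh`, roughly `curl --retry 5 --retry-delay 2 --retry-all-errors --retry-connrefused …`. The wrapper below reproduces the same retry semantics in plain shell for illustration; it is not the `vendor.sh` code.

```shell
# Sketch: retry any command up to $1 times with $2 seconds between attempts,
# mirroring curl's --retry/--retry-delay behaviour.
retry() {
  tries=$1; delay=$2; shift 2
  attempt=1
  until "$@"; do
    [ "$attempt" -ge "$tries" ] && return 1
    attempt=$((attempt + 1))
    sleep "$delay"
  done
}
```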

## Verification

`./bootstrap.sh ci` requires EC2 spawn and isn't runnable from inside
the container. Locally verified that `vendor.sh ensure` still downloads
and validates the pinned artifact correctly.

ClaudeBox log: https://claudebox.work/s/89dacb14037285cd?run=1
…er (#23334)

## Why

PR #23253 was dequeued from the merge queue when `merge-queue-heavy`'s
grind exercise hit a flake in `e2e_fees/fee_settings.test.ts`
(introduced by #23303, the head of `merge-train/spartan`). Failing
sub-test: `reproduces the stale fee snapshot race deterministically`. CI
log: http://ci.aztec-labs.com/cd390ea14cac1093

```
expect(received).toBeGreaterThan(expected)

Expected: > 1134386110000n
Received:   1067501300000n
  214 |       expect(bumpedMinFees.feePerL2Gas).toBeGreaterThan((lowerMinFees.feePerL2Gas * 11n) / 10n);
```

`bumpedMinFees` (`1067501300000`) was effectively the natural L2
baseline at that moment — no oracle rotation had occurred. The retry
inside `inflateL2FeesViaL1BaseFee` exited as soon as `after > before`
(with `before` captured at function entry), but the natural L2 fee
fluctuates between L1 blocks (EIP-1559 decay swings the L1 base-fee
sample), so a sub-percent upward drift satisfied the exit without the
oracle deadband (`LIFETIME - LAG = 3` L2 slots = 36 s) ever opening. The
test ran for only ~15 s before exiting, well short of the deadband.

The caller's `bumpedMinFees > lowerMinFees * 1.1` assertion then failed
because `lowerMinFees` was a separate snapshot taken earlier, and
natural drift between the two snapshots was below 10%.

There is also a latent upper-bound issue: even on a successful rotation
the original `3x` L1 base-fee bump drives the L2 fee to ~2.0–2.5x once
EIP-1559 decay on the rotation-tx's block is applied, which would have
also failed `higherMinFees > bumpedMinFees` (where `higherMinFees =
lowerMinFees * 2n`).

## What

Three changes in
`yarn-project/end-to-end/src/e2e_fees/fee_settings.test.ts`:

- `inflateL2FeesViaL1BaseFee` takes a `reference: GasFees` parameter and
only returns when `after.feePerL2Gas >= reference * 13/10`. This
distinguishes a real oracle rotation (≥1.5x rise) from ambient noise
(≤±10%) and forces the loop to wait through the 36 s deadband.
- Retry budget grows from 60 s to 90 s to comfortably cover the deadband
plus a slot or two of margin.
- Test #2's synthetic `higherMinFees` grows from `lowerMinFees.mul(2)`
to `lowerMinFees.mul(4)`, giving unambiguous headroom over the realized
bumped fee while staying under the 6x default-padding cap so
`txWithDefaultPadding` is still the comparison point.

Test #1's bounds and semantics are unchanged; only the call site is
updated to pass `stableMinFees` as the reference.
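
The reference-anchored exit condition can be sketched as a polling loop like the one below. Names and signatures here are assumptions, not the real `inflateL2FeesViaL1BaseFee`; the thresholds are the ones described above.

```typescript
// Sketch: only exit once the fee is >= 1.3x the caller-supplied reference,
// which distinguishes a real oracle rotation (>=1.5x rise) from ambient
// EIP-1559 drift (<=±10%) and forces the loop through the 36s deadband.
async function waitForOracleRotation(
  readFee: () => Promise<bigint>,
  reference: bigint,
  budgetMs = 90_000, // 90s budget: deadband plus a slot or two of margin
  pollMs = 1_000,
): Promise<bigint> {
  const deadline = Date.now() + budgetMs;
  while (Date.now() < deadline) {
    const after = await readFee();
    if (after >= (reference * 13n) / 10n) return after;
    await new Promise(r => setTimeout(r, pollMs));
  }
  throw new Error('oracle rotation not observed within budget');
}
```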

## Test plan

- CI `merge-queue-heavy` (10 parallel grind runs of
e2e_fees/fee_settings)
- The PR-branch `ci-full-no-test-cache` already passed at the head
commit; the flake only surfaces under grind

Analysis:
https://gist.github.com/AztecBot/97861b48883eec686f5978a43a2082bb


ClaudeBox log: https://claudebox.work/s/89d3754c8b2b7140?run=1
…23336)

## Why

PR #23253 was dequeued (4th attempt) when `merge-queue-heavy` caught an
`e2e_amm.test.ts` setup tx getting dropped by a pipelining-driven chain
prune. CI log: `baec5a7453c20089`.

The wait-for-parent gate in
`CheckpointProposalJob.waitForValidParentCheckpointOnL1`
(`sequencer-client/src/sequencer/checkpoint_proposal_job.ts:398`)
**should** have blocked the discard, but it didn't — because a
`TestDateProvider` time warp from
`AnvilTestWatcher.syncDateProviderToL1IfBehind` landed **between** the
two `epochCache` reads in `Sequencer.work` (`sequencer.ts:217-218`) and
broke the pipelining invariant.

| step | wall-clock | `nowSeconds` | result |
|---|---|---|---|
| 1st `getEpochAndSlotInNextL1Slot` (`slot`) | ≈14:34:32.385 (pre-warp) | `1778942079` | next L1 ts `1778942080` → **slot 18** |
| (warp at 14:34:32.390 sets offset 7611 → 7610) | | | |
| 2nd `getTargetEpochAndSlotInNextL1Slot` (`targetSlot`) | ≈14:34:32.395 (post-warp) | `1778942080` | next L1 ts `1778942084` → **slot 19** → `+offset=1` → **targetSlot 20** |

Logged confirmation (gap = 2 instead of 1):

```
14:34:32.612  Preparing checkpoint proposal 19 for target slot 20 during wall-clock slot 18
              {nowSeconds=1778942079, slot=18, targetSlot=20, …}
```

With `slotNow = 18`, the gate at `checkpoint_proposal_job.ts:402` waits
on `waitForSyncedL2SlotNumber(slotNow)`. The archiver had already synced
past slot 18 — the wait returns immediately, far too early to see parent
ckpt 18 (which lands four seconds later at 14:34:36). The gate then sees
`checkpointedNumber=17, parentCheckpointNumber=18`, declares the parent
absent, and discards. Slot 20 expires uncheckpointed, archiver prunes
blocks 19/20, the inflight setup tx anchored to block 19 dies with
`Block header not found`.

Full timeline + log evidence:
https://gist.github.com/AztecBot/4863d10084dd20587bffcc43fd61dfee

## What

Scoped, test-only — per direction from Santiago. The previous "make
`checkpointed` the global PXE default" approach is reverted; only
`e2e_amm` is opted in:

```diff
-    } = await setup(4, { ...PIPELINING_SETUP_OPTS }));
+    } = await setup(4, { ...PIPELINING_SETUP_OPTS }, { syncChainTip: 'checkpointed' }));
```

The PXE option exists already (`yarn-project/pxe/src/config/index.ts`,
added in `75df5b5d44`). This is the same approach every other
pipelining-aware test uses (`e2e_p2p/*`, `e2e_epochs/*`,
`e2e_slashing/attested_invalid_proposal`). It anchors inflight txs to
the L1-confirmed tip so prunes on the proposed tip can't invalidate
them.

`PIPELINING_SETUP_OPTS` is left untouched — the pipelining migration of
`e2e_amm` in #23275 stays.

## Recommended follow-up (separate PR)

The real bug is the race in `Sequencer.work`. Worth fixing properly:

- **Snapshot the time once.** Add
`EpochCache.getCurrentAndTargetSlotInNextL1Slot()` that returns `{slot,
targetSlot, epoch, targetEpoch, ts, nowSeconds}` from a single
`dateProvider.nowInSeconds()` read; replace the two-call site in
`Sequencer.work`. Pipelining offset is a constant, so deriving
`targetSlot = slot + offset` from the same snapshot is trivial.
- **Defensive: wait on `targetSlot - 1`.**
`waitForValidParentCheckpointOnL1` should key off the parent's expected
build slot (`targetSlot - 1`) instead of `slotNow`, so the gate is
robust even if the invariant is broken upstream.
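
A sketch of the snapshot-once shape (not in this PR; all names and the slot arithmetic are hypothetical): both slots derive from a single `nowSeconds` read, so a concurrent time warp cannot split them.

```typescript
// Hypothetical sketch of EpochCache.getCurrentAndTargetSlotInNextL1Slot():
// one dateProvider read, both slots derived from it.
interface SlotSnapshot { nowSeconds: number; slot: number; targetSlot: number; }

function getCurrentAndTargetSlot(
  nowSeconds: number,       // single snapshot, taken once by the caller
  genesisTs: number,
  slotDurationSec: number,
  pipeliningOffset: number, // constant, so targetSlot = slot + offset is trivial
): SlotSnapshot {
  // Slot containing the next L1 timestamp (illustrative arithmetic).
  const slot = Math.floor((nowSeconds - genesisTs) / slotDurationSec) + 1;
  return { nowSeconds, slot, targetSlot: slot + pipeliningOffset };
}
```

With this shape, the gap `targetSlot - slot` is the offset by construction, so the gate in `waitForValidParentCheckpointOnL1` can no longer observe a gap of 2.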

These aren't in this PR because they touch sequencer production code and
want their own review; the test-side workaround unblocks the merge-train
without changing the global PXE default.

## Test plan

The failure requires `merge-queue-heavy`'s 10-grind L1 contention to
surface reliably (single dev box can't reproduce). Change is a
single-arg addition; TS-trivial.

Analysis:
https://gist.github.com/AztecBot/4863d10084dd20587bffcc43fd61dfee

ClaudeBox log: https://claudebox.work/s/166e664eab264b04?run=3
# Conflicts:
#	yarn-project/end-to-end/bootstrap.sh
Both fail repeatedly on merge-train attempts under proposer pipelining
despite fix attempts (#23303, #23334 for fee_settings; #23336 for
e2e_amm). Skipping in .test_patterns.yml to land the train; to be
triaged and re-enabled (tracking issue assigned to spalladino).
These four barretenberg C++ breaks arrived via next (git log
origin/next..HEAD shows 0 train commits touching them) and abort the
full CI build before the e2e suite runs, blocking merge-train #23253:

- common/fuzzer.hpp: add <cstring> for std::memcpy (bb-cpp-fuzzing)
- commitment_schemes_recursion/shplemini.test.cpp: include
  flavor/ultra_flavor.hpp for complete bb::UltraFlavor (bb-cpp-asan)
- smt_verification/util/smt_util.cpp: include
  stdlib_circuit_builders/ultra_circuit_builder.hpp for the full
  UltraCircuitBuilder_ template (bb-cpp-smt)
- api/api_chonk.cpp: clang-format-20 (bb-cpp-format-check)

Folded into the spartan train to unblock it per operator direction.
@AztecBot AztecBot added ci-draft Run CI on draft PRs. claudebox Owned by claudebox. it can push to this PR. labels May 16, 2026
@ludamad ludamad force-pushed the merge-train/spartan branch from 4af2626 to db4ec58 Compare May 16, 2026 19:07
@AztecBot (Collaborator, Author)

Flakey Tests

🤖 says: This CI run detected 1 test that failed but was tolerated due to a .test_patterns.yml entry.

FLAKED (http://ci.aztec-labs.com/eb64306d099d4b0f): yarn-project/end-to-end/scripts/run_test.sh simple src/e2e_p2p/duplicate_proposal_slash.test.ts (132s) (code: 0) group:e2e-p2p-epoch-flakes
