
[DNM] feat(da): support fiber (not via c-node) #3244

Draft
julienrbrt wants to merge 75 commits into main from julien/fiber

Conversation

@julienrbrt (Member) commented Apr 13, 2026

Overview

Support a Fiber client (based on https://github.com/celestiaorg/celestia-app/blob/63fbf31cca216fc4e067a9e1b3a3431115c7009b/fibre), but not via celestia-node or apex for this PoC.
celestiaorg/celestia-node#4892

coderabbitai bot (Contributor) commented Apr 13, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 802ace03-f982-4c73-bb6b-cb92876703ae

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


github-actions bot (Contributor) commented Apr 13, 2026

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

| Build | Format | Lint | Breaking | Updated (UTC) |
| --- | --- | --- | --- | --- |
| ✅ passed | ⏩ skipped | ✅ passed | ✅ passed | May 6, 2026, 5:00 PM |

claude bot (Contributor) commented Apr 13, 2026

Claude finished @julienrbrt's task in 6m 35s


Code Review: [DNM] feat(da): support fiber (not via c-node)

Note: This is a PoC/DNM branch. Review focuses on correctness issues and architectural risks before productionization.

  • Read AGENTS.md and project guidelines
  • Analyze PR structure and key changed files
  • Review fiber DA client implementation
  • Review DA submitter changes
  • Review configuration changes
  • Review cache and block package changes
  • Review tools (celestia-node-fiber, talis)
  • Compile and post final review

Overview

This PR adds a Fiber/Fibre DA client as an alternative to the default JSON-RPC blob path, introducing a full upload/download/subscribe cycle against Celestia's Fibre protocol. The implementation is well-structured with a clear interface (fiber.DA), a good mock (fibremock), and a real adapter in tools/celestia-node-fiber. The DA submitter refactor and batching strategy improvements are solid engineering. Below are issues, ordered by severity.


Critical Issues

1. context.Background() ignores caller's cancellation in Submit

block/internal/da/fiber_client.go:102

result, err := c.fiber.Upload(context.Background(), nsID, flat)

The passed ctx is ignored here — the upload will run until completion (or internal error) regardless of whether the caller cancels or times out. For a 128 MiB Fibre upload this could hang for tens of seconds after the node's submission loop is cancelled. Should be ctx.



2. DefaultMaxBlobSize global is now Fibre's 128 MiB cap — wrong for all non-Fibre nodes

block/internal/common/consts.go

The comment itself calls this a HACK:

"this default is correct for fiber-enabled deployments but WRONG for the legacy JSON-RPC blob client path — the bridge/chain rejects blobs above its own (much smaller) cap"

Any non-Fibre deployment that upgrades without an ldflag override or code change will silently attempt to submit 128 MiB blobs and get rejected by the DA bridge. The SetMaxBlobSize global mutator (exported, unsynchronized) in block/public.go:26 makes this worse since it's a package-level side-effect with no guard.

Recommendation: Don't change the default; make this per-DA-backend. If the refactor is deferred (TODO(throughput-cleanup)), at minimum ensure the non-Fiber default is preserved (e.g., keep 5 MiB default, let evnode-fibre tool set it as it already does via block.SetMaxBlobSize(100 << 20)).
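
A rough sketch of the per-DA-backend direction (illustrative names only, not the existing API; the real refactor is deferred behind TODO(throughput-cleanup)):

// Carry the cap on the DA client instead of mutating a package-level global.
type daLimits struct {
    maxBlobSize uint64
}

func jsonRPCLimits() daLimits { return daLimits{maxBlobSize: 5 << 20} }    // conservative legacy blob-path default
func fibreLimits() daLimits   { return daLimits{maxBlobSize: 100 << 20} } // headroom under Fibre's ~128 MiB cap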


Significant Issues

3. Submit always returns Height: 0 — breaks DA inclusion tracking

block/internal/da/fiber_client.go:142,149

Height: 0, /* TODO */

The submitter uses res.Height to record DA inclusion in the cache (cache.SetHeaderDAIncluded, cache.SetDataDAIncluded). With Fiber, all headers and data are recorded as included at DA height 0. This will break processDAInclusionLoop which reads these heights to advance the rollup state. The retrieve tests confirm this is known but deferred:

t.Skip("pending Height tracking from fiber DA")

This means the current Fibre integration cannot correctly track rollup liveness on the read path. The TODO must be resolved before this can be merged for any production use.


4. panic(err) in NewFiberDAClient is a public API anti-pattern

block/public.go:103

base, err := da.NewFiberClient(...)
if err != nil {
    panic(err)
}

NewFiberClient only errors on nil client, and StartNode checks fiberClient == nil before calling this. But panicking in an exported function violates the principle that library code should not crash calling programs. The function signature should return (FullDAClient, error).
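
A minimal sketch of the suggested signature, assuming the current constructor arguments stay as they are (wrapFullDAClient below is a hypothetical stand-in for the existing wrapping code):

func NewFiberDAClient(/* existing args */) (FullDAClient, error) {
    base, err := da.NewFiberClient(/* existing args */)
    if err != nil {
        return nil, fmt.Errorf("create fiber client: %w", err)
    }
    return wrapFullDAClient(base), nil // let callers decide how to fail instead of panicking
}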



5. EVM app unconditionally creates a WebSocket DA client even when Fiber is enabled

apps/evm/cmd/run.go:62-68

blobClient, err := blobrpc.NewWSClient(cmd.Context(), logger, nodeConfig.DA.Address, nodeConfig.DA.AuthToken, "")
if err != nil {
    return fmt.Errorf("failed to create blob client: %w", err)
}
defer blobClient.Close()
daClient := block.NewDAClient(blobClient, nodeConfig, logger)

This code runs before StartNode regardless of whether Fiber is configured. If DA.Fiber.Enabled=true and no legacy WebSocket DA server is running, the app crashes here instead of proceeding to the Fiber path. The daClient created here is also used for the force-inclusion server and sequencer, meaning the EVM app has two independent DA clients (one created here, one inside StartNode).


6. FiberDAConfig.Validate() doesn't validate KeyringPath or KeyName

pkg/config/config.go:311-339

KeyringPath and KeyName are required for Fiber payment promise signing but are not validated at startup. An operator who omits them will get a runtime error deep in the Fibre client, not a clear startup failure.

Also, FlagDAFiberKeyringPath flag constant is missing — KeyringPath can only be set via config file, not via CLI --evnode.da.fiber.keyring_path.
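
A hedged sketch of the missing checks (field names taken from this review; error wording illustrative):

func (c FiberDAConfig) Validate() error {
    if !c.Enabled {
        return nil
    }
    if c.KeyringPath == "" {
        return fmt.Errorf("da.fiber.keyring_path is required when fiber is enabled")
    }
    if c.KeyName == "" {
        return fmt.Errorf("da.fiber.key_name is required when fiber is enabled")
    }
    // ... existing validation ...
    return nil
}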



7. ApplyFiberDefaults() must be called manually — not invoked in the common startup path

Config.ApplyFiberDefaults() sets critical tuning parameters (adaptive batching, 1s block time, 10 pending limit) that are required to prevent OOM under Fibre load. It is only called in tools/celestia-node-fiber/cmd/evnode-fibre/main.go:304. Any application that uses pkg/cmd.StartNode with Fiber enabled (future integrators) will not get these defaults automatically and may OOM.

Consider calling ApplyFiberDefaults() inside StartNode when IsFiberEnabled() is true, or at minimum document this requirement prominently.
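
A sketch of that wiring inside StartNode (field path taken from issue 5 above; exact placement and helper names are assumptions):

// Apply Fibre-friendly tuning automatically so integrators don't have to remember it.
if nodeConfig.DA.Fiber.Enabled {
    nodeConfig.ApplyFiberDefaults() // adaptive batching, 1s DA block time, bounded pending window
}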


Minor Issues

8. Typo: heigth in GetLatestDAHeight

block/internal/da/fiber_client.go:370

heigth, err := c.fiber.Head(headCtx)
// ...
return heigth, nil



9. Silent error swallowing in forwardFibreBlobs

tools/celestia-node-fiber/listen.go:73

event, err := a.fibreBlobToEvent(ctx, b.Blob, height)
if err != nil {
    continue  // silently dropped
}

If a blob is garbage-collected from FSPs between the subscription event and the download, the error is silently dropped. This creates an invisible data gap in the sync stream. A warning log with blob ID and height would make this diagnosable.
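
A sketch of the suggested logging (logger name and available blob-identifier fields are assumptions):

event, err := a.fibreBlobToEvent(ctx, b.Blob, height)
if err != nil {
    a.logger.Warn("dropping fibre blob: could not convert to event",
        "height", height, "err", err) // include a blob ID/commitment if the blob type exposes one
    continue
}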


10. retrieve silently ignores the includeHeader discriminator

block/internal/da/fiber_client.go:198

func (c *fiberDAClient) retrieve(ctx context.Context, height uint64, namespace []byte, _ bool) datypes.ResultRetrieve {

Both Retrieve and RetrieveBlobs call the same retrieve function with the bool discarded. The distinction between these two methods is meaningful to callers. This should either be implemented or clearly noted as a PoC limitation.


11. GetProofs returns an opaque error string instead of a typed sentinel

block/internal/da/fiber_client.go:379

return []datypes.Proof{}, fmt.Errorf("not implemented")

Use errors.ErrUnsupported (Go 1.21+) so callers can use errors.Is to detect unimplemented operations.
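
For example, keeping the signature and only changing the returned error:

return []datypes.Proof{}, fmt.Errorf("GetProofs: %w", errors.ErrUnsupported) // detectable via errors.Is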


12. MockDA subscriber removal can remove the wrong subscriber

block/internal/da/fibremock/mock.go:226-235

The swap-with-last cleanup uses the index captured at subscribe time. With ≥3 concurrent subscribers being cancelled, the captured idx can end up pointing to a different subscriber after earlier cleanups rearrange the slice. In pathological cases a zombie subscriber remains in the slice after its context is done, and a subsequent Upload may attempt to send on a closed channel (panic). This is a test-only mock but correctness is still expected.
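
One possible shape of the fix, sketched with illustrative types rather than the mock's real ones: key subscribers by a stable ID instead of a slice index captured at subscribe time.

type subscriber struct {
    id uint64
    ch chan BlobEvent
}

func (m *MockDA) removeSubscriber(id uint64) {
    m.mu.Lock()
    defer m.mu.Unlock()
    for i, s := range m.subscribers {
        if s.id == id {
            m.subscribers = append(m.subscribers[:i], m.subscribers[i+1:]...)
            close(s.ch)
            return
        }
    }
}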


Architecture / Design Notes

Fiber height tracking is fundamentally unresolved. ResultSubmit.Height returns 0, which propagates to cache.SetHeaderDAIncluded(hash, 0, height). The DA submitter then tells the cache every header was included at DA height 0. The processDAInclusionLoop will stall or behave incorrectly. This is the most important design question to answer before the PoC can progress: does Fibre provide per-upload height, or does the listener's BlobEvent.Height need to feed back into the submitter? The current architecture separates submit and subscribe paths in a way that makes height correlation non-trivial.

DefaultMaxBlobSize as a global is a production footgun. The throughput-cleanup TODO is acknowledged in comments. Consider prioritizing this since it's a latent breakage for all non-Fiber users of the package.

tools/talis (large addition) — not reviewed in detail as it appears to be operational infrastructure, but it imports real cloud provider SDKs (AWS, GCP, DigitalOcean) and contains shell scripts. Ensure secrets (cloud credentials, SSH keys) are never checked in and the tool is not wired into CI without review.


codecov bot commented Apr 13, 2026

Codecov Report

❌ Patch coverage is 90.65657% with 37 lines in your changes missing coverage. Please review.
✅ Project coverage is 63.16%. Comparing base (2865d6d) to head (4485d91).
⚠️ Report is 3 commits behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| block/public.go | 0.00% | 12 Missing ⚠️ |
| block/internal/da/fibremock/mock.go | 90.90% | 5 Missing and 5 partials ⚠️ |
| block/internal/da/fiber_client.go | 96.74% | 5 Missing and 3 partials ⚠️ |
| pkg/sequencers/solo/sequencer.go | 61.53% | 5 Missing ⚠️ |
| pkg/config/config.go | 75.00% | 2 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3244      +/-   ##
==========================================
+ Coverage   62.33%   63.16%   +0.82%     
==========================================
  Files         122      124       +2     
  Lines       12873    13258     +385     
==========================================
+ Hits         8024     8374     +350     
- Misses       3968     3995      +27     
- Partials      881      889       +8     
| Flag | Coverage | Δ |
| --- | --- | --- |
| combined | 63.16% <90.65%> | +0.82% ⬆️ |

Flags with carried forward coverage won't be shown.


julienrbrt and others added 7 commits April 14, 2026 15:12
Adds a fibremock package with:
- DA interface (Upload/Download/Listen) matching the fibre gRPC service
- In-memory MockDA implementation with LRU eviction and configurable retention
- Tests covering all paths

Migrated from celestiaorg/x402-risotto#16 as-is for integration.
julienrbrt changed the title from "feat(da): support fiber (not via c-node)" to "[DNM] feat(da): support fiber (not via c-node)" on Apr 20, 2026
julienrbrt and others added 15 commits April 20, 2026 14:46
Adds tools/celestia-node-fiber, a new Go sub-module that implements the
ev-node fiber.DA interface by delegating Upload, Download and Listen to a
celestia-node api/client.Client.

Upload and Download run locally against a Celestia consensus node (gRPC)
and Fibre Storage Providers (Fibre gRPC) — no bridge-node hop — using
celestia-node's self-sufficient client (celestiaorg/celestia-node#4961).
Listen subscribes to blob.Subscribe on a bridge node and forwards only
share-version-2 blobs, which is how Fibre blobs settle on-chain via
MsgPayForFibre.

The package lives in its own go.mod, parallel to tools/local-fiber, so
ev-node core does not inherit celestia-app / cosmos-sdk replace-directive
soup. A FromModules constructor accepts the Fibre and Blob Module
interfaces directly so callers can inject mocks or share an existing
*api/client.Client.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…#3280)

* test(celestia-node-fiber): showcase end-to-end Upload/Listen/Download

Adds tools/celestia-node-fiber/testing/, a single-validator in-process
showcase that boots a fibre-tagged Celestia chain + in-process Fibre
server + celestia-node bridge, registers the validator's FSP via
valaddr (with the dns:/// URI scheme the client's gRPC resolver
expects), funds an escrow account, and drives the full adapter
surface.

TestShowcase proves the round-trip: subscribe via Listen, Upload a
blob, wait for the share-version-2 BlobEvent that lands after the
async MsgPayForFibre commits, assert the BlobID from Listen matches
Upload's return, Download and diff the payload bytes.

The harness is intentionally single-validator — a 2-validator
Docker Compose showcase is planned as a follow-up for exercising real
quorum collection.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(celestia-node-fiber): scale showcase to 10 blobs, document DataSize gap

Upload 10 distinct-payload blobs through adapter.Upload, collect
BlobEvents via adapter.Listen until every BlobID is accounted for
(order-insensitive, rejects duplicates), then round-trip each blob
through adapter.Download to diff bytes. Catches routing bugs (wrong
blob returned for a BlobID) and duplicate-event bugs that a
single-blob test can't see.

Scaling the test also exposed a semantic issue: the v2 share carries
only (fibre_blob_version + commitment), so b.DataLen() — what
listen.go's fibreBlobToEvent reports today — is always 36, not the
original payload length ev-node's fibermock conveys. The adapter
can't derive the payload size from the subscription stream alone;
surfacing it correctly needs an x/fibre PaymentPromise lookup
(tracked as a TODO on fibreBlobToEvent). The test therefore asserts
DataSize is non-zero rather than matching len(payload).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…3281)

listen.go previously set BlobEvent.DataSize to b.DataLen(), which for
a share-version-2 Fibre blob is always the fixed share-data layout
(fibre_blob_version + commitment = 36 bytes) — not the original
payload length. That diverges from ev-node's fibermock contract and
misleads any consumer that uses DataSize to allocate buffers or
report progress.

The v2 share genuinely doesn't carry the original size, and x/fibre
v8 has no chain query to derive it from the commitment. The only
accurate path is to Download the blob and measure. Listen now does
exactly that before forwarding each event. The cost is one FSP
round-trip per v2 blob; can be made opt-out later if it hurts
throughput-sensitive use cases.

Tests:
- Showcase restores the strict DataSize == len(payload) assertion
  across all 10 blobs.
- Unit test TestListen_FiltersFibreOnlyAndEmitsEvent now stubs
  fakeFibre.Download to return a deterministic payload and asserts
  DataSize matches its length.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…ight subscriptions (#3283)

feat(celestia-node-fiber): Listen takes fromHeight for resume subscriptions

Threads a fromHeight parameter through the Fibre DA Listen path so a
subscriber can rejoin the stream from a past block height without
missing blobs. Consumes the matching celestia-node API change landed
in celestiaorg/celestia-node#4962, which gave Blob.Subscribe a
fromHeight argument backed by a WaitForHeight loop.

Changes:

- block/internal/da/fiber/types.go: DA.Listen signature now takes
  fromHeight uint64. fromHeight == 0 preserves "follow from tip"
  semantics, >0 replays from that block forward.
- block/internal/da/fibremock/mock.go: replay matching blobs with
  height >= fromHeight before attaching the live subscriber.
- block/internal/da/fiber_client.go: outer fiberDAClient.Subscribe
  does not yet expose a starting height (datypes.DA doesn't plumb
  one), so pass 0 and defer resume-from-height wiring to a future
  datypes.DA change.
- tools/celestia-node-fiber/listen.go: propagate fromHeight to
  client.Blob.Subscribe on the celestia-node API.
- tools/celestia-node-fiber/go.mod: bump celestia-node to the merged
  pseudo-version (v0.0.0-20260423143400-194cc74ce99c) carrying #4962.
- tools/celestia-node-fiber/adapter_test.go: fakeBlob.subscribeFn
  gets the new fromHeight arg; TestListen_FiltersFibreOnlyAndEmitsEvent
  asserts that fromHeight=0 is forwarded.
- tools/celestia-node-fiber/testing/showcase_test.go: existing
  TestShowcase passes fromHeight=0. New TestShowcaseResume uploads 3
  blobs, discovers their settlement heights via a live Listen, then
  opens a fresh Listen with fromHeight at the first blob's height and
  verifies every historical blob is replayed with correct Height and
  DataSize.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
julienrbrt and others added 26 commits April 29, 2026 15:46
This reverts commit a1a0861.
…nner (#3301)

Brings the celestia-app talis multi-cloud deploy tool into ev-node,
plus a long-lived ev-node aggregator runner that wires the existing
celestia-node-fiber adapter behind ev-node's DA client interface.
Verified end-to-end on AWS — talis up → genesis → deploy →
setup-fibre → start-fibre → fibre-bootstrap-evnode reaches
24.57 MB/s @ 99.7 % ok on a 60 s sustained loadgen
(3 × c6in.4xlarge validators + c6in.2xlarge bridge +
c6in.8xlarge ev-node + c6in.2xlarge load-gen, us-east-1).

What this adds:

  • tools/talis/                — vendored from celestia-app's
    feat/fibre-payments. Provisions AWS / DO / GCP boxes for
    validators + bridge + ev-node + load-gen, deploys binaries +
    init scripts, drives the Fibre setup-fibre + start-fibre flow,
    and ships a fibre-bootstrap-evnode step that scp's the bridge
    JWT and Fibre payment keyring onto each ev-node before its
    init script starts the daemon.
  • tools/celestia-node-fiber/cmd/evnode-fibre/  — the long-lived
    aggregator runner. Wires block.NewFiberDAClient on top of the
    celestia-node-fiber adapter that julien/fiber already ships,
    plus the in-memory executor + HTTP /tx ingress used by
    evnode-txsim. Distinct from the existing fiber-bench cmd.
  • tools/talis/cmd/evnode-txsim/ — small Go load-gen that pumps
    the runner's HTTP /tx ingress for a fixed duration; deployed
    to load-gen boxes and prints a single TXSIM: line on completion.

Two small ev-node-side helpers the runner calls:

  • block/public.go: SetMaxBlobSize(n) — overrides the per-blob
    byte cap so the runner can lift Celestia's 5 MiB default to
    Fibre's 120 MiB headroom.
  • pkg/config/config.go: Config.ApplyFiberDefaults() — flips the
    DA config to Fibre-friendly settings (adaptive batching, 1 s
    DA.BlockTime, 50-deep pending-cache window) when the Fiber
    profile is enabled, so a runner can opt in with one call.

setup-fibre robustness fixes uncovered during the verified run:

  • bash script for set-host now retries until the validator's
    host appears in `query valaddr providers`. The previous one-
    shot call relied on `--yes` returning the txhash before block
    inclusion; if the chain wasn't ready, the tx silently bounced.
    The Fibre client cached the partial set on startup and uploads
    cascaded to "host not found" → "voting power: collected 0".
  • talis-CLI side polls `query valaddr providers` after the per-
    validator scripts finish and refuses to return until all
    validators are registered (5-minute deadline).

External dependency (documented in tools/talis/fibre.md):

  • Sibling clone of celestia-app on a branch with feat/fibre-payments
    + sysrex/fibre_url_fix cherry-picked. Without the URL-parse fix
    the Fibre client rejects every host:port registration.

Tested:
  - go build ./... — clean
  - go test ./block/internal/submitting ./pkg/config (the two
    pre-existing test failures on julien/fiber — TestAddFlags
    and TestFiberClient_Submit_BlobTooLarge — are not introduced
    by this PR and reproduce on raw julien/fiber)
  - End-to-end AWS deploy from this branch — 24.57 MB/s, 99.7 % ok
…log (#3307)

* feat(fibre): log per-Submit upload duration

The Fibre Submit path was opaque: failures showed up as
DeadlineExceeded with no signal of how long the upload
actually took, and successes only logged at debug level
inside the upstream library. During load-test debugging
this turned into a guessing game — was the cluster slow,
the deadline too tight, or something stuck mid-RPC?

Add a single info-level (warn-on-failure) log line in
fiberDAClient.Submit covering the Upload call: duration,
flat blob bytes, blob count. Cheap (one time.Since) and
gives the operator concrete numbers — e.g. "17 blobs / 115
MiB / 1.5 s" — to reason about whether RPCTimeout, pending
cap, or batch sizing is the right knob to turn next.
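
A sketch of that log line, with logger and variable names assumed:

start := time.Now()
result, err := c.fiber.Upload(ctx, nsID, flat)
elapsed := time.Since(start)
if err != nil {
    c.logger.Warn("fibre upload failed", "duration", elapsed, "bytes", len(flat), "blobs", blobCount)
} else {
    c.logger.Info("fibre upload done", "duration", elapsed, "bytes", len(flat), "blobs", blobCount)
}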

* fix(fibre): split DA Submit batches at Fibre's 128 MiB upload cap

Under sustained txsim load (~50 MiB/s) the DA submitter
batched 10 block_data items into one Upload(), producing a
flat payload of 144 MiB. Fibre's per-upload cap is hard at
~128 MiB ("blob size exceeds maximum allowed size: data
size 144366912 exceeds maximum 134217723") and rejected
every batched upload. With MaxPendingHeadersAndData=10
that took down 170 consecutive submissions before the
node halted itself with "Data exceeds DA blob size limit".

Wrap the Upload call in a chunker that groups input blobs
into ≤120 MiB chunks (8 MiB headroom under Fibre's cap for
the per-blob length-prefix overhead added by flattenBlobs)
and uploads each chunk separately. Aggregates submitted
counts and BlobIDs across chunks; on first chunk failure,
returns the error with the partially-submitted count so
the submitter's retry/backoff logic sees a coherent state
instead of all-or-nothing.

Single oversized blobs (already validated against
DefaultMaxBlobSize earlier in Submit) still land alone and
fail server-side, but at least don't drag healthy peers
into the same rejected batch.
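
The chunking logic roughly amounts to the following sketch (names and the exact headroom constant follow the description above, not the real code); each chunk is then uploaded separately and counts/BlobIDs are aggregated as described:

const maxChunkBytes = 120 << 20 // 8 MiB headroom under Fibre's ~128 MiB per-upload cap

// chunkBlobs groups blobs, in order, into batches whose combined size stays under the cap.
func chunkBlobs(blobs [][]byte) [][][]byte {
    var chunks [][][]byte
    var cur [][]byte
    var curSize int
    for _, b := range blobs {
        if curSize+len(b) > maxChunkBytes && len(cur) > 0 {
            chunks = append(chunks, cur)
            cur, curSize = nil, 0
        }
        cur = append(cur, b)
        curSize += len(b)
    }
    if len(cur) > 0 {
        chunks = append(chunks, cur)
    }
    return chunks
}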

* fix(evnode-fibre): cap per-block data at 100 MiB to fit a Fibre upload

Companion to the submitter chunking fix. The submitter can
split a multi-blob batch into ≤120 MiB Fibre uploads, but
a *single* block_data item that exceeds 128 MiB still ends
up alone in its own chunk and fails server-side ("blob size
exceeds maximum allowed size"). Lower the per-block cap to
100 MiB so under high-throughput txsim a single block can't
grow past Fibre's hard limit, and update the comment to
explain the relationship between this cap and Fibre's
~128 MiB upload reject threshold.
* fix(tools/talis): wait-for-chain + atomic keyring + one-command driver

Three race conditions surfaced repeatedly on a fresh AWS bring-up of
the Fibre throughput experiment. Each one had the same shape: a
talis subcommand "succeeded" at the CLI level (or returned the txhash
with --yes) before the chain had actually applied the work, leaving
downstream steps to fail in confusing ways. This commit makes each
step verify *outcome*, not just *invocation*, so the experiment can
go from a fresh `talis up` to a running loadgen without manual
intervention.

  • setup-fibre script (fibre_setup.go) now:
    - polls `celestia-appd status` for `latest_block_height>0`
      before submitting any tx — fixes the silent-noop where
      set-host + 100× deposit-to-escrow all bounced with
      "celestia-app is not ready; please wait for first block";
    - retries `set-host` in a loop until the validator's host
      shows up in `query valaddr providers` — fixes the case
      where --yes returns the txhash before block inclusion and
      the tx silently lands in the mempool but never confirms;
    - verifies fibre-0's escrow account is funded on-chain before
      the tmux session exits — same silent-failure mode as
      set-host, but on the deposit side.
    The talis-CLI step also now cross-checks all validators are
    registered from a single vantage point before returning, so a
    concurrent set-host race surfaces as an error instead of a
    half-empty provider list start-fibre would cache forever.

  • fibre-bootstrap-evnode (fibre_bootstrap_evnode.go) now stages
    the keyring scp into a tmp directory and `mv`s it atomically
    into place. The previous direct `scp -r` to
    /root/keyring-fibre/keyring-test created the directory before
    transferring its contents — the evnode init script's
    `[ -d keyring-test ]` poll passed mid-transfer, the daemon
    launched with no fibre-0.info, and crashed with `keyring entry
    "fibre-0" not found`.

  • evnode_init.sh (genesis.go) now waits for the specific
    keyring-test/fibre-0.info file rather than just the
    keyring-test directory. Belt-and-braces: the bootstrap mv is
    already atomic on the same filesystem, but the file-level
    guard means a hand-pushed keyring (not via talis) can't trip
    the same race.

  • New `talis fibre-experiment` umbrella command runs
    up → genesis → deploy → setup-fibre → start-fibre →
    fibre-bootstrap-evnode in order. Each step uses the same
    binary as a subprocess; failures in any step abort the chain.
    Operator goes from a prepared root dir to a running loadgen
    with one command, instead of remembering the sequence.

Verified by 5-min sustained loadgen against julien/fiber HEAD with
PR #3287 (concurrent submitter) merged: 47.65 MB/s @ 99.999 % ok,
up from the prior 24.57 MB/s baseline (the gap is PR #3287's
overlapping uploads — these talis fixes just stop the deploy from
silently breaking before throughput matters).

* fix(tools/talis): finalize fibre setup race fixes

Three follow-up bugs surfaced from the PR #3303 follow-up
verification run on a 3-validator AWS Fibre cluster:

- aws.go: CreateAWSInstances exited 0 even when individual
  instance launches failed, so `talis up` lied about success
  and downstream steps proceeded against a partial cluster.
  Returns a joined error now so failure cascades stop early.

- download.go: sshExec used cmd.CombinedOutput, mixing SSH
  warnings (the "Warning: Permanently added '...'..." chatter
  on stderr) into bytes the caller hands to fmt.Sscanf("%d").
  The CLI-side providers cross-check parsed those warnings
  as 0 and looped until its 5-min deadline even though a
  direct SSH query showed all 3 providers registered. Switch
  to cmd.Output() (stdout only) and add `-q -o LogLevel=ERROR`
  to silence the chatter for any caller that does combine
  streams.

- fibre_setup.go: the per-validator escrow verification used
  `celestia-appd query fibre escrow` which doesn't exist —
  the actual subcommand is `escrow-account`. The query
  errored on every retry, the grep for "amount" never
  matched, and the script wedged on the 3-min deadline
  reporting `FATAL: fibre-0 escrow not present`. Switch to
  `escrow-account` and key on `"found":true` (the explicit
  existence flag in the response). Also wrap the fibre-0
  deposit-to-escrow itself in a retry loop matching set-host
  — the same `--yes`-returns-before-inclusion silent-failure
  mode that bit set-host. fibre-1..N stay best-effort.

* feat(evnode-txsim): keep-alive conn pool + pprof endpoint

Two diagnostic improvements for the load generator:

1. http.Transport.MaxIdleConnsPerHost defaults to 2 in stdlib.
   With --concurrency=8 (or higher), 6+ goroutines per cycle had
   to open fresh TCP+TLS sockets per request because the pool
   couldn't hold their idle conns between requests. Bump
   MaxIdleConns / MaxIdleConnsPerHost / MaxConnsPerHost to
   2*concurrency so every active sender has a reusable keep-alive
   socket, eliminating handshake churn from the hot path.

2. Always-on net/http/pprof on 127.0.0.1:6060. evnode-txsim is a
   load tester, not a production daemon, so cost of always serving
   profiling is acceptable; the payoff is being able to grab CPU
   profiles under live load without re-deploying the binary —
   `ssh -L 6060:127.0.0.1:6060 root@loadgen \
     go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30`.

A profile captured this way under c=8 traced the per-request hot
path: 25.5% in kernel write(2), 25% in net/http body marshaling.
That diagnostic surfaced that the c6in.2xlarge loadgen was the
binding constraint for the experiment at ~22 MB/s, not evnode or
DA — a finding we'd have spent another debug round chasing
without the in-process profiler.
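
A minimal sketch of both changes (concurrency value and port as described above; the real flag wiring is omitted):

package main

import (
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof handlers on the default mux
)

func newLoadgenClient(concurrency int) *http.Client {
    // Size the keep-alive pool to the sender count so the hot path reuses sockets
    // instead of opening a fresh TCP+TLS connection per request (stdlib default: 2 idle conns/host).
    return &http.Client{Transport: &http.Transport{
        MaxIdleConns:        2 * concurrency,
        MaxIdleConnsPerHost: 2 * concurrency,
        MaxConnsPerHost:     2 * concurrency,
    }}
}

func main() {
    go func() { _ = http.ListenAndServe("127.0.0.1:6060", nil) }() // always-on pprof, localhost only
    _ = newLoadgenClient(8)
}
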
* fix(solo,reaping): bound sequencer queue to prevent ingest-side OOM

Under sustained ingest above the block-production drain rate,
SoloSequencer.queue grew monotonically. A 32-vCPU loadgen pushing
>100 MB/s into a runner whose executor drains ~100 MB/s per block
filled the queue at ~150 MB/s of net-positive growth — heap
profiles showed 24 GB of retained io.ReadAll bytes in the queue
within ~30 s, then anon-rss:63GB OOM-kill at the box's 64 GiB
ceiling. Reproducible twice with identical signature.

Two changes, one feature:

- SoloSequencer.SetMaxQueueBytes(n) caps the queue's total
  retained tx bytes. SubmitBatchTxs uses all-or-nothing admission
  against the cap: if the incoming batch would push us over, the
  whole batch is rejected with ErrQueueFull and the queue keeps
  its current contents untouched. Partial admission would force
  the caller to track which prefix succeeded and only re-feed the
  suffix on retry; the reaper currently doesn't do that, so the
  whole-batch rule lets the reaper just retry the same batch
  later when the queue has drained. queueBytes is decremented
  on drain (queue := nil) and re-counted for postponed txs that
  the executor's FilterTxs returns to the queue. Zero cap = the
  legacy unbounded path, preserved for tests and small
  deployments.

- The reaper bridging executor mempool → sequencer matches
  ErrQueueFull via errors.Is and treats it as transient
  backpressure: marks the rejected hashes as "seen" so the
  next reaper tick doesn't re-hash + re-submit the same already-
  rejected txs forever, logs a warn line with the dropped count,
  and continues running. Without this match every queue-full
  event would tear the daemon down via the existing fatal-on-
  submit-error path.

Loadgen sees the backpressure indirectly: with the sequencer
queue full, the executor's txChan stops draining, /tx blocks on
its bounded channel send, and txsim observes 5xx / timeouts —
cleanly applied at the application layer instead of via the
kernel OOM-killer.
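
A sketch of the all-or-nothing admission check (types and field names illustrative, not the SoloSequencer's real ones):

var ErrQueueFull = errors.New("sequencer queue full")

type txQueue struct {
    mu       sync.Mutex
    txs      [][]byte
    bytes    uint64
    maxBytes uint64 // 0 = unbounded (legacy behavior)
}

func (q *txQueue) admit(batch [][]byte) error {
    q.mu.Lock()
    defer q.mu.Unlock()
    var batchBytes uint64
    for _, tx := range batch {
        batchBytes += uint64(len(tx))
    }
    if q.maxBytes > 0 && q.bytes+batchBytes > q.maxBytes {
        return ErrQueueFull // reject the whole batch; the reaper retries it after a drain
    }
    q.txs = append(q.txs, batch...)
    q.bytes += batchBytes
    return nil
}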

* fix(evnode-fibre): enforce maxBytes in inMemExecutor.FilterTxs

The stub executor used by the runner returned FilterOK for
every transaction unconditionally, ignoring the maxBytes
budget plumbed through SoloSequencer.GetNextBatch. Under
sustained txsim load (~50 MiB/s, 8 concurrent senders) the
mempool would accumulate ~50K txs while a 100 MiB upload
was in flight; on the next batch the sequencer drained
ALL of them into one block (~369 MiB raw), the submitter
saw a single item exceeding the per-blob cap, and halted
the node with `single item exceeds DA blob size limit`.

Walk the input txs in arrival order, accumulate sizes
against maxBytes, and return FilterPostpone past the
budget so the sequencer puts the overflow back on its
queue. Verified live: blocks now cap at ~10K txs / ~100
MiB and evnode sustains 58.77 MB/s DA upload throughput
through a 5-min txsim run with zero crashes (was 0 →
crash within 30 s before this fix).
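
The budget walk is roughly this (a sketch; the executor's real status enum and signature differ):

type filterStatus int

const (
    filterOK filterStatus = iota
    filterPostpone
)

// filterTxs accepts txs in arrival order until maxBytes is exhausted; the remainder is
// postponed so the sequencer returns the overflow to its queue instead of building an
// oversized block.
func filterTxs(txs [][]byte, maxBytes uint64) []filterStatus {
    out := make([]filterStatus, len(txs))
    var used uint64
    for i, tx := range txs {
        if used+uint64(len(tx)) > maxBytes {
            out[i] = filterPostpone
            continue
        }
        used += uint64(len(tx))
        out[i] = filterOK
    }
    return out
}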

* fix(evnode-fibre): wire sequencer queue cap + lift ingest queue caps

Two runner-side changes paired with the SoloSequencer bound:

- After constructing the SoloSequencer, call SetMaxQueueBytes
  with 10× the per-block tx budget (= 1 GiB at the current 100
  MiB MaxBlobSize). 10× is the sweet spot: large enough that a
  short burst above steady-state ingest doesn't trigger
  backpressure (we want to absorb that), small enough that the
  worst-case retained bytes fit comfortably under the box's
  RAM budget alongside the pending cache + DA in-flight buffers.

- Lift the inMemExecutor's hardcoded ingest caps. txChan and
  maxBlockTxs were sized at 500 (5 MB / 5K txs per reaper poll)
  back when those were the only memory bound on the runner. With
  the SetMaxQueueBytes cap and the FilterTxs-enforced per-block
  budget now actually doing the bounding, the ingest queue can
  hold a full 100 MiB block-worth of txs (10K slots at 10 KB)
  without burdening memory — and a single reaper poll can
  drain that whole batch in one GetTxs call instead of
  needing 20× cycles. This was the binding constraint at
  ~5,000 tx/s = 50 MB/s in earlier runs.
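
In runner terms, the wiring amounts to something like the following (variable names assumed):

const perBlockTxBudget = 100 << 20                // runner's current MaxBlobSize
sequencer.SetMaxQueueBytes(10 * perBlockTxBudget) // ~1 GiB retained-bytes cap on the ingest queue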

* fix(config): tighten Fiber pending cap to 10 to bound submitter memory

ApplyFiberDefaults set MaxPendingHeadersAndData=50, but each pending
data item under Fiber is up to MaxBlobSize (~100 MiB raw). With
3-FSP fan-out and per-attempt retry buffers in flight, 50 items × 3
× retries crossed 64 GiB on c6in.8xlarge under sustained txsim load
and the kernel OOM-killed evnode 30 s into the run.

10 keeps the in-flight footprint bounded while still letting healthy
uploads pipeline against the actual Fibre RPC latency. Verified by
heap profiling: pending data plateaus at ~10 × 100 MiB plus fan-out, keeping
RSS below ~10 GiB, and evnode runs indefinitely.