WIP: Submitter Assemble Batch #864
base: main
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings and use the usual review commands to resume or manage reviews.
📝 Walkthrough
Adds a new batch subsystem (batch cache, data, headers, blob handling, storage), a multi-client L2 caller, DB/metrics APIs, rollup init calls to initialize2/initialize3 using rollupDelayPeriod, and rewires the Rollup service to use the new batch package while removing the legacy BatchFetcher/types.BatchCache.
Sequence Diagram(s)
sequenceDiagram
participant RC as Rollup Service
participant BC as BatchCache
participant L2 as L2 Client(s)
participant L1 as L1 / Rollup Contract
participant BS as BatchStorage/DB
RC->>BC: Start background sync (Init / InitAndSyncFromDatabase)
activate BC
BC->>L1: Query batch status / last finalized index
BC->>L2: Fetch blocks & headers for range
L2-->>BC: Return headers & tx payloads
BC->>BC: CalculateCapWithProposalBlock / PackCurrentBlock
alt capacity or timeout reached
BC->>BC: SealBatch (compress, compute data hash, build header)
BC->>BS: StoreSealedBatch
BS-->>BC: Ack
BC->>RC: Expose batch for submission
end
BC-->>RC: Get / LatestBatchIndex used in submission flow
deactivate BC
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tx-submitter/services/rollup.go (1)
1210-1211: ⚠️ Potential issue | 🟠 Major
Potential precision loss with Int64() truncation. Same issue as in commit_test.go: converting *big.Int to int64 can truncate large gas fee values during network congestion.
🔧 Proposed fix

```diff
 return ethtypes.NewTx(&ethtypes.BlobTx{
 	ChainID:   uint256.MustFromBig(r.chainId),
 	Nonce:     nonce,
-	GasTipCap: uint256.MustFromBig(big.NewInt(tip.Int64())),
-	GasFeeCap: uint256.MustFromBig(big.NewInt(gasFeeCap.Int64())),
+	GasTipCap: uint256.MustFromBig(tip),
+	GasFeeCap: uint256.MustFromBig(gasFeeCap),
 	Gas:       gas,
```
🤖 Fix all issues with AI agents
In `@contracts/deploy/020-ContractInit.ts`:
- Around line 72-76: The initialize2 call is not awaited and its transaction
result isn't checked; make Rollup.initialize2(...) use await and capture/wait
for its receipt (e.g., store its result and call .wait()) before calling
Rollup.initialize3, and update the console.log for initialize3 to include the
actual parameter value passed (use the variable or literal passed to initialize3
in the log) so the output shows the parameter and success/failure status;
reference Rollup.initialize2, Rollup.initialize3, res and rec when making these
changes.
In `@tx-submitter/batch/batch_cache_test.go`:
- Around line 27-34: Tests currently ignore dial errors (using blank
identifier), which can mask connection failures and produce nil clients; update
the test setup and any helper in common_test.go that dials L1/L2 to capture the
returned error instead of `_`, assert it is nil and fail the test (use t.Fatalf
or require.NoError) so NewBatchCache and methods like InitFromRollupByRange
never receive nil clients; ensure the dial call sites that produce l1Client and
l2Client return both (client, err) and propagate/assert the err immediately.
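A minimal sketch of the error-checked setup described in the item above, assuming the morph-l2/go-ethereum fork's ethclient package and the l1ClientRpc/l2ClientRpc URL constants from common_test.go; the setupClients helper name is hypothetical.

```go
package batch

import (
	"testing"

	"github.com/morph-l2/go-ethereum/ethclient"
	"github.com/stretchr/testify/require"
)

// setupClients dials both endpoints and fails the test immediately on error,
// so constructors such as NewBatchCache never receive nil clients.
func setupClients(t *testing.T) (*ethclient.Client, *ethclient.Client) {
	t.Helper()
	l1Client, err := ethclient.Dial(l1ClientRpc) // capture the error instead of discarding it
	require.NoError(t, err, "dial L1 RPC")
	l2Client, err := ethclient.Dial(l2ClientRpc)
	require.NoError(t, err, "dial L2 RPC")
	return l1Client, l2Client
}
```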
In `@tx-submitter/batch/batch_cache.go`:
- Around line 350-427: The check in CalculateCapWithProposalBlock allows
re-processing block 0 even after progress, which can regress state; replace the
current special-case "blockNumber != 0" logic with a stricter allowance that
only permits blockNumber==0 when bc.lastPackedBlockHeight==0, and otherwise
treat any blockNumber <= bc.lastPackedBlockHeight as an error; update the
initial continuity guard (the if using bc.lastPackedBlockHeight and blockNumber)
to implement this rule so calls with blockNumber==0 after progress are rejected.
- Around line 999-1092: AssembleCurrentBatchHeader currently initializes
startBlockNum to lastPackedBlockHeight which makes the batch start one block too
early; change startBlockNum to lastPackedBlockHeight+1 and fetch startBlock
using that value (update the initial startBlock/get startBlockTime logic), keep
the existing post-seal update to startBlockNum = blockNum+1 but ensure you only
call BlockByNumber with the corrected startBlockNum when recomputing
startBlockTime for the next batch; references: AssembleCurrentBatchHeader,
startBlockNum, lastPackedBlockHeight, PackCurrentBlock, SealBatch.
- Around line 841-921: The loop incorrectly sets startBlockNum = blockNum+1
after SealBatchAndCheck, causing the current block to be skipped or
interval/timeout accounting to shift; instead set startBlockNum = blockNum so
the current block becomes the first block of the new batch and set
startBlockTime = nowBlockTime (use the already fetched nowBlock) rather than
fetching startBlock for blockNum+1; update
assembleUnFinalizeBatchHeaderFromL2Blocks to assign startBlockNum = blockNum and
startBlockTime = nowBlockTime after successful sealing (references:
startBlockNum, startBlockTime, nowBlockTime, SealBatchAndCheck,
PackCurrentBlock).
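For the CalculateCapWithProposalBlock continuity rule in the first batch_cache.go item above, a standalone sketch of the stricter check; validateProposalBlockNumber is a hypothetical helper, and the real method would apply it to bc.lastPackedBlockHeight before any state mutation.

```go
package batch

import "fmt"

// validateProposalBlockNumber allows block 0 only before anything has been
// packed; after progress, every proposed block must be strictly greater than
// the last packed height.
func validateProposalBlockNumber(lastPackedBlockHeight, blockNumber uint64) error {
	if blockNumber == 0 {
		if lastPackedBlockHeight == 0 {
			return nil // genesis case: nothing packed yet
		}
		return fmt.Errorf("block 0 proposed after progress (last packed height %d)", lastPackedBlockHeight)
	}
	if blockNumber <= lastPackedBlockHeight {
		return fmt.Errorf("stale or duplicate block %d, last packed height %d", blockNumber, lastPackedBlockHeight)
	}
	return nil
}
```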
In `@tx-submitter/batch/batch_data.go`:
- Around line 129-139: Rename the misleading function
EstimateCompressedSizeWithNewPayload to a clear predicate name such as
WillExceedSizeLimit (or similar) and update call sites accordingly; prevent
accidental mutation of cks.txsPayload by building the concatenated blob from a
new slice (e.g., allocate a fresh slice and copy cks.txsPayload and txPayload
into it or append onto a nil slice) before checking len against MaxBlobBytesSize
and before calling zstd.CompressBatchBytes; keep the original boolean-returning
semantics (whether the compressed data would exceed MaxBlobBytesSize) and
preserve error propagation from zstd.CompressBatchBytes.
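A hedged sketch of the renamed predicate, assuming the package's existing BatchData.txsPayload field, MaxBlobBytesSize constant, and a zstd.CompressBatchBytes helper with a ([]byte) ([]byte, error) shape; treating an oversized uncompressed payload as already over the limit is an extra assumption.

```go
// WillExceedSizeLimit reports whether appending txPayload would push the
// compressed blob past MaxBlobBytesSize. It builds a fresh slice so the
// cached cks.txsPayload is never mutated by append.
func (cks *BatchData) WillExceedSizeLimit(txPayload []byte) (bool, error) {
	combined := make([]byte, 0, len(cks.txsPayload)+len(txPayload))
	combined = append(combined, cks.txsPayload...)
	combined = append(combined, txPayload...)
	if len(combined) > MaxBlobBytesSize {
		return true, nil // already too large before compression
	}
	compressed, err := zstd.CompressBatchBytes(combined)
	if err != nil {
		return false, err // propagate compression errors unchanged
	}
	return len(compressed) > MaxBlobBytesSize, nil
}
```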
In `@tx-submitter/batch/batch_query.go`:
- Around line 280-294: The direct type assertion args[0].(struct{...}) is unsafe
and can panic; replace it with an error-checked assertion or type switch to
verify the concrete type before using it (e.g., check args[0] is the expected
struct type and return an error if not). Also remove the manual loop converting
ParentBatchHeader from []uint8 to []byte—treat
batchDataInputStruct.ParentBatchHeader as []byte directly (they are identical in
Go) when assigning to parentBatchHeader; update usages of batchDataInputStruct,
args and ParentBatchHeader accordingly so the code handles ABI mismatches safely
and avoids the unnecessary conversion.
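One way to express the comma-ok check is a small generic helper; asType is a hypothetical name, and the concrete anonymous struct type passed as the type argument must match the generated ABI binding exactly. Since []uint8 and []byte are the same type in Go, no element-wise conversion loop is needed afterwards.

```go
package batch

import "fmt"

// asType converts an ABI-unpacked argument to the expected concrete type,
// returning an error instead of panicking on a mismatch.
func asType[T any](arg interface{}) (T, error) {
	v, ok := arg.(T)
	if !ok {
		var zero T
		return zero, fmt.Errorf("unexpected ABI argument type %T", arg)
	}
	return v, nil
}
```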
In `@tx-submitter/batch/batch_restart_test.go`:
- Around line 493-496: getInfosFromContract currently discards errors from
rollupContract.LastCommittedBatchIndex and
rollupContract.LastFinalizedBatchIndex; update it to capture both error returns
(from LastCommittedBatchIndex and LastFinalizedBatchIndex), validate them
(ensure neither is non-nil), and propagate or fail instead of returning
potentially invalid indices—e.g., change getInfosFromContract to return
(*big.Int, *big.Int, error) and return an error if either call fails, or if this
helper is only used in tests call t.Fatalf with the captured error; reference
the function name getInfosFromContract and the methods
rollupContract.LastCommittedBatchIndex and
rollupContract.LastFinalizedBatchIndex when making the change.
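A sketch of the error-propagating variant, assuming the generated binding exposes LastCommittedBatchIndex and LastFinalizedBatchIndex with the usual abigen (*bind.CallOpts) (*big.Int, error) shape.

```go
package batch

import (
	"fmt"
	"math/big"
)

// getInfosFromContract surfaces contract read failures instead of silently
// returning possibly-invalid indices; rollupContract is the package-level
// test binding.
func getInfosFromContract() (*big.Int, *big.Int, error) {
	lastCommitted, err := rollupContract.LastCommittedBatchIndex(nil)
	if err != nil {
		return nil, nil, fmt.Errorf("read LastCommittedBatchIndex: %w", err)
	}
	lastFinalized, err := rollupContract.LastFinalizedBatchIndex(nil)
	if err != nil {
		return nil, nil, fmt.Errorf("read LastFinalizedBatchIndex: %w", err)
	}
	return lastCommitted, lastFinalized, nil
}
```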
In `@tx-submitter/batch/blob.go`:
- Around line 44-53: In RetrieveBlobBytes, the error message references the
wrong variable (uses data[i*32]) when checking high-order bytes; change the
error construction to report blob[i*32] instead of data[i*32] so the logged
value reflects the actual source byte being validated (update the fmt.Errorf
call inside the loop that currently checks blob[i*32] and reports data[i*32]);
ensure this uses the blob slice and keeps the same message format.
- Around line 189-198: In extractInnerTxFullBytes detect and reject malformed
RLP lengths by validating sizeByteLen (computed from firstByte - 0xf7) before
allocating/reading: ensure sizeByteLen is between 1 and 4 inclusive, return a
clear error if outside that range (e.g., "invalid RLP length bytes"), and only
then read sizeByteLen into sizeByte and compute size; this prevents the
negative-length/panic scenario when firstByte > 0xfb and guards against
empty/zero/oversized length encodings.
- Around line 18-22: The globals emptyBlob, emptyBlobCommit and emptyBlobProof
are initialized while ignoring errors from kzg4844.BlobToCommitment and
kzg4844.ComputeBlobProof; move their initialization into an init() function (or
package init block) where you call kzg4844.BlobToCommitment(emptyBlob) and
kzg4844.ComputeBlobProof(emptyBlob, emptyBlobCommit), check returned errors, and
fail fast (log and panic/exit) if the trusted setup or proof computation errors
so invalid empty proofs are never used; reference the symbols emptyBlob,
emptyBlobCommit, emptyBlobProof and the functions BlobToCommitment and
ComputeBlobProof when making the change.
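A sketch of the fail-fast initialization; the crypto/kzg4844 import path from the morph-l2/go-ethereum fork is assumed, and whether BlobToCommitment/ComputeBlobProof take the blob by value or by pointer depends on the go-ethereum version in use.

```go
package batch

import "github.com/morph-l2/go-ethereum/crypto/kzg4844"

var (
	emptyBlob       kzg4844.Blob
	emptyBlobCommit kzg4844.Commitment
	emptyBlobProof  kzg4844.Proof
)

// init fails fast if the trusted setup cannot produce the empty-blob
// commitment and proof, instead of silently caching zero values.
func init() {
	var err error
	emptyBlobCommit, err = kzg4844.BlobToCommitment(emptyBlob)
	if err != nil {
		panic("kzg4844: empty blob commitment failed: " + err.Error())
	}
	emptyBlobProof, err = kzg4844.ComputeBlobProof(emptyBlob, emptyBlobCommit)
	if err != nil {
		panic("kzg4844: empty blob proof failed: " + err.Error())
	}
}
```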
In `@tx-submitter/batch/commit_test.go`:
- Around line 127-128: The code truncates big.Ints by calling tip.Int64() and
gasFeeCap.Int64() before wrapping with uint256.MustFromBig, risking precision
loss; instead pass the original *big.Int values directly into
uint256.MustFromBig (e.g., replace the big.NewInt(tip.Int64()) pattern) and
ensure tip and gasFeeCap are non-nil/validated before conversion so
full-precision gas tip and fee cap are preserved.
- Around line 214-227: The fee calculation in the test (the local variable fee
computed from tx.BlobGasFeeCap().Uint64(), tx.BlobGas(), tx.GasPrice().Uint64(),
tx.Gas()) can overflow uint64 when multiplying large values; replace the direct
uint64 multiplies with safe checks (either use math/bits.Mul64 overflow
detection or check b > math.MaxUint64/a before multiplication) or perform
arithmetic with big.Int and then compare to txFeeLimit; ensure both blobFee and
txFee are computed with overflow-safe logic and the final comparison against
txFeeLimit uses the same safe numeric type.
- Around line 28-36: The test currently slices pk[2:] inside TestRollupWithProof
which will panic because pk is an empty string; fix by validating and preparing
the private key before slicing: in TestRollupWithProof, assert the test private
key is present (e.g., require.NotEmpty(t, pk) or read from a test env), ensure
it has the "0x" prefix (or add it) and only then call crypto.HexToECDSA on the
hex portion, or alternatively replace the placeholder pk with a known test
private key literal for the test; update references to pk, crypto.HexToECDSA,
and TestRollupWithProof accordingly.
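For the overflow-safe fee comparison in the commit_test.go item above (lines 214-227), a sketch that keeps the arithmetic in big.Int; the core/types import path from the morph-l2 fork is an assumption, and the helper assumes a blob transaction (BlobGasFeeCap returns nil for other tx types).

```go
package batch

import (
	"math/big"

	ethtypes "github.com/morph-l2/go-ethereum/core/types"
)

// totalTxFee computes blob fee + execution fee without uint64 multiplication,
// so large fee caps cannot overflow.
func totalTxFee(tx *ethtypes.Transaction) *big.Int {
	blobFee := new(big.Int).Mul(tx.BlobGasFeeCap(), new(big.Int).SetUint64(tx.BlobGas()))
	execFee := new(big.Int).Mul(tx.GasPrice(), new(big.Int).SetUint64(tx.Gas()))
	return new(big.Int).Add(blobFee, execFee)
}
```

A caller would then compare totalTxFee(tx) against a big.Int built from txFeeLimit using Cmp, rather than comparing uint64 products.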
In `@tx-submitter/iface/client.go`:
- Around line 248-257: The L2Clients.GetRollupBatchByIndex wrapper currently
returns the first client response even if it's nil or missing signatures; update
it to mirror the validation in the standalone rollup GetRollupBatchByIndex by
checking that the returned *eth.RPCRollupBatch is non-nil and contains
signatures (e.g., result != nil && len(result.Signatures) > 0) inside the
tryAllClients callback, skip/continue to the next client when the batch is
invalid, and only accept/return a batch that passes validation (otherwise
propagate the final error); modify the logic around tryAllClients, result, and
err accordingly so behavior matches the standalone implementation.
In `@tx-submitter/services/rollup.go`:
- Around line 257-270: The background goroutine should respect r.ctx
cancellation, use the existing utils.Loop pattern and implement
exponential/backoff on errors; replace the raw for loop with a utils.Loop(r.ctx,
...) call (or equivalent) that calls r.batchCache.InitAndSyncFromRollup() and
r.batchCache.AssembleCurrentBatchHeader(), logs errors including the error
value, and on failure waits with increasing/backoff delays (reset on success)
before retrying instead of immediately sleeping 5s, ensuring the loop returns
when r.ctx is done.
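A sketch of a context-aware retry loop with exponential backoff, written as a standalone helper rather than a change to the project's utils.Loop; the go-ethereum-style log package and the runWithBackoff name are assumptions.

```go
package services

import (
	"context"
	"time"

	"github.com/morph-l2/go-ethereum/log"
)

// runWithBackoff retries step with exponentially increasing delays on error,
// resets the delay after a success, and stops when ctx is cancelled.
func runWithBackoff(ctx context.Context, baseDelay, maxDelay time.Duration, step func() error) {
	delay := baseDelay
	for {
		if err := step(); err != nil {
			log.Error("batch cache sync failed", "error", err, "retry_in", delay)
			select {
			case <-ctx.Done():
				return
			case <-time.After(delay):
			}
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
			continue
		}
		delay = baseDelay // reset backoff after a successful iteration
		select {
		case <-ctx.Done():
			return
		case <-time.After(baseDelay):
		}
	}
}
```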
In `@tx-submitter/types/l2Caller.go`:
- Around line 74-87: In GetSequencerSetBytes update the error message string in
the final fmt.Errorf call to remove the double space ("verify failed" → "verify
failed") so the message reads e.g. "sequencer set hash verify failed ...";
locate this in the L2Caller.GetSequencerSetBytes function (references:
sequencerContract.SequencerSetVerifyHash and
sequencerContract.GetSequencerSetBytes) and change only the error text.
🧹 Nitpick comments (15)
contracts/deploy/020-ContractInit.ts (1)
73-74: Consider extracting magic numbers to configuration.
The hardcoded values 0x...0001 and 8640000000 should ideally be sourced from the config object (similar to batchHeader, submitter, challenger) for maintainability and environment flexibility.

tx-submitter/entry.go (1)
210-213: Consider adding context to the error.
When NewL2Caller fails, the error is returned without additional context, making it harder to diagnose issues during startup.
💡 Suggested improvement

```diff
 l2Caller, err := types.NewL2Caller(l2Clients)
 if err != nil {
-	return err
+	return fmt.Errorf("failed to create L2 caller: %w", err)
 }
```

tx-submitter/batch/commit_test.go (2)
77-83: Replace time.Sleep with retry-based receipt polling.
Using time.Sleep(2 * time.Second) is flaky; transaction confirmation time varies. If the transaction takes longer, the test fails; if it is faster, time is wasted.
♻️ Suggested approach

```go
// Poll for receipt with timeout instead of fixed sleep
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

var receipt *ethtypes.Receipt
for {
	receipt, err = l1Client.TransactionReceipt(ctx, transaction.Hash())
	if err == nil {
		break
	}
	select {
	case <-ctx.Done():
		t.Fatal("timeout waiting for receipt")
	case <-time.After(500 * time.Millisecond):
		// retry
	}
}
```

138-150: estimateGas function is unused.
This helper function is defined but never called in the test file. Consider removing the dead code or using it appropriately.
tx-submitter/batch/batch_cache_test.go (1)
30-51: Test design is unsuitable for CI/automated testing.
TestBatchCacheInitServer runs an infinite loop and waits for os.Interrupt (CTRL-C), which will cause CI pipelines to hang or time out. This appears to be a manual debugging/integration test. Consider:
- Adding a build tag to exclude it from regular test runs (e.g., //go:build integration)
- Adding a timeout context
- Renaming it to indicate it is not a unit test
♻️ Suggested approach
```diff
+//go:build integration
+
 package batch
 ...
-func TestBatchCacheInitServer(t *testing.T) {
+func TestBatchCacheInitServer_Manual(t *testing.T) {
+	if testing.Short() {
+		t.Skip("skipping integration test in short mode")
+	}
+
+	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+	defer cancel()
+
 	cache := NewBatchCache(nil, l1Client, []iface.L2Client{l2Client}, rollupContract, l2Caller)
-	go utils.Loop(cache.ctx, 5*time.Second, func() {
+	go utils.Loop(ctx, 5*time.Second, func() {
```

tx-submitter/services/rollup.go (1)
780-786: Redundant block number fetch and confirmations recalculation.
currentBlock is fetched at line 737 and confirmations is calculated at line 742. Lines 782-786 re-fetch and recalculate the same values unnecessarily for successful transactions. This adds latency and RPC calls without benefit, since only a few lines of code execute between the two fetches.
♻️ Suggested simplification

```diff
 } else {
 	// Transaction succeeded
-	// Get current block number for confirmation count only for successful transactions
-	currentBlock, err = r.L1Client.BlockNumber(context.Background())
-	if err != nil {
-		return fmt.Errorf("get current block number error: %w", err)
-	}
-	confirmations = currentBlock - status.receipt.BlockNumber.Uint64()
+	// currentBlock and confirmations are already computed above at lines 737 and 742
 	if method == constants.MethodCommitBatch {
```

tx-submitter/batch/blob.go (1)
168-176: Remove commented-out code.
This block of commented-out code appears to be superseded by the switch statement above. Dead code reduces readability and should be removed.
🧹 Remove dead code
```diff
-	// we support the tx types of LegacyTxType/AccessListTxType/DynamicFeeTxType
-	//if firstByte == eth.AccessListTxType || firstByte == eth.DynamicFeeTxType {
-	//	// the firstByte here is used to indicate tx type, so skip it
-	//	if err := binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
-	//		return nil, err
-	//	}
-	//} else if firstByte <= 0xf7 { // legacy tx first byte must be greater than 0xf7(247)
-	//	return nil, fmt.Errorf("not supported tx type: %d", firstByte)
-	//}
 	fullTxBytes, err = extractInnerTxFullBytes(firstByte, reader)
```

tx-submitter/batch/batch_data.go (2)
73-84: Magic numbers for block context sizes should be constants.
The values 60 (block context size) and 58 (truncated context for hash) appear without explanation. Define named constants to improve maintainability and document the structure.
♻️ Suggested improvement

```diff
+const (
+	BlockContextSize     = 60 // Size of a single block context in bytes
+	BlockContextHashSize = 58 // Bytes used from block context for DataHash (excludes last 2 bytes)
+)
+
 func (cks *BatchData) DataHash() common.Hash {
 	if cks.hash != nil {
 		return *cks.hash
 	}
 	var bz []byte
 	for i := 0; i < int(cks.blockNum); i++ {
-		bz = append(bz, cks.blockContexts[i*60:i*60+58]...)
+		bz = append(bz, cks.blockContexts[i*BlockContextSize:i*BlockContextSize+BlockContextHashSize]...)
 	}
 	bz = append(bz, cks.l1TxHashes...)
 	return crypto.Keccak256Hash(bz)
 }
```
88-106: Inconsistent hash caching between DataHash and DataHashV2.
DataHash() caches the result in cks.hash, but DataHashV2() does not. If DataHashV2 is called frequently, consider adding similar caching for consistency and performance.

tx-submitter/batch/batch_restart_test.go (3)
27-50: Duplicate global variables and init() across test files.
These global variables (rollupAddr, l1Client, l2Client, rollupContract, l2Caller) and the init() function are duplicated in batch_cache_test.go. Extract them to a shared common_test.go file to avoid maintenance issues.

632-637: Empty if block for exceeded capacity.
The exceeded flag is checked but the block is empty, containing only a comment. Either implement the early-stop logic or remove the dead code.
🧹 Suggested cleanup

```diff
-	// If capacity exceeds limit, can stop early (optional)
-	if exceeded {
-		// Note: Can choose to continue packing until endBlockNum, or stop early
-		// Decide based on business requirements
-	}
+	// If capacity exceeds limit, stop packing
+	if exceeded {
+		break
+	}
```

Or remove it entirely if not needed:

```diff
-	// If capacity exceeds limit, can stop early (optional)
-	if exceeded {
-		// Note: Can choose to continue packing until endBlockNum, or stop early
-		// Decide based on business requirements
-	}
```

397-402: Replace the custom min function with the Go 1.21+ builtin.
The project targets Go 1.24.0, which includes the builtin min function. This custom implementation at lines 397-402 can be removed and replaced with min(a, b).

tx-submitter/batch/batch_query.go (2)
332-401: Duplicate struct conversion logic.
parseCommitBatchWithProofTxData duplicates the struct definition and conversion code from parseCommitBatchTxData. Extract common conversion helpers to reduce duplication.
♻️ Suggested extraction

```go
// extractBatchDataInput extracts IRollupBatchDataInput from ABI-unpacked args
func extractBatchDataInput(arg interface{}) (*bindings.IRollupBatchDataInput, error) {
	s, ok := arg.(struct {
		Version           uint8   `json:"version"`
		ParentBatchHeader []uint8 `json:"parentBatchHeader"`
		// ... rest of fields
	})
	if !ok {
		return nil, fmt.Errorf("unexpected type: %T", arg)
	}
	return &bindings.IRollupBatchDataInput{
		Version:           s.Version,
		ParentBatchHeader: []byte(s.ParentBatchHeader),
		// ...
	}, nil
}

// Similar for extractBatchSignatureInput
```

46-54: Silent error handling on filter failure.
When FilterFinalizeBatch fails, the code continues querying backwards without logging. This could make debugging difficult if there is a persistent issue (e.g., RPC rate limiting).
🔧 Add logging

```diff
 finalizeEventIter, err := bc.rollupContract.FilterFinalizeBatch(filterOpts, []*big.Int{new(big.Int).SetUint64(index)}, nil)
 if err != nil {
+	log.Debug("filter finalize batch failed, continuing backwards", "error", err, "startBlock", startBlock, "endBlock", endBlock)
 	// If query fails, continue querying backwards
 	if endBlock < blockRange {
 		break // Already queried to block 0, exit loop
 	}
```

tx-submitter/batch/batch_header.go (1)
167-185: EncodedBytes cache never persists with value receivers.
Both Bytes() methods assign to EncodedBytes on a value receiver, so the cache is discarded after the call. Switch to pointer receivers (or remove the caching) to avoid repeated allocations.
🔧 Suggested fix

```diff
-func (b BatchHeaderV0) Bytes() BatchHeaderBytes {
+func (b *BatchHeaderV0) Bytes() BatchHeaderBytes {
 	if len(b.EncodedBytes) > 0 {
 		return BatchHeaderBytes(b.EncodedBytes)
 	}
 	batchBytes := make([]byte, expectedLengthV0)
 	batchBytes[0] = BatchHeaderVersion0
 	binary.BigEndian.PutUint64(batchBytes[1:], b.BatchIndex)
 	binary.BigEndian.PutUint64(batchBytes[9:], b.L1MessagePopped)
 	binary.BigEndian.PutUint64(batchBytes[17:], b.TotalL1MessagePopped)
 	copy(batchBytes[25:], b.DataHash[:])
 	copy(batchBytes[57:], b.BlobVersionedHash[:])
 	copy(batchBytes[89:], b.PrevStateRoot[:])
 	copy(batchBytes[121:], b.PostStateRoot[:])
 	copy(batchBytes[153:], b.WithdrawalRoot[:])
 	copy(batchBytes[185:], b.SequencerSetVerifyHash[:])
 	copy(batchBytes[217:], b.ParentBatchHash[:])
 	b.EncodedBytes = batchBytes
 	return batchBytes
 }
@@
-func (b BatchHeaderV1) Bytes() BatchHeaderBytes {
+func (b *BatchHeaderV1) Bytes() BatchHeaderBytes {
 	if len(b.EncodedBytes) > 0 {
 		return BatchHeaderBytes(b.EncodedBytes)
 	}
 	batchBytes := make([]byte, expectedLengthV1)
 	batchBytes[0] = BatchHeaderVersion1
 	binary.BigEndian.PutUint64(batchBytes[1:], b.BatchIndex)
 	binary.BigEndian.PutUint64(batchBytes[9:], b.L1MessagePopped)
 	binary.BigEndian.PutUint64(batchBytes[17:], b.TotalL1MessagePopped)
 	copy(batchBytes[25:], b.DataHash[:])
 	copy(batchBytes[57:], b.BlobVersionedHash[:])
 	copy(batchBytes[89:], b.PrevStateRoot[:])
 	copy(batchBytes[121:], b.PostStateRoot[:])
 	copy(batchBytes[153:], b.WithdrawalRoot[:])
 	copy(batchBytes[185:], b.SequencerSetVerifyHash[:])
 	copy(batchBytes[217:], b.ParentBatchHash[:])
 	binary.BigEndian.PutUint64(batchBytes[249:], b.LastBlockNumber)
 	b.EncodedBytes = batchBytes
 	return batchBytes
 }
```

Also applies to: 195-215
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tx-submitter/metrics/metrics.go (1)
210-224: ⚠️ Potential issue | 🟡 Minor
Add unregister for the new gauge. Without it, re-initializing metrics can leave the registry dirty.
🔧 Suggested fix

```diff
 func (m *Metrics) UnregisterMetrics() {
 	prometheus.Unregister(m.WalletBalance)
 	prometheus.Unregister(m.RpcErrors)
 	prometheus.Unregister(m.RollupCostSum)
 	prometheus.Unregister(m.FinalizeCostSum)
 	prometheus.Unregister(m.RollupCost)
 	prometheus.Unregister(m.FinalizeCost)
 	prometheus.Unregister(m.CollectedL1FeeSum)
 	prometheus.Unregister(m.IndexerBlockProcessed)
 	prometheus.Unregister(m.LastCommittedBatch)
 	prometheus.Unregister(m.LastFinalizedBatch)
+	prometheus.Unregister(m.LastCacheBatchIndex)
 	prometheus.Unregister(m.HasPendingFinalizeBatch)
 	prometheus.Unregister(m.reorgs)
 	prometheus.Unregister(m.confirmedTxs)
 }
```

tx-submitter/services/rollup.go (1)
839-972: ⚠️ Potential issue | 🟡 Minor
Return a concrete error when the next batch is missing from the cache. On a cache miss, the returned error uses a stale err value (often nil), which obscures the root cause.
🛠️ Suggested fix

```diff
 if !ok {
 	log.Error("get next batch by index error",
 		"batch_index", nextBatchIndex,
 	)
-	return fmt.Errorf("get next batch by index err:%v", err)
+	return fmt.Errorf("next batch %d not found in cache", nextBatchIndex)
 }
```
🤖 Fix all issues with AI agents
In `@tx-submitter/batch/batch_cache.go`:
- Around line 197-237: The code assumes the parent is at
batches[uint64(len(batches)-1)] which is unsafe for a map keyed by batch index
and can select the wrong parent or panic; instead determine the actual parent
index by computing the maximum key in the batches map (or derive from the
committed range fi/ci) and use that index to build the parentHeader via
BatchHeaderBytes, then proceed to set bc.lastPackedBlockHeight,
bc.parentBatchHeader and bc.prevStateRoot; if the computed parent index is
missing or out of the committed range call bc.InitAndSyncFromRollup() as the
existing logic does. Ensure you update references to batches[...] and any
variable that assumed zero-based contiguous keys (e.g., parentHeader,
lastPackedBlockHeight, prevStateRoot) to use the found maxIndex (or fi/ci)
instead.
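A small generic helper (hypothetical name) that picks the highest batch index actually present in the map, instead of assuming contiguous zero-based keys; the caller would fall back to InitAndSyncFromRollup when it reports an empty map and otherwise build parentHeader from batches[maxIndex].

```go
package batch

// maxBatchIndex returns the highest key in a sealed-batch map and whether the
// map was non-empty.
func maxBatchIndex[V any](batches map[uint64]V) (uint64, bool) {
	var maxIdx uint64
	found := false
	for idx := range batches {
		if !found || idx > maxIdx {
			maxIdx, found = idx, true
		}
	}
	return maxIdx, found
}
```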
In `@tx-submitter/batch/batch_restart_test.go`:
- Around line 31-53: The test init() currently ignores errors from
ethclient.Dial which can leave l1Client or l2Client nil and cause panics later;
update the Dial calls to capture and check errors (use l1Client, err =
ethclient.Dial(l1ClientRpc) and l2Client, err = ethclient.Dial(l2ClientRpc)),
and if err != nil return a clear failure (panic or fail the test) before using
the clients; likewise keep the existing error checks around bindings.NewRollup
and types.NewL2Caller to ensure all constructors only run with valid clients.
In `@tx-submitter/batch/commit_test.go`:
- Around line 191-207: The code can pass a nil blobFee into uint256.MustFromBig
causing a panic when head.ExcessBlobGas is nil; update getGasTipAndCap (and/or
createBlobTx) to guard blobFee before converting: ensure when head.ExcessBlobGas
== nil you set blobFee to zero (e.g., big.NewInt(0)) or return a safe zero
uint256 value, or add an explicit nil-check and branch so uint256.MustFromBig is
never called with nil; modify the places that construct the BlobTx to accept the
safe value (references: blobFee, head.ExcessBlobGas, getGasTipAndCap,
createBlobTx, uint256.MustFromBig, BlobTx constructor).
In `@tx-submitter/services/rollup.go`:
- Around line 257-275: The loop currently continues to call LatestBatchIndex()
and r.metrics.SetLastCacheBatchIndex(index) even when InitAndSyncFromDatabase or
AssembleCurrentBatchHeader fail, which can cause a panic; modify the goroutine
in rollup.go to bail out on any error from r.batchCache.InitAndSyncFromDatabase
or r.batchCache.AssembleCurrentBatchHeader (i.e., return from the iteration
early) and only call r.batchCache.LatestBatchIndex() and
r.metrics.SetLastCacheBatchIndex(index) if those previous calls succeeded and
LatestBatchIndex() itself returned no error; use the existing batchCacheSyncMu,
the functions InitAndSyncFromDatabase, AssembleCurrentBatchHeader,
LatestBatchIndex, and r.metrics.SetLastCacheBatchIndex to locate and implement
the early returns and guarded metric update.
🧹 Nitpick comments (2)
tx-submitter/batch/commit_test.go (1)
59-61: Avoid stale err usage after cache.Get.
cache.Get doesn't return an error; the require.NoError(t, err) is redundant and potentially misleading.
♻️ Suggested cleanup

```diff
 batch, exist := cache.Get(latestCommitBatchIndex.Uint64() + 1)
-require.NoError(t, err)
 require.True(t, exist)
```

tx-submitter/batch/batch_storage.go (1)
271-278: Consider surfacing DB errors in BatchExists.
Right now any DB error is treated as "not found," which can mask underlying I/O failures. Consider returning (bool, error) or at least logging errors other than not-found.
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tx-submitter/services/rollup.go (1)
706-742: ⚠️ Potential issue | 🟠 Major
Remove the redundant r.pendingTxs.Add(replacedTx) call: the transaction is already added by SendTx.
createReplacementTransferTx calls r.SendTx(newTx) (line 2020), which internally adds the transaction to pendingTxs at line 1554. Then handleDiscardedTx redundantly calls r.pendingTxs.Add(replacedTx) again at line 734, causing the replacement transaction to be added twice.
Either remove the Add call in handleDiscardedTx since SendTx already handles it, or refactor createReplacementTransferTx to return the signed transaction without calling SendTx, letting the caller manage the pending transaction state.
🤖 Fix all issues with AI agents
In `@tx-submitter/batch/batch_cache.go`:
- Around line 256-259: The code silently ignores the error returned by
parentHeader.TotalL1MessagePopped(), which can hide a corrupted parent header
and leave bc.totalL1MessagePopped at 0; update the block in the function that
sets bc.currentBlockNumber and bc.totalL1MessagePopped to capture the (value,
err) from parentHeader.TotalL1MessagePopped(), check err, and handle it (return
the error or wrap it with context) instead of discarding it so failures in
TotalL1MessagePopped are propagated; reference
parentHeader.TotalL1MessagePopped(), bc.totalL1MessagePopped, and
bc.currentBlockNumber when making the change.
- Around line 436-454: The continuity check in
BatchCache.CalculateCapWithProposalBlock is a TOCTOU because bc.mu is unlocked
before the expensive L2 client BlockByNumber call and re-locked later, allowing
lastPackedBlockHeight to change (e.g., via PackCurrentBlock); fix by moving the
BlockByNumber call outside the locked section but then re-acquiring bc.mu and
re-validating the same continuity conditions against lastPackedBlockHeight
before mutating state or returning; reference the method
CalculateCapWithProposalBlock, the mutex bc.mu, the state field
bc.lastPackedBlockHeight, and make sure any early returns (wrong/discontinuous
block number) and subsequent logic execute only after the re-check under lock.
- Around line 770-778: The code reads fields from bc.parentBatchHeader (calling
TotalL1MessagePopped(), BatchIndex(), and Hash()) but discards their errors,
which can silently produce zero values; update the block that initializes
parentBatchHeaderTotalL1, parentBatchBatchIndex, and parentBatchHash to capture
and check each error and return (or propagate) the first non-nil error instead
of ignoring it, and then update callers such as SealBatch to handle the new
error return; specifically modify the logic around bc.parentBatchHeader, change
the local assignments to receive (value, err) from
TotalL1MessagePopped/BatchIndex/Hash, check err and propagate it upward, and
adjust SealBatch (and any other callers) to accept and handle the error.
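For the TOCTOU item above (lines 436-454), a stripped-down illustration of the unlock/fetch/re-lock/re-validate pattern; miniCache is a stand-in type, not the real BatchCache, and the exact continuity condition would mirror whatever CalculateCapWithProposalBlock enforces.

```go
package batch

import (
	"fmt"
	"sync"
)

// miniCache is a minimal stand-in used only to show the locking pattern.
type miniCache struct {
	mu                    sync.Mutex
	lastPackedBlockHeight uint64
}

// packChecked does the expensive fetch with the lock released, then re-acquires
// the lock and re-validates continuity before mutating any state.
func (c *miniCache) packChecked(blockNumber uint64, fetch func(uint64) error) error {
	if err := fetch(blockNumber); err != nil { // e.g. L2 BlockByNumber, done unlocked
		return err
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	// state may have advanced (e.g. via PackCurrentBlock) while unlocked
	if blockNumber != c.lastPackedBlockHeight+1 {
		return fmt.Errorf("stale proposal: block %d, last packed %d", blockNumber, c.lastPackedBlockHeight)
	}
	c.lastPackedBlockHeight = blockNumber
	return nil
}
```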
In `@tx-submitter/batch/batch_query.go`:
- Around line 260-303: The two parsing blocks in parseCommitBatchTxData and
parseCommitBatchWithProofTxData duplicate unsafe type assertions and conversion
logic; extract a shared helper (e.g., parseBatchDataAndSignature or similar)
that takes args []interface{} (or the two interface{} params) and returns
(*bindings.IRollupBatchDataInput, *bindings.IRollupBatchSignatureInput, error),
perform comma-ok type assertions when casting args[0] and args[1], convert
[]uint8 -> []byte and big.Int safely inside that helper, and replace both
parseCommitBatchTxData and parseCommitBatchWithProofTxData logic to call this
helper and handle the returned error; ensure the helper references the same
field names (Version, ParentBatchHeader, LastBlockNumber, NumL1Messages,
PrevStateRoot, PostStateRoot, WithdrawalRoot, SignedSequencersBitmap,
SequencerSets, Signature) so mapping is identical.
In `@tx-submitter/batch/batch_storage.go`:
- Around line 202-210: The comparison in updateBatchIndices currently uses a
direct equality check (err == db.ErrKeyNotFound) which fails for wrapped errors;
change that check to use errors.Is(err, db.ErrKeyNotFound) and ensure the errors
package is imported if not already, keeping the existing logic that initializes
indices to []uint64{} when the key is not found while returning other errors
from loadBatchIndices.
- Around line 160-190: StoreSealedBatches currently writes only the current
batch keys to saveBatchIndices, which overwrites and orphans previously stored
batches; change it to load the existing indices (via loadBatchIndices or
GetStoredBatchIndices), merge them with the newly appended indices (using
encodeBatchKey/idx to identify duplicates), deduplicate the combined list, and
then call saveBatchIndices with the merged list so older batches remain
reachable; alternatively, if replacement was intended, delete any old batch keys
not in the new indices before calling saveBatchIndices.
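A sketch of the merge-and-deduplicate step for StoreSealedBatches, assuming indices are plain uint64 batch numbers; the mergeIndices helper name is hypothetical.

```go
package batch

import "sort"

// mergeIndices merges newly stored batch indices into the previously persisted
// list, deduplicating and keeping ascending order, so older sealed batches stay
// reachable after the indices key is rewritten.
func mergeIndices(existing, added []uint64) []uint64 {
	seen := make(map[uint64]struct{}, len(existing)+len(added))
	merged := make([]uint64, 0, len(existing)+len(added))
	for _, idx := range append(append([]uint64{}, existing...), added...) {
		if _, dup := seen[idx]; dup {
			continue
		}
		seen[idx] = struct{}{}
		merged = append(merged, idx)
	}
	sort.Slice(merged, func(i, j int) bool { return merged[i] < merged[j] })
	return merged
}
```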
In `@tx-submitter/batch/commit_test.go`:
- Around line 58-60: The test incorrectly calls require.NoError(t, err) after
cache.Get even though cache.Get returns (*eth.RPCRollupBatch, bool) and does not
set err; remove the stale require.NoError(t, err) and instead assert the
returned batch is present (e.g. keep require.True(t, exist) and add
require.NotNil(t, batch) or equivalent) to make the intent explicit; update the
lines around the cache.Get call (referencing cache.Get and
latestCommitBatchIndex) accordingly.
In `@tx-submitter/services/rollup.go`:
- Around line 257-276: The closure passed to utils.Loop should use local error
variables and correct the log text: change assignments like `err =
r.batchCache.InitAndSyncFromDatabase()` and `err =
r.batchCache.AssembleCurrentBatchHeader()` to short declarations `err := ...` so
they don't clobber the outer Start() `err`, and update the log message that
currently reads "init and sync from rollup failed" to "init and sync from
database failed" to match the call to r.batchCache.InitAndSyncFromDatabase();
keep the other logging and the r.metrics.SetLastCacheBatchIndex(index) usage
intact.
In `@tx-submitter/types/l2Caller.go`:
- Line 12: The import for hexutil in tx-submitter/types/l2Caller.go currently
pulls from github.com/ethereum/go-ethereum/common/hexutil and must be changed to
the Morph fork; update the import to
github.com/morph-l2/go-ethereum/common/hexutil so it matches other files and
avoids mixed upstream forks—locate the import block in l2Caller.go and replace
the hexutil import path (references to hexutil functions/usages in that file
remain unchanged).
🧹 Nitpick comments (15)
contracts/deploy/020-ContractInit.ts (2)
60-71: Use strict equality and consider broader validation for rollupDelayPeriod.
Line 68 uses loose equality (==). With TypeScript, prefer === to avoid type coercion surprises (e.g., "0" == 0 is true). Also consider validating that the value is a positive integer, not just non-zero.
Proposed fix

```diff
-    if (rollupDelayPeriod==0){
-        console.error('rollupDelayPeriod cannot set zero')
+    if (!Number.isInteger(rollupDelayPeriod) || rollupDelayPeriod <= 0) {
+        console.error('rollupDelayPeriod must be a positive integer')
         return ''
     }
```
79-79: Hardcoded magic bytes32 value should be documented or extracted to a named constant.
"0x0000...0001" is passed to initialize2 without any explanation. A brief comment or a named constant (e.g., const INITIAL_L2_CALLER = ...) would help future readers understand the intent.

tx-submitter/batch/batch_query.go (1)
19-93: Silent error swallowing may mask persistent failures.
In getLastFinalizeBatchHeaderFromRollupByIndex, when FilterFinalizeBatch fails (line 47) or TransactionByHash/parseFinalizeBatchTxData fail (lines 64, 70), errors are silently skipped. Consider logging these at Warn level so persistent issues (e.g., RPC failures, ABI mismatches) are diagnosable in production.

tx-submitter/batch/batch_storage.go (2)
62-83: LoadSealedBatch wraps ErrKeyNotFound into a formatted error, preventing callers from distinguishing "not found" from I/O errors.
On line 71, the original sentinel is replaced with fmt.Errorf("sealed batch %d not found", ...). Callers cannot use errors.Is(err, db.ErrKeyNotFound). Consider using %w to preserve the sentinel or returning a dedicated not-found error.
Suggested fix

```diff
 if errors.Is(err, db.ErrKeyNotFound) {
-	return nil, fmt.Errorf("sealed batch %d not found", batchIndex)
+	return nil, fmt.Errorf("sealed batch %d not found: %w", batchIndex, db.ErrKeyNotFound)
 }
```

37-60: Index update failures are silently tolerated; consider the consistency implications.
Both StoreSealedBatch (line 55) and DeleteSealedBatch (line 127) log a warning but continue when updateBatchIndices fails. This means the batch data and the indices list can drift: a stored batch might be missing from the indices list, or a deleted batch might still be referenced. LoadAllSealedBatches handles the latter gracefully (lines 104-107), but the former case means a batch can be stored yet invisible to LoadAllSealedBatches. If this is an accepted trade-off, a short comment explaining why would help future maintainers.
Also applies to: 116-133
tx-submitter/types/l2Caller.go (1)
74-87: TOCTOU risk between SequencerSetVerifyHash and GetSequencerSetBytes.
The hash (line 75) and set bytes (line 79) are fetched in separate RPC calls. If opts doesn't pin a specific BlockNumber, the sequencer set could change between the two calls, causing a spurious verification failure. Callers should ensure opts.BlockNumber is set. A brief doc comment on this function noting the requirement would help.

tx-submitter/mock/rollup.go (1)
88-96: Nil iterator returns from FilterCommitBatch/FilterFinalizeBatch will panic on .Next() calls.
Both methods return nil, nil, but callers immediately invoke .Next() without nil-checking (e.g., batch_query.go:46, batch_restart_test.go:297/396, oracle/batch.go:226). Returning a zero-value iterator or adding setter methods would prevent nil pointer dereferences in tests that exercise event iteration.

tx-submitter/batch/batch_restart_test.go (3)
297-305: Filter errors are silently swallowed, making debugging difficult.
When FilterFinalizeBatch fails, the error is discarded and the loop simply retries a different range. At minimum, log the error for observability. The same pattern exists in getCommitBatchDataByIndex (lines 397-404).
🔧 Proposed fix

```diff
 finalizeEventIter, err := rollupContract.FilterFinalizeBatch(filterOpts, []*big.Int{new(big.Int).SetUint64(index)}, nil)
 if err != nil {
-	// If query fails, continue querying backwards
+	// If query fails, log and continue querying backwards
+	fmt.Printf("FilterFinalizeBatch error in range [%d, %d]: %v\n", startBlock, endBlock, err)
 	if endBlock < blockRange {
 		break // Already queried to block 0, exit loop
 	}
```
84-164: TestBatchRestartInit depends on live local nodes and on-chain state; consider a build tag or skip guard.
This test will fail in CI or on any machine without localhost:9545/localhost:8545 running. A testing.Short() skip or a build tag (e.g., //go:build integration) would prevent spurious CI failures.
🔧 Proposed fix (apply to each test function)

```diff
 func TestBatchRestartInit(t *testing.T) {
+	if testing.Short() {
+		t.Skip("skipping integration test in short mode")
+	}
 	testDir := filepath.Join(t.TempDir(), "testleveldb")
```

260-265: Remove the custom min function; use Go's built-in instead.
The project targets Go 1.24.0. Since Go 1.21, min is a built-in function. This custom definition shadows it and can be removed.

tx-submitter/services/rollup.go (2)
1262-1268: common.Big0 is a shared global; consider using new(big.Int) to avoid aliasing risk.
SignedSequencersBitmap holds a pointer to common.Big0. If any downstream code (ABI encoding, contract interaction) mutates this *big.Int in place, it would corrupt the shared constant. Using new(big.Int) is a low-cost defensive measure.
🔧 Proposed fix

```diff
 sigData := bindings.IRollupBatchSignatureInput{
-	SignedSequencersBitmap: common.Big0,
+	SignedSequencersBitmap: new(big.Int),
 	SequencerSets:          batch.CurrentSequencerSetBytes,
 	Signature:              []byte("0x"),
 }
```

868-876: Finalize fetches the next batch to get ParentBatchHeader; verify this matches contract expectations.
nextBatchIndex = target.Uint64() + 1 fetches the batch after the one being finalized, using its ParentBatchHeader (which is the header of the batch being finalized). This is a non-obvious pattern. If the next batch hasn't been assembled yet, finalization will fail with a confusing "batch not found" error. Consider adding a comment clarifying the intent, and logging the target vs. next index for easier debugging.

tx-submitter/batch/batch_cache.go (3)
88-93: Constructor panics on L2 client failure — prefer returning an error.
panic(err) at line 92 crashes the entire process if any L2 client is temporarily unavailable during construction. Callers cannot handle this gracefully. Consider returning (*BatchCache, error) from NewBatchCache.

560-584: FetchAndCacheHeader fetches the same block twice: once in CalculateCapWithProposalBlock and again at line 579.
The second fetch is needed because PackCurrentBlock calls ClearCurrent, discarding the header. If this method is on a hot path, consider caching the header before clearing, or returning it from PackCurrentBlock.

699-728: CheckBatchSizeReached uses a rough heuristic that may be unreliable.
Line 724 estimates compressed size as blockContextsSize / 2, assuming a ~2x zstd compression ratio. This is a loose approximation; actual compression ratios vary widely depending on data entropy. The SealBatch method already computes and returns the actual reachedExpectedSize, which is more accurate. Consider storing that value in the sealed batch or the storage layer instead of re-estimating.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tx-submitter/services/rollup.go (1)
1262-1268: ⚠️ Potential issue | 🔴 Critical
Signature verification is hardcoded to empty values; confirm this is intentional and that the contract accepts zero signatures.
buildSignatureInput unconditionally sets SignedSequencersBitmap: common.Big0 and Signature: []byte("0x"). The contract's commitBatch and commitBatchWithProof methods accept these fields and store SignedSequencersBitmap in state, indicating signatures are validated on-chain. Either document why empty signatures are acceptable for this phase, or implement proper BLS signature collection to avoid transaction failures.
🤖 Fix all issues with AI agents
In `@tx-submitter/batch/batch_cache.go`:
- Around line 358-369: Rename the misspelled variables in getBatchBlockRange:
change preBatchStorge to preBatchStorage and batchStorge to batchStorage (and
update any references in the same function) so the calls to
bc.rollupContract.BatchDataStore(nil, preIndex) and
bc.rollupContract.BatchDataStore(nil, batchIndex) assign into correctly spelled
variables before returning their BlockNumber values.
- Around line 850-878: buildBlockContext currently truncates txsNum and l1MsgNum
to uint16 without checks; add an explicit guard to prevent silent wrapping by
clamping both txsNum and l1MsgNum to math.MaxUint16 (65535) before casting and
writing to blsBytes, and optionally emit a warning/log when a clamp occurs;
update the assignments that write numTxs (blsBytes[56:58]) and numL1Messages
(blsBytes[58:60]) to use the clamped values instead of directly casting the
incoming ints.
- Around line 1087-1101: In AssembleCurrentBatchHeader(), fix the typo in the
error message returned when endBlockNum < bc.currentBlockNumber by replacing
"rerog" with "reorg" so the message reads something like "has reorg, should
check block status" (update the fmt.Errorf call that constructs the string using
bc.currentBlockNumber and endBlockNum).
- Around line 246-250: The code calls rollupContract.BatchDataStore(nil, fi)
using the finalized index `fi` instead of the parent batch index, which can set
bc.lastPackedBlockHeight to the wrong value; change the fallback to call
rollupContract.BatchDataStore(nil, parentBatchIndex) (the index derived from
parentHeader.LastBlockNumber()) and propagate/log any error so
bc.lastPackedBlockHeight is set from the correct BatchDataStore entry; update
the block that currently references BatchDataStore(nil, fi) to use
parentBatchIndex and preserve the existing error handling around the call.
- Around line 84-93: The constructor currently panics on L2 connectivity
failure; change NewBatchCache to return (*BatchCache, error) (or existing type
plus error) instead of panicking, and replace the panic(err) after calling
ifL2Clients.BlockNumber(ctx) with a returned wrapped error (e.g., fmt.Errorf("L2
client health check failed: %w", err)); update callers of NewBatchCache to
handle the error. Keep the default isBatchUpgraded assignment as-is and ensure
the connectivity check uses iface.L2Clients.BlockNumber(ctx) to validate clients
but bubbles the error up rather than terminating the process.
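A sketch of the health check as a helper that NewBatchCache could call and propagate instead of panicking; the blockNumberer interface and checkL2Connectivity name are hypothetical, and the (uint64, error) shape of BlockNumber is an assumption about iface.L2Clients.

```go
package batch

import (
	"context"
	"fmt"
)

// blockNumberer captures just the call the health check needs.
type blockNumberer interface {
	BlockNumber(ctx context.Context) (uint64, error)
}

// checkL2Connectivity wraps the connectivity probe so the constructor can
// return the error to its caller instead of terminating the process.
func checkL2Connectivity(ctx context.Context, clients blockNumberer) error {
	if _, err := clients.BlockNumber(ctx); err != nil {
		return fmt.Errorf("L2 client health check failed: %w", err)
	}
	return nil
}
```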
In `@tx-submitter/services/rollup.go`:
- Line 125: NewRollup currently calls batch.NewBatchCache(...) which may panic;
update NewRollup to call batch.NewBatchCache and handle the returned error
instead of assuming success: capture (bc, err) := batch.NewBatchCache(...), if
err != nil return nil, fmt.Errorf("creating batch cache: %w", err) (or propagate
appropriately), and assign bc to the Rollup.batchCache field; adjust NewRollup
signature to return (/*Rollup type*/, error) if not already so the error can be
propagated.
- Around line 718-734: handleDiscardedTx is calling r.pendingTxs.Add(replacedTx)
even though createReplacementTransferTx (which calls SendTx) already adds the tx
via SendTx's post-send hook; remove the duplicate add to avoid resetting
SendTime/QueryTimes — either delete the r.pendingTxs.Add(replacedTx) call in
handleDiscardedTx (and similar duplicate adds after ReSubmitTx/CancelTx) or
guard it with an existence check (only call Add if
pendingTxs.Contains(replacedTx.Hash()) is false) so SendTx remains the single
place that registers new pending transactions.
- Line 809: After successfully finalizing a batch, remove its on-disk data as
well as the in-memory entry: in the same place where
r.batchCache.Delete(batchIndex) is called, also invoke
r.batchStorage.DeleteSealedBatch(batchIndex) (handle/propagate any returned
error as appropriate) so sealed batch entries are removed from LevelDB and do
not accumulate on disk.
🧹 Nitpick comments (6)
tx-submitter/batch/batch_cache.go (5)
563-583: FetchAndCacheHeader fetches the same block twice.
CalculateCapWithProposalBlock (line 565) already fetches the block from L2, but after packing, FetchAndCacheHeader re-fetches the same block at line 579 just to return the header. Consider having CalculateCapWithProposalBlock or PackCurrentBlock stash the header so this extra RPC call is avoided.

706-728: CheckBatchSizeReached uses an unreliable heuristic.
The method estimates compressed size as blockContextsSize / 2, which is a rough approximation. Block contexts are structured data (not the actual transaction payload), so dividing by 2 doesn't reflect zstd compression of the real payload. SealBatch already computes and returns reachedExpectedSize; consider persisting that boolean in the sealed batch instead.

932-1009: Duplicate block fetch in assembleUnFinalizeBatchHeaderFromL2Blocks.
Each iteration fetches the same block twice: once inside CalculateCapWithProposalBlock (line 958, internally line 457) and again at line 964. This doubles the L2 RPC calls. Consider refactoring CalculateCapWithProposalBlock to accept an already-fetched block, or to return the block it fetched.

1087-1180: AssembleCurrentBatchHeader duplicates ~90% of the assembleUnFinalizeBatchHeaderFromL2Blocks logic.
Both methods share the same loop structure: iterate blocks, call CalculateCapWithProposalBlock, check exceeded/interval/timeout, seal, and pack. The only differences are how start/end block numbers are determined and how sealing is performed (with or without batch hash verification). Extract a shared method to reduce duplication and divergence risk.

122-157: Init() re-uses err from headerBytes.LastBlockNumber() in an inner scope, shadowing it.
On line 143, err from LastBlockNumber() is checked and, if non-nil, a new err is declared in the inner if block (line 144), shadowing the outer err. While Go allows this, it makes it easy to miss that the original LastBlockNumber error is effectively swallowed; the fallback silently replaces it. Consider logging the original error for observability.

tx-submitter/services/rollup.go (1)
868-876: Add a clarifying log or documentation: finalization requires the next batch to be committed.
The code correctly fetches the batch at target + 1 because batch(target+1).ParentBatchHeader contains the header of batch target, which is what the contract's finalizeBatch function expects (verified via Solidity contract line 595: it validates finalizedStateRoots[_batchIndex - 1] against the batch header's previous state hash). However, when the next batch isn't yet in the cache (lines 873-876), the function returns nil with only a warn-level log. This behavior is correct by design: finalization necessarily waits for the next batch to be committed, but it is worth clarifying the constraint in the code. The current log at line 878 ("next batch is nil, wait next batch header to finalize") already hints at the reason; consider elevating it to Info level or adding an inline comment to make the dependency explicit for operators.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tx-submitter/services/rollup.go (1)
1262-1268: ⚠️ Potential issue | 🔴 Critical
Fix Signature: []byte("0x"); it contains invalid data for the signature field.
buildSignatureInput hardcodes the signature to []byte("0x"), which represents the literal ASCII string "0x" (2 bytes: [48, 120]). This is neither an empty byte slice nor valid signature data, and it will likely cause contract validation failures when the calldata is packed. Additionally, confirm that SignedSequencersBitmap: common.Big0 is intentional, as the prior staker bitmap querying logic has been removed entirely. Update the function to either:
- Use []byte{} for the signature if no signature is required, or
- Implement proper signature generation logic if a real signature is expected
🤖 Fix all issues with AI agents
In `@tx-submitter/batch/batch_cache.go`:
- Around line 985-1001: The code calls bc.GetSealedBatch(batchIndex) and ignores
the found boolean, then dereferences batch (batch.LastBlockNumber) which can
nil-panic; change the call site in SealBatchAndCheck/Seal path (where batch, _
:= bc.GetSealedBatch(batchIndex)) to check the returned found flag and return an
error if not found (include context like batchIndex and batchHash), only then
use batch.LastBlockNumber to compute startBlockNum and proceed to fetch
startBlock via bc.l2Clients.BlockByNumber; apply the identical guard/fix in
AssembleCurrentBatchHeader at the similar GetSealedBatch usage around line 1164
to avoid the same nil dereference.
- Around line 1057-1067: BatchCache.Delete currently only removes the entry from
the in-memory sealedBatches map; update it to also remove the persisted copy by
calling bc.batchStorage.DeleteSealedBatch(batchIndex) when the entry exists.
Keep the mutex protection around the in-memory delete, then invoke
bc.batchStorage.DeleteSealedBatch(batchIndex) (guarded by nil-check on
bc.batchStorage), and on error log via the cache's logger (e.g., bc.logger or
existing logging facility) so callers in rollup.go no longer leave stale LevelDB
data; still return the boolean indicating whether the in-memory entry existed.
- Around line 200-207: Accessing indices[0] before checking length is unsafe; in
the code that computes maxIndex from indices (variable names: indices,
maxIndex), first guard that len(indices) > 0 and only then assign maxIndex =
indices[0], otherwise handle the empty-case (e.g., return, skip computation, or
set to a sentinel) so you never index into an empty slice; adjust the loop
accordingly so the first element is used only after the length check.
In `@tx-submitter/batch/batch_query.go`:
- Around line 31-53: The loop can underflow when startBlock == 0 because
endBlock = startBlock - 1 will wrap; in the block where FilterFinalizeBatch
returns an error (the error handling after calling
bc.rollupContract.FilterFinalizeBatch), guard against underflow by checking if
startBlock == 0 and if so break/exit the loop (or set endBlock = 0 and break)
instead of performing startBlock - 1; update the error branch that currently
assigns endBlock = startBlock - 1 to handle the startBlock==0 case to safely
terminate the loop and avoid wrapping.
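A wrap-safe sketch of the backwards window calculation; prevWindow is a hypothetical helper that stops cleanly once block 0 has been covered instead of computing startBlock-1 on an unsigned zero (it assumes blockRange >= 1).

```go
package batch

// prevWindow returns the next [start, end] range when scanning event logs
// backwards in fixed-size chunks, and ok=false once block 0 has been reached.
func prevWindow(startBlock, blockRange uint64) (newStart, newEnd uint64, ok bool) {
	if startBlock == 0 {
		return 0, 0, false // genesis already covered; stop scanning
	}
	newEnd = startBlock - 1
	if newEnd+1 < blockRange { // fewer than blockRange blocks remain
		return 0, newEnd, true
	}
	return newEnd - blockRange + 1, newEnd, true
}
```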
🧹 Nitpick comments (6)
tx-submitter/types/l2Caller.go (1)
74-87: TOCTOU window between the SequencerSetVerifyHash and GetSequencerSetBytes calls.
The hash (line 75) and setBytes (line 79) are fetched in two separate RPC calls. If the sequencer set rotates between these calls, the verification at line 83 will fail. While the error is descriptive and the calling code likely retries, consider using a single callOpts with a pinned block number to ensure both reads see the same state.
Suggested approach

```diff
 func (c *L2Caller) GetSequencerSetBytes(opts *bind.CallOpts) ([]byte, common.Hash, error) {
+	// Ensure both reads target the same block to avoid TOCTOU
+	if opts.BlockNumber == nil {
+		// Pin to latest known block if not already specified
+	}
 	hash, err := c.sequencerContract.SequencerSetVerifyHash(opts)
```

The callers in batch_cache.go do set callOpts.BlockNumber before calling this, so in practice this is likely safe. But if this method is ever called with opts.BlockNumber == nil, the two reads could hit different blocks.

tx-submitter/batch/batch_query.go (1)
57-83: Silent error swallowing makes debugging difficult.
Lines 64-66, 70-72, and 76-78 all continue on error without any logging. For a backward scan that might iterate many events, this makes it nearly impossible to diagnose why the function eventually fails to find a batch. Consider adding log.Debug or log.Warn for these cases.

tx-submitter/services/rollup.go (1)
868-876: Variable batch shadows the imported batch package.
Line 870 declares batch, ok := r.batchCache.Get(nextBatchIndex), which shadows the batch package imported on line 29. While functionally OK in this scope, renaming to e.g. nextBatch would avoid confusion and prevent issues if the batch package is referenced later in this function.
Proposed fix

```diff
-	batch, ok := r.batchCache.Get(nextBatchIndex)
-	if !ok {
-		log.Warn("get next batch by index failed, batch not found",
+	nextBatch, ok := r.batchCache.Get(nextBatchIndex)
+	if !ok {
+		log.Warn("get next batch by index failed, batch not found",
 			"batch_index", nextBatchIndex,
 		)
 		return nil
 	}
-	if batch == nil {
+	if nextBatch == nil {
 		log.Info("next batch is nil,wait next batch header to finalize", "next_batch_index", nextBatchIndex)
 		return nil
 	}
 	// calldata
-	calldata, err := r.abi.Pack("finalizeBatch", []byte(batch.ParentBatchHeader))
+	calldata, err := r.abi.Pack("finalizeBatch", []byte(nextBatch.ParentBatchHeader))
```

tx-submitter/batch/batch_storage.go (2)
37-60: Batch data can become orphaned if the index update fails.
Lines 54-56: If updateBatchIndices fails, the batch data is stored in LevelDB but not tracked in the indices list, making it unreachable via LoadAllSealedBatches. The warning log is good, but this could accumulate orphaned data over time. Consider either failing the entire operation (so it can be retried atomically) or adding a recovery mechanism that scans for orphaned keys.
160-166: Mixed string/binary key encoding works but may surprise during debugging.
encodeBatchKey concatenates a human-readable prefix "sealed_batch_" with a raw 8-byte big-endian uint64. The binary portion won't be readable in LevelDB inspection tools. This is a minor ergonomics concern, not a correctness issue.

tx-submitter/batch/batch_cache.go (1)
562-583: FetchAndCacheHeader fetches the same block twice from L2.
The block is first fetched inside CalculateCapWithProposalBlock (line 446), then fetched again at line 578 after PackCurrentBlock clears the current state. If this backward-compat wrapper sees ongoing use, consider caching the header in the current-block state before clearing.
tx-submitter/batch/batch_cache.go
Outdated
```go
parentHeader := BatchHeaderBytes(batches[maxIndex].ParentBatchHeader[:])
bc.lastPackedBlockHeight, err = parentHeader.LastBlockNumber()
if err != nil {
	parentBatchIndex, err := parentHeader.BatchIndex()
	if err != nil {
		return fmt.Errorf("get batch index from parent header failed err: %w", err)
	}
	// check batch index range
	if parentBatchIndex > ci.Uint64() || parentBatchIndex < fi.Uint64() {
		// sync from another side
		err = bc.InitAndSyncFromRollup()
		if err != nil {
			return err
		}
		return nil
	}
	store, err := bc.rollupContract.BatchDataStore(nil, fi)
	if err != nil {
		return err
	}
	bc.lastPackedBlockHeight = store.BlockNumber.Uint64()
}
bc.sealedBatches = batches
bc.parentBatchHeader = &parentHeader
bc.prevStateRoot, err = parentHeader.PostStateRoot()
if err != nil {
	return fmt.Errorf("get post state root err: %w", err)
}
bc.currentBlockNumber = bc.lastPackedBlockHeight
bc.totalL1MessagePopped, err = parentHeader.TotalL1MessagePopped()
```
Bug: parentBatchHeader is set to the wrong batch header in InitAndSyncFromDatabase.
At lines 225-248, when restoring from the database:

```go
parentHeader := BatchHeaderBytes(batches[maxIndex].ParentBatchHeader[:])
// ...
bc.parentBatchHeader = &parentHeader
```
batches[maxIndex] is the last sealed batch (say batch N). Its .ParentBatchHeader field stores the header of batch N-1 (the parent), because that's what was saved during SealBatch:
```go
// In SealBatch():
parentBatchHeaderBytes = hexutil.Bytes(*bc.parentBatchHeader) // parent's header at seal time
sealedBatch := &eth.RPCRollupBatch{
	ParentBatchHeader: parentBatchHeaderBytes, // stores N-1's header
	...
}
// Then updates for next batch:
bc.parentBatchHeader = &batchHeader // now points to N's own header
```

So after InitAndSyncFromDatabase, bc.parentBatchHeader is set to batch N-1's header instead of batch N's header. This causes all derived state to be off by one batch.
tx-submitter/services/rollup.go
Outdated
```go
if err = r.batchCache.InitAndSyncFromDatabase(); err != nil {
	log.Error("init and sync from database failed, wait for the next try", "error", err)
	return
}
```
InitAndSyncFromDatabase only needs to succeed once (initDone short-circuits subsequent calls).
It should be moved out of the utils.Loop — retry init until success first, then start the assembly loop. This avoids unnecessary lock acquisition and function call overhead on every tick after init is done.
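A rough sketch of that split, assuming the utils.Loop(ctx, interval, fn) signature used elsewhere in this PR and the existing r.ctx/r.batchCache fields; the 5-second cadence and logging are illustrative only.

```go
go func() {
	// Phase 1: retry initialization until it succeeds or the context ends.
	for {
		err := r.batchCache.InitAndSyncFromDatabase()
		if err == nil {
			break
		}
		log.Error("init and sync from database failed, will retry", "error", err)
		select {
		case <-r.ctx.Done():
			return
		case <-time.After(5 * time.Second):
		}
	}
	// Phase 2: periodic assembly, without re-running the init path every tick.
	utils.Loop(r.ctx, 5*time.Second, func() {
		if err := r.batchCache.AssembleCurrentBatchHeader(); err != nil {
			log.Error("assemble current batch header failed", "error", err)
		}
	})
}()
```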
```go
for i := fi.Uint64(); i <= ci.Uint64(); i++ {
	batchHash, err := bc.rollupContract.CommittedBatches(nil, new(big.Int).SetUint64(i))
	if err != nil {
		return err
	}
	batchStorage, exist := batches[i]
	if !exist || !bytes.Equal(batchHash[:], batchStorage.Hash.Bytes()) {
		err = bc.batchStorage.DeleteAllSealedBatches()
		if err != nil {
			return err
		}
		// batch not contiguous or batch is invalid
		return bc.InitAndSyncFromRollup()
	}
}
```
The hash-check loop only covers fi to ci, but maxIndex (from local storage) could be greater than ci — e.g., batches sealed but not yet committed to L1.
For those unvalidated batches, do we need to validate the batchIndex and blockNumber to make sure they are in the right order?
tx-submitter/batch/batch_cache.go
Outdated
```go
	return bc.InitAndSyncFromRollup()
}
maxIndex := indices[0]
if len(indices) > 0 {
```
len(indices) > 0 is always true since the len(indices) == 0 case already returns at line 196-198.
tx-submitter/batch/batch_cache.go
Outdated
```go
	}
	return nil
}
store, err := bc.rollupContract.BatchDataStore(nil, fi)
```
Should this not use the last finalized batch index here, but rather the last batch's end block?
Summary by CodeRabbit
New Features
Improvements