feat(jsonrpc): add resource restrict for jsonrpc#6728
317787106 wants to merge 29 commits into tronprotocol:develop from
Conversation
…r twice; add several methods of HttpServletRequestWrapper
Conflict resolution (all conflicts are additive — keep both sides):
- NodeConfig.java: keep maxBatchSize/maxResponseSize/maxAddressSize (HEAD) and maxMessageSize (develop)
- reference.conf: keep all four config entries
- Args.java: keep all four PARAMETER assignments

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
```java
  batchResult.add(responseNode);
}

byte[] finalBytes = MAPPER.writeValueAsBytes(batchResult);
```
[NIT] Add a final size check on the serialized batch result
accumulatedSize is incremented by subOutput.toByteArray().length + (separator? 1 : 0), which is an estimate. The final MAPPER.writeValueAsBytes(batchResult) re-serializes parsed JsonNodes, and the resulting byte length can drift from the estimate (whitespace, unicode escaping, ObjectNode key ordering). The drift is tiny in practice but means the maxResponseSize hard cap is approximate, not exact, for the batch path.
Suggestion: after computing finalBytes, do one more cheap check before writing:
```java
byte[] finalBytes = MAPPER.writeValueAsBytes(batchResult);
if (maxResponseSize > 0 && finalBytes.length > maxResponseSize) {
  writeJsonRpcError(resp, JsonRpcError.RESPONSE_TOO_LARGE,
      "Response exceeds the limit of " + maxResponseSize + " bytes", null, true);
  return;
}
```

This turns the per-iteration accumulation into a fast short-circuit and lets the final check enforce the hard cap exactly.
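The drift between the accumulated estimate and the final serialized length is easy to reproduce in isolation. A hypothetical standalone check (assuming Jackson's `ObjectMapper`, which the servlet's `MAPPER` already is; `DriftDemo` is an illustrative name, not from the PR) shows re-serialization normalizing whitespace:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical demo, not PR code: re-serializing a parsed JsonNode
// normalizes whitespace, so byte lengths measured on the original
// sub-response text can drift from the final serialized batch.
final class DriftDemo {
  /** Returns {original UTF-8 length, length after parse + re-serialize}. */
  static int[] lengths(String original) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    byte[] reserialized = mapper.writeValueAsBytes(mapper.readTree(original));
    return new int[] {
        original.getBytes(java.nio.charset.StandardCharsets.UTF_8).length,
        reserialized.length
    };
  }
}
```

For an input like `{ "a" : 1 }` the re-serialized form is the compact `{"a":1}`, so the two lengths differ even though the JSON is semantically identical.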
We can't satisfy both requirements at the same time:
- If the accumulated size of the first few items does not exceed the threshold, return them normally, while marking the subsequent ones as overflowed.
- When the overall result overflows, return only a single overflow indicator.
```java
    writeJsonRpcError(resp, JsonRpcError.INTERNAL_ERROR, "Internal error", null, true);
    return;
  }
  batchResult.add(responseNode);
```
[NIT] All-notification batch returns [] instead of an empty body
When every sub-request in a batch is a notification (no id), batchResult ends up empty and the servlet writes [] with HTTP 200. JSON-RPC 2.0 § 6 says: "If there are no Response objects contained within the Response array as it is to be sent to the client, the server MUST NOT return an empty Array and should return nothing at all." The current JsonRpcServletTest.batchLimitDisabled_largeBatchAllowed test asserts body.size() == 0 — i.e. the test acknowledges and pins this behavior.
Most ETH-compatible clients tolerate [], but this is a low-effort spec compliance gap. Suggestion: either (a) flip to a spec-compliant empty response (resp.setStatus(204); resp.setContentLength(0); and skip the write) when batchResult.isEmpty(), or (b) document this intentional deviation in the class javadoc / PR description.
Thanks for your review; added:

```java
// JSON-RPC 2.0 §6: MUST NOT return an empty Array when there are no response objects.
if (batchResult.isEmpty()) {
  resp.setStatus(HttpServletResponse.SC_OK);
  resp.setContentLength(0);
  return;
}
```
```java
  batchResult.add(responseNode);
}

byte[] finalBytes = MAPPER.writeValueAsBytes(batchResult);
```
[NOTE] Batch path peak memory ≈ ~3x maxResponseSize — worth documenting for ops
Not a defect — just an operational observation that helps deployers size the JVM heap correctly.
At this line, the batch path holds three coexisting representations of the response:
- `batchResult` (ArrayNode of parsed `JsonNode`s): usually ~1.5–2x the underlying byte size due to object overhead and unicode string expansion.
- `finalBytes`: the freshly serialized `byte[]` (~1x the byte size).
- The just-written response on the way to the socket buffer.
So per concurrent batch the transient peak is roughly ~3x the configured maxResponseSize (default 25 MB → ~75 MB peak per batch; 100 concurrent batches → ~7.5 GB). Operators sizing Xmx from the upper-bound config alone would underestimate.
Suggestion (no code change required, doc-only):
- Add a line to the `node.json-rpc.max-response-size` comment in `reference.conf`, e.g. "actual peak heap per concurrent batch ≈ 3x this value due to the JsonNode tree and serialized bytes coexisting briefly."
- Or add an implementation note to `handleBatch`'s javadoc.
This is strictly informational; feel free to skip if it's already covered in your deploy docs.
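If the maintainers did want the doc-only note, it could look like this in `reference.conf` (the comment wording and the 25 MB default value shown here are illustrative, following the key naming used elsewhere in this review):

```hocon
node {
  json-rpc {
    # Hard cap on a single JSON-RPC response body, in bytes (0 = unlimited).
    # Note: actual peak heap per concurrent batch can be ~3x this value,
    # since the parsed JsonNode tree and the serialized bytes coexist briefly.
    max-response-size = 26214400  # 25 MB
  }
}
```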
There is no need to tell users who only care about the result.
```java
  return;
}

ByteArrayOutputStream subOutput = new ByteArrayOutputStream();
```
[MUST] Batch sub‑responses are buffered without a hard limit. subOutput is a plain ByteArrayOutputStream, so one large sub‑response can allocate beyond maxResponseSize before line 183 checks the size. Please replace this with a bounded output stream and stop the batch as soon as overflow is detected.
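A minimal sketch of the bounded-stream idea the comment asks for (class and method names are hypothetical; the overflow-flag-plus-reset behavior mirrors what the PR description says `BufferedResponseWrapper` does):

```java
import java.io.ByteArrayOutputStream;

// Hypothetical sketch, not PR code: a ByteArrayOutputStream that stops
// accumulating as soon as the configured byte limit would be exceeded.
class BoundedByteArrayOutputStream extends ByteArrayOutputStream {
  private final int maxBytes;   // 0 or negative = unlimited
  private boolean overflow;

  BoundedByteArrayOutputStream(int maxBytes) {
    this.maxBytes = maxBytes;
  }

  @Override
  public synchronized void write(byte[] b, int off, int len) {
    if (overflow) {
      return; // drop everything after the first overflow
    }
    if (maxBytes > 0 && (long) size() + len > maxBytes) {
      overflow = true;
      reset(); // discard the partial buffer so memory stays bounded
      return;
    }
    super.write(b, off, len);
  }

  @Override
  public synchronized void write(int b) {
    write(new byte[] {(byte) b}, 0, 1);
  }

  boolean isOverflow() {
    return overflow;
  }
}
```

With such a stream, one oversized sub-response flips `isOverflow()` and frees its buffer immediately, instead of allocating past `maxResponseSize` before the later size check runs.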
```java
}
accumulatedSize += addition;

JsonNode responseNode;
```
[SHOULD] This stores all sub‑responses in batchResult and serializes the whole batch again at line 200. That creates another unbounded in‑memory copy before writing the response. Please accumulate serialized sub‑response bytes into a bounded batch buffer instead of building a full ArrayNode.
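One way to realize this suggestion is to assemble the batch array directly from the already-serialized sub-response bytes, with a cap, instead of keeping a parsed `ArrayNode` and re-serializing it. A hypothetical sketch (all names illustrative, not from the PR):

```java
import java.io.ByteArrayOutputStream;

// Hypothetical sketch: stream serialized sub-responses into one capped
// batch buffer, inserting the ',' separators and '[' / ']' brackets
// ourselves, so no second full in-memory copy is ever built.
class BatchBuffer {
  private final ByteArrayOutputStream buf = new ByteArrayOutputStream();
  private final long maxBytes;  // 0 or negative = unlimited
  private boolean first = true;
  private boolean overflow;

  BatchBuffer(long maxBytes) {
    this.maxBytes = maxBytes;
  }

  /** Appends one already-serialized sub-response; returns false on overflow. */
  boolean append(byte[] responseBytes) {
    if (overflow) {
      return false;
    }
    long addition = (long) responseBytes.length + (first ? 0 : 1); // +1 for ','
    if (maxBytes > 0 && buf.size() + 2 + addition > maxBytes) {    // +2 for '[' and ']'
      overflow = true;
      buf.reset(); // free the partial batch immediately
      return false;
    }
    if (!first) {
      buf.write(',');
    }
    buf.write(responseBytes, 0, responseBytes.length);
    first = false;
    return true;
  }

  /** Wraps the accumulated elements in a JSON array. */
  byte[] toJsonArray() {
    byte[] body = buf.toByteArray();
    byte[] out = new byte[body.length + 2];
    out[0] = '[';
    System.arraycopy(body, 0, out, 1, body.length);
    out[out.length - 1] = ']';
    return out;
  }

  boolean isOverflow() {
    return overflow;
  }
}
```

Because the size accounting runs against the exact bytes that will be written, this shape also removes the estimate-vs-final drift discussed in the earlier comment.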
```java
if (overflow) {
  return;
}
if (maxBytes > 0 && buffer.size() + len > maxBytes) {
```
[MUST] buffer.size() + len is unsafe int arithmetic and can overflow before the limit check. Please use checked or long arithmetic, e.g. long nextSize = (long) buffer.size() + len, and mark overflow when it exceeds maxBytes.
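The long-arithmetic version of the check can be isolated in a tiny helper to show why it matters (hypothetical names; a sketch of the suggestion, not the PR's code):

```java
// Hypothetical sketch: do size accounting in long so the sum cannot
// wrap around to a negative int and slip past the limit comparison.
final class SizeGuard {
  /** True when writing len more bytes would exceed maxBytes (0 = unlimited). */
  static boolean wouldOverflow(int currentSize, int len, long maxBytes) {
    long nextSize = (long) currentSize + len; // long math: no int wrap-around
    return maxBytes > 0 && nextSize > maxBytes;
  }
}
```

With plain int math, `Integer.MAX_VALUE + 1` wraps to a negative value and the `> maxBytes` comparison silently passes; the long version catches it.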
There is little possibility of int overflow here, so it stays as-is.
```java
}

// comma separator between array elements
int addition = responseBytes.length + (!batchResult.isEmpty() ? 1 : 0);
```
[MUST] responseBytes.length + ... and accumulatedSize + addition are unsafe int additions. Please use checked or long arithmetic for response‑size accounting so overflow cannot bypass maxResponseSize.
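Besides widening to long, the JDK's checked arithmetic is another way to make this accounting safe. A hypothetical sketch using `Math.addExact`, which throws `ArithmeticException` on int overflow:

```java
// Hypothetical sketch: checked int addition for response-size accounting.
// Math.addExact throws ArithmeticException instead of wrapping around.
final class CheckedSize {
  /** True when adding `addition` bytes would exceed maxResponseSize (0 = unlimited). */
  static boolean exceedsLimit(int accumulatedSize, int addition, int maxResponseSize) {
    if (maxResponseSize <= 0) {
      return false; // limit disabled
    }
    try {
      return Math.addExact(accumulatedSize, addition) > maxResponseSize;
    } catch (ArithmeticException e) {
      return true; // int overflow necessarily exceeds any positive limit
    }
  }
}
```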
The possibility of integer overflow is extremely low, so we'll leave it unchanged for now.
What does this PR do?
Adds configurable resource limits to the JSON-RPC endpoint to prevent memory exhaustion and abuse from oversized requests or responses. Closes #6632
Changes:
- Batch size limit (`node.jsonrpc.maxBatchSize`, default: 100). Batches over the limit are rejected with error code `-32005` (exceed limit); `maxBatchSize ≤ 0` means no limit.
- Empty batch rejection. `[]` is now rejected with error code `-32600` (Invalid Request) per JSON-RPC 2.0 §6: "the response from the Server MUST be a single Response object" when the input is not an array with at least one value.
- Response size limit (`node.jsonrpc.maxResponseSize`, default: 25 MB).
  - `BufferedResponseWrapper` intercepts `getOutputStream()` and `getWriter()` and writes into an in-memory buffer. When a write would exceed the configured limit, it sets an `overflow` flag and resets the buffer instead of continuing to accumulate bytes, bounding worst-case memory usage to at most `maxResponseSize`.
  - `CachedBodyRequestWrapper` replays the pre-read request body via both `getInputStream()` and `getReader()`, so the body can be inspected before being forwarded to `JsonRpcServer`.
  - `isOverflow()` is checked and, if set, the partial buffer is discarded and error code `-32003` (response too large) is returned.
- Address list limit (`node.jsonrpc.maxAddressSize`, default: 1000). In `LogFilter`, the `address` array length in `eth_getLogs`/`eth_newFilter` requests is validated; violations throw `JsonRpcInvalidParamsException`.
- Structured JSON-RPC error responses. `writeJsonRpcError` uses `ObjectMapper` to build error responses safely, avoiding JSON injection from error messages. Error codes: `-32700` parse error, `-32600` invalid request, `-32603` internal error, `-32005` exceed limit, `-32003` response too large.

Why are these changes required?
- The response path is bounded by `maxResponseSize` and fails fast rather than buffering the entire response before checking.
- Rejecting `[]` closes a spec compliance gap: previously the empty batch fell through to `JsonRpcServer`, whose behavior for an empty array is undefined by the spec.

Configuration
Tests
- `JsonRpcServletTest`: `[]` → `-32600`, batch size limit, response overflow, internal error, normal path
- `BufferedResponseWrapperTest`: `getWriter` delegation
- `CachedBodyRequestWrapperTest`: `getInputStream` and `getReader`
- `JsonRpcTest.testLogFilterAddressSizeLimit`: over-limit error message ("exceed max addresses:"), limit=0 disabled