
feat(jsonrpc): add resource restrict for jsonrpc #6728

Open
317787106 wants to merge 29 commits into tronprotocol:develop from 317787106:hotfix/restrict_jsonrpc_size

Conversation

@317787106 (Collaborator) commented Apr 28, 2026

What does this PR do?

Adds configurable resource limits to the JSON-RPC endpoint to prevent memory exhaustion and abuse from oversized requests or responses. Closes #6632

Changes:

  1. Batch size limit (node.jsonrpc.maxBatchSize, default: 100)

    • Validates the array length of batch JSON-RPC requests before dispatching.
    • Requests exceeding the limit are rejected with error code -32005 (exceed limit).
    • The check is skipped when maxBatchSize ≤ 0 (no limit).
  2. Empty batch rejection

    • An empty batch array [] is now rejected with error code -32600 (Invalid Request) per JSON-RPC 2.0 §6: "the response from the Server MUST be a single Response object" when the input is not an array with at least one value.
    • The response is a single error object (not an array), matching the spec requirement.
  3. Response size limit (node.jsonrpc.maxResponseSize, default: 25 MB)

    • Introduces BufferedResponseWrapper: intercepts getOutputStream() and getWriter() writes into an in-memory buffer. When a write would exceed the configured limit, it sets an overflow flag and resets the buffer instead of continuing to accumulate bytes, bounding worst-case memory usage to at most maxResponseSize.
    • Introduces CachedBodyRequestWrapper: replays the pre-read request body via both getInputStream() and getReader(), so the body can be inspected before being forwarded to JsonRpcServer.
    • After the handler returns, the servlet checks isOverflow() and — if set — discards the partial buffer and returns error code -32003 (response too large).
  4. Address list limit (node.jsonrpc.maxAddressSize, default: 1000)

    • In LogFilter, validates the address array length in eth_getLogs / eth_newFilter requests.
    • Requests exceeding the limit are rejected with JsonRpcInvalidParamsException.
  5. Structured JSON-RPC error responses

    • writeJsonRpcError uses ObjectMapper to build error responses safely, avoiding JSON injection from error messages.
    • Error codes follow the JSON-RPC 2.0 spec; the two limit codes use the spec's reserved implementation-defined server-error range (-32000 to -32099):
      • -32700 parse error
      • -32600 invalid request
      • -32603 internal error
      • -32005 exceed limit
      • -32003 response too large
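The batch-size validation in item 1 and the empty-batch rejection in item 2 can be sketched roughly as follows (a minimal standalone sketch; the method and constant names are illustrative, not the actual PR code):

```java
public class BatchLimitSketch {
    static final int ERR_INVALID_REQUEST = -32600; // empty batch []
    static final int ERR_EXCEED_LIMIT = -32005;    // batch larger than maxBatchSize

    // Returns a JSON-RPC error code, or 0 if the batch may be dispatched.
    static int validateBatchSize(int batchSize, int maxBatchSize) {
        if (batchSize == 0) {
            // JSON-RPC 2.0: an empty batch gets a single Invalid Request error object
            return ERR_INVALID_REQUEST;
        }
        if (maxBatchSize > 0 && batchSize > maxBatchSize) {
            // the check is skipped when maxBatchSize <= 0 (no limit)
            return ERR_EXCEED_LIMIT;
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(validateBatchSize(0, 100));   // -32600
        System.out.println(validateBatchSize(101, 100)); // -32005
        System.out.println(validateBatchSize(50, 0));    // 0, limit disabled
    }
}
```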

Why are these changes required?

  • Without limits, a client can send an arbitrarily large batch, trigger an expensive query with many addresses, or force the node to serialize a massive response — all of which cause unbounded memory growth.
  • The response buffer caps worst-case allocation to maxResponseSize and fails fast rather than buffering the entire response before checking.
  • Rejecting [] closes a spec compliance gap: previously the empty batch fell through to JsonRpcServer, whose behavior for an empty array is undefined by the spec.

Configuration

node {
  jsonrpc {
    # Max JSON-RPC batch array size; 0 = no limit
    maxBatchSize = 100
    # Max response body in bytes (default 25 MB)
    maxResponseSize = 26214400
    # Max address entries in eth_getLogs / eth_newFilter
    maxAddressSize = 1000
  }
}
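For illustration, a batch exceeding maxBatchSize would be answered with a single JSON-RPC error object along these lines (the exact message string is illustrative, not taken from the PR):

```
{
  "jsonrpc": "2.0",
  "id": null,
  "error": {
    "code": -32005,
    "message": "exceed limit"
  }
}
```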

Tests

Test coverage:
JsonRpcServletTest: parse error, empty body, empty batch [] → -32600, batch size limit, response overflow, internal error, normal path
BufferedResponseWrapperTest: write, overflow, reset, getWriter delegation
CachedBodyRequestWrapperTest: body replay via getInputStream and getReader
JsonRpcTest.testLogFilterAddressSizeLimit: address list at the limit (passes), at limit+1 (throws with "exceed max addresses:"), limit=0 disables the check

@halibobo1205 halibobo1205 added this to the GreatVoyage-v4.8.2 milestone Apr 29, 2026
@halibobo1205 halibobo1205 added topic:json-rpc topic:api rpc/http related issue labels Apr 29, 2026
@github-actions github-actions Bot requested review from 0xbigapple and bladehan1 May 8, 2026 07:39
317787106 and others added 4 commits May 8, 2026 22:10
Conflict resolution (all conflicts are additive — keep both sides):
- NodeConfig.java: keep maxBatchSize/maxResponseSize/maxAddressSize (HEAD)
  and maxMessageSize (develop)
- reference.conf: keep all four config entries
- Args.java: keep all four PARAMETER assignments

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@halibobo1205 halibobo1205 requested a review from waynercheung May 9, 2026 04:03
batchResult.add(responseNode);
}

byte[] finalBytes = MAPPER.writeValueAsBytes(batchResult);
Collaborator:

[NIT] Add a final size check on the serialized batch result

accumulatedSize is incremented by subOutput.toByteArray().length + (separator? 1 : 0), which is an estimate. The final MAPPER.writeValueAsBytes(batchResult) re-serializes parsed JsonNodes, and the resulting byte length can drift from the estimate (whitespace, unicode escaping, ObjectNode key ordering). The drift is tiny in practice but means the maxResponseSize hard cap is approximate, not exact, for the batch path.

Suggestion: after computing finalBytes, do one more cheap check before writing:

byte[] finalBytes = MAPPER.writeValueAsBytes(batchResult);
if (maxResponseSize > 0 && finalBytes.length > maxResponseSize) {
  writeJsonRpcError(resp, JsonRpcError.RESPONSE_TOO_LARGE,
      "Response exceeds the limit of " + maxResponseSize + " bytes", null, true);
  return;
}

This turns the per-iteration accumulation into a fast short-circuit and lets the final check enforce the hard cap exactly.

Collaborator Author:

We can't satisfy both requirements at the same time:

  1. If the accumulated size of the first few items does not exceed the threshold, return them normally, while marking the subsequent ones as overflowed.
  2. When the overall result overflows, return only a single overflow indicator.

writeJsonRpcError(resp, JsonRpcError.INTERNAL_ERROR, "Internal error", null, true);
return;
}
batchResult.add(responseNode);
Collaborator:

[NIT] All-notification batch returns [] instead of an empty body

When every sub-request in a batch is a notification (no id), batchResult ends up empty and the servlet writes [] with HTTP 200. JSON-RPC 2.0 § 6 says: "If there are no Response objects contained within the Response array as it is to be sent to the client, the server MUST NOT return an empty Array and should return nothing at all." The current JsonRpcServletTest.batchLimitDisabled_largeBatchAllowed test asserts body.size() == 0 — i.e. the test acknowledges and pins this behavior.

Most ETH-compatible clients tolerate [], but this is a low-effort spec compliance gap. Suggestion: either (a) flip to a spec-compliant empty response (resp.setStatus(204); resp.setContentLength(0); and skip the write) when batchResult.isEmpty(), or (b) document this intentional deviation in the class javadoc / PR description.

Collaborator Author:

Thanks for your review, added:

    // JSON-RPC 2.0 §6: MUST NOT return an empty Array when there are no response objects.
    if (batchResult.isEmpty()) {
      resp.setStatus(HttpServletResponse.SC_OK);
      resp.setContentLength(0);
      return;
    }

batchResult.add(responseNode);
}

byte[] finalBytes = MAPPER.writeValueAsBytes(batchResult);
Collaborator:

[NOTE] Batch path peak memory ≈ ~3x maxResponseSize — worth documenting for ops

Not a defect — just an operational observation that helps deployers size the JVM heap correctly.

At this line, the batch path holds three coexisting representations of the response:

  1. batchResult (ArrayNode of parsed JsonNodes) — usually ~1.5–2x the underlying byte size due to object overhead and unicode string expansion.
  2. finalBytes — the freshly serialized byte[] (~1x the byte size).
  3. The just-written response on the way to the socket buffer.

So per concurrent batch the transient peak is roughly ~3x the configured maxResponseSize (default 25 MB → ~75 MB peak per batch; 100 concurrent batches → ~7.5 GB). Operators sizing Xmx from the upper-bound config alone would underestimate.

Suggestion (no code change required, doc-only):

  • Add a line to the node.json-rpc.max-response-size comment in reference.conf: e.g. "actual peak heap per concurrent batch ≈ 3x this value due to JsonNode tree + serialized bytes coexisting briefly."
  • Or an implementation note on handleBatch's javadoc.

This is strictly informational; feel free to skip if it's already covered in your deploy docs.
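The back-of-envelope numbers above can be checked with a few lines of arithmetic (the ~3x factor is taken from the comment itself; this is purely illustrative):

```java
// Rough heap math for the ~3x batch-path peak described above.
public class HeapSizing {
    static long peakPerBatch(long maxResponseSize) {
        // JsonNode tree + serialized byte[] + in-flight response copy
        return 3 * maxResponseSize;
    }

    public static void main(String[] args) {
        long maxResponseSize = 26214400L; // default 25 MB
        long perBatch = peakPerBatch(maxResponseSize);
        System.out.println(perBatch);        // 78643200 bytes (75 MiB)
        System.out.println(perBatch * 100);  // 7864320000 bytes for 100 concurrent batches
    }
}
```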

Collaborator Author:

There is no need to tell users who only care about the result.

return;
}

ByteArrayOutputStream subOutput = new ByteArrayOutputStream();
Collaborator:

[MUST] Batch sub‑responses are buffered without a hard limit. subOutput is a plain ByteArrayOutputStream, so one large sub‑response can allocate beyond maxResponseSize before line 183 checks the size. Please replace this with a bounded output stream and stop the batch as soon as overflow is detected.
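The bounded output stream asked for here could be sketched as follows (a minimal standalone class illustrating the idea, not the actual PR code; names are illustrative):

```java
import java.io.ByteArrayOutputStream;

// Sketch of a bounded buffer: once a write would exceed the cap, set an
// overflow flag and discard the buffered bytes, so memory stays under maxBytes.
public class BoundedBuffer {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final long maxBytes;
    private boolean overflow;

    public BoundedBuffer(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    public void write(byte[] b, int off, int len) {
        if (overflow) {
            return; // already overflowed; drop further writes
        }
        // long arithmetic so the size check itself cannot wrap around
        if (maxBytes > 0 && (long) buffer.size() + len > maxBytes) {
            overflow = true;
            buffer.reset(); // discard partial data instead of accumulating
            return;
        }
        buffer.write(b, off, len);
    }

    public boolean isOverflow() {
        return overflow;
    }

    public byte[] toByteArray() {
        return buffer.toByteArray();
    }

    public static void main(String[] args) {
        BoundedBuffer bb = new BoundedBuffer(4);
        bb.write(new byte[]{1, 2, 3}, 0, 3);
        System.out.println(bb.isOverflow());         // false
        bb.write(new byte[]{4, 5}, 0, 2);            // 3 + 2 > 4 -> overflow
        System.out.println(bb.isOverflow());         // true
        System.out.println(bb.toByteArray().length); // 0, buffer was reset
    }
}
```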

}
accumulatedSize += addition;

JsonNode responseNode;
Collaborator:

[SHOULD] This stores all sub‑responses in batchResult and serializes the whole batch again at line 200. That creates another unbounded in‑memory copy before writing the response. Please accumulate serialized sub‑response bytes into a bounded batch buffer instead of building a full ArrayNode.

if (overflow) {
return;
}
if (maxBytes > 0 && buffer.size() + len > maxBytes) {
Collaborator:

[MUST] buffer.size() + len is unsafe int arithmetic and can overflow before the limit check. Please use checked or long arithmetic, e.g. long nextSize = (long) buffer.size() + len, and mark overflow when it exceeds maxBytes.
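The suggested checked arithmetic amounts to widening before adding, so the comparison cannot wrap around Integer.MAX_VALUE (a standalone sketch of the suggestion, not PR code):

```java
public class CheckedSize {
    // Widen to long before adding: the sum cannot wrap, so the limit
    // check is reliable even for sizes near Integer.MAX_VALUE.
    static boolean wouldOverflow(int bufferSize, int len, long maxBytes) {
        long nextSize = (long) bufferSize + len;
        return maxBytes > 0 && nextSize > maxBytes;
    }

    public static void main(String[] args) {
        // With plain int arithmetic this sum wraps to a negative number
        // and would slip past the limit check; with long it is caught.
        System.out.println(wouldOverflow(Integer.MAX_VALUE, 10, 26214400L)); // true
        System.out.println(wouldOverflow(100, 10, 26214400L));               // false
    }
}
```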

Collaborator Author:

There is little possibility of int overflow, so it stays as is.

}

// comma separator between array elements
int addition = responseBytes.length + (!batchResult.isEmpty() ? 1 : 0);
Collaborator:

[MUST] responseBytes.length + ... and accumulatedSize + addition are unsafe int additions. Please use checked or long arithmetic for response‑size accounting so overflow cannot bypass maxResponseSize.

Collaborator Author:

The possibility of integer overflow is extremely low, so we'll leave it unchanged for now.


Labels

topic:api rpc/http related issue topic:json-rpc

Projects

Status: No status

Development

Successfully merging this pull request may close these issues.

[Feature] Introduce resource limits for JSON-RPC (batch size, response size, address size, timeout)

6 participants