@li-zhixin commented on Jan 13, 2026

Summary

Introduces a batching mechanism for log reporting to reduce network requests and improve performance.

Changes

  • Add LogBatcher class to implement log batching logic
  • Support batching strategy based on entry count and payload size limits
  • Add configuration options: batchLogs, batchLogsSize, batchPayloadLimit
  • Implement flushLogs() method to manually flush pending logs
  • Update TypeScript type definitions and documentation
  • Add comprehensive unit tests covering various batching scenarios

Benefits

  • Reduces network overhead by consolidating multiple log entries into single requests
  • Configurable batching parameters to balance between latency and throughput
  • Maintains backward compatibility
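The three option names above come from this change set; as a hedged sketch of how they might be wired into the client options (the apiKey/endpoint/project/launch fields and the overall options shape are assumptions based on typical client configuration, not part of this PR):

```javascript
// Sketch only: batchLogs, batchLogsSize, and batchPayloadLimit are from this
// PR; the remaining fields are assumed typical client-javascript options.
const options = {
  apiKey: '<api-key>',
  endpoint: 'https://reportportal.example.com/api/v1',
  project: 'my_project',
  launch: 'my_launch',
  batchLogs: true,                     // opt in to batching (assumed default: false)
  batchLogsSize: 20,                   // flush after this many entries
  batchPayloadLimit: 64 * 1024 * 1024, // flush before payload reaches ~64 MB
};
```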

Summary by CodeRabbit

  • New Features

    • Optional log batching with configurable enable, batch size, and payload limit
    • Added flushLogs() API to manually flush pending batched logs; batches also flush automatically on size/payload limits or when finishing a launch
  • Documentation

    • Documented batch logging options, defaults, and usage examples
  • Tests

    • Added comprehensive tests for batching behavior, limits, flush semantics, and edge cases


@li-zhixin li-zhixin requested a review from AmsterGet as a code owner January 13, 2026 10:44
coderabbitai bot commented Jan 13, 2026

Walkthrough

Adds configurable log batching: constants and multipart-size helpers, a LogBatcher class with append/flush, integration into ReportPortalClient with batch send/flush (including flush on finishLaunch), TypeScript declarations, tests, and README docs for three new config options and a flushLogs API.

Changes

  • Configuration & Types (lib/commons/config.js, index.d.ts): Add batchLogs, batchLogsSize, batchPayloadLimit to runtime config (with defaults/validation) and declare ReportPortalClient.flushLogs(): Promise<void>.
  • Log Batching Core (lib/logs/constants.js, lib/logs/helpers.js, lib/logs/batcher.js): New constants (MAX_LOG_BATCH_SIZE, MAX_LOG_BATCH_PAYLOAD_SIZE), multipart-size helpers (calculateJsonPartSize, calculateFilePartSize, calculateMultipartSize), and LogBatcher implementing append(), flush(), and batch/payload tracking.
  • Module Aggregation (lib/logs/index.js): Barrel export for LogBatcher, helper functions, and constants.
  • Client Integration (lib/report-portal-client.js): Conditionally instantiate LogBatcher when config.batchLogs is set; add sendLogViaBatcher, buildBatchMultiPartStream, sendLogBatch, and flushLogs; route per-log flows through the batcher; ensure flush on finishLaunch.
  • Tests & Docs (__tests__/log-batcher.spec.js, __tests__/report-portal-client.spec.js, __tests__/config.spec.js, README.md): Add extensive unit tests covering batcher behavior, multipart stream building, client integration, and flush; update README with batchLogs, batchLogsSize, batchPayloadLimit docs and flushLogs() usage/conditions.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant RPClient
    participant LogBatcher
    participant REST_API

    Client->>RPClient: saveLog(logData)
    alt batchLogs enabled
        RPClient->>LogBatcher: append(logRequest)
        LogBatcher-->>LogBatcher: accumulate (count & payload)
        alt batch full (count or payload)
            LogBatcher->>RPClient: return batch
            RPClient->>REST_API: sendLogBatch(multipart stream)
            REST_API-->>RPClient: response
            RPClient-->>Client: resolve/push result(s)
        else accumulating
            LogBatcher-->>RPClient: pending (tempId + promise)
            RPClient-->>Client: return tempId & promise
        end
    else
        RPClient->>REST_API: send individual log
        REST_API-->>RPClient: response
        RPClient-->>Client: result
    end
sequenceDiagram
    participant Client
    participant RPClient
    participant LogBatcher
    participant REST_API

    Client->>RPClient: flushLogs()
    RPClient->>LogBatcher: flush()
    LogBatcher-->>RPClient: current batch (if any)
    alt batch exists
        RPClient->>RPClient: buildBatchMultiPartStream(batch)
        RPClient->>REST_API: sendLogBatch(multipart stream)
        REST_API-->>RPClient: response
    else
        Note over RPClient: nothing to flush
    end
    RPClient-->>Client: Promise resolved

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I nibble bytes and stitch them tight,
I gather logs from day and night,
When count or size decrees they're done,
I hop them off — batched, one by one,
Off they go, a carrot-flavored run. 🥕

🚥 Pre-merge checks: ✅ 3 passed

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title clearly summarizes the main change (implementing batch sending for logs to optimize performance), which matches the batching infrastructure added across multiple files.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, above the required threshold of 80.00%.



coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In @lib/logs/batcher.js:
- Around lines 16-25: In appendInternal, when size >= this.payloadLimit and this.batch.length > 0, the current code returns the existing batch but drops logReq. Change it to return the existing batch while queuing the oversized log for the next flush: capture const { batch } = this, set this.batch = [logReq], set this.payloadSize = size, and return batch. This preserves the returned batch and ensures the oversized logReq is not lost. Alternatively, return [logReq] immediately and keep this.batch intact; either way, be explicit about which behavior you pick.
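A minimal, self-contained sketch of the first suggested behavior (the class and method names mirror the PR's LogBatcher; the precomputed `size` argument stands in for the multipart-size helpers):

```javascript
// Sketch of the suggested fix: when adding a log would exceed the payload
// limit, the accumulated batch is handed back and the new log is queued as
// the start of the next batch, so it is never dropped.
class LogBatcher {
  constructor(entryNum, payloadLimit) {
    this.entryNum = entryNum;       // max entries per batch
    this.payloadLimit = payloadLimit; // max payload bytes per batch
    this.batch = [];
    this.payloadSize = 0;
  }

  appendInternal(logReq, size) {
    if (this.payloadSize + size >= this.payloadLimit && this.batch.length > 0) {
      const { batch } = this;   // hand back the accumulated batch...
      this.batch = [logReq];    // ...and queue the incoming log for later
      this.payloadSize = size;
      return batch;
    }
    this.batch.push(logReq);
    this.payloadSize += size;
    if (this.batch.length >= this.entryNum) {
      return this.flush();
    }
    return null; // still accumulating
  }

  flush() {
    const { batch } = this;
    this.batch = [];
    this.payloadSize = 0;
    return batch.length > 0 ? batch : null;
  }
}

// Demo: a small log accumulates; an oversized one forces the first batch
// out and is itself delivered by a later flush.
const batcher = new LogBatcher(3, 100);
const first = batcher.appendInternal({ id: 1 }, 10);   // null: accumulating
const second = batcher.appendInternal({ id: 2 }, 200); // returns [{ id: 1 }]
const rest = batcher.flush();                          // [{ id: 2 }]
```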

In @lib/report-portal-client.js:
- Around lines 831-877: Replace the direct UniqId() call in sendLogViaBatcher with this.getUniqId() to match the codebase convention, and make promise resolution consistent: when this.logBatcher.append(logRequest) returns null, resolve the associated promise with the same shape that sendLogBatch(batch).then would resolve with (e.g., a standardized response, or null with the same field names) so callers receive a consistent response via this.map[tempId].promiseFinish. Keep the references to tempId, this.map[tempId], sendLogBatch, logBatcher.append, and promiseFinish when implementing the change.
🧹 Nitpick comments (5)
lib/commons/config.js (1)

120-122: Consider validating numeric batch options.

The nullish coalescing handles missing values correctly, but there's no validation for invalid values (e.g., negative numbers, zero, or non-numeric types). A user passing batchLogsSize: -1 or batchPayloadLimit: 'abc' could cause unexpected behavior in LogBatcher.

💡 Suggested validation
-      batchLogs: options.batchLogs ?? false,
-      batchLogsSize: options.batchLogsSize ?? MAX_LOG_BATCH_SIZE,
-      batchPayloadLimit: options.batchPayloadLimit ?? MAX_LOG_BATCH_PAYLOAD_SIZE,
+      batchLogs: options.batchLogs ?? false,
+      batchLogsSize: Number.isInteger(options.batchLogsSize) && options.batchLogsSize > 0
+        ? options.batchLogsSize
+        : MAX_LOG_BATCH_SIZE,
+      batchPayloadLimit: Number.isInteger(options.batchPayloadLimit) && options.batchPayloadLimit > 0
+        ? options.batchPayloadLimit
+        : MAX_LOG_BATCH_PAYLOAD_SIZE,
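As a runnable version of that suggestion (the constant values 20 and 64 MB are taken from the defaults mentioned later in this review; the standalone `normalizeBatchOptions` helper is hypothetical, since the real code lives inline in config.js):

```javascript
// Hypothetical normalizer mirroring the suggested validation: missing or
// invalid values fall back to the defaults instead of reaching LogBatcher.
const MAX_LOG_BATCH_SIZE = 20;
const MAX_LOG_BATCH_PAYLOAD_SIZE = 64 * 1024 * 1024;

function normalizeBatchOptions(options = {}) {
  return {
    batchLogs: options.batchLogs ?? false,
    batchLogsSize:
      Number.isInteger(options.batchLogsSize) && options.batchLogsSize > 0
        ? options.batchLogsSize
        : MAX_LOG_BATCH_SIZE,
    batchPayloadLimit:
      Number.isInteger(options.batchPayloadLimit) && options.batchPayloadLimit > 0
        ? options.batchPayloadLimit
        : MAX_LOG_BATCH_PAYLOAD_SIZE,
  };
}

// The problematic inputs from the comment above are silently corrected.
const normalized = normalizeBatchOptions({
  batchLogs: true,
  batchLogsSize: -1,
  batchPayloadLimit: 'abc',
});
```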
lib/logs/helpers.js (1)

25-31: Potential size miscalculation for array-like content.

The fallback at lines 29-30 assumes file.content.length represents byte size. That holds for typed arrays such as Uint8Array, but not for a plain array of objects. Consider being explicit about the supported types.

💡 Suggested refinement
   if (Buffer.isBuffer(file.content)) {
     fileSize = file.content.length;
   } else if (typeof file.content === 'string') {
     fileSize = Buffer.byteLength(file.content, 'utf8');
-  } else if (file.content.length !== undefined) {
+  } else if (ArrayBuffer.isView(file.content)) {
+    fileSize = file.content.byteLength;
+  } else if (typeof file.content.length === 'number') {
     fileSize = file.content.length;
   }
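A self-contained sketch of the refined branching (the standalone `contentByteSize` function is an illustration; the real code assigns to `fileSize` inside calculateFilePartSize):

```javascript
// Sketch: byte size of a file part's content under the suggested refinement.
function contentByteSize(content) {
  if (Buffer.isBuffer(content)) {
    return content.length;                     // Buffers report bytes directly
  }
  if (typeof content === 'string') {
    return Buffer.byteLength(content, 'utf8'); // count bytes, not characters
  }
  if (ArrayBuffer.isView(content)) {
    return content.byteLength;                 // typed arrays: use byteLength
  }
  if (typeof content.length === 'number') {
    return content.length;                     // last-resort array-like fallback
  }
  return 0;
}
```

Note the string case: a multi-byte character like 'é' contributes two bytes in UTF-8, which `content.length` would undercount.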
lib/logs/batcher.js (2)

27-34: Edge case: empty batch when payload limit exceeded.

If this.payloadSize + size >= this.payloadLimit but this.batch.length === 0 (e.g., after the oversized log fix above), the code falls through without returning, which is correct. However, the conditional at line 28 makes the structure slightly confusing.

💡 Clearer structure
     if (this.payloadSize + size >= this.payloadLimit) {
-      if (this.batch.length > 0) {
-        const { batch } = this;
-        this.batch = [logReq];
-        this.payloadSize = size;
-        return batch;
-      }
+      const { batch } = this;
+      this.batch = [logReq];
+      this.payloadSize = size;
+      if (batch.length > 0) {
+        return batch;
+      }
+      return null;
     }
-
-    this.batch.push(logReq);
-    this.payloadSize += size;

This makes the empty-batch case explicit rather than relying on fall-through.


69-75: Getter name batchSize may be misleading.

The getter returns this.batch.length (current count), but based on the JSDoc context and typical naming conventions, batchSize could be confused with the maximum batch size (entryNum). Consider renaming to currentBatchLength or pendingCount for clarity.

__tests__/log-batcher.spec.js (1)

259-345: Consider adding a test for undefined file property.

The edge case tests are comprehensive, but there's no explicit test for when file property is undefined (vs explicitly null). This could be relevant since the implementation checks logReq.file in sendLogViaBatcher.

📝 Suggested additional test case
it('should handle undefined file property', () => {
  const batcher = new LogBatcher(3, 10000);

  const logReq = {
    payload: { message: 'Test log without file property', level: 'info' },
    // file property is undefined
  };

  const result = batcher.append(logReq);

  expect(result).toBeNull();
  expect(batcher.batchSize).toBe(1);
});
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4878b42 and 5a81d0e.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (9)
  • README.md
  • __tests__/log-batcher.spec.js
  • index.d.ts
  • lib/commons/config.js
  • lib/logs/batcher.js
  • lib/logs/constants.js
  • lib/logs/helpers.js
  • lib/logs/index.js
  • lib/report-portal-client.js
🧰 Additional context used
🧬 Code graph analysis (3)
lib/commons/config.js (3)
__tests__/log-batcher.spec.js (1)
  • require (2-2)
lib/logs/index.js (2)
  • require (2-6)
  • require (7-7)
lib/logs/constants.js (2)
  • MAX_LOG_BATCH_SIZE (1-1)
  • MAX_LOG_BATCH_PAYLOAD_SIZE (2-2)
__tests__/log-batcher.spec.js (3)
lib/logs/index.js (3)
  • LogBatcher (1-1)
  • require (2-6)
  • require (7-7)
lib/logs/batcher.js (2)
  • require (1-1)
  • require (2-2)
lib/logs/constants.js (2)
  • MAX_LOG_BATCH_SIZE (1-1)
  • MAX_LOG_BATCH_PAYLOAD_SIZE (2-2)
lib/logs/batcher.js (2)
lib/logs/index.js (2)
  • require (2-6)
  • require (7-7)
lib/logs/constants.js (2)
  • MAX_LOG_BATCH_SIZE (1-1)
  • MAX_LOG_BATCH_PAYLOAD_SIZE (2-2)
🔇 Additional comments (17)
lib/logs/constants.js (1)

1-7: LGTM!

Constants are well-defined with sensible defaults. The 20-entry batch size and 64 MB payload limit provide reasonable thresholds for batching behavior.

lib/logs/helpers.js (1)

38-45: LGTM!

Clean composition of size calculations with appropriate boundary overhead.

lib/logs/batcher.js (2)

53-56: LGTM!

Clean delegation to appendInternal with size calculation.


58-67: LGTM!

Flush implementation correctly returns and resets the batch state.

lib/logs/index.js (1)

1-16: LGTM!

Clean barrel module that correctly aggregates and re-exports all logging utilities for public API consumption.

README.md (2)

126-128: LGTM! Clear documentation for new batching configuration options.

The documentation correctly describes the new configuration options with their default values and purpose. The default values align with the constants defined in the codebase.


598-636: LGTM! Comprehensive documentation for the flushLogs API.

The documentation clearly explains when to use flushLogs(), provides a practical code example, and lists the automatic flush triggers. This aligns well with the implementation.

index.d.ts (2)

163-177: LGTM! Type definitions correctly added for batching configuration.

The optional properties with appropriate types and JSDoc defaults align with the implementation.


407-442: LGTM! Well-documented flushLogs method declaration.

The JSDoc provides comprehensive documentation with a practical usage example.

__tests__/log-batcher.spec.js (4)

1-21: LGTM! Good constructor test coverage.

Tests verify both default values from constants and custom parameter initialization.


23-103: LGTM! Comprehensive append behavior tests.

Good coverage of the core append scenarios: non-full batch, entry limit reached, payload limit exceeded, and file attachments.


105-181: LGTM! Thorough oversized log handling tests.

The tests correctly verify behavior for oversized logs when batch is empty vs. non-empty, and consecutive oversized logs.


183-232: LGTM! Flush tests verify idempotency correctly.

Good coverage of empty batch, non-empty batch, and second flush call scenarios.

lib/report-portal-client.js (4)

69-72: LGTM! Conditional batcher initialization.

The LogBatcher is only created when batchLogs is enabled, and correctly passes the configuration values.


284-326: LGTM! Good refactoring for launch finish flow with log flushing.

The flow correctly flushes logs before finishing the launch, and proceeds with finishing even if flush fails (logging the error). This ensures the launch finishes regardless of flush errors.


1021-1034: LGTM! Clean implementation of flushLogs.

The method correctly returns early if no batcher exists, and delegates to sendLogBatch which handles null/empty batches gracefully.


762-764: LGTM! Clean routing through batcher when enabled.

Both sendLogWithoutFile and sendLogWithFile correctly check for the batcher and route appropriately.

Also applies to: 807-809

- Fix oversized log being dropped when batch is non-empty
- Use this.getUniqId() instead of UniqId() for consistency
- Return consistent response shape from sendLogViaBatcher
- Add JSDoc comments to batcher and helpers
coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
lib/report-portal-client.js (2)

12-12: Pre-existing: Module-level boundary reuse in concurrent scenarios.

The MULTIPART_BOUNDARY constant is generated once at module load and reused for all non-batched log requests with files. While this is pre-existing code, a boundary shared across concurrent file log requests is a latent risk if the boundary string ever appears inside payload content. The new batch code correctly generates a unique boundary per request (line 1000), which is the safer pattern.

♻️ Optional: Generate boundary per request for consistency
-const MULTIPART_BOUNDARY = Math.floor(Math.random() * 10000000000).toString();
+const generateBoundary = () => Math.floor(Math.random() * 10000000000).toString();

Then update getRequestLogWithFile to generate a fresh boundary per call.


831-877: Document the fire-and-forget semantics of queued logs in batched logging.

The queued log responses resolve with { queued: true } and don't receive the actual server response from the batch. This is intentional for batching performance, but callers should understand they cannot await the actual server response. The flushLogs() method provides a way to force batch sending if needed.

Update the JSDoc for sendLogViaBatcher to clarify that promises for queued logs resolve with { queued: true } and won't contain the server response, and document when/how to use flushLogs() if actual responses are required.
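These semantics can be modeled without the client. The sendLog/flushLogs names and the { queued: true } shape mirror this PR; the toy class itself is an illustration, not the real implementation:

```javascript
// Toy model of fire-and-forget batching: sendLog resolves { queued: true }
// immediately (not the server response); the actual send happens on flush.
class ToyBatchingClient {
  constructor() {
    this.pending = [];
    this.sentBatches = [];
  }

  sendLog(entry) {
    this.pending.push(entry);
    return Promise.resolve({ queued: true }); // ack only, no server response
  }

  async flushLogs() {
    if (this.pending.length === 0) return;
    this.sentBatches.push(this.pending); // stand-in for sendLogBatch(...)
    this.pending = [];
  }
}

const client = new ToyBatchingClient();
client.sendLog({ message: 'first' });  // queued, nothing sent yet
client.sendLog({ message: 'second' });
const pendingBeforeFlush = client.pending.length; // 2
client.flushLogs();                               // forces the batch out
const batchesSent = client.sentBatches.length;    // 1
```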

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5a81d0e and 485e90c.

📒 Files selected for processing (4)
  • __tests__/log-batcher.spec.js
  • lib/logs/batcher.js
  • lib/logs/helpers.js
  • lib/report-portal-client.js
🚧 Files skipped from review as they are similar to previous changes (3)
  • lib/logs/helpers.js
  • __tests__/log-batcher.spec.js
  • lib/logs/batcher.js
🔇 Additional comments (8)
lib/report-portal-client.js (8)

10-10: LGTM!

The import correctly references the new logs module aggregation.


69-72: LGTM!

Conditional initialization of the log batcher is clean and correctly passes the configuration parameters.


284-320: LGTM!

The refactored finish flow correctly ensures logs are flushed before completing the launch. The error handling appropriately logs flush failures without blocking the launch completion, which is the right tradeoff for reliability.


762-764: LGTM!

Clean conditional routing to the batcher path when batching is enabled.


807-809: LGTM!

Consistent routing pattern with the file object properly passed for batch handling.


994-1019: LGTM!

The batch sending logic is well-implemented:

  • Proper guard for empty batches
  • Unique boundary per request prevents collision issues
  • Error is properly re-thrown for caller handling

1027-1034: LGTM!

Clean implementation that safely handles both batching-disabled and empty-batch scenarios.


968-980: No action needed. The batch file attachment handling is correctly implemented.

Multiple file parts can safely use the same form field name name="file" because they are uniquely identified by their filename attributes in the Content-Disposition headers. The correlation with log entries is established through order preservation: logRequests are iterated in the same sequence when building the payloads array and when adding file parts, and each payload includes file metadata (payload.file = { name: fileObj.name }). This is standard multipart/form-data format that ReportPortal's server will handle correctly.

Likely an incorrect or invalid review comment.

@li-zhixin (Author)

@HardNorth I'm sorry to bother you again. I've reimplemented it based on your feedback. Could you please take another look?

@HardNorth (Member) left a comment

Please fix tests.

CruzLiu added 2 commits January 15, 2026 15:58
Add comprehensive test cases for batch logging to meet coverage thresholds (branches: 80%, functions: 75%).
@li-zhixin li-zhixin requested a review from HardNorth January 15, 2026 08:39
@li-zhixin (Author)

Please fix tests.

done.

@li-zhixin (Author)

I'm sorry to interrupt again. Could someone kindly take another look?

… logs

- Validate batchLogsSize and batchPayloadLimit to ensure positive numbers
- Add warning when single log exceeds payload limit
- Update tests to match async sendLogViaBatcher behavior
@li-zhixin (Author)

@maria-hambardzumian Thank you for taking the time to point out the problem; I've fixed it.

@li-zhixin (Author)

@maria-hambardzumian I'm sorry to bother you, could you please take another look?

@maria-hambardzumian (Contributor)

@AmsterGet can you take a look?
