
Token leadingTrivia handling and reduce memory retention in long-lived compiler/LSP caches #1705

Merged
markwpearce merged 3 commits into v1 from memory_performance on May 12, 2026
Conversation

@markwpearce
Collaborator

@markwpearce markwpearce commented May 11, 2026

Summary

This PR tightens memory behavior and trivia handling across lexer/parser/transpile paths.

It does two main things:

  1. Reduces retained memory in long-lived objects and caches.
  2. Makes trivia access resilient now that leadingTrivia is optional and omitted when a token has no trivia.

What changed

  • Added disposal cleanup to release cached references earlier (see the dispose sketch after this list):
    • clear BrsFile lookup caches during dispose
    • clear Scope cache during dispose
    • unlink Scope symbol table during dispose
  • Reduced cache growth pressure:
    • bounded/managed standardizePath cache behavior
    • added URI conversion caching for path-to-URI conversion
  • Reduced per-token overhead:
    • lexer now omits leadingTrivia when there is no trivia
    • Token.leadingTrivia is now optional
  • Updated downstream consumers for optional trivia (see the trivia access sketch after this list):
    • parser/transpile/comment helpers now use null-safe trivia access
    • completion/comment checks updated to handle missing trivia arrays
  • Small correctness/cleanup updates:
    • typo fix (isPostionInComment -> isPositionInComment)
    • lazy allocation of duplicate-symbol map in cross-scope validation
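
For illustration, a minimal sketch of the dispose-time cleanup pattern. The Cache, SymbolTable, and Scope shapes below are simplified stand-ins, not the actual BrighterScript class definitions:

```typescript
// Simplified stand-ins; the real Scope/BrsFile/Cache classes carry far more state.
class Cache {
    private map = new Map<string, unknown>();
    public getOrAdd<T>(key: string, factory: () => T): T {
        if (!this.map.has(key)) {
            this.map.set(key, factory());
        }
        return this.map.get(key) as T;
    }
    public clear() {
        this.map.clear();
    }
}

class SymbolTable {
    public parent?: SymbolTable;
}

class Scope {
    public cache = new Cache();
    public symbolTable = new SymbolTable();

    public dispose() {
        // Drop cached lookups so a disposed scope stops pinning
        // AST-derived collections in a long-lived LSP session.
        this.cache.clear();
        // Unlink the symbol table from its parent chain so neither
        // side keeps the other reachable through a stale reference.
        this.symbolTable.parent = undefined;
    }
}
```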
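
And a sketch of the null-safe access pattern consumers now need, since leadingTrivia may be undefined when a token has no trivia (the Token shape here is simplified for illustration):

```typescript
interface Token {
    kind: string;
    text: string;
    // Omitted entirely when the token has no leading trivia,
    // so we don't pay for an empty array on every token.
    leadingTrivia?: Token[];
}

// Null-safe helper: always returns an array, without forcing
// the lexer to allocate one per token.
function getLeadingTrivia(token: Token): Token[] {
    return token.leadingTrivia ?? [];
}

// Example consumer: does a comment immediately precede this token?
function hasLeadingComment(token: Token): boolean {
    return getLeadingTrivia(token).some(t => t.kind === 'Comment');
}
```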

Why

In editor-driven and long-running language server sessions, small retained references and unbounded cache growth can accumulate over time.
These changes keep behavior intact while lowering memory pressure and making trivia handling more explicit and robust.

Testing

  • Added Scope disposal cache-clearing test.
  • Added BrsFile disposal lookup-cache test.
  • Added a lexer test to verify trivia arrays are only present when needed (sketched below).
  • Updated existing tests to support optional trivia arrays.
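
The trivia test is roughly in this spirit. The import path and the Lexer.scan shape are assumptions based on this PR's description, not verified against the v1 source:

```typescript
import { expect } from 'chai';
// Assumed import path for illustration only.
import { Lexer } from '../lexer/Lexer';

it('omits leadingTrivia when there is no trivia', () => {
    const { tokens } = Lexer.scan('print 1');
    // With no comments or blank lines in front of it, the first token
    // should not allocate an empty trivia array at all.
    expect(tokens[0].leadingTrivia).to.be.undefined;
});
```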

Results

These metrics were gathered by running the compiler against a large app, before and after this change:

         Memory   Load Time   Validation Time   Build Time
Before   890MB    884ms       6483ms            2551ms
After    774MB    786ms       6420ms            2606ms

Contributor

Copilot AI left a comment


Pull request overview

This PR reduces memory retention in long-lived BrighterScript sessions by placing bounds on a frequently-used path normalization cache and by more aggressively releasing cached data during disposal of core objects (scopes and files).

Changes:

  • Add a size limit to Util.standardizePath’s internal cache to prevent unbounded growth (clears the cache when the limit is reached; sketched below).
  • Ensure Scope.dispose() releases cached data and unlinks symbol table relationships.
  • Ensure BrsFile.dispose() invalidates _cachedLookups so cached AST-derived collections don’t remain retained after disposal, with accompanying tests.
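
For reference, the bounding strategy described in the first bullet might look like the sketch below. The limit value and normalization logic are illustrative, not the PR's actual code, and note that this cache change was reverted later in the review:

```typescript
import * as path from 'path';

// Illustrative sketch of a memoized standardizePath whose cache
// is wiped wholesale once it grows past a limit.
const MAX_CACHE_SIZE = 10_000; // assumed limit, not necessarily the PR's value
const pathCache = new Map<string, string>();

function standardizePath(thePath: string): string {
    let result = pathCache.get(thePath);
    if (result === undefined) {
        // A wholesale clear keeps the hot path to one size check plus,
        // rarely, a clear(), at the cost of re-normalizing recent paths.
        if (pathCache.size >= MAX_CACHE_SIZE) {
            pathCache.clear();
        }
        // Normalization simplified; the real implementation does more.
        result = path.normalize(thePath);
        pathCache.set(thePath, result);
    }
    return result;
}
```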

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated no comments.

File                       Description
src/util.ts                Adds a bounded standardizePath cache with a helper method to enforce the limit.
src/util.spec.ts           Adds test coverage to verify cache bounding behavior.
src/Scope.ts               Clears scope cache and unlinks symbol tables during dispose() to reduce retention.
src/Scope.spec.ts          Adds a test ensuring scope cache is cleared on dispose.
src/files/BrsFile.ts       Invalidates _cachedLookups during file disposal to release cached lookup data.
src/files/BrsFile.spec.ts  Adds a test ensuring cached lookups are cleared on dispose.

Member

@TwitchBronBron TwitchBronBron left a comment


Hmm. So this path caching was added to improve performance. I benchmarked it to verify that it was significantly faster to cache these lookups.

Adding a function call in the middle of this hot function might negate a significant amount of the benefit of caching in the first place.

Just curious, did you do any profiling to see how much memory this cache actually gets? Is it actually significant or just a guess?

@markwpearce
Collaborator Author

> Hmm. So this path caching was added to improve performance. I benchmarked it to verify that it was significantly faster to cache these lookups.
>
> Adding a function call in the middle of this hot function might negate a significant amount of the benefit of caching in the first place.
>
> Just curious, did you do any profiling to see how much memory this cache actually gets? Is it actually significant or just a guess?

Yeah.. That was just AI.. I'll revert that change.

@markwpearce changed the title from "Reduce memory retention in caches and dispose paths" to "Token leadingTrivia handling and reduce memory retention in long-lived compiler/LSP caches" on May 12, 2026
@markwpearce markwpearce marked this pull request as ready for review May 12, 2026 19:51
@luis-j-soares luis-j-soares added the create-package create a temporary npm package on every commit label May 12, 2026
@rokucommunity-bot
Contributor

Hey there! I just built a new temporary npm package based on 39c1aae. You can download it here or install it by running the following command:

npm install https://github.com/rokucommunity/brighterscript/releases/download/v0.0.0-packages/brighterscript-1.0.0-alpha.50-memory-performance.20260512220521.tgz

@luis-j-soares luis-j-soares removed the create-package create a temporary npm package on every commit label May 12, 2026
@markwpearce markwpearce merged commit 9de11ed into v1 May 12, 2026
12 checks passed
@markwpearce markwpearce deleted the memory_performance branch May 12, 2026 23:09