
⚡️ Speed up method JavaScriptSupport._build_runtime_map by 87% in PR #1561 (add/support_react)#1612

Merged
claude[bot] merged 2 commits into add/support_react from codeflash/optimize-pr1561-2026-02-20T14.24.41
Feb 20, 2026

Conversation

codeflash-ai bot (Contributor) commented on Feb 20, 2026

⚡️ This pull request contains optimizations for PR #1561

If you approve this dependent PR, these changes will be merged into the original PR branch add/support_react.

This PR will be automatically closed if the original PR is merged.


📄 87% (0.87x) speedup for JavaScriptSupport._build_runtime_map in codeflash/languages/javascript/support.py

⏱️ Runtime: 85.3 milliseconds → 45.6 milliseconds (best of 17 runs)

📝 Explanation and details

The optimized code achieves an 87% speedup (from 85.3ms to 45.6ms) through two primary performance improvements in the _build_runtime_map method:

Key Optimizations

1. Path Resolution Caching (Primary Improvement)

The original code called resolve_js_test_module_path() and abs_path.resolve().with_suffix("") for every invocation, even when multiple invocations shared the same test_module_path. The optimized version introduces _resolved_path_cache to store computed path strings per module path, eliminating redundant filesystem operations.
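
For illustration, a minimal sketch of this caching pattern is shown below. It is not the actual codeflash implementation: only the cache name `_resolved_path_cache`, `resolve_js_test_module_path()`, and the `resolve().with_suffix("")` normalization come from this PR; the helper name and its standalone form are assumptions.

```python
from pathlib import Path

from codeflash.languages.javascript.edit_tests import resolve_js_test_module_path


def resolved_module_path_str(
    cache: dict[str, str], test_module_path: str, tests_project_rootdir: Path
) -> str:
    """Resolve a JS test module path once, then serve repeats from the cache."""
    cached = cache.get(test_module_path)
    if cached is None:
        # The expensive part: resolve_js_test_module_path() plus Path.resolve()
        # hit the filesystem, so do it at most once per distinct module path.
        abs_path = resolve_js_test_module_path(test_module_path, tests_project_rootdir)
        cached = cache[test_module_path] = str(abs_path.resolve().with_suffix(""))
    return cached
```

In the optimized method the cache is `_resolved_path_cache`, so repeated invocations that share a `test_module_path` reduce to a single dictionary lookup.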

Line profiler data confirms the dramatic impact:

  • resolve_js_test_module_path calls: 3,481 → 1,480 (57% reduction)
  • Time in path resolution: 84.7ms → 38.6ms (54% faster)
  • Time in abs_path.resolve(): 186.9ms → 89.2ms (52% faster)

2. Optimized String Parsing

The original code parsed iteration_id inefficiently:

```python
parts = iteration_id.split("_").__len__()  # Creates list, calls __len__()
cur_invid = iteration_id.split("_")[0] if parts < 3 else "_".join(iteration_id.split("_")[:-1])  # Splits again!
```

The optimized version splits once and reuses the result:

```python
parts = iteration_id.split("_")
parts_len = len(parts)
cur_invid = parts[0] if parts_len < 3 else "_".join(parts[:-1])
```

Additionally, dictionary access was optimized from:

```python
if match_key not in unique_inv_ids:
    unique_inv_ids[match_key] = 0
unique_inv_ids[match_key] += min(runtimes)
```

to:

```python
unique_inv_ids[match_key] = unique_inv_ids.get(match_key, 0) + min(runtimes)
```
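
Putting the three changes together, the hot loop plausibly ends up shaped like the sketch below. The `qualified_name#resolved_path#invocation_id` key format, the `__unit_test_`/`__perf_test_` filter, and the `min(runtimes)` accumulation are taken from the explanation and the generated tests; the function and variable names not quoted above are illustrative, not the real code.

```python
def build_runtime_map_sketch(inv_id_runtimes, resolved_paths):
    """Illustrative shape of the optimized loop: one split and one dict update per item.

    `resolved_paths` maps test_module_path -> cached resolved-path string
    (see the caching sketch earlier).
    """
    unique_inv_ids: dict[str, int] = {}
    for inv_id, runtimes in inv_id_runtimes.items():
        if not inv_id.test_function_name:
            continue  # invocations without a test function name are skipped
        qualified_name = (
            f"{inv_id.test_class_name}.{inv_id.test_function_name}"
            if inv_id.test_class_name
            else inv_id.test_function_name
        )
        abs_path_str = resolved_paths[inv_id.test_module_path]  # resolved once per module path
        if "__unit_test_" not in abs_path_str and "__perf_test_" not in abs_path_str:
            continue  # only generated unit/perf test files are counted
        iteration_id = inv_id.iteration_id or ""
        parts = iteration_id.split("_")  # split exactly once
        cur_invid = parts[0] if len(parts) < 3 else "_".join(parts[:-1])
        match_key = f"{qualified_name}#{abs_path_str}#{cur_invid}"
        unique_inv_ids[match_key] = unique_inv_ids.get(match_key, 0) + min(runtimes)
    return unique_inv_ids
```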

Performance Benefits by Test Type

The optimization particularly excels with workloads featuring:

  1. Many invocations with shared module paths (e.g., test_large_number_of_invocations: 1567% faster, test_many_different_iteration_ids: 3037% faster) - the cache eliminates redundant path resolutions
  2. Repeated path resolution (e.g., test_multiple_invocations_same_module: 52.4% faster) - cache hits avoid expensive filesystem operations
  3. Complex iteration IDs (e.g., test_complex_iteration_id_patterns: 2472% faster) - optimized string parsing reduces per-item overhead

The optimization maintains correctness across all test cases while delivering substantial performance improvements, especially in realistic scenarios where test suites contain multiple tests in the same modules.

Correctness verification report:

| Test | Status |
| --- | --- |
| ⚙️ Existing Unit Tests | 🔘 None Found |
| 🌀 Generated Regression Tests | 76 Passed |
| ⏪ Replay Tests | 🔘 None Found |
| 🔎 Concolic Coverage Tests | 🔘 None Found |
| 📊 Tests Coverage | 94.7% |
🌀 Click to see Generated Regression Tests
from pathlib import Path

# imports
import pytest  # used for our unit tests
from codeflash.languages.javascript.edit_tests import \
    resolve_js_test_module_path
from codeflash.languages.javascript.support import JavaScriptSupport
from codeflash.models.models import InvocationId

# function to test
def _call_build_runtime_map(inv_id_runtimes: dict[InvocationId, list[int]], tests_project_rootdir: Path) -> dict[str, int]:
    """
    Helper wrapper to call the instance method on a real JavaScriptSupport instance.
    Keeps test code concise and ensures we call the method the same way production code would.
    """
    js_support = JavaScriptSupport()  # construct a real instance (no args required)
    return js_support._build_runtime_map(inv_id_runtimes, tests_project_rootdir)

def _abs_path_str_for(test_module_path: str, tests_project_rootdir: Path) -> str:
    """
    Compute the absolute path string used by the implementation for a given test_module_path.
    This mirrors the same path normalization: resolve() and with_suffix("") are applied in the code under test.
    Using the same helper (resolve_js_test_module_path) keeps expectations accurate across platforms.
    """
    abs_path = resolve_js_test_module_path(test_module_path, tests_project_rootdir)
    return str(abs_path.resolve().with_suffix(""))

def _qualified_name(inv_id: InvocationId) -> str:
    """Return the test qualified name as the implementation composes it (class.function or function only)."""
    if inv_id.test_class_name:
        return inv_id.test_class_name + "." + inv_id.test_function_name  # type: ignore[operator]
    return inv_id.test_function_name  # type: ignore[return-value]

def test_basic_functionality_single_invocation(tmp_path: Path):
    # Create a file under tmp_path so resolve() produces a stable absolute path.
    file_rel = "some/path/__unit_test_example.test.js"
    file_path = tmp_path / Path(file_rel)
    file_path.parent.mkdir(parents=True, exist_ok=True)
    file_path.write_text("// dummy test file")  # create file so resolve() is stable

    # Build a single InvocationId with both class and function names
    inv = InvocationId(
        test_module_path=file_rel,
        test_class_name="MySuite",
        test_function_name="test_one",
        function_getting_tested="fn_x",
        iteration_id="iterationA",
    )

    # Provide runtimes; the implementation uses min(runtimes)
    runtimes = [10, 20, 15]

    result = _call_build_runtime_map({inv: runtimes}, tmp_path)

    # Construct expected key exactly as implementation does
    expected_key = _qualified_name(inv) + "#" + _abs_path_str_for(file_rel, tmp_path) + "#" + "iterationA"

def test_duplicate_invocations_aggregate_min_runtimes(tmp_path: Path):
    # Create file so path resolution is deterministic
    file_rel = "dir/__unit_test_dup.test.js"
    file_path = tmp_path / Path(file_rel)
    file_path.parent.mkdir(parents=True, exist_ok=True)
    file_path.write_text("// duplicate test file")

    # Two distinct InvocationId objects that should map to the same key
    inv1 = InvocationId(
        test_module_path=file_rel,
        test_class_name="Suite",
        test_function_name="test_dup",
        function_getting_tested="f",
        iteration_id="it1",
    )
    inv2 = InvocationId(
        test_module_path=file_rel,
        test_class_name="Suite",
        test_function_name="test_dup",
        function_getting_tested="f",
        iteration_id="it1",
    )

    # Provide runtimes; implementation uses min per invocation and sums across unique keys
    runtimes1 = [3, 5]
    runtimes2 = [2, 8]

    result = _call_build_runtime_map({inv1: runtimes1, inv2: runtimes2}, tmp_path)

    expected_key = _qualified_name(inv1) + "#" + _abs_path_str_for(file_rel, tmp_path) + "#" + "it1"

def test_iteration_id_parsing_various_formats(tmp_path: Path):
    # The iteration_id parsing logic treats underscore-separated parts specially.
    file_rel = "it_parsing/__unit_test_it.test.js"
    file_path = tmp_path / Path(file_rel)
    file_path.parent.mkdir(parents=True, exist_ok=True)
    file_path.write_text("// iteration id parsing file")

    # iteration_id with a single token -> cur_invid is that token
    inv_single = InvocationId(
        test_module_path=file_rel,
        test_class_name=None,
        test_function_name="fn",
        function_getting_tested="t",
        iteration_id="single",
    )

    # iteration_id with two tokens -> parts < 3 so cur_invid is first token
    inv_two = InvocationId(
        test_module_path=file_rel,
        test_class_name=None,
        test_function_name="fn2",
        function_getting_tested="t",
        iteration_id="a_b",
    )

    # iteration_id with three tokens -> parts == 3 so cur_invid is join of all except last ("a_b_c" -> "a_b")
    inv_three = InvocationId(
        test_module_path=file_rel,
        test_class_name=None,
        test_function_name="fn3",
        function_getting_tested="t",
        iteration_id="a_b_c",
    )

    result = _call_build_runtime_map(
        {
            inv_single: [7],
            inv_two: [11],
            inv_three: [5],
        },
        tmp_path,
    )

    # Build expected keys accordingly
    key_single = "fn" + "#" + _abs_path_str_for(file_rel, tmp_path) + "#" + "single"
    key_two = "fn2" + "#" + _abs_path_str_for(file_rel, tmp_path) + "#" + "a"
    key_three = "fn3" + "#" + _abs_path_str_for(file_rel, tmp_path) + "#" + "a_b"

def test_skips_paths_without_unit_or_perf_markers(tmp_path: Path):
    # If the resolved path does not contain "__unit_test_" or "__perf_test_", the invocation should be skipped.
    file_rel = "regular_tests/no_marker.test.js"
    file_path = tmp_path / Path(file_rel)
    file_path.parent.mkdir(parents=True, exist_ok=True)
    file_path.write_text("// regular test file with no special marker")

    inv = InvocationId(
        test_module_path=file_rel,
        test_class_name="Suite",
        test_function_name="test_norm",
        function_getting_tested="f",
        iteration_id="iter",
    )

    result = _call_build_runtime_map({inv: [1]}, tmp_path)

def test_large_scale_runtime_mapping_1000_entries(tmp_path: Path):
    # Build 1000 InvocationId entries. Use a small set of distinct files/functions so keys collide and get aggregated.
    total_entries = 1000
    inv_map: dict[InvocationId, list[int]] = {}

    # Prepare a small number of files (10) and functions (20), iteration ids (5) to create collisions
    for i in range(total_entries):
        file_index = i % 10
        fn_index = i % 20
        iter_index = i % 5

        # Include "__unit_test_" so entries are considered
        file_rel = f"bulk/__unit_test_bulk{file_index}.test.js"
        # Ensure file exists for stable resolution
        file_path = tmp_path / Path(file_rel)
        file_path.parent.mkdir(parents=True, exist_ok=True)
        # create file only once per unique file name to save IO
        if not file_path.exists():
            file_path.write_text("// bulk test file")

        inv = InvocationId(
            test_module_path=file_rel,
            test_class_name=f"Suite{fn_index % 3}",  # only 3 distinct classes to increase collisions
            test_function_name=f"test_{fn_index}",
            function_getting_tested="target_fn",
            iteration_id=f"iter_{iter_index}",  # iteration ids like iter_0 ... iter_4
        )

        # Make runtimes vary; include some larger and smaller numbers
        runtimes = [ (i % 7) + 1, ((i + 3) % 5) + 1 ]  # deterministic small lists
        inv_map[inv] = runtimes

    # Call the implementation under test
    result = _call_build_runtime_map(inv_map, tmp_path)

    # Now compute expected results using the same mapping logic but in-test to verify correctness.
    # We compute expectations deterministically rather than relying on the implementation under test.
    expected: dict[str, int] = {}
    for inv, runtimes in inv_map.items():
        # Compose qualified name
        qname = _qualified_name(inv)
        # Compute abs path string the same way
        abs_path_str = _abs_path_str_for(inv.test_module_path, tmp_path)
        # compute cur_invid according to rules from the implementation
        iteration_id = inv.iteration_id or ""
        parts_len = len(iteration_id.split("_"))
        cur_invid = iteration_id.split("_")[0] if parts_len < 3 else "_".join(iteration_id.split("_")[:-1])
        key = qname + "#" + abs_path_str + "#" + cur_invid

        # Only consider keys where abs_path_str contains the required marker; mimic implementation's filter:
        if "__unit_test_" not in abs_path_str and "__perf_test_" not in abs_path_str:
            continue

        if key not in expected:
            expected[key] = 0
        expected[key] += min(runtimes)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
from pathlib import Path

# imports
import pytest
from codeflash.languages.javascript.frameworks.detector import FrameworkInfo
from codeflash.languages.javascript.support import JavaScriptSupport
from codeflash.models.models import InvocationId

def test_empty_input_dict():
    """Test that empty input dictionary returns empty result."""
    support = JavaScriptSupport()
    codeflash_output = support._build_runtime_map({}, Path("/test/root")); result = codeflash_output # 2.81μs -> 3.10μs (9.40% slower)

def test_single_invocation_with_class_and_function():
    """Test single invocation with both test class and function names."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_example.ts",
        test_class_name="TestSuite",
        test_function_name="testFunction",
        function_getting_tested="myFunction",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: [100, 200, 150]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output # 53.2μs -> 55.7μs (4.43% slower)
    key = list(result.keys())[0]

def test_single_invocation_without_class():
    """Test single invocation with function name only (no class)."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_simple.js",
        test_class_name=None,
        test_function_name="testFunction",
        function_getting_tested="myFunction",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: [50, 75, 60]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output # 50.1μs -> 53.5μs (6.42% slower)
    key = list(result.keys())[0]

def test_multiple_invocations_same_module():
    """Test multiple invocations in the same test module."""
    support = JavaScriptSupport()
    inv_id_1 = InvocationId(
        test_module_path="__unit_test_main.ts",
        test_class_name="Suite1",
        test_function_name="test1",
        function_getting_tested="func1",
        iteration_id="iter_0",
    )
    inv_id_2 = InvocationId(
        test_module_path="__unit_test_main.ts",
        test_class_name="Suite1",
        test_function_name="test2",
        function_getting_tested="func2",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id_1: [100], inv_id_2: [200]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output # 82.0μs -> 53.8μs (52.4% faster)
    values = sorted(result.values())

def test_minimum_runtime_selection():
    """Test that minimum runtime from list is selected."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_perf.js",
        test_class_name=None,
        test_function_name="perfTest",
        function_getting_tested="expensiveFunc",
        iteration_id="iter_0",
    )
    # Multiple runtimes, min should be selected
    inv_id_runtimes = {inv_id: [500, 100, 300, 250]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output # 49.8μs -> 51.1μs (2.48% slower)

def test_perf_test_extension_recognized():
    """Test that __perf_test_ prefix is recognized in module path."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__perf_test_performance.tsx",
        test_class_name="PerfTests",
        test_function_name="benchmarkRender",
        function_getting_tested="render",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: [150]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output # 49.3μs -> 51.8μs (4.91% slower)

def test_simple_iteration_id():
    """Test parsing of simple iteration_id format."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_iter.js",
        test_class_name=None,
        test_function_name="testFunc",
        function_getting_tested="myFunc",
        iteration_id="0",  # Simple iteration id
    )
    inv_id_runtimes = {inv_id: [75]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output # 48.4μs -> 50.7μs (4.47% slower)

def test_none_test_function_name():
    """Test that invocation with None test_function_name is skipped."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_none.js",
        test_class_name="Suite",
        test_function_name=None,  # None function name should be skipped
        function_getting_tested="func",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: [100]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output

def test_none_iteration_id():
    """Test that None iteration_id is handled correctly."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_iter.js",
        test_class_name=None,
        test_function_name="testFunc",
        function_getting_tested="func",
        iteration_id=None,  # None iteration_id
    )
    inv_id_runtimes = {inv_id: [50]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output # 55.0μs -> 57.6μs (4.38% slower)

def test_non_unit_or_perf_test_filtered_out():
    """Test that paths without __unit_test_ or __perf_test_ are filtered."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="regular_test.js",  # Missing __unit_test_ or __perf_test_
        test_class_name="Suite",
        test_function_name="testFunc",
        function_getting_tested="func",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: [100]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output # 50.2μs -> 51.9μs (3.24% slower)

def test_path_with_underscores():
    """Test handling of paths with multiple underscores."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_my_complex_module_name.tsx",
        test_class_name="MyTestSuite",
        test_function_name="myTestFunc",
        function_getting_tested="complexFunc",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: [88]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output # 51.5μs -> 53.5μs (3.72% slower)

def test_complex_iteration_id_three_parts():
    """Test iteration_id with exactly 3 underscore-separated parts."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_complex.js",
        test_class_name=None,
        test_function_name="testFunc",
        function_getting_tested="func",
        iteration_id="0_1_suffix",  # Three parts: parts < 3 is False
    )
    inv_id_runtimes = {inv_id: [123]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output # 50.9μs -> 52.4μs (2.89% slower)
    # cur_invid should be "0_1"
    key = list(result.keys())[0]

def test_single_underscore_in_iteration_id():
    """Test iteration_id with single underscore (2 parts)."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_test.js",
        test_class_name=None,
        test_function_name="test",
        function_getting_tested="func",
        iteration_id="0_1",  # Two parts: parts < 3 is True, take first
    )
    inv_id_runtimes = {inv_id: [99]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output # 50.0μs -> 50.8μs (1.58% slower)

def test_accumulation_same_match_key():
    """Test that multiple invocations with same match_key accumulate."""
    support = JavaScriptSupport()
    inv_id_1 = InvocationId(
        test_module_path="__unit_test_acc.js",
        test_class_name=None,
        test_function_name="test",
        function_getting_tested="func",
        iteration_id="0",
    )
    # Same invocation, should accumulate
    inv_id_runtimes = {inv_id_1: [100, 100]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output # 48.8μs -> 51.5μs (5.33% slower)

def test_special_characters_in_names():
    """Test handling of special characters in test names."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_special.js",
        test_class_name="TestSuite_WithUnderscore",
        test_function_name="test_with_underscores",
        function_getting_tested="func_with_underscores",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: [110]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output # 50.0μs -> 51.5μs (2.88% slower)

def test_relative_path_resolution():
    """Test resolution of relative test module paths."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="tests/__unit_test_relative.js",
        test_class_name=None,
        test_function_name="testFunc",
        function_getting_tested="func",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: [77]}
    root_dir = Path("/project/tests")
    codeflash_output = support._build_runtime_map(inv_id_runtimes, root_dir); result = codeflash_output # 51.5μs -> 53.3μs (3.49% slower)

def test_absolute_path_in_module():
    """Test handling of absolute path in test_module_path."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="/absolute/path/__unit_test_absolute.js",
        test_class_name=None,
        test_function_name="testFunc",
        function_getting_tested="func",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: [66]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/any/root")); result = codeflash_output # 49.5μs -> 51.6μs (3.93% slower)

def test_windows_style_path():
    """Test handling of Windows-style paths with backslashes."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="tests\\__unit_test_windows.js",
        test_class_name=None,
        test_function_name="testFunc",
        function_getting_tested="func",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: [55]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("C:\\project\\tests")); result = codeflash_output # 49.2μs -> 51.3μs (4.16% slower)

def test_single_runtime_value():
    """Test that single runtime value is used directly."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_single.js",
        test_class_name=None,
        test_function_name="testFunc",
        function_getting_tested="func",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: [42]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test")); result = codeflash_output # 46.0μs -> 48.3μs (4.73% slower)

def test_empty_string_test_function_name():
    """Test that empty string test_function_name is treated as missing."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_empty.js",
        test_class_name="Suite",
        test_function_name="",  # Empty string
        function_getting_tested="func",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: [100]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test")); result = codeflash_output # 46.5μs -> 47.5μs (2.23% slower)

def test_zero_runtime_value():
    """Test that zero runtime values are handled correctly."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_zero.js",
        test_class_name=None,
        test_function_name="testFunc",
        function_getting_tested="func",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: [0, 100, 50]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test")); result = codeflash_output # 46.0μs -> 47.9μs (3.95% slower)

def test_negative_runtime_values():
    """Test handling of negative runtime values."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_negative.js",
        test_class_name=None,
        test_function_name="testFunc",
        function_getting_tested="func",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: [-10, 50, 100]}
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test")); result = codeflash_output # 46.6μs -> 47.6μs (1.96% slower)

def test_different_file_extensions():
    """Test different JavaScript/TypeScript file extensions."""
    support = JavaScriptSupport()
    extensions = [".js", ".ts", ".tsx", ".jsx", ".mjs", ".mts"]
    
    for ext in extensions:
        inv_id = InvocationId(
            test_module_path=f"__unit_test_file{ext}",
            test_class_name=None,
            test_function_name="testFunc",
            function_getting_tested="func",
            iteration_id="iter_0",
        )
        inv_id_runtimes = {inv_id: [100]}
        codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test")); result = codeflash_output # 205μs -> 218μs (5.92% slower)

def test_large_number_of_invocations():
    """Test with 500 different invocations."""
    support = JavaScriptSupport()
    inv_id_runtimes = {}
    
    for i in range(500):
        inv_id = InvocationId(
            test_module_path=f"__unit_test_large_{i % 10}.js",
            test_class_name=f"TestSuite{i % 20}",
            test_function_name=f"test_{i}",
            function_getting_tested=f"func_{i % 50}",
            iteration_id=f"iter_{i % 100}",
        )
        inv_id_runtimes[inv_id] = [i * 10, i * 5, i * 15]
    
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test")); result = codeflash_output # 11.3ms -> 678μs (1567% faster)

def test_many_runtimes_per_invocation():
    """Test with invocations having many runtime values."""
    support = JavaScriptSupport()
    inv_id = InvocationId(
        test_module_path="__unit_test_many_runtimes.js",
        test_class_name=None,
        test_function_name="testFunc",
        function_getting_tested="func",
        iteration_id="iter_0",
    )
    # 1000 runtime measurements
    runtimes = list(range(1, 1001))
    inv_id_runtimes = {inv_id: runtimes}
    
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test")); result = codeflash_output # 57.2μs -> 57.9μs (1.14% slower)

def test_deeply_nested_module_paths():
    """Test with deeply nested test module paths."""
    support = JavaScriptSupport()
    inv_id_runtimes = {}
    
    for i in range(100):
        # Create deeply nested path
        nested_path = "/".join([f"dir{j}" for j in range(20)]) + f"/__unit_test_nested_{i}.js"
        inv_id = InvocationId(
            test_module_path=nested_path,
            test_class_name=f"Suite{i}",
            test_function_name=f"test{i}",
            function_getting_tested="func",
            iteration_id=f"iter_{i}",
        )
        inv_id_runtimes[inv_id] = [i * 100]
    
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test")); result = codeflash_output # 8.48ms -> 8.61ms (1.49% slower)

def test_accumulation_with_many_same_keys():
    """Test accumulation behavior with 200 invocations sharing few keys."""
    support = JavaScriptSupport()
    inv_id_runtimes = {}
    
    # Create invocations that will share the same match_key
    for i in range(200):
        inv_id = InvocationId(
            test_module_path="__unit_test_shared.js",
            test_class_name="Suite",
            test_function_name="test",
            function_getting_tested="func",
            iteration_id="0",  # All same iteration_id to share key
        )
        inv_id_runtimes[inv_id] = [i + 1]
    
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test")); result = codeflash_output # 48.6μs -> 49.2μs (1.14% slower)
    # Each invocation adds min(runtime), so all add (i+1) for i in 0..199
    total = sum(range(1, 201))

def test_mixed_file_extensions_large():
    """Test with large dataset mixing different file extensions."""
    support = JavaScriptSupport()
    extensions = [".js", ".ts", ".tsx", ".jsx"]
    inv_id_runtimes = {}
    
    idx = 0
    for ext in extensions:
        for i in range(250):
            inv_id = InvocationId(
                test_module_path=f"__unit_test_mixed_{idx}{ext}",
                test_class_name=f"TestSuite{i % 10}",
                test_function_name=f"test_{i}",
                function_getting_tested=f"func_{i % 25}",
                iteration_id=f"iter_{i % 50}",
            )
            inv_id_runtimes[inv_id] = [idx * 10, idx * 5]
            idx += 1
    
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test")); result = codeflash_output # 22.5ms -> 24.4ms (8.03% slower)

def test_large_runtime_values():
    """Test with very large runtime values."""
    support = JavaScriptSupport()
    large_values = [10**9, 10**8, 10**7]  # Billion-scale values
    
    inv_id = InvocationId(
        test_module_path="__unit_test_large_values.js",
        test_class_name=None,
        test_function_name="testFunc",
        function_getting_tested="func",
        iteration_id="iter_0",
    )
    inv_id_runtimes = {inv_id: large_values}
    
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test")); result = codeflash_output # 48.3μs -> 49.9μs (3.09% slower)

def test_stress_path_resolution():
    """Test path resolution with 300 different paths."""
    support = JavaScriptSupport()
    inv_id_runtimes = {}
    
    for i in range(300):
        paths = [
            f"__unit_test_path_{i}.js",
            f"tests/__unit_test_path_{i}.js",
            f"src/tests/__unit_test_path_{i}.js",
        ]
        
        inv_id = InvocationId(
            test_module_path=paths[i % 3],
            test_class_name=f"Suite{i % 5}",
            test_function_name=f"test_{i}",
            function_getting_tested=f"func",
            iteration_id=f"iter_{i}",
        )
        inv_id_runtimes[inv_id] = [i]
    
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test/root")); result = codeflash_output # 8.77ms -> 9.29ms (5.65% slower)

def test_mixed_none_and_valid_function_names():
    """Test with 400 invocations mixing None and valid function names."""
    support = JavaScriptSupport()
    inv_id_runtimes = {}
    
    for i in range(400):
        # Alternate between None and valid function names
        func_name = None if i % 2 == 0 else f"test_{i}"
        
        inv_id = InvocationId(
            test_module_path="__unit_test_mixed_none.js",
            test_class_name="Suite",
            test_function_name=func_name,
            function_getting_tested="func",
            iteration_id="iter_0",
        )
        inv_id_runtimes[inv_id] = [i]
    
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test")); result = codeflash_output

def test_many_different_iteration_ids():
    """Test with 500 invocations having unique iteration_ids."""
    support = JavaScriptSupport()
    inv_id_runtimes = {}
    
    for i in range(500):
        inv_id = InvocationId(
            test_module_path="__unit_test_iter_many.js",
            test_class_name=None,
            test_function_name="test",
            function_getting_tested="func",
            iteration_id=f"iter_{i}",
        )
        inv_id_runtimes[inv_id] = [i * 2]
    
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test")); result = codeflash_output # 10.9ms -> 348μs (3037% faster)

def test_complex_iteration_id_patterns():
    """Test various complex iteration_id patterns at scale."""
    support = JavaScriptSupport()
    inv_id_runtimes = {}
    
    patterns = [
        "0",
        "0_1",
        "0_1_2",
        "0_1_2_3",
        "0_1_2_3_4",
    ]
    
    idx = 0
    for pattern in patterns:
        for i in range(200):
            inv_id = InvocationId(
                test_module_path="__unit_test_patterns.js",
                test_class_name=f"Suite{i % 5}",
                test_function_name=f"test_{i}",
                function_getting_tested="func",
                iteration_id=pattern,
            )
            inv_id_runtimes[inv_id] = [idx]
            idx += 1
    
    codeflash_output = support._build_runtime_map(inv_id_runtimes, Path("/test")); result = codeflash_output # 22.0ms -> 853μs (2472% faster)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

To edit these changes, `git checkout codeflash/optimize-pr1561-2026-02-20T14.24.41` and push.


codeflash-ai bot added the ⚡️ codeflash (Optimization PR opened by Codeflash AI) and 🎯 Quality: High (Optimization Quality according to Codeflash) labels on Feb 20, 2026
claude bot (Contributor) commented on Feb 20, 2026

PR Review Summary

Prek Checks

Passed — 1 auto-fixed issue (unsorted imports in codeflash/languages/javascript/support.py), committed and pushed. Verified clean after fix.

Mypy: 2 pre-existing arg-type errors on @register_language decorators (lines 50, 2505 in support.py) — not introduced by this PR.

Code Review

No critical issues found

The optimization in _build_runtime_map is correct and preserves original behavior:

  • Path resolution caching: Properly caches resolved path strings per module path, avoiding redundant filesystem operations
  • String parsing: Splits iteration_id once and reuses the result instead of splitting 3 times
  • Dictionary access: Uses .get() with default instead of not in check + assignment
  • Null safety: Added iteration_id = inv_id.iteration_id or "" matches original implicit behavior
  • Exception handling: try/except around abs_path.resolve() is appropriate — gracefully degrades on symlink loops or permission issues (see the sketch below)
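
A hedged sketch of that degradation path, assuming `OSError`/`RuntimeError` are the exceptions of interest and that the unresolved path is kept as a fallback (neither detail is confirmed by the diff):

```python
try:
    # Path.resolve() can fail on symlink loops or permission problems.
    abs_path_str = str(abs_path.resolve().with_suffix(""))
except (OSError, RuntimeError):
    # Assumed fallback: keep the unresolved, but still usable, path string.
    abs_path_str = str(abs_path.with_suffix(""))
```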

Other changes (React framework support, type annotations, _PRIMITIVE_TYPES frozenset) are also sound.

Test Coverage

| File | Main | PR | Delta |
| --- | --- | --- | --- |
| codeflash/languages/javascript/support.py | 71% | 69% | -2% |
| codeflash/languages/base.py | 98% | 98% | |
| codeflash/languages/javascript/treesitter.py | 92% | 92% | |
| codeflash/languages/javascript/parse.py | 49% | 51% | +2% |
| codeflash/models/function_types.py | 100% | 100% | |
| codeflash/result/critic.py | 70% | 73% | +3% |
| codeflash/result/explanation.py | 46% | 45% | -1% |
| codeflash/api/aiservice.py | 20% | 20% | |
| New: frameworks/detector.py | | 100% | |
| New: frameworks/react/analyzer.py | | 100% | |
| New: frameworks/react/benchmarking.py | | 100% | |
| New: frameworks/react/context.py | | 99% | |
| New: frameworks/react/discovery.py | | 94% | |
| New: frameworks/react/profiler.py | | 14% | ⚠️ Below 75% |
| New: treesitter_utils.py (1603 lines) | | 0% | ⚠️ No coverage |
| New: frameworks/react/testgen.py (116 lines) | | 0% | ⚠️ No coverage |
| New: api/schemas.py (277 lines) | | 0% | ⚠️ No coverage |

Overall: 79% → 76% (-3%)

⚠️ Coverage flags:

  • treesitter_utils.py (1603 lines) has 0% coverage — no tests exercise this new file
  • react/profiler.py has 14% coverage — well below the 75% threshold for new files
  • react/testgen.py and api/schemas.py also have 0% coverage
  • Overall coverage decreased by 3 percentage points

Note: The 8 test failures in test_tracer.py are pre-existing on main and unrelated to this PR.


Last updated: 2026-02-20

@claude claude bot merged commit 61d6547 into add/support_react Feb 20, 2026
27 of 28 checks passed
@claude claude bot deleted the codeflash/optimize-pr1561-2026-02-20T14.24.41 branch February 20, 2026 15:44