Add MemMachine memory integration for NeMo Agent toolkit#1460

Open
Charlie-Yi-2002 wants to merge 88 commits into NVIDIA:develop from Charlie-Yi-2002:develop

Conversation

@Charlie-Yi-2002 Charlie-Yi-2002 commented Jan 22, 2026

Description

Add MemMachine as a memory plugin within NAT, and add a notebook example demonstrating how to use MemMachine in NAT.

By Submitting this PR I confirm:

  • I am familiar with the Contributing Guidelines.
  • We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
    • Any contribution which contains commits that are not Signed-Off will not be accepted.
  • When the PR is ready for review, new or existing tests cover these changes.
  • When the PR is ready for review, the documentation is up to date with these changes.

Summary by CodeRabbit

  • New Features

    • Added model health check monitoring in CI/CD pipeline
    • Added Parallel Executor component for concurrent tool execution in workflows
    • Added per-user MCP authentication support with improved isolation patterns
    • Added Tool Calling Responses API support for agents
    • Added Mem0 memory backend integration for auto-memory agent wrapper
    • Added LangSmith evaluator integration with multiple evaluation strategies
    • Separated profiler functionality into dedicated package
  • Documentation

    • Updated branding to "Hugging Face" for consistency
    • Added tabbed installation instructions for source vs package installs
    • Published Parallel Executor configuration guide
    • Enhanced per-user MCP authentication workflows
  • Improvements

    • Improved agent output handling with fallback messages for empty responses
    • Enhanced output validation for alert triage agent

@Charlie-Yi-2002 Charlie-Yi-2002 requested a review from a team as a code owner January 22, 2026 21:55
copy-pr-bot bot commented Jan 22, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@review-notebook-app

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.

coderabbitai bot commented Jan 22, 2026


⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: f36dc651-eddf-44bf-9b69-84ea5ee93d66

📥 Commits

Reviewing files that changed from the base of the PR and between 3b2e005 and e76d857.

⛔ Files ignored due to path filters (24)
  • examples/A2A/currency_agent_a2a/uv.lock is excluded by !**/*.lock
  • examples/A2A/math_assistant_a2a/uv.lock is excluded by !**/*.lock
  • examples/A2A/math_assistant_a2a_protected/uv.lock is excluded by !**/*.lock
  • examples/HITL/por_to_jiratickets/uv.lock is excluded by !**/*.lock
  • examples/HITL/simple_calculator_hitl/uv.lock is excluded by !**/*.lock
  • examples/MCP/kaggle_mcp/uv.lock is excluded by !**/*.lock
  • examples/MCP/service_account_auth_mcp/uv.lock is excluded by !**/*.lock
  • examples/MCP/simple_auth_mcp/uv.lock is excluded by !**/*.lock
  • examples/MCP/simple_calculator_fastmcp/uv.lock is excluded by !**/*.lock
  • examples/MCP/simple_calculator_fastmcp_protected/uv.lock is excluded by !**/*.lock
  • examples/MCP/simple_calculator_mcp/uv.lock is excluded by !**/*.lock
  • examples/MCP/simple_calculator_mcp_protected/uv.lock is excluded by !**/*.lock
  • examples/RAG/simple_rag/uv.lock is excluded by !**/*.lock
  • examples/advanced_agents/alert_triage_agent/uv.lock is excluded by !**/*.lock
  • examples/advanced_agents/profiler_agent/tests/test_spans.csv is excluded by !**/*.csv
  • examples/agents/uv.lock is excluded by !**/*.lock
  • examples/control_flow/hybrid_control_flow/uv.lock is excluded by !**/*.lock
  • examples/control_flow/parallel_executor/uv.lock is excluded by !**/*.lock
  • examples/control_flow/router_agent/uv.lock is excluded by !**/*.lock
  • examples/control_flow/sequential_executor/uv.lock is excluded by !**/*.lock
  • examples/custom_functions/automated_description_generation/uv.lock is excluded by !**/*.lock
  • examples/custom_functions/plot_charts/uv.lock is excluded by !**/*.lock
  • examples/documentation_guides/uv.lock is excluded by !**/*.lock
  • examples/documentation_guides/workflows/text_file_ingest/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (111)
  • .coderabbit.yaml
  • .gitlab-ci.yml
  • .pre-commit-config.yaml
  • .pytest.ini
  • CHANGELOG.md
  • README.md
  • ci/release/update-version.sh
  • ci/release/update_toml_dep.py
  • ci/scripts/github/build_wheel.sh
  • ci/scripts/gitlab/common.sh
  • ci/scripts/gitlab/model_health_check.sh
  • ci/scripts/gitlab/report_test_results.py
  • ci/scripts/gitlab/tests.sh
  • ci/scripts/model_health_check.py
  • ci/scripts/path_checks.py
  • ci/scripts/run_tests.py
  • ci/vale/styles/config/vocabularies/nat/accept.txt
  • conftest.py
  • docs/source/build-workflows/embedders.md
  • docs/source/build-workflows/llms/index.md
  • docs/source/build-workflows/llms/using-local-llms.md
  • docs/source/components/agents/index.md
  • docs/source/components/agents/parallel-executor/index.md
  • docs/source/components/agents/parallel-executor/parallel-executor.md
  • docs/source/components/agents/react-agent/react-agent.md
  • docs/source/components/agents/reasoning-agent/reasoning-agent.md
  • docs/source/components/agents/rewoo-agent/rewoo-agent.md
  • docs/source/components/agents/router-agent/router-agent.md
  • docs/source/components/agents/tool-calling-agent/tool-calling-agent.md
  • docs/source/components/auth/mcp-auth/index.md
  • docs/source/components/auth/mcp-auth/mcp-auth-token-storage.md
  • docs/source/components/functions/text-to-sql.md
  • docs/source/components/integrations/frameworks.md
  • docs/source/conf.py
  • docs/source/extend/testing/add-unit-tests-for-tools.md
  • docs/source/get-started/installation.md
  • docs/source/get-started/tutorials/add-tools-to-a-workflow.md
  • docs/source/get-started/tutorials/create-a-new-workflow.md
  • docs/source/improve-workflows/evaluate.md
  • docs/source/improve-workflows/finetuning/dpo_with_nemo_customizer.md
  • docs/source/improve-workflows/optimizer.md
  • docs/source/improve-workflows/profiler.md
  • docs/source/improve-workflows/sizing-calc.md
  • docs/source/reference/cli.md
  • docs/source/reference/rest-api/api-server-endpoints.md
  • docs/source/reference/rest-api/evaluate-api.md
  • docs/source/resources/migration-guide.md
  • docs/source/run-workflows/a2a-server.md
  • docs/source/run-workflows/observe/observe-workflow-with-catalyst.md
  • docs/source/run-workflows/observe/observe-workflow-with-data-flywheel.md
  • docs/source/run-workflows/observe/observe-workflow-with-dbnl.md
  • docs/source/run-workflows/observe/observe-workflow-with-dynatrace.md
  • docs/source/run-workflows/observe/observe-workflow-with-galileo.md
  • docs/source/run-workflows/observe/observe-workflow-with-langsmith.md
  • docs/source/run-workflows/observe/observe-workflow-with-otel-collector.md
  • docs/source/run-workflows/observe/observe-workflow-with-phoenix.md
  • docs/source/run-workflows/observe/observe-workflow-with-weave.md
  • docs/source/run-workflows/observe/observe.md
  • examples/MCP/kaggle_mcp/README.md
  • examples/MCP/simple_auth_mcp/configs/config-mcp-auth-jira-per-user.yml
  • examples/MCP/simple_calculator_mcp/README.md
  • examples/MCP/simple_calculator_mcp/configs/config-mcp-client.yml
  • examples/MCP/simple_calculator_mcp/configs/config-per-user-mcp-client.yml
  • examples/README.md
  • examples/advanced_agents/alert_triage_agent/pyproject.toml
  • examples/advanced_agents/alert_triage_agent/src/nat_alert_triage_agent/configs/config_offline_mode.yml
  • examples/advanced_agents/alert_triage_agent/src/nat_alert_triage_agent/register.py
  • examples/advanced_agents/alert_triage_agent/tests/test_alert_triage_agent_workflow.py
  • examples/advanced_agents/profiler_agent/README.md
  • examples/advanced_agents/profiler_agent/configs
  • examples/advanced_agents/profiler_agent/src/nat_profiler_agent/agent.py
  • examples/advanced_agents/profiler_agent/src/nat_profiler_agent/configs/config.yml
  • examples/advanced_agents/profiler_agent/src/nat_profiler_agent/data_models.py
  • examples/advanced_agents/profiler_agent/src/nat_profiler_agent/prompts.py
  • examples/advanced_agents/profiler_agent/src/nat_profiler_agent/register.py
  • examples/advanced_agents/profiler_agent/src/nat_profiler_agent/tool/flow_chart.py
  • examples/advanced_agents/profiler_agent/src/nat_profiler_agent/tool/px_query.py
  • examples/advanced_agents/profiler_agent/src/nat_profiler_agent/tool/response_composer.py
  • examples/advanced_agents/profiler_agent/src/nat_profiler_agent/tool/token_usage.py
  • examples/advanced_agents/profiler_agent/src/nat_profiler_agent/tool/utils.py
  • examples/advanced_agents/profiler_agent/tests/test_profiler_agent.py
  • examples/agents/auto_memory_wrapper/README.md
  • examples/agents/auto_memory_wrapper/configs/config_mem0.yml
  • examples/agents/auto_memory_wrapper/configs/config_zep.yml
  • examples/agents/mixture_of_agents/README.md
  • examples/agents/mixture_of_agents/configs/config.yml
  • examples/agents/pyproject.toml
  • examples/agents/react/README.md
  • examples/agents/tests/conftest.py
  • examples/agents/tests/test_agents.py
  • examples/agents/tool_calling/README.md
  • examples/agents/tool_calling/configs/config-reasoning.yml
  • examples/agents/tool_calling/configs/config-responses-api.yml
  • examples/control_flow/parallel_executor/README.md
  • examples/control_flow/parallel_executor/configs
  • examples/control_flow/parallel_executor/pyproject.toml
  • examples/control_flow/parallel_executor/src/nat_parallel_executor/__init__.py
  • examples/control_flow/parallel_executor/src/nat_parallel_executor/configs/config.yml
  • examples/control_flow/parallel_executor/tests/test_parallel_executor_example.py
  • examples/documentation_guides/locally_hosted_llms/nim_config.yml
  • examples/dynamo_integration/.env.example
  • examples/dynamo_integration/README.md
  • examples/dynamo_integration/latency_sensitivity_demo/INSTALL_LIBRARY.md
  • examples/dynamo_integration/latency_sensitivity_demo/README.md
  • examples/dynamo_integration/latency_sensitivity_demo/configs
  • examples/dynamo_integration/latency_sensitivity_demo/data
  • examples/dynamo_integration/latency_sensitivity_demo/pyproject.toml
  • examples/dynamo_integration/latency_sensitivity_demo/src/latency_sensitivity_demo/__init__.py
  • examples/dynamo_integration/latency_sensitivity_demo/src/latency_sensitivity_demo/compare_sensitivity_perf.py
  • examples/dynamo_integration/latency_sensitivity_demo/src/latency_sensitivity_demo/configs/config_profile.yml
  • examples/dynamo_integration/latency_sensitivity_demo/src/latency_sensitivity_demo/configs/config_with_trie.yml


Warning

Ignoring CodeRabbit configuration file changes. For security, only the configuration from the base branch is applied for open source repositories.

Walkthrough

Adds a new nvidia-nat-memmachine subpackage: packaging and metadata, README and example notebook, a MemMachineEditor and memmachine_memory_client factory integrating with the MemMachine SDK, extensive unit and integration tests, and adds the package to the meta-package dependencies.

Changes

Cohort / File(s): Summary

  • Package config & metadata (packages/nvidia_nat_memmachine/pyproject.toml, packages/nvidia_nat_memmachine/src/nat/meta/pypi.md, packages/nvidia_nat_all/pyproject.toml): New pyproject defining nvidia-nat-memmachine with setuptools/uv config and an entry point nat.components → nat_memmachine; PyPI README/metadata added; the meta-package nvidia-nat-all now depends on nvidia-nat-memmachine.
  • Documentation & examples (packages/nvidia_nat_memmachine/README.md, packages/nvidia_nat_memmachine/memmachine_memory_example.ipynb): Adds a README describing MemMachine setup/usage and an example Jupyter notebook demonstrating server setup, client configuration, add/search workflows, and agent integration.
  • Core implementation (packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py, packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memory.py, packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/register.py, packages/nvidia_nat_memmachine/src/nat/plugins/__init__.py, packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/__init__.py): Introduces MemMachineEditor implementing async add/search/remove mapped to the MemMachine SDK, MemMachineMemoryClientConfig and the memmachine_memory_client factory (dynamic import, client/project resolution, retry wrapping), a registration module, and license headers.
  • Tests — unit / API (packages/nvidia_nat_memmachine/tests/test_memory.py, packages/nvidia_nat_memmachine/tests/test_memmachine_editor.py, packages/nvidia_nat_memmachine/tests/test_memmachine_api_calls.py): Adds comprehensive unit tests and API-call spies validating config, client init/error handling, editor add/search/remove logic, SDK call mapping, metadata transformations, and retry behavior.
  • Tests — integration / E2E (packages/nvidia_nat_memmachine/tests/test_memmachine_integration.py, packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py): Adds integration and end-to-end tests exercising live MemMachine server connectivity, conversation/direct memory workflows, indexing delays, and multi-item retrieval.
  • Package boilerplate (packages/nvidia_nat_memmachine/src/nat/plugins/__init__.py, packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/__init__.py): Adds SPDX/Apache-2.0 license headers and package initializers (no functional code).

Sequence Diagram(s)

sequenceDiagram
    participant Agent as Agent/Application
    participant Builder as WorkflowBuilder
    participant Factory as memmachine_memory_client
    participant Editor as MemMachineEditor
    participant SDK as MemMachine SDK
    participant Server as MemMachine Server

    Agent->>Builder: register memory(config)
    Builder->>Factory: instantiate with config
    Factory->>SDK: create MemMachineClient(base_url, timeout, retries)
    Factory->>SDK: get_or_create_project(org_id, project_id) (optional)
    SDK->>Server: API request (project)
    Server-->>SDK: project response
    Factory->>Editor: initialize with client/project
    Factory-->>Builder: yield memory_editor

    Agent->>Editor: add_items(conversation/direct)
    Editor->>SDK: memory.add (episodic and/or semantic)
    SDK->>Server: store memories
    Server-->>SDK: ack/indexed
    SDK-->>Editor: add responses

    Agent->>Editor: search(query, top_k)
    Editor->>SDK: memory.search(query, limit=top_k)
    SDK->>Server: execute query
    Server-->>SDK: episodic + semantic results
    SDK-->>Editor: results
    Editor->>Agent: return list[MemoryItem]
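The add/search flow in the diagram can be sketched in miniature. The class and method names below mirror the walkthrough but are illustrative stand-ins, not the real MemMachine SDK:

```python
# Miniature sketch of the walkthrough's editor flow. FakeSDK stands in for
# the MemMachine SDK; names here are illustrative, not the real API.
import asyncio

class FakeSDK:
    def __init__(self):
        self._store: list[str] = []

    def add(self, text: str) -> None:  # blocking call in the real SDK
        self._store.append(text)

    def search(self, query: str, limit: int) -> list[str]:
        return [t for t in self._store if query in t][:limit]

class MemoryEditor:
    def __init__(self, sdk: FakeSDK):
        self._sdk = sdk

    async def add_items(self, items: list[str]) -> None:
        # Push blocking SDK calls onto worker threads, one per item
        await asyncio.gather(*(asyncio.to_thread(self._sdk.add, i) for i in items))

    async def search(self, query: str, top_k: int = 5) -> list[str]:
        return await asyncio.to_thread(self._sdk.search, query, top_k)

async def main() -> list[str]:
    editor = MemoryEditor(FakeSDK())
    await editor.add_items(["user likes tea", "user is allergic to peanuts"])
    return await editor.search("peanuts", top_k=1)

print(asyncio.run(main()))  # ['user is allergic to peanuts']
```

The real integration additionally maps results into `MemoryItem` objects and routes between episodic and semantic memory types.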

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title clearly and accurately describes the main change: adding MemMachine memory integration for NeMo Agent toolkit. It uses imperative mood and is concise at 56 characters.
  • Docstring Coverage (✅ Passed): Docstring coverage is 84.09%, which is sufficient. The required threshold is 80.00%.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 16

🤖 Fix all issues with AI agents
In `@packages/nvidia_nat_memmachine/pyproject.toml`:
- Around line 16-25: Update package dependency lists: in
packages/nvidia_nat_all/pyproject.toml add "nvidia-nat-memmachine" (no version
constraint) to the dependencies array, and in
packages/nvidia_nat_memmachine/pyproject.toml change the memmachine-server entry
from "memmachine-server~=0.2.2" to "memmachine-server~=0.2.1" so the declared
dependency matches the published version; ensure the dependencies remain sorted
per the file comments.

In
`@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py`:
- Around line 1-9: Add the standard SPDX Apache-2.0 license header to the top of
memmachine_editor.py (above all imports); ensure the exact SPDX short-form line
"SPDX-License-Identifier: Apache-2.0" is present and include any required
copyright owner/year line per project policy so the file containing imports like
asyncio, requests, MemoryType, MemoryEditor, and MemoryItem is properly
licensed.
- Around line 185-196: The add_memory() closure captures outer loop variables
(memory, memory_text, metadata, memory_types) by reference so all scheduled
tasks use the last values; fix by binding those values into the closure before
scheduling (e.g., define add_memory(memory=memory, memory_text=memory_text,
metadata=metadata, memory_types=memory_types): or use functools.partial to bind
arguments) so the memory.add(...) call inside add_memory uses the per-iteration
values when you call task = asyncio.to_thread(add_memory).
- Around line 299-304: The bare except in the episodic_by_conversation loop
around conv_episodes.sort hides real errors; replace it with a targeted
exception handler that catches likely errors (e.g., TypeError, AttributeError,
ValueError) and log the failure including conv_key and the exception before
continuing. Specifically, wrap the conv_episodes.sort(key=lambda e:
e.get("created_at") or e.get("timestamp") or "") call in an except (TypeError,
AttributeError, ValueError) as e block and emit a warning or error via the
module's logger (or logging.warning) referencing conv_key and e so issues aren't
silently swallowed.
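The closure fix requested above is the classic Python late-binding pitfall. This standalone sketch (illustrative names, not the actual editor code) shows why binding through a default argument matters when work is deferred to asyncio.to_thread:

```python
# Standalone illustration of the late-binding bug flagged above: a closure
# captures the loop variable by reference, so every task deferred via
# asyncio.to_thread sees the final value unless it is bound explicitly.
import asyncio

async def broken() -> list[str]:
    tasks = []
    for text in ["a", "b", "c"]:
        def add_memory():            # captures `text` by reference
            return text
        tasks.append(asyncio.to_thread(add_memory))
    return await asyncio.gather(*tasks)   # runs after the loop has finished

async def fixed() -> list[str]:
    tasks = []
    for text in ["a", "b", "c"]:
        def add_memory(text=text):   # default argument binds the value now
            return text
        tasks.append(asyncio.to_thread(add_memory))
    return await asyncio.gather(*tasks)

print(asyncio.run(broken()))  # ['c', 'c', 'c']
print(asyncio.run(fixed()))   # ['a', 'b', 'c']
```

functools.partial(add_memory, memory, memory_text, metadata, memory_types) achieves the same per-iteration binding if default arguments feel too implicit.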

In `@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memory.py`:
- Around line 85-88: The except block in the project-creation fallback (inside
memory.py where the try/except around client/project creation is located)
currently swallows all exceptions with "pass"; replace that with a
logger.exception(...) call to record the full stack trace and context (e.g.,
"Failed to create project, falling back to client usage") so debugging info is
preserved; keep the existing fallback behavior (do not re-raise) but ensure the
exception is logged via logger.exception in the except Exception as e: handler.
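The suggested pattern can be sketched with a stand-in failure (the function and error below are hypothetical, not the real project API):

```python
# Sketch of the recommended pattern: keep the fallback behavior, but record
# the full traceback with logger.exception instead of a silent `pass`.
import io
import logging

logger = logging.getLogger("memmachine.sketch")
buf = io.StringIO()
logger.addHandler(logging.StreamHandler(buf))
logger.setLevel(logging.ERROR)

def get_or_create_project(client):
    try:
        raise RuntimeError("project API unavailable")  # simulated failure
    except Exception:
        # Logs the message plus the active stack trace, then falls back
        # without re-raising, preserving the original control flow.
        logger.exception("Failed to create project, falling back to client usage")
        return client

fallback = get_or_create_project(client="bare-client")
print("Traceback" in buf.getvalue())  # True: the stack trace was captured
```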

In `@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/register.py`:
- Line 1: Add the standard SPDX Apache-2.0 license header at the top of this
file (register.py) before any code — ensure the file begins with the SPDX
license identifier (e.g., a comment containing "SPDX-License-Identifier:
Apache-2.0") so the header appears above the existing import line "from . import
memory"; update only the file header area to include the required license
comment.

In `@packages/nvidia_nat_memmachine/tests/API_CALL_VERIFICATION.md`:
- Around line 1-5: Add the standard SPDX Apache-2.0 license header as the very
first line of the markdown file (API_CALL_VERIFICATION.md) by inserting
"SPDX-License-Identifier: Apache-2.0" at the top of the file so the file now
begins with that license identifier.

In `@packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py`:
- Around line 1-24: Add the missing SPDX Apache-2.0 license header at the top of
the file (above the docstring or immediately after the shebang) to satisfy
licensing requirements, and mark the integration test with pytest by importing
pytest and decorating the test function (e.g., test_add_and_retrieve) with
`@pytest.mark.integration` so external-service tests are properly classified;
update imports to include "import pytest" and ensure the decorator is applied to
the test function in the script.

In `@packages/nvidia_nat_memmachine/tests/test_memmachine_api_calls.py`:
- Around line 1-2: Update the SPDX header years in the file's top comment lines:
change the copyright year range in the SPDX-FileCopyrightText line (currently
"2024-2025") to include 2026 (e.g., "2024-2026") and ensure the
SPDX-License-Identifier line remains unchanged; edit the two header comment
lines at the top of the file to reflect the new year range.
- Line 366: The assignment to results from calling editor_with_spy.search is
unused and should be removed to avoid lint warnings; simply call await
editor_with_spy.search(...) without assigning its return, or if you prefer
preserving a placeholder, assign to _ (underscore) instead—update the call site
where results is assigned in the test (the await editor_with_spy.search
invocation) to drop the unused variable.
- Line 50: Update the signature of get_calls to use explicit PEP 604 union
typing for the optional parameter: replace the parameter declaration
method_name: str = None with method_name: str | None, and ensure any related
type hints or usages in the get_calls function body or its callers remain valid
with the new annotation.

In `@packages/nvidia_nat_memmachine/tests/test_memmachine_integration.py`:
- Around line 173-184: The for-loop in test_memmachine_integration.py uses an
unused loop variable named "attempt" which triggers a lint warning; rename it to
"_attempt" in both occurrences (the loop that calls memory_client.search with
query="morning work allergy peanuts" and the similar loop around lines ~403-414)
so the intent is preserved and the linter is satisfied, keeping the loop logic
and the await asyncio.sleep(2) retry behavior unchanged.
- Around line 1-2: Update the SPDX header year range in the file's top comment
to include 2026: locate the SPDX lines at the top of the file (the
SPDX-FileCopyrightText and SPDX-License-Identifier header) and change the
copyright year range "2024-2025" to "2024-2026" so the notice accurately
reflects the file modification year.
- Around line 22-25: Update the module docstring in the
tests/test_memmachine_integration.py file to wrap code entities in backticks to
satisfy Vale and docstring rules: surround MemMachine server with `MemMachine`,
the environment variable name with `MEMMACHINE_BASE_URL`, and the test command
with backticks like `pytest tests/test_memmachine_integration.py -v`; ensure any
other inline code-like tokens in that docstring are also backticked.

In `@packages/nvidia_nat_memmachine/tests/test_memory.py`:
- Around line 1-2: Update the SPDX header year range in the file header comment
to include 2026; specifically modify the top two comment lines (the SPDX
copyright and SPDX-License-Identifier block) so the year range reads "2024-2026"
instead of "2024-2025" to reflect the file modification in 2026.
- Around line 155-159: The test test_memmachine_memory_client_config_validation
currently catches a broad Exception when constructing
MemMachineMemoryClientConfig; change the assertion to catch the specific
pydantic validation error by replacing pytest.raises(Exception) with
pytest.raises(ValidationError) and add the necessary import for ValidationError
from pydantic so the test asserts the concrete validation failure from
MemMachineMemoryClientConfig.
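Assuming a pydantic-based config (the model below is a stand-in, not the real MemMachineMemoryClientConfig), the narrowed assertion looks like:

```python
# Stand-in config model; the real MemMachineMemoryClientConfig and its
# fields are assumptions here -- only the ValidationError pattern matters.
import pytest
from pydantic import BaseModel, ValidationError

class MemoryClientConfig(BaseModel):
    base_url: str      # required, so omitting it must fail validation
    timeout: int = 30

def test_config_rejects_missing_field():
    # Narrow assertion: only pydantic's ValidationError passes the test,
    # unlike pytest.raises(Exception), which would mask unrelated bugs.
    with pytest.raises(ValidationError):
        MemoryClientConfig()  # base_url missing

test_config_rejects_missing_field()
```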
🧹 Nitpick comments (7)
packages/nvidia_nat_memmachine/src/nat/meta/pypi.md (1)

20-24: Capitalize "Toolkit" in the heading.

Per coding guidelines, titles/headings should use "Toolkit" (capital T) while body text uses "toolkit" (lowercase t). Also ensure the file ends with a single newline.

📝 Suggested fix
-# NVIDIA NeMo Agent toolkit Subpackage
+# NVIDIA NeMo Agent Toolkit Subpackage
 This is a subpackage for MemMachine memory integration in NeMo Agent toolkit.

 For more information about the NVIDIA NeMo Agent toolkit, please visit the [NeMo Agent toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
+
packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memory.py (3)

41-42: Add return type annotation and consider documenting unused builder parameter.

Per coding guidelines, public APIs require type hints on return values. The function should declare its return type. The builder parameter appears unused (flagged by static analysis) — if it's required by the @register_memory decorator contract, consider adding a comment or using _ prefix to indicate intentional non-use.

📝 Suggested fix
+from collections.abc import AsyncGenerator
+from typing import Any
+
 @register_memory(config_type=MemMachineMemoryClientConfig)
-async def memmachine_memory_client(config: MemMachineMemoryClientConfig, builder: Builder):
+async def memmachine_memory_client(
+    config: MemMachineMemoryClientConfig,
+    builder: Builder,  # Required by `@register_memory` contract, unused here
+) -> AsyncGenerator[Any, None]:

34-38: Clarify the distinction between max_retries and num_retries from RetryMixin.

The class defines max_retries for SDK-level retries, while inheriting num_retries from RetryMixin (used at line 95 for patch_with_retry). This dual retry configuration may confuse users. Consider adding a note in the docstring explaining when each is used.

📝 Suggested docstring addition
     """
     Configuration for MemMachine memory client.
     
     Based on the MemMachine Python SDK as documented at:
     https://github.com/MemMachine/MemMachine/blob/main/docs/examples/python.mdx
     
     Note: This integration is for local/self-hosted MemMachine instances.
     LLM API keys (e.g., OpenAI) are configured in the MemMachine cfg.yml file,
     not in this client configuration.
+    
+    Retry behavior:
+        - `max_retries`: SDK-level retries for individual MemMachine API calls.
+        - `num_retries` (inherited from RetryMixin): Application-level retries with
+          status code filtering, applied to the memory editor wrapper.
     """

92-98: Redundant isinstance check — config always inherits from RetryMixin.

Since MemMachineMemoryClientConfig inherits from RetryMixin, this check is always True. Consider removing the check for clarity, or add a comment if this is intentional defensive coding for future flexibility.

📝 Suggested simplification
-    if isinstance(config, RetryMixin):
-        memory_editor = patch_with_retry(
-            memory_editor,
-            retries=config.num_retries,
-            retry_codes=config.retry_on_status_codes,
-            retry_on_messages=config.retry_on_errors
-        )
+    memory_editor = patch_with_retry(
+        memory_editor,
+        retries=config.num_retries,
+        retry_codes=config.retry_on_status_codes,
+        retry_on_messages=config.retry_on_errors
+    )
packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py (1)

471-490: Unused variable delete_semantic and incomplete bulk deletion path.

The variable delete_semantic is assigned but never used. If bulk deletion should respect this flag in the future, consider either implementing partial support or removing the variable to avoid confusion.

Proposed fix - remove unused variable
             project_id = kwargs.pop("project_id", None)
             org_id = kwargs.pop("org_id", None)
-            delete_semantic = kwargs.pop("delete_semantic_memory", False)

             # Note: MemMachine SDK doesn't have a delete_all method
packages/nvidia_nat_memmachine/memmachine_memory_example.ipynb (1)

386-403: Consider using httpx for consistency with coding guidelines.

The coding guidelines prefer httpx over requests. While requests works fine here for a simple health check, consider aligning with the codebase convention.

packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py (1)

179-183: Use logger.exception() when catching and logging exceptions without re-raising.

Per coding guidelines, when catching and logging exceptions without re-raising, use logger.exception() to capture full stack trace. The current approach using traceback.print_exc() works but doesn't follow the project's logging conventions.

@willkill07 willkill07 (Member) left a comment

Thank you for your contribution!

This needs some work, and I'm going to start with the low-hanging fruit and a quick review before addressing the files that GitHub hid by default due to size.

Overall, all comments by coderabbit and additional ones from me need to be addressed.

Additionally, copyright headers for new files should include the current year rather than 2024-2025

I will admit I don't know how many commits behind develop you are, but you should assume a base as being near or close to top of tree.

This is my own opinion here, but I also don't like the idea of mandating a local server. Is there not an API/microservice that can be used instead?

Finally, if you have any questions don't hesitate to ask!

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/nvidia_nat_all/pyproject.toml (1)

23-33: Add nvidia-nat-memmachine to the workspace sources (and sync uv.lock).

Since this dependency is a workspace package, it should be listed in [tool.uv.sources] to ensure local resolution. Also make sure uv.lock is updated to reflect the new dependency.

🧩 Proposed update
 [tool.uv.sources]
 nvidia-nat = { workspace = true }
 nvidia-nat-agno = { workspace = true }
 nvidia-nat-autogen = { workspace = true }
 nvidia-nat-crewai = { workspace = true }
 nvidia-nat-langchain = { workspace = true }
 nvidia-nat-llama-index = { workspace = true }
 nvidia-nat-mcp = { workspace = true }
 nvidia-nat-mem0ai = { workspace = true }
+nvidia-nat-memmachine = { workspace = true }
 nvidia-nat-mysql = { workspace = true }

Based on learnings.

🤖 Fix all issues with AI agents
In
`@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py`:
- Around line 324-331: In the try/except around conv_episodes.sort in
memmachine_editor (the block catching TypeError/AttributeError/ValueError for
conv_key), replace the logger.warning call with logger.exception so the full
stack trace is preserved when the sort fails; keep the same descriptive message
referencing conv_key and that we are "continuing without sorting" but call
logger.exception(...) instead of logger.warning(...).
- Around line 29-45: The docstring for the MemMachineEditor class contains
several metadata keys and code entities that are not formatted as inline code;
update the MemMachineEditor docstring so all code identifiers (e.g., MemoryItem,
and metadata keys `session_id`, `agent_id`, `group_id`, `project_id`, `org_id`,
and the class name `MemMachineClient` reference) are wrapped in backticks to
satisfy Vale/coding guidelines and ensure clarity; keep the explanatory text
unchanged and only modify the docstring formatting around these identifiers.
- Around line 179-190: The closure add_memory currently captures the outer
variable memory by reference, causing deferred asyncio.to_thread tasks to all
use the last memory instance; fix this by binding memory into the closure
signature as a default parameter (e.g., add a parameter like memory=memory) so
each add_memory captures its own memory instance before scheduling; update the
add_memory definition that calls memory.add(...) to include memory as a default
arg consistent with the identical pattern used in the other branch.
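The late-binding pitfall this comment describes is a general Python behavior, and the default-argument fix can be sketched in isolation (a minimal sketch; the `memory`/`add_memory` names mirror the review comment, and the real code schedules the closures via `asyncio.to_thread`):

```python
import asyncio


async def main():
    buggy_results = []
    fixed_results = []

    # Buggy pattern: every closure captures the *variable* `memory`, so all
    # deferred tasks see the value from the final loop iteration.
    buggy_tasks = []
    for memory in ("first", "second", "third"):
        async def add_memory():
            buggy_results.append(memory)
        buggy_tasks.append(add_memory)

    # Fixed pattern: binding `memory` as a default argument snapshots the
    # current value at definition time, so each closure keeps its own copy.
    fixed_tasks = []
    for memory in ("first", "second", "third"):
        async def add_memory_fixed(memory=memory):
            fixed_results.append(memory)
        fixed_tasks.append(add_memory_fixed)

    await asyncio.gather(*(task() for task in buggy_tasks))
    await asyncio.gather(*(task() for task in fixed_tasks))
    return buggy_results, fixed_results


buggy, fixed = asyncio.run(main())
print(buggy)  # all three closures saw the last loop value
print(fixed)  # each closure kept the value it was defined with
```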

In `@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memory.py`:
- Around line 48-51: The function signature for memmachine_memory_client
currently defines an unused parameter named builder which ruff flags; silence it
by renaming builder to _builder (or alternatively append a per-argument lint
suppression like "# noqa: ARG001") in the memmachine_memory_client definition so
the parameter remains for the `@register_memory` contract but no lint error is
raised.
- Around line 55-62: The project incorrectly requires the server package while
code imports the client: update the dependency in pyproject.toml from
memmachine-server to memmachine-client, and update the ImportError text in
memory.py (the block that catches ImportError for "from memmachine import
MemMachineClient") to instruct installing memmachine-client (e.g., "pip install
memmachine-client") and adjust any package name references/links so they point
to the client package documentation; keep the rest of the error message
structure (include the caught exception e) intact.

In `@packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py`:
- Around line 199-202: Replace the logger.exception call in the except block
that captures "Exception as e" with logger.error(..., exc_info=True) so the
error message is logged with the traceback without duplicating stack traces on
re-raise; specifically, locate the except Exception as e block (where
logger.exception("Error during test execution") is called) and change it to log
the same message using logger.error("Error during test execution",
exc_info=True) before re-raising the exception.

In `@packages/nvidia_nat_memmachine/tests/test_memmachine_integration.py`:
- Around line 68-85: The two pytest fixture functions lack return type
annotations; update test_config_fixture to declare a return type of
MemMachineMemoryClientConfig (i.e., def test_config_fixture(...) ->
MemMachineMemoryClientConfig:) and update test_user_id_fixture to declare a
return type of str (def test_user_id_fixture(...) -> str:); ensure the
MemMachineMemoryClientConfig symbol is imported or referenced correctly in the
test module if not already.
- Around line 88-156: The async test function
test_add_and_retrieve_conversation_memory is missing the pytest-asyncio marker;
add `@pytest.mark.asyncio` above the function (place it directly after
`@pytest.mark.slow`) so pytest-asyncio runs the async test correctly.
- Around line 49-65: Change the broad except Exception in the MemMachineClient
health-check block to catch the specific requests exception type raised by the
client (e.g., requests.exceptions.RequestException or requests.RequestException)
so real errors aren't swallowed; import the exception from requests at top if
needed and keep the same behavior of raising RuntimeError when fail_missing is
true or calling pytest.skip otherwise while still preserving the existing
ImportError handling for the memmachine import and the use of
MemMachineClient.health_check().
♻️ Duplicate comments (2)
packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py (1)

46-209: Avoid running live MemMachine calls in the default test suite.

test_add_and_retrieve is still a discovered test and will run without the integration marker, even though it requires a live MemMachine instance. Make it a helper (non-test) or mark it as integration to keep it out of default runs.

✅ Suggested restructuring
-async def test_add_and_retrieve():
+async def _run_add_and_retrieve():
     """Test adding memories and retrieving them."""
@@
-@pytest.mark.integration
-async def test_add_and_retrieve_integration():
+@pytest.mark.integration
+async def test_add_and_retrieve_integration():
     """Integration test for adding and retrieving memories."""
-    await test_add_and_retrieve()
+    await _run_add_and_retrieve()
@@
-if __name__ == "__main__":
-    asyncio.run(test_add_and_retrieve())
+if __name__ == "__main__":
+    asyncio.run(_run_add_and_retrieve())

As per coding guidelines.

packages/nvidia_nat_memmachine/tests/test_memmachine_integration.py (1)

16-24: Backtick code-like tokens in the module/fixture docstrings.

The docstrings still leave code entities and the localhost URL unformatted, which can trigger Vale. As per coding guidelines, please wrap them in backticks for consistency.

✍️ Proposed fix
-Integration tests for MemMachine memory integration.
+Integration tests for `MemMachine` memory integration.
@@
-Set `MEMMACHINE_BASE_URL` environment variable to override default (http://localhost:8080).
+Set `MEMMACHINE_BASE_URL` environment variable to override default (`http://localhost:8080`).
@@
-Set MEMMACHINE_BASE_URL environment variable to override default (http://localhost:8080).
+Set `MEMMACHINE_BASE_URL` environment variable to override default (`http://localhost:8080`).

Also applies to: 39-44

@Charlie-Yi-2002 Charlie-Yi-2002 marked this pull request as draft January 23, 2026 20:46
@Charlie-Yi-2002 Charlie-Yi-2002 marked this pull request as ready for review January 26, 2026 19:56
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py`:
- Around line 205-208: The async integration test function
test_add_and_retrieve_integration is missing the `@pytest.mark.asyncio` decorator;
add `@pytest.mark.asyncio` above the async def test_add_and_retrieve_integration()
declaration so pytest runs it as an async test (keep the existing
`@pytest.mark.integration` marker and the call to await test_add_and_retrieve()).

In `@packages/nvidia_nat_memmachine/tests/test_memmachine_editor.py`:
- Around line 1-2: Update the SPDX copyright header lines to include 2026 by
changing the year range in the two SPDX comment lines: adjust the copyright
comment line that currently reads "2024-2025" to "2024-2026" and keep the
SPDX-License-Identifier line as-is; ensure these changes are applied at the top
of the file where the SPDX header is defined.
🧹 Nitpick comments (2)
packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py (1)

322-329: Remove redundant exception object from logger.exception call.

logger.exception() automatically includes the exception information, so including {e} in the message is redundant.

Proposed fix
             except (TypeError, AttributeError, ValueError) as e:
                 # Skip sorting if timestamps are missing or incompatible
                 logger.exception(
-                    f"Failed to sort episodes for conversation '{conv_key}': {e}. "
+                    f"Failed to sort episodes for conversation '{conv_key}'. "
                     "Continuing without sorting."
                 )
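A quick stdlib-only illustration of why interpolating `{e}` is redundant (a sketch, not project code): `logger.exception` appends the active traceback on its own.

```python
import io
import logging

# Route log output to an in-memory buffer so we can inspect it.
stream = io.StringIO()
logging.basicConfig(stream=stream, level=logging.INFO, force=True)
logger = logging.getLogger("demo")

try:
    raise ValueError("bad timestamp")
except ValueError:
    # No need to put the exception into the message: logger.exception()
    # attaches the full traceback (including the ValueError) automatically.
    logger.exception("Failed to sort episodes. Continuing without sorting.")

output = stream.getvalue()
print(output)
```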
packages/nvidia_nat_memmachine/tests/test_memmachine_integration.py (1)

116-118: Consider moving asyncio import to module level.

The asyncio module is imported inside multiple test functions. Moving it to the top of the file with other imports would be cleaner.

Proposed fix

Add to imports at top of file:

import asyncio

Then remove the inline imports in each test function.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@packages/nvidia_nat_memmachine/tests/test_memmachine_editor.py`:
- Around line 492-496: The mock return for mock_memory_instance.search is
inconsistent with other tests: instead of assigning a plain dict set
mock_memory_instance.search.return_value to a Mock (or MagicMock) instance whose
.content attribute contains the dict used in other tests (e.g.,
{"episodic_memory": [], "semantic_memory": [], "episode_summary": []}), so that
MemMachineEditor.search (and any code accessing result.content) behaves
consistently; update the test to wrap the dict in a Mock with .content and
adjust any assertions if they expect the Mock shape.
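The suggested shape change can be sketched with `unittest.mock` alone (the payload keys come from the review comment; the editor itself is not involved here):

```python
from unittest.mock import Mock

# Payload shape used by the other tests, per the review comment.
payload = {"episodic_memory": [], "semantic_memory": [], "episode_summary": []}

mock_memory_instance = Mock()
# Wrap the dict in a Mock exposing `.content`, so code that reads
# `result.content` (as the editor's search path reportedly does) works,
# instead of returning a bare dict that has no such attribute.
mock_memory_instance.search.return_value = Mock(content=payload)

result = mock_memory_instance.search("some query")
print(result.content["episodic_memory"])
```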
🧹 Nitpick comments (1)
packages/nvidia_nat_memmachine/tests/test_memmachine_editor.py (1)

178-202: Consider removing unused fixture parameters.

The mock_client and mock_project parameters are injected but not used in the test body. Pytest automatically instantiates the fixture dependency chain when memmachine_editor_with_client is requested, so you only need to explicitly request fixtures you directly interact with.

This pattern appears in several tests: test_add_items_with_direct_memory, test_add_items_with_memory_text_only, test_search_success, test_remove_items_by_memory_id_episodic, test_remove_items_by_memory_id_semantic, test_add_items_with_custom_project_and_org, and test_search_with_custom_project_and_org.

♻️ Example fix for this test
 async def test_add_items_with_direct_memory(
     memmachine_editor_with_client: MemMachineEditor,
-    mock_client: Mock,
-    mock_project: Mock,
     mock_memory_instance: Mock,
     sample_direct_memory_item: MemoryItem
 ):

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py`:
- Around line 111-193: Add assertions after each search to validate results: for
Test 1 (first retrieval) assert retrieved is non-empty and that
retrieved[0].memory or retrieved[0].conversation contains an expected
substring/tag; for Test 2 (direct_memory) assert the search result length > 0
and that at least one result's memory includes "allergic to peanuts" or has tags
containing "allergy"/"preference"; for Test 3 (multiple_memories) assert
len(all_memories) >= 3 and that tags include "fact_1"/"fact_2"/"fact_3". Place
these assertions immediately after the corresponding memory_client.search calls
that populate retrieved and all_memories (use the MemoryItem,
memory_client.search, memory_client.add_items, user_id/session_id/agent_id
variables to locate the sections).
🧹 Nitpick comments (1)
packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py (1)

98-100: Replace fixed sleeps with polling + timeout to reduce flakiness.

A hardcoded 2‑second wait can be too short or unnecessarily long depending on load. Consider polling the search until results appear (bounded by a timeout) to make the test deterministic.

Also applies to: 135-136, 175-176
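A polling helper along these lines could replace the fixed sleeps (a sketch under stated assumptions: `fake_search` stands in for the real `memory_client.search` call, which is eventually consistent):

```python
import time


def poll_until(fetch, predicate, timeout=10.0, interval=0.2):
    """Call `fetch` until `predicate(result)` is true or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while True:
        result = fetch()
        if predicate(result):
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(interval)


# Toy stand-in for an eventually-consistent search: empty on the first
# two calls, populated from the third call on.
calls = {"n": 0}


def fake_search():
    calls["n"] += 1
    return ["memory"] if calls["n"] >= 3 else []


results = poll_until(fake_search, lambda r: len(r) > 0, timeout=5.0, interval=0.01)
print(results)
```

The test then becomes deterministic: it returns as soon as results appear and fails loudly with `TimeoutError` if they never do, rather than silently passing or flaking on a fixed 2-second wait.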

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@packages/nvidia_nat_memmachine/memmachine_memory_example.ipynb`:
- Around line 108-113: Update the markdown text to remove possessive
constructions for inanimate objects: change "MemMachine's LLM provider API keys
(e.g., OpenAI) are configured in the MemMachine `cfg.yml` file, not here." to a
non-possessive form such as "LLM provider API keys for MemMachine (e.g., OpenAI)
are configured in the MemMachine `cfg.yml` file, not here." and apply the same
change to the other occurrence referenced (around lines 575-579) so all
instances use "for MemMachine" or "the MemMachine" instead of "MemMachine’s".

In `@packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py`:
- Around line 59-67: The helper function currently named test_add_and_retrieve
(which pytest will collect) should be renamed to _run_add_and_retrieve to avoid
double test collection; update the function definition name and every call site
(e.g., change calls in test_add_and_retrieve_integration or any other tests that
invoke test_add_and_retrieve to call _run_add_and_retrieve instead) so tests
still execute the helper but pytest won't treat it as a standalone test.

In `@packages/nvidia_nat_memmachine/tests/test_memmachine_editor.py`:
- Around line 178-184: The test function test_add_items_with_direct_memory
currently declares fixture parameters that are only used for setup and trigger
ruff ARG001; update the function signature to silence unused-fixture warnings by
prefixing unused fixtures with an underscore (e.g., change mock_client,
mock_project, mock_memory_instance to _mock_client, _mock_project,
_mock_memory_instance) or alternatively append "# noqa: ARG001" if you prefer a
comment; ensure you only rename the fixtures in the test function signature
(leave the fixture definitions intact) so references to
test_add_items_with_direct_memory still run correctly.

In `@packages/nvidia_nat_memmachine/tests/test_memory.py`:
- Around line 19-23: This module contains multiple async test functions but
lacks the pytest-asyncio marker; add a module-level marker by inserting
pytestmark = pytest.mark.asyncio near the top (after the imports) so all async
tests run under pytest-asyncio, or alternatively decorate each async test with
`@pytest.mark.asyncio`; reference the existing imports (pytest, Builder,
MemMachineMemoryClientConfig, memmachine_memory_client) and place the
module-level pytestmark immediately after those imports.
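The module-level marker looks like this (a sketch; it assumes `pytest-asyncio` is installed and configured for the project):

```python
import pytest

# Applies the asyncio marker to every test in this module, so async tests
# run under pytest-asyncio without per-function decorators.
pytestmark = pytest.mark.asyncio


async def test_example():
    assert 1 + 1 == 2
```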
🧹 Nitpick comments (2)
packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memory.py (1)

29-51: Add Google-style docstrings for the public config and factory.
memmachine_memory_client has no docstring and the config docstring isn’t Google‑style (Args/Returns). Please add Google‑style docstrings and wrap code entities in backticks. As per coding guidelines.

packages/nvidia_nat_memmachine/tests/test_memmachine_integration.py (1)

26-36: Add asyncio to module-level imports.

asyncio is imported inside multiple test functions (lines 117, 187, 269, 330, 411). Move it to the module-level imports for cleaner code.

✍️ Proposed fix
 import os
 import uuid
+import asyncio

 import pytest
 import requests

Then remove the import asyncio statements from within the test functions.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@packages/nvidia_nat_memmachine/tests/test_memory.py`:
- Around line 105-123: The test's import_side_effect raises ImportError with a
custom message which triggers Ruff TRY003; change the raise in
import_side_effect inside test_memmachine_memory_client_import_error to raise
ImportError (no custom string) so the patch of builtins.__import__ simply raises
the exception, leaving pytest.raises to assert the expected "Could not import
MemMachineClient" from memmachine_memory_client; update import_side_effect
accordingly and keep the rest of the test (patch and async with
memmachine_memory_client) unchanged.
🧹 Nitpick comments (1)
packages/nvidia_nat_memmachine/tests/test_memory.py (1)

26-66: Align fixture function names and return type hints with test conventions.
Fixture names should use the fixture_ prefix and return type hints for readability and linting consistency.

♻️ Proposed refactor
 @pytest.fixture(name="mock_builder")
-def mock_builder_fixture():
+def fixture_mock_builder() -> Mock:
     """Fixture to provide a mocked Builder instance."""
     return Mock(spec=Builder)

 @pytest.fixture(name="config")
-def config_fixture():
+def fixture_config() -> MemMachineMemoryClientConfig:
     """Fixture to provide a MemMachineMemoryClientConfig instance."""
     return MemMachineMemoryClientConfig(
         base_url="http://localhost:8095",
         org_id="test_org",
         project_id="test_project",
         timeout=30,
         max_retries=3
     )

 @pytest.fixture(name="config_minimal")
-def config_minimal_fixture():
+def fixture_config_minimal() -> MemMachineMemoryClientConfig:
     """Fixture to provide a minimal MemMachineMemoryClientConfig instance."""
     return MemMachineMemoryClientConfig(
         base_url="http://localhost:8095"
     )

 @pytest.fixture(name="mock_memmachine_client")
-def mock_memmachine_client_fixture():
+def fixture_mock_memmachine_client() -> Mock:
     """Fixture to provide a mocked MemMachineClient."""
     mock_client = Mock()
     mock_client.base_url = "http://localhost:8095"
     return mock_client

 @pytest.fixture(name="mock_project")
-def mock_project_fixture():
+def fixture_mock_project() -> Mock:
     """Fixture to provide a mocked Project instance."""
     mock_project = Mock()
     mock_project.org_id = "test_org"
     mock_project.project_id = "test_project"
     return mock_project

As per coding guidelines, ...

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py`:
- Around line 17-24: Update the module docstring at the top of
test_add_and_retrieve.py to remove the claim that results are verified—replace
"Verifies they match" with wording that accurately states the test prints or
demonstrates API interaction without asserting retrieval correctness (e.g.,
"Prints retrieved memories to demonstrate API interaction; no correctness
assertions are performed"). Ensure the change is made in the module-level
docstring that describes the script's three-step flow so it accurately reflects
intended behavior.
- Around line 224-226: The integration test function
test_add_and_retrieve_integration should be marked as slow; add the pytest
marker by annotating the function with `@pytest.mark.slow` alongside the existing
`@pytest.mark.integration` and `@pytest.mark.asyncio` decorators so pytest can skip
it in default runs; locate the test_add_and_retrieve_integration definition in
tests/test_add_and_retrieve.py and insert `@pytest.mark.slow` above the async def.
🧹 Nitpick comments (1)
packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py (1)

51-56: Narrow exception handling in health check.

Catching Exception will hide unexpected errors and can cause silent skips. Prefer a requests.RequestException catch so genuine bugs still surface.

♻️ Proposed change
-    except Exception:
+    except requests.RequestException:
         return False

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py`:
- Around line 38-56: The test is missing the pytest slow marker and
_memmachine_available() is blocking the event loop by using requests.get; change
_memmachine_available to be async (async def _memmachine_available(base_url:
str) -> bool), switch the synchronous requests call to an async httpx client
(use httpx.AsyncClient and await the GET to f"{base_url}/api/v2/health" with
timeout), narrow the caught exception to httpx.RequestError (or httpx.HTTPError)
instead of Exception, and add `@pytest.mark.slow` to the test function that relies
on this helper so the test is correctly marked as slow.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@packages/nvidia_nat_memmachine/pyproject.toml`:
- Around line 1-3: Add the SPDX Apache-2.0 header as the very first lines of the
file and update the copyright year to the current year; specifically insert the
standard SPDX header above the existing [build-system] section so the file
begins with the license header followed by the existing build-backend and
requires lines.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@packages/nvidia_nat_memmachine/README.md`:
- Line 69: Update the heading text "Usage in NeMo Agent toolkit" to use a
capital T for "Toolkit" so it reads "Usage in NeMo Agent Toolkit"; locate the
heading in packages/nvidia_nat_memmachine/README.md (the line with the string
"Usage in NeMo Agent toolkit") and modify only that heading text to follow the
project's MD casing convention.
- Line 1: The title line "# NVIDIA NeMo Agent toolkit - MemMachine Integration"
should use "Toolkit" with a capital T; update the heading text (the top-level
markdown heading) to "# NVIDIA NeMo Agent Toolkit - MemMachine Integration" so
the title follows the capitalization guideline while leaving the rest of the
README content unchanged.
- Line 1: Add the required SPDX Apache-2.0 header to the top of the README.md
file: insert the SPDX header comment (e.g. "SPDX-FileCopyrightText: <copyright
holder>" and "SPDX-License-Identifier: Apache-2.0") as the first lines of
packages/nvidia_nat_memmachine/README.md so the file begins with the required
license metadata for md/mdx files.
- Around line 50-65: Update the incorrect default port statement in the README:
change the sentence that currently reads "The server will start on
`http://localhost:8095` by default" to state the actual default
`http://localhost:8080` (or explicitly say "`http://localhost:8080` by default,
or the port you configured") and, if keeping the recommendation, clarify that
`8095` is only a suggested custom port; locate and edit the paragraph following
the memmachine-server usage example to make this correction.

@willkill07
Member

Thanks for going through my feedback. We have a bit of a backlog and some churn on work right now, so it may take me a couple days to give this another pass.

@Charlie-Yi-2002
Author

Thanks for going through my feedback. We have a bit of a backlog and some churn on work right now, so it may take me a couple days to give this another pass.

No worries, let me know whenever you have time which other things need to be addressed!

@willkill07
Member

@Charlie-Yi-2002 Could you address the merge conflicts and ensure that DCO passes (all of your commits must be signed).

@willkill07 willkill07 self-assigned this Feb 9, 2026
@willkill07 willkill07 added feature request New feature or request non-breaking Non-breaking change labels Feb 9, 2026
@Charlie-Yi-2002
Author

@Charlie-Yi-2002 Could you address the merge conflicts and ensure that DCO passes (all of your commits must be signed).

@willkill07 I addressed the merge conflict. I'm a little confused by the DCO checks. They list [19f7193], [3a7c40e], [07ff4df] as not having a signoff, but when I checked the commit history, they all have the verified symbol and say they were signed off by me. Not sure if I'm misinterpreting what the DCO is checking for?

@willkill07
Member

willkill07 commented Feb 11, 2026

@Charlie-Yi-2002 Please see: https://github.com/NVIDIA/NeMo-Agent-Toolkit/pull/1460/checks?check_run_id=63168815034

All commits need to be signed off via -s/--signoff

There should be a line in the commit messages that says:

Signed-off-by: Name <name@email.com>

There is guidance in the above link. If it's easier to push a clean history for this branch with a single commit, that's also okay.

@Charlie-Yi-2002
Author

@Charlie-Yi-2002 Please see: https://github.com/NVIDIA/NeMo-Agent-Toolkit/pull/1460/checks?check_run_id=63168815034

All commits need to be signed off via -s/--signoff

There should be a line in the commit messages that says:

Signed-off-by: Name <name@email.com>

There is guidance in the above link. If it's easier to push a clean history for this branch with a single commit, that's also okay.

@willkill07 Fixed the DCO issue. Let me know if there's any other changes I need to make. Thanks!

mnajafian-nv and others added 29 commits March 5, 2026 11:18
…DIA#1705)

When a `tool_calling_agent` hits its `max_iterations` recursion limit (e.g. the LLM keeps calling tools instead of producing a final answer), the `GraphRecursionError` was re-raised and cascaded through three layers (`register.py` → `function.py` → `base.py`), each logging at ERROR level with a full stack trace.

This caused the QA-reported issue where the mixture_of_agents example would either loop indefinitely or produce a correct answer but with alarming error logs.

### Changes

Catch `GraphRecursionError` specifically in the tool_calling_agent's `_response_fn` and `_stream_fn` before the generic `except Exception` handler. Instead of re-raising:
- Log a single WARNING (not ERROR)
- Return a clean error string describing the failure

The outer agent (e.g. ReAct orchestrator) receives a clear message it can reason about, and the logs are clean.
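The catch-before-generic ordering described above can be sketched as follows (a hedged sketch: `RecursionLimitError` is a local stand-in for langgraph's `GraphRecursionError`, and `respond` stands in for the agent's `_response_fn`/`_stream_fn`):

```python
import logging

logger = logging.getLogger("tool_calling_agent")


class RecursionLimitError(Exception):
    """Local stand-in for langgraph's GraphRecursionError."""


def respond(run_graph):
    try:
        return run_graph()
    except RecursionLimitError:
        # Specific handler first: log once at WARNING and return a clean
        # string the outer agent can reason about, instead of re-raising
        # and cascading ERROR-level stack traces through every layer.
        logger.warning("Agent hit max_iterations without a final answer.")
        return ("The agent reached its maximum number of iterations "
                "before producing a final answer.")
    except Exception:
        # The generic handler stays for genuinely unexpected failures.
        logger.exception("Unexpected agent failure")
        raise


def exceeds_limit():
    raise RecursionLimitError()


message = respond(exceeds_limit)
print(message)
```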

### Related

- Complements PR NVIDIA#1697 (config fix for mixture_of_agents example)
- Addresses the QA issue: "Mixture of agent kept on answering and did not end / errors seen in logs"

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.




## Summary by CodeRabbit

## Release Notes

* **Bug Fixes**
  * Improved error messaging when the tool-calling agent reaches its iteration limit. Users now see a clear message instead of an unhandled error when the agent cannot produce a final answer within maximum iterations.

Authors:
  - https://github.com/mnajafian-nv

Approvers:
  - David Gardner (https://github.com/dagardner-nv)

URL: NVIDIA#1705
## Summary

Updates the nat-ui submodule pointer to include [UI PR NVIDIA#110](NVIDIA/NeMo-Agent-Toolkit-UI#110), which restores the conversationsRef sync removed in UI PR NVIDIA#76. Without this sync, creating a new conversation in WebSocket mode caused the conversation list to be overwritten and WebSocket responses to be dropped.

## Testing

- Verified new conversation creation preserves existing conversations
- Verified WebSocket responses render correctly in new conversations
- Verified switching between conversations preserves message history


## Summary by CodeRabbit

* **Chores**
  * Updated reference to an external subproject to a newer commit (metadata only). No functional or user-facing changes.

Authors:
  - Eric Evans II (https://github.com/ericevans-nv)
  - David Gardner (https://github.com/dagardner-nv)
  - gpuCI (https://github.com/GPUtester)
  - Will Killian (https://github.com/willkill07)

Approvers:
  - Will Killian (https://github.com/willkill07)

URL: NVIDIA#1704
…VIDIA#1703)

Closes

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.



## Summary by CodeRabbit

* **New Features**
  * Added fallback response when AI output is empty, logging a warning and suggesting use of a larger model.

* **Chores**
  * Updated default language model selection across alert triage components to enhance inference capability.

* **Tests**
  * Expanded integration tests to assert presence of a root-cause category, expected label, and a minimum detailed report length (>200 chars) to ensure comprehensive triage outputs.

Authors:
  - https://github.com/mnajafian-nv

Approvers:
  - Will Killian (https://github.com/willkill07)
  - Eric Evans II (https://github.com/ericevans-nv)

URL: NVIDIA#1703
…ule fix (NVIDIA#1710)

## Summary

PR NVIDIA#1704 was based on a branch tracking `develop`, which included unrelated changes (timeout middleware, docs, notebooks, version bumps) that were unintentionally merged into `release/1.5`. This PR:

1. **Reverts** the full merge commit from NVIDIA#1704, removing all unintended changes
2. **Re-applies** only the `external/nat-ui` submodule pointer update (commit `68580db`) that was the intended change — restoring the `conversationsRef` sync fix from UI PR NVIDIA#110

### Files reverted (unintended changes from develop)
- `.pre-commit-config.yaml`
- `ci/release/update-version.sh`
- `docs/source/build-workflows/advanced/middleware.md`
- `docs/source/versions1.json`
- `examples/notebooks/*.ipynb` (3 notebooks)
- `packages/nvidia_nat_core/src/nat/meta/pypi.md`
- `packages/nvidia_nat_core/src/nat/middleware/timeout/` (entire directory)
- `packages/nvidia_nat_core/src/nat/middleware/register.py`
- `packages/nvidia_nat_core/tests/nat/middleware/test_timeout_middleware.py`

### File preserved (intended change)
- `external/nat-ui` — submodule pointer to `68580db` (UI PR NVIDIA#110 conversation state fix)

## Test plan
- [ ] Verify `release/1.5` no longer contains timeout middleware or other develop-only changes
- [ ] Verify the nat-ui submodule still points to `68580db` with the conversation state fix


## Summary by CodeRabbit

* **Documentation**
  * Updated many docs and example notebooks to reference earlier documentation versions (1.4/1.3) and removed Timeout Middleware docs and recommended listing.

* **Chores**
  * Removed the Timeout Middleware and its tests from the codebase.
  * Adjusted tooling/configuration to point documentation checks and release scripts at updated paths.

Authors:
  - Eric Evans II (https://github.com/ericevans-nv)

Approvers:
  - Will Killian (https://github.com/willkill07)

URL: NVIDIA#1710
…IDIA#1713)

`multi_agent_orchestration.ipynb` and `observability_evaluation_and_profiling.ipynb` add new Python tool files after `nat workflow create` but never call `nat workflow reinstall` to refresh the editable install. This causes the `graph_summarizer` and `data_visualization_agent` function types not to be recognized by NAT's schema validator in some environments.

Adds reinstall cells before each `nat run` call, matching the pattern in `adding_tools_to_agents.ipynb`.

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.




## Summary by CodeRabbit

* **Documentation**
  * Updated multi-agent orchestration example notebook with workflow reinstallation instructions added after the "Running the Workflow" and HITL description sections.
  * Updated observability evaluation and profiling example notebook with workflow reinstallation instructions.
  * These additions provide practical guidance for reinitializing workflows in example scenarios.

Authors:
  - https://github.com/mnajafian-nv

Approvers:
  - Will Killian (https://github.com/willkill07)

URL: NVIDIA#1713
…ts (NVIDIA#1708)

* This example uses the [mcp-server-time](https://pypi.org/project/mcp-server-time/) tool which requires a timezone as an input. 
* This PR fixes a bug where the LLM was either hallucinating a timezone or using the timezone of its own datacenter as input, resulting in the timezone (and thus the time) changing between calls.
* Adds a new tool `current_timezone` which returns the system's timezone in IANA format.
* Cleans up handling of timezone objects for the `current_datetime` method.
* Adds a dependency on `tzlocal` (currently a transitive dependency) since [datetime.astimezone](https://docs.python.org/3/library/datetime.html#datetime.datetime.astimezone) does not return timezone names in IANA format, and specifically the [tzinfo.tzname](https://docs.python.org/3/library/datetime.html#datetime.tzinfo.tzname) method does not return the name in any particular format.
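For illustration, a minimal sketch of the IANA-name lookup this motivates, assuming the `tzlocal` package's `get_localzone_name()` helper, with a stdlib fallback when it is absent:

```python
from datetime import datetime, timezone


def current_timezone_name() -> str:
    """Return the system timezone as an IANA name (e.g. 'America/New_York')
    when tzlocal is available; otherwise fall back to the abbreviation or
    offset string the stdlib provides, which is NOT IANA format."""
    try:
        from tzlocal import get_localzone_name  # the new dependency
        return get_localzone_name()
    except ImportError:
        # datetime.astimezone()/tzname() yields e.g. 'UTC' or 'PST', never 'Area/City'
        return datetime.now(timezone.utc).astimezone().tzname() or "UTC"
```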

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.



## Summary by CodeRabbit

* **New Features**
  * Added an explicit timezone query so clients can obtain the current timezone before requesting time-based data.
  * Time-based queries now prefer the detected timezone for more accurate results.
  * Expanded calculator tools in the MCP example: add, subtract, multiply, divide, compare.
  * Added an LLM configuration block to the MCP config for model settings.

* **Documentation**
  * Updated example README and MCP configs to reflect new tools and include lists.

* **Tests**
  * Added tests validating current time and timezone outputs.

* **Chores**
  * Added a dependency to support timezone detection.

Authors:
  - David Gardner (https://github.com/dagardner-nv)

Approvers:
  - https://github.com/Salonijain27
  - Will Killian (https://github.com/willkill07)

URL: NVIDIA#1708
…DIA#1722)

Closes

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.




## Summary by CodeRabbit

* **Updates**
  * Clarified search tool usage restrictions to better guide intended queries
  * Adjusted configuration settings for query processing behavior

Authors:
  - Will Killian (https://github.com/willkill07)

Approvers:
  - Eric Evans II (https://github.com/ericevans-nv)

URL: NVIDIA#1722
Closes

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.




## Summary by CodeRabbit

## Release Notes

* **Configuration Updates**
  * nvext hints are now enabled by default in the latency sensitivity demo configuration.
  * Prediction trie path configuration field renamed to `nvext_prediction_trie_path` with backwards compatibility for the previous field name.

Authors:
  - Dhruv Nandakumar (https://github.com/dnandakumar-nv)

Approvers:
  - Anuradha Karuppiah (https://github.com/AnuradhaKaruppiah)
  - Will Killian (https://github.com/willkill07)

URL: NVIDIA#1723
* Consolidate redundant client fixture code into `_build_nat_client` method.
* Remove deprecated `use_uvloop` config from `examples/agents/tool_calling/configs/config-responses-api.yml`
* Add GPT 5 to OpenAI proxy mapping

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.




## Summary by CodeRabbit

* **New Features**
  * Added support for new OpenAI GPT-5 mini model variants.

* **Tests**
  * Added integration tests for Tool Calling responses API.

Authors:
  - David Gardner (https://github.com/dagardner-nv)

Approvers:
  - Will Killian (https://github.com/willkill07)

URL: NVIDIA#1726
RAG integration tests in `test_rag_function.py` hardcoded `http://localhost:19530` for Milvus connections across 7 locations. This fails in CI where Milvus runs as a service container with hostname `milvus`, not `localhost`.

Replaced all hardcoded Milvus URIs with the existing `milvus_uri` fixture from `nvidia_nat_test`, which:
- Reads `NAT_CI_MILVUS_HOST` / `NAT_CI_MILVUS_PORT` env vars (defaults to `localhost:19530` for local dev)
- Skips tests gracefully when Milvus is unavailable instead of crashing with `MilvusException`
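The fixture's URI resolution can be sketched as a plain helper (env var names are taken from the description above; the real fixture lives in `nvidia_nat_test`):

```python
import os


def milvus_uri() -> str:
    """Resolve the Milvus endpoint from CI env vars, defaulting to local dev."""
    host = os.environ.get("NAT_CI_MILVUS_HOST", "localhost")
    port = os.environ.get("NAT_CI_MILVUS_PORT", "19530")
    return f"http://{host}:{port}"
```

In CI the service container sets `NAT_CI_MILVUS_HOST=milvus`, so the same tests resolve `http://milvus:19530` without code changes.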

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.



## Summary by CodeRabbit

* **Tests**
  * Refactored test suite to use parameterized Milvus connection configuration, replacing hardcoded endpoint values with configurable URIs.
  * Updated test fixtures and methods throughout the integration test suite for improved environment flexibility and deployment portability.
  * Enhanced test maintainability by supporting dynamic endpoint configuration across test initialization, execution, and teardown phases.

Authors:
  - Eric Evans II (https://github.com/ericevans-nv)

Approvers:
  - David Gardner (https://github.com/dagardner-nv)

URL: NVIDIA#1724
* The `ragaai-catalyst` package depends on the deprecated `pkg_resources` module, which was removed in `setuptools` v82

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.




## Summary by CodeRabbit

* **Chores**
  * Updated setuptools dependency version constraints across build and package configurations for improved compatibility.

Authors:
  - David Gardner (https://github.com/dagardner-nv)

Approvers:
  - https://github.com/mnajafian-nv
  - https://github.com/Salonijain27

URL: NVIDIA#1730
Stream tool call chunks to frontend:
- Add ChoiceDeltaToolCall and ChoiceDeltaToolCallFunction models to
  ChoiceDelta in api_server.py for OpenAI-compatible tool call deltas
- Change _stream_fn return type from AsyncGenerator[str] to
  AsyncGenerator[ChatResponseChunk] and yield structured chunks for
  both content tokens and tool call deltas
- Add unit tests for tool call chunk serialization

Fix agent_node streaming accumulation bug:
- When agent_node accumulates AIMessageChunk objects via astream, the
  resulting chunk has tool_call_chunks but additional_kwargs["tool_calls"]
  (the wire format) is left empty, causing the second LLM call to fail
  with "messages with role 'tool' must be a response to a preceeding
  message with 'tool_calls'"
- Add _chunk_to_message helper that converts accumulated AIMessageChunk
  to AIMessage, using LangChain's convert_to_openai_messages to
  reconstruct the wire format rather than hardcoding it

Streaming handler hardening:
- Guard msg.content with isinstance(str) check for non-string content
- Fall back to msg.tool_calls when tool_call_chunks is missing (AIMessage)
- Normalize tool call index via enumerate to handle None values
- Serialize dict args to JSON string for AIMessage tool_calls
- Yield ChatResponseChunk instead of plain string in GraphRecursionError handler
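The last few hardening rules can be sketched in isolation (the chunk shape and field names here are illustrative dicts, not the actual LangChain types):

```python
import json


def normalize_tool_call_deltas(chunks: list[dict]) -> list[dict]:
    """Apply two of the hardening rules described above to raw tool-call
    chunks: fall back to enumerate() when a chunk's own index is None, and
    serialize dict arguments to a JSON string for the wire format."""
    deltas = []
    for position, chunk in enumerate(chunks):
        args = chunk.get("args")
        if isinstance(args, dict):
            args = json.dumps(args)  # wire format expects a string
        index = chunk.get("index")
        deltas.append({
            "index": index if index is not None else position,
            "name": chunk.get("name"),
            "arguments": args,
        })
    return deltas
```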



## Summary by CodeRabbit

* **New Features**
  * Streaming now emits unified, structured chunks that include tool-call deltas alongside content, with per-chunk IDs, timestamps, and consistent formatting.
  * Final streamed chunks are converted into full message objects that preserve tool-call details and arguments.

* **Tests**
  * Added tests verifying tool-call chunk serialization, delta formatting, content-only behavior, argument preservation, and rehydration of tool-call data.

Authors:
  - Myles Shannon (https://github.com/MylesShannon)

Approvers:
  - Will Killian (https://github.com/willkill07)

URL: NVIDIA#1717
…ns (NVIDIA#1696)

Replace `model_dump()` with `getattr()` in `_convert_input_pydantic` to prevent nested Pydantic models from being flattened to dicts. `model_dump()` recursively serializes all nested models, so parameters like `list[PromptItem]` become `list[dict]`, causing `AttributeError` when functions access model attributes (e.g. `p.text`).

Using `getattr()` preserves the original types as declared in function signatures. Updated test expectations to match the corrected behavior.


---

This fixes a bug where `_convert_input_pydantic` in `FunctionInfo` uses `model_dump()` to unpack multi-argument function inputs into keyword arguments. The problem is that `model_dump()` recursively serializes everything, including nested Pydantic models, into plain dicts.

For example, say you have a NAT function like this:
```python
class PromptItem(BaseModel):
    text: str
    start: int | None = None
    len: int | None = 300
```

```python
async def render_prompts(author: str, prompts: list[PromptItem], ...):
```

NAT wraps this multi-arg function into a single-arg function with an auto-generated input schema. When it unpacks that schema to call the real function, `model_dump()` converts `list[PromptItem]` into `list[dict]`. Then when the function tries to access `p.text`, it gets `AttributeError: 'dict' object has no attribute 'text'`.

Any NAT function with multiple arguments that includes nested Pydantic models in its signature would hit this.

The fix swaps `**value.model_dump()` for `**{k: getattr(value, k) for k in type(value).model_fields}`, which unpacks the top-level fields without recursively flattening nested models. Only the top-level unpacking behavior changes. The nested values are passed through as-is, matching what the function signature actually declares.
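The difference can be demonstrated with a toy stand-in for Pydantic (used here so the snippet stays dependency-free; real Pydantic models behave the same way):

```python
class ToyModel:
    """Minimal stand-in mimicking Pydantic's model_fields / model_dump()."""
    model_fields: tuple = ()

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

    def model_dump(self) -> dict:
        def convert(v):
            if isinstance(v, ToyModel):
                return v.model_dump()  # recursive flattening, like Pydantic
            if isinstance(v, list):
                return [convert(item) for item in v]
            return v
        return {k: convert(getattr(self, k)) for k in self.model_fields}


class PromptItem(ToyModel):
    model_fields = ("text",)


class RenderInput(ToyModel):
    model_fields = ("author", "prompts")


value = RenderInput(author="a", prompts=[PromptItem(text="hi")])

broken = value.model_dump()  # nested models flattened to dicts
fixed = {k: getattr(value, k) for k in type(value).model_fields}  # types preserved

assert isinstance(broken["prompts"][0], dict)       # p.text would raise AttributeError
assert isinstance(fixed["prompts"][0], PromptItem)  # p.text works
```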

The test changes are just updating expected values that were inadvertently asserting the broken behavior (they compared against `str(dict)` output instead of `str(PydanticModel)` output, which didn't crash but was technically wrong).

---

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.



## Summary by CodeRabbit

* **Bug Fixes**
  * Preserved nested model structures and types when converting composite inputs for both streaming and non-streaming calls, ensuring consistent string representations and preventing unexpected changes in how complex inputs appear during invocation. Public interfaces unchanged.

Authors:
  - Myles Shannon (https://github.com/MylesShannon)

Approvers:
  - Will Killian (https://github.com/willkill07)

URL: NVIDIA#1696
* The `test_strands_agent_with_nim_thinking_mixin_non_streaming` and `test_strands_agent_with_nim_thinking_mixin_streaming` tests have been timing out, swap out the models.

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.



## Summary by CodeRabbit

* **Tests**
  * Updated integration tests to target a newer NVIDIA inference model, removed the deprecated "thinking" flag from configurations, and preserved existing LLM parameters (temperature, max tokens) to ensure compatibility with streaming and non‑streaming scenarios.

Authors:
  - David Gardner (https://github.com/dagardner-nv)

Approvers:
  - Will Killian (https://github.com/willkill07)

URL: NVIDIA#1731
### Summary

Adds a scheduled nightly check that scans example configs for NIM model references (LLMs and embedders) and verifies each endpoint is reachable. Uses a two-pass approach: a catalog check via `/v1/models` to detect removed models, then a minimal inference call (`max_tokens=1`) for LLMs still in the catalog. Reports removed vs. temporarily down models separately so the team knows whether to update a config or wait.

- `ci/scripts/model_health_check.py`: scans `config.yml` files for `_type: nim` blocks in both the `llms` and `embedders` sections, handles `model_name`/`model` aliases and optimizer `search_space` values, and outputs structured JSON results via `--output-json`
- `ci/scripts/gitlab/model_health_check.sh`: CI wrapper using `create_env` for environment setup
- `.gitlab-ci.yml`: nightly cron job (`check:model_health`) with `allow_failure: true` and JSON artifact collection
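The two-pass decision reduces to a small classification step (the function name and labels here are illustrative, not from the script itself):

```python
def classify_model(model: str, catalog: set[str], inference_ok: bool) -> str:
    """Classify a config-referenced model after the two checks:
    pass 1 consults the /v1/models catalog, pass 2 is the minimal
    max_tokens=1 inference call for models still listed."""
    if model not in catalog:
        return "removed"  # the config needs updating
    return "healthy" if inference_ok else "down"  # 'down' means wait and retry
```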

Closes NVIDIA#1715

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.


## Summary by CodeRabbit

* **Chores**
  * Added an automated model health tool that scans example configs for NVIDIA NIM endpoints, optionally performs live checks (requires API key unless dry-run), separates removed vs. down models, shows per-model referencing files in verbose mode, and can emit structured JSON results.

* **CI**
  * Added a scheduled nightly job to run the health checks (time-limited, allow-failure) and publish results.

---------

Signed-off-by: mnajafian-nv <mnajafian@nvidia.com>
…onversion (NVIDIA#1663)

Adds support for the Agent Trajectory Interchange Format (ATIF) v1.6, a standardized format for logging LLM agent interaction histories.

- Add an IntermediateStep → ATIF (Agent Trajectory Interchange Format) converter that translates NAT execution traces into the Harbor ATIF v1.6 spec, supporting both batch and streaming conversion.
- Expose an experimental POST `/v1/workflow/atif` streaming endpoint that emits ATIF steps as SSE events during workflow execution.
- Replace the single atif.py data model file with an `atif/` package derived from Harbor's reference Pydantic models, with proper Apache-2.0 attribution. Includes Harbor's validators (sequential step IDs, tool call reference integrity, ISO 8601 timestamps, agent-only field enforcement) and utility methods (`to_json_dict()`, `has_multimodal_content()`).
- Include a utility script (`generate_atif_trajectory.py`) for running any NAT workflow and exporting the trajectory as ATIF JSON.
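Two of the Harbor validators mentioned above can be sketched like this (the field names `step_id` and `timestamp` are assumptions for illustration, not the exact ATIF schema):

```python
from datetime import datetime


def validate_trajectory_steps(steps: list[dict]) -> None:
    """Raise if step IDs are not sequential from 1, or a timestamp
    is not parseable as ISO 8601."""
    for expected_id, step in enumerate(steps, start=1):
        if step["step_id"] != expected_id:
            raise ValueError(f"expected step_id {expected_id}, got {step['step_id']}")
        datetime.fromisoformat(step["timestamp"])  # raises ValueError if not ISO 8601
```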

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.



## Summary by CodeRabbit

* **New Features**
  * Full ATIF v1.6 support: structured trajectory models, validation, per-step & final metrics, multimodal (image+text) content, and real-time ATIF streaming endpoint/generator; CLI to capture and emit ATIF trajectories; batch and streaming converters.

* **Tests**
  * Comprehensive test suite validating batch/stream parity, step/tool handling, metrics, timestamps, and serialization.

* **Chores**
  * Added example ATIF JSON transcripts as reference artifacts.

Authors:
  - Yuchen Zhang (https://github.com/yczhang-nv)
  - Anuradha Karuppiah (https://github.com/AnuradhaKaruppiah)

Approvers:
  - Will Killian (https://github.com/willkill07)
  - Anuradha Karuppiah (https://github.com/AnuradhaKaruppiah)

URL: NVIDIA#1663
…th (NVIDIA#1736)

Closes

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.



## Summary by CodeRabbit

* **New Features**
  * Model health checks now include embedder checks alongside language-model checks.
  * CI posts model-health results to Slack automatically.

* **Improvements**
  * Unified and clearer model-health report format with richer per-model detail.
  * More robust network/error handling and controlled pacing between requests.

Authors:
  - https://github.com/mnajafian-nv

Approvers:
  - David Gardner (https://github.com/dagardner-nv)

URL: NVIDIA#1736
…1712)

The meta/llama-3.1-405b-instruct model endpoint returns HTTP 404 from
the NVIDIA NIM API (NVCF function decommissioned). This breaks both the
e2e test and running the example as documented.

Replace with nvidia/nemotron-3-nano-30b-a3b across the default config,
LangSmith configs, optimizer configs, and README.

## Description
Closes

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.


## Summary by CodeRabbit

* **Documentation**
  * Updated AI model selections in email phishing analyzer example configurations to use an optimized alternative model, with changes reflected across multiple evaluation, optimization, and base configuration files.

Signed-off-by: mnajafian-nv <mnajafian@nvidia.com>
Closes NVIDIA#1718 

<img width="1727" height="113" alt="image" src="https://github.com/user-attachments/assets/1e7dbf7a-18e9-4e3a-8a33-fb1f882dde55" />

<img width="517" height="360" alt="image" src="https://github.com/user-attachments/assets/0d230954-a781-4b99-b568-1933914930eb" />

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.




## Summary by CodeRabbit

## Release Notes

* **New Features**
  * Implemented user attribution functionality enabling traces and feedback to be associated with specific users through optional identifier fields, with anonymous attribution as fallback.

* **Documentation**
  * Added User Attribution guide explaining configuration and usage of user identification for traces and feedback, including code examples.

* **Tests**
  * Revised feedback endpoint validation tests to reflect updated payload requirements.

Authors:
  - Patrick Chin (https://github.com/thepatrickchin)
  - Will Killian (https://github.com/willkill07)

Approvers:
  - Will Killian (https://github.com/willkill07)

URL: NVIDIA#1668
)

* This was missed from NVIDIA#1726

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.




## Summary by CodeRabbit

* **Chores**
  * Updated CI/CD pipeline to use a specific versioned image tag for improved consistency and reproducibility in automated testing environments.

Authors:
  - David Gardner (https://github.com/dagardner-nv)

Approvers:
  - Will Killian (https://github.com/willkill07)

URL: NVIDIA#1740
…1737)

* Update existing install commands to always use double quotes rather than single quotes.

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.



## Summary by CodeRabbit

* **Documentation**
  * Added tabbed installation guidance across many docs to separate source vs. package install flows for clearer, contextual instructions.
  * Standardized install command quoting and normalized extras/spacing for consistency.
  * Updated headings, expanded examples and configuration snippets (telemetry, YAML examples), clarified tracing/observability steps, and improved PII redaction and workflow guidance.
  * Minor markup and formatting adjustments throughout for readability and consistency.

Authors:
  - David Gardner (https://github.com/dagardner-nv)

Approvers:
  - https://github.com/mnajafian-nv

URL: NVIDIA#1737
…ng (NVIDIA#1744)

### Two bugs existed in process_workflow_request:

- `asyncio.create_task(...).add_done_callback(...)` was chained in a single expression. Since `add_done_callback` returns `None`, `self._running_workflow_task` was always assigned `None` instead of the actual `Task` object. This caused the running task to never be tracked, preventing cancellation and allowing duplicate workflow executions.
- `self._workflow_schema_type` is typed as `str | None` but was used directly as a dict key in `self._schema_output_mapping[self._workflow_schema_type]` without a null check, producing a type error.

### Fix

- Split the chained method call so `self._running_workflow_task` is assigned the `Task` object, then `add_done_callback` is called on it separately.
- Added an explicit `None` guard before accessing `self._schema_output_mapping`, raising a `ValueError` if `_workflow_schema_type` is uninitialized.
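The first bug is easy to reproduce in isolation; a minimal sketch:

```python
import asyncio


async def workflow() -> str:
    return "done"


async def main():
    # Bug: add_done_callback() returns None, so the chained expression
    # assigns None -- the running task is never tracked and cannot be
    # cancelled or used to reject duplicate executions.
    broken = asyncio.create_task(workflow()).add_done_callback(lambda t: None)

    # Fix: assign the Task object first, then attach the callback separately.
    task = asyncio.create_task(workflow())
    task.add_done_callback(lambda t: None)

    await asyncio.sleep(0)  # let the untracked task finish before the loop closes
    result = await task
    return broken, task, result


broken, task, result = asyncio.run(main())
assert broken is None                  # the buggy assignment
assert isinstance(task, asyncio.Task)  # the fixed assignment is trackable
assert result == "done"
```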


## Summary by CodeRabbit

* **Bug Fixes**
  * Adds runtime validation to prevent scheduling when a workflow schema is missing, surfacing a clear error instead of proceeding.
  * Ensures workflow tasks are created before attaching callbacks so callbacks are reliably invoked.
  * Enhances error handling: processing errors are logged and sent to clients as workflow errors.
  * Sends explicit invalid-content errors to clients when user message content is missing.

Authors:
  - Ajay Thorve (https://github.com/AjayThorve)

Approvers:
  - Will Killian (https://github.com/willkill07)

URL: NVIDIA#1744
### Summary
- Fix `model_name` in training and post-training configs to point to the LoRA adapter name instead of the base model, so GRPO training actually evaluates the latest checkpoint at each epoch.
- Update README to clarify that when no `torchtune_args` are specified, the default backend uses Unsloth LoRA (not TorchTune full-weight training) and document model name routing requirements.
- Sync README inline config snippet with actual `config.yml` values.

### Problem
During LoRA finetuning, the ART backend registers each LoRA adapter in vLLM under the training run name (`tic_tac_toe_training_run`). However, the `openpipe_llm` config had `model_name: Qwen/Qwen2.5-3B-Instruct`, causing every inference request to hit the unchanged base model. Rewards were always computed against the base model, making GRPO training ineffective.

This bug is unlikely to occur with full-weight training, since vLLM updates weights in-place under the same model name.

### Fix
- `config.yml`: Changed `model_name` from `Qwen/Qwen2.5-3B-Instruct` to `tic_tac_toe_training_run`
- `config_post_train.yml`: Same change so post-training evaluation also targets the LoRA adapter
- `README.md`: Corrected "TorchTune" to "Unsloth LoRA" as the default training backend; synced inline config values; added notes explaining LoRA vs. full-weight model name routing
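In config terms, the change amounts to the following (the surrounding `llms:` layout is illustrative; only the `model_name` values come from this description):

```yaml
llms:
  openpipe_llm:
    # was: model_name: Qwen/Qwen2.5-3B-Instruct   (base model -- LoRA updates never used)
    model_name: tic_tac_toe_training_run  # LoRA adapter registered by the ART backend in vLLM
```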

Closes NVIDIA#1649.

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.



## Summary by CodeRabbit

* **Documentation**
  * Clarified that LoRA finetuning with Unsloth is the default and added guidance on model routing and adapter registration for inference.
  * Noted that full-weight training requires extra TorchTune configuration.
  * Updated vocabulary to accept the term "Unsloth".

* **Updates**
  * Adjusted training hyperparameters (learning rate, beta, epochs) and evaluation data reference.
  * Updated training backend reference and config comments to reflect LoRA workflow.

Authors:
  - https://github.com/aslanshi
  - Will Killian (https://github.com/willkill07)

Approvers:
  - Dhruv Nandakumar (https://github.com/dnandakumar-nv)

URL: NVIDIA#1662
Reduce the default dependency footprint of nvidia-nat-eval by moving evaluator and exporter implementations into dependency-aligned packages, while keeping nvidia-nat-eval as a thin orchestration core with pluggable components.

Follow dependencies:
- If functionality is tightly tied to an ecosystem/framework, move it into that framework package.
- If functionality does not naturally fit an existing package, create a focused new package.

- Moves built-in evaluators out of nvidia-nat-eval:
  - RAGAS → nvidia-nat-ragas
  - Trajectory + tunable RAG → nvidia-nat-langchain
  - Runtime/perf evaluators + profiler stack + sizing CLI → nvidia-nat-profiler
  - Red-team evaluator/runner + red-team CLI → nvidia-nat-security
  - Removes SWE Bench evaluator and example from active support.
- Moves Weave eval exporting out of eval core into nvidia-nat-weave via callback registration.
- Updates package wiring (pyproject/entry points), docs, migration guidance, and examples to match the new package ownership.

- nvidia-nat-eval remains the eval harness/orchestration runtime.
- Missing optional exporter integrations now warn-and-continue where applicable.
- This is a breaking packaging change: users must install feature-specific packages to recover previous capabilities.

- These functions were manually tested:
  - Evaluators: RAGAS, trajectory, tunable RAG
  - Profiler
  - Red teaming
- E2E tests were run for the evaluation and profiling examples
- Testing of the following functions has been deferred:
  - Weave Exporter: additional cleanup is in progress to unify the exporters (Weave, File, etc.). This feature will be tested once all the changes are complete.

All uv.locks were updated to remove stale metadata

- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.

* **New Features**
  * Added optional provider-friendly evaluation callback hooks (start, prediction, usage stats, evaluator score, export flush, summary, and evaluation context).

* **Breaking Changes**
  * Profiler split into a standalone package; several evaluators moved to framework-specific packages.
  * SWE Bench evaluator, associated examples, and tests removed.
  * CLI commands split across specialized packages (eval, sizing/profiler, red-team/security).
  * Tightened allowlist for example/config paths (some formerly-exempt example paths are no longer allowlisted).

* **Documentation**
  * Installation, migration, and usage docs updated for v1.6.0 to reflect packaging, import, and migration changes.

Authors:
  - Anuradha Karuppiah (https://github.com/AnuradhaKaruppiah)
  - Will Killian (https://github.com/willkill07)

Approvers:
  - David Gardner (https://github.com/dagardner-nv)
  - https://github.com/mnajafian-nv
  - Eric Evans II (https://github.com/ericevans-nv)
  - Will Killian (https://github.com/willkill07)

URL: NVIDIA#1690
…IA#1733)

Adds a built-in `parallel_executor` control flow component to NVIDIA NeMo Agent Toolkit and updates the `parallel_executor` example to use the core implementation.

- Adds core `parallel_executor` in `nvidia-nat-langchain` with fan-out/fan-in parallel branch execution.
- Registers the component in langchain control flow and adds dedicated unit tests.
- Adds component documentation under `docs/source/components/agents/parallel-executor/` and wires it into the agents docs index.
- Updates `examples/control_flow/parallel_executor` to use built-in `parallel_executor` + `sequential_executor` flow composition.
- Removes example-local `register.py` and component entry point; example test now loads config directly without register dependency.
- Uses appended branch output blocks (text) for fan-in output, aligned with sequential stage consumption.
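The fan-out/fan-in pattern described above can be sketched in plain Python with `asyncio`. This is a minimal illustration, not the toolkit's actual `parallel_executor` API; the branch names and the text-block join are assumptions:

```python
import asyncio


async def run_branch(name: str, prompt: str) -> str:
    """Stand-in for one parallel branch (e.g. an LLM call or tool invocation)."""
    await asyncio.sleep(0)  # yield control, as a real I/O-bound call would
    return f"[{name}]\n{prompt.upper()}"


async def parallel_executor(prompt: str, branches: list[str]) -> str:
    """Fan out the same input to every branch concurrently, then fan in
    by appending each branch's output as its own text block."""
    results = await asyncio.gather(*(run_branch(b, prompt) for b in branches))
    return "\n\n".join(results)


output = asyncio.run(parallel_executor("analyze this", ["sentiment", "summary"]))
print(output)
```

A downstream sequential stage can then consume the joined text blocks as a single input, which is the composition the updated example demonstrates.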

## By Submitting this PR I confirm:
- [x] I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- [x] We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - [x] Any contribution which contains commits that are not Signed-Off will not be accepted.
- [x] When the PR is ready for review, new or existing tests cover these changes.
- [x] When the PR is ready for review, the documentation is up to date with these changes.



## Summary by CodeRabbit

* **New Features**
  * Added Parallel Executor control flow enabling concurrent fan-out/fan-in execution of independent tools/agents with configurable logging and error-handling.

* **Examples**
  * New example workflow demonstrating parallel analysis with multiple LLM-powered branches and a final synthesis stage.

* **Documentation**
  * Comprehensive docs added describing configuration, usage modes, output format, and limitations for the Parallel Executor.

* **Tests**
  * New integration and unit tests validating parallel execution, error behaviors, and logging.

* **Chores**
  * Example packaging/configuration updated to expose the new example.

Authors:
  - Antonio Martinez (https://github.com/antoniomtz)

Approvers:
  - Will Killian (https://github.com/willkill07)
  - David Gardner (https://github.com/dagardner-nv)

URL: NVIDIA#1733
* Anthropic and Brev should only ever be capitalized
* Disallow 'kv'; the abbreviation should always be capitalized as 'KV'
* Zsh should be cased the way the project cases the name.


## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.



## Summary by CodeRabbit

* **Chores**
  * Updated style vocabulary: standardized names/capitalization (Anthropic, Brev, Zsh, Minimax).
  * Removed obsolete token ("kv") and tightened finetune variation handling.
  * Added explicit allowance for "AIQ" in one list and added an additional case-insensitive "AIQ" matching rule in the counterpart list to refine validation.

Authors:
  - David Gardner (https://github.com/dagardner-nv)

Approvers:
  - Bryan Bednarski (https://github.com/bbednarski9)

URL: NVIDIA#1745
…IA#1748)

- Move eval callback implementation to packages/nvidia_nat_eval/src/nat/plugins/eval/eval_callbacks.py so eval runtime/CLI can use a non-deprecated import path.
- Add CodeRabbit path instruction for packages/nvidia_nat_core/src/nat/eval/**/* to prevent new files in the deprecated core compatibility shim and direct new work to nat.plugins.eval.

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.



## Summary by CodeRabbit

* **Chores**
  * Marked a legacy evaluation area as deprecated and added guidance to prevent new growth there.
  * Updated evaluation plugin wiring to the reorganized plugin path.
  * No user-facing behavior changes; compatibility fixes remain allowed with justification.

* **Documentation**
  * Added path-specific instructions clarifying where new evaluation code should live.

Authors:
  - Anuradha Karuppiah (https://github.com/AnuradhaKaruppiah)

Approvers:
  - Yuchen Zhang (https://github.com/yczhang-nv)

URL: NVIDIA#1748
NVIDIA#1743)

Before this PR, evaluation was tightly coupled to file I/O: `EvaluationRun` always wrote `workflow_output.json`, evaluator outputs, and config files directly to disk. This blocked using eval as a Python API where callers want an `EvalResult` back without mandatory disk writes. By moving file output into a callback, programmatic users get clean eval results with zero side effects, while CLI users get identical file output behavior through automatic `FileEvalCallback` registration.

This PR:
- Extracts file-writing logic from `EvaluationRun` into a new `FileEvalCallback` that implements the existing `EvalCallback` protocol, making file output opt-in rather than hardcoded
- Enriches `EvalResult` with optional context fields (`evaluation_outputs`, `workflow_output_json`, `run_config`, `effective_config`, `output_dir`) so exporters can persist richer output without breaking existing callbacks
- Standardizes callback timing by moving `_on_eval_complete` to fire after all data (including profiling) is assembled, rather than mid-pipeline

Need to merge after merging [PR-1748](NVIDIA#1748)

Closes

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing/index.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.



## Summary by CodeRabbit

* **New Features**
  * File-based export for evaluation artifacts, configurations, and workflow outputs
  * Enhanced evaluation results capture workflow data and configuration metadata
  * Slack reporting integration for model health status

* **Documentation**
  * Clarified installation guidance with source vs. package installation options
  * Updated module references and import paths
  * Added migration guide for evaluator reorganization

* **Removals**
  * SWE-bench evaluator support removed

Authors:
  - Yuchen Zhang (https://github.com/yczhang-nv)
  - https://github.com/mnajafian-nv
  - Patrick Chin (https://github.com/thepatrickchin)
  - David Gardner (https://github.com/dagardner-nv)
  - Ajay Thorve (https://github.com/AjayThorve)
  - https://github.com/aslanshi
  - Anuradha Karuppiah (https://github.com/AnuradhaKaruppiah)

Approvers:
  - Will Killian (https://github.com/willkill07)
  - Anuradha Karuppiah (https://github.com/AnuradhaKaruppiah)

URL: NVIDIA#1743
Signed-off-by: Charlie Yi <charlie.yi@memverge.com>
@coderabbitai coderabbitai bot added the DO NOT MERGE PR should not be merged; see PR for details label Mar 5, 2026