Walkthrough

The changes implement a new config optimizer package.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant OptimizerRuntime as Optimizer Runtime
    participant Registry as GlobalTypeRegistry
    participant NumOpt as Numeric Optimizer
    participant PromptOpt as Prompt Optimizer
    participant EvalLoader as Eval Loader
    Client->>OptimizerRuntime: optimize_config(cfg, run_config)
    OptimizerRuntime->>OptimizerRuntime: walk_optimizables(cfg)
    alt Has Numeric Phase
        OptimizerRuntime->>Registry: get_optimizer(NumericOptimizationConfig)
        Registry->>NumOpt: build_numeric_runner(config)
        OptimizerRuntime->>NumOpt: run(base_cfg, full_space, ...)
        NumOpt->>NumOpt: optimize_parameters()
        NumOpt-->>OptimizerRuntime: tuned_cfg
    end
    alt Has Prompt Phase
        OptimizerRuntime->>Registry: get_optimizer(GAPromptOptimizationConfig)
        Registry->>PromptOpt: build_prompt_runner(config)
        OptimizerRuntime->>PromptOpt: run(base_cfg, full_space, ...)
        PromptOpt->>EvalLoader: load_evaluation_run()
        PromptOpt->>PromptOpt: ga_optimization(evaluate, mutate, select)
        PromptOpt-->>OptimizerRuntime: optimized_cfg
    end
    OptimizerRuntime-->>Client: final_optimized_cfg
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Force-pushed c0dc1e7 to be2e039
Force-pushed 03f125b to 0376f02
- Add packages/nvidia_nat_optimizer with pyproject.toml, README, src/nat/optimizer, tests
- Move parameter_optimization modules and tests from core to optimizer package
- Core CLI optimize: resilient import; on missing optimizer, suggest nvidia-nat-optimizer or nvidia-nat
- Meta-package default and most extras include nvidia-nat-optimizer; add uv.sources path
- Update LangChain prompt_optimizer register and examples (DEVELOPER_NOTES, email_phishing test)
- Docs: installation (default includes optimizer, extras), optimizer (prerequisites)

Signed-off-by: afourniernv <afournier@nvidia.com>
- evolutionary_base: shared evaluation loop and template method; subclasses implement fitness and persistence
- ga_prompt_optimizer: GA fitness, selection, oracle feedback, checkpointing
- Oracle feedback moves out of core config into model_extra and ga_config
- Replace prompt_optimizer with ga_prompt_optimizer; update runtime, langchain, tests

Signed-off-by: afourniernv <afournier@nvidia.com>
- Add OPTIMIZER to ComponentEnum, OptimizerStrategyBaseConfig (TypedBaseModel)
- Extend NumericOptimizationConfig and PromptGAOptimizationConfig with name=
- Add register_optimizer, get_optimizer, RegisteredOptimizerInfo in type_registry
- Add register_optimizer decorator in register_workflow
- Add nat.optimizer.register with GA and parameter optimizer registration
- Update optimizer_runtime to dispatch via registry instead of direct imports
- Update test_optimizer_runtime_extra for registry-based flow

Signed-off-by: afourniernv <afournier@nvidia.com>
- Restructure package layout and flatten structure
- Rename package to nvidia-nat-config-optimizer
- Rename import path from nat.optimizer to nat.config_optimizer
- Fix doc paths and migration guide

Signed-off-by: afourniernv <afournier@nvidia.com>
Signed-off-by: afourniernv <afournier@nvidia.com>
Force-pushed 0376f02 to 65af945
Actionable comments posted: 11
🧹 Nitpick comments (17)
packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_individual.py (1)
22-38: Docstring is incomplete — missing `trial_number` and `eval_output` field descriptions.

The docstring documents only 4 of the 6 fields. Consider adding brief descriptions for `trial_number` and `eval_output` for completeness.

📝 Suggested docstring update
```diff
 @dataclass
 class Individual:
     """
     One candidate solution in the GA population.

     - prompts: dotted path -> prompt text (one assignment across all prompt dimensions).
     - metrics: evaluator name -> average score; filled after evaluation.
     - scalar_fitness: single fitness value used for selection; set after normalize/scalarize/diversity.
     - worst_items_reasoning: optional reasoning strings from worst eval items for oracle feedback.
+    - trial_number: optional trial index for tracking across optimization runs.
+    - eval_output: optional raw evaluation output object; type varies by evaluator.
     """
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_individual.py` around lines 22 - 38, The class docstring for Individual documents prompts, metrics, scalar_fitness, and worst_items_reasoning but omits descriptions for trial_number and eval_output; update the docstring for the Individual dataclass to add one-line descriptions for trial_number (e.g., integer tracking which trial/run produced this individual) and eval_output (e.g., raw evaluator output or structured results saved after evaluation), so all six fields are documented alongside the existing entries in the class docstring.

packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/eval_runtime_loader.py (1)
19-28: Consider moving the return statement to an `else` block for clarity.

The static analysis tool flags that the `return` inside the `try` block could be moved to an `else` block (TRY300). This is a minor style improvement that makes the control flow more explicit.

Additionally, the return type `type` is quite generic. Since `EvaluationRun` can't be imported at module level (that's the point of lazy loading), this is acceptable, but consider adding a docstring note about the returned type.

♻️ Optional: Move return to else block
```diff
 @lru_cache(maxsize=1)
 def load_evaluation_run() -> type:
-    """Lazily load eval runtime class required by `nat optimize`."""
+    """Lazily load eval runtime class required by `nat optimize`.
+
+    Returns:
+        The `EvaluationRun` class from `nat.plugins.eval.runtime.evaluate`.
+
+    Raises:
+        RuntimeError: If `nvidia-nat-eval` is not installed.
+    """
     try:
         from nat.plugins.eval.runtime.evaluate import EvaluationRun
-        return EvaluationRun
     except ImportError as exc:
         raise RuntimeError(
             "The `nat optimize` command requires evaluation support from `nvidia-nat-eval`. "
             "Install it with `uv pip install nvidia-nat-eval` (or `pip install nvidia-nat-eval`).") from exc
+    else:
+        return EvaluationRun
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/eval_runtime_loader.py` around lines 19 - 28, The function load_evaluation_run currently returns EvaluationRun from inside the try block; move the return into an else block (i.e., try: import EvaluationRun except ImportError as exc: raise RuntimeError(...) from exc else: return EvaluationRun) to satisfy the TRY300 style suggestion, and update the function docstring to note that it lazily returns the EvaluationRun class (type kept generic due to lazy import) while preserving the `@lru_cache(maxsize=1)` decorator and existing error handling.

packages/nvidia_nat_config_optimizer/pyproject.toml (2)
80-82: Missing editable source for `nvidia-nat-eval`.

If `nvidia-nat-eval` is added as an optional dependency, you'll also need to add its editable source mapping for local development.

♻️ Suggested source addition
```diff
 [tool.uv.sources]
 nvidia-nat-core = { path = "../nvidia_nat_core", editable = true }
 nvidia-nat-test = { path = "../nvidia_nat_test", editable = true }
+nvidia-nat-eval = { path = "../nvidia_nat_eval", editable = true }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/pyproject.toml` around lines 80 - 82, The pyproject is missing an editable source mapping for the optional package name nvidia-nat-eval in the [tool.uv.sources] table; add an entry like nvidia-nat-eval = { path = "../nvidia_nat_eval", editable = true } under [tool.uv.sources] so local development and editable installs resolve the optional dependency referenced as nvidia-nat-eval.
50-58: Consider adding `nvidia-nat-eval` as an optional dependency.

The `eval_runtime_loader.py` in this package lazily imports from `nvidia-nat-eval` and raises a `RuntimeError` if it's missing. For clarity and discoverability, consider declaring it as an optional dependency:

♻️ Suggested optional dependency addition
```diff
 [tool.setuptools_dynamic_dependencies.optional-dependencies]
 visualization = [
     "matplotlib~=3.9",
 ]
 test = [
     "nvidia-nat-test == {version}",
 ]
+eval = [
+    "nvidia-nat-eval == {version}",
+]
```

This would allow users to install evaluation support explicitly via `nvidia-nat-config-optimizer[eval]`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/pyproject.toml` around lines 50 - 58, The package declares runtime dependencies in tool.setuptools_dynamic_dependencies but doesn’t expose the optional runtime used by eval_runtime_loader.py; add an optional extra called "eval" in pyproject.toml that includes "nvidia-nat-eval" (pin version policy consistent with other deps) so users can install evaluation support via nvidia-nat-config-optimizer[eval]; update the extras definition in pyproject.toml to include this extra and list "nvidia-nat-eval" as its dependency, and ensure the README or setup metadata references the new "eval" extra for discoverability.

packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/base.py (1)
30-40: Method docstring could be more descriptive.

Per coding guidelines, public APIs should have Google-style docstrings. The `run` method docstring is minimal and doesn't document the parameters or expected behavior.

📝 Suggested docstring enhancement
```diff
 @abstractmethod
 async def run(
     self,
     *,
     base_cfg: Config,
     full_space: dict[str, SearchSpace],
     optimizer_config: OptimizerConfig,
     opt_run_config: OptimizerRunConfig,
 ) -> None:
-    """Run prompt optimization."""
+    """Run prompt optimization.
+
+    Args:
+        base_cfg: The base workflow configuration to optimize.
+        full_space: Mapping of parameter paths to their search spaces.
+        optimizer_config: Configuration for the optimizer strategy.
+        opt_run_config: Runtime configuration for the optimization run.
+    """
     ...
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/base.py` around lines 30 - 40, Update the abstract async method run to include a Google-style docstring that documents the method behavior, each parameter (base_cfg: Config, full_space: dict[str, SearchSpace], optimizer_config: OptimizerConfig, opt_run_config: OptimizerRunConfig), the expected side effects (e.g., performs prompt optimization and updates state or writes outputs), the return value (None), and any exceptions that may be raised; modify the docstring on the run method in the base prompt optimizer class so it follows the Google docstring sections (Args, Returns, Raises) and clearly describes expected behavior and inputs for implementers of run.

packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/optimizer_runtime.py (2)
150-167: Consider adding error handling for registry lookup failures.

If `registry.get_optimizer()` fails to find the optimizer for the config type, a `KeyError` will propagate with a potentially unclear error message. Consider catching and re-raising with a more descriptive message.

💡 Proposed improvement
```diff
 if base_cfg.optimizer.numeric.enabled:
     registry = GlobalTypeRegistry.get()
-    numeric_info = registry.get_optimizer(type(base_cfg.optimizer.numeric))
+    try:
+        numeric_info = registry.get_optimizer(type(base_cfg.optimizer.numeric))
+    except KeyError as e:
+        raise ValueError(
+            f"No optimizer registered for config type {type(base_cfg.optimizer.numeric).__name__}. "
+            "Ensure the optimizer strategy is registered via `@register_optimizer`."
+        ) from e
     async with numeric_info.build_fn(base_cfg.optimizer.numeric) as runner:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/optimizer_runtime.py` around lines 150 - 167, Wrap the registry lookup in a try/except to catch a missing-optimizer KeyError from registry.get_optimizer(type(base_cfg.optimizer.numeric)) and re-raise a clearer error (or raise a RuntimeError) that includes the requested type and context (e.g., base_cfg.optimizer.numeric and maybe base_cfg.optimizer name); locate the lookup around GlobalTypeRegistry.get() -> registry.get_optimizer(...) and ensure the exception message references numeric_info/build_fn/runner so the caller can see which optimizer type lookup failed.
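The re-raise pattern suggested above can be sketched in isolation. Note that `GlobalTypeRegistry` below is a minimal hypothetical stand-in (a plain dict keyed by config type), not the toolkit's actual registry API:

```python
class GlobalTypeRegistry:
    """Minimal stand-in: maps optimizer config types to registered info."""
    _optimizers: dict[type, object] = {}

    @classmethod
    def register(cls, config_type: type, info: object) -> None:
        cls._optimizers[config_type] = info

    @classmethod
    def get_optimizer(cls, config_type: type) -> object:
        # Raises a bare KeyError when nothing was registered for this type.
        return cls._optimizers[config_type]


def lookup_optimizer(config_type: type) -> object:
    """Wrap the raw KeyError in a descriptive, actionable error."""
    try:
        return GlobalTypeRegistry.get_optimizer(config_type)
    except KeyError as e:
        raise ValueError(
            f"No optimizer registered for config type {config_type.__name__}. "
            "Ensure the optimizer strategy is registered via @register_optimizer."
        ) from e
```

Chaining with `from e` preserves the original traceback, so the unhelpful `KeyError` is still visible underneath the descriptive message.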
159-161: Repeated pattern for conditional parameter passing could be extracted.

The pattern of checking `inspect.signature(runner.run).parameters` and conditionally adding kwargs is duplicated for both numeric and prompt optimizers. Consider extracting to a helper function for maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/optimizer_runtime.py` around lines 159 - 161, Extract the repeated logic that inspects runner.run parameters and conditionally inserts the callback into kwargs into a small helper (e.g., add_optional_param_if_supported or add_callback_if_supported) and call it from both places; the helper should accept the runner object, the kwargs dict (numeric_run_kwargs or prompt_run_kwargs) and the callback_manager (and param name "callback_manager") and perform the inspect.signature(runner.run).parameters check and set kwargs["callback_manager"] = callback_manager when present, replacing the duplicated blocks that reference runner_run_params, runner.run, numeric_run_kwargs and prompt_run_kwargs.

packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/register.py (2)
54-55: Unused `config` parameter should be prefixed with underscore.

The static analysis correctly flags this unused parameter. Convention is to prefix with underscore to indicate intentional non-use.
🧹 Proposed fix
```diff
-async def _parameter_optimizer_build(config: NumericOptimizationConfig, ) -> AsyncIterator[_ParameterOptimizerRunner]:
+async def _parameter_optimizer_build(_config: NumericOptimizationConfig, ) -> AsyncIterator[_ParameterOptimizerRunner]:
     yield _ParameterOptimizerRunner()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/register.py` around lines 54 - 55, The parameter name in _parameter_optimizer_build is unused and should be prefixed with an underscore to satisfy static analysis; rename the parameter from config to _config in the async def signature of _parameter_optimizer_build (keeping its type NumericOptimizationConfig) and update any local references if present so the function still yields _ParameterOptimizerRunner() unchanged.
64-65: Unused `config` parameter should be prefixed with underscore.

Same as above.
🧹 Proposed fix
```diff
-async def _ga_prompt_optimizer_build(config: GAPromptOptimizationConfig, ) -> AsyncIterator[GAPromptOptimizer]:
+async def _ga_prompt_optimizer_build(_config: GAPromptOptimizationConfig, ) -> AsyncIterator[GAPromptOptimizer]:
     yield GAPromptOptimizer()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/register.py` around lines 64 - 65, The function _ga_prompt_optimizer_build has an unused parameter named config; rename it to _config (or otherwise prefix it with an underscore) so it follows the unused-parameter convention and avoids linter warnings. Update the async def signature in _ga_prompt_optimizer_build to accept _config instead of config and keep the rest of the function (yield GAPromptOptimizer()) unchanged; no other logic changes are required.

packages/nvidia_nat_core/src/nat/data_models/optimizer.py (2)
63-66: Redundant field re-declaration.

`PromptOptimizationConfig` re-declares `enabled` with the same default value (False) as the parent `OptimizerStrategyBaseConfig`. This is unnecessary since inheritance already provides it. Consider removing the redundant declaration to reduce noise.

Note: `NumericOptimizationConfig` (line 53) intentionally overrides with `default=True`, which is valid.

♻️ Suggested simplification
```diff
 class PromptOptimizationConfig(OptimizerStrategyBaseConfig):
     """Base for all prompt optimization strategy configs."""
-    enabled: bool = Field(default=False, description="Enable prompt optimization")
     prompt_population_init_function: str | None = Field(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_core/src/nat/data_models/optimizer.py` around lines 63 - 66, PromptOptimizationConfig redundantly re-declares the enabled field with the same default as its parent OptimizerStrategyBaseConfig; remove the enabled: bool = Field(default=False, ...) declaration from PromptOptimizationConfig so it inherits the enabled setting from OptimizerStrategyBaseConfig (leave NumericOptimizationConfig untouched since it intentionally overrides enabled).
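The inheritance point can be illustrated with a plain-Python sketch (class names shortened; the real configs use pydantic `Field`, but class-attribute default inheritance behaves the same way):

```python
class OptimizerStrategyBase:
    enabled: bool = False  # default supplied once, in the parent


class PromptOptimization(OptimizerStrategyBase):
    """No re-declaration needed: `enabled` is inherited as False."""


class NumericOptimization(OptimizerStrategyBase):
    enabled: bool = True  # an override like this is the only reason to re-declare
```

Re-declaring `enabled: bool = False` in `PromptOptimization` would change nothing observable; it only adds a line a reader must diff against the parent.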
205-206: Consider using `Field(default_factory=...)` for nested model defaults.

While Pydantic v2 handles `= Model()` correctly and creates new instances, using `Field(default_factory=...)` is more explicit and aligns with idiomatic Pydantic patterns for mutable defaults.

♻️ Suggested change
```diff
-    numeric: NumericOptimizationConfig = NumericOptimizationConfig()
-    prompt: GAPromptOptimizationConfig = GAPromptOptimizationConfig()
+    numeric: NumericOptimizationConfig = Field(default_factory=NumericOptimizationConfig)
+    prompt: GAPromptOptimizationConfig = Field(default_factory=GAPromptOptimizationConfig)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_core/src/nat/data_models/optimizer.py` around lines 205 - 206, The numeric and prompt fields currently set defaults with direct instantiation (numeric: NumericOptimizationConfig = NumericOptimizationConfig(), prompt: GAPromptOptimizationConfig = GAPromptOptimizationConfig()); change them to use Pydantic Field with default_factory to avoid mutable-default pitfalls and follow idiomatic patterns — e.g., replace the direct instantiations with Field(default_factory=NumericOptimizationConfig) for the numeric field and Field(default_factory=GAPromptOptimizationConfig) for the prompt field in the model definition.

packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_prompt_optimizer.py (6)
427-428: Use standard hyphen instead of en-dash.

The log message uses an en-dash (`–`, U+2013) instead of a standard hyphen-minus (`-`, U+002D). This can cause issues with some text processing or encoding scenarios.

♻️ Proposed fix
```diff
-        logger.info("No prompts to optimize – skipping.")
+        logger.info("No prompts to optimize - skipping.")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_prompt_optimizer.py` around lines 427 - 428, The log message passed to logger.info uses an en-dash character; replace the en-dash in the string "No prompts to optimize – skipping." with a standard hyphen-minus so it reads "No prompts to optimize - skipping." (locate the logger.info call that logs that exact message in ga_prompt_optimizer.py and update the literal string).
53-60: Unused parameter `eval_metrics`.

The `eval_metrics` parameter is passed to `_on_prompt_trial_end` but never used in the function body. Either use it, remove it, or prefix with underscore (`_eval_metrics`) to indicate it's intentionally unused.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_prompt_optimizer.py` around lines 53 - 60, The parameter eval_metrics in function _on_prompt_trial_end is declared but not used; update the function signature to either remove eval_metrics if it's unnecessary or rename it to _eval_metrics to mark it intentionally unused (or use it if metrics should affect behavior); change the signature in _on_prompt_trial_end and any call sites to match the chosen approach so there are no unused-parameter warnings and behavior remains consistent.
209-216: Unused parameter `opt_run_config`.

The `opt_run_config` parameter in `_post_evaluate_single` is never used. Either use it, remove it, or prefix with underscore to indicate intentional non-use.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_prompt_optimizer.py` around lines 209 - 216, The parameter opt_run_config on the _post_evaluate_single method is unused; either remove it or mark it intentionally unused by renaming to _opt_run_config. Update the signature of _post_evaluate_single accordingly and adjust all call sites that pass opt_run_config (or update callers to stop passing it if removed). Ensure any overrides/implementations of _post_evaluate_single (and any references to its parameter list) in the class hierarchy are also updated to keep signatures consistent.
351-354: Add `strict=True` to `zip()` for length validation.

♻️ Proposed fix for `_compute_fitness`

```diff
-    for ind, norm_scores, penalty in zip(population, norm_per_ind, penalties):
+    for ind, norm_scores, penalty in zip(population, norm_per_ind, penalties, strict=True):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_prompt_optimizer.py` around lines 351 - 354, The loop in _compute_fitness uses zip(population, norm_per_ind, penalties) which silently truncates if the iterables differ in length; update that zip call to zip(population, norm_per_ind, penalties, strict=True) so Python will raise a ValueError on length mismatch and surface mismatched sizes from _normalize_generation or _apply_diversity_penalty; locate the loop where ind.scalar_fitness is assigned and add strict=True to the zip invocation.
279-294: Add `strict=True` to `zip()` for length validation.

Adding `strict=True` to `zip()` calls ensures mismatched lengths raise `ValueError` immediately rather than silently truncating. This is good defensive practice for these lists that should always have matching lengths.

♻️ Proposed fix for `_normalize_generation`

```diff
-    for m, dirn in zip(metric_names, directions):
+    for m, dirn in zip(metric_names, directions, strict=True):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_prompt_optimizer.py` around lines 279 - 294, The zip over metric_names and directions in the loop inside _normalize_generation silently truncates if lengths differ; change the call zip(metric_names, directions) to zip(metric_names, directions, strict=True) so a ValueError is raised on mismatch, ensuring length validation for metric_names and directions during normalization in ga_prompt_optimizer.py.
524-532: Consider catching more specific exceptions.

Catching broad `Exception` is flagged by linters. While graceful fallback is appropriate for GA operations (mutation failures shouldn't crash the optimizer), consider catching more specific exceptions (e.g., `RuntimeError`, `ValueError`) if the failure modes are known, or add a comment explaining why broad catching is intentional here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_prompt_optimizer.py` around lines 524 - 532, The try/except around calling _mutate_prompt currently catches Exception broadly (in the block handling param, base_prompt, purpose and setting new_p), which linters dislike; narrow the except to the known failure types (e.g., catch RuntimeError, ValueError, or specific mutation-related exceptions thrown by _mutate_prompt) and preserve the existing fallback behavior (logger.warning and new_p = base_prompt), or if the failure modes are truly unknown, add a short comment explaining why a broad except is intentional and acceptable in this GA mutation context. Ensure you reference and update the except clause near the callsite of _mutate_prompt so param, base_prompt, purpose and new_p behavior remain unchanged.
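A sketch of the narrowed handler with the keep-the-parent fallback; `_mutate_prompt` here is a hypothetical stand-in for the real mutation call (which is why it simply raises), and the exception tuple should list whatever failure modes that call is actually known to raise:

```python
import logging

logger = logging.getLogger(__name__)


def _mutate_prompt(base_prompt: str, purpose: str) -> str:
    """Stand-in for the LLM-backed mutation; here it always fails."""
    raise ValueError("model returned an empty mutation")


def mutate_with_fallback(base_prompt: str, purpose: str) -> str:
    try:
        return _mutate_prompt(base_prompt, purpose)
    except (ValueError, RuntimeError) as exc:  # known failure modes only
        # Mutation failures should not crash the GA run; keep the parent prompt.
        logger.warning("Prompt mutation failed (%s); keeping base prompt.", exc)
        return base_prompt
```

Anything outside the tuple (e.g., `KeyboardInterrupt`, a programming error like `TypeError`) still propagates, which is usually what you want from a fallback path.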
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/nvidia_nat_config_optimizer/README.md`:
- Line 5: The README install line uses the acronym "NAT" in prose; per
guidelines replace it with the full product name "NeMo Agent Toolkit". Update
the sentence that currently reads "Install with NAT: `pip install nvidia-nat`
..." to spell out "NeMo Agent Toolkit" in the prose while leaving package names
(e.g., `nvidia-nat`, `nvidia-nat-core`, `nvidia-nat-config-optimizer`)
unchanged; ensure any other occurrences of the standalone "NAT" in this README
are similarly replaced.
In
`@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/parameters/base.py`:
- Around line 30-39: The abstract method signature for
ParameterOptimizerRunner.run in base.py currently returns Config but
implementations like _ParameterOptimizerRunner.run in register.py return
tuple[Config, dict[str, object], int]; change this to a consistent return type
by defining a small result dataclass (e.g., ParameterOptimizationResult with
fields tuned_config: Config, best_params: dict[str, object] =
field(default_factory=dict), trial_count: int = 0) and update the abstract
method signature to return ParameterOptimizationResult; then update all
implementations (notably _ParameterOptimizerRunner.run in register.py) to
construct and return that dataclass and adjust imports/type hints accordingly so
the base class and all overrides match.
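The result type suggested above could look like this minimal sketch (`tuned_config` is typed as `object` here because the real `Config` type belongs to the toolkit and is not reproduced):

```python
from dataclasses import dataclass, field


@dataclass
class ParameterOptimizationResult:
    """Bundles everything a numeric-optimizer run produces."""
    tuned_config: object  # the real field would be typed as Config
    best_params: dict[str, object] = field(default_factory=dict)
    trial_count: int = 0
```

`run()` would then return a single `ParameterOptimizationResult`, so the base-class annotation and every override agree, and callers that only need the config read `.tuned_config` without unpacking a tuple.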
In
`@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_prompt_optimizer.py`:
- Around line 435-436: The code assumes optimizer_config.output_path is a Path
and calls out_dir.mkdir, which will raise AttributeError if output_path is None;
update the logic around optimizer_config.output_path (the out_dir assignment and
subsequent mkdir) to validate for None and use a sensible default (e.g., create
a temporary directory via tempfile.mkdtemp or fall back to Path.cwd()) before
calling mkdir, and ensure the chosen default is converted to a Path object so
out_dir.mkdir(parents=True, exist_ok=True) is safe.
- Around line 659-660: Remove the dead empty for loop that iterates "for _ in
range(max(0, offspring_size), needed): pass" since offspring creation is already
handled by the subsequent while loop; delete that for-loop block (referencing
the variables offspring_size and needed and the surrounding loop logic) so that no
no-op code remains in ga_prompt_optimizer.py, and verify behavior remains unchanged.
In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/register.py`:
- Around line 35-43: The run method's return type annotation tuple[Config,
dict[str, object], int] conflicts with the base method
BaseParameterOptimizer.run() which is annotated to return Config; update the
override to match the base: change the signature of run to return Config and
adjust the implementation of run (and any callers) to not expect a (Config,
dict, int) tuple — either extract and persist the extra dict[int] metadata
elsewhere or adapt callers to obtain that metadata via a new explicit method;
reference the run method and BaseParameterOptimizer.run, and the types Config,
OptimizerConfig, OptimizerRunConfig and full_space when locating and updating
the code.
In `@packages/nvidia_nat_config_optimizer/tests/test_optimizer_runtime_extra.py`:
- Line 32: Remove the unnecessary `@pytest.mark.asyncio` decorator from the async
test(s) in this file: locate the line with the decorator `@pytest.mark.asyncio`
and delete it so the async runner can auto-detect and run the test(s) (no other
changes to the test function body are needed).
- Line 50: Remove the `@pytest.mark.asyncio` decorator occurrences in this test
file; locate any tests annotated with `@pytest.mark.asyncio` and delete that
decorator so the tests no longer carry the mark (ensure you only remove the
decorator token itself and leave the async test function definition intact).
In `@packages/nvidia_nat_core/src/nat/cli/commands/optimize.py`:
- Around line 24-28: The import guard is too broad and can hide transitive
import errors inside nat.config_optimizer.optimizer_runtime; modify the except
to only suppress a genuine "module not found" for that exact module (e.g., catch
ModuleNotFoundError or catch ImportError and check _e.name ==
"nat.config_optimizer.optimizer_runtime") and otherwise re-raise the exception
so real runtime import failures surface; keep the fallback assignment to
optimize_config = None and _optimizer_import_error = _e when the module truly
isn't present, referencing the import site
nat.config_optimizer.optimizer_runtime and the symbols optimize_config and
_optimizer_import_error.
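The narrowed guard relies on `ModuleNotFoundError.name`, which identifies the module that actually failed to import; that is what distinguishes "optimizer package not installed" from a broken transitive import inside it. A generic sketch (`import_attr` is a hypothetical helper, not code from the repo):

```python
import importlib


def import_attr(module_name: str, attr: str):
    """Import `attr` from `module_name`; return None only when that module
    (or one of its parent packages) is genuinely absent. Other errors re-raise."""
    try:
        return getattr(importlib.import_module(module_name), attr)
    except ModuleNotFoundError as exc:
        missing = exc.name or ""
        if module_name == missing or module_name.startswith(missing + "."):
            return None  # the guarded module itself is not installed
        raise  # a transitive import inside the module failed: surface it


optimize_config = import_attr("nat.config_optimizer.optimizer_runtime", "optimize_config")
```

With this shape, a missing `nvidia-nat-config-optimizer` yields the friendly "install the optimizer package" path, while a missing dependency of the optimizer (say, a broken install) fails loudly instead of masquerading as "optimizer not installed".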
In `@packages/nvidia_nat_core/src/nat/cli/type_registry.py`:
- Around line 1306-1308: In get_registered_types_by_component_type, the
OPTIMIZER branch iterates self._registered_optimizer_infos incorrectly and calls
.config_type on items (which are key classes), causing AttributeError; update
the iteration to call static_type() on the key class (use i.static_type())
consistent with other branches (refer to ComponentEnum.OPTIMIZER and
self._registered_optimizer_infos) so the method is invoked on the class rather
than accessing a non-existent .config_type attribute.
In
`@packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/prompt_optimizer/register.py`:
- Line 22: Move the top-level import of PromptOptimizerInputSchema out of module
scope and lazy-load it inside the two functions that use it
(prompt_optimizer_function and prompt_recombiner_function); inside each function
wrap the import in a try/except ImportError and raise or log a clear error
message instructing the user to install nvidia-nat-config-optimizer (matching
the existing import-guard pattern used elsewhere in this file), so the plugin
can import when that optional package is absent.
In `@pyproject.toml`:
- Around line 46-49: pyproject.toml now includes the new dependency
"nvidia-nat-config-optimizer" but the uv.lock file wasn't updated; run the uv
tool to sync the lockfile by executing the suggested command (uv pip install
nvidia-nat-config-optimizer --sync) from the repo root so uv.lock contains the
new package and its transitive dependencies, then commit the updated uv.lock
alongside the pyproject changes to keep dependencies consistent; verify the
package name ("nvidia-nat-config-optimizer") appears in uv.lock after the sync.
---
Nitpick comments:
In `@packages/nvidia_nat_config_optimizer/pyproject.toml`:
- Around line 80-82: The pyproject is missing an editable source mapping for the
optional package name nvidia-nat-eval in the [tool.uv.sources] table; add an
entry like nvidia-nat-eval = { path = "../nvidia_nat_eval", editable = true }
under [tool.uv.sources] so local development and editable installs resolve the
optional dependency referenced as nvidia-nat-eval.
- Around line 50-58: The package declares runtime dependencies in
tool.setuptools_dynamic_dependencies but doesn’t expose the optional runtime
used by eval_runtime_loader.py; add an optional extra called "eval" in
pyproject.toml that includes "nvidia-nat-eval" (pin version policy consistent
with other deps) so users can install evaluation support via
nvidia-nat-config-optimizer[eval]; update the extras definition in
pyproject.toml to include this extra and list "nvidia-nat-eval" as its
dependency, and ensure the README or setup metadata references the new "eval"
extra for discoverability.
In
`@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/eval_runtime_loader.py`:
- Around line 19-28: The function load_evaluation_run currently returns
EvaluationRun from inside the try block; move the return into an else block
(i.e., try: import EvaluationRun except ImportError as exc: raise
RuntimeError(...) from exc else: return EvaluationRun) to satisfy the TRY300
style suggestion, and update the function docstring to note that it lazily
returns the EvaluationRun class (type kept generic due to lazy import) while
preserving the `@lru_cache`(maxsize=1) decorator and existing error handling.
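A minimal sketch of the suggested shape (the module path and error text are assumptions from the review context, not verified against the repo):

```python
from functools import lru_cache


@lru_cache(maxsize=1)
def load_evaluation_run():
    """Lazily import and return the EvaluationRun class.

    The import is deferred so the optimizer package stays importable
    when the optional eval dependency is absent.
    """
    try:
        # Hypothetical module path; adjust to the real location.
        from nat.eval.evaluate import EvaluationRun
    except ImportError as exc:
        raise RuntimeError(
            "Evaluation support requires the optional 'nvidia-nat-eval' "
            "package (pip install nvidia-nat-eval)."
        ) from exc
    else:
        # TRY300: the success-path return lives in `else`, not `try`.
        return EvaluationRun
```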
In
`@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/optimizer_runtime.py`:
- Around line 150-167: Wrap the registry lookup in a try/except to catch a
missing-optimizer KeyError from
registry.get_optimizer(type(base_cfg.optimizer.numeric)) and re-raise a clearer
error (or raise a RuntimeError) that includes the requested type and context
(e.g., base_cfg.optimizer.numeric and maybe base_cfg.optimizer name); locate the
lookup around GlobalTypeRegistry.get() -> registry.get_optimizer(...) and ensure
the exception message references numeric_info/build_fn/runner so the caller can
see which optimizer type lookup failed.
- Around line 159-161: Extract the repeated logic that inspects runner.run
parameters and conditionally inserts the callback into kwargs into a small
helper (e.g., add_optional_param_if_supported or add_callback_if_supported) and
call it from both places; the helper should accept the runner object, the kwargs
dict (numeric_run_kwargs or prompt_run_kwargs) and the callback_manager (and
param name "callback_manager") and perform the
inspect.signature(runner.run).parameters check and set
kwargs["callback_manager"] = callback_manager when present, replacing the
duplicated blocks that reference runner_run_params, runner.run,
numeric_run_kwargs and prompt_run_kwargs.
In
`@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/base.py`:
- Around line 30-40: Update the abstract async method run to include a
Google-style docstring that documents the method behavior, each parameter
(base_cfg: Config, full_space: dict[str, SearchSpace], optimizer_config:
OptimizerConfig, opt_run_config: OptimizerRunConfig), the expected side effects
(e.g., performs prompt optimization and updates state or writes outputs), the
return value (None), and any exceptions that may be raised; modify the docstring
on the run method in the base prompt optimizer class so it follows the Google
docstring sections (Args, Returns, Raises) and clearly describes expected
behavior and inputs for implementers of run.
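For implementers, a Google-style docstring on the abstract `run` might look like the following (parameter descriptions are illustrative, and types are elided to keep the sketch self-contained):

```python
from abc import ABC, abstractmethod


class BasePromptOptimizer(ABC):
    """Sketch of the documented interface."""

    @abstractmethod
    async def run(self, *, base_cfg, full_space, optimizer_config, opt_run_config) -> None:
        """Run prompt optimization over the configured search space.

        Args:
            base_cfg: Workflow configuration whose prompts are optimized.
            full_space: Mapping of parameter path to its search space.
            optimizer_config: Optimizer settings such as the output path.
            opt_run_config: Per-run settings such as the eval dataset.

        Returns:
            None. Results are written as a side effect (e.g. to the
            configured output path).

        Raises:
            RuntimeError: If a required optional dependency is missing.
        """
```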
In
`@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_individual.py`:
- Around line 22-38: The class docstring for Individual documents prompts,
metrics, scalar_fitness, and worst_items_reasoning but omits descriptions for
trial_number and eval_output; update the docstring for the Individual dataclass
to add one-line descriptions for trial_number (e.g., integer tracking which
trial/run produced this individual) and eval_output (e.g., raw evaluator output
or structured results saved after evaluation), so all six fields are documented
alongside the existing entries in the class docstring.
In
`@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_prompt_optimizer.py`:
- Around line 427-428: The log message passed to logger.info uses an en-dash
character; replace the en-dash in the string "No prompts to optimize –
skipping." with a standard hyphen-minus so it reads "No prompts to optimize -
skipping." (locate the logger.info call that logs that exact message in
ga_prompt_optimizer.py and update the literal string).
- Around line 53-60: The parameter eval_metrics in function _on_prompt_trial_end
is declared but not used; update the function signature to either remove
eval_metrics if it's unnecessary or rename it to _eval_metrics to mark it
intentionally unused (or use it if metrics should affect behavior); change the
signature in _on_prompt_trial_end and any call sites to match the chosen
approach so there are no unused-parameter warnings and behavior remains
consistent.
- Around line 209-216: The parameter opt_run_config on the _post_evaluate_single
method is unused; either remove it or mark it intentionally unused by renaming
to _opt_run_config. Update the signature of _post_evaluate_single accordingly
and adjust all call sites that pass opt_run_config (or update callers to stop
passing it if removed). Ensure any overrides/implementations of
_post_evaluate_single (and any references to its parameter list) in the class
hierarchy are also updated to keep signatures consistent.
- Around line 351-354: The loop in _compute_fitness uses zip(population,
norm_per_ind, penalties) which silently truncates if the iterables differ in
length; update that zip call to zip(population, norm_per_ind, penalties,
strict=True) so Python will raise a ValueError on length mismatch and surface
mismatched sizes from _normalize_generation or _apply_diversity_penalty; locate
the loop where ind.scalar_fitness is assigned and add strict=True to the zip
invocation.
- Around line 279-294: The zip over metric_names and directions in the loop
inside _normalize_generation silently truncates if lengths differ; change the
call zip(metric_names, directions) to zip(metric_names, directions, strict=True)
so a ValueError is raised on mismatch, ensuring length validation for
metric_names and directions during normalization in ga_prompt_optimizer.py.
- Around line 524-532: The try/except around calling _mutate_prompt currently
catches Exception broadly (in the block handling param, base_prompt, purpose and
setting new_p), which linters dislike; narrow the except to the known failure
types (e.g., catch RuntimeError, ValueError, or specific mutation-related
exceptions thrown by _mutate_prompt) and preserve the existing fallback behavior
(logger.warning and new_p = base_prompt), or if the failure modes are truly
unknown, add a short comment explaining why a broad except is intentional and
acceptable in this GA mutation context. Ensure you reference and update the
except clause near the callsite of _mutate_prompt so param, base_prompt, purpose
and new_p behavior remain unchanged.
In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/register.py`:
- Around line 54-55: The parameter name in _parameter_optimizer_build is unused
and should be prefixed with an underscore to satisfy static analysis; rename the
parameter from config to _config in the async def signature of
_parameter_optimizer_build (keeping its type NumericOptimizationConfig) and
update any local references if present so the function still yields
_ParameterOptimizerRunner() unchanged.
- Around line 64-65: The function _ga_prompt_optimizer_build has an unused
parameter named config; rename it to _config (or otherwise prefix it with an
underscore) so it follows the unused-parameter convention and avoids linter
warnings. Update the async def signature in _ga_prompt_optimizer_build to accept
_config instead of config and keep the rest of the function (yield
GAPromptOptimizer()) unchanged; no other logic changes are required.
In `@packages/nvidia_nat_core/src/nat/data_models/optimizer.py`:
- Around line 63-66: PromptOptimizationConfig redundantly re-declares the
enabled field with the same default as its parent OptimizerStrategyBaseConfig;
remove the enabled: bool = Field(default=False, ...) declaration from
PromptOptimizationConfig so it inherits the enabled setting from
OptimizerStrategyBaseConfig (leave NumericOptimizationConfig untouched since it
intentionally overrides enabled).
- Around line 205-206: The numeric and prompt fields currently set defaults with
direct instantiation (numeric: NumericOptimizationConfig =
NumericOptimizationConfig(), prompt: GAPromptOptimizationConfig =
GAPromptOptimizationConfig()); change them to use Pydantic Field with
default_factory to avoid mutable-default pitfalls and follow idiomatic patterns
— e.g., replace the direct instantiations with
Field(default_factory=NumericOptimizationConfig) for the numeric field and
Field(default_factory=GAPromptOptimizationConfig) for the prompt field in the
model definition.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `packages/nvidia_nat_config_optimizer/uv.lock` is excluded by `!**/*.lock`
- `uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (44)
- docs/source/extend/custom-components/index.md
- docs/source/extend/custom-components/optimizer.md
- docs/source/get-started/installation.md
- docs/source/improve-workflows/optimizer.md
- docs/source/resources/migration-guide.md
- examples/dynamo_integration/react_benchmark_agent/src/react_benchmark_agent/DEVELOPER_NOTES.md
- examples/evaluation_and_profiling/email_phishing_analyzer/tests/test_email_phishing_analyzer.py
- packages/nvidia_nat_config_optimizer/README.md
- packages/nvidia_nat_config_optimizer/pyproject.toml
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/__init__.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/eval_runtime_loader.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/optimizable_utils.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/optimizer_runtime.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/parameters/__init__.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/parameters/base.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/parameters/optimizer.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/parameters/pareto_visualizer.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/parameters/selection.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/__init__.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/base.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_individual.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_prompt_optimizer.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/oracle_feedback.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/register.py
- packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/update_helpers.py
- packages/nvidia_nat_config_optimizer/tests/test_optimizable_utils.py
- packages/nvidia_nat_config_optimizer/tests/test_optimizer_runtime_extra.py
- packages/nvidia_nat_config_optimizer/tests/test_oracle_feedback.py
- packages/nvidia_nat_config_optimizer/tests/test_parameter_optimizer.py
- packages/nvidia_nat_config_optimizer/tests/test_parameter_selection_extra.py
- packages/nvidia_nat_config_optimizer/tests/test_pareto_visualizer_extra.py
- packages/nvidia_nat_config_optimizer/tests/test_prompt_optimizer.py
- packages/nvidia_nat_config_optimizer/tests/test_update_helpers.py
- packages/nvidia_nat_core/src/nat/cli/commands/optimize.py
- packages/nvidia_nat_core/src/nat/cli/register_workflow.py
- packages/nvidia_nat_core/src/nat/cli/type_registry.py
- packages/nvidia_nat_core/src/nat/data_models/component.py
- packages/nvidia_nat_core/src/nat/data_models/optimizer.py
- packages/nvidia_nat_core/src/nat/parameter_optimization/prompt_optimizer.py
- packages/nvidia_nat_core/src/nat/profiler/parameter_optimization/__init__.py
- packages/nvidia_nat_core/tests/nat/data_models/test_optimizer_oracle_feedback.py
- packages/nvidia_nat_core/tests/nat/profiler/test_optimizer_runtime_extra.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/prompt_optimizer/register.py
- pyproject.toml
💤 Files with no reviewable changes (2)
- packages/nvidia_nat_core/tests/nat/profiler/test_optimizer_runtime_extra.py
- packages/nvidia_nat_core/src/nat/parameter_optimization/prompt_optimizer.py
> Workflow config and prompt optimization for [NVIDIA NeMo Agent Toolkit](https://github.com/NVIDIA/NeMo-Agent-Toolkit). Provides genetic-algorithm and numeric (Optuna) optimizers for workflow configs and prompts. Scoped to config-level optimization (hyperparameters, prompts); excludes runtime/inference optimizations.
>
> Install with NAT: `pip install nvidia-nat` (config optimizer is included by default), or `pip install nvidia-nat-core nvidia-nat-config-optimizer`.
Replace the “NAT” acronym in install text.
Line 5 should spell out the product name instead of “NAT” in prose.
Suggested wording
-Install with NAT: `pip install nvidia-nat` (config optimizer is included by default), or `pip install nvidia-nat-core nvidia-nat-config-optimizer`.
+Install with NeMo Agent Toolkit: `pip install nvidia-nat` (config optimizer is included by default), or `pip install nvidia-nat-core nvidia-nat-config-optimizer`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/nvidia_nat_config_optimizer/README.md` at line 5, The README install
line uses the acronym "NAT" in prose; per guidelines replace it with the full
product name "NeMo Agent Toolkit". Update the sentence that currently reads
"Install with NAT: `pip install nvidia-nat` ..." to spell out "NeMo Agent
Toolkit" in the prose while leaving package names (e.g., `nvidia-nat`,
`nvidia-nat-core`, `nvidia-nat-config-optimizer`) unchanged; ensure any other
occurrences of the standalone "NAT" in this README are similarly replaced.
```python
@abstractmethod
async def run(
    self,
    *,
    base_cfg: Config,
    full_space: dict[str, SearchSpace],
    optimizer_config: OptimizerConfig,
    opt_run_config: OptimizerRunConfig,
) -> Config:
    """Run parameter optimization and return the tuned config."""
```
Return type mismatch between base class and implementation.
The abstract method declares -> Config, but the _ParameterOptimizerRunner.run() in register.py returns tuple[Config, dict[str, object], int]. While the runtime handles both cases, this violates Liskov Substitution Principle and could cause type checking issues.
Consider aligning the signatures:
🔧 Option 1: Update base class to match implementation

```diff
 @abstractmethod
 async def run(
     self,
     *,
     base_cfg: Config,
     full_space: dict[str, SearchSpace],
     optimizer_config: OptimizerConfig,
     opt_run_config: OptimizerRunConfig,
-) -> Config:
+) -> Config | tuple[Config, dict[str, object], int]:
     """Run parameter optimization and return the tuned config."""
     ...
```

🔧 Option 2: Create a result dataclass for consistency

```python
@dataclass
class ParameterOptimizationResult:
    tuned_config: Config
    best_params: dict[str, object] = field(default_factory=dict)
    trial_count: int = 0
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/parameters/base.py`
around lines 30 - 39, The abstract method signature for
ParameterOptimizerRunner.run in base.py currently returns Config but
implementations like _ParameterOptimizerRunner.run in register.py return
tuple[Config, dict[str, object], int]; change this to a consistent return type
by defining a small result dataclass (e.g., ParameterOptimizationResult with
fields tuned_config: Config, best_params: dict[str, object] =
field(default_factory=dict), trial_count: int = 0) and update the abstract
method signature to return ParameterOptimizationResult; then update all
implementations (notably _ParameterOptimizerRunner.run in register.py) to
construct and return that dataclass and adjust imports/type hints accordingly so
the base class and all overrides match.
```python
out_dir = optimizer_config.output_path
out_dir.mkdir(parents=True, exist_ok=True)
```
Potential AttributeError if output_path is None.
optimizer_config.output_path defaults to None (per the data model). If not explicitly set, calling out_dir.mkdir(...) will raise AttributeError: 'NoneType' object has no attribute 'mkdir'. Add validation or provide a sensible default.
🐛 Proposed fix

```diff
 out_dir = optimizer_config.output_path
+if out_dir is None:
+    raise ValueError("optimizer_config.output_path must be set for GA prompt optimization")
 out_dir.mkdir(parents=True, exist_ok=True)
```

Alternatively, default to a temporary directory or current working directory if appropriate for the use case.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_prompt_optimizer.py`
around lines 435 - 436, The code assumes optimizer_config.output_path is a Path
and calls out_dir.mkdir, which will raise AttributeError if output_path is None;
update the logic around optimizer_config.output_path (the out_dir assignment and
subsequent mkdir) to validate for None and use a sensible default (e.g., create
a temporary directory via tempfile.mkdtemp or fall back to Path.cwd()) before
calling mkdir, and ensure the chosen default is converted to a Path object so
out_dir.mkdir(parents=True, exist_ok=True) is safe.
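One way the fallback could be shaped, as a standalone helper (the function name and the tempfile fallback are illustrative choices, not the project's actual API):

```python
import tempfile
from pathlib import Path


def resolve_output_dir(output_path):
    """Return a usable output directory, falling back when unset.

    `output_path` mirrors optimizer_config.output_path, which may be None.
    """
    if output_path is None:
        # Fallback: a fresh temporary directory; Path.cwd() is another option.
        out_dir = Path(tempfile.mkdtemp(prefix="ga_prompt_opt_"))
    else:
        out_dir = Path(output_path)
    out_dir.mkdir(parents=True, exist_ok=True)
    return out_dir
```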
```python
for _ in range(max(0, offspring_size), needed):
    pass
```
Remove dead code: empty for loop.
This loop does nothing (pass body) and appears to be leftover from refactoring. The offspring creation is handled by the while loop below.
🧹 Proposed fix

```diff
 needed = pop_size - len(next_population)
 offspring: list[Individual] = []
-for _ in range(max(0, offspring_size), needed):
-    pass
 while len(offspring) < needed:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/prompts/ga_prompt_optimizer.py`
around lines 659 - 660, Remove the dead empty for loop that iterates "for _ in
range(max(0, offspring_size), needed): pass" since offspring creation is already
handled by the subsequent while loop; delete that for-loop block (referencing
the variables offspring_size and needed and the surrounding loop logic) so no-op
code remains in ga_prompt_optimizer.py and verify behavior remains unchanged.
```python
async def run(
    self,
    *,
    base_cfg: Config,
    full_space: dict[str, SearchSpace],
    optimizer_config: OptimizerConfig,
    opt_run_config: OptimizerRunConfig,
    callback_manager=None,
) -> tuple[Config, dict[str, object], int]:
```
Return type annotation differs from base class.
The run method returns tuple[Config, dict[str, object], int] but BaseParameterOptimizer.run() declares -> Config. This creates a type inconsistency. See related comment on parameters/base.py.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/nvidia_nat_config_optimizer/src/nat/config_optimizer/register.py`
around lines 35 - 43, The run method's return type annotation tuple[Config,
dict[str, object], int] conflicts with the base method
BaseParameterOptimizer.run() which is annotated to return Config; update the
override to match the base: change the signature of run to return Config and
adjust the implementation of run (and any callers) to not expect a (Config,
dict, int) tuple — either extract and persist the extra dict[int] metadata
elsewhere or adapt callers to obtain that metadata via a new explicit method;
reference the run method and BaseParameterOptimizer.run, and the types Config,
OptimizerConfig, OptimizerRunConfig and full_space when locating and updating
the code.
```python
assert out is cfg


@pytest.mark.asyncio
```
Remove @pytest.mark.asyncio decorator.
Same as above - the decorator should be removed.
🧹 Proposed fix

```diff
-@pytest.mark.asyncio
 async def test_optimize_config_calls_numeric_and_prompt(monkeypatch):
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/nvidia_nat_config_optimizer/tests/test_optimizer_runtime_extra.py`
at line 50, Remove the `@pytest.mark.asyncio` decorator occurrences in this test
file; locate any tests annotated with `@pytest.mark.asyncio` and delete that
decorator so the tests no longer carry the mark (ensure you only remove the
decorator token itself and leave the async test function definition intact).
```python
try:
    from nat.config_optimizer.optimizer_runtime import optimize_config
except ImportError as _e:
    optimize_config = None
    _optimizer_import_error = _e
```
Narrow the import guard to avoid masking real runtime import failures.
Catching ImportError at Line 26 also swallows transitive import failures inside nat.config_optimizer.optimizer_runtime, which can mislead users with an installation hint when the real issue is different.
🐛 Proposed fix

```diff
+_optimizer_import_error: ModuleNotFoundError | None = None
 try:
     from nat.config_optimizer.optimizer_runtime import optimize_config
-except ImportError as _e:
+except ModuleNotFoundError as _e:
+    if _e.name and not _e.name.startswith("nat.config_optimizer"):
+        raise
     optimize_config = None
     _optimizer_import_error = _e
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/nvidia_nat_core/src/nat/cli/commands/optimize.py` around lines 24 -
28, The import guard is too broad and can hide transitive import errors inside
nat.config_optimizer.optimizer_runtime; modify the except to only suppress a
genuine "module not found" for that exact module (e.g., catch
ModuleNotFoundError or catch ImportError and check _e.name ==
"nat.config_optimizer.optimizer_runtime") and otherwise re-raise the exception
so real runtime import failures surface; keep the fallback assignment to
optimize_config = None and _optimizer_import_error = _e when the module truly
isn't present, referencing the import site
nat.config_optimizer.optimizer_runtime and the symbols optimize_config and
_optimizer_import_error.
```python
if component_type == ComponentEnum.OPTIMIZER:
    return [i.config_type.static_type() for i in self._registered_optimizer_infos]
```
🧩 Analysis chain
🏁 Script executed:
# First, let's look at the file to understand the structure
wc -l packages/nvidia_nat_core/src/nat/cli/type_registry.py
Repository: NVIDIA/NeMo-Agent-Toolkit
🏁 Script executed:
# Read the section around line 1306-1308 to see the code in context
sed -n '1300,1320p' packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Find where _registered_optimizer_infos is defined
grep -n "_registered_optimizer_infos" packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Look at similar patterns for other component types to understand the iteration pattern
sed -n '1290,1310p' packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Let's check the definitions of the _registered_*_infos attributes to understand their types
grep -n "self._registered_evaluator_infos:" packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Check definitions for all _registered_*_infos to see their types
grep -n "self._registered_.*_infos:" packages/nvidia_nat_core/src/nat/cli/type_registry.py | head -20
🏁 Script executed:
# Look at how _registered_evaluator_infos is used (iteration pattern)
grep -B2 -A2 "_registered_evaluator_infos" packages/nvidia_nat_core/src/nat/cli/type_registry.py | head -40
🏁 Script executed:
# Check the RegisteredOptimizerInfo class definition
grep -n "class RegisteredOptimizerInfo" packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Look at the RegisteredOptimizerInfo class to see if it has a config_type attribute
sed -n '450,550p' packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Check how .values() is used for other _registered_*_infos
grep -n "self._registered_.*_infos.values()" packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Look more carefully at the get_registered_types_by_component_type method to see the pattern
sed -n '1280,1320p' packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Check RegisteredInfo base class and RegisteredOptimizerInfo to see if they have static_type method
grep -n "class RegisteredInfo" packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Look at RegisteredInfo class definition
sed -n '80,180p' packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Find the static_type method in RegisteredInfo
grep -n "def static_type" packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Look at the RegisteredInfo class more completely
sed -n '162,250p' packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Also check RegisteredOptimizerInfo to see if it overrides anything
sed -n '375,385p' packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Search for static_type method - it might be on the config types themselves
rg "def static_type" packages/nvidia_nat_core/src/nat/
🏁 Script executed:
# Check if OptimizerStrategyBaseConfig has static_type method
rg "class OptimizerStrategyBaseConfig" packages/nvidia_nat_core/src/nat/ -A 20
🏁 Script executed:
# Let me check what methods are available on the config base classes
fd -e "optimizer.py" packages/nvidia_nat_core/src/nat/ -x cat {}
🏁 Script executed:
# Look at the common.py file to see the static_type method
rg "def static_type" packages/nvidia_nat_core/src/nat/data_models/common.py -A 3
🏁 Script executed:
# Check TypedBaseModel to understand the inheritance
rg "class TypedBaseModel" packages/nvidia_nat_core/src/nat/data_models/common.py -A 10
🏁 Script executed:
# Now let's look at what the correct pattern should be by examining how RegisteredOptimizerInfo is used elsewhere
sed -n '975,1000p' packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Check how get_registered_optimizers works
rg "def get_registered_optimizers" packages/nvidia_nat_core/src/nat/cli/type_registry.py -A 3
🏁 Script executed:
# Let me verify what happens when you iterate over a dict directly
python3 << 'EOF'
test_dict = {"key1": "value1", "key2": "value2"}
for i in test_dict:
print(f"i = {i}, type(i) = {type(i)}")
print(f"Has .config_type? {hasattr(i, 'config_type')}")
EOF
🏁 Script executed:
# Let me also verify how the evaluator code works since it has a similar pattern
# Check if there's a difference in how evaluators are stored vs optimizers
sed -n '493,496p' packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Let me see the actual iteration pattern more carefully - check what the intended behavior should be
# by looking at all the get_registered_*_infos methods
grep -A 2 "def get_registered_" packages/nvidia_nat_core/src/nat/cli/type_registry.py | grep -A 2 "def get_registered_optimizer"
🏁 Script executed:
# Now let me check if there's actually a test or usage of get_registered_types_by_component_type for optimizers
rg "get_registered_types_by_component_type" packages/nvidia_nat_core/src/nat/ -B 2 -A 2
🏁 Script executed:
# Let me verify once more by checking the exact iteration in context - specifically looking at line 1307 carefully
# and comparing with how other component types iterate
sed -n '1303,1325p' packages/nvidia_nat_core/src/nat/cli/type_registry.py
🏁 Script executed:
# Verify by checking the data structure: when you iterate over a dict in Python,
# do you get keys or values? Let me verify the comparison
python3 << 'EOF'
class TestInfo:
def __init__(self, name):
self.config_type = name
info1 = TestInfo("config1")
info2 = TestInfo("config2")
# Simulate the data structure
registered_infos = {
"key1": info1, # key is a string, value is TestInfo
"key2": info2
}
print("Iterating over dict directly:")
for i in registered_infos:
print(f" i = {i}, type = {type(i)}")
try:
print(f" i.config_type = {i.config_type}")
except AttributeError as e:
print(f" AttributeError: {e}")
print("\nIterating over dict.values():")
for i in registered_infos.values():
print(f" i = {i}, type = {type(i)}")
print(f" i.config_type = {i.config_type}")
EOF
get_registered_types_by_component_type will raise AttributeError at runtime for optimizers.
Iterating over self._registered_optimizer_infos directly yields dictionary keys (config type classes), not the values. Attempting to access .config_type on a class type will fail. Change to i.static_type() to match the pattern used for other component types and correctly call the method on the key class.
Proposed fix
if component_type == ComponentEnum.OPTIMIZER:
- return [i.config_type.static_type() for i in self._registered_optimizer_infos]
+    return [i.static_type() for i in self._registered_optimizer_infos]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/nvidia_nat_core/src/nat/cli/type_registry.py` around lines 1306 -
1308, In get_registered_types_by_component_type, the OPTIMIZER branch iterates
self._registered_optimizer_infos incorrectly and calls .config_type on items
(which are key classes), causing AttributeError; update the iteration to call
static_type() on the key class (use i.static_type()) consistent with other
branches (refer to ComponentEnum.OPTIMIZER and self._registered_optimizer_infos)
so the method is invoked on the class rather than accessing a non-existent
.config_type attribute.
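The failure mode flagged above is easy to reproduce in isolation. The sketch below uses hypothetical config classes (not the toolkit's real types) to show that iterating a dict keyed by classes yields the key classes themselves, so a classmethod like static_type() must be called directly on each key:

```python
class NumericOptConfig:
    @classmethod
    def static_type(cls) -> str:
        return "numeric"


class PromptOptConfig:
    @classmethod
    def static_type(cls) -> str:
        return "prompt"


# A registry keyed by config classes, mirroring the shape of
# _registered_optimizer_infos (the values are not what iteration yields).
registered = {NumericOptConfig: "info-1", PromptOptConfig: "info-2"}

# Iterating the dict yields the key classes, so call the classmethod
# on each key directly.
names = [cfg_cls.static_type() for cfg_cls in registered]
print(names)  # -> ['numeric', 'prompt']

# Accessing a non-existent attribute on a key class raises AttributeError,
# which is exactly the bug flagged above.
try:
    next(iter(registered)).config_type
except AttributeError:
    print("AttributeError: key classes have no .config_type")
```

Iterating `.values()` instead (and then calling `.config_type.static_type()`) would also work if the values carried that attribute, but calling `static_type()` on the keys is the smaller change and matches the other branches.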
```python
from nat.builder.framework_enum import LLMFrameworkEnum
from nat.builder.function_info import FunctionInfo
from nat.cli.register_workflow import register_function
from nat.config_optimizer.prompts.ga_prompt_optimizer import PromptOptimizerInputSchema
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
langchain_pyproject="$(fd '^pyproject\.toml$' packages/nvidia_nat_langchain | head -n1)"
if [ -z "${langchain_pyproject}" ]; then
  echo "Could not locate packages/nvidia_nat_langchain/pyproject.toml"
  exit 1
fi
echo "Inspecting ${langchain_pyproject}"
rg -n 'dependencies|optional-dependencies|nvidia-nat-config-optimizer' "${langchain_pyproject}"
```
Length of output: 529
🏁 Script executed:

```bash
cat -n packages/nvidia_nat_langchain/pyproject.toml | sed -n '54,85p'
```
Length of output: 1428
🏁 Script executed:

```bash
cat -n packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/prompt_optimizer/register.py
```
Length of output: 8323
Move the import of PromptOptimizerInputSchema into the functions where it's used, or guard it with a try/except block.
The unguarded top-level import on line 22 will cause the entire LangChain plugin to fail to import if nvidia-nat-config-optimizer is not installed. Since this class is only used inside prompt_optimizer_function (line 69) and prompt_recombiner_function (line 133), lazy-load it within those functions. Additionally, nvidia-nat-config-optimizer is not declared in pyproject.toml dependencies, making this an undeclared hard dependency. Follow the pattern used elsewhere in the file (e.g., lines 49–52) by catching ImportError and providing a clear error message directing users to install the required package.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/prompt_optimizer/register.py`
at line 22, Move the top-level import of PromptOptimizerInputSchema out of
module scope and lazy-load it inside the two functions that use it
(prompt_optimizer_function and prompt_recombiner_function); inside each function
wrap the import in a try/except ImportError and raise or log a clear error
message instructing the user to install nvidia-nat-config-optimizer (matching
the existing import-guard pattern used elsewhere in this file), so the plugin
can import when that optional package is absent.
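The import-guard pattern the review asks for can be sketched as follows. The module name below is a deliberate stand-in so the guard fires here (the real plugin would import PromptOptimizerInputSchema from nvidia-nat-config-optimizer inside the function that uses it):

```python
def load_optimizer_schema():
    """Lazily import an optional dependency inside the function that needs it."""
    try:
        # The real plugin code would instead do:
        #   from nat.config_optimizer.prompts.ga_prompt_optimizer import (
        #       PromptOptimizerInputSchema)
        import nat_config_optimizer_stand_in  # hypothetical missing module
    except ImportError as exc:
        raise ImportError(
            "Prompt optimization requires the optional package "
            "'nvidia-nat-config-optimizer'. Install it with: "
            "pip install nvidia-nat-config-optimizer"
        ) from exc
    return nat_config_optimizer_stand_in


try:
    load_optimizer_schema()
except ImportError as err:
    message = str(err)
    print(message)
```

Because the import only runs when the function is called, the plugin module itself imports cleanly even when the optional package is absent, and users who do hit the feature get an actionable install hint instead of a bare ModuleNotFoundError.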
```toml
dependencies = [
    "nvidia-nat-core == {version}",
    "nvidia-nat-config-optimizer == {version}",
]
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
lock_file="$(fd '^uv\.lock$' | head -n1)"
if [ -z "${lock_file}" ]; then
  echo "uv.lock not found"
  exit 1
fi
echo "=== Checking pyproject.toml ==="
rg -n 'nvidia-nat-config-optimizer' pyproject.toml || echo "Not found in pyproject.toml"
echo ""
echo "=== Checking uv.lock ==="
rg -n 'nvidia-nat-config-optimizer' "${lock_file}" || echo "Not found in uv.lock"
```
Length of output: 407
Update uv.lock with the new dependency.
nvidia-nat-config-optimizer was added to pyproject.toml (lines 48, 70, 108, 241) but is missing from uv.lock. Run `uv lock` (or `uv sync`, which re-locks and installs) from the repository root to regenerate the lockfile, then commit the updated uv.lock to keep the dependency set consistent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pyproject.toml` around lines 46 - 49, pyproject.toml now includes the new
dependency "nvidia-nat-config-optimizer" but the uv.lock file wasn't updated;
regenerate the lockfile by running `uv lock` from the repo root so uv.lock
contains the new package and its transitive dependencies, then commit the
updated uv.lock alongside the pyproject changes to keep dependencies consistent;
verify the package name ("nvidia-nat-config-optimizer") appears in uv.lock after
the re-lock.
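As a quick sanity check (a sketch, not part of the repo's tooling), the drift the bot flags can be detected with grep over a pyproject/uv.lock pair; the files below are toy stand-ins created in a temp directory rather than the real ones at the repo root:

```shell
tmp=$(mktemp -d)
# Toy stand-ins: pyproject.toml names the package, uv.lock does not
printf 'dependencies = ["nvidia-nat-config-optimizer == 1.0"]\n' > "$tmp/pyproject.toml"
printf 'name = "nvidia-nat-core"\n' > "$tmp/uv.lock"

pkg="nvidia-nat-config-optimizer"
msg=""
# Flag the package if it is declared but absent from the lockfile
if grep -q "$pkg" "$tmp/pyproject.toml" && ! grep -q "$pkg" "$tmp/uv.lock"; then
  msg="LOCKFILE OUT OF DATE: $pkg missing from uv.lock"
  echo "$msg"
fi
rm -rf "$tmp"
```

Running the same two greps against the real files would pass silently once `uv lock` has been re-run and the updated uv.lock committed.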
Description
Closes
By Submitting this PR I confirm:
Summary by CodeRabbit
- Documentation
- New Features
- Installation