Feature/optimizer module and registry#1614

Closed
afourniernv wants to merge 4 commits into NVIDIA:develop from afourniernv:feature/optimizer-module-and-registry

Conversation

@afourniernv

WIP DRAFT PR

Description

Extracts the optimizer into a standalone nvidia-nat-optimizer package, refactors the GA as an evolutionary-base implementation, and introduces a pluggable registry for optimizer strategies—enabling third-party optimizers without modifying core code.

Package extraction

  • New packages/nvidia_nat_optimizer package with its own pyproject.toml and minimal deps. Installable as nvidia-nat-optimizer (standalone) or included by default via nvidia-nat.
  • Parameter and prompt optimizer code moved out of nat_core profiler into nat.optimizer. Core nat optimize CLI imports optimizer resiliently and suggests installation when missing.
  • Examples, LangChain prompt-optimizer hooks, and docs updated for the new package layout.
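
The "resilient import" behaviour described above can be sketched with a small helper. This is a hedged illustration, not the actual CLI code; the function name and message text are assumptions:

```python
import importlib
import importlib.util


def load_optional_module(name: str, install_hint: str):
    """Import a module if it is installed, else exit with an install hint."""
    if importlib.util.find_spec(name) is None:
        raise SystemExit(install_hint)
    return importlib.import_module(name)


# The nat optimize CLI would use it roughly like:
#   optimizer = load_optional_module(
#       "nat.optimizer",
#       "Optimizer not found. Install nvidia-nat-optimizer or nvidia-nat.",
#   )
```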

Evolutionary base and GA refactor

  • Introduced evolutionary_base.py with a shared evaluation loop and template method; subclasses implement fitness computation and persistence.
  • Replaced monolithic prompt_optimizer with ga_prompt_optimizer as the first evolutionary implementation. Oracle feedback moves out of the shared config into model_extra and GA-specific config.
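
A minimal sketch of the template-method shape described above, with assumed names (the actual evolutionary_base.py differs): the shared loop ranks the population by fitness and checkpoints the best candidate, while fitness computation and persistence are deferred to subclasses:

```python
from abc import ABC, abstractmethod


class EvolutionaryPromptOptimizer(ABC):
    """Illustrative template-method base; names are assumptions."""

    def __init__(self, generations: int):
        self.generations = generations

    def run(self, population: list[str]) -> str:
        """Shared evaluation loop: score, checkpoint, evolve."""
        best = population[0]
        for _ in range(self.generations):
            scored = [(self.fitness(p), p) for p in population]
            scored.sort(reverse=True)
            best = scored[0][1]
            self.save_checkpoint(best)
            population = self.next_generation([p for _, p in scored])
        return best

    @abstractmethod
    def fitness(self, prompt: str) -> float: ...

    @abstractmethod
    def save_checkpoint(self, best: str) -> None: ...

    def next_generation(self, ranked: list[str]) -> list[str]:
        # Default: keep the top half; a GA subclass would override this
        # with selection, crossover, and mutation.
        half = ranked[: max(1, len(ranked) // 2)]
        return (half * 2)[: len(ranked)]
```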

Pluggable optimizer registry

  • New OptimizerStrategyBaseConfig base and registry in type_registry (register_optimizer, get_optimizer). NumericOptimizationConfig and PromptGAOptimizationConfig extend it with name="numeric" and name="ga".
  • register_optimizer decorator in register_workflow.py (mirrors register_evaluator). Optimizer runtime dispatches via get_optimizer(type(config)) instead of direct imports.
  • nat.optimizer.register registers built-in strategies when the package loads. Existing YAML configs remain valid; strategy selection uses the inferred config type.
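
A minimal sketch of a config-type-keyed registry like the one described (the register_optimizer / get_optimizer names come from the PR, but this implementation is illustrative, not the project's type_registry code):

```python
from typing import Callable

# Maps an optimizer config type to the function that builds its strategy.
_OPTIMIZER_REGISTRY: dict[type, Callable] = {}


def register_optimizer(config_type: type):
    """Decorator: associate a build function with its config type."""
    def decorator(build_fn: Callable) -> Callable:
        _OPTIMIZER_REGISTRY[config_type] = build_fn
        return build_fn
    return decorator


def get_optimizer(config_type: type) -> Callable:
    """Look up the build function for a config type; fail loudly if absent."""
    try:
        return _OPTIMIZER_REGISTRY[config_type]
    except KeyError:
        raise KeyError(
            f"No optimizer registered for {config_type.__name__}"
        ) from None
```

The runtime can then dispatch with `get_optimizer(type(config))(config)` instead of importing each strategy directly, which is what allows third-party optimizers to plug in without core changes.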

Note on diff size

~5k additions are mostly: uv.lock for the new package (~3.6k lines) and one new file ga_prompt_optimizer.py (~526 lines). Moved files (e.g. oracle_feedback.py, update_helpers.py) show as "renamed without changes" so blame is preserved.

- Add packages/nvidia_nat_optimizer with pyproject.toml, README, src/nat/optimizer, tests
- Move parameter_optimization modules and tests from core to optimizer package
- Core CLI optimize: resilient import; on missing optimizer, suggest nvidia-nat-optimizer or nvidia-nat
- Meta-package default and most extras include nvidia-nat-optimizer; add uv.sources path
- Update LangChain prompt_optimizer register and examples (DEVELOPER_NOTES, email_phishing test)
- Docs: installation (default includes optimizer, extras), optimizer (prerequisites)

Signed-off-by: afourniernv <afournier@nvidia.com>
- evolutionary_base: shared evaluation loop and template method; subclasses implement fitness and persistence
- ga_prompt_optimizer: GA fitness, selection, oracle feedback, checkpointing
- Oracle feedback moves out of core config into model_extra and ga_config
- Replace prompt_optimizer with ga_prompt_optimizer; update runtime, langchain, tests

Signed-off-by: afourniernv <afournier@nvidia.com>
- Add OPTIMIZER to ComponentEnum, OptimizerStrategyBaseConfig (TypedBaseModel)
- Extend NumericOptimizationConfig and PromptGAOptimizationConfig with name=
- Add register_optimizer, get_optimizer, RegisteredOptimizerInfo in type_registry
- Add register_optimizer decorator in register_workflow
- Add nat.optimizer.register with GA and parameter optimizer registration
- Update optimizer_runtime to dispatch via registry instead of direct imports
- Update test_optimizer_runtime_extra for registry-based flow
@copy-pr-bot

copy-pr-bot bot commented Feb 19, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.


@coderabbitai

coderabbitai bot commented Feb 19, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.




- class PromptGAOptimizationConfig(BaseModel):
+ class PromptGAOptimizationConfig(OptimizerStrategyBaseConfig, name="ga"):
Contributor

I feel like this should just be PromptOptimizationConfig with the genetic algorithm being a registered variant.

class PromptGAOptimizationConfig(OptimizerStrategyBaseConfig, name="ga"):
"""
Configuration for prompt optimization using a Genetic Algorithm.
Oracle feedback and other implementation-specific options are not part of this
Contributor

The GA-specific registered variant of the PromptOptimization config can also have oracle feedback. Passing it in as model extra limits the amount of type checking and validation we can do, as well as the discoverability of available fields.
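
The reviewer's suggestion, explicit typed fields rather than model_extra, can be sketched roughly as follows. This uses plain dataclasses in place of the project's Pydantic models, and the field names (oracle_feedback, enabled, max_rounds, population_size) are assumptions for illustration only:

```python
from dataclasses import dataclass, field


@dataclass
class OracleFeedbackConfig:
    """Oracle feedback as a first-class, typed sub-config."""
    enabled: bool = False
    max_rounds: int = 3


@dataclass
class PromptGAOptimizationConfig:
    """GA variant config with discoverable, validated fields."""
    population_size: int = 8
    oracle_feedback: OracleFeedbackConfig = field(
        default_factory=OracleFeedbackConfig
    )

    def __post_init__(self):
        # Explicit fields allow validation that model_extra cannot provide.
        if self.population_size < 2:
            raise ValueError("population_size must be >= 2")
```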

Contributor

These tests will need to change once we pull oracle feedback and associated config items back out of model extra.

from nat.data_models.optimizer import PromptGAOptimizationConfig


class GAOptimizerConfig(BaseOptimizerConfig):
Contributor

Why do we have PromptGAOptimizationConfig and also this?

from nat.optimizer.update_helpers import apply_suggestions


class BaseEvolutionaryPromptOptimizer(ABC):
Contributor

Seems like we need one more level of interface above this. Something like BasePromptOptimizer which defines the methods that all prompt optimizers must implement, with GAPromptOptimizer as a subclass of that. BaseEvolutionaryPromptOptimizer seems unnecessary unless we expect to support many evolutionary algorithms AND other algorithms as well.
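
The hierarchy the reviewer describes could look roughly like this; the class and method names (BasePromptOptimizer, optimize) are assumptions for illustration, not the project's actual API:

```python
from abc import ABC, abstractmethod


class BasePromptOptimizer(ABC):
    """Interface every prompt optimizer would implement."""

    @abstractmethod
    def optimize(self, prompt: str) -> str:
        """Return an improved version of the given prompt."""


class GAPromptOptimizer(BasePromptOptimizer):
    """GA variant implementing the common interface directly, without
    an intermediate evolutionary base class."""

    def optimize(self, prompt: str) -> str:
        # Placeholder: a real implementation would run the GA loop here.
        return prompt
```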

Contributor

Another suggestion/note on the directory structure: perhaps the delineation would be clearer with two directories under packages/nvidia_nat_optimizer/src/nat/optimizer, one for parameters and one for prompts.



@experimental(feature_name="Optimizer")
def optimize_parameters(
Contributor

I feel like we also need an ABC for parameter optimization so that developers can also register new ways to do parameter optimization but still enforce a standard interface.

@afourniernv
Author

Superseded by: #1637

Dhruv's comments have been addressed in the latest PR.

