
gpt-oss 20b support#889

Open
chochowski wants to merge 2 commits into NVIDIA:dkorzekwa/any_model from chochowski:mchochowski/any_model_gptoss

Conversation


@chochowski chochowski commented Feb 13, 2026

What does this PR do?

Adds gpt-oss-20b support for puzzle any-model pruning.

Type of change:
new feature

Overview:
Adds descriptor, converter, and YAML configuration files for expert removal. Introduces slight changes to conversion to account for the MXFP4-quantized checkpoint of gpt-oss.

Usage

# Add a code snippet demonstrating how to use this

Testing

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes/No
  • Did you write any new necessary tests?: Yes/No
  • Did you add or update any necessary documentation?: Yes/No
  • Did you update Changelog?: Yes/No

Additional Information

Signed-off-by: mchochowski <mchochowski@nvidia.com>
@chochowski chochowski requested review from a team as code owners February 13, 2026 11:18
@chochowski chochowski requested review from jingyu-ml and removed request for a team February 13, 2026 11:18

copy-pr-bot bot commented Feb 13, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.


coderabbitai bot commented Feb 13, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

🗂️ Base branches to auto review (3)
  • main
  • release/.*
  • feature/.*

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@kevalmorabia97 kevalmorabia97 requested review from danielkorzekwa and kevalmorabia97 and removed request for jingyu-ml February 13, 2026 19:33

Is this script NVIDIA written or adapted from some other 3rd party OSS repo?


With this release, the Puzzle algorithm supports only expert removal for gpt-oss-20b. This model ships as a quantized checkpoint, i.e. the MoE expert matrices are quantized in MXFP4 format. In the pruning steps, Puzzle uses the decompressed model (back to bf16) for statistics and score computation. This means that during the conversion to Puzzle format we decompress the model and store it as bf16. Once pruning is done, i.e. the experts to be removed have been identified and the process is finished, the user may want to get the checkpoint back in MXFP4 format. To do so, there is an additional script that takes the original and the pruned checkpoints and outputs a pruned checkpoint in MXFP4 format.
```bash
python gpt_oss_pack_mxfp4_vllm.py \
  --student-path /workspaces/any_model_gpt_oss_20b/mip/puzzle_solutions/stats_num_params_18014757184/solutions--checkpoints/solution_0/ \
  --original-path /workspaces/source_model_checkpoints/openai_gpt-oss-20b/ \
  --output-path /workspaces/any_model_gpt_oss_20b/mip/puzzle_solutions/stats_num_params_18014757184/solutions--checkpoints/mxfp4-ckpt/ \
  --deduce-experts \
  --num-layers 24
```
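For context on what the conversion step undoes: MXFP4 stores weights as 4-bit E2M1 values in blocks of 32 elements, with each block sharing one E8M0 power-of-two scale byte. The sketch below illustrates the dequantization direction per the OCP Microscaling spec using NumPy; it is not the code from this PR, and the actual gpt-oss checkpoint layout (nibble order, tensor names, dtype) may differ.

```python
import numpy as np

# FP4 E2M1 lookup table: codes 0-7 are positive magnitudes, 8-15 negative.
FP4_VALUES = np.array(
    [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
     -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0],
    dtype=np.float32,
)

def dequantize_mxfp4(packed: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Decompress MXFP4 blocks to float32.

    packed: uint8 array of shape (num_blocks, 16), two FP4 codes per byte
            (low nibble assumed first), 16 bytes per 32-element block.
    scales: uint8 array of shape (num_blocks,), one E8M0 exponent byte per
            block, interpreted as the scale 2**(e - 127).
    """
    low = packed & 0x0F
    high = packed >> 4
    # Interleave nibbles back into 32 codes per block.
    codes = np.stack([low, high], axis=-1).reshape(packed.shape[0], -1)
    values = FP4_VALUES[codes]
    block_scales = np.exp2(scales.astype(np.float32) - 127.0)
    return values * block_scales[:, None]
```

Repacking (the direction this PR's script performs) would do the inverse: pick the shared exponent per block, round each element to the nearest E2M1 code, and pack two codes per byte.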

@kevalmorabia97 kevalmorabia97 Feb 13, 2026


We need to specify from which path to run this command. Alternatively, please check whether `python -m modelopt.torch.puzzletron.anymodel.models.gpt_oss_20b.gpt_oss_pruned_to_mxfp4 --student-path ...` works.

- _self_

puzzle_dir: ???
descriptor: llama


seems wrong

# FFN intermediate sizes to search over (heterogeneous architecture)
# teacher_intermediate_size is 8192, so we use proportionally smaller values
pruning:
intermediate_size_list: [2048, 4096, 6144]


is it needed if we prune for num_of_experts?


Modify `llama-3_1-8B_pruneffn_memory.yaml` file for advanced compression scenarios.

## GptOss - 20b


Let's not put it at the same level as ## Advanced Usage; I would put it into a separate MD file (in the model descriptor dir) and link it nicely from the main tutorial. Consult also with @LianaMikael on how best to do it. We want the tutorial to have a great user experience.

Let's also check for English style/grammar. E.g. there should be no ',' after 'that', and likely a comma after 'In the pruning steps'.

@@ -0,0 +1,506 @@
#!/usr/bin/env python3


Missing license header (should be auto-generated by pre-commit hooks); the same applies to other files. Check the Contributing.MD file in the ModelOpt repo.
