Conversation
Signed-off-by: mchochowski <mchochowski@nvidia.com>
Is this script NVIDIA-written, or adapted from some other third-party OSS repo?
With this release, the Puzzle algorithm supports only expert removal for Gpt-Oss-20b. This model ships as a quantized checkpoint, i.e., the MoE expert matrices are quantized in the mxfp4 format. In the pruning steps, Puzzle uses the decompressed model (back in bf16) for statistics and score computation. This means that during conversion to the Puzzle format we decompress the model and store it in bf16. Once pruning is done, i.e., the experts to be removed are identified and the process is finished, the user may want to restore the mxfp4 format of the checkpoint. For this, there is an additional script that takes the original and the pruned checkpoints and outputs the pruned checkpoint in mxfp4 format.
```bash
python gpt_oss_pack_mxfp4_vllm.py \
  --student-path /workspaces/any_model_gpt_oss_20b/mip/puzzle_solutions/stats_num_params_18014757184/solutions--checkpoints/solution_0/ \
  --original-path /workspaces/source_model_checkpoints/openai_gpt-oss-20b/ \
  --output-path /workspaces/any_model_gpt_oss_20b/mip/puzzle_solutions/stats_num_params_18014757184/solutions--checkpoints/mxfp4-ckpt/ \
  --deduce-experts --num-layers 24
```
We need to specify from which path to run this command. Alternatively, please check whether `python -m modelopt.torch.puzzletron.anymodel.models.gpt_oss_20b.gpt_oss_pruned_to_mxfp4 --student-path ...` works.
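As background on the decompression mentioned above, here is a minimal sketch of mxfp4 dequantization following the OCP MX spec (E2M1 elements sharing one E8M0 power-of-two scale per 32-element block). The function and tensor names are illustrative, and the one-code-per-byte layout is a simplification (real checkpoints pack two 4-bit codes per byte); ModelOpt's actual helpers differ.

```python
import torch

# FP4 (E2M1) code -> value lookup: codes 0-7 are positive, 8-15 negative.
FP4_VALUES = torch.tensor(
    [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
     -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0])

def dequantize_mxfp4(codes: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    """codes: (num_blocks, 32) uint8, one 4-bit code per byte (simplified).
    scales: (num_blocks,) uint8 E8M0 exponents; scale = 2**(e - 127)."""
    values = FP4_VALUES[codes.long()]            # decode E2M1 elements
    scale = torch.exp2(scales.float() - 127.0)   # shared per-block scale
    return (values * scale.unsqueeze(-1)).to(torch.bfloat16)
```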
```yaml
- _self_

puzzle_dir: ???
descriptor: llama

# FFN intermediate sizes to search over (heterogeneous architecture)
# teacher_intermediate_size is 8192, so we use proportionally smaller values
pruning:
  intermediate_size_list: [2048, 4096, 6144]
```
Is it needed if we prune for num_of_experts?
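As a side note on those candidate sizes, a quick illustrative check (plain Python, not part of the PR) that they are simply 25/50/75% of the teacher's 8192 intermediate size, as the YAML comment states:

```python
# Purely illustrative: derive the candidate FFN sizes from the teacher's size.
teacher_intermediate_size = 8192
fractions = (0.25, 0.50, 0.75)
print([int(teacher_intermediate_size * f) for f in fractions])
# -> [2048, 4096, 6144]
```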
Modify the `llama-3_1-8B_pruneffn_memory.yaml` file for advanced compression scenarios.
## GptOss - 20b
Let's not put it at the same level as ## Advanced Usage; I would put it into a separate MD file (in the model descriptor dir) and link it nicely from the main tutorial. Consult also with @LianaMikael on how best to do it. We want the tutorial to have a great user experience.
Let's also check the English style/grammar, e.g., there should be no comma after 'that', and a comma is likely needed after 'In the prunning steps'.
@@ -0,0 +1,506 @@
#!/usr/bin/env python3
Missing license header (it should be auto-generated by the pre-commit hooks); the same for the other files. Check the Contributing.MD file in the ModelOpt repo.
What does this PR do?
Adds gpt-oss-20b support for puzzle any-model pruning.
Type of change:
New feature
Overview:
Adds descriptor, converter, and YAML configuration files for expert removal. Introduces slight changes to the conversion step to account for the mxfp4-quantized checkpoint of gpt-oss.
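For intuition, here is a minimal sketch of what expert removal from an MoE state dict looks like. The tensor names, the stacked-experts layout, and the `.experts.` naming convention are hypothetical; the converter in this PR works differently on the actual gpt-oss checkpoint layout.

```python
# Hypothetical sketch: keep only a subset of experts from a state dict where
# expert weights are stacked along dim 0. Names/layout are illustrative only.
import torch

def remove_experts(state_dict: dict, keep: list[int]) -> dict:
    idx = torch.tensor(keep)
    pruned = {}
    for name, tensor in state_dict.items():
        if ".experts." in name:  # assumed naming convention for expert weights
            pruned[name] = tensor.index_select(0, idx)  # drop removed experts
        else:
            pruned[name] = tensor
    return pruned

# Example: keep experts 0, 2, and 5 out of 8.
sd = {"layers.0.mlp.experts.w": torch.randn(8, 16, 32),
      "layers.0.attn.q": torch.randn(32, 32)}
pruned = remove_experts(sd, keep=[0, 2, 5])
print(pruned["layers.0.mlp.experts.w"].shape)  # torch.Size([3, 16, 32])
```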
Usage
```python
# Add a code snippet demonstrating how to use this
```
Testing
Before your PR is "Ready for review"
Additional Information