Bugfix - Fix DeepSpeed BF16 config validation error #796
Conversation
Pull request overview
Fixes DeepSpeed BF16 initialization failures in the Megatron-GPT benchmark by generating precision-specific DeepSpeed config sections, so that newer DeepSpeed/Pydantic validation no longer rejects FP16-only loss-scaling fields in the BF16 config.
Changes:
- Generate FP16 DeepSpeed config with loss-scaling parameters (unchanged behavior).
- Generate BF16 DeepSpeed config with only enabled: True to satisfy strict BF16 schema validation.
- Omit the precision section entirely for FP32 runs.
Replace misleading 'Load deepspeed config template json file' comment with 'Build deepspeed config template in memory' since the template is constructed inline rather than loaded from a file.
Pull request overview
Copilot reviewed 1 out of 1 changed files in this pull request and generated 1 comment.
Codecov Report
✅ All modified and coverable lines are covered by tests.
❌ The patch check failed because the patch coverage (50.00%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files:

@@           Coverage Diff           @@
##             main     #796   +/-   ##
=======================================
  Coverage   85.69%   85.69%
=======================================
  Files         103      103
  Lines        7890     7894     +4
=======================================
+ Hits         6761     6765     +4
  Misses       1129     1129
Asserts that __prepare_deespeed_config writes the expected JSON schema:
- bf16 -> {'enabled': True} only (no loss_scale / loss_scale_window / min_loss_scale / initial_scale_power / hysteresis), matching DeepSpeed's BF16 config validator that triggered the original failure.
- fp16 -> retains the loss-scale fields.
- '' (empty precision, e.g. the fp32 path) -> no precision section attached.
The test cleans up the mock pretrain_gpt.py at the end so it does not leak into the negative path of test_megatron_gpt_preprocess.
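A minimal sketch of those assertions (the helper below is illustrative, not the PR's actual test code, which drives __prepare_deespeed_config and reads the JSON it writes):

```python
def assert_precision_section(config: dict, precision: str) -> None:
    """Illustrative helper mirroring the assertions described above."""
    loss_scale_keys = {
        'loss_scale', 'loss_scale_window', 'min_loss_scale',
        'initial_scale_power', 'hysteresis',
    }
    if precision == 'bf16':
        # bf16 -> {'enabled': True} only; any loss-scale key would trip
        # DeepSpeed's strict BF16 validator.
        assert config['bf16'] == {'enabled': True}
    elif precision == 'fp16':
        # fp16 -> keeps the loss-scale fields alongside 'enabled'.
        assert config['fp16']['enabled'] is True
        assert loss_scale_keys <= set(config['fp16'].keys())
    else:
        # '' (fp32 path) -> no precision section attached at all.
        assert 'fp16' not in config and 'bf16' not in config
```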
Description
The megatron-gpt:deepspeed benchmark fails with return code 3 (INVALID_BENCHMARK_RESULT) during the BF16 training round. The benchmark runs two precision rounds (FP16 then BF16), and while FP16 succeeds, BF16 crashes at DeepSpeed initialization with:
pydantic_core._pydantic_core.ValidationError: 5 validation errors for DeepSpeedBF16Config
loss_scale - Extra inputs are not permitted
loss_scale_window - Extra inputs are not permitted
min_loss_scale - Extra inputs are not permitted
initial_scale_power - Extra inputs are not permitted
hysteresis - Extra inputs are not permitted
Root Cause
__prepare_deespeed_config() in megatron_gpt3.py uses the same precision_template for both FP16 and BF16 configs. This template includes loss-scaling parameters (loss_scale, loss_scale_window, min_loss_scale, initial_scale_power, hysteresis) that are valid for FP16 but rejected by DeepSpeedBF16Config, which uses pydantic strict validation and does not accept extra fields.
BF16 does not need loss scaling because it has sufficient dynamic range to avoid the underflow/overflow issues that FP16 faces.
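One quick way to see the range difference, using torch.finfo (an illustrative check, not part of the PR):

```python
import torch

# BF16 keeps FP32's 8 exponent bits, so its dynamic range matches FP32;
# FP16 has only 5 exponent bits and saturates just past 65504.
print(torch.finfo(torch.float16).max)   # 65504.0
print(torch.finfo(torch.bfloat16).max)  # ~3.39e+38
print(torch.finfo(torch.float32).max)   # ~3.40e+38
```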
This was always technically incorrect, but only became a hard failure when DeepSpeed migrated from Pydantic v1 to Pydantic v2 (around DeepSpeed v0.15–v0.16). In Pydantic v1, the extra = "forbid" setting was less strictly enforced, so the extra fields were silently ignored. Pydantic v2 strictly rejects all unknown fields with a ValidationError.
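The failure is easy to reproduce with a minimal Pydantic v2 model; the class below is an illustrative stand-in for DeepSpeedBF16Config, not DeepSpeed's actual definition:

```python
from pydantic import BaseModel, ConfigDict, ValidationError

class BF16ConfigStandIn(BaseModel):
    # extra='forbid' is what makes Pydantic v2 reject unknown fields.
    model_config = ConfigDict(extra='forbid')
    enabled: bool = False

try:
    # FP16-only loss-scaling fields are "extra inputs" to the BF16 schema.
    BF16ConfigStandIn(enabled=True, loss_scale=0, loss_scale_window=1000)
except ValidationError as exc:
    print(exc)  # reports "Extra inputs are not permitted" per unknown field
```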
Fix
Generate precision-specific DeepSpeed configs:
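A sketch of the fixed logic (the function name and FP16 default values are illustrative; the actual change lives in __prepare_deespeed_config in megatron_gpt3.py):

```python
def build_precision_section(precision: str) -> dict:
    """Return the precision-specific piece of the DeepSpeed config."""
    if precision == 'fp16':
        # FP16 keeps the dynamic loss-scaling parameters (unchanged).
        return {
            'fp16': {
                'enabled': True,
                'loss_scale': 0,
                'loss_scale_window': 1000,
                'min_loss_scale': 1,
                'initial_scale_power': 12,
                'hysteresis': 2,
            }
        }
    if precision == 'bf16':
        # BF16 passes only 'enabled' so the strict BF16 schema accepts it.
        return {'bf16': {'enabled': True}}
    # FP32: omit the precision section entirely.
    return {}
```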
This fix is backward compatible — passing only {'enabled': True} for BF16 is valid in all DeepSpeed versions, since the loss-scaling fields were never used by BF16.