
Bugfix - Fix DeepSpeed BF16 config validation error#796

Open
polarG wants to merge 4 commits into main from dev/hongtaozhang/fix-deepspeed-bf16-config-validation

Conversation

Contributor

@polarG polarG commented Mar 26, 2026

Description
The megatron-gpt:deepspeed benchmark fails with return code 3 (INVALID_BENCHMARK_RESULT) during the BF16 training round. The benchmark runs two precision rounds (FP16 then BF16), and while FP16 succeeds, BF16 crashes at DeepSpeed initialization with:
pydantic_core._pydantic_core.ValidationError: 5 validation errors for DeepSpeedBF16Config
loss_scale - Extra inputs are not permitted
loss_scale_window - Extra inputs are not permitted
min_loss_scale - Extra inputs are not permitted
initial_scale_power - Extra inputs are not permitted
hysteresis - Extra inputs are not permitted

Root Cause
__prepare_deespeed_config() in megatron_gpt3.py uses the same precision_template for both FP16 and BF16 configs. This template includes loss-scaling parameters (loss_scale, loss_scale_window, min_loss_scale, initial_scale_power, hysteresis) that are valid for FP16 but rejected by DeepSpeedBF16Config, which uses pydantic strict validation and does not accept extra fields.

BF16 does not need loss scaling because it has sufficient dynamic range to avoid the underflow/overflow issues that FP16 faces.

This was always technically incorrect, but only became a hard failure when DeepSpeed migrated from Pydantic v1 to Pydantic v2 (around DeepSpeed v0.15–v0.16). In Pydantic v1, the extra = "forbid" setting was less strictly enforced, so the extra fields were silently ignored. Pydantic v2 strictly rejects all unknown fields with a ValidationError.
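The behavior change can be reproduced with a minimal sketch. The class below is a hypothetical stand-in for DeepSpeed's `DeepSpeedBF16Config` (the real schema lives in DeepSpeed, not here); it only illustrates how Pydantic v2's `extra="forbid"` turns the FP16-only fields into hard validation errors:

```python
# Sketch only: a stand-in for DeepSpeed's DeepSpeedBF16Config schema,
# showing Pydantic v2's strict rejection of unknown fields.
from pydantic import BaseModel, ConfigDict, ValidationError


class DeepSpeedBF16ConfigSketch(BaseModel):
    # Pydantic v2: extra="forbid" raises on any field not declared below.
    model_config = ConfigDict(extra="forbid")
    enabled: bool = False


# Valid: only the declared 'enabled' field.
ok = DeepSpeedBF16ConfigSketch(enabled=True)
print(ok.enabled)  # True

# Invalid: FP16-only loss-scaling fields are extra inputs here.
errors = []
try:
    DeepSpeedBF16ConfigSketch(enabled=True, loss_scale=0, hysteresis=2)
except ValidationError as e:
    errors = e.errors()  # one 'extra_forbidden' error per unknown field
print(len(errors))
```

Under Pydantic v1, a model configured to ignore extras would have silently dropped the same fields, which is why the bug stayed latent until the migration.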

Fix
Generate precision-specific DeepSpeed configs:

  • FP16: includes all loss-scaling parameters (unchanged behavior)
  • BF16: only {'enabled': True}
  • FP32: no precision section

This fix is backward compatible — passing only {'enabled': True} for BF16 is valid in all DeepSpeed versions, since the loss-scaling fields were never used by BF16.
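The per-precision sections can be sketched with a hypothetical helper (not the actual `megatron_gpt3.py` code; the loss-scaling values shown are illustrative, assumed defaults):

```python
def build_deepspeed_precision_section(precision):
    """Sketch: emit a precision-specific DeepSpeed config section."""
    if precision == "fp16":
        # FP16 keeps the loss-scaling knobs (unchanged behavior).
        return {
            "fp16": {
                "enabled": True,
                "loss_scale": 0,  # 0 means dynamic loss scaling
                "loss_scale_window": 1000,
                "initial_scale_power": 16,
                "hysteresis": 2,
                "min_loss_scale": 1,
            }
        }
    if precision == "bf16":
        # BF16 has enough dynamic range; its schema forbids loss scaling.
        return {"bf16": {"enabled": True}}
    # FP32 (or empty precision): no precision section at all.
    return {}
```

Keeping the three branches separate means each section only ever contains fields its validator declares, regardless of DeepSpeed version.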

@polarG polarG requested a review from a team as a code owner March 26, 2026 22:20
Copilot AI review requested due to automatic review settings March 26, 2026 22:20
@polarG polarG self-assigned this Mar 26, 2026
@polarG polarG added bug Something isn't working ROCm labels Mar 26, 2026
Contributor

Copilot AI left a comment


Pull request overview

Fixes DeepSpeed BF16 initialization failures in the Megatron-GPT benchmark by generating precision-specific DeepSpeed config sections, so that the BF16 schema no longer rejects FP16-only loss-scaling fields under newer DeepSpeed/Pydantic validation.

Changes:

  • Generate FP16 DeepSpeed config with loss-scaling parameters (unchanged behavior).
  • Generate BF16 DeepSpeed config with only enabled: True to satisfy strict BF16 schema validation.
  • Omit the precision section entirely for FP32 runs.


Comment thread superbench/benchmarks/model_benchmarks/megatron_gpt3.py Outdated
polarG and others added 2 commits April 27, 2026 14:42
Replace misleading 'Load deepspeed config template json file' comment
with 'Build deepspeed config template in memory' since the template is
constructed inline rather than loaded from a file.
Copilot AI review requested due to automatic review settings April 27, 2026 21:52
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 1 out of 1 changed files in this pull request and generated 1 comment.



Comment thread superbench/benchmarks/model_benchmarks/megatron_gpt3.py
@codecov

codecov Bot commented Apr 27, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 85.69%. Comparing base (700d650) to head (1587335).

❌ Your patch check has failed because the patch coverage (50.00%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #796   +/-   ##
=======================================
  Coverage   85.69%   85.69%           
=======================================
  Files         103      103           
  Lines        7890     7894    +4     
=======================================
+ Hits         6761     6765    +4     
  Misses       1129     1129           
Flag Coverage Δ
cpu-python3.12-unit-test 70.44% <100.00%> (+0.01%) ⬆️



Asserts that __prepare_deespeed_config writes the expected JSON schema:
- bf16 -> {'enabled': True} only (no loss_scale / loss_scale_window /
  min_loss_scale / initial_scale_power / hysteresis), matching DeepSpeed's
  BF16 config validator that triggered the original failure.
- fp16 -> retains the loss-scale fields.
- '' (empty precision, e.g. fp32 path) -> no precision section attached.

The test cleans up the mock pretrain_gpt.py at test end so it does not
leak into the negative path of test_megatron_gpt_preprocess.
