[bugfix]: fix SP deadlock in negative prompt encoding during training#1178

Open

alexzms wants to merge 4 commits into hao-ai-lab:main from FoundationResearch:bugfix/train/sp

Conversation

@alexzms (Collaborator) commented Mar 22, 2026

Summary

  • Fix NCCL deadlock when training with sp_size > 1
  • Root cause: ensure_negative_conditioning() created an inference pipeline on rank 0 only, but post_init() then called warmup_sequence_parallel_communication(), which performs an all-to-all on the SP group; the other ranks never entered the collective, so rank 0 hung
  • Fix: every rank now encodes the negative prompt independently using a lightweight tokenizer + T5 encoder, eliminating the rank-0 guard and all broadcast logic (small overhead)
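The failure mode can be sketched without NCCL: a collective only completes when every member of the group enters it. The toy below is illustrative only (a thread barrier stands in for the SP all-to-all; all names are hypothetical, not FastVideo APIs) and reproduces both the buggy rank-0 gate and the fixed all-ranks flow:

```python
import threading

def run_ranks(n_ranks, gate_on_rank0):
    """Simulate one SP group step. The Barrier stands in for an NCCL
    all-to-all: it only completes if every rank enters it."""
    barrier = threading.Barrier(n_ranks)
    results = [None] * n_ranks

    def worker(rank):
        # Buggy pattern from the PR description: the inference pipeline
        # (whose post_init() warms up SP communication) exists on rank 0
        # only, so only rank 0 ever reaches the collective.
        if gate_on_rank0 and rank != 0:
            results[rank] = "skipped"
            return
        try:
            # Stand-in for warmup_sequence_parallel_communication(); the
            # short timeout turns the real-world indefinite hang into an error.
            barrier.wait(timeout=0.5)
            results[rank] = "ok"
        except threading.BrokenBarrierError:
            results[rank] = "deadlock"

    threads = [threading.Thread(target=worker, args=(r,)) for r in range(n_ranks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Buggy: rank 0 blocks in the collective; the other ranks never arrive.
assert run_ranks(4, gate_on_rank0=True) == ["deadlock", "skipped", "skipped", "skipped"]
# Fixed: every rank does the work itself, so the collective completes.
assert run_ranks(4, gate_on_rank0=False) == ["ok", "ok", "ok", "ok"]
```

With real NCCL there is no timeout, which is why the buggy pattern shows up as a silent hang rather than an error.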

Tests

  • Tested with sp_size=4 on 4 and 8 GPUs; no hangs and no observed quality degradation
  • Pre-commit passes

@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical deadlock issue encountered during distributed training, specifically when using sequence parallelism. The previous implementation of negative prompt encoding inadvertently created a bottleneck by centralizing the process on a single rank, causing other ranks to wait indefinitely for a collective operation that never fully materialized. The updated approach decentralizes this encoding, allowing each rank to process its negative prompt independently, thereby resolving the deadlock and ensuring smooth distributed training.

Highlights

  • Deadlock Fix: Resolved an NCCL deadlock that occurred during training with sp_size > 1 when encoding negative prompts.
  • Root Cause Identified: The deadlock was caused by ensure_negative_conditioning() creating an inference pipeline only on rank 0, while warmup_sequence_parallel_communication() (called by post_init()) expected all ranks in the Sequence Parallel (SP) group to participate in an all-to-all collective, leading to a hang for non-rank 0 processes.
  • Solution Implemented: Modified the negative prompt encoding mechanism so that every rank independently encodes the negative prompt using a lightweight tokenizer and T5 encoder. This eliminates the need for a rank-0 guard and broadcast logic, preventing the collective communication deadlock.
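The shape of the fix can be sketched with toy stand-ins (the classes below are deliberately fake, not the real transformers AutoTokenizer/T5EncoderModel): because every rank sees the same prompt and the same deterministic encoder, local encoding yields identical conditioning everywhere, so no rank-0 guard or broadcast is needed.

```python
from dataclasses import dataclass

@dataclass
class EncoderOutput:
    last_hidden_state: list

class ToyTokenizer:
    """Stand-in for a tokenizer: one integer id per character."""
    def __call__(self, text, max_length=512):
        return [ord(c) for c in text][:max_length]

class ToyT5Encoder:
    """Stand-in for a T5-style encoder: a fixed, deterministic mapping."""
    def __call__(self, input_ids):
        return EncoderOutput([float(i) * 0.01 for i in input_ids])

def encode_negative_prompt(rank, prompt="low quality, blurry"):
    # Runs identically on every rank: same prompt, same weights,
    # deterministic forward pass => identical embeddings everywhere.
    tokenizer = ToyTokenizer()
    text_encoder = ToyT5Encoder()
    return text_encoder(tokenizer(prompt)).last_hidden_state

# All SP ranks produce the same negative conditioning independently:
embeddings = [encode_negative_prompt(r) for r in range(4)]
assert all(e == embeddings[0] for e in embeddings)
```

The trade-off the PR notes as "small overhead" is visible here: each rank pays for its own tokenizer/encoder instead of one rank encoding and broadcasting.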



@gemini-code-assist (Contributor) left a comment


Code Review

This pull request effectively resolves a deadlock issue in distributed training with sequence parallelism by refactoring the negative prompt encoding. The previous approach of encoding on a single rank and broadcasting the results is replaced by independent encoding on each rank. This change simplifies the code and eliminates complex collective communication. The implementation is solid, but I have one suggestion to improve robustness by using the configured precision for the text encoder instead of a hardcoded value.

Comment thread: fastvideo/train/models/wan/wan.py
Comment thread: fastvideo/train/models/wan/wan.py (outdated)
@Eigensystem Eigensystem added the ready PR is ready to merge label Mar 23, 2026
@mergify mergify bot added type: bugfix Bug fix scope: training Training pipeline, methods, configs labels Mar 30, 2026
@mergify (Contributor) commented Mar 30, 2026

Merge Protections

Your pull request matches the following merge protections and will not be merged until they are valid.

🔴 PR merge requirements

This rule is failing. Waiting for:

  • #approved-reviews-by>=1
  • check-success=fastcheck-passed
  • check-success=full-suite-passed
  • check-success~=pre-commit
  • title~=(?i)^\[(feat|feature|bugfix|fix|refactor|perf|ci|doc|docs|misc|chore|kernel|new.?model)\]
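The title condition above can be sanity-checked locally; the pattern below is copied verbatim from the mergify output (this is just a local regex check, not how mergify evaluates rules server-side):

```python
import re

# Title pattern copied from the mergify merge-protection output.
TITLE_RE = re.compile(
    r"(?i)^\[(feat|feature|bugfix|fix|refactor|perf|ci|doc|docs|misc|chore|kernel|new.?model)\]"
)

def title_ok(title: str) -> bool:
    """True if the PR title carries a recognized [type] prefix."""
    return TITLE_RE.match(title) is not None

# This PR's title satisfies the rule:
assert title_ok("[bugfix]: fix SP deadlock in negative prompt encoding during training")
# Matching is case-insensitive, and "new.?model" tolerates a separator:
assert title_ok("[Feat] add scheduler")
assert title_ok("[new-model] add Wan variant")
# A bare title without a [type] prefix fails the rule:
assert not title_ok("fix SP deadlock")
```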

@alexzms (Collaborator, Author) commented Apr 5, 2026

/merge

@github-actions github-actions bot added ready PR is ready to merge and removed ready PR is ready to merge labels Apr 5, 2026
Diff excerpt (fastvideo/train/models/wan/wan.py; last line truncated in the page capture):

mask_sizes = tuple(int(x) for x in mask_shape[:mask_ndim].tolist())
text_enc_dtype = PRECISION_TO_TYPE[tc.pipeline_config.text_encoder_precisions[0]]
tokenizer = AutoTokenizer.from_pretrained(os.path.join(model_path, "tokenizer"))
text_encoder = T5EncoderModel.from_pretrained(
A collaborator commented on this diff:

Shouldn't it still be possible to just use the prompt encoding stage instead of creating a new instance? I'm pretty sure this is the wrong encoder for Wan; there's also a UMT5EncoderModel, which should be loaded by the existing prompt stage.

