
fix: add gate_compress parameter to all attention backends (#817) #1182

Closed

KyleNeverGivesUp wants to merge 3 commits into hao-ai-lab:main from KyleNeverGivesUp:fix/gate-compress-attention-backends

Conversation

@KyleNeverGivesUp

Purpose

Fixes #817

Changes

  • Added gate_compress: torch.Tensor | None = None parameter to forward() in all attention backends (sdpa, flash_attn, sage_attn, sage_attn3, sla, vmoba, abstract)
  • This fixes the TypeError: SDPAImpl.forward() takes 5 positional arguments but 6 were given error when DistributedAttention_VSA is used with non-VSA backends
  • The # type: ignore[call-arg] comment in layer.py line 207 confirms this was a known type contract violation
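
As a hedged sketch of the failure (toy class and values, not the actual FastVideo backend code), the original five-positional-parameter signature rejects the extra tensor:

```python
class SDPAImpl:
    """Toy stand-in for an attention backend; the real class lives in fastvideo."""

    # Original signature: exactly (self, q, k, v, attn_metadata) -> 5 positional params.
    def forward(self, q, k, v, attn_metadata=None):
        return "attention-output"


impl = SDPAImpl()

# A DistributedAttention_VSA-style caller also passes gate_compress positionally,
# making 6 positional arguments including self:
try:
    impl.forward("q", "k", "v", None, "gate_compress")
except TypeError as err:
    print(err)  # e.g. "SDPAImpl.forward() takes 5 positional arguments but 6 were given"
```

Adding gate_compress as an optional parameter with a None default makes the extra argument legal while leaving four-argument callers untouched.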

Test Plan

pytest fastvideo/tests/ -k "sparse" -v

Test Results

Unable to run the full test suite locally (no NVIDIA GPU available in my Mac environment).
The change is minimal: it only adds gate_compress: torch.Tensor | None = None as an
optional parameter to existing forward() signatures, so existing functionality should
be unaffected.

Checklist

  • I ran pre-commit run --all-files and fixed all issues
  • I added or updated tests for my changes
  • I updated documentation if needed
  • I considered GPU memory impact of my changes

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request standardizes the forward method signatures across various attention backends by introducing an optional gate_compress parameter and making the attn_metadata parameter optional. This change resolves a TypeError that previously arose when the DistributedAttention_VSA component attempted to call non-VSA backends with an incompatible number of arguments, thereby enhancing the system's compatibility and preventing runtime errors in mixed attention configurations.

Highlights

  • Parameter Standardization: Added an optional gate_compress: torch.Tensor | None = None parameter to the forward() method signatures across all affected attention backends (sdpa, flash_attn, sage_attn, sage_attn3, sla, vmoba, and abstract).
  • Type Hint Refinement: Modified the attn_metadata parameter to be optional (| None = None) in the forward() method signatures across the affected attention backends.
  • Bug Fix: Resolved a TypeError that occurred when DistributedAttention_VSA was used with non-VSA backends, which was caused by an incompatible number of arguments in the forward() calls.



@github-actions github-actions bot left a comment


Welcome to FastVideo! Thanks for your first pull request.

How our CI works:

PRs run a three-tier CI system:

  1. Pre-commit — formatting (yapf), linting (ruff), type checking (mypy). Runs immediately on every PR.
  2. Fastcheck — core GPU tests (encoders, VAEs, transformers, kernels, unit tests). Runs automatically via Buildkite on relevant file changes (~10-15 min).
  3. Full Suite — integration tests, training pipelines, SSIM regression. Runs only when a reviewer adds the ready label.

Before your PR is reviewed:

  • pre-commit run --all-files passes locally
  • You've added or updated tests for your changes
  • The PR description explains what and why

If pre-commit fails, a bot comment will explain how to fix it. Fastcheck and Full Suite results appear in the Checks section below.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new optional parameter gate_compress and makes the attn_metadata parameter optional in the forward method signatures across various attention backend implementations. However, making attn_metadata optional without corresponding None checks in the method bodies introduces potential AttributeError issues in sla.py and vmoba.py when attn_metadata is None. The reviewer suggests adding None checks before accessing attn_metadata attributes, or in the case of vmoba.py, considering if attn_metadata should remain non-optional.

Comment thread fastvideo/attention/backends/sla.py Outdated
Comment thread fastvideo/attention/backends/sla.py Outdated
Comment thread fastvideo/attention/backends/vmoba.py Outdated

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d9f00fc717


Comment thread fastvideo/attention/backends/sdpa.py Outdated
Comment on lines +76 to +77
gate_compress: torch.Tensor | None = None,
attn_metadata: SDPAMetadata | None = None,

P1: Keep attn_metadata as the fourth positional parameter

This signature change makes existing positional call sites pass metadata into gate_compress instead of attn_metadata. For example, fastvideo/attention/layer.py:125,286 and fastvideo/models/dits/ltx2.py:1141,1234 still call forward(q, k, v, ctx_attn_metadata), so after this commit attn_metadata becomes None inside SDPA/Flash/Sage/SLA/VMOBA paths. In masked or variable-length attention flows, that silently drops metadata-derived masking behavior and can produce incorrect attention outputs rather than raising an error.
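
A minimal sketch of that hazard (illustrative names, not the literal backend code): with gate_compress inserted before attn_metadata, a legacy four-argument positional call binds the metadata to the wrong parameter instead of raising.

```python
class SDPAImpl:
    # First-commit ordering (illustrative): gate_compress placed before attn_metadata.
    def forward(self, q, k, v, gate_compress=None, attn_metadata=None):
        # Return what each optional parameter actually received.
        return {"gate_compress": gate_compress, "attn_metadata": attn_metadata}


impl = SDPAImpl()
metadata = {"mask": "causal"}

# Existing call site, e.g. forward(q, k, v, ctx_attn_metadata), still positional:
result = impl.forward("q", "k", "v", metadata)

print(result["gate_compress"])  # {'mask': 'causal'} <- metadata lands in the wrong slot
print(result["attn_metadata"])  # None               <- masking is silently dropped
```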


Author


Fixed. Moved gate_compress to after attn_metadata in all backend forward signatures, so existing call sites remain unaffected.
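
As a sketch of the corrected ordering (toy class, not the actual diff): attn_metadata stays the fourth positional parameter and gate_compress is appended after it, so both legacy and VSA-style callers bind correctly.

```python
class SDPAImpl:
    # Fixed ordering: attn_metadata remains 4th positional; gate_compress follows it.
    def forward(self, q, k, v, attn_metadata=None, gate_compress=None):
        return {"attn_metadata": attn_metadata, "gate_compress": gate_compress}


impl = SDPAImpl()
metadata = {"mask": "causal"}

legacy = impl.forward("q", "k", "v", metadata)
print(legacy["attn_metadata"])  # {'mask': 'causal'} -> legacy call unaffected

vsa = impl.forward("q", "k", "v", metadata, "gate_tensor")
print(vsa["gate_compress"])     # gate_tensor -> extra argument now accepted
```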

@loaydatrain
Collaborator

Why would DistributedAttention_VSA be called on a non-VSA attn backend? Were you able to replicate the error from the original issue?

@alexzms
Collaborator

alexzms commented Mar 27, 2026

Why would DistributedAttention_VSA be called on a non-VSA attn backend? Were you able to replicate the error from the original issue?

agree

@Eigensystem Eigensystem added the ready PR is ready to merge label Mar 28, 2026
@mergify mergify bot added the attention label Mar 28, 2026
@mergify
Contributor

mergify bot commented Mar 30, 2026

⚠️ PR title format required

Your PR title must start with a type tag in brackets. Examples:

  • [feat] Add new model support
  • [bugfix] Fix VAE tiling corruption
  • [refactor] Restructure training pipeline
  • [perf] Optimize attention kernel
  • [ci] Update test infrastructure
  • [docs] Add inference guide
  • [misc] Clean up configs
  • [new-model] Port Flux2 to FastVideo

Valid tags: feat, feature, bugfix, fix, refactor, perf, ci, doc, docs, misc, chore, kernel, new-model

Please update your PR title and the merge protection check will pass automatically.

@mergify mergify bot added the scope: attention Attention backends (VSA, STA, Flash, etc.) label Mar 30, 2026
@mergify
Contributor

mergify bot commented Mar 30, 2026

Merge Protections

Your pull request matches the following merge protections and will not be merged until they are valid.

🔴 PR merge requirements

This rule is failing.
  • #approved-reviews-by>=1
  • check-success=fastcheck-passed
  • check-success=full-suite-passed
  • check-success~=pre-commit
  • title~=(?i)^\[(feat|feature|bugfix|fix|refactor|perf|ci|doc|docs|misc|chore|kernel|new.?model)\]
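
The title rule can be checked directly; a quick sketch using Python's re module and the pattern as printed in the rule above:

```python
import re

# The merge-protection title pattern, verbatim from the rule above:
TITLE_RE = re.compile(
    r"(?i)^\[(feat|feature|bugfix|fix|refactor|perf|ci|doc|docs|misc|chore|kernel|new.?model)\]"
)

# This PR's title has no bracketed tag, so the rule fails:
print(bool(TITLE_RE.search("fix: add gate_compress parameter to all attention backends")))  # False

# Retitling with a bracketed tag satisfies it:
print(bool(TITLE_RE.search("[fix] add gate_compress parameter to all attention backends")))  # True
```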

@mergify
Contributor

mergify bot commented Mar 30, 2026

Buildkite CI tests failed

Hi @KyleNeverGivesUp, some Buildkite CI tests have failed. Check the build for details:
View Buildkite build →

Common causes:

  • Test failures: Check the failing step's output for assertion errors or tracebacks
  • Import errors: Make sure new dependencies are added to pyproject.toml
  • GPU memory: Some tests require specific GPU types (L40S, H100 NVL)
  • Kernel build: If you changed fastvideo-kernel/, the build may have failed

If the failure is unrelated to your changes, leave a comment explaining why.

@loaydatrain
Collaborator

Closing this PR. Issue solved by #1183

@loaydatrain loaydatrain closed this Apr 6, 2026

Labels

  • ready: PR is ready to merge
  • scope: attention: Attention backends (VSA, STA, Flash, etc.)


Development

Successfully merging this pull request may close these issues.

[Bug] 1.6. Incompatibilities

4 participants