GEMM + ReduceScatter with Workgroup Specialization Example #317
Conversation
mawad-amd left a comment
Thanks for the PR, Kyle! I know it is a draft but I left a couple of comments.
Hi @mawad-amd, as you mentioned in #169, do I need to add a test for this like https://github.com/ROCm/iris/blob/main/tests/examples/test_all_load_bench.py?
Pull request overview
This PR introduces a GEMM + ReduceScatter example that uses workgroup specialization to overlap computation and communication on AMD GPUs. The implementation divides SMs into GEMM workgroups for matrix multiplication and communication workgroups for scatter operations.
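The SM split described above can be sketched in plain Python. This is a hypothetical illustration of the partitioning idea, not the PR's actual kernel code: `assign_role` and `num_gemm_wgs` are illustrative names, and in the real Triton kernel the branch would be taken on the program ID inside the persistent kernel.

```python
def assign_role(pid: int, num_gemm_wgs: int) -> str:
    """Workgroups with a program ID below the threshold compute GEMM tiles;
    the remaining workgroups drive the reduce-scatter communication."""
    return "gemm" if pid < num_gemm_wgs else "comm"

def partition(total_wgs: int, num_gemm_wgs: int) -> list:
    """Assign a role to every persistent workgroup launched on the device."""
    return [assign_role(pid, num_gemm_wgs) for pid in range(total_wgs)]

roles = partition(8, 6)
# -> 6 "gemm" workgroups followed by 2 "comm" workgroups
```

Because both roles live in one persistent kernel launch, the communication workgroups can start scattering finished tiles while the GEMM workgroups are still computing, which is what overlaps the two phases.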
Changes:
- Added validation function for reduce-scatter operations
- Implemented persistent GEMM kernel with integrated ReduceScatter using workgroup specialization
- Created benchmark infrastructure with timing, validation, and tracing capabilities
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| examples/common/validation.py | Added validate_reduce_scatter function to verify reduce-scatter correctness |
| examples/22_gemm_one_shot_reduce_scatter_wg_specialization/gemm_reduce_scatter.py | Core kernel implementing GEMM + ReduceScatter with SM specialization |
| examples/22_gemm_one_shot_reduce_scatter_wg_specialization/matmul_wrapper.py | PyTorch autograd wrapper for the GEMM kernel |
| examples/22_gemm_one_shot_reduce_scatter_wg_specialization/benchmark.py | Benchmark script with validation, timing, and distributed setup |
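A reduce-scatter validation like the one added in examples/common/validation.py can be sketched with NumPy. This is a minimal stand-in, not the PR's actual implementation (which presumably operates on torch tensors): rank r's output shard must equal the elementwise sum, across all ranks, of shard r of each rank's input.

```python
import numpy as np

def validate_reduce_scatter(inputs, outputs, atol=1e-5):
    """inputs: one full matrix per rank; outputs: one row-shard per rank.
    Returns True if every rank's shard matches the reduced reference."""
    world = len(inputs)
    total = np.add.reduce(inputs)                       # elementwise sum over ranks
    shards = np.array_split(total, world, axis=0)       # reference shard per rank
    return all(np.allclose(shards[r], outputs[r], atol=atol)
               for r in range(world))
```

Building the reference on the host from the unreduced inputs keeps the check independent of the kernel's atomic-add ordering, which matters because floating-point atomics are not deterministic and an exact-equality check would flap.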
mawad-amd left a comment
Looks good. Thanks, Kyle!
Motivation
To add an example of GEMM + ReduceScatter using workgroup specialization. Resolves #178.
Technical Details
It's a one-shot GEMM + ReduceScatter kernel, using `atomic_add` to do the reduction in place.
Test Plan
As discussed, it's been tested locally.
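The one-shot in-place reduction described under Technical Details can be sketched with NumPy standing in for device memory. This is a hypothetical host-side model, not the kernel itself: each rank adds its slice of the GEMM result directly into the destination rank's output shard, the role `atomic_add` plays on the GPU.

```python
import numpy as np

rng = np.random.default_rng(0)
world, m, n = 4, 8, 8
shard_rows = m // world

# per-rank GEMM results (full M x N each)
partials = [rng.random((m, n)) for _ in range(world)]
# one pre-zeroed output shard per rank, reduced in place
shards = [np.zeros((shard_rows, n)) for _ in range(world)]

for src in range(world):          # each source rank...
    for dst in range(world):      # ...adds its slice into dst's shard ("atomic_add")
        shards[dst] += partials[src][dst * shard_rows:(dst + 1) * shard_rows]

# every shard now holds the fully reduced rows for its rank
expected = sum(partials)
```

The "one-shot" property is that there is no separate reduce stage: accumulation happens as each contribution arrives, so order does not matter and no intermediate buffers are needed beyond the zero-initialized shards.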
Test Result
Submission Checklist