perf(python): optimize GPU IVF training sampling and in-memory KMeans pipeline #6383
Open
hushengquan wants to merge 3 commits into lance-format:main from
Conversation
… with chunked take
_efficient_sample with Rust sorted-index take strategy
Contributor (Author)
@Xuanwo Hi, could you please take a look at this PR? We've recently been using GPUs for pre-training and noticed a large volume of small I/O operations, which results in poor performance.
Collaborator
Thank you for this work! I'm working on the GPU side too and will take a look.
Contributor (Author)
Thank you! I have also observed that
Summary
Optimizes the Python GPU IVF centroid training path to eliminate redundant I/O and
align with the efficient Rust CPU training strategy.
Changes
sampler.py
- _efficient_sample: Generate n uniformly random indices, sort them, and take in large contiguous chunks (8192 rows per take). Sorting enables the object store to merge adjacent row reads into fewer, larger range requests, drastically reducing S3 I/O latency.
- Replace the if guard with a while loop that correctly slices accumulated rows into batch_size-sized RecordBatches.
- maybe_sample: Remove max_takes branching; always delegate to _efficient_sample. Deprecate the max_takes parameter (kept for API compatibility).
- _filtered_efficient_sample: Compute target_takes internally instead of accepting it as a parameter.
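As a rough illustration of the sorted, chunked take strategy above (the function and parameter names below are hypothetical sketches, not the actual sampler.py API):

```python
import numpy as np

def sample_sorted_chunks(take, num_rows, n, chunk_size=8192):
    """Hypothetical sketch: sample n of num_rows rows via sorted, chunked takes.

    `take` stands in for a dataset take() call. Sorting the indices lets the
    object store merge adjacent row reads into fewer, larger range requests.
    """
    # Draw n uniform random row ids without replacement, then sort them.
    indices = np.sort(np.random.choice(num_rows, n, replace=False))
    chunks = []
    for start in range(0, len(indices), chunk_size):
        # Each take covers up to chunk_size sorted, mostly contiguous rows.
        chunks.append(take(indices[start:start + chunk_size]))
    return chunks
```

Sorting does not bias the sample: a uniform draw without replacement yields the same set of rows regardless of the order in which they are fetched.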
vector.py
- Call maybe_sample once, materialize the result as a numpy array (~384 MB for 65536×1536 float32), and reuse it for both initial centroid selection and KMeans training.
- Remove the TorchDataset + CachedDataset dependency: no more disk-based IPC caching. Pass a torch.Tensor directly to KMeans.fit(), which wraps it in a TensorDataset for pure in-memory iteration.
- Use np.random.choice on the in-memory array instead of sampling another k rows from disk.

Performance Impact
Benchmarked on 5M rows × 1536-dim float32, S3-backed dataset, k=256,
sample_rate=256, max_iters=50, GPU=Apple MPS.
Before: double sampling + disk cache + 2048 small random reads
After: single sampling, sorted chunked reads (1 × 8 large sorted chunked reads), in-memory training
Memory footprint (k × sample_rate × dim × 4 B) matches the Rust CPU path.
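A quick sanity check of the memory figure, plus a sketch of in-memory centroid initialization via np.random.choice (the variable names and the small stand-in array below are illustrative, not the actual vector.py code):

```python
import numpy as np

# Benchmark config from above: k=256, sample_rate=256, dim=1536, float32 (4 B).
k, sample_rate, dim = 256, 256, 1536
n_samples = k * sample_rate               # 65536 rows, sampled from disk once
nbytes = n_samples * dim * 4              # k × sample_rate × dim × 4 B
print(nbytes / 2**20)                     # → 384.0 (MiB), the ~384 MB figure

# Initial centroids come from the already-materialized sample via
# np.random.choice instead of another k reads from disk.
# (Small stand-in array here to keep the sketch light.)
samples = np.random.rand(1024, 8).astype(np.float32)
init_ids = np.random.choice(len(samples), size=16, replace=False)
centroids = samples[init_ids]             # shape (16, 8)
```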