konflux: use wheels from RHOAI index #994
Conversation
Walkthrough
Replace per-architecture requirement inputs with generated hashed wheel/source requirements, bump numpy/torch, add a Konflux requirements generator that splits packages and updates Tekton YAMLs, add pinned overrides, remove the old torch-removal script, regenerate hashed requirements and build metadata, and expand system/tooling packages for builds.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Dev as Developer/CI
    participant Script as scripts/konflux_requirements.sh
    participant UV as uv (pip-compile)
    participant Files as Generated requirement files
    participant YAML as .tekton YAMLs
    Dev->>Script: run ./scripts/konflux_requirements.sh
    Script->>UV: pip-compile pyproject.toml + overrides -> raw requirements
    UV-->>Script: raw requirements (with provenance comments)
    Script->>Files: split raw into wheel list & source list
    Script->>UV: pip-compile wheel/source lists -> produce hashed files
    UV-->>Files: requirements.hashes.wheel.txt / requirements.hashes.source.txt
    Script->>Files: run pybuild-deps -> produce build-deps file (requirements-build.txt)
    Script->>YAML: rewrite .tekton/lightspeed-stack-*.yaml with wheel package names
    Script-->>Dev: summary & reminders to commit generated files
```
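To make the walkthrough concrete, here is a rough shell sketch of the flow the diagram describes. It is illustrative only: the file names match those discussed in this review, but the splitting logic is elided and the exact flags are assumptions, not the reviewed script.

```bash
#!/bin/bash
set -euo pipefail

RAW_REQ_FILE="requirements.no_hashes.txt"
WHEEL_FILE="requirements.wheel.txt"     # packages served by the RHOAI index
SOURCE_FILE="requirements.source.txt"   # packages served by pypi.org

# 1. Compile raw requirements, annotating each package with its index.
uv pip compile pyproject.toml --emit-index-annotation -o "$RAW_REQ_FILE"

# 2. Split the raw list into wheel/source lists by index annotation
#    (splitting logic elided here; see the script under review).

# 3. Re-compile each list against its own index, emitting hashes.
uv pip compile "$WHEEL_FILE" --generate-hashes -o requirements.hashes.wheel.txt
uv pip compile "$SOURCE_FILE" --generate-hashes -o requirements.hashes.source.txt

# 4. Collect build-time dependencies for the source packages.
pybuild-deps compile --output-file=requirements-build.txt "$SOURCE_FILE"
```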
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@pyproject.toml`:
- Around line 168-173: Confirm which lockfile Konflux uses for the actual build
target (requirements.hashes.wheel.txt vs requirements.hashes.source.txt); if
Konflux uses the wheel lockfile, leave pyproject.toml's pins (numpy==2.3.5,
torch==2.9.0) as-is and add a short comment noting the intentional divergence
from requirements.hashes.source.txt (numpy==2.4.1, torch==2.9.1) due to the
RHOAI cpu-ubi9 wheel index, otherwise regenerate the wheel lockfile to match the
source lockfile or update pyproject.toml to the source pins so all three files
(pyproject.toml, requirements.hashes.wheel.txt, requirements.hashes.source.txt)
are consistent.
In `@requirements-build.txt`:
- Around line 29-44: The requirements file contains duplicate conflicting pins
for the package hatchling (hatchling==1.26.3 and hatchling==1.28.0); remove the
older duplicate (hatchling==1.26.3) and keep the intended pin (recommend
hatchling==1.28.0), and update or remove any adjacent comment blocks that
reference the removed pin so only a single hatchling entry remains.
In `@requirements.hashes.source.txt`:
- Around line 1579-1588: The locks and project metadata are out of sync: ensure
the pinned versions for numpy, torch, transformers and urllib3 are identical
across pyproject.toml (and any wheel metadata), overrides files, and the
generated lock/requirements files (e.g., requirements.hashes.source.txt); pick a
single canonical version for each package (align numpy to 2.4.1 or change the
lock to 2.3.5, torch to 2.9.1 or 2.9.0, transformers to 4.57.5 or 4.57.3,
urllib3 to 2.6.3 or 2.6.2) and update the corresponding declarations in
pyproject.toml, overrides, and the lock generation step so all three places
(pyproject/wheel, overrides, requirements.hashes.source.txt) list the exact same
version strings for numpy, torch, transformers and urllib3.
In `@scripts/konflux_requirements.sh`:
- Line 71: The script fails because RAW_REQ_FILE is used on the rm line but the
variable was defined with a typo as RAW_REQ_RILE; rename the definition of
RAW_REQ_RILE to RAW_REQ_FILE (the variable referenced in the rm command
alongside WHEEL_FILE and SOURCE_FILE) so the rm "$RAW_REQ_FILE" "$WHEEL_FILE"
"$SOURCE_FILE" call uses a defined variable and resolves the ShellCheck SC2153
error.
- Around line 7-12: Fix the typos in the requirement filename variables and
update all their usages: rename RAW_REQ_RILE to RAW_REQ_FILE, SOURCE_HASHE_FILE
to SOURCE_HASHES_FILE, and WHEEL_HASHE_FILE to WHEEL_HASHES_FILE; then search
the script for any occurrences of the old names and replace them with the new
variable names (e.g., in commands that read/write these files or concatenate
them) so the variable names are consistent and referenced correctly everywhere.
- Around line 58-60: After computing wheel_packages from WHEEL_FILE, validate
that it is non-empty before running the sed replacements: check the
wheel_packages variable (and/or that WHEEL_FILE contains any non-comment lines),
and if empty either exit with a non-zero status and a clear error message or
skip the sed updates to avoid writing "packages": "" into the YAML; update the
code around the wheel_packages assignment and the two sed lines to perform this
conditional check and fail or no-op when no packages are found.
🧹 Nitpick comments (1)
scripts/konflux_requirements.sh (1)
1-3: Add error handling and strict mode. The script lacks error handling, which could lead to partial execution leaving the system in an inconsistent state.
♻️ Proposed improvement
```diff
 #!/bin/bash
+set -euo pipefail
 # Script to split requirements by index source
```
Adding `set -euo pipefail` ensures:
- `-e`: Exit immediately on command failure
- `-u`: Exit on undefined variable usage (would have caught the typo)
- `-o pipefail`: Fail on pipe errors
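For a quick illustration of what each flag catches, consider three one-line scripts (each run separately, since the first failure aborts; `RAW_REQ_FILE` is assumed unset here):

```bash
set -e; false; echo "not printed"         # -e: aborts at the failing command
set -u; echo "$RAW_REQ_FILE"              # -u: aborts on the unset (misspelled) variable
set -o pipefail; false | cat; echo $?     # pipefail: prints 1 instead of cat's 0
```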
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (12)
- `.tekton/lightspeed-stack-pull-request.yaml`
- `.tekton/lightspeed-stack-push.yaml`
- `Makefile`
- `pyproject.toml`
- `requirements-build.txt`
- `requirements.aarch64.txt`
- `requirements.hashes.source.txt`
- `requirements.hashes.wheel.txt`
- `requirements.overrides.txt`
- `requirements.torch.txt`
- `scripts/konflux_requirements.sh`
- `scripts/remove_torch_deps.sh`
💤 Files with no reviewable changes (2)
- requirements.torch.txt
- scripts/remove_torch_deps.sh
🧰 Additional context used
📓 Path-based instructions (1)
pyproject.toml
📄 CodeRabbit inference engine (CLAUDE.md)
pyproject.toml: Check `pyproject.toml` for supported Python versions before development
Always check `pyproject.toml` for existing dependencies before adding new ones
Always verify current library versions in `pyproject.toml` rather than assuming versions
Use pylint with `source-roots = "src"` configuration
Files:
pyproject.toml
🧠 Learnings (7)
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Applies to pyproject.toml : Always verify current library versions in `pyproject.toml` rather than assuming versions
Applied to files:
pyproject.toml
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Use `uv sync --group dev --group llslibdev` to install development dependencies
Applied to files:
requirements.hashes.source.txt
📚 Learning: 2025-08-18T10:58:14.951Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:47-47
Timestamp: 2025-08-18T10:58:14.951Z
Learning: psycopg2-binary is required by some llama-stack providers in the lightspeed-stack project, so it cannot be replaced with psycopg v3 or moved to optional dependencies without breaking llama-stack functionality.
Applied to files:
requirements.hashes.source.txt
📚 Learning: 2025-08-18T10:57:39.266Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:59-59
Timestamp: 2025-08-18T10:57:39.266Z
Learning: In the lightspeed-stack project, transitive dependencies like faiss-cpu are intentionally pinned as top-level dependencies to maintain better control over the dependency graph and avoid version conflicts when bundling ML/LLM tooling packages.
Applied to files:
requirements.hashes.source.txt
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Applies to **/*.py : Use Llama Stack imports: `from llama_stack_client import AsyncLlamaStackClient`
Applied to files:
requirements.hashes.source.txt
📚 Learning: 2025-08-18T10:56:55.349Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:0-0
Timestamp: 2025-08-18T10:56:55.349Z
Learning: The lightspeed-stack project intentionally uses a "generic image" approach, bundling many dependencies directly in the base runtime image to work for everyone, rather than using lean base images with optional dependency groups.
Applied to files:
requirements.hashes.source.txt
📚 Learning: 2025-08-18T11:45:59.961Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:77-77
Timestamp: 2025-08-18T11:45:59.961Z
Learning: torch==2.7.1 is available on PyPI and is a valid version that can be used in dependency specifications.
Applied to files:
Makefile
🪛 GitHub Actions: Shell check
scripts/konflux_requirements.sh
[error] 71-71: ShellCheck SC2153: Possible misspelling: RAW_REQ_FILE may not be assigned. Did you mean RAW_REQ_RILE?
[error] 71-71: The shellcheck warning contributed to the shellcheck target failure. make shellcheck exited with code 1, pipeline exit code 2.
Makefile
[error] 80-80: make shellcheck failed. Command exited with code 1.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build-pr
- GitHub Check: E2E: server mode / ci
- GitHub Check: E2E: library mode / ci
🔇 Additional comments (7)
requirements.hashes.wheel.txt (1)
1-4: Confirmed: wheel lock properly pinned to RHOAI index with no PyPI fallback. The file uses `--index-url https://console.redhat.com/api/pypi/public-rhai/rhoai/3.2/cpu-ubi9/simple/` with no `extra-index-url` present. torch==2.9.0 and all 224 dependencies are hash-locked, preventing silent fallback to PyPI. CI will strictly use the configured RHOAI index.
requirements.overrides.txt (1)
1-10: No action needed; version differences between lockfiles are intentional by design. The overrides file correctly pins versions matching `requirements.hashes.wheel.txt` (RHOAI packages), which differ from `requirements.hashes.source.txt` (PyPI packages) because they target different package indices. The build script (`scripts/konflux_requirements.sh`) intentionally splits packages between PyPI (https://pypi.org/simple/) and RHOAI (console.redhat.com/api/pypi/.../simple/), then compiles each against its respective index using `--override requirements.overrides.txt` to ensure RHOAI-compatible versions. The divergence in versions is expected and reflects the reality that different indices publish different package versions.
Likely an incorrect or invalid review comment.
.tekton/lightspeed-stack-push.yaml (1)
44-50: LGTM! Hash-based requirements and expanded binary packages list. The transition from per-architecture requirements files to `requirements.hashes.wheel.txt` and `requirements.hashes.source.txt` aligns with the PR objective of using RHOAI index wheels. The expanded binary packages list is consistent with the pull-request pipeline configuration.
.tekton/lightspeed-stack-pull-request.yaml (1)
52-58: LGTM! Consistent with push pipeline configuration. The hash-based requirements files and expanded binary packages list match the push pipeline, ensuring consistency across PR and push builds. The inline comments (lines 37-41) provide helpful context about packages requiring binary wheels or source builds.
Makefile (2)
113-114: LGTM! Simplified target delegation. The `konflux-requirements` target now cleanly delegates to the new script, replacing the previous inline multi-step workflow.
10-10: PyTorch 2.9.0 is a valid version. The TORCH_VERSION update to 2.9.0 is confirmed to be available on PyPI.
scripts/konflux_requirements.sh (1)
66-68: LGTM! Hash generation for both indexes. The hash generation for wheel and source requirements uses the appropriate index URLs and generates cryptographic hashes for integrity verification.
```bash
wheel_packages=$(grep -v "^[#-]" "$WHEEL_FILE" | sed 's/==.*//' | tr '\n' ',' | sed 's/,$//')
sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-pull-request.yaml
sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-push.yaml
```
🧩 Analysis chain
🏁 Script executed:
```bash
# Verify the sed pattern matches the expected structure in the YAML files
rg -n '"packages":' .tekton/lightspeed-stack-pull-request.yaml .tekton/lightspeed-stack-push.yaml
```
Repository: lightspeed-core/lightspeed-stack
Length of output: 2907

🏁 Script executed:
```bash
# Examine the wheel_packages construction and WHEEL_FILE handling
head -80 scripts/konflux_requirements.sh | tail -30
```
Repository: lightspeed-core/lightspeed-stack
Length of output: 1869

🏁 Script executed:
```bash
# Check if WHEEL_FILE is defined and look at the script from the start
head -40 scripts/konflux_requirements.sh
```
Repository: lightspeed-core/lightspeed-stack
Length of output: 1689

🏁 Script executed:
```bash
# Check full script for validation/error handling around wheel_packages
rg -n "wheel_packages" scripts/konflux_requirements.sh -A 2 -B 2
```
Repository: lightspeed-core/lightspeed-stack
Length of output: 629
Add validation for empty wheel_packages result.
The sed pattern correctly matches the YAML structure and package names (which contain only alphanumerics and hyphens) won't cause escaping issues. However, if WHEEL_FILE contains only comments or is empty, wheel_packages becomes an empty string, resulting in "packages": "" in the YAML files. Add a check to prevent invalid replacements or fail explicitly if no packages are found.
🤖 Prompt for AI Agents
In `@scripts/konflux_requirements.sh` around lines 58 - 60, After computing
wheel_packages from WHEEL_FILE, validate that it is non-empty before running the
sed replacements: check the wheel_packages variable (and/or that WHEEL_FILE
contains any non-comment lines), and if empty either exit with a non-zero status
and a clear error message or skip the sed updates to avoid writing "packages":
"" into the YAML; update the code around the wheel_packages assignment and the
two sed lines to perform this conditional check and fail or no-op when no
packages are found.
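A minimal sketch of such a guard, reusing the variable names quoted above (illustrative; the exact error-handling style is a choice for the script author):

```bash
wheel_packages=$(grep -v "^[#-]" "$WHEEL_FILE" | sed 's/==.*//' | tr '\n' ',' | sed 's/,$//')

# Fail fast rather than writing "packages": "" into the Tekton YAMLs.
if [[ -z "$wheel_packages" ]]; then
    echo "ERROR: no wheel packages found in $WHEEL_FILE" >&2
    exit 1
fi

sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-pull-request.yaml
sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-push.yaml
```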
Force-pushed from 01ef831 to 6d1827c
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@scripts/konflux_requirements.sh`:
- Around line 29-33: The autogenerated header comment incorrectly names
split_requirements.sh; update the echo lines that write the header (the ones
appending to "$SOURCE_FILE" and "$WHEEL_FILE") to reference the actual script
name (konflux_requirements.sh) or, better, use the script basename (e.g.,
"$(basename "$0")") so the header is always correct; modify the two echo
statements that currently output "# This file was autogenerated by
split_requirements.sh" accordingly.
♻️ Duplicate comments (8)
pyproject.toml (1)
168-173: Verify version consistency between `pyproject.toml` and generated lockfiles. The numpy (2.3.5) and torch (2.9.0) pins look reasonable for Python 3.12. However, ensure the generated `requirements.hashes.wheel.txt` from the RHOAI index contains compatible versions, as the source index may provide different versions.
```bash
#!/bin/bash
# Verify numpy and torch versions in pyproject.toml vs lockfiles
echo "=== pyproject.toml pins ==="
grep -E "numpy==|torch==" pyproject.toml
echo ""
echo "=== requirements.hashes.wheel.txt ==="
grep -E "^numpy==|^torch==" requirements.hashes.wheel.txt 2>/dev/null || echo "File not found or no matches"
echo ""
echo "=== requirements.hashes.source.txt ==="
grep -E "^numpy==|^torch==" requirements.hashes.source.txt 2>/dev/null || echo "File not found or no matches"
```
requirements-build.txt (1)
29-44: Blocker: Duplicate `hatchling` pins will cause installation failures. The file contains both `hatchling==1.26.3` (line 29) and `hatchling==1.28.0` (line 33). Only one version should be pinned.
🐛 Proposed fix (keep the newer version)
```diff
-hatchling==1.26.3
-    # via
-    #   hatch-fancy-pypi-readme
-    #   openai
 hatchling==1.28.0
     # via
     #   a2a-sdk
     #   filelock
     #   hatch-fancy-pypi-readme
     #   hatch-vcs
     #   jsonschema
     #   mcp
     #   termcolor
     #   uv-dynamic-versioning
     #   uvicorn
```
scripts/konflux_requirements.sh (3)
7-12: Fix typos in variable names. Multiple variable names contain typos that will cause the script to fail:
- `RAW_REQ_RILE` → `RAW_REQ_FILE`
- `SOURCE_HASHE_FILE` → `SOURCE_HASHES_FILE`
- `WHEEL_HASHE_FILE` → `WHEEL_HASHES_FILE`
🐛 Proposed fix
```diff
-RAW_REQ_RILE="requirements.no_hashes.txt"
+RAW_REQ_FILE="requirements.no_hashes.txt"
 SOURCE_FILE="requirements.source.txt"
 WHEEL_FILE="requirements.wheel.txt"
-SOURCE_HASHE_FILE="requirements.hashes.source.txt"
-WHEEL_HASHE_FILE="requirements.hashes.wheel.txt"
+SOURCE_HASHES_FILE="requirements.hashes.source.txt"
+WHEEL_HASHES_FILE="requirements.hashes.wheel.txt"
 BUILD_FILE="requirements-build.txt"
```
Then update all references throughout the script (lines 15, 54, 66-68, 71, 74-77).
71-71: Undefined variable `RAW_REQ_FILE` causes script failure. This line references `RAW_REQ_FILE` but the variable was defined as `RAW_REQ_RILE` on line 7. This causes the ShellCheck SC2153 error shown in the pipeline failure.
56-60: Add validation for empty `wheel_packages` before sed replacement. If `WHEEL_FILE` contains only comments or is empty, `wheel_packages` becomes an empty string, resulting in `"packages": ""` in the YAML files.
🐛 Proposed fix
```diff
 # replace the list of binary packages in konflux pipeline configuration
 # only the package names, not the versions, delimited by commas
 wheel_packages=$(grep -v "^[#-]" "$WHEEL_FILE" | sed 's/==.*//' | tr '\n' ',' | sed 's/,$//')
+
+if [[ -z "$wheel_packages" ]]; then
+    echo "ERROR: No wheel packages found in $WHEEL_FILE" >&2
+    exit 1
+fi
+
 sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-pull-request.yaml
 sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-push.yaml
```
requirements.hashes.source.txt (3)
2-2: Ensure `requirements.source.txt` actually exists and is the canonical input. The generator command (Line 2) and multiple provenance lines (e.g., Line 8) reference `requirements.source.txt`. If that file isn't tracked/maintained (or is generated elsewhere), this lock becomes hard to reproduce/debug.
Also applies to: 8-8
1579-1588: Keep pins consistent across pyproject/overrides/wheel/source locks. Given multiple lock paths (wheel vs source), any drift in core pins (e.g., numpy/torch/transformers/urllib3) can yield non-reproducible images depending on which lock path is taken.
Also applies to: 3002-3047, 3142-3144
1659-1739: CUDA/NVIDIA deps in the "source" lock: confirm this is intentional (CPU vs GPU). This lock includes `torch==2.9.1` plus many `nvidia-*-cu12` packages and `triton`. If the intent is CPU-only builds with wheels from RHOAI, this looks like a mismatch and can re-introduce GPU artifacts from external sources depending on how the lock is consumed.
Also applies to: 3002-3034, 3088-3103
🧹 Nitpick comments (1)
scripts/konflux_requirements.sh (1)
1-5: Add `set -euo pipefail` for robust error handling. The script lacks defensive settings. Adding strict mode will catch errors early, especially useful given the file operations and external command calls.
♻️ Proposed improvement
```diff
 #!/bin/bash
+set -euo pipefail
 # Script to split requirements by index source
 # Packages from pypi.org go to requirements.source.txt
 # Packages from console.redhat.com go to requirements.wheel.txt
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (12)
- `.tekton/lightspeed-stack-pull-request.yaml`
- `.tekton/lightspeed-stack-push.yaml`
- `Makefile`
- `pyproject.toml`
- `requirements-build.txt`
- `requirements.aarch64.txt`
- `requirements.hashes.source.txt`
- `requirements.hashes.wheel.txt`
- `requirements.overrides.txt`
- `requirements.torch.txt`
- `scripts/konflux_requirements.sh`
- `scripts/remove_torch_deps.sh`
💤 Files with no reviewable changes (2)
- scripts/remove_torch_deps.sh
- requirements.torch.txt
🚧 Files skipped from review as they are similar to previous changes (3)
- requirements.hashes.wheel.txt
- requirements.overrides.txt
- Makefile
🧰 Additional context used
📓 Path-based instructions (1)
pyproject.toml
📄 CodeRabbit inference engine (CLAUDE.md)
pyproject.toml: Check `pyproject.toml` for supported Python versions before development
Always check `pyproject.toml` for existing dependencies before adding new ones
Always verify current library versions in `pyproject.toml` rather than assuming versions
Use pylint with `source-roots = "src"` configuration
Files:
pyproject.toml
🧠 Learnings (6)
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Applies to pyproject.toml : Always verify current library versions in `pyproject.toml` rather than assuming versions
Applied to files:
pyproject.toml
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Use `uv sync --group dev --group llslibdev` to install development dependencies
Applied to files:
requirements.hashes.source.txt
📚 Learning: 2025-08-18T10:58:14.951Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:47-47
Timestamp: 2025-08-18T10:58:14.951Z
Learning: psycopg2-binary is required by some llama-stack providers in the lightspeed-stack project, so it cannot be replaced with psycopg v3 or moved to optional dependencies without breaking llama-stack functionality.
Applied to files:
requirements.hashes.source.txt
📚 Learning: 2025-08-18T10:57:39.266Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:59-59
Timestamp: 2025-08-18T10:57:39.266Z
Learning: In the lightspeed-stack project, transitive dependencies like faiss-cpu are intentionally pinned as top-level dependencies to maintain better control over the dependency graph and avoid version conflicts when bundling ML/LLM tooling packages.
Applied to files:
requirements.hashes.source.txt
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Applies to **/*.py : Use Llama Stack imports: `from llama_stack_client import AsyncLlamaStackClient`
Applied to files:
requirements.hashes.source.txt
📚 Learning: 2025-08-18T10:56:55.349Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:0-0
Timestamp: 2025-08-18T10:56:55.349Z
Learning: The lightspeed-stack project intentionally uses a "generic image" approach, bundling many dependencies directly in the base runtime image to work for everyone, rather than using lean base images with optional dependency groups.
Applied to files:
requirements.hashes.source.txt
🪛 GitHub Actions: Shell check
scripts/konflux_requirements.sh
[warning] 71-71: ShellCheck SC2153: Possible misspelling: RAW_REQ_FILE may not be assigned. Did you mean RAW_REQ_RILE? (Possible misspelling: RAW_REQ_FILE)
[error] 71-71: ShellCheck warning treated as error by make: Possible misspelling detected in variable name in line 71.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build-pr
- GitHub Check: E2E: library mode / ci
- GitHub Check: E2E: server mode / ci
🔇 Additional comments (5)
.tekton/lightspeed-stack-push.yaml (2)
44-46: LGTM! Requirements files updated to hash-based approach. The switch from per-architecture requirements (`requirements.x86_64.txt`, `requirements.aarch64.txt`) to hash-based requirements (`requirements.hashes.wheel.txt`, `requirements.hashes.source.txt`) aligns with the PR objective to use wheels from the RHOAI index exclusively.
49-54: Binary packages list is properly synchronized between push and pull-request YAML files. Verification confirms both `.tekton/lightspeed-stack-push.yaml` and `.tekton/lightspeed-stack-pull-request.yaml` contain identical package lists (118 packages). The `scripts/konflux_requirements.sh` script automatically maintains this synchronization by extracting wheel packages from the compilation output and updating both YAML files via `sed`.
.tekton/lightspeed-stack-pull-request.yaml (1)
52-58: LGTM! Pull-request pipeline updated consistently with push pipeline. The requirements files and binary packages list changes match `lightspeed-stack-push.yaml`, ensuring consistent build behavior between PR and push pipelines.
requirements-build.txt (1)
1-6: LGTM! Build command updated to use new source file. The compile command reference updated from the previous input to `requirements.source.txt`, aligning with the new requirements generation workflow.
requirements.hashes.source.txt (1)
1-3: The PyPI index is intentionally embedded for source packages, but verify if this design complies with ProdSec policy.
`--index-url https://pypi.org/simple/` (Line 3) is hardcoded in this file and will be honored by the Tekton prefetch system when installing dependencies. The generation script (`konflux_requirements.sh`) deliberately splits requirements: packages from the RHOAI console go to requirements.hashes.wheel.txt (with the RHOAI index), while packages from PyPI go to requirements.hashes.source.txt (with the PyPI index). This suggests a design intent to fetch source distributions from PyPI and wheels from RHOAI.
However, confirm with ProdSec whether this split (any external index access for sdists, even if wheels are RHOAI-only) is acceptable under your compliance requirements. Additionally, the header comment references `requirements.source.txt`, which does not exist in the repo; it is deleted after hash generation. Consider updating the comment to reflect the actual generation context or clarifying the file's role in the build pipeline.
| echo "# This file was autogenerated by split_requirements.sh" >> "$SOURCE_FILE" | ||
| echo "--index-url https://pypi.org/simple/" >> "$SOURCE_FILE" | ||
| echo "# Packages from console.redhat.com" > "$WHEEL_FILE" | ||
| echo "# This file was autogenerated by split_requirements.sh" >> "$WHEEL_FILE" | ||
| echo "--index-url https://console.redhat.com/api/pypi/public-rhai/rhoai/3.2/cpu-ubi9/simple/" >> "$WHEEL_FILE" |
Inconsistent script name in autogenerated comment.
The header comments reference split_requirements.sh but the script is named konflux_requirements.sh.
🐛 Proposed fix
```diff
 # Initialize output files
 echo "# Packages from pypi.org" > "$SOURCE_FILE"
-echo "# This file was autogenerated by split_requirements.sh" >> "$SOURCE_FILE"
+echo "# This file was autogenerated by konflux_requirements.sh" >> "$SOURCE_FILE"
 echo "--index-url https://pypi.org/simple/" >> "$SOURCE_FILE"
 echo "# Packages from console.redhat.com" > "$WHEEL_FILE"
-echo "# This file was autogenerated by split_requirements.sh" >> "$WHEEL_FILE"
+echo "# This file was autogenerated by konflux_requirements.sh" >> "$WHEEL_FILE"
 echo "--index-url https://console.redhat.com/api/pypi/public-rhai/rhoai/3.2/cpu-ubi9/simple/" >> "$WHEEL_FILE"
```
🤖 Prompt for AI Agents
In `@scripts/konflux_requirements.sh` around lines 29 - 33, The autogenerated
header comment incorrectly names split_requirements.sh; update the echo lines
that write the header (the ones appending to "$SOURCE_FILE" and "$WHEEL_FILE")
to reference the actual script name (konflux_requirements.sh) or, better, use
the script basename (e.g., "$(basename "$0")") so the header is always correct;
modify the two echo statements that currently output "# This file was
autogenerated by split_requirements.sh" accordingly.
Force-pushed from c369325 to a507bae
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@Makefile`:
- Around line 113-114: Update the konflux_requirements.sh script to fail fast
and handle errors: add "set -e" and "set -o pipefail" at the top of
konflux_requirements.sh, check that critical input variables/files (e.g., the
RAW_REQ_FILE referenced around the RAW_REQ_FILE variable) exist before using
them and exit with a helpful message if missing, and validate the success of
critical commands (in particular any sed -i calls that modify pipeline/config
files) by checking their exit status or using a temporary file + mv pattern to
ensure safe edits; ensure all early returns/logging use clear messages so
failures surface to the Makefile target that invokes konflux_requirements.sh.
♻️ Duplicate comments (1)
requirements-build.txt (1)
29-43: Blocker: duplicate `hatchling` with conflicting pins still present. Despite a prior review noting this was addressed, the file still contains both `hatchling==1.26.3` (line 29) and `hatchling==1.28.0` (line 33). This will cause pip install failures or nondeterministic behavior. Since this file is auto-generated by `pip-compile`, the root cause is likely in the input files or the generation script. Re-run `./scripts/konflux_requirements.sh` and ensure only one version is pinned.
Proposed fix (remove the older duplicate)
```diff
-hatchling==1.26.3
-    # via
-    #   hatch-fancy-pypi-readme
-    #   openai
 hatchling==1.28.0
     # via
     #   a2a-sdk
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (13)
- `.tekton/lightspeed-stack-pull-request.yaml`
- `.tekton/lightspeed-stack-push.yaml`
- `Makefile`
- `pyproject.toml`
- `requirements-build.txt`
- `requirements.aarch64.txt`
- `requirements.hashes.source.txt`
- `requirements.hashes.wheel.txt`
- `requirements.overrides.txt`
- `requirements.torch.txt`
- `requirements.x86_64.txt`
- `scripts/konflux_requirements.sh`
- `scripts/remove_torch_deps.sh`
💤 Files with no reviewable changes (2)
- requirements.torch.txt
- scripts/remove_torch_deps.sh
🚧 Files skipped from review as they are similar to previous changes (5)
- scripts/konflux_requirements.sh
- requirements.hashes.source.txt
- pyproject.toml
- requirements.hashes.wheel.txt
- requirements.overrides.txt
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-18T11:45:59.961Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:77-77
Timestamp: 2025-08-18T11:45:59.961Z
Learning: torch==2.7.1 is available on PyPI and is a valid version that can be used in dependency specifications.
Applied to files:
Makefile
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: build-pr
- GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
- GitHub Check: E2E: server mode / ci
- GitHub Check: E2E: library mode / ci
🔇 Additional comments (5)
.tekton/lightspeed-stack-push.yaml (1)
44-50: Requirements files and binary packages configuration looks correct. The switch to `requirements.hashes.wheel.txt` and `requirements.hashes.source.txt` aligns with the PR objective to use hash-based requirements from the RHOAI index. The extensive binary packages list ensures these packages are fetched as wheels rather than built from source during hermetic builds.
requirements-build.txt (1)
1-6: Auto-generated file structure looks correct. The header correctly indicates this is generated by `pip-compile` from `requirements.source.txt`, aligning with the new build workflow.
.tekton/lightspeed-stack-pull-request.yaml (2)
52-58: Requirements configuration is consistent with push.yaml. The `requirements_files` and `binary.packages` match the push pipeline, ensuring consistent builds between PR and main branch pushes.
37-41: Helpful documentation of package build constraints. The comments explaining why packages are in the binary list (no source available, hermeto prefetch problems, build acceleration, cmake/cargo requirements) provide valuable context for maintainers.
Makefile (1)
10-10: PyTorch 2.9.0 is available on PyPI and is a valid version for dependency specification.
Force-pushed from a507bae to f754b5a
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@scripts/konflux_requirements.sh`:
- Line 1: Add fail-fast flags to the script by enabling "set -e" and "set -o
pipefail" immediately after the shebang in scripts/konflux_requirements.sh so
any failing command (e.g., uv pip compile, sed -i) aborts the script and
propagates pipeline errors; place these two options at the top of the file to
ensure robust error handling for the entire script.
♻️ Duplicate comments (3)
scripts/konflux_requirements.sh (2)
32-34: Incorrect script name in autogenerated comments. The header comments still reference `split_requirements.sh` but the script is named `konflux_requirements.sh`.
🐛 Proposed fix
```diff
 echo "# Packages from pypi.org" > "$SOURCE_FILE"
-echo "# This file was autogenerated by split_requirements.sh" >> "$SOURCE_FILE"
+echo "# This file was autogenerated by konflux_requirements.sh" >> "$SOURCE_FILE"
 echo "# Packages from console.redhat.com" > "$WHEEL_FILE"
-echo "# This file was autogenerated by split_requirements.sh" >> "$WHEEL_FILE"
+echo "# This file was autogenerated by konflux_requirements.sh" >> "$WHEEL_FILE"
```
58-64: Add validation for empty `wheel_packages` before modifying pipeline configs. If `WHEEL_FILE` contains only comments or is empty, `wheel_packages` becomes an empty string (or just the `EXTRA_WHEELS`), resulting in a potentially invalid `"packages": "uv,pip"` in the YAML files. Add a check to fail explicitly if no packages are found.
🐛 Proposed fix
```diff
 # replace the list of binary packages in konflux pipeline configuration
 # only the package names, not the versions, delimited by commas
 wheel_packages=$(grep -v "^[#-]" "$WHEEL_FILE" | sed 's/==.*//' | tr '\n' ',' | sed 's/,$//')
+if [[ -z "$wheel_packages" ]]; then
+    echo "ERROR: No wheel packages found in $WHEEL_FILE" >&2
+    exit 1
+fi
 # append extra wheels to the list
 wheel_packages="$wheel_packages,$EXTRA_WHEELS"
 sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-pull-request.yaml
 sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-push.yaml
```
requirements-build.txt (1)
29-43: Blocker: duplicate `hatchling` with conflicting pins will break installs. The file contains both `hatchling==1.26.3` (lines 29-32) and `hatchling==1.28.0` (lines 33-43). This was flagged in a previous review and marked as addressed, but the duplicate is still present. Having two different versions of the same package will cause pip resolution failures.
🐛 Proposed fix (keep the newer version)
```diff
-hatchling==1.26.3
-    # via
-    #   hatch-fancy-pypi-readme
-    #   openai
 hatchling==1.28.0
     # via
     #   a2a-sdk
```
Since this file is autogenerated, you may need to investigate why `pybuild-deps compile` is producing duplicate pins, or manually deduplicate after generation.
🧹 Nitpick comments (1)
scripts/konflux_requirements.sh (1)
70-72: Consider adding `--python-platform` to source hash compilation for consistency. Line 70 (wheel) doesn't specify `--python-version` while line 71 (source) does. The initial compilation at lines 20-21 uses both `--python-platform x86_64-unknown-linux-gnu` and `--python-version 3.12`. For reproducibility, consider aligning the hash compilation commands with the initial compile flags.
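For illustration, an aligned pair of invocations could look like this (a sketch: variable names follow the script as quoted in this review, and the flag values are taken from the review's description of lines 20-21):

```bash
uv pip compile "$WHEEL_FILE" --generate-hashes \
    --python-platform x86_64-unknown-linux-gnu --python-version 3.12 \
    -o "$WHEEL_HASHES_FILE"

uv pip compile "$SOURCE_FILE" --generate-hashes \
    --python-platform x86_64-unknown-linux-gnu --python-version 3.12 \
    -o "$SOURCE_HASHES_FILE"
```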
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
uv.lockis excluded by!**/*.lock
📒 Files selected for processing (14)
- `.tekton/lightspeed-stack-pull-request.yaml`
- `.tekton/lightspeed-stack-push.yaml`
- `Containerfile`
- `Makefile`
- `pyproject.toml`
- `requirements-build.txt`
- `requirements.aarch64.txt`
- `requirements.hashes.source.txt`
- `requirements.hashes.wheel.txt`
- `requirements.overrides.txt`
- `requirements.torch.txt`
- `requirements.x86_64.txt`
- `scripts/konflux_requirements.sh`
- `scripts/remove_torch_deps.sh`
💤 Files with no reviewable changes (2)
- requirements.torch.txt
- scripts/remove_torch_deps.sh
✅ Files skipped from review due to trivial changes (1)
- requirements.hashes.wheel.txt
🚧 Files skipped from review as they are similar to previous changes (4)
- .tekton/lightspeed-stack-pull-request.yaml
- requirements.overrides.txt
- requirements.hashes.source.txt
- .tekton/lightspeed-stack-push.yaml
🧰 Additional context used
📓 Path-based instructions (1)
pyproject.toml
📄 CodeRabbit inference engine (CLAUDE.md)
pyproject.toml: Check `pyproject.toml` for supported Python versions before development
Always check `pyproject.toml` for existing dependencies before adding new ones
Always verify current library versions in `pyproject.toml` rather than assuming versions
Use pylint with `source-roots = "src"` configuration
Files:
pyproject.toml
🧠 Learnings (3)
📚 Learning: 2025-08-18T11:45:59.961Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:77-77
Timestamp: 2025-08-18T11:45:59.961Z
Learning: torch==2.7.1 is available on PyPI and is a valid version that can be used in dependency specifications.
Applied to files:
Makefile
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Applies to pyproject.toml : Always verify current library versions in `pyproject.toml` rather than assuming versions
Applied to files:
pyproject.toml
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Use `uv sync --group dev --group llslibdev` to install development dependencies
Applied to files:
Containerfile
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build-pr
- GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
- GitHub Check: E2E: library mode / ci
🔇 Additional comments (5)
Containerfile (1)
37-37: LGTM! The switch to hashed requirements files aligns with hermetic build goals. Using `--no-deps` with pre-compiled hashed requirements ensures reproducible builds. The requirements files are generated by `konflux_requirements.sh`, which resolves all transitive dependencies.
Makefile (2)
10-10: LGTM! `TORCH_VERSION` is consistent with the `torch==2.9.0` pin in `pyproject.toml`.
113-114: LGTM! Delegating to the script centralizes the requirements generation logic. The target correctly invokes the new script. Ensure the script has appropriate error handling (see review comments on `scripts/konflux_requirements.sh`).
requirements-build.txt (1)
2-5: Build command reference updated correctly. The header now references `requirements.source.txt`, which aligns with the new workflow in `konflux_requirements.sh`.
pyproject.toml (1)
168-173: Version bumps are valid and properly aligned across the project. The numpy (2.3.5) and torch (2.9.0) versions are both available on PyPI and compatible with the project's Python 3.12+ requirement. Alignment with the Makefile's `TORCH_VERSION` and `requirements.hashes.wheel.txt` is confirmed.
```
@@ -0,0 +1,81 @@
#!/bin/bash
```
🛠️ Refactor suggestion | 🟠 Major
Add set -e and set -o pipefail for robust error handling.
The script lacks fail-fast behavior. If any command fails silently (e.g., uv pip compile or sed -i), downstream steps may produce invalid artifacts.
♻️ Proposed fix
```diff
 #!/bin/bash
+set -euo pipefail
 # Script to split requirements by index source
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
#!/bin/bash
set -euo pipefail
# Script to split requirements by index source
```
🤖 Prompt for AI Agents
In `@scripts/konflux_requirements.sh` at line 1, Add fail-fast flags to the script
by enabling "set -e" and "set -o pipefail" immediately after the shebang in
scripts/konflux_requirements.sh so any failing command (e.g., uv pip compile,
sed -i) aborts the script and propagates pipeline errors; place these two
options at the top of the file to ensure robust error handling for the entire
script.
Force-pushed from f754b5a to 33d3096
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@scripts/konflux_requirements.sh`:
- Around line 70-71: The current uv pip compile invocations (the lines invoking
"uv pip compile \"$WHEEL_FILE\" -o \"$WHEEL_HASH_FILE\" ..." and "uv pip compile
\"$SOURCE_FILE\" -o \"$SOURCE_HASH_FILE\" ...") use an invalid comma-separated
--python-platform value; change them to either run the compile command once per
platform (call uv pip compile twice per input, once with --python-platform
x86_64-unknown-linux-gnu and once with --python-platform
aarch64-unknown-linux-gnu, writing/appending separate hash outputs) or switch to
a single run using --universal to emit platform markers; update the invocations
for both WHEEL_FILE and SOURCE_FILE accordingly and ensure the resulting hashes
are saved to WHEEL_HASH_FILE and SOURCE_HASH_FILE as intended.
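A sketch of the run-once-per-platform option (illustrative; output file naming is hypothetical, and `--universal` with environment markers is the alternative single-run approach):

```bash
for platform in x86_64-unknown-linux-gnu aarch64-unknown-linux-gnu; do
    # One compile per platform; comma-separated --python-platform values are invalid.
    uv pip compile "$WHEEL_FILE" --generate-hashes \
        --python-platform "$platform" \
        -o "requirements.hashes.wheel.${platform%%-*}.txt"
done
```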
♻️ Duplicate comments (4)
scripts/konflux_requirements.sh (3)
1-6: Add `set -euo pipefail` for robust error handling. The script lacks fail-fast behavior. If any command fails (e.g., `uv pip compile`, `sed -i`), subsequent steps may produce invalid artifacts or silently corrupt files.
♻️ Proposed fix
```diff
 #!/bin/bash
+set -euo pipefail
 # Script to split requirements by index source
```
58-64: Add validation for empty `wheel_packages` before modifying YAML files. If `WHEEL_FILE` contains only comments or is empty, `wheel_packages` becomes an empty string, resulting in `"packages": ""` in the YAML files, which could break the build.
♻️ Proposed fix
```diff
 # replace the list of binary packages in konflux pipeline configuration
 # only the package names, not the versions, delimited by commas
 wheel_packages=$(grep -v "^[#-]" "$WHEEL_FILE" | sed 's/==.*//' | tr '\n' ',' | sed 's/,$//')
+
+if [[ -z "$wheel_packages" ]]; then
+    echo "ERROR: No wheel packages found in $WHEEL_FILE" >&2
+    exit 1
+fi
+
 # append extra wheels to the list
 wheel_packages="$wheel_packages,$EXTRA_WHEELS"
 sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-pull-request.yaml
 sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-push.yaml
```
31-35: Inconsistent script name in autogenerated comments. The header comments reference `split_requirements.sh` but the script is named `konflux_requirements.sh`.
🐛 Proposed fix
```diff
 # Initialize output files
 echo "# Packages from pypi.org" > "$SOURCE_FILE"
-echo "# This file was autogenerated by split_requirements.sh" >> "$SOURCE_FILE"
+echo "# This file was autogenerated by konflux_requirements.sh" >> "$SOURCE_FILE"
 echo "# Packages from console.redhat.com" > "$WHEEL_FILE"
-echo "# This file was autogenerated by split_requirements.sh" >> "$WHEEL_FILE"
+echo "# This file was autogenerated by konflux_requirements.sh" >> "$WHEEL_FILE"
 echo "--index-url https://console.redhat.com/api/pypi/public-rhai/rhoai/3.2/cpu-ubi9/simple/" >> "$WHEEL_FILE"
```
requirements-build.txt (1)
29-43: Duplicate `hatchling` pins remain; this will break installs. The file still contains both `hatchling==1.26.3` (lines 29-32) and `hatchling==1.28.0` (lines 33-43). Having duplicate conflicting pins for the same package will cause installation failures. This was flagged in a previous review and marked as addressed, but the duplicates persist in this version.
🐛 Proposed fix: Remove the older pin
```diff
-hatchling==1.26.3
-    # via
-    #   hatch-fancy-pypi-readme
-    #   openai
 hatchling==1.28.0
     # via
     #   a2a-sdk
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (14)
- `.tekton/lightspeed-stack-pull-request.yaml`
- `.tekton/lightspeed-stack-push.yaml`
- `Containerfile`
- `Makefile`
- `pyproject.toml`
- `requirements-build.txt`
- `requirements.aarch64.txt`
- `requirements.hashes.source.txt`
- `requirements.hashes.wheel.txt`
- `requirements.overrides.txt`
- `requirements.torch.txt`
- `requirements.x86_64.txt`
- `scripts/konflux_requirements.sh`
- `scripts/remove_torch_deps.sh`
💤 Files with no reviewable changes (2)
- requirements.torch.txt
- scripts/remove_torch_deps.sh
✅ Files skipped from review due to trivial changes (1)
- requirements.hashes.wheel.txt
🚧 Files skipped from review as they are similar to previous changes (4)
- .tekton/lightspeed-stack-pull-request.yaml
- pyproject.toml
- Makefile
- Containerfile
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: E2E: library mode / ci
- GitHub Check: build-pr
- GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
🔇 Additional comments (7)
requirements.hashes.source.txt (1)
1-297: Autogenerated hash file looks well-formed. This is a generated artifact from `uv pip compile` with the expected structure: an index URL header followed by packages with sha256 hashes. The `--no-deps` and `--no-annotate` flags keep it clean.
Since this is referenced alongside `requirements.hashes.wheel.txt` in the Tekton config, verify that the split between source (PyPI) and wheel (RHOAI) packages aligns with your ProdSec requirements for which packages may come from external sources.
.tekton/lightspeed-stack-push.yaml (2)
44-46: Requirements files updated to use hashed artifacts. The switch from architecture-specific files (`requirements.x86_64.txt`, `requirements.aarch64.txt`) to the new hashed files (`requirements.hashes.wheel.txt`, `requirements.hashes.source.txt`) aligns with the reproducible build workflow.
49-54: Binary packages list is properly synchronized with the generator script output. The verification confirms that both `.tekton/lightspeed-stack-push.yaml` and `.tekton/lightspeed-stack-pull-request.yaml` contain identical packages lists (120 packages each), including `uv` and `pip` appended as `EXTRA_WHEELS` from `scripts/konflux_requirements.sh`. The current state matches the generator script's expected output.
scripts/konflux_requirements.sh (2)
39-56: Package parsing logic handles the expected format correctly. The while loop correctly parses `uv pip compile` output with `--emit-index-annotation`, extracting packages and their source index annotations. The regex patterns are appropriate for the expected format.
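For readers unfamiliar with that output format, a minimal sketch of such a loop (hypothetical, not the reviewed script's exact code; it assumes `--emit-index-annotation` emits a `# from <index-url>` comment after each pinned package):

```bash
current_pkg=""
while IFS= read -r line; do
    if [[ "$line" =~ ^([A-Za-z0-9._-]+==[^[:space:]]+) ]]; then
        current_pkg="${BASH_REMATCH[1]}"            # remember the pinned package
    elif [[ "$line" =~ ^#\ from\ (.+)$ && -n "$current_pkg" ]]; then
        case "${BASH_REMATCH[1]}" in                # route by index annotation
            *console.redhat.com*) echo "$current_pkg" >> "$WHEEL_FILE" ;;
            *)                    echo "$current_pkg" >> "$SOURCE_FILE" ;;
        esac
        current_pkg=""
    fi
done < "$RAW_REQ_FILE"
```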
74-81: Cleanup and summary output looks good. Intermediate files are properly cleaned up, and the summary messages provide useful counts and a reminder to commit the generated files.
requirements-build.txt (1)
1-6: The script correctly generates the build file before cleanup. Line 72 executes `pybuild-deps compile --output-file="$BUILD_FILE" "$SOURCE_FILE"` to generate requirements-build.txt, and only after this completes on line 75 are the intermediate files (including `SOURCE_FILE`) removed. The order of operations is correct.
Likely an incorrect or invalid review comment.
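For clarity, the ordering the comment describes, as a minimal sketch (line numbers refer to the reviewed script; variable names as defined there):

```bash
# Generate build-time dependencies first (script line 72)...
pybuild-deps compile --output-file="$BUILD_FILE" "$SOURCE_FILE"

# ...then remove the intermediate inputs (script line 75).
rm "$RAW_REQ_FILE" "$WHEEL_FILE" "$SOURCE_FILE"
```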
requirements.overrides.txt (1)
1-11: Verify the RHOAI index version is correct before deployment. The comment references the RHOAI 3.2 wheels index, but Red Hat's official documentation only shows RHOAI 3.0 as available (https://console.redhat.com/api/pypi/public-rhai/rhoai/3.0/cpu-ubi9/simple/). Confirm whether:
- The 3.2 index is a future release or internally available
- The URL should reference 3.0 instead
- The pinned package versions are compatible with the correct index version
Force-pushed from 33d3096 to c6d98b8
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@Containerfile`:
- Around line 20-21: Remove the duplicate package name in the Dockerfile RUN
instruction: in the line that installs build dependencies (the RUN dnf install
... gcc gcc cmake cargo), remove the extra repeated "gcc" so the package list
contains each package only once (e.g., "gcc cmake cargo"), keeping the rest of
the options (--nodocs, --setopt=keepcache=0, --setopt=tsflags=nodocs) unchanged.
In `@requirements-build.txt`:
- Around line 2-5: requirements-build.txt currently lists duplicate/conflicting
pins for the hatchling package (1.26.3 and 1.28.0); fix it by consolidating
hatchling to a single canonical version in the source dependency file and
regenerating the lockfile: update requirements.source.txt to pin the desired
hatchling version (e.g., 1.28.0) or remove the conflicting explicit pin coming
from an indirect entry, then re-run the compiler (pybuild-deps compile
--output-file=requirements-build.txt requirements.source.txt) to produce a
consistent requirements-build.txt with only one hatchling entry.
♻️ Duplicate comments (1)
requirements-build.txt (1)
29-43: Blocker: Duplicate `hatchling` pins still present despite previous fix claim. The file contains both `hatchling==1.26.3` (lines 29-32) and `hatchling==1.28.0` (lines 33-43). This will cause installation failures as pip cannot resolve conflicting version constraints. The previous review marked this as "addressed in commit 6d1827c", but the duplicates remain in the current code.
Proposed fix: Remove the older pin
```diff
-hatchling==1.26.3
-    # via
-    #   hatch-fancy-pypi-readme
-    #   openai
 hatchling==1.28.0
     # via
     #   a2a-sdk
     #   filelock
     #   hatch-fancy-pypi-readme
     #   hatch-vcs
     #   jsonschema
     #   mcp
+    #   openai
     #   termcolor
     #   uv-dynamic-versioning
     #   uvicorn
```
🧹 Nitpick comments (2)
.tekton/lightspeed-stack-push.yaml (1)
49-54: Consider extracting the packages list to a shared file for maintainability. The binary packages list is duplicated across `lightspeed-stack-push.yaml` and `lightspeed-stack-pull-request.yaml`. Currently both files maintain identical lists, but keeping the same large inline package list in multiple places creates a maintenance burden and drift risk for future updates. Extracting this to a shared configuration file (or generating it via `konflux_requirements.sh`) would centralize the definition and ensure consistency across pipeline definitions.
rpms.lock.yaml (1)
147-160: Rust toolchain additions are consistent across architectures, but consider updating to a newer Rust version. Rust 1.88.0 and LLVM 20.1.8 are added for both aarch64 and x86_64 with matching versions. However, as of January 2026, Rust 1.92.0 is the latest stable release (Rust 1.88.0 was released in early 2025). Consider updating to Rust 1.92.0 or the latest available stable version to benefit from recent fixes, performance improvements, and security patches.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (16)
- `.tekton/lightspeed-stack-pull-request.yaml`
- `.tekton/lightspeed-stack-push.yaml`
- `Containerfile`
- `Makefile`
- `pyproject.toml`
- `requirements-build.txt`
- `requirements.aarch64.txt`
- `requirements.hashes.source.txt`
- `requirements.hashes.wheel.txt`
- `requirements.overrides.txt`
- `requirements.torch.txt`
- `requirements.x86_64.txt`
- `rpms.in.yaml`
- `rpms.lock.yaml`
- `scripts/konflux_requirements.sh`
- `scripts/remove_torch_deps.sh`
💤 Files with no reviewable changes (2)
- scripts/remove_torch_deps.sh
- requirements.torch.txt
✅ Files skipped from review due to trivial changes (1)
- rpms.in.yaml
🚧 Files skipped from review as they are similar to previous changes (5)
- scripts/konflux_requirements.sh
- requirements.hashes.source.txt
- requirements.hashes.wheel.txt
- requirements.overrides.txt
- Makefile
🧰 Additional context used
📓 Path-based instructions (1)
pyproject.toml
📄 CodeRabbit inference engine (CLAUDE.md)
pyproject.toml: Check `pyproject.toml` for supported Python versions before development
Always check `pyproject.toml` for existing dependencies before adding new ones
Always verify current library versions in `pyproject.toml` rather than assuming versions
Use pylint with `source-roots = "src"` configuration
Files:
pyproject.toml
🧠 Learnings (2)
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Use `uv sync --group dev --group llslibdev` to install development dependencies
Applied to files:
Containerfile
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Applies to pyproject.toml : Always verify current library versions in `pyproject.toml` rather than assuming versions
Applied to files:
pyproject.toml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build-pr
- GitHub Check: E2E: library mode / ci
- GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
🔇 Additional comments (9)
pyproject.toml (1)
168-172: LGTM! Version bumps align with RHOAI index requirements. The `numpy==2.3.5` and `torch==2.9.0` pins are compatible with Python 3.12 (as required by `requires-python = ">=3.12,<3.14"`). These updates support the PR objective of fetching wheels from the RHOAI index.
.tekton/lightspeed-stack-push.yaml (1)
44-46: LGTM! Requirements files updated to hash-based approach. The transition from architecture-specific files (`requirements.x86_64.txt`, `requirements.aarch64.txt`) to hash-based files (`requirements.hashes.wheel.txt`, `requirements.hashes.source.txt`) supports reproducible builds and RHOAI index compliance.
.tekton/lightspeed-stack-pull-request.yaml (2)
37-41: Good documentation of package build constraints. The inline comments explaining why certain packages need binary wheels (no source, build tool requirements, build acceleration) provide valuable context for future maintainers. This aligns well with the PR objective of configuring Konflux for RHOAI index compliance.
52-58: LGTM! Consistent with push.yaml configuration. The requirements files and binary packages configuration matches the push pipeline, ensuring consistent behavior between PR validation and production builds.
Containerfile (2)
8-12: LGTM! Environment configuration supports hermetic builds. Setting `MATURIN_NO_INSTALL_RUST=1` ensures maturin uses the system-installed cargo (added in line 21) rather than attempting to download Rust tooling, which aligns with the hermetic build requirements.
36-43: LGTM! Hash-based requirements installation aligned with Konflux configuration. The pip install command now uses `--no-deps` with hash-verified requirement files (`requirements.hashes.wheel.txt`, `requirements.hashes.source.txt`), ensuring reproducible and secure builds that align with the Tekton pipeline prefetch configuration.
rpms.lock.yaml (3)
rpms.lock.yaml (3)
7-55: Build toolchain additions look appropriate.
The addition of cargo, cmake family packages, and emacs-filesystem provides the necessary build toolchain for compiling packages from source, aligning with the PR objective to use RHOAI index wheels and build external packages from source.
1-4: Lockfile structure and package sources look correct.
All packages are sourced from official Red Hat UBI repositories (`cdn-ubi.redhat.com`), checksums are properly formatted as SHA256, and package versions are consistent across aarch64 and x86_64 architectures. This aligns with the ProdSec requirement to use trusted package sources.
403-409: The `libxcrypt-compat` x86_64-only inclusion is intentional and safe.
The `libxcrypt-compat` package (4.4.18-3.el9) is present only for x86_64, while aarch64 includes only `libxcrypt-devel`. This asymmetry is intentional and aligned with Red Hat's UBI9 repository: `libxcrypt-compat` provides legacy binary compatibility for 32-bit applications on x86_64, which is not needed on aarch64. Both architectures include the matching `libxcrypt-devel` package. No action required.
Force-pushed c6d98b8 to 781b24a
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@requirements-build.txt`:
- Around line 1-6: The requirements-build.txt contains duplicate hatchling
entries (1.26.3 and 1.28.0) and an incorrect header referencing a non-existent
requirements.source.txt; fix by either (A) creating the missing
requirements.source.txt and re-running the generator command (pybuild-deps
compile --output-file=requirements-build.txt requirements.source.txt) so the
file is regenerated cleanly, or (B) manually remove the older duplicate
hatchling pin (keep the desired version, e.g., 1.28.0), ensure only one
hatchling line remains, and update the header comment to accurately reflect how
the file was produced; also run pip-compile or your project’s dependency tooling
to validate there are no further duplicate pins.
♻️ Duplicate comments (6)
Makefile (1)
113-114: Script delegation is cleaner; verify error handling in the script.
Delegating to `./scripts/konflux_requirements.sh` centralizes the generation logic. Ensure the script includes `set -e` and `set -o pipefail` for robust error handling, validates input files exist before processing, and checks critical `sed -i` operations succeed.

#!/bin/bash
# Verify script has proper error handling
echo "=== Checking for set -e and set -o pipefail ==="
head -20 scripts/konflux_requirements.sh | grep -E '^\s*set\s+-[euo]|set\s+-o\s+pipefail'
echo ""
echo "=== First 25 lines of script ==="
head -25 scripts/konflux_requirements.sh
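With the delegation in place, regenerating every Konflux requirements artifact becomes a single command (a usage note, not a new target; the target name comes from the Makefile change under review):

# Runs ./scripts/konflux_requirements.sh under the hood.
make konflux-requirements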
requirements-build.txt (1)
29-43: Critical: Duplicate `hatchling` pins still present.
The file contains both `hatchling==1.26.3` (line 29) and `hatchling==1.28.0` (line 33). This will cause pip installation failures due to conflicting version requirements.
Remove the older version and keep only `hatchling==1.28.0`.
Proposed fix

-hatchling==1.26.3
-    # via
-    #   hatch-fancy-pypi-readme
-    #   openai
hatchling==1.28.0
    # via
    #   a2a-sdk
    #   filelock
    #   hatch-fancy-pypi-readme
    #   hatch-vcs
    #   jsonschema
    #   mcp
    #   termcolor
    #   uv-dynamic-versioning
    #   uvicorn
Containerfile (1)
20-21: Remove duplicate `gcc` in dnf install command.
The command lists `gcc` twice: `gcc gcc cmake cargo`. This is harmless but should be cleaned up.
Proposed fix

-RUN dnf install -y --nodocs --setopt=keepcache=0 --setopt=tsflags=nodocs gcc gcc cmake cargo
+RUN dnf install -y --nodocs --setopt=keepcache=0 --setopt=tsflags=nodocs gcc cmake cargo
scripts/konflux_requirements.sh (3)
1-1: Add `set -euo pipefail` for robust error handling.
The script lacks fail-fast behavior. If any command fails (e.g., `uv pip compile`, `sed -i`), downstream steps may produce invalid artifacts or the script may continue silently.
Proposed fix

 #!/bin/bash
+set -euo pipefail
 # Script to split requirements by index source
31-35: Fix inconsistent script name in autogenerated comments.
The header comments reference `split_requirements.sh` but this script is named `konflux_requirements.sh`.
Proposed fix

 # Initialize output files
 echo "# Packages from pypi.org" > "$SOURCE_FILE"
-echo "# This file was autogenerated by split_requirements.sh" >> "$SOURCE_FILE"
+echo "# This file was autogenerated by konflux_requirements.sh" >> "$SOURCE_FILE"
 echo "# Packages from console.redhat.com" > "$WHEEL_FILE"
-echo "# This file was autogenerated by split_requirements.sh" >> "$WHEEL_FILE"
+echo "# This file was autogenerated by konflux_requirements.sh" >> "$WHEEL_FILE"
 echo "--index-url https://console.redhat.com/api/pypi/public-rhai/rhoai/3.2/cpu-ubi9/simple/" >> "$WHEEL_FILE"
58-64: Add validation for empty `wheel_packages` result.
If `WHEEL_FILE` contains only comments or is empty, `wheel_packages` becomes an empty string, resulting in `"packages": ""` in the YAML files, which may cause pipeline failures.
Proposed fix

 # replace the list of binary packages in konflux pipeline configuration
 # only the package names, not the versions, delimited by commas
 wheel_packages=$(grep -v "^[#-]" "$WHEEL_FILE" | sed 's/==.*//' | tr '\n' ',' | sed 's/,$//')
+
+if [[ -z "$wheel_packages" ]]; then
+  echo "Error: No wheel packages found in $WHEEL_FILE" >&2
+  exit 1
+fi
+
 # append extra wheels to the list
 wheel_packages="$wheel_packages,$EXTRA_WHEELS"
 sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-pull-request.yaml
 sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-push.yaml
🧹 Nitpick comments (2)
requirements.overrides.txt (1)
1-11: Override pins for RHOAI index compliance look appropriate.
The override file correctly pins packages to versions available on the RHOAI wheel index. This approach cleanly separates the RHOAI-specific pins from the main `pyproject.toml` dependencies.
Minor: Add a trailing newline after line 11 for POSIX compliance.
Add trailing newline

 pillow==12.0.0
-faiss-cpu==1.12.0
+faiss-cpu==1.12.0
+
74-75: Consider preserving intermediate files for debugging.
The script removes intermediate files (`$RAW_REQ_FILE`, `$WHEEL_FILE`, `$SOURCE_FILE`) immediately after processing. For a WIP PR, consider either:
- Adding a `--keep-intermediates` flag (see the sketch below)
- Moving cleanup to a separate target
This aids debugging when the generated hashed requirements don't match expectations.
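A hypothetical shape for that flag; the flag name and local variable are illustrative, while the file variables match the script's existing ones:

# Hypothetical --keep-intermediates handling (names illustrative):
keep_intermediates=0
[[ "${1:-}" == "--keep-intermediates" ]] && keep_intermediates=1
if (( ! keep_intermediates )); then
  rm -f "$RAW_REQ_FILE" "$WHEEL_FILE" "$SOURCE_FILE"   # existing cleanup, now conditional
fi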
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (16)
- .tekton/lightspeed-stack-pull-request.yaml
- .tekton/lightspeed-stack-push.yaml
- Containerfile
- Makefile
- pyproject.toml
- requirements-build.txt
- requirements.aarch64.txt
- requirements.hashes.source.txt
- requirements.hashes.wheel.txt
- requirements.overrides.txt
- requirements.torch.txt
- requirements.x86_64.txt
- rpms.in.yaml
- rpms.lock.yaml
- scripts/konflux_requirements.sh
- scripts/remove_torch_deps.sh
💤 Files with no reviewable changes (2)
- requirements.torch.txt
- scripts/remove_torch_deps.sh
✅ Files skipped from review due to trivial changes (1)
- requirements.hashes.source.txt
🚧 Files skipped from review as they are similar to previous changes (3)
- rpms.in.yaml
- requirements.hashes.wheel.txt
- .tekton/lightspeed-stack-push.yaml
🧰 Additional context used
📓 Path-based instructions (1)
pyproject.toml
📄 CodeRabbit inference engine (CLAUDE.md)
pyproject.toml: Check `pyproject.toml` for supported Python versions before development
Always check `pyproject.toml` for existing dependencies before adding new ones
Always verify current library versions in `pyproject.toml` rather than assuming versions
Use pylint with `source-roots = "src"` configuration
Files:
pyproject.toml
🧠 Learnings (3)
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Use `uv sync --group dev --group llslibdev` to install development dependencies
Applied to files:
Containerfile
📚 Learning: 2025-08-18T11:45:59.961Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:77-77
Timestamp: 2025-08-18T11:45:59.961Z
Learning: torch==2.7.1 is available on PyPI and is a valid version that can be used in dependency specifications.
Applied to files:
Makefile
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Applies to pyproject.toml : Always verify current library versions in `pyproject.toml` rather than assuming versions
Applied to files:
pyproject.toml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build-pr
- GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
- GitHub Check: E2E: library mode / ci
🔇 Additional comments (10)
Makefile (1)
10-10: TORCH_VERSION correctly synchronized with pyproject.toml.
The version bump to 2.9.0 is consistent with the `torch==2.9.0` pin in `pyproject.toml` line 172.
.tekton/lightspeed-stack-pull-request.yaml (2)
51-54: Requirements files updated to use hashed wheel/source split; aligns with RHOAI workflow.
The change from architecture-specific files to `requirements.hashes.wheel.txt` and `requirements.hashes.source.txt` supports the PR's goal of separating RHOAI wheels from source-built packages. All three specified files exist and contain valid content. The removal of `requirements.torch.txt` is appropriate since torch is now included in the main hashed files.
57-62: Binary packages list is properly synchronized with hashed requirements.
The binary list (120 packages) correctly includes all 118 packages from `requirements.hashes.wheel.txt` plus `uv` and `pip` as build-time dependencies. The script `scripts/konflux_requirements.sh` automatically maintains this synchronization by extracting wheel packages and appending `EXTRA_WHEELS="uv,pip"` before updating the YAML files (lines 68-70 of the script).
pyproject.toml (1)
168-173: Versions confirmed available on RHOAI index.
The numpy (2.3.5) and torch (2.9.0) versions align with Makefile `TORCH_VERSION := 2.9.0` and are confirmed present in `requirements.hashes.wheel.txt`, which was compiled against the RHOAI cpu-ubi9 index. This artifact confirms both versions are available on the target index.
Containerfile (2)
8-12: LGTM: Environment variable configuration for Rust/Maturin.
Adding `MATURIN_NO_INSTALL_RUST=1` is appropriate since Rust is now being installed via system packages (cargo). This prevents Maturin from attempting to download/install its own Rust toolchain.
36-43: LGTM: Hermetic build flow with hashed requirements.
The conditional block correctly handles both hermetic (cachi2) and non-hermetic builds. Using `--no-deps` with hashed requirement files ensures reproducible builds with pre-verified dependencies.
scripts/konflux_requirements.sh (1)
70-72: Verify platform handling for source packages.
Line 70 uses `--universal` for wheel packages (correct for multi-arch support), but line 71 does not use `--universal` for source packages. If any pypi.org packages have platform-specific dependencies, this could cause issues on aarch64 builds.
Consider whether `--universal` should also be applied to the source file compilation:

-uv pip compile "$SOURCE_FILE" -o "$SOURCE_HASH_FILE" --refresh --generate-hashes --python-version 3.12 --emit-index-url --no-deps --no-annotate
+uv pip compile "$SOURCE_FILE" -o "$SOURCE_HASH_FILE" --refresh --generate-hashes --python-version 3.12 --emit-index-url --no-deps --no-annotate --universal
rpms.lock.yaml (3)
7-41: LGTM: New build tool packages for aarch64.
The additions of cargo, cmake (with cmake-data, cmake-filesystem, cmake-rpm-macros), and related packages align with the Containerfile changes that now install `gcc cmake cargo` for building Python packages from source.
147-160: LGTM: Rust toolchain packages added.
The rust and rust-std-static packages (version 1.88.0) are required dependencies for cargo and building Rust-based Python packages like fastuuid and maturin.
312-346: LGTM: x86_64 build tool packages match aarch64.
The x86_64 architecture has matching cargo, cmake, and rust packages at the same versions as aarch64, ensuring consistent builds across platforms.
Force-pushed 781b24a to 31eab47
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@Makefile`:
- Around line 113-114: The konflux_requirements generation script
scripts/konflux_requirements.sh lacks strict error handling; update that script
(invoked by the Makefile target konflux-requirements) by adding shell strictness
flags right after the shebang: enable exit-on-error and fail-on-pipe-failure
(i.e., add set -e and set -o pipefail) so the script exits immediately on any
command or pipeline failure and avoids producing partial/corrupted requirements
files for the Tekton build.
In `@requirements-build.txt`:
- Around line 32-35: The autogenerated header reference is wrong: replace any
mentions of "split_requirements.sh" in the requirements file header with the
actual generator name "konflux_requirements.sh" and regenerate the file using
the konflux_requirements.sh script (or update the generation metadata) so the
header and provenance match; verify the file contains the expected entries
(e.g., "hatchling==1.28.0") after regeneration.
In `@scripts/konflux_requirements.sh`:
- Around line 70-72: The source and wheel compilation commands are inconsistent:
add the same multi-platform/universal options to the source compile and ensure
you generate hashes for both x86_64 and aarch64 platforms. Specifically, update
the uv pip compile invocation for "$SOURCE_FILE" to include the same --universal
(or equivalent multi-platform flag) and emit-index-url options used for
"$WHEEL_FILE", and modify the earlier uv pip compile steps that target
x86_64-unknown-linux-gnu to also run for aarch64 (e.g., add a second compile
invocation with the aarch64 platform) so that both "$WHEEL_HASH_FILE" and
"$SOURCE_HASH_FILE" and the "$BUILD_FILE" dependency generation cover both
architectures.
♻️ Duplicate comments (5)
requirements-build.txt (1)
29-43: Blocker: Duplicate `hatchling` pins still present.
The file contains conflicting versions of `hatchling` (1.26.3 on line 29 and 1.28.0 on line 33). This will cause pip installation failures or unpredictable behavior. The previous review marked this as addressed, but the duplicate persists in the current code.
Regenerate `requirements-build.txt` to resolve the conflict, keeping only one version (likely 1.28.0 as the newer pin).
scripts/konflux_requirements.sh (4)
1-1: Add `set -euo pipefail` for robust error handling.
The script lacks fail-fast behavior. If `uv pip compile`, `sed`, or other commands fail silently, downstream steps may produce invalid artifacts or the pipeline may continue with corrupted state.
♻️ Proposed fix

 #!/bin/bash
+set -euo pipefail
 # Script to split requirements by index source
31-35: Fix incorrect script name in autogenerated comments.
The header comments reference `split_requirements.sh` but this script is named `konflux_requirements.sh`.
🐛 Proposed fix

 # Initialize output files
 echo "# Packages from pypi.org" > "$SOURCE_FILE"
-echo "# This file was autogenerated by split_requirements.sh" >> "$SOURCE_FILE"
+echo "# This file was autogenerated by konflux_requirements.sh" >> "$SOURCE_FILE"
 echo "# Packages from console.redhat.com" > "$WHEEL_FILE"
-echo "# This file was autogenerated by split_requirements.sh" >> "$WHEEL_FILE"
+echo "# This file was autogenerated by konflux_requirements.sh" >> "$WHEEL_FILE"
 echo "--index-url https://console.redhat.com/api/pypi/public-rhai/rhoai/3.2/cpu-ubi9/simple/" >> "$WHEEL_FILE"
58-64: Add validation for empty `wheel_packages`.
If `WHEEL_FILE` contains only comments or is empty, `wheel_packages` becomes an empty string, resulting in `"packages": ""` in the YAML files which may cause pipeline failures.
🐛 Proposed fix

 # replace the list of binary packages in konflux pipeline configuration
 # only the package names, not the versions, delimited by commas
 wheel_packages=$(grep -v "^[#-]" "$WHEEL_FILE" | sed 's/==.*//' | tr '\n' ',' | sed 's/,$//')
+
+if [[ -z "$wheel_packages" ]]; then
+  echo "ERROR: No wheel packages found in $WHEEL_FILE" >&2
+  exit 1
+fi
+
 # append extra wheels to the list
 wheel_packages="$wheel_packages,$EXTRA_WHEELS"
 sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-pull-request.yaml
 sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-push.yaml
18-28: The script only resolves dependencies for x86_64, but the project targets both x86_64 and aarch64 architectures.
The current `uv pip compile` command targets only `x86_64-unknown-linux-gnu`. Since the project explicitly supports aarch64 (as shown in `rpms.in.yaml` and the extensive aarch64 packages in `rpms.lock.yaml`), any platform-specific dependencies for aarch64 will be missed.
A marker-based `--universal` resolution alone may not cover every platform-specific wheel. Instead, run the compilation twice, once for each platform, using `--python-platform aarch64-unknown-linux-gnu` in a separate pass, to capture all platform-specific dependencies.
🧹 Nitpick comments (1)
.tekton/lightspeed-stack-push.yaml (1)
49-54: Consider formatting the packages list for better maintainability.
The `packages` value on line 50 is a very long single line (~1500+ characters), making it difficult to review changes in diffs. Since this is auto-generated by `scripts/konflux_requirements.sh`, consider updating the script to output packages in a more readable format (e.g., YAML list or multi-line string with line breaks).
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (17)
- .tekton/lightspeed-stack-pull-request.yaml
- .tekton/lightspeed-stack-push.yaml
- Containerfile
- Makefile
- README.md
- pyproject.toml
- requirements-build.txt
- requirements.aarch64.txt
- requirements.hashes.source.txt
- requirements.hashes.wheel.txt
- requirements.overrides.txt
- requirements.torch.txt
- requirements.x86_64.txt
- rpms.in.yaml
- rpms.lock.yaml
- scripts/konflux_requirements.sh
- scripts/remove_torch_deps.sh
💤 Files with no reviewable changes (2)
- requirements.torch.txt
- scripts/remove_torch_deps.sh
🚧 Files skipped from review as they are similar to previous changes (4)
- rpms.in.yaml
- requirements.hashes.source.txt
- requirements.overrides.txt
- .tekton/lightspeed-stack-pull-request.yaml
🧰 Additional context used
📓 Path-based instructions (1)
pyproject.toml
📄 CodeRabbit inference engine (CLAUDE.md)
pyproject.toml: Check `pyproject.toml` for supported Python versions before development
Always check `pyproject.toml` for existing dependencies before adding new ones
Always verify current library versions in `pyproject.toml` rather than assuming versions
Use pylint with `source-roots = "src"` configuration
Files:
pyproject.toml
🧠 Learnings (2)
📚 Learning: 2025-08-18T11:45:59.961Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:77-77
Timestamp: 2025-08-18T11:45:59.961Z
Learning: torch==2.7.1 is available on PyPI and is a valid version that can be used in dependency specifications.
Applied to files:
Makefilepyproject.toml
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Use `uv sync --group dev --group llslibdev` to install development dependencies
Applied to files:
Containerfile
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build-pr
- GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
- GitHub Check: E2E: library mode / ci
🔇 Additional comments (11)
rpms.lock.yaml (1)
1-614: LGTM - RPM lock file correctly updated for build toolchain support.
The additions of `cargo`, `cmake`, `rust`, and related build tooling packages are consistent with the PR objective to build packages from source when external wheels aren't available from the RHOAI index. The lock file maintains proper parity between aarch64 and x86_64 architectures, with matching checksums for noarch packages.
pyproject.toml (1)
168-172: Version pins are consistent with the wheel lockfile.
The `numpy==2.3.5` and `torch==2.9.0` pins match `requirements.hashes.wheel.txt` (lines 135 and 290), ensuring consistency between the dependency specification and the generated hashed requirements for the RHOAI index.
.tekton/lightspeed-stack-push.yaml (1)
44-46: Correct migration to hashed requirements files.
The switch from `requirements.x86_64.txt`, `requirements.aarch64.txt`, `requirements.torch.txt` to the new `requirements.hashes.wheel.txt` and `requirements.hashes.source.txt` aligns with the PR objective to use the RHOAI index and maintain hash verification for supply chain security.
requirements.hashes.wheel.txt (2)
1-4: RHOAI index correctly configured for wheel fetching.
The auto-generated file correctly uses the RHOAI index URL (`https://console.redhat.com/api/pypi/public-rhai/rhoai/3.2/cpu-ubi9/simple/`) as required by ProdSec. The `uv pip compile` command includes appropriate flags for reproducible builds: `--generate-hashes`, `--no-deps`, and `--universal`.
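For illustration, a hashed wheel lock of this shape could be reproduced with an invocation like the one below; the input file name requirements.wheel.in is hypothetical, while the flags and index URL come from this PR:

# Sketch only: resolve the wheel list against the RHOAI index and emit hashes.
uv pip compile requirements.wheel.in -o requirements.hashes.wheel.txt \
    --generate-hashes --no-deps --universal --python-version 3.12 \
    --index-url https://console.redhat.com/api/pypi/public-rhai/rhoai/3.2/cpu-ubi9/simple/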
305-307: Remove triton from CPU-only build or clarify if this is actually a GPU-capable build.
Triton is not required for PyTorch CPU-only builds: PyTorch's build system only adds Triton as a dependency when CUDA or ROCm is enabled. Including it here suggests either (1) this isn't actually a CPU-only build, or (2) it was pulled in unnecessarily. Additionally, the two hashes present are for both x86_64 and aarch64 architectures, so aarch64 support is not an issue.
Makefile (1)
10-10: TORCH_VERSION correctly updated to match pyproject.toml.
The version `2.9.0` is consistent with `pyproject.toml` line 172 (`torch==2.9.0`) and `requirements.hashes.wheel.txt` line 290.
Containerfile (3)
8-12: LGTM!
The addition of `MATURIN_NO_INSTALL_RUST=1` is appropriate to prevent Maturin from attempting to download Rust during builds, which aligns with the hermetic build requirements.
20-21: LGTM!
The expanded package list (`gcc cmake cargo`) correctly adds build tools required by `fastuuid` and `maturin`. The duplicate `gcc` from earlier commits has been resolved.
36-43: LGTM!
The hermetic build flow is well-structured: sourcing `cachi2.env` when available, using `--no-index --find-links` for air-gapped builds, and falling back to `uv sync` for non-hermetic environments.
README.md (2)
1187-1194: LGTM!
The new documentation clearly explains the hermetic requirements generation workflow, the three output files, and the Tekton pipeline updates. This provides good guidance for maintainers.
19-106: LGTM!
Table of contents reorganization improves navigation with proper nesting for the new MCP Server Authentication subsections.
 konflux-requirements: ## generate hermetic requirements.*.txt file for konflux build
-	uv pip compile pyproject.toml -o requirements.x86_64.txt --generate-hashes --group llslibdev --python-platform x86_64-unknown-linux-gnu --torch-backend cpu --python-version 3.12 --refresh
-	uv pip compile pyproject.toml -o requirements.aarch64.txt --generate-hashes --group llslibdev --python-platform aarch64-unknown-linux-gnu --torch-backend cpu --python-version 3.12 --refresh
-	./scripts/remove_torch_deps.sh requirements.x86_64.txt
-	./scripts/remove_torch_deps.sh requirements.aarch64.txt
-	echo "torch==${TORCH_VERSION}" | uv pip compile - -o requirements.torch.txt --generate-hashes --python-version 3.12 --torch-backend cpu --emit-index-url --no-deps --index-url https://download.pytorch.org/whl/cpu --refresh
-	uv run pybuild-deps compile --output-file=requirements-build.txt \
-		<(grep -v "^faiss-cpu" requirements.hermetic.txt) \
-		<(grep -Eo "^[a-zA-Z0-9][-a-zA-Z0-9._]*==[^ ]+" requirements.x86_64.txt | grep -v "^faiss-cpu")
+	./scripts/konflux_requirements.sh
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, check if the script exists
if [ -f scripts/konflux_requirements.sh ]; then
echo "=== Checking for error handling in scripts/konflux_requirements.sh ==="
echo ""
echo "--- First 30 lines of the script ---"
head -30 scripts/konflux_requirements.sh
echo ""
echo "--- Searching for set directives ---"
grep -n "^set -" scripts/konflux_requirements.sh || echo "No 'set -' directives found at line start"
else
echo "ERROR: scripts/konflux_requirements.sh not found"
ls -la scripts/ 2>/dev/null || echo "scripts directory not found"
fi
Repository: lightspeed-core/lightspeed-stack
Length of output: 1325
Add error handling directives to scripts/konflux_requirements.sh.
The script is missing set -e and set -o pipefail at the top. These are critical since the script generates requirements files that feed into the Tekton pipeline build. Add the following lines right after the shebang:
#!/bin/bash
set -e
set -o pipefail
This ensures the script exits immediately on any command failure or pipe error, preventing partial/corrupted output files from progressing to the build pipeline.
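A quick demonstration of what pipefail changes:

# Without pipefail the pipeline's status is the last command's (0 here);
# with pipefail the earlier failure propagates.
bash -c 'false | true; echo "default:  exit=$?"'                   # prints exit=0
bash -c 'set -o pipefail; false | true; echo "pipefail: exit=$?"'  # prints exit=1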
🤖 Prompt for AI Agents
In `@Makefile` around lines 113 - 114, The konflux_requirements generation script
scripts/konflux_requirements.sh lacks strict error handling; update that script
(invoked by the Makefile target konflux-requirements) by adding shell strictness
flags right after the shebang: enable exit-on-error and fail-on-pipe-failure
(i.e., add set -e and set -o pipefail) so the script exits immediately on any
command or pipeline failure and avoids producing partial/corrupted requirements
files for the Tekton build.
    #   openai
hatchling==1.28.0
    # via
    #   a2a-sdk
Incorrect autogenerated header comment.
Line 32 references split_requirements.sh, but the actual script generating these files is konflux_requirements.sh. This inconsistency suggests the file may have been manually edited or generated with an outdated script version.
🤖 Prompt for AI Agents
In `@requirements-build.txt` around lines 32 - 35, The autogenerated header
reference is wrong: replace any mentions of "split_requirements.sh" in the
requirements file header with the actual generator name
"konflux_requirements.sh" and regenerate the file using the
konflux_requirements.sh script (or update the generation metadata) so the header
and provenance match; verify the file contains the expected entries (e.g.,
"hatchling==1.28.0") after regeneration.
uv pip compile "$WHEEL_FILE" -o "$WHEEL_HASH_FILE" --refresh --generate-hashes --index-url https://console.redhat.com/api/pypi/public-rhai/rhoai/3.2/cpu-ubi9/simple/ --python-version 3.12 --emit-index-url --no-deps --no-annotate --universal
uv pip compile "$SOURCE_FILE" -o "$SOURCE_HASH_FILE" --refresh --generate-hashes --python-version 3.12 --emit-index-url --no-deps --no-annotate
uv run pybuild-deps compile --output-file="$BUILD_FILE" "$SOURCE_FILE"
Inconsistent multi-platform handling between wheel and source compilations.
Line 70 uses --universal for wheel compilation, but line 71 (source compilation) omits it. This means source packages will only be resolved for the default platform (likely x86_64), potentially causing aarch64 builds to fail if any source package has platform-specific dependencies.
Additionally, the initial uv pip compile (lines 18-28) only targets x86_64-unknown-linux-gnu, which may miss aarch64-specific dependencies.
🐛 Proposed fix
uv pip compile "$WHEEL_FILE" -o "$WHEEL_HASH_FILE" --refresh --generate-hashes --index-url https://console.redhat.com/api/pypi/public-rhai/rhoai/3.2/cpu-ubi9/simple/ --python-version 3.12 --emit-index-url --no-deps --no-annotate --universal
-uv pip compile "$SOURCE_FILE" -o "$SOURCE_HASH_FILE" --refresh --generate-hashes --python-version 3.12 --emit-index-url --no-deps --no-annotate
+uv pip compile "$SOURCE_FILE" -o "$SOURCE_HASH_FILE" --refresh --generate-hashes --python-version 3.12 --emit-index-url --no-deps --no-annotate --universal
 uv run pybuild-deps compile --output-file="$BUILD_FILE" "$SOURCE_FILE"
🤖 Prompt for AI Agents
In `@scripts/konflux_requirements.sh` around lines 70 - 72, The source and wheel
compilation commands are inconsistent: add the same multi-platform/universal
options to the source compile and ensure you generate hashes for both x86_64 and
aarch64 platforms. Specifically, update the uv pip compile invocation for
"$SOURCE_FILE" to include the same --universal (or equivalent multi-platform
flag) and emit-index-url options used for "$WHEEL_FILE", and modify the earlier
uv pip compile steps that target x86_64-unknown-linux-gnu to also run for
aarch64 (e.g., add a second compile invocation with the aarch64 platform) so
that both "$WHEEL_HASH_FILE" and "$SOURCE_HASH_FILE" and the "$BUILD_FILE"
dependency generation cover both architectures.
Signed-off-by: Haoyu Sun <hasun@redhat.com>
Force-pushed 31eab47 to 5812aa0
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @.tekton/lightspeed-stack-pull-request.yaml:
- Line 58: The generated packages list is unsorted (e.g., "uv" at the end)
because the generator script konflux_requirements.sh is not sorting its output;
update the generator to sort the package names before joining/emitting them (for
example pipe the package lines through sort -u or use an equivalent
language-level sort) so the "packages" string in
.tekton/lightspeed-stack-pull-request.yaml is produced in stable alphabetical
order; ensure duplicates are handled (unique) and adjust the code path that
builds the comma-separated string (the place that constructs the long "packages"
value) to use the sorted list.
In `@scripts/konflux_requirements.sh`:
- Around line 17-28: The requirements are being resolved only for x86_64 due to
the --python-platform x86_64 flag in the uv pip compile invocation (writing to
RAW_REQ_FILE), which can lock versions incompatible with aarch64; update the
script to produce architecture-aware requirements by either invoking uv pip
compile twice (once with --python-platform x86_64 and once with
--python-platform aarch64) and merging outputs, or remove the --python-platform
constraint and use a more inclusive resolution before the later wheel recompile
(--universal) step; ensure the commands that reference RAW_REQ_FILE, the
--python-platform option, and the later universal wheel rebuild are adjusted so
aarch64-only source packages (e.g., polyleven) are not excluded by the initial
x86_64 resolution.
♻️ Duplicate comments (5)
requirements-build.txt (1)
29-43: Blocker: Duplicate `hatchling` pins remain in the file.
The file contains conflicting pins for `hatchling`:
- Line 29: `hatchling==1.26.3`
- Line 33: `hatchling==1.28.0`
This will cause `pip install` to fail with conflicting requirements. Despite past review comments indicating this was addressed, the duplicates persist.
Proposed fix (keep the newer version)

-hatchling==1.26.3
-    # via
-    #   hatch-fancy-pypi-readme
-    #   openai
hatchling==1.28.0
    # via
    #   a2a-sdk
    #   filelock
    #   hatch-fancy-pypi-readme
    #   hatch-vcs
    #   jsonschema
    #   mcp
    #   termcolor
    #   uv-dynamic-versioning
    #   uvicorn
Regenerate this file using `pybuild-deps compile --output-file=requirements-build.txt requirements.source.txt` after ensuring the source file has consistent dependencies, or manually remove the duplicate entry.
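As a cheap guard against this class of regression, an illustrative one-liner (not part of the repo) that flags any package pinned more than once:

# Prints each package name that carries more than one "==" pin.
grep -Eo '^[A-Za-z0-9][A-Za-z0-9._-]*==' requirements-build.txt | sort | uniq -d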
scripts/konflux_requirements.sh (4)
32-35: Incorrect script name in autogenerated comments.
The header comments reference `split_requirements.sh` but the script is named `konflux_requirements.sh`.
🐛 Proposed fix

 # Initialize output files
 echo "# Packages from pypi.org" > "$SOURCE_FILE"
-echo "# This file was autogenerated by split_requirements.sh" >> "$SOURCE_FILE"
+echo "# This file was autogenerated by konflux_requirements.sh" >> "$SOURCE_FILE"
 echo "# Packages from console.redhat.com" > "$WHEEL_FILE"
-echo "# This file was autogenerated by split_requirements.sh" >> "$WHEEL_FILE"
+echo "# This file was autogenerated by konflux_requirements.sh" >> "$WHEEL_FILE"
 echo "--index-url https://console.redhat.com/api/pypi/public-rhai/rhoai/3.2/cpu-ubi9/simple/" >> "$WHEEL_FILE"
60-64: Add validation for empty `wheel_packages` to prevent invalid YAML.
If `WHEEL_FILE` contains only comments or is empty, `wheel_packages` becomes an empty string, resulting in `"packages": ""` in the YAML files. This could cause the Konflux pipeline to fail or behave unexpectedly.
🐛 Proposed fix

 # replace the list of binary packages in konflux pipeline configuration
 # only the package names, not the versions, delimited by commas
 wheel_packages=$(grep -v "^[#-]" "$WHEEL_FILE" | sed 's/==.*//' | tr '\n' ',' | sed 's/,$//')
+
+if [[ -z "$wheel_packages" ]]; then
+  echo "ERROR: No wheel packages found in $WHEEL_FILE" >&2
+  exit 1
+fi
+
 # append extra wheels to the list
 wheel_packages="$wheel_packages,$EXTRA_WHEELS"
 sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-pull-request.yaml
 sed -i 's/"packages": "[^"]*"/"packages": "'"$wheel_packages"'"/' .tekton/lightspeed-stack-push.yaml
70-71: Inconsistent multi-platform handling between wheel and source compilations.
Line 70 uses `--universal` for wheel compilation, but line 71 (source compilation) omits it. This means source packages will only be resolved for the default platform (likely x86_64), potentially causing aarch64 builds to fail if any source package has platform-specific dependencies.
🐛 Proposed fix

 uv pip compile "$WHEEL_FILE" -o "$WHEEL_HASH_FILE" --refresh --generate-hashes --index-url https://console.redhat.com/api/pypi/public-rhai/rhoai/3.2/cpu-ubi9/simple/ --python-version 3.12 --emit-index-url --no-deps --no-annotate --universal
-uv pip compile "$SOURCE_FILE" -o "$SOURCE_HASH_FILE" --refresh --generate-hashes --python-version 3.12 --emit-index-url --no-deps --no-annotate
+uv pip compile "$SOURCE_FILE" -o "$SOURCE_HASH_FILE" --refresh --generate-hashes --python-version 3.12 --emit-index-url --no-deps --no-annotate --universal
 uv run pybuild-deps compile --output-file="$BUILD_FILE" "$SOURCE_FILE"
1-1: Add `set -euo pipefail` for robust error handling.
The script lacks fail-fast behavior. If any command fails silently (e.g., `uv pip compile` or `sed -i`), downstream steps may produce invalid artifacts.
♻️ Proposed fix

 #!/bin/bash
+set -euo pipefail
 # Script to split requirements by index source
🧹 Nitpick comments (1)
requirements.overrides.txt (1)
1-11: Add trailing newline for POSIX compliance.
The file lacks a trailing newline, which can cause issues with some text processing tools and version control diffs.
♻️ Proposed fix

 faiss-cpu==1.12.0
+
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (17)
- .tekton/lightspeed-stack-pull-request.yaml
- .tekton/lightspeed-stack-push.yaml
- Containerfile
- Makefile
- README.md
- pyproject.toml
- requirements-build.txt
- requirements.aarch64.txt
- requirements.hashes.source.txt
- requirements.hashes.wheel.txt
- requirements.overrides.txt
- requirements.torch.txt
- requirements.x86_64.txt
- rpms.in.yaml
- rpms.lock.yaml
- scripts/konflux_requirements.sh
- scripts/remove_torch_deps.sh
💤 Files with no reviewable changes (2)
- scripts/remove_torch_deps.sh
- requirements.torch.txt
🚧 Files skipped from review as they are similar to previous changes (3)
- Makefile
- rpms.in.yaml
- requirements.hashes.wheel.txt
🧰 Additional context used
📓 Path-based instructions (1)
pyproject.toml
📄 CodeRabbit inference engine (CLAUDE.md)
pyproject.toml: Check `pyproject.toml` for supported Python versions before development
Always check `pyproject.toml` for existing dependencies before adding new ones
Always verify current library versions in `pyproject.toml` rather than assuming versions
Use pylint with `source-roots = "src"` configuration
Files:
pyproject.toml
🧠 Learnings (2)
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Use `uv sync --group dev --group llslibdev` to install development dependencies
Applied to files:
Containerfile
📚 Learning: 2026-01-11T16:30:41.784Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T16:30:41.784Z
Learning: Applies to pyproject.toml : Always verify current library versions in `pyproject.toml` rather than assuming versions
Applied to files:
pyproject.toml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build-pr
- GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
- GitHub Check: E2E: library mode / ci
🔇 Additional comments (12)
pyproject.toml (1)
168-174: LGTM on dependency version bumps.
The `numpy==2.3.5` and `torch==2.9.0` pins align with the RHOAI wheel index versions and are compatible with Python 3.12 as required. These updates support the PR objective of using wheels from the RHOAI index.
requirements.hashes.source.txt (1)
1-3: Autogenerated source requirements file looks correct.
This file is properly generated by `uv pip compile` with hash verification enabled. Per the PR objectives, this file targets PyPI for packages that need to be built from source when not available in the RHOAI wheel index.
rpms.lock.yaml (2)
112-125: LLVM and build tooling additions look correct.
The `llvm-libs-20.1.8-3.el9` and `llvm-filesystem-20.1.8-3.el9` packages are added consistently for both architectures, supporting compilation of native extensions. The versions are current and appropriate for the RHEL 9 ecosystem.
Also applies to: 417-430
7-13: Rust toolchain additions support source builds of Rust-based packages.
The addition of `rust-1.88.0-1.el9`, `cargo-1.88.0-1.el9`, and `rust-std-static-1.88.0-1.el9` for both aarch64 and x86_64 architectures enables building Rust-based Python packages (like `maturin` dependencies) from source. All Rust packages maintain consistent versions across architectures, supporting the PR objective of building packages from source when external wheels aren't available.
.tekton/lightspeed-stack-push.yaml (2)
43-56: Requirements files and binary packages list updated correctly.
The pipeline now references the new hashed requirements files (`requirements.hashes.wheel.txt`, `requirements.hashes.source.txt`, `requirements.hermetic.txt`), which aligns with the PR objective of using RHOAI wheels. The expanded `binary.packages` list explicitly declares which packages can be fetched as pre-built wheels versus built from source.
50-54: Binary packages list is comprehensive and complete.
The packages list in the Tekton configuration includes all 119 packages from `requirements.hashes.wheel.txt` and additionally incorporates three build/package management tools (`maturin`, `pip`, `uv`). Verification confirms no packages with native extensions are missing.
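A rough way to re-run that check locally; an illustrative shell sketch assuming this PR's file layout, where the build-time extras (uv, pip, maturin) are expected to show up as differences:

# Names from the wheel lock (normalized to lowercase/hyphens) vs. the
# Tekton "packages" value; comm -3 prints entries unique to either side.
comm -3 \
  <(grep -Eo '^[A-Za-z0-9][A-Za-z0-9_.-]*' requirements.hashes.wheel.txt | tr 'A-Z_' 'a-z-' | sort -u) \
  <(grep -o '"packages": "[^"]*"' .tekton/lightspeed-stack-push.yaml | sed 's/.*": "//;s/"$//' | tr ',' '\n' | sort -u)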
requirements-build.txt (1)
48-56: Maturin dependency for fastuuid is correctly specified.
Lines 48-49 show `maturin==1.10.2` is required by `fastuuid`, a Rust-based UUID library. The dependency is valid: fastuuid uses maturin to build Rust extensions. The Containerfile appropriately documents that cargo (Rust toolchain) is required for both fastuuid and maturin, and the build scripts properly pin maturin to version 1.10.2.
.tekton/lightspeed-stack-pull-request.yaml (1)
52-54: LGTM!
The requirements files have been correctly updated to reference the new hash-based requirement sources generated by `konflux_requirements.sh`. This aligns with the hermetic build workflow.
Containerfile (2)
8-12: LGTM!
The `MATURIN_NO_INSTALL_RUST=1` environment variable correctly prevents Maturin from attempting to download Rust, which is appropriate since `cargo` is now installed via `dnf` at line 21.
40-41: Good addition of `pip check` for dependency validation.
The `pip check` after installation validates that all dependencies are correctly satisfied and compatible. This is essential for hermetic builds where dependency resolution happens ahead of time.
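For illustration, the fail-fast behavior can be made explicit in a build step:

# pip check exits non-zero on unmet or conflicting dependencies,
# failing the image build instead of surfacing the problem at runtime.
pip check || { echo "dependency conflict detected" >&2; exit 1; }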
README.md (2)
1187-1194: LGTM!
The documentation accurately describes the new hermetic build workflow, including the generated files and their purposes. This aligns well with the actual implementation in `konflux_requirements.sh`.
19-106: Table of contents reformatted.
The ToC has been reorganized with updated indentation levels. This appears to be auto-generated by vim-markdown-toc based on the markers at lines 16 and 107.
| "requirements_build_files": ["requirements-build.txt"], | ||
| "binary": { | ||
| "packages": "torch,faiss-cpu,uv,pip,jiter,tiktoken,numpy,scipy,pandas,pillow,scikit_learn,pyarrow,cryptography,fastuuid,hf_xet,maturin,pydantic_core,rpds_py,safetensors,tokenizers", | ||
| "packages": "accelerate,aiohappyeyeballs,aiohttp,aiosignal,aiosqlite,annotated-doc,annotated-types,anyio,asyncpg,attrs,autoevals,cachetools,cffi,charset-normalizer,chevron,click,cryptography,datasets,dill,distro,dnspython,durationpy,faiss-cpu,fire,frozenlist,fsspec,googleapis-common-protos,greenlet,grpcio,h11,hf-xet,httpcore,httpx,httpx-sse,huggingface-hub,idna,jinja2,jiter,joblib,jsonschema-specifications,llama-stack-client,lxml,markdown-it-py,markupsafe,mdurl,mpmath,multidict,networkx,nltk,numpy,oauthlib,opentelemetry-api,opentelemetry-exporter-otlp,opentelemetry-exporter-otlp-proto-common,opentelemetry-exporter-otlp-proto-grpc,opentelemetry-exporter-otlp-proto-http,opentelemetry-instrumentation,opentelemetry-proto,opentelemetry-sdk,opentelemetry-semantic-conventions,packaging,pandas,pillow,ply,polyleven,prompt-toolkit,propcache,proto-plus,psycopg2-binary,pyaml,pyarrow,pyasn1,pyasn1-modules,pycparser,pydantic,pydantic-core,pydantic-settings,pygments,pyjwt,python-dateutil,python-dotenv,python-multipart,pytz,pyyaml,referencing,regex,requests,requests-oauthlib,rich,rpds-py,rsa,safetensors,scikit-learn,scipy,sentence-transformers,setuptools,six,sniffio,sqlalchemy,starlette,sympy,threadpoolctl,tiktoken,tokenizers,torch,tqdm,transformers,triton,typing-extensions,typing-inspection,tzdata,urllib3,wcwidth,websocket-client,wrapt,xxhash,yarl,zipp,uv,pip,maturin", |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Extract and verify the packages list is sorted
packages=$(grep -o '"packages": "[^"]*"' .tekton/lightspeed-stack-pull-request.yaml | sed 's/"packages": "//;s/"$//')
echo "$packages" | tr ',' '\n' | sort -c && echo "Packages are sorted" || echo "Packages are NOT sorted"
echo "Total packages: $(echo "$packages" | tr ',' '\n' | wc -l)"Repository: lightspeed-core/lightspeed-stack
Length of output: 162
🏁 Script executed:
# Check the file exists and examine its structure
ls -la .tekton/lightspeed-stack-pull-request.yaml && wc -l .tekton/lightspeed-stack-pull-request.yaml
Repository: lightspeed-core/lightspeed-stack
Length of output: 215
🏁 Script executed:
# Look at the context around line 58
sed -n '55,65p' .tekton/lightspeed-stack-pull-request.yaml | cat -n
Repository: lightspeed-core/lightspeed-stack
Length of output: 1788
Fix the generator script to produce sorted package output.
The packages list (121 items) is not sorted: uv appears at position 119 but should be alphabetically earlier. While the long line length itself is acceptable for auto-generated content, the unsorted state indicates the konflux_requirements.sh generator is not producing consistent output, making diffs harder to review. Ensure the generator sorts packages alphabetically.
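A minimal sketch of the fix, reusing the script's existing extraction and EXTRA_WHEELS variable, with sort -u added for stable, de-duplicated output:

# Merge lock-derived names with EXTRA_WHEELS, then sort and de-duplicate
# before joining back into the comma-separated "packages" value.
wheel_packages=$( { grep -v '^[#-]' "$WHEEL_FILE" | sed 's/==.*//'; \
    tr ',' '\n' <<< "$EXTRA_WHEELS"; } | sort -u | paste -sd, -)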
🤖 Prompt for AI Agents
In @.tekton/lightspeed-stack-pull-request.yaml at line 58, The generated
packages list is unsorted (e.g., "uv" at the end) because the generator script
konflux_requirements.sh is not sorting its output; update the generator to sort
the package names before joining/emitting them (for example pipe the package
lines through sort -u or use an equivalent language-level sort) so the
"packages" string in .tekton/lightspeed-stack-pull-request.yaml is produced in
stable alphabetical order; ensure duplicates are handled (unique) and adjust the
code path that builds the comma-separated string (the place that constructs the
long "packages" value) to use the sorted list.
| # Generate requirements list from pyproject.toml from both indexes | ||
| uv pip compile pyproject.toml -o "$RAW_REQ_FILE" \ | ||
| --group llslibdev \ | ||
| --python-platform x86_64-unknown-linux-gnu \ | ||
| --python-version 3.12 \ | ||
| --refresh \ | ||
| --index https://console.redhat.com/api/pypi/public-rhai/rhoai/3.2/cpu-ubi9/simple/ \ | ||
| --default-index https://pypi.org/simple/ \ | ||
| --index-strategy unsafe-best-match \ | ||
| --emit-index-annotation \ | ||
| --no-sources \ | ||
| --override requirements.overrides.txt |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain
🏁 Script executed:
git ls-files | head -20
Repository: lightspeed-core/lightspeed-stack
Length of output: 782
🏁 Script executed:
cat -n scripts/konflux_requirements.sh
Repository: lightspeed-core/lightspeed-stack
Length of output: 4747
🏁 Script executed:
rg -i "aarch64|arm64" --type-listRepository: lightspeed-core/lightspeed-stack
Length of output: 465
🏁 Script executed:
fd pyproject.toml | head -5
Repository: lightspeed-core/lightspeed-stack
Length of output: 90
🏁 Script executed:
rg -i "aarch64|arm64" .Repository: lightspeed-core/lightspeed-stack
Length of output: 50389
🏁 Script executed:
fd -e yaml -e yml | xargs grep -l "aarch64\|arm64" | head -10
Repository: lightspeed-core/lightspeed-stack
Length of output: 103
🏁 Script executed:
grep -r "uv pip compile" . --include="*.sh" --include="*.yaml" --include="*.yml" | grep -v ".git"Repository: lightspeed-core/lightspeed-stack
Length of output: 613
🏁 Script executed:
cat -n pyproject.toml | head -100
Repository: lightspeed-core/lightspeed-stack
Length of output: 3609
🏁 Script executed:
grep -A 5 -B 5 "marker\|platform\|sys_platform\|aarch64\|arm64" pyproject.toml
Repository: lightspeed-core/lightspeed-stack
Length of output: 58
🏁 Script executed:
cat -n pyproject.toml | tail -100
Repository: lightspeed-core/lightspeed-stack
Length of output: 3302
🏁 Script executed:
rg "sys_platform|platform_machine|marker|polyleven" pyproject.tomlRepository: lightspeed-core/lightspeed-stack
Length of output: 58
🏁 Script executed:
rg "polyleven" . --type-list 2>/dev/null || rg "polyleven" .Repository: lightspeed-core/lightspeed-stack
Length of output: 5332
The initial x86_64 compilation may miss aarch64-specific dependencies.
The script compiles requirements targeting x86_64 (line 20), which selects package versions based on x86_64 availability. Some packages lack aarch64 wheels—notably polyleven (a dependency of autoevals), which has no pre-built aarch64 wheel and requires source compilation on aarch64 (as noted in the Containerfile). While line 70 re-compiles wheels with --universal, this occurs after the x86_64-based package selection, and line 71 (source packages) has no architecture-specific handling. For aarch64 builds to work correctly, consider either compiling for both platforms separately or using a more inclusive strategy that doesn't lock versions to x86_64's resolution.
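One possible shape for a two-pass resolution; a sketch assuming uv's --python-platform flag, with illustrative output names and the merge of the two locks left to the generator:

for plat in x86_64-unknown-linux-gnu aarch64-unknown-linux-gnu; do
  # One resolution pass per target platform, so aarch64-only sdist
  # fallbacks (e.g. polyleven) are not hidden by the x86_64 solve.
  uv pip compile pyproject.toml -o "requirements.${plat%%-*}.raw.txt" \
      --group llslibdev --python-version 3.12 --python-platform "$plat"
done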
🤖 Prompt for AI Agents
In `@scripts/konflux_requirements.sh` around lines 17 - 28, The requirements are
being resolved only for x86_64 due to the --python-platform x86_64 flag in the
uv pip compile invocation (writing to RAW_REQ_FILE), which can lock versions
incompatible with aarch64; update the script to produce architecture-aware
requirements by either invoking uv pip compile twice (once with
--python-platform x86_64 and once with --python-platform aarch64) and merging
outputs, or remove the --python-platform constraint and use a more inclusive
resolution before the later wheel recompile (--universal) step; ensure the
commands that reference RAW_REQ_FILE, the --python-platform option, and the
later universal wheel rebuild are adjusted so aarch64-only source packages
(e.g., polyleven) are not excluded by the initial x86_64 resolution.
Description
To conform with ProdSec requirements, we fetch Python wheels from the RHOAI index only.
Packages from any external index are built from source.
Type of change
Tools used to create PR
Identify any AI code assistants used in this PR (for transparency and review context)
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit
Updates
New Features
Chores
Removed