test: add e2e tests for TLS security profile watcher #1055
Conversation
Important: Review skipped. Auto reviews are limited based on label configuration. 🚫 Review skipped — only excluded labels are configured.
Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration.
Configuration used: Repository YAML (base), Organization UI (inherited). Review profile: CHILL. Plan: Enterprise. Run ID:
You can disable this status message by setting `reviews.review_status` to `false` in the CodeRabbit configuration file.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough

Adds an OpenShift-only end-to-end test that manipulates `configv1.APIServer.spec.TLSSecurityProfile` and verifies the observability-operator by observing Deployment readiness and operator pod/container restart signals. The test clears the TLS profile baseline, runs subtests for default/Modern/Custom TLS profiles and a non-TLS annotation stability check, and uses helpers to patch/restore the APIServer, locate the running operator pod, compute restart counts, wait for restarts, and ensure a stable restart-count baseline.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 5 passed
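The walkthrough mentions helpers that patch and restore the APIServer CR. A minimal sketch of what such a patch helper could look like, assuming a controller-runtime client with the openshift/api scheme registered; the name `setTLSProfile` matches the review comments below, but the PR's actual wiring may differ:

```go
package e2e

import (
	"context"
	"testing"

	configv1 "github.com/openshift/api/config/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Assumption: a client initialized elsewhere (e.g., in TestMain) with
// configv1 added to its scheme.
var k8sClient client.Client

// setTLSProfile patches the cluster-scoped APIServer CR ("cluster") with the
// given TLS security profile; nil clears the field back to the platform default.
func setTLSProfile(t *testing.T, profile *configv1.TLSSecurityProfile) {
	t.Helper()
	ctx := context.Background()

	apiServer := &configv1.APIServer{}
	if err := k8sClient.Get(ctx, client.ObjectKey{Name: "cluster"}, apiServer); err != nil {
		t.Fatalf("get APIServer CR: %v", err)
	}

	apiServer.Spec.TLSSecurityProfile = profile
	if err := k8sClient.Update(ctx, apiServer); err != nil {
		t.Fatalf("update APIServer CR: %v", err)
	}
}
```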
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: IshwarKanse
The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
Hi @IshwarKanse. Thanks for your PR. I'm waiting for a rhobs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Regular contributors should join the org to skip this step. Once the patch is verified, the new status will be reflected by the `ok-to-test` label.

Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Force-pushed from 3d0a9e2 to 8bcce30 (Compare)
@coderabbitai review
✅ Actions performed: review triggered.
Actionable comments posted: 1
🧹 Nitpick comments (1)
test/e2e/zz_tls_profile_test.go (1)
397-405: Consider filtering by container name for consistency. Similar to the previous restart detection loop, this checks uptime across all containers. For consistency with `getOperatorContainerRestartCount`, consider filtering to `operatorContainerName`.

♻️ Suggested improvement
```diff
 for _, cs := range pod.Status.ContainerStatuses {
+	if cs.Name != operatorContainerName {
+		continue
+	}
 	if cs.State.Running != nil {
 		uptime := time.Since(cs.State.Running.StartedAt.Time)
 		if uptime < 15*time.Second {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/zz_tls_profile_test.go` around lines 397 - 405, The uptime check iterates all pod.Status.ContainerStatuses and may include non-operator containers; update the loop to filter by the operatorContainerName (same selector used in getOperatorContainerRestartCount) so you only inspect ContainerStatus entries where cs.Name == operatorContainerName before checking cs.State.Running and StartedAt; this ensures consistency with the restart-detection logic and avoids false waits due to other containers' short uptimes.
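For reference, a plausible shape of the `getOperatorContainerRestartCount` helper the comment alludes to; a hedged sketch, not the PR's actual code, with the container name constant assumed:

```go
import corev1 "k8s.io/api/core/v1"

// Assumption: the real constant in the test file may use a different value.
const operatorContainerName = "operator"

// getOperatorContainerRestartCount returns the restart count of the operator
// container only, ignoring any sidecar containers in the pod.
func getOperatorContainerRestartCount(pod *corev1.Pod) int32 {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Name == operatorContainerName {
			return cs.RestartCount
		}
	}
	return 0
}
```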
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/e2e/zz_tls_profile_test.go`:
- Around line 361-366: The restart-detection loop iterates all
p.Status.ContainerStatuses and can pick up non-operator containers; update the
loop in the test (the block iterating p.Status.ContainerStatuses) to only
consider the operator by checking cs.Name (or the container name field used)
equals operatorContainerName before comparing RestartCount to baselineRestarts
so only the operator container restarts trigger the true return; you may also
explicitly skip InitContainerStatuses if relevant.
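Applied to the loop this comment describes, the fix could look like the following sketch; identifier names are taken from the comment text, not the actual file:

```go
// operatorRestarted reports whether the operator container has restarted
// relative to a baseline count. Only the named container is considered;
// init containers live in InitContainerStatuses and are not in this slice.
func operatorRestarted(p *corev1.Pod, baselineRestarts int32) bool {
	for _, cs := range p.Status.ContainerStatuses {
		if cs.Name != operatorContainerName {
			continue // skip sidecars
		}
		if cs.RestartCount > baselineRestarts {
			return true
		}
	}
	return false
}
```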
---
Nitpick comments:
In `@test/e2e/zz_tls_profile_test.go`:
- Around line 397-405: The uptime check iterates all
pod.Status.ContainerStatuses and may include non-operator containers; update the
loop to filter by the operatorContainerName (same selector used in
getOperatorContainerRestartCount) so you only inspect ContainerStatus entries
where cs.Name == operatorContainerName before checking cs.State.Running and
StartedAt; this ensures consistency with the restart-detection logic and avoids
false waits due to other containers' short uptimes.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro Plus
Run ID: 159c305b-31a0-48e3-8161-9b1252ed052c
📒 Files selected for processing (1)
test/e2e/zz_tls_profile_test.go
Force-pushed from d87050d to cbd97cd (Compare)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/e2e/zz_tls_profile_test.go`:
- Around line 190-207: The poll silently passes when pods are replaced or when
lookup errors are swallowed; change the stability check to (1) capture the
operator pod UID alongside the restart count before the wait (introduce or
modify operatorContainerRestartCount to return (int, string, error) or add a
getOperatorPodUID helper and store initialPodUID), and (2) in the
wait.PollUntilContextTimeout lambda return an error when
operatorContainerRestartCount/getOperatorPodUID fails (do not swallow persistent
errors) and treat podUID != initialPodUID OR currentRestarts > initialRestarts
as a restart (return true, fmt.Errorf(...)); keep using
wait.Interrupted/assert.NilError as final assertions.
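A hedged sketch of the hardened stability check this prompt describes, assuming a helper that returns the restart count together with the pod UID (the comment suggests `(int, string, error)`; `types.UID` is used here for illustration):

```go
import (
	"context"
	"fmt"
	"testing"
	"time"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
)

// assertOperatorStable fails the test if the operator pod is replaced or its
// container restarts within the observation window. Lookup errors are
// propagated instead of being swallowed.
func assertOperatorStable(ctx context.Context, t *testing.T, initialRestarts int32, initialPodUID types.UID) {
	t.Helper()
	err := wait.PollUntilContextTimeout(ctx, 5*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			restarts, podUID, err := operatorContainerRestartCount(ctx) // assumed helper
			if err != nil {
				return false, err // surface persistent lookup failures
			}
			if podUID != initialPodUID || restarts > initialRestarts {
				return true, fmt.Errorf("operator restarted: uid %s -> %s, restarts %d -> %d",
					initialPodUID, podUID, initialRestarts, restarts)
			}
			return false, nil
		})
	// Timing out without ever observing a restart is the desired outcome.
	if !wait.Interrupted(err) {
		t.Fatalf("stability check failed: %v", err)
	}
}
```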
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro Plus
Run ID: 51e16b9d-ffa7-41d0-a59f-0bd2014906b8
📒 Files selected for processing (1)
test/e2e/zz_tls_profile_test.go
Force-pushed from cbd97cd to f1a72c6 (Compare)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/e2e/zz_tls_profile_test.go`:
- Around line 31-35: This test mutates cluster-wide APIServer TLS profile by
calling setTLSProfile(t, nil) and does not restore the pre-test value; capture
the current TLS profile before calling setTLSProfile(nil) (e.g., call a helper
like getTLSProfile or read the APIServer spec), then ensure you restore it in
teardown using t.Cleanup or defer by calling setTLSProfile(t, originalProfile);
apply this around the setTLSProfile invocation in TestTLSProfileWatcher so the
operatorDeploymentName/f.AssertDeploymentReady checks remain but cluster state
is reverted after the test.
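In test form, the requested capture-and-restore could be sketched as follows; `getTLSProfile` is a hypothetical helper implied by the comment, and the `f`/`framework` assertions mirror those quoted elsewhere in this thread:

```go
func TestTLSProfileWatcher(t *testing.T) {
	// Capture whatever profile the cluster had before the suite-level reset...
	originalProfile := getTLSProfile(t) // hypothetical helper; may return nil

	// ...and restore it even if any assertion below fails.
	t.Cleanup(func() { setTLSProfile(t, originalProfile) })

	setTLSProfile(t, nil) // establish the default (nil) baseline
	f.AssertDeploymentReady(operatorDeploymentName, f.OperatorNamespace,
		framework.WithTimeout(5*time.Minute))(t)

	// subtests for Modern/Custom/non-TLS changes follow...
}
```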
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: a210f140-b03d-46df-b856-04dbeeb5b929
📒 Files selected for processing (1)
test/e2e/zz_tls_profile_test.go
Force-pushed from c4a390b to 444c8b3 (Compare)
simonpasquier left a comment
> Test file is prefixed with zz_ to run last in the suite, since modifying the APIServer TLS profile triggers a MachineConfigPool rollout
I'm not sure about this: the runner executes tests using the alphabetical order of the test names, not filenames.
Add end-to-end tests that verify the operator correctly watches the APIServer CR for TLS security profile changes and restarts accordingly.

Test scenarios:
- Operator is running and healthy with the default (nil/Intermediate) TLS profile
- Operator restarts when TLS profile changes to Old
- Operator restarts when TLS profile changes to a Custom profile
- Operator does NOT restart when a non-TLS field (annotation) is modified

The test file is prefixed with zz_ to ensure it runs last in the e2e suite, since modifying the APIServer TLS profile triggers a MachineConfigPool rollout which can be disruptive to other tests.

Assisted by Claude Code.
Modern is the recommended security profile and is accepted by OpenShift clusters. Verified that it triggers operator restart correctly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed from 444c8b3 to 1c20b98 (Compare)
♻️ Duplicate comments (1)
test/e2e/zz_tls_profile_test.go (1)
31-35: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Restore the original APIServer TLS profile at test scope.

The test calls `setTLSProfile(t, nil)` to establish a baseline but does not capture or restore the pre-test TLS profile value. This leaves cluster-wide state mutated after the test completes. While individual subtests restore their own changes, the suite-level reset at line 33 is never reverted.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@test/e2e/zz_tls_profile_test.go` around lines 31 - 35, Capture the current APIServer TLS profile before calling setTLSProfile(t, nil) and ensure it is restored at test teardown (use t.Cleanup or defer) so suite-level state is not left mutated; specifically, call something like getTLSProfile/tlsProfile := get...() or read the current profile just before setTLSProfile(t, nil) and register a cleanup that calls setTLSProfile(t, tlsProfile) (so the baseline reset is reverted), ensuring this restoration runs even if f.AssertDeploymentReady(operatorDeploymentName, f.OperatorNamespace, framework.WithTimeout(5*time.Minute))(t) fails.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Duplicate comments:
In `@test/e2e/zz_tls_profile_test.go`:
- Around line 31-35: Capture the current APIServer TLS profile before calling
setTLSProfile(t, nil) and ensure it is restored at test teardown (use t.Cleanup
or defer) so suite-level state is not left mutated; specifically, call something
like getTLSProfile/tlsProfile := get...() or read the current profile just
before setTLSProfile(t, nil) and register a cleanup that calls setTLSProfile(t,
tlsProfile) (so the baseline reset is reverted), ensuring this restoration runs
even if f.AssertDeploymentReady(operatorDeploymentName, f.OperatorNamespace,
framework.WithTimeout(5*time.Minute))(t) fails.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 628a83d3-da73-4251-a9b8-46d10b9b3a2c
📒 Files selected for processing (1)
test/e2e/zz_tls_profile_test.go
Also addressed the review comments.
/ok-to-test
@IshwarKanse: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard.

Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Summary
- Add end-to-end tests that verify the operator restarts when the `APIServer` CR's `tlsSecurityProfile` changes and remains stable when non-TLS fields change
- The test file is prefixed with `zz_` to run last in the suite, since modifying the APIServer TLS profile triggers a MachineConfigPool rollout

Test scenarios
- Changing the TLS profile to `Old` triggers an operator container restart; the operator recovers to ready
- A `Custom` profile with specific ciphers triggers a restart; the operator recovers (see the sketch below)
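A minimal sketch of the Custom-profile subtest referenced above, assuming the openshift/api config v1 types and the `setTLSProfile` helper discussed in the review thread; the exact cipher list used in the PR may differ:

```go
t.Run("custom TLS profile triggers restart", func(t *testing.T) {
	custom := &configv1.TLSSecurityProfile{
		Type: configv1.TLSProfileCustomType,
		Custom: &configv1.CustomTLSProfile{
			TLSProfileSpec: configv1.TLSProfileSpec{
				// OpenSSL-style cipher names, as used by OpenShift TLS profiles.
				Ciphers: []string{
					"ECDHE-ECDSA-AES128-GCM-SHA256",
					"ECDHE-RSA-AES128-GCM-SHA256",
				},
				MinTLSVersion: configv1.VersionTLS12,
			},
		},
	}
	setTLSProfile(t, custom)
	t.Cleanup(func() { setTLSProfile(t, nil) }) // restore the suite baseline

	// wait for the operator container restart, then assert readiness...
})
```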
Design decisions
- `waitForStableRestartCount`: waits for the restart count to remain unchanged for 30s AND the container to have been running for 15s+ before considering the operator stable — prevents race conditions between test cleanup restarts and subsequent tests (see the sketch below)
- Annotations are used for the non-TLS change because modifying a field like `spec.audit.profile` triggers disruptive MachineConfigPool rollouts; annotations trigger the watcher reconcile without cluster impact

Test plan
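To make the first design decision concrete, a hedged sketch of the `waitForStableRestartCount` behavior described above; `getOperatorPod`, `getOperatorContainerRestartCount`, and `operatorContainerName` are assumed helper names, and the poll interval/timeout are illustrative:

```go
import (
	"context"
	"testing"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForStableRestartCount blocks until the operator's restart count has not
// changed for 30s and the operator container has been running for 15s+.
func waitForStableRestartCount(ctx context.Context, t *testing.T) {
	t.Helper()
	var lastCount int32 = -1
	var stableSince time.Time

	err := wait.PollUntilContextTimeout(ctx, 5*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := getOperatorPod(ctx) // assumed helper from the PR
			if err != nil {
				return false, nil // pod may be mid-replacement; keep polling
			}
			count := getOperatorContainerRestartCount(pod)
			if count != lastCount {
				lastCount = count
				stableSince = time.Now() // count moved; restart the 30s window
				return false, nil
			}
			for _, cs := range pod.Status.ContainerStatuses {
				if cs.Name != operatorContainerName {
					continue
				}
				if cs.State.Running == nil || time.Since(cs.State.Running.StartedAt.Time) < 15*time.Second {
					return false, nil // container not yet running for 15s+
				}
			}
			return time.Since(stableSince) >= 30*time.Second, nil
		})
	if err != nil {
		t.Fatalf("operator restart count never stabilized: %v", err)
	}
}
```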