[Feature] Track KubeFleet release-process changes downstream #1303

@ytimocin

Description

Is your feature request related to a problem? Please describe.

KubeFleet is reworking its release process upstream (kubefleet-dev/kubefleet#693) — atomic orchestrator, cosign signing, multi-arch images, multi-minor upgrade tests, release-metadata.json published as a Release asset, etc.

Today, AzureFleet learns about new upstream releases manually. There is no automation to detect new tags, no verification of upstream lineage, no test pipeline that runs Azure-specific overlays against the bumped upstream code, and no provenance link from MCR images back to the upstream KubeFleet artifact.

Describe the solution you'd like

This epic tracks AzureFleet's downstream-side work — discovery, verification, and consumption of upstream releases, plus the Azure-specific compatibility surface. Subtasks will be filed as separate issues from this epic as work begins.

Pull-side automation

  • Nightly cron workflow (.github/workflows/upstream-poll.yml): fetch latest upstream GA tag, skip pre-releases, deduplicate against go.mod
  • Verify upstream lineage: cosign verify against image digest from release-metadata.json. Abort + alert on signature failure
  • Auto-open bump PR (bump/kubefleet-vX.Y.Z): bump go.mod, go mod tidy, regenerate manifests
  • Post breaking-change + CRD-change list as PR comment, sourced from release-metadata.json
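The decision logic of the nightly poll (GA-tag filter plus dedupe against go.mod) could look roughly like the sketch below. This is a hypothetical illustration: the module path, the `gh` invocation, and the placeholder tag `v0.12.0` are assumptions, not confirmed details of the actual workflow.

```shell
#!/usr/bin/env sh
# Sketch of upstream-poll.yml's core decision logic (illustrative only).

# GA tags only: vX.Y.Z with no pre-release suffix (skips -rc.N etc.).
is_ga_tag() {
  echo "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'
}

# Version currently pinned in go.mod (module path assumed for illustration).
current_version() {
  awk '$1 == "github.com/kubefleet-dev/kubefleet" {print $2}' go.mod 2>/dev/null
}

# In the real workflow the latest tag would come from something like:
#   gh release view -R kubefleet-dev/kubefleet --json tagName -q .tagName
latest="v0.12.0"

if is_ga_tag "$latest" && [ "$latest" != "$(current_version)" ]; then
  echo "bump candidate: $latest"
  # ... open bump/kubefleet-$latest PR: go get, go mod tidy, regenerate manifests
fi
```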

CI overlay on bump PRs

  • Consume upstream's reusable upgrade-test workflow (uses: kubefleet-dev/kubefleet/.github/workflows/upgrade.yml@<tag>) once it lands upstream
  • Layer Azure overlay tests on bump PRs: Arc chart compat, Deployment + ManagedResource webhook tests, ManagedResource VAP, cmd/crdinstaller upgrade path
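A possible shape for the bump-PR CI, once the upstream reusable workflow exists, is sketched below. The workflow filename, the pinned tag, any inputs, and the `make` targets are all assumptions for illustration; the `uses:` line mirrors the pattern named above.

```yaml
# .github/workflows/bump-ci.yml — hypothetical sketch, not a final design
name: bump-pr-ci
on:
  pull_request:
    branches: [main]

jobs:
  upstream-upgrade-test:
    # Consume upstream's reusable upgrade-test workflow, pinned to the
    # release tag under test (tag shown is illustrative).
    uses: kubefleet-dev/kubefleet/.github/workflows/upgrade.yml@v0.12.0

  azure-overlay:
    needs: upstream-upgrade-test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Azure overlay tests from this epic; target names are hypothetical.
      - run: make test-arc-chart-compat
      - run: make test-webhooks          # Deployment + ManagedResource webhooks, VAP
      - run: make test-crdinstaller-upgrade
```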

Compatibility documentation

  • AzureFleet VERSIONING.md: Azure-specific compat axes (Arc chart, Deployment + ManagedResource webhooks, cmd/crdinstaller) + version-map table to upstream
  • Document AzureFleet support window (may differ from upstream — call out explicitly)

MCR pipeline provenance

  • Capture upstream tag + Rekor URI from release-metadata.json at bump time; pass through the build pipeline
  • MCR build attestation references upstream Rekor entry — preserves verifiable lineage from KubeFleet tag → MCR image
  • Document verification path (Microsoft signature → upstream Rekor → KubeFleet tag) in README / SECURITY.md
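Capturing the provenance fields at bump time could be as simple as the sketch below. Since release-metadata.json's schema is not finalized upstream, every field name here (`tag`, `imageDigest`, `rekorLogIndex`) is an assumption, as is the Rekor URL shape.

```shell
#!/usr/bin/env sh
# Hypothetical release-metadata.json; field names are assumptions.
cat > release-metadata.json <<'EOF'
{
  "tag": "v0.12.0",
  "imageDigest": "sha256:deadbeef",
  "rekorLogIndex": 12345678
}
EOF

UPSTREAM_TAG=$(jq -r '.tag' release-metadata.json)
REKOR_URI="https://rekor.sigstore.dev/api/v1/log/entries?logIndex=$(jq -r '.rekorLogIndex' release-metadata.json)"

# These values would be threaded through the MCR build pipeline, e.g. as
# build args or OCI annotations on the produced image (mechanism TBD).
echo "UPSTREAM_TAG=$UPSTREAM_TAG"
echo "REKOR_URI=$REKOR_URI"
```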

Coordination with upstream

  • Decide cmd/crdinstaller/ ownership with upstream maintainers — resolve before KubeFleet ships its standalone CRD tarball
  • Add CONTRIBUTING.md rule: core changes go upstream first; Azure-specific code uses existing extension points (pkg/propertyprovider/azure, types_azure.go pattern)

Describe alternatives you've considered

  1. Status quo (manual bumps) — fragile, no upstream signature verification, doesn't scale.
  2. repository_dispatch from upstream — rejected: couples a CNCF sandbox project to a specific commercial consumer; pull-based is the open-source norm and avoids credential lifecycle on the upstream side.
  3. GitHub release-publish webhook subscription — viable alternative to cron polling. Cron preferred today for simplicity; can revisit if cadence tightens.

Additional context

Pairs with upstream tracking issue kubefleet-dev/kubefleet#693. The full downstream desired-state flow (detect → verify → propose → human gate → MCR build) is documented in the planning doc under "Downstream — nightly poll: detect, verify, propose."

Metadata

Labels: enhancement (New feature or request)
