The Kubernetes operator for declarative resource mirroring — any Kind, conflict-safe, watch-driven. Namespaced Projection for in-namespace mirrors, cluster-scoped ClusterProjection for fan-out across namespaces.
projection is a Kubernetes operator that mirrors any Kubernetes object — ConfigMap, Secret, Service, your custom resources — from a source location to a destination, declaratively, per resource. Each mirror is its own first-class CR (a namespaced Projection for single-target, a cluster-scoped ClusterProjection for fan-out across namespaces) with status conditions, events, and a metric you can alert on. Edits to the source propagate to the destination in roughly 100 milliseconds.
It exists because every team eventually rebuilds this with a one-off controller or a Kyverno generate policy, and neither approach is the right shape. projection is meant to be the answer when somebody asks "how do you mirror a Secret across namespaces in this cluster?"
| | projection | emberstack/Reflector | Kyverno generate |
|---|---|---|---|
| Works on any Kind | ✓ | ConfigMap & Secret only | ✓ |
| Source-of-truth lives in a CR you can `kubectl get` | ✓ (Projection, ClusterProjection) | ✗ (annotations on the source) | ✗ (cluster-wide policy) |
| Cluster-scoped fan-out CR | ✓ (ClusterProjection) | ✗ | ✓ but policy-shaped, not per-resource |
| Tenant self-service for in-namespace mirrors (no cluster-tier authority) | ✓ (namespaced Projection, aggregated into `edit`) | ✗ | ✗ |
| Per-resource status + Kubernetes Events | ✓ | partial | ✗ |
| Conflict-safe (refuses to overwrite unowned objects) | ✓ | ✗ | ✗ |
| Watch-driven propagation (~100ms) | ✓ | ✓ | ✓ |
| Admission-time validation of source fields | ✓ | n/a | ✓ |
| Prometheus metrics per reconcile outcome | ✓ | partial | ✓ |
| Footprint | two CRDs, one Deployment | one CRD, one Deployment | full policy engine |
For the longer comparison — including the cases where Reflector or Kyverno is the better choice — see docs/comparison.md.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: platform
  annotations:
    # Source opts in to projection (default source-mode is "allowlist").
    # Set to "false" to veto projection as the source owner.
    projection.sh/projectable: "true"
data:
  log_level: info
---
apiVersion: projection.sh/v1
kind: Projection
metadata:
  name: app-config-mirror
  namespace: tenant-a   # destination namespace = this
spec:
  source:
    group: ""           # core API
    version: v1
    kind: ConfigMap
    name: app-config
    namespace: platform
  overlay:
    labels:
      projected-by: projection
```

```console
$ kubectl get projections -A
NAMESPACE   NAME                KIND        SOURCE-NAMESPACE   SOURCE-NAME   DESTINATION   READY   AGE
tenant-a    app-config-mirror   ConfigMap   platform           app-config    app-config    True    2s

$ kubectl get configmap -n tenant-a app-config -o jsonpath='{.metadata.annotations.projection\.sh/owned-by-projection}'
tenant-a/app-config-mirror
```

- Edit the source — destination updates within ~100ms.
- Delete the Projection — destination is removed (only if projection still owns it).
- Pre-existing object at the destination? `Ready=False reason=DestinationConflict`. We don't overwrite strangers.
Need to mirror into many namespaces from one source? Use ClusterProjection (cluster-scoped) with either destination.namespaces: [a, b, c] or destination.namespaceSelector — the same source, fanned out, with per-namespace status rolled up into namespacesWritten / namespacesFailed. See Getting started.
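A fan-out described above can be sketched as follows. This is a hedged example: the `source` fields mirror the quick-start schema, but the exact shape of `destination.namespaceSelector` (assumed here to be a standard label selector with `matchLabels`) and the label key used are illustrative — check the API reference.

```yaml
apiVersion: projection.sh/v1
kind: ClusterProjection        # cluster-scoped: no metadata.namespace
metadata:
  name: app-config-fanout
spec:
  source:
    group: ""                  # core API
    version: v1
    kind: ConfigMap
    name: app-config
    namespace: platform
  destination:
    # Alternatively, an explicit list: namespaces: [tenant-a, tenant-b, tenant-c]
    namespaceSelector:         # assumed matchLabels shape; label key is illustrative
      matchLabels:
        tenant: "true"
```

Namespaces gaining or losing the matching label are added to or removed from the fan-out, and per-namespace results roll up into `status.namespacesWritten` / `status.namespacesFailed`.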
- Two CRDs, two RBAC tiers — namespaced `Projection` for in-namespace single-target mirrors (destination namespace is structurally the Projection's own), cluster-scoped `ClusterProjection` for fan-out (`destination.namespaces: [a, b, c]` or `destination.namespaceSelector`). Tenants can self-serve `Projection` via the chart's `rbac.aggregate=true` default; `ClusterProjection` requires an explicit cluster-admin binding.
- Any Kind — RESTMapper-driven GVR resolution. Works on built-in resources, your CRDs, anything the apiserver knows about. Source uses split `group` + `version` fields; for non-core groups, omitting `version` triggers preferred-version lookup that follows CRD promotions.
- Watch-driven — dynamic informer registration per source GVK on first reference. Edits propagate in ~100ms; no periodic polling. A label-filtered destination-side watch (`ensureDestWatch`) makes manual `kubectl delete` of a destination trigger an immediate reconcile.
- Fan-out across namespaces — one `ClusterProjection` mirrors its source into every namespace listed in `destination.namespaces` or matching a `destination.namespaceSelector`. Destinations are added and removed as namespaces gain or lose the matching label. Bounded fan-out concurrency keeps the apiserver healthy at scale.
- Source-owner consent — default `sourceMode=allowlist` requires sources to carry `projection.sh/projectable="true"`. Source owners can also veto with `="false"` regardless of mode.
- Conflict-safe — a `projection.sh/owned-by-projection` (or `projection.sh/owned-by-cluster-projection`) annotation marks our destinations. We refuse to overwrite objects we don't own and report `DestinationConflict` on status. Source deletion (404) automatically cleans up every owned destination.
- Clean deletion — finalizers remove destinations on CR deletion. The cluster CRD's finalizer sweeps every owned destination across the cluster; the namespaced CRD's finalizer cleans up its single in-namespace destination. If ownership has been stripped, we leave the object alone.
- Observable — three status conditions (`SourceResolved`, `DestinationWritten`, `Ready`), `events.k8s.io/v1` Events with `action` verbs (Create/Update/Delete/Get/Validate/Resolve/Write), per-fan-out counters (`status.namespacesWritten`, `status.namespacesFailed`), and Prometheus metrics (`projection_reconcile_total{kind,result}`, `projection_watched_gvks`, `projection_watched_dest_gvks`).
- Validated at admission — `source` fields are pattern-validated (DNS-1123 names, PascalCase Kinds) so typos fail at `kubectl apply`, not at runtime. CEL enforces `version` required when `group` is empty, and `namespaces` ⊕ `namespaceSelector` mutual exclusion on `ClusterProjection.destination`.
- Smart copy — strips server-owned metadata, drops `.status`, removes `kubectl.kubernetes.io/last-applied-configuration`, strips Kind-specific apiserver-allocated spec fields (Service `clusterIP`/`clusterIPs`, PVC `volumeName`, Pod `nodeName`, Job `selector` + `controller-uid` labels), and preserves them on update.
- Production-grade Helm chart — opt-in `ServiceMonitor`, `NetworkPolicy` (egress lockdown), and `PodDisruptionBudget` templates. Three ClusterRoles for tenant self-service vs cluster-tier authority. Operational tuning via `requeueInterval`, `leaderElection.leaseDuration`, and `selectorWriteConcurrency`. RBAC scope narrowable via `supportedKinds`.
- Small — two CRDs, one Deployment, one container. Distroless image, multi-arch (amd64, arm64).
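The `projection_reconcile_total{kind,result}` metric above lends itself to a simple failure alert. A sketch, assuming the chart's `ServiceMonitor` is enabled and assuming `"error"` is one of the `result` label values (the actual values aren't listed here — check docs/observability.md):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: projection-alerts
  namespace: projection-system
spec:
  groups:
    - name: projection
      rules:
        - alert: ProjectionReconcileFailures
          # Fires when reconciles keep failing for any Projection/ClusterProjection.
          # result="error" is an assumed label value.
          expr: sum by (kind) (rate(projection_reconcile_total{result="error"}[5m])) > 0
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "projection reconciles failing for kind {{ $labels.kind }}"
```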
```shell
helm install projection oci://ghcr.io/projection-operator/charts/projection \
  --version 0.3.0 \
  --namespace projection-system --create-namespace
```

Or, without Helm:

```shell
kubectl apply -f https://github.com/projection-operator/projection/releases/download/v0.3.0/install.yaml
```

Then create your first Projection:

```shell
kubectl apply -f https://raw.githubusercontent.com/projection-operator/projection/main/examples/configmap-cross-namespace.yaml
kubectl get projections -A
```

When you create a Projection or ClusterProjection, the controller resolves the source GVR via the RESTMapper, fetches the source object via the dynamic client, builds a sanitized destination object (overlay applied, ownership annotation stamped, server-owned metadata stripped), and creates or updates the destination — but only if projection already owns it. The first reconcile also registers a metadata-only watch on the source's GVK, so future edits to any source of that Kind enqueue the relevant CRs via a field-indexed lookup. A label-filtered watch on the destination GVK (`ensureDestWatch`) catches manual deletion or drift and triggers an immediate reconcile. Updates that wouldn't change the destination are skipped to avoid noisy events and metric churn. For ClusterProjection, fan-out writes are issued in parallel with a configurable concurrency cap.
See docs/concepts.md for the full picture, docs/observability.md for status/events/metrics, and docs/comparison.md for the deep comparison vs Reflector and Kyverno.
- Secrets across namespaces — distribute a TLS cert from `cert-manager` to many application namespaces with one `ClusterProjection`, or to a single application namespace with a namespaced `Projection`.
- Shared config distribution — one `ConfigMap` in `platform`, fanned out into every labeled tenant namespace via `ClusterProjection`'s `namespaceSelector`. Per-destination overlays (a different `tenant:` label per copy) work too — declare one namespaced `Projection` per destination instead.
- Tenant self-service — give a namespace owner `edit` on `tenant-a` (chart default `rbac.aggregate=true`) and they can author `Projection`s pulling shared sources into `tenant-a` without any cluster-tier authority.
- Service mirroring — expose a backend `Service` from one namespace into another without a manual `ExternalName` dance.
- CR replication — mirror an `Issuer`, a `KafkaTopic`, or any custom resource between namespaces in the same cluster.
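The per-destination-overlay pattern mentioned above can be sketched as two namespaced Projections sharing one source. The schema follows the quick-start example; the namespace and label values are illustrative:

```yaml
apiVersion: projection.sh/v1
kind: Projection
metadata:
  name: app-config-mirror
  namespace: tenant-a          # destination namespace = this
spec:
  source:
    group: ""                  # core API
    version: v1
    kind: ConfigMap
    name: app-config
    namespace: platform
  overlay:
    labels:
      tenant: tenant-a         # this copy's label
---
apiVersion: projection.sh/v1
kind: Projection
metadata:
  name: app-config-mirror
  namespace: tenant-b
spec:
  source:
    group: ""
    version: v1
    kind: ConfigMap
    name: app-config
    namespace: platform
  overlay:
    labels:
      tenant: tenant-b         # a different label for this copy
```

Each destination gets its own overlay at the cost of one CR per namespace; a single `ClusterProjection` would share one overlay across all destinations.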
- Same-cluster only. Cross-cluster mirroring is a non-goal for v0.
- Cluster-scoped Kinds rejected. projection only mirrors namespaced resources. Pointing at a `Namespace`, `ClusterRole`, or `StorageClass` surfaces `SourceResolved=False reason=SourceResolutionFailed` with a clear message.
- `ClusterProjection` fan-out shares one overlay. All destinations in a `ClusterProjection` get the same overlay; per-destination overlays require multiple namespaced `Projection`s — see `examples/multiple-destinations-from-one-source.yaml`.
- A few Kinds need extra care. `Service`, `PersistentVolumeClaim`, `Pod`, and `Job` have apiserver-allocated spec fields handled out of the box. Jobs created with `spec.manualSelector: true` are not supported. Other Kinds with similar fields (rare) may need an addition to `droppedSpecFieldsByGVK` — see limitations.
- Pre-1.0. API stability commitments (which fields will not change, how breaking changes are handled) are documented in docs/api-stability.md. v0.3.0 is itself a breaking change. CRD storage version is `v1`; future versions will be served alongside with conversion.
- Getting started
- Concepts
- API reference (auto-generated from `api/v1/*.go`)
- CRD behavior and examples
- Use cases
- Comparison vs alternatives
- Observability
- Security model
- API stability
- Troubleshooting
- Scale and benchmarks
- Limitations & roadmap
Pull requests welcome. See CONTRIBUTING.md. Be excellent to each other — see CODE_OF_CONDUCT.md.
Found a vulnerability? Please report it privately via GitHub Security Advisories. See SECURITY.md.
Apache 2.0. See LICENSE.